Apollo is one of the Olympian deities in classical Greek and Roman religion and Greek and Roman mythology. The national divinity of the Greeks, Apollo has been recognized as a god of archery, music and dance, truth and prophecy, healing and diseases, the Sun and light, poetry, and more. One of the most important and complex of the Greek gods, he is the son of Zeus and Leto, and the twin brother of Artemis, goddess of the hunt. Seen as the most beautiful god and the ideal of the kouros (ephebe, or a beardless, athletic youth), Apollo is considered to be the most Greek of all the gods. Apollo is known in Greek-influenced Etruscan mythology as Apulu. As the patron deity of Delphi (Apollo Pythios), Apollo is an oracular god—the prophetic deity of the Delphic Oracle. Apollo is the god who affords help and wards off evil; various epithets call him the "averter of evil". Delphic Apollo is the patron of seafarers and foreigners and the protector of fugitives and refugees. Medicine and healing are associated with Apollo, whether through the god himself or mediated through his son Asclepius. Apollo delivered people from epidemics, yet he is also a god who could bring ill health and deadly plague with his arrows. The invention of archery itself is credited to Apollo and his sister Artemis. Apollo is usually described as carrying a golden bow and a quiver of silver arrows. Apollo's capacity to make youths grow is one of the best attested facets of his panhellenic cult persona. As the protector of the young (kourotrophos), Apollo is concerned with the health and education of children, and he presided over their passage into adulthood. Long hair, which was the prerogative of boys, was cut at the coming of age (ephebeia) and dedicated to Apollo. Apollo is an important pastoral deity, and was the patron of herdsmen and shepherds. Protecting herds, flocks and crops from diseases, pests and predators was among his primary duties. On the other hand, Apollo also encouraged the founding of new towns and the establishment of civil constitutions, and he is associated with dominion over colonists. He was the giver of laws, and his oracles were consulted before laws were laid down in a city. As the god of mousike, Apollo presides over all music, songs, dance and poetry. He is the inventor of string music and the frequent companion of the Muses, functioning as their chorus leader in celebrations. The lyre is a common attribute of Apollo. In Hellenistic times, especially during the 5th century BCE, as Apollo Helios he became identified among Greeks with Helios, the personification of the Sun. In Latin texts, however, there was no conflation of Apollo with Sol among the classical Latin poets until the 1st century CE. Apollo and Helios/Sol remained separate beings in literary and mythological texts until the 5th century CE. Etymology The name appears as Apollōn in Attic, Ionic, and Homeric Greek, Apellōn in Doric, Apeilōn in Arcadocypriot, and Aploun in Aeolic. The name Apollo—unlike the related older name Paean—is generally not found in the Linear B (Mycenaean Greek) texts, although there is a possible attestation in the lacunose form ]pe-rjo-[ on the KN E 842 tablet, though it has also been suggested that the name might actually read "Hyperion" ([u]-pe-rjo-[ne]). The etymology of the name is uncertain. The spelling Apollōn had almost superseded all other forms by the beginning of the common era, but the Doric form Apellōn (Ἀπέλλων) is more archaic, as it is derived from an earlier *Ἀπέλjων (Apeljōn).
It is probably a cognate of the Doric month Apellaios and of the offerings apellaia made at the initiation of the young men during the family-festival apellai. According to some scholars, the words are derived from the Doric word apella, which originally meant "wall" or "fence for animals" and later "assembly within the limits of the square". Apella is the name of the popular assembly in Sparta, corresponding to the ecclesia. R. S. P. Beekes rejected the connection of the theonym with the noun apellai and suggested a Pre-Greek proto-form *Apalyun. Several instances of popular etymology are attested from ancient authors. Thus, the Greeks most often associated Apollo's name with the Greek verb apollymi, "to destroy". Plato in the Cratylus connects the name with apolysis, "redemption", with apolousis, "purification", and with [h]aploun, "simple", in particular in reference to the Thessalian form of the name, Aploun, and finally with aeiballon, "ever-shooting". Hesychius connects the name Apollo with the Doric apella, which means "assembly", so that Apollo would be the god of political life, and he also gives the explanation sekos, "fold", in which case Apollo would be the god of flocks and herds. In the ancient Macedonian language pella means "stone", and some toponyms may be derived from this word: Pella, the capital of ancient Macedonia, and Pellēnē/Pellene. A number of non-Greek etymologies have been suggested for the name. The Hittite form Apaliunas is attested in the Manapa-Tarhunta letter. The Hittite testimony reflects an early form *Apeljōn, which may also be surmised from comparison of Cypriot Apeilōn with Doric Apellōn. The name of the Lydian god Qλdãns /kʷʎðãns/ may reflect an earlier /kʷalyán-/ before palatalization, syncope, and the pre-Lydian sound change *y > d. Note the labiovelar in place of the labial /p/ found in pre-Doric Ἀπέλjων and Hittite Apaliunas. A Luwian etymology suggested for Apaliunas makes Apollo "The One of Entrapment", perhaps in the sense of "Hunter".
Greco-Roman epithets Apollo's chief epithet was Phoebus (Phoibos), literally "bright". It was very commonly used by both the Greeks and Romans for Apollo's role as the god of light. Like other Greek deities, he had a number of other epithets applied to him, reflecting the variety of roles, duties, and aspects ascribed to the god. However, while Apollo has a great number of appellations in Greek myth, only a few occur in Latin literature.
Sun
Aegletes (Αἰγλήτης, Aiglētēs), from aiglē, "light of the sun".
Helius (Helios), literally "sun".
Lyceus (Lykeios, from a Proto-Greek root), "light". The meaning of the epithet "Lyceus" later became associated with Apollo's mother Leto, who was the patron goddess of Lycia and who was identified with the wolf (lykos).
Phanaeus (Phanaios), literally "giving or bringing light".
Phoebus (Phoibos), literally "bright", his most commonly used epithet by both the Greeks and Romans.
Sol (Roman), "sun" in Latin.
Wolf
Lycegenes (Lukēgenēs), literally "born of a wolf" or "born of Lycia".
Lycoctonus (Lykoktonos), from lykos, "wolf", and kteinein, "to kill".
Origin and birth Apollo's birthplace was Mount Cynthus on the island of Delos.
Cynthius (Kunthios), literally "Cynthian".
Cynthogenes (Kynthogenēs), literally "born of Cynthus".
Delius (Δήλιος, Delios), literally "Delian".
Didymaeus (Didymaios), from δίδυμος, "twin", as the twin of Artemis.
Place of worship Delphi and Actium were his primary places of worship.
Acraephius (Akraiphios) or Acraephiaeus (Akraiphiaios), "Acraephian", from the Boeotian town of Acraephia, reputedly founded by his son Acraepheus.
Actiacus (Aktiakos), literally "Actian", after Actium.
Delphinius (Delphinios), literally "Delphic", after Delphi (Δελφοί). An etiology in the Homeric Hymns associated this with dolphins.
Epactaeus, meaning "god worshipped on the coast", in Samos.
Pythius (Puthios, from Πυθώ, Pythō), from the region around Delphi.
Smintheus, "Sminthian"—that is, "of the town of Sminthos or Sminthe" near the Troad town of Hamaxitus.
Napaian Apollo (Ἀπόλλων Ναπαῖος), from the city of Nape on the island of Lesbos.
Healing and disease
Acesius (Akesios), from akesis, "healing". Acesius was the epithet of Apollo worshipped in Elis, where he had a temple in the agora.
Acestor (Akestōr), literally "healer".
Culicarius (Roman), from Latin culicārius, "of midges".
Iatrus (Iātros), literally "physician".
Medicus (Roman), "physician" in Latin. A temple was dedicated to Apollo Medicus at Rome, probably next to the temple of Bellona.
Paean (Paiān), physician, healer.
Parnopius (Parnopios), from parnops, "locust".
Founder and protector
Agyieus (Aguieus), from agyia, "street", for his role in protecting roads and homes.
Alexicacus (Alexikakos), literally "warding off evil".
Apotropaeus (Apotropaios), from apotrepein, "to avert".
Archegetes (Arkhēgetēs), literally "founder".
Averruncus (Roman), from Latin āverruncare, "to avert".
Clarius (Klārios), from Doric klaros, "allotted lot".
Epicurius (Epikourios), from epikourein, "to aid".
Genetor (Genetōr), literally "ancestor".
Nomius (Nomios), literally "pastoral".
Nymphegetes (Numphēgetēs), from nymphē, "Nymph", and hēgetēs, "leader", for his role as a protector of shepherds and pastoral life.
Patroos, "related to one's father", for his role as father of Ion and founder of the Ionians, as worshipped at the Temple of Apollo Patroos in Athens.
Sauroctonus, "lizard killer", possibly a reference to his killing of Python.
Prophecy and truth
Coelispex (Roman), from Latin coelum, "sky", and specere, "to look at".
Iatromantis (Iātromantis), from iatros, "physician", and mantis, "prophet", referring to his role as a god both of healing and of prophecy.
Leschenorius (Leskhēnorios), "converser".
Loxias, from legein, "to say", historically associated with loxos, "ambiguous".
Manticus (Mantikos), literally "prophetic".
Proopsios, meaning "foreseer" or "first seen".
Music and arts
Musagetes (Doric Mousāgetās), from Mousa, "Muse", and hēgetēs, "leader".
Musegetes (Mousēgetēs), as the preceding.
Archery
Aphetor (Aphētōr), from aphiēmi, "to let loose".
Aphetorus (Aphētoros), as the preceding.
Arcitenens (Roman), literally "bow-carrying".
Argyrotoxus (Argyrotoxos), literally "with silver bow".
Clytotoxus (Klytotoxos), "he who is famous for his bow", the renowned archer.
Hecaërgus (Hekaergos), literally "far-shooting".
Hecebolus (Hekēbolos), "far-shooting".
Ismenius (Ismēnios), literally "of Ismenus", after Ismenus, the son of Amphion and Niobe, whom he struck with an arrow.
Amazons
Amazonius: Pausanias in the Description of Greece writes that near Pyrrhichus there was a sanctuary of Apollo called Amazonius, with an image of the god said to have been dedicated by the Amazons.
Celtic epithets and cult titles Apollo was worshipped throughout the Roman Empire.
In the traditionally Celtic lands, he was most often seen as a healing and sun god, and he was often equated with Celtic gods of similar character.
Apollo Atepomarus ("the great horseman" or "possessing a great horse"). Apollo was worshipped at Mauvières (Indre). Horses were, in the Celtic world, closely linked to the sun.
Apollo Belenus ("bright" or "brilliant"). This epithet was given to Apollo in parts of Gaul, Northern Italy and Noricum (part of modern Austria). Apollo Belenus was a healing and sun god.
Apollo Cunomaglus ("hound lord"). A title given to Apollo at a shrine at Nettleton Shrub, Wiltshire. He may have been a god of healing. Cunomaglus himself may originally have been an independent healing god.
Apollo Grannus. Grannus was a healing spring god, later equated with Apollo.
Apollo Maponus. A god known from inscriptions in Britain. This may be a local fusion of Apollo and Maponus.
Apollo Moritasgus ("masses of sea water"). An epithet for Apollo at Alesia, where he was worshipped as a god of healing and, possibly, of physicians.
Apollo Vindonnus ("clear light"). Apollo Vindonnus had a temple at Essarois, near Châtillon-sur-Seine in present-day Burgundy. He was a god of healing, especially of the eyes.
Apollo Virotutis ("benefactor of mankind"). Apollo Virotutis was worshipped, among other places, at Fins d'Annecy (Haute-Savoie) and at Jublains (Maine-et-Loire).
Origins The cult centers of Apollo in Greece, Delphi and Delos, date from the 8th century BCE. The Delos sanctuary was primarily dedicated to Artemis, Apollo's twin sister. At Delphi, Apollo was venerated as the slayer of the monstrous serpent Python. For the Greeks, Apollo was the most Greek of all the gods, and through the centuries he acquired different functions. In Archaic Greece he was the prophet, the oracular god who in older times was connected with "healing". In Classical Greece he was the god of light and of music, but in popular religion his strong function was to keep away evil. Walter Burkert discerned three components in the prehistory of Apollo worship, which he termed "a Dorian-northwest Greek component, a Cretan-Minoan component, and a Syro-Hittite component." Healer and god-protector from evil In classical times, his major function in popular religion was to keep away evil, and he was therefore called "apotropaios" ("averting evil") and "alexikakos" ("keeping off ill"). Apollo also had many epithets relating to his function as a healer. Some commonly used examples are "paion" (literally "healer" or "helper"), "epikourios" ("succouring"), "oulios" ("healer, baleful") and "loimios" ("of the plague"). In later writers the word "paion", usually spelled "Paean", becomes a mere epithet of Apollo in his capacity as a god of healing. Apollo in his aspect of "healer" has a connection to the primitive god Paean, who did not have a cult of his own. Paean serves as the healer of the gods in the Iliad, and seems to have originated in a pre-Greek religion. It is suggested, though unconfirmed, that he is connected to the Mycenaean figure pa-ja-wo-ne, attested in Linear B. Paean was the personification of holy songs sung by "seer-doctors", which were supposed to cure disease. Homer uses Paeon both for the god and for the song of apotropaic thanksgiving or triumph. Such songs were originally addressed to Apollo and afterwards to other gods: to Dionysus, to Apollo Helios, and to Apollo's son Asclepius the healer.
About the 4th century BCE, the paean became merely a formula of adulation; its object was either to implore protection against disease and misfortune or to offer thanks after such protection had been rendered. It was in this way that Apollo came to be recognized as the god of music. Apollo's role as the slayer of the Python led to his association with battle and victory; hence it became the Roman custom for a paean to be sung by an army on the march and before entering into battle, when a fleet left the harbour, and also after a victory had been won. In the Iliad, Apollo is the healer among the gods, but he is also the bringer of disease and death with his arrows, similar to the function of the Vedic god of disease Rudra. He sends a plague to the Achaeans. Knowing that Apollo can prevent a recurrence of the plague he sent, they purify themselves in a ritual and offer him a large sacrifice of cows, called a hecatomb. Dorian origin The Homeric Hymn to Apollo depicts Apollo as an intruder from the north. The connection with the northern-dwelling Dorians and their initiation festival apellai is reinforced by the month Apellaios in northwest Greek calendars. The family-festival was dedicated to Apollo (Doric Apellon). Apellaios is the month of these rites, and Apellon is the "megistos kouros" (the great Kouros). However, this can explain only the Doric type of the name, which is connected with the Ancient Macedonian word "pella", "stone". Stones played an important part in the cult of the god, especially in the oracular shrine of Delphi (Omphalos). Minoan origin George Huxley regarded the identification of Apollo with the Minoan deity Paiawon, worshipped in Crete, as having originated at Delphi. In the Homeric Hymn, Apollo appeared as a dolphin and carried Cretan priests to Delphi, where they evidently transferred their religious practices. Apollo Delphinios or Delphidios was a sea-god especially worshipped in Crete and in the islands. Apollo's sister Artemis, who was the Greek goddess of hunting, is identified with Britomartis (Diktynna), the Minoan "Mistress of the animals". In her earliest depictions she was accompanied by the "Master of the animals", a bow-wielding god of hunting whose name has been lost; aspects of this figure may have been absorbed into the more popular Apollo. Anatolian origin A non-Greek origin of Apollo has long been assumed in scholarship. The name of Apollo's mother Leto has a Lydian origin, and she was worshipped on the coasts of Asia Minor. The inspiration-based oracular cult was probably introduced into Greece from Anatolia, which is the origin of the Sibyl, and where some of the oldest oracular shrines originated. Omens, symbols, purifications, and exorcisms appear in old Assyro-Babylonian texts. These rituals spread into the empire of the Hittites, and from there into Greece. Homer pictures Apollo on the side of the Trojans, fighting against the Achaeans, during the Trojan War. He is pictured as a terrible god, less trusted by the Greeks than other gods. The god seems to be related to Appaliunas, a tutelary god of Wilusa (Troy) in Asia Minor, but the word is not complete. The stones found in front of the gates of Homeric Troy were the symbols of Apollo. A western Anatolian origin may also be bolstered by references to the parallel worship of Artimus (Artemis) and Qλdãns, whose name may be cognate with the Hittite and Doric forms, in surviving Lydian texts. However, recent scholars have cast doubt on the identification of Qλdãns with Apollo.
The Greeks gave him the name agyieus as the protector god of public places and houses who wards off evil, and his symbol was a tapered stone or column. However, while Greek festivals were usually celebrated at the full moon, all the feasts of Apollo were celebrated on the seventh day of the month, and the emphasis given to that day (sibutu) indicates a Babylonian origin. The Late Bronze Age (from 1700 to 1200 BCE) Hittite and Hurrian Aplu was a god of plague, invoked during plague years. Here we have an apotropaic situation, where a god originally bringing the plague was invoked to end it. Aplu, meaning "son of", was a title given to the god Nergal, who was linked to the Babylonian sun god Shamash. Homer interprets Apollo as a terrible god who brings death and disease with his arrows, but who can also heal, possessing a magic art that separates him from the other Greek gods. In the Iliad, his priest prays to Apollo Smintheus, the mouse god who retains an older agricultural function as the protector from field rats. All these functions, including the function of the healer-god Paean, who seems to have a Mycenaean origin, are fused in the cult of Apollo. Proto-Indo-European The Vedic Rudra has some functions similar to those of Apollo. The terrible god is called "the archer", and the bow is also an attribute of Shiva. Rudra could bring diseases with his arrows, but he was able to free people of them, and his alternative form Shiva is a healer and physician god. However, the Indo-European component of Apollo does not explain his strong relation with omens, exorcisms, and the oracular cult. Oracular cult Unusually among the Olympian deities, Apollo had two cult sites that had widespread influence: Delos and Delphi. In cult practice, Delian Apollo and Pythian Apollo (the Apollo of Delphi) were so distinct that they might both have shrines in the same locality. Lycia was sacred to the god, and for this reason Apollo was also called Lycian. Apollo's cult was already fully established when written sources commenced, about 650 BCE. Apollo became extremely important to the Greek world as an oracular deity in the archaic period, and the frequency of theophoric names such as Apollodorus or Apollonios and of cities named Apollonia testifies to his popularity. Oracular sanctuaries to Apollo were established at other sites as well. In the 2nd and 3rd centuries CE, those at Didyma and Claros pronounced the so-called "theological oracles", in which Apollo confirms that all deities are aspects or servants of an all-encompassing, highest deity. "In the 3rd century, Apollo fell silent. Julian the Apostate (359–361) tried to revive the Delphic oracle, but failed." Oracular shrines Apollo had a famous oracle in Delphi, and other notable ones in Claros and Didyma. His oracular shrine at Abae in Phocis, where he bore the toponymic epithet Abaeus (Apollon Abaios), was important enough to be consulted by Croesus. His oracular shrines include: Abae in Phocis. Bassae in the Peloponnese. At Clarus, on the west coast of Asia Minor; as at Delphi, there was a holy spring which gave off a pneuma, from which the priests drank. In Corinth, the Oracle of Corinth came from the town of Tenea, from prisoners supposedly taken in the Trojan War. At Khyrse, in the Troad, the temple was built for Apollo Smintheus. In Delos, there was an oracle to the Delian Apollo, during summer. The Hieron (Sanctuary) of Apollo, adjacent to the Sacred Lake, was the place where the god was said to have been born.
In Delphi, the Pythia became filled with the pneuma of Apollo, said to come from a spring inside the Adyton. In Didyma, an oracle on the coast of Anatolia, south west of Lydian (Luwian) Sardis, in which priests from the lineage of the Branchidae received inspiration by drinking from a healing spring located in the temple. Was believed to have been founded by Branchus, son or lover of Apollo. In Hierapolis Bambyce, Syria (modern Manbij), according to the treatise De Dea Syria, the sanctuary of the Syrian Goddess contained a robed and bearded image of Apollo. Divination was based on spontaneous movements of this image. At Patara, in Lycia, there was a seasonal winter oracle of Apollo, said to have been the place where the god went from Delos. As at Delphi the oracle at Patara was a woman. In Segesta in Sicily. Oracles were also given by sons of Apollo. In Oropus, north of Athens, the oracle Amphiaraus, was said to be the son of Apollo; Oropus also had a sacred spring. in Labadea, east of Delphi, Trophonius, another son of Apollo, killed his brother and fled to the cave where he was also afterwards consulted as an oracle. Temples of Apollo Many temples were dedicated to Apollo in Greece and the Greek colonies. They show the spread of the cult of Apollo and the evolution of the Greek architecture, which was mostly based on the rightness of form and on mathematical relations. Some of the earliest temples, especially in Crete, do not belong to any Greek order. It seems that the first peripteral temples were rectangular wooden structures. The different wooden elements were considered divine, and their forms were preserved in the marble or stone elements of the temples of Doric order. The Greeks used standard types because they believed that the world of objects was a series of typical forms which could be represented in several instances. The temples should be canonic, and the architects were trying to achieve this esthetic perfection. From the earliest times there were certain rules strictly observed in rectangular peripteral and prostyle buildings. The first buildings were built narrowly in order to hold the roof, and when the dimensions changed some mathematical relations became necessary in order to keep the original forms. This probably influenced the theory of numbers of Pythagoras, who believed that behind the appearance of things there was the permanent principle of mathematics. The Doric order dominated during the 6th and the 5th century BC but there was a mathematical problem regarding the position of the triglyphs, which couldn't be solved without changing the original forms. The order was almost abandoned for the Ionic order, but the Ionic capital also posed an insoluble problem at the corner of a temple. Both orders were abandoned for the Corinthian order gradually during the Hellenistic age and under Rome. The most important temples are: Greek temples Thebes, Greece: The oldest temple probably dedicated to Apollo Ismenius was built in the 9th century B.C. It seems that it was a curvilinear building. The Doric temple was built in the early 7th century B.C., but only some small parts have been found A festival called Daphnephoria was celebrated every ninth year in honour of Apollo Ismenius (or Galaxius). The people held laurel branches (daphnai), and at the head of the procession walked a youth (chosen priest of Apollo), who was called "daphnephoros". Eretria: According to the Homeric hymn to Apollo, the god arrived to the plain, seeking for a location to establish its oracle. 
The first temple of Apollo Daphnephoros, "Apollo, laurel-bearer", or "carrying off Daphne", is dated to 800 B.C. The temple was a curvilinear hecatompedon (a hundred feet). In a smaller building were kept the bases of the laurel branches which were used for the first building. Another temple, probably peripteral, was built in the 7th century B.C., with an inner row of wooden columns over its Geometric predecessor. It was rebuilt peripteral around 510 B.C., with the stylobate measuring 21.00 x 43.00 m. The number of pteron columns was 6 x 14. Dreros (Crete): The temple of Apollo Delphinios dates from the 7th century B.C., or probably from the middle of the 8th century B.C. According to the legend, Apollo appeared as a dolphin and carried Cretan priests to the port of Delphi. The dimensions of the plan are 10.70 x 24.00 m and the building was not peripteral. It contains column bases of the Minoan type, which may be considered the predecessors of the Doric columns. Gortyn (Crete): A temple of Pythian Apollo was built in the 7th century B.C. The plan measured 19.00 x 16.70 m and it was not peripteral. The walls were solid, made from limestone, and there was a single door on the east side. Thermon (West Greece): The Doric temple of Apollo Thermios was built in the middle of the 7th century B.C. It was built on an older curvilinear building dating perhaps from the 10th century B.C., to which a peristyle was added. The temple was narrow, and the number of pteron columns (probably wooden) was 5 x 15. There was a single row of inner columns. It measures 12.13 x 38.23 m at the stylobate, which was made from stones. Corinth: A Doric temple was built in the 6th century B.C. The temple's stylobate measures 21.36 x 53.30 m, and the number of pteron columns was 6 x 15. There was a double row of inner columns. The style is similar to that of the Temple of the Alcmeonidae at Delphi. The Corinthians were considered to be the inventors of the Doric order. Napes (Lesbos): An Aeolic temple, probably of Apollo Napaios, was built in the 7th century B.C. Some special capitals with floral ornament have been found, which are called Aeolic, and it seems that they were borrowed from the East. Cyrene, Libya: The oldest Doric temple of Apollo was built in c. 600 B.C. The number of pteron columns was 6 x 11, and it measures 16.75 x 30.05 m at the stylobate. There was a double row of sixteen inner columns on stylobates. The capitals were made from stone. Naukratis: An Ionic temple was built in the early 6th century B.C. Only some fragments have been found; the earlier ones, made from limestone, are identified as among the oldest of the Ionic order. Syracuse, Sicily: A Doric temple was built at the beginning of the 6th century B.C. The temple's stylobate measures 21.47 x 55.36 m and the number of pteron columns was 6 x 17. It was the first temple in the Greek West built completely of stone. A second row of columns was added, obtaining the effect of an inner porch. Selinus (Sicily): The Doric Temple C dates from 550 B.C., and it was probably dedicated to Apollo. The temple's stylobate measures 10.48 x 41.63 m and the number of pteron columns was 6 x 17. There was a portico with a second row of columns, which is also attested for the temple at Syracuse. Delphi: The first temple dedicated to Apollo was built in the 7th century B.C. According to the legend, it was a wooden structure made of laurel branches. The "Temple of the Alcmeonidae" was built in c. 513 B.C. and it is the oldest Doric temple with significant marble elements.
The temple's stylobate measures 21.65 x 58.00 m, and the number of pteron columns was 6 x 15. A festival similar to Apollo's festival at Thebes, Greece, was celebrated every nine years. A boy was sent to the temple, who walked on the sacred road and returned carrying a laurel branch (daphnephoros). The maidens participated with joyful songs. Chios: An Ionic temple of Apollo Phanaios was built at the end of the 6th century B.C. Only some small parts have been found, and the capitals had floral ornament. Abae (Phocis): The temple was destroyed by the Persians in the invasion of Xerxes in 480 B.C., and later by the Boeotians. It was rebuilt by Hadrian. The oracle was in use from early Mycenaean times to the Roman period, and shows the continuity of Mycenaean and Classical Greek religion. Bassae (Peloponnesus): A temple dedicated to Apollo Epikourios ("Apollo the helper") was built in 430 B.C. and was designed by Iktinos. It combined Doric and Ionic elements, and featured the earliest use of a column with a Corinthian capital in the middle. The temple is of a relatively modest size, with the stylobate measuring 14.5 x 38.3 metres and containing a Doric peristyle of 6 x 15 columns. The roof left a central space open to admit light and air. Delos: A temple probably dedicated to Apollo, and not peripteral, was built in the late 7th century B.C., with a plan measuring 10.00 x 15.60 m. The Doric Great Temple of Apollo was built in c. 475 B.C. The temple's stylobate measures 13.72 x 29.78 m, and the number of pteron columns was 6 x 13. Marble was extensively used. Ambracia: A Doric peripteral temple dedicated to Apollo Pythios Sotir was built in 500 B.C., and it lies at the centre of the Greek city of Arta. Only some parts have been found, and it seems that the temple was built on earlier sanctuaries dedicated to Apollo. The temple measures 20.75 x 44.00 m at the stylobate. The foundation which supported the statue of the god still exists. Didyma (near Miletus): The gigantic Ionic temple of Apollo Didymaios was started around 540 B.C. Construction ceased and was then restarted in 330 B.C. The temple is dipteral, with an outer row of 10 x 21 columns, and it measures 28.90 x 80.75 m at the stylobate. Clarus (near ancient Colophon): According to the legend, the famous seer Calchas, on his return from Troy, came to Clarus. He challenged the seer Mopsus, and died when he lost. The Doric temple of Apollo Clarius was probably built in the 3rd century B.C., and it was peripteral with 6 x 11 columns. It was reconstructed at the end of the Hellenistic period, and later by the emperor Hadrian, but Pausanias claims that it was still incomplete in the 2nd century A.D. Hamaxitus (Troad): In the Iliad, Chryses, the priest of Apollo, addresses the god with the epithet Smintheus (Lord of Mice), related to the god's ancient role as the bringer of disease (plague). Recent excavations indicate that the Hellenistic temple of Apollo Smintheus was constructed in 150–125 B.C., but the symbol of the mouse god was used on coinage probably from the 4th century B.C. The temple measures 40.00 x 23.00 m at the stylobate, and the number of pteron columns was 8 x 14. Pythion: This was the name of a shrine of Apollo at Athens near the Ilisos river. It was created by Peisistratos, and tripods were placed there by those who had won the cyclic chorus at the Thargelia. Setae (Lydia): The temple of Apollo Aksyros was located in the city. Apollonia Pontica: There were two temples of Apollo Healer in the city.
One dates from the Late Archaic period and the other from the Early Classical period. Ikaros island in the Persian Gulf (modern Failaka Island): There was a temple of Apollo on the island. Etruscan and Roman temples Veii (Etruria): The temple of Apollo was built in the late 6th century B.C. and it indicates the spread of Apollo's cult (Aplu) in Etruria. There was a prostyle porch, which is called Tuscan, and a triple cella 18.50 m wide. Falerii Veteres (Etruria): A temple of Apollo was built probably in the 4th-3rd century B.C. Parts of a terracotta capital and a terracotta base have been found. It seems that the Etruscan columns were derived from the archaic Doric. A cult of Apollo Soranus is attested by one inscription found near Falerii. Pompeii (Italy): The cult of Apollo had been widespread in the region of Campania since the 6th century B.C. The temple was built in 120 B.C., but its beginnings lie in the 6th century B.C. It was reconstructed after an earthquake in A.D. 63. It demonstrates a mixing of styles which formed the basis of Roman architecture. The columns in front of the cella formed a Tuscan prostyle porch, and the cella is situated unusually far back. The peripteral colonnade of 48 Ionic columns was placed in such a way that the emphasis was given to the front side. Rome: The temple of Apollo Sosianus and the temple of Apollo Medicus. The first temple building dates to 431 B.C., and was dedicated to Apollo Medicus (the doctor), after a plague of 433 B.C. It was rebuilt by Gaius Sosius, probably in 34 B.C. Only three columns with Corinthian capitals exist today. It seems that the cult of Apollo had existed in this area since at least the mid-5th century B.C. Rome: The temple of Apollo Palatinus was located on the Palatine hill within the sacred boundary of the city. It was dedicated by Augustus in 28 B.C. The façade of the original temple was Ionic and it was constructed from solid blocks of marble. Many famous statues by Greek masters were on display in and around the temple, including a marble statue of the god at the entrance and a statue of Apollo in the cella. Melite (modern Mdina, Malta): A Temple of Apollo was built in the city in the 2nd century A.D. Its remains were discovered in the 18th century, and many of its architectural fragments were dispersed among private collections or reworked into new sculptures. Parts of the temple's podium were rediscovered in 2002. Mythology Apollo appears often in the myths, plays and hymns. As Zeus' favorite son, Apollo had direct access to the mind of Zeus and was willing to reveal this knowledge to humans. A divinity beyond human comprehension, he appears both as a beneficial and a wrathful god. Birth Apollo was the son of Zeus, the king of the gods, and Leto, his previous wife or one of his mistresses. Growing up, Apollo was nursed by the nymphs Korythalia and Aletheia, the personification of truth. When Zeus' wife Hera discovered that Leto was pregnant, she banned Leto from giving birth on terra firma. Leto sought shelter in many lands, only to be rejected by them. Finally, the voice of the unborn Apollo informed his mother about a floating island named Delos that had once been Asteria, Leto's own sister. Since it was neither a mainland nor an island, Leto was readily welcomed there and gave birth to her children under a palm tree. All the goddesses except Hera were present to witness the event. It is also stated that Hera kidnapped Eileithyia, the goddess of childbirth, to prevent Leto from going into labor.
The other gods tricked Hera into letting her go by offering her a necklace of amber 9 yards (8.2 m) long. When Apollo was born, clutching a golden sword, everything on Delos turned into gold and the island was filled with ambrosial fragrance. Swans circled the island seven times and the nymphs sang in delight. He was washed clean by the goddesses who then covered him in white garment and fastened golden bands around him. Since Leto was unable to feed him, Themis, the goddess of divine law, fed him with nectar, or ambrosia. Upon tasting the divine food, Apollo broke free of the bands fastened onto him and declared that he would be the master of lyre and archery, and interpret the will of Zeus to humankind. Zeus, who had calmed Hera by then, came and adorned his son with a golden headband. Apollo's birth fixed the floating Delos to the earth. Leto promised that her son would be always favorable towards the Delians. According to some, Apollo secured Delos to the bottom of the ocean after some time. This island became sacred to Apollo and was one of the major cult centres of the god. Apollo was born on the seventh day (, hebdomagenes) of the month Thargelion—according to Delian tradition—or of the month Bysios—according to Delphian tradition. The seventh and twentieth, the days of the new and full moon, were ever afterwards held sacred to him. Mythographers agree that Artemis was born first and subsequently assisted with the birth of Apollo or was born on the island of Ortygia then helped Leto cross the sea to Delos the next day to give birth to Apollo. Hyperborea Hyperborea, the mystical land of eternal spring, venerated Apollo above all the gods. The Hyperboreans always sang and danced in his honor and hosted Pythian games. There, a vast forest of beautiful trees was called "the garden of Apollo". Apollo spent the winter months among the Hyperboreans. His absence from the world caused coldness and this was marked as his annual death. No prophecies were issued during this time. He returned to the world during the beginning of the spring. The Theophania festival was held in Delphi to celebrate his return. It is said that Leto came to Delos from Hyperborea accompanied by a pack of wolves. Henceforth, Hyperborea became Apollo's winter home and wolves became sacred to him. His intimate connection to wolves is evident from his epithet Lyceus, meaning wolf-like. But Apollo was also the wolf-slayer in his role as the god who protected flocks from predators. The Hyperborean worship of Apollo bears the strongest marks of Apollo being worshipped as the sun god. Shamanistic elements in Apollo's cult are often liked to his Hyperborean origin, and he is likewise speculated to have originated as a solar shaman. Shamans like Abaris and Aristeas were also the followers of Apollo, who hailed from Hyperborea. In myths, the tears of amber Apollo shed when his son Asclepius died became the waters of the river Eridanos, which surrounded Hyperborea. Apollo also buried in Hyperborea the arrow which he had used to kill the Cyclopes. He later gave this arrow to Abaris. Childhood and youth As a child, Apollo is said to have built a foundation and an altar on Delos using the horns of the goats that his sister Artemis hunted. Since he learnt the art of building when young, he later came to be known as Archegetes, the founder (of towns) and god who guided men to build new cities. From his father Zeus, Apollo had also received a golden chariot drawn by swans. 
In his early years when Apollo spent his time herding cows, he was reared by Thriae, the bee nymphs, who trained him and enhanced his prophetic skills. Apollo is also said to have invented the lyre, and along with Artemis, the art of archery. He then taught to the humans the art of healing and archery. Phoebe, his grandmother, gave the oracular shrine of Delphi to Apollo as a birthday gift. Themis inspired him to be the oracular voice of Delphi thereon. Python Python, a chthonic serpent-dragon, was a child of Gaia and the guardian of the Delphic Oracle, whose death was foretold by Apollo when he was still in Leto's womb. Python was the nurse of the giant Typhon. In most of the traditions, Apollo was still a child when he killed Python. Python was sent by Hera to hunt the pregnant Leto to death, and had assaulted her. To avenge the trouble given to his mother, Apollo went in search of Python and killed it in the sacred cave at Delphi with the bow and arrows that he had received from Hephaestus. The Delphian nymphs who were present encouraged Apollo during the battle with the cry "Hie Paean". After Apollo was victorious, they also brought him gifts and gave the Corycian cave to him. According to Homer, Apollo had encountered and killed the Python when he was looking for a place to establish his shrine. According to another version, when Leto was in Delphi, Python had attacked her. Apollo defended his mother and killed Python. Euripides in his Iphigenia in Aulis gives an account of his fight with Python and the event's aftermath. You killed him, o Phoebus, while still a baby, still leaping in the arms of your dear mother, and you entered the holy shrine, and sat on the golden tripod, on your truthful throne distributing prophecies from the gods to mortals. A detailed account of Apollo's conflict with Gaia and Zeus' intervention on behalf of his young son is also given. But when Apollo came and sent Themis, the child of Earth, away from the holy oracle of Pytho, Earth gave birth to dream visions of the night; and they told to the cities of men the present, and what will happen in the future, through dark beds of sleep on the ground; and so Earth took the office of prophecy away from Phoebus, in envy, because of her daughter. The lord made his swift way to Olympus and wound his baby hands around Zeus, asking him to take the wrath of the earth goddess from the Pythian home. Zeus smiled, that the child so quickly came to ask for worship that pays in gold. He shook his locks of hair, put an end to the night voices, and took away from mortals the truth that appears in darkness, and gave the privilege back again to Loxias. Apollo also demanded that all other methods of divination be made inferior to his, a wish that Zeus granted him readily. Because of this, Athena, who had been practicing divination by throwing pebbles, cast her pebbles away in displeasure. However, Apollo had committed a blood murder and had to be purified. Because Python was a child of Gaia, Gaia wanted Apollo to be banished to Tartarus as a punishment. Zeus didn't agree and instead exiled his son from Olympus, and instructed him to get purified. Apollo had to serve as a slave for nine years. After the servitude was over, as per his father's order, he travelled to the Vale of Tempe to bath in waters of Peneus. There Zeus himself performed purificatory rites on Apollo. Purified, Apollo was escorted by his half sister Athena to Delphi where the oracular shrine was finally handed over to him by Gaia. 
According to a variation, Apollo had also travelled to Crete, where Carmanor purified him. Apollo later established the Pythian games to appease Gaia. Henceforth, Apollo became the god who cleansed himself from the sin of murder, made men aware of their guilt, and purified them. Soon after, Zeus instructed Apollo to go to Delphi and establish his law. But Apollo, disobeying his father, went to the land of Hyperborea and stayed there for a year. He returned only after the Delphians sang hymns to him and pleaded with him to come back. Zeus, pleased with his son's integrity, gave Apollo the seat next to him on his right side. He also gave Apollo various gifts, such as a golden tripod, a golden bow and arrows, a golden chariot and the city of Delphi. Soon after his return, Apollo needed to recruit people to Delphi. So, when he spotted a ship sailing from Crete, he sprang aboard in the form of a dolphin. The crew was awed into submission and followed a course that led the ship to Delphi. There Apollo revealed himself as a god. Initiating them into his service, he instructed them to keep righteousness in their hearts. The Pythia was Apollo's high priestess and his mouthpiece through whom he gave prophecies. The Pythia is arguably the constant favorite of Apollo among the mortals. Tityos Hera once again sent another giant, Tityos, to rape Leto. This time Apollo shot him with his arrows and attacked him with his golden sword. According to another version, Artemis also aided him in protecting their mother by attacking Tityos with her arrows. After the battle, Zeus finally lent his aid and hurled Tityos down to Tartarus. There, he was pegged to the rock floor, his body stretched over a vast area, where a pair of vultures feasted daily on his liver. Admetus Admetus was the king of Pherae, who was known for his hospitality. When Apollo was exiled from Olympus for killing Python, he served as a herdsman under Admetus, who was then young and unmarried. Apollo is said to have shared a romantic relationship with Admetus during his stay. After completing his years of servitude, Apollo went back to Olympus as a god. Because Admetus had treated Apollo well, the god conferred great benefits on him in return. Apollo's mere presence is said to have made the cattle give birth to twins. Apollo helped Admetus win the hand of Alcestis, the daughter of King Pelias, by taming a lion and a boar to draw Admetus' chariot. He was present during their wedding to give his blessings. When Admetus angered the goddess Artemis by forgetting to give her the due offerings, Apollo came to the rescue and calmed his sister. When Apollo learnt of Admetus' untimely death, he convinced or tricked the Fates into letting Admetus live past his time. According to another version, or perhaps some years later, when Zeus struck down Apollo's son Asclepius with a lightning bolt for resurrecting the dead, Apollo in revenge killed the Cyclopes, who had fashioned the bolt for Zeus. Apollo would have been banished to Tartarus for this, but his mother Leto intervened, and reminding Zeus of their old love, pleaded with him not to kill their son. Zeus obliged and sentenced Apollo to one year of hard labor once again under Admetus. The love between Apollo and Admetus was a favored topic of Roman poets like Ovid and Servius. Niobe The fate of Niobe was prophesied by Apollo while he was still in Leto's womb. Niobe was the queen of Thebes and wife of Amphion.
She displayed hubris when she boasted that she was superior to Leto because she had fourteen children (Niobids), seven male and seven female, while Leto had only two. She further mocked Apollo's effeminate appearance and Artemis' manly appearance. Leto, insulted by this, told her children to punish Niobe. Accordingly, Apollo killed Niobe's sons, and Artemis her daughters. According to some versions of the myth, among the Niobids, Chloris and her brother Amyclas were not killed because they prayed to Leto. Amphion, at the sight of his dead sons, either killed himself or was killed by Apollo after swearing revenge. A devastated Niobe fled to Mount Sipylos in Asia Minor and turned into stone as she wept. Her tears formed the river Achelous. Zeus had turned all the people of Thebes to stone and so no one buried the Niobids until the ninth day after their death, when the gods themselves entombed them. When Chloris married and had children, Apollo granted her son Nestor the years he had taken away from the Niobids. Hence, Nestor was able to live for 3 generations. Building the walls of Troy Once Apollo and Poseidon served under the Trojan king Laomedon in accordance to Zeus' words. Apollodorus states that the gods willingly went to the king disguised as humans in order to check his hubris. Apollo guarded the cattle of Laomedon in the valleys of mount Ida, while Poseidon built the walls of Troy. Other versions make both Apollo and Poseidon the builders of the wall. In Ovid's account, Apollo completes his task by playing his tunes on his lyre. In Pindar's odes, the gods took a mortal named Aeacus as their assistant. When the work was completed, three snakes rushed against the wall, and though the two that attacked the sections of the wall built by the gods fell down dead, the third forced its way into the city through the portion of the wall built by Aeacus. Apollo immediately prophesied that Troy would fall at the hands of Aeacus's descendants, the Aeacidae (i.e. his son Telamon joined Heracles when he sieged the city during Laomedon's rule. Later, his great grandson Neoptolemus was present in the wooden horse that lead to the downfall of Troy). However, the king not only refused to give the gods the wages he had promised, but also threatened to bind their feet and hands, and sell them as slaves. Angered by the unpaid labour and the insults, Apollo infected the city with a pestilence and Posedion sent the sea monster Cetus. To deliver the city from it, Laomedon had to sacrifice his daughter Hesione (who would later be saved by Heracles). During his stay in Troy, Apollo had a lover named Ourea, who was a nymph and daughter of Poseidon. Together they had a son named Ileus, whom Apollo loved dearly. Trojan War Apollo sided with the Trojans during the Trojan War waged by the Greeks against the Trojans. During the war, the Greek king Agamemnon captured Chryseis, the daughter of Apollo's priest Chryses, and refused to return her. Angered by this, Apollo shot arrows infected with the plague into the Greek encampment. He demanded that they return the girl, and the Achaeans (Greeks) complied, indirectly causing the anger of Achilles, which is the theme of the Iliad. Receiving the aegis from Zeus, Apollo entered the battlefield as per his father's command, causing great terror to the enemy with his war cry. He pushed the Greeks back and destroyed many of the soldiers. He is described as "the rouser of armies" because he rallied the Trojan army when they were falling apart. 
When Zeus allowed the other gods to get involved in the war, Apollo was provoked by Poseidon to a duel. However, Apollo declined to fight him, saying that he wouldn't fight his uncle for the sake of mortals. When the Greek hero Diomedes injured the Trojan hero Aeneas, Aphrodite tried to rescue him, but Diomedes injured her as well. Apollo then enveloped Aeneas in a cloud to protect him. He repelled the attacks Diomedes made on him and gave the hero a stern warning to abstain himself from attacking a god. Aeneas was then taken to Pergamos, a sacred spot in Troy, where he was healed. After the death of Sarpedon, a son of Zeus, Apollo rescued the corpse from the battlefield as per his father's wish and cleaned it. He then gave it to Sleep (Hypnos) and Death (Thanatos). Apollo had also once convinced Athena to stop the war for that day, so that the warriors can relieve themselves for a while. The Trojan hero Hector (who, according to some, was the god's own son by Hecuba) was favored by Apollo. When he got severely injured, Apollo healed him and encouraged him to take up his arms. During a duel with Achilles, when Hector was about to lose, Apollo hid Hector in a cloud of mist to save him. When the Greek warrior Patroclus tried to get into the fort of Troy, he was stopped by Apollo. Encouraging Hector to attack Patroclus, Apollo stripped the armour of the Greek warrior and broke his weapons. Patroclus was eventually killed by Hector. At last, after Hector's fated death, Apollo protected his corpse from Achilles' attempt to mutilate it by creating a magical cloud over the corpse. Apollo held a grudge against Achilles throughout the war because Achilles had murdered his son Tenes before the war began and brutally assassinated his son Troilus in his own temple. Not only did Apollo save Hector from Achilles, he also tricked Achilles by disguising himself as a Trojan warrior and driving him away from the gates. He foiled Achilles' attempt to mutilate Hector's dead body. Finally, Apollo caused Achilles' death by guiding an arrow shot by Paris into Achilles' heel. In some versions, Apollo himself killed Achilles by taking the disguise of Paris. Apollo helped many Trojan warriors, including Agenor, Polydamas, Glaucus in the battlefield. Though he greatly favored the Trojans, Apollo was bound to follow the orders of Zeus and served his father loyally during the war. Heracles After Heracles (then named Alcides) was struck with madness and killed his family, he sought to purify himself and consulted the oracle of Apollo. Apollo, through the Pythia, commanded him to serve king Eurystheus for twelve years and complete the ten tasks the king would give him. Only then would Alcides be absolved of his sin. Apollo also renamed him as Heracles. To complete his third task, Heracles had to capture the Ceryneian Hind, a hind sacred to Artemis, and bring back it alive. After chasing the hind for one year, the animal eventually got tired, and when it tried crossing the river Ladon, Heracles captured it. While he was taking it back, he was confronted by Apollo and Artemis, who were angered at Heracles for this act. However, Heracles soothed the goddess and explained his situation to her. After much pleading, Artemis permitted him to take the hind and told him to return it later. After he was freed from his servitude to Eurystheus, Heracles fell in conflict with Iphytus, a prince of Oechalia, and murdered him. Soon after, he contracted a terrible disease. 
He consulted the oracle of Apollo once again, in the hope of ridding himself of the disease. The Pythia, however, refused to give any prophecy. In anger, Heracles snatched the sacred tripod and started walking away, intending to start his own oracle. However, Apollo did not tolerate this and stopped Heracles; a duel ensued between them. Artemis rushed to support Apollo, while Athena supported Heracles. Soon, Zeus threw his thunderbolt between the fighting brothers and separated them. He reprimanded Heracles for this act of violation and asked Apollo to give Heracles a solution. Apollo then ordered the hero to serve under Omphale, queen of Lydia, for one year in order to purify himself. Periphas Periphas was an Attic king and a priest of Apollo. He was noble, just and rich, and he did all his duties justly. Because of this, people were very fond of him and started honouring him to the same extent as Zeus. At one point, they worshipped Periphas in place of Zeus and set up shrines and temples for him. This annoyed Zeus, who decided to annihilate the entire family of Periphas. But because he was a just king and a good devotee, Apollo intervened and requested his father to spare Periphas. Zeus considered Apollo's words and agreed to let him live. But he metamorphosed Periphas into an eagle and made the eagle the king of birds. When Periphas' wife requested Zeus to let her stay with her husband, Zeus turned her into a vulture and fulfilled her wish. Plato's concept of soulmates A long time ago, there were three kinds of human beings: male, descended from the sun; female, descended from the earth; and androgynous, descended from the moon. Each human being was completely round, with four arms and four legs, two identical faces on opposite sides of a head with four ears, and all else to match. They were powerful and unruly. Otus and Ephialtes even dared to scale Mount Olympus. To check their insolence, Zeus devised a plan to humble them and improve their manners instead of completely destroying them. He cut them all in two and asked Apollo to make the necessary repairs, giving humans the individual shape they still have now. Apollo turned their heads and necks around towards their wounds, pulled together their skin at the abdomen, and sewed it together at the middle; this is what we call the navel today. He smoothed out the wrinkles and shaped the chest. But he made sure to leave a few wrinkles on the abdomen and around the navel so that they might be reminded of their punishment. "As he [Zeus] cut them one after another, he bade Apollo give the face and the half of the neck a turn... Apollo was also bidden to heal their wounds and compose their forms. So Apollo gave a turn to the face and pulled the skin from the sides all over that which in our language is called the belly, like the purses which draw in, and he made one mouth at the centre [of the belly] which he fastened in a knot (the same which is called the navel); he also moulded the breast and took out most of the wrinkles, much as a shoemaker might smooth leather upon a last; he left a few wrinkles, however, in the region of the belly and navel, as a memorial of the primeval state." Nurturer of the young Apollo Kourotrophos is the god who nurtures and protects children and the young, especially boys. He oversees their education and their passage into adulthood. Education is said to have originated from Apollo and the Muses. Many myths have him train his children.
It was a custom for boys to cut and dedicate their long hair to Apollo after reaching adulthood. Chiron, the abandoned centaur, was fostered by Apollo, who instructed him in medicine, prophecy, archery and more. Chiron would later become a great teacher himself. Asclepius in his childhood gained much knowledge pertaining to medicinal arts by his father. However, he was later entrusted to Chiron for further education. Anius, Apollo's son by Rhoeo, was abandoned by his mother soon after his birth. Apollo brought him up and educated him in mantic arts. Anius later became the priest of Apollo and the king of Delos. Iamus was the son of Apollo and Evadne. When Evadne went into labour, Apollo sent the Moirai to assist his lover. After the child was born, Apollo sent snakes to feed the child some honey. When Iamus reached the age of education, Apollo took him to Olympia and taught him many arts, including the ability to understand and explain the languages of birds. Idmon was educated by Apollo to be a seer. Even though he foresaw his death that would happen in his journey with the Argonauts, he embraced his destiny and died a brave death. To commemorate his son's bravery, Apollo commanded Boeotians to build a town around the tomb of the hero, and to honor him. Apollo adopted Carnus, the abandoned son of Zeus and Europa. He reared the child with the help of his mother Leto and educated him to be a seer. When his son Melaneus reached the age of marriage, Apollo asked the princess Stratonice to be his son's bride and carried her away from her home when she agreed. Apollo saved a shepherd boy (name unknown) from death in a large deep cave, by the means of vultures. To thank him, the shepherd built Apollo a temple under the name Vulturius. God of music Immediately after his birth, Apollo demanded a lyre and invented the paean, thus becoming the god of music. As the divine singer, he is the patron of poets, singers and musicians. The invention of string music is attributed to him. Plato said that the innate ability of humans to take delight in music, rhythm and harmony is the gift of Apollo and the Muses. According to Socrates, ancient Greeks believed that Apollo is the god who directs the harmony and makes all things move together, both for the gods and the humans. For this reason, he was called Homopolon before the Homo was replaced by A. Apollo's harmonious music delivered people from their pain, and hence, like Dionysus, he is also called the liberator. The swans, which were considered to be the most musical among the birds, were believed to be the "singers of Apollo". They are Apollo's sacred birds and acted as his vehicle during his travel to Hyperborea. Aelian says that when the singers would sing hymns to Apollo, the swans would join the chant in unison. Among the Pythagoreans, the study of mathematics and music were connected to the worship of Apollo, their principal deity. Their belief was that the music purifies the soul, just as medicine purifies the body. They also believed that music was delegated to the same mathematical laws of harmony as the mechanics of the cosmos, evolving into an idea known as the music of the spheres. Apollo appears as the companion of the Muses, and as Musagetes ("leader of Muses") he leads them in dance. They spend their time on Parnassus, which is one of their sacred places. Apollo is also the lover of the Muses and by them he became the father of famous musicians like Orpheus and Linus. 
Apollo is often found delighting the immortal gods with his songs and music on the lyre. In his role as the god of banquets, he was always present to play music at the weddings of the gods, such as the marriages of Eros and Psyche and of Peleus and Thetis. He is a frequent guest of the Bacchanalia, and many ancient ceramics depict him being at ease amidst the maenads and satyrs. Apollo also participated in musical contests when challenged by others. He was the victor in all those contests, but he tended to punish his opponents severely for their hubris. Apollo's lyre The invention of the lyre is attributed either to Hermes or to Apollo himself. Some accounts distinguish between the two, holding that Hermes invented the lyre made of a tortoise shell, whereas the lyre Apollo invented was a regular lyre. Myths tell that the infant Hermes stole a number of Apollo's cows and took them to a cave in the woods near Pylos, covering their tracks. In the cave, he found a tortoise and killed it, then removed the insides. He used one of the cow's intestines and the tortoise shell and made his lyre. Upon discovering the theft, Apollo confronted Hermes and asked him to return his cattle. When Hermes acted innocent, Apollo took the matter to Zeus. Zeus, having seen the events, sided with Apollo, and ordered Hermes to return the cattle. Hermes then began to play music on the lyre he had invented. Apollo fell in love with the instrument and offered to exchange the cattle for the lyre. Apollo thus became the master of the lyre. According to other versions, Apollo had himself invented the lyre, whose strings he tore in repentance for the excessive punishment he had given to Marsyas. Hermes' lyre, therefore, would be a reinvention. Contest with Pan Once Pan had the audacity to compare his music with that of Apollo and to challenge the god of music to a contest. The mountain-god Tmolus was chosen to umpire. Pan blew on his pipes, and with his rustic melody gave great satisfaction to himself and his faithful follower, Midas, who happened to be present. Then, Apollo struck the strings of his lyre. It was so beautiful that Tmolus at once awarded the victory to Apollo, and everyone was pleased with the judgement. Only Midas dissented and questioned the justice of the award. Apollo did not want to suffer such a depraved pair of ears any longer, and caused Midas' ears to become those of a donkey. Contest with Marsyas Marsyas was a satyr who was punished by Apollo for his hubris. He had found an aulos on the ground, tossed away after being invented by Athena because it made her cheeks puffy. Athena had also placed a curse upon the instrument, that whoever would pick it up would be severely punished. When Marsyas played the flute, everyone became frenzied with joy. This led Marsyas to think that he was better than Apollo, and he challenged the god to a musical contest. The contest was judged by the Muses, or the nymphs of Nysa. Athena was also present to witness the contest. Marsyas taunted Apollo for "wearing his hair long, for having a fair face and smooth body, for his skill in so many arts". He also further said, 'His [Apollo] hair is smooth and made into tufts and curls that fall about his brow and hang before his face. His body is fair from head to foot, his limbs shine bright, his tongue gives oracles, and he is equally eloquent in prose or verse, propose which you will. What of his robes so fine in texture, so soft to the touch, aglow with purple? What of his lyre that flashes gold, gleams white with ivory, and shimmers with rainbow gems? 
What of his song, so cunning and so sweet? Nay, all these allurements suit with naught save luxury. To virtue they bring shame alone!' The Muses and Athena sniggered at this comment. The contestants agreed to take turns displaying their skills and the rule was that the victor could "do whatever he wanted" to the loser. According to one account, after the first round, they both were deemed equal by the Nysiads. But in the next round, Apollo decided to play on his lyre and add his melodious voice to his performance. Marsyas argued against this, saying that Apollo would have an advantage and accused Apollo of cheating. But Apollo replied that since Marsyas played the flute, which needed air blown from the throat, it was similar to singing, and that either they both should get an equal chance to combine their skills or none of them should use their mouths at all. The nymphs decided that Apollo's argument was just. Apollo then played his lyre and sang at the same time, mesmerising the audience. Marsyas could not do this. Apollo was declared the winner and, angered with Marsyas' haughtiness and his accusations, decided to flay the satyr. According to another account, Marsyas played his flute out of tune at one point and accepted his defeat. Out of shame, he assigned to himself the punishment of being skinned for a wine sack. Another variation is that Apollo played his instrument upside down. Marsyas could not do this with his instrument. So the Muses who were the judges declared Apollo the winner. Apollo hung Marsyas from a tree to flay him. Apollo flayed the limbs of Marsyas alive in a cave near Celaenae in Phrygia for his hubris to challenge a god. He then gave the rest of his body for proper burial and nailed Marsyas' flayed skin to a nearby pine-tree as a lesson to the others. Marsyas' blood turned into the river Marsyas. But Apollo soon repented and being distressed at what he had done, he tore the strings of his lyre and threw it away. The lyre was later discovered by the Muses and Apollo's sons Linus and Orpheus. The Muses fixed the middle string, Linus the string struck with the forefinger, and Orpheus the lowest string and the one next to it. They took it back to Apollo, but the god, who had decided to stay away from music for a while, laid away both the lyre and the pipes at Delphi and joined Cybele in her wanderings to as far as Hyperborea. Contest with Cinyras Cinyras was a ruler of Cyprus, who was a friend of Agamemnon. Cinyras promised to assist Agamemnon in the Trojan war, but did not keep his promise. Agamemnon cursed Cinyras. He invoked Apollo and asked the god to avenge the broken promise. Apollo then had a lyre-playing contest with Cinyras, and defeated him. Either Cinyras committed suicide when he lost, or was killed by Apollo. Patron of sailors Apollo functions as the patron and protector of sailors, one of the duties he shares with Poseidon. In the myths, he is seen helping heroes who pray to him for safe journey. When Apollo spotted a ship of Cretan sailors that was caught in a storm, he quickly assumed the shape of a dolphin and guided their ship safely to Delphi. When the Argonauts faced a terrible storm, Jason prayed to his patron, Apollo, to help them. Apollo used his bow and golden arrow to shed light upon an island, where the Argonauts soon took shelter. This island was renamed "Anaphe", which means "He revealed it". Apollo helped the Greek hero Diomedes, to escape from a great tempest during his journey homeward. 
As a token of gratitude, Diomedes built a temple in honor of Apollo under the epithet Epibaterius ("the embarker"). During the Trojan War, Odysseus came to the Trojan camp to return Chryseis, the daughter of Apollo's priest Chryses, and brought many offerings to Apollo. Pleased with this, Apollo sent gentle breezes that helped Odysseus return safely to the Greek camp. Arion was a poet who was kidnapped by some sailors for the rich prizes he possessed. Arion requested them to let him sing for the last time, to which the sailors consented. Arion began singing a song in praise of Apollo, seeking the god's help. Consequently, numerous dolphins surrounded the ship and when Arion jumped into the water, the dolphins carried him away safely. Wars Titanomachy Once Hera, out of spite, aroused the Titans to make war against Zeus and take away his throne. Accordingly, when the Titans tried to climb Mount Olympus, Zeus, with the help of Apollo, Artemis and Athena, defeated them and cast them into Tartarus. Trojan War Apollo played a pivotal role in the entire Trojan War. He sided with the Trojans, and sent a terrible plague to the Greek camp, which indirectly led to the conflict between Achilles and Agamemnon. He killed the Greek heroes Patroclus, Achilles, and numerous Greek soldiers. He also helped many Trojan heroes, the most important one being Hector. After the end of the war, Apollo and Poseidon together cleaned the remains of the city and the camps. Telegony war A war broke out between the Brygoi and the Thesprotians, who had the support of Odysseus. The gods Athena and Ares came to the battlefield and took sides. Athena helped the hero Odysseus while Ares fought alongside the Brygoi. When Odysseus lost, Athena and Ares came into a direct duel. To end the terror created by the battling gods, Apollo intervened and stopped the duel between them. Indian war When Zeus suggested that Dionysus defeat the Indians in order to earn a place among the gods, Dionysus declared war against the Indians and travelled to India along with his army of Bacchantes and satyrs. Among the warriors was Aristaeus, Apollo's son. Apollo armed his son with his own hands and gave him a bow and arrows and fitted a strong shield to his arm. After Zeus urged Apollo to join the war, he went to the battlefield. Seeing several of his nymphs and Aristaeus drowning in a river, he took them to safety and healed them. He taught Aristaeus more useful healing arts and sent him back to help the army of Dionysus. Theban war During the war between the sons of Oedipus, Apollo favored Amphiaraus, a seer and one of the leaders in the war. Though saddened that the seer was fated to be doomed in the war, Apollo made Amphiaraus' last hours glorious by "lighting his shield and his helm with starry gleam". When Hypseus tried to kill the hero with a spear, Apollo directed the spear towards the charioteer of Amphiaraus instead. Then Apollo himself replaced the charioteer and took the reins in his hands. He deflected many spears and arrows away from them. He also killed many of the enemy warriors, such as Melaneus, Antiphus, Aetion, Polites and Lampus. At last, when the moment of departure came, Apollo expressed his grief with tears in his eyes and bid farewell to Amphiaraus, who was soon engulfed by the Earth. Slaying of giants Apollo killed the giants Python and Tityos, who had assaulted his mother Leto. 
Gigantomachy During the gigantomachy, Apollo and Heracles blinded the giant Ephialtes by shooting him in his eyes, Apollo shooting his left and Heracles his right. He also killed Porphyrion, the king of giants, using his bow and arrows. Aloadae The Aloadae, namely Otis and Ephialtes, were twin giants who decided to wage war upon the gods. They attempted to storm Mt. Olympus by piling up mountains, and threatened to fill the sea with mountains and inundate dry land. They even dared to seek the hand of Hera and Artemis in marriage. Angered by this, Apollo killed them by shooting them with arrows. According to another tale, Apollo killed them by sending a deer between them; as they tried to kill it with their javelins, they accidentally stabbed each other and died. Phorbas Phorbas was a savage giant king of Phlegyas who was described as having swine-like features. He wished to plunder Delphi for its wealth. He seized the roads to Delphi and started harassing the pilgrims. He captured the old people and children and sent them to his army to hold them for ransom. He challenged the young and sturdy men to boxing matches, only to cut their heads off when they were defeated. He hung the severed heads on an oak tree. Finally, Apollo came to put an end to this cruelty. He entered a boxing contest with Phorbas and killed him with a single blow. Other stories In the first Olympic games, Apollo defeated Ares and became the victor in wrestling. He outran Hermes in the race and won first place. Apollo divides the months into summer and winter. He rides on the back of a swan to the land of the Hyperboreans during the winter months, and the absence of warmth in winter is due to his departure. During his absence, Delphi was under the care of Dionysus, and no prophecies were given during winter. Molpadia and Parthenos Molpadia and Parthenos were the sisters of Rhoeo, a former lover of Apollo. One day, they were put in charge of watching their father's ancestral wine jar but they fell asleep while performing this duty. While they were asleep, the wine jar was broken by the swine their family kept. When the sisters woke up and saw what had happened, they threw themselves off a cliff in fear of their father's wrath. Apollo, who was passing by, caught them and carried them to two different cities in Chersonesus, Molpadia to Castabus and Parthenos to Bubastus. He turned them into goddesses and they both received divine honors. Molpadia's name was changed to Hemithea upon her deification. Prometheus Prometheus was the Titan who was punished by Zeus for stealing fire. He was bound to a rock, where each day an eagle was sent to eat Prometheus' liver, which would then grow back overnight to be eaten again the next day. Seeing his plight, Apollo pleaded with Zeus to release the kind Titan, while Artemis and Leto stood behind him with tears in their eyes. Zeus, moved by Apollo's words and the tears of the goddesses, finally sent Heracles to free Prometheus. The rock of Leukas Leukatas was believed to be a white-colored rock jutting out from the island of Leukas into the sea. It was present in the sanctuary of Apollo Leukates. A leap from this rock was believed to put an end to the longings of love. Once, Aphrodite fell deeply in love with Adonis, a young man of great beauty who was later accidentally killed by a boar. Heartbroken, Aphrodite wandered looking for the rock of Leukas. When she reached the sanctuary of Apollo in Argos, she confided in him her love and sorrow. 
Apollo then brought her to the rock of Leukas and asked her to throw herself from the top of the rock. She did so and was freed from her love. When she asked the reason behind this, Apollo told her that Zeus, before taking another lover, would sit on this rock to free himself from his love for Hera. Another tale relates that a man named Nireus, who fell in love with the cult statue of Athena, came to the rock and jumped in order to relieve himself of his love. After jumping, he fell into the net of a fisherman; when he was pulled out, he found a box filled with gold. He fought with the fisherman and took the gold, but Apollo appeared to him in a dream that night and warned him not to appropriate gold which belonged to others. It was an ancestral custom among the Leukadians to fling a criminal from this rock every year at the sacrifice performed in honor of Apollo for the sake of averting evil. However, a number of men would be stationed all around below the rock to catch the criminal and take him out of the borders in order to exile him from the island. This was the same rock from which, according to a legend, Sappho took her suicidal leap. Female lovers Love affairs ascribed to Apollo are a late development in Greek mythology. Their vivid anecdotal qualities have made some of them favorites of painters since the Renaissance, the result being that they stand out more prominently in the modern imagination. Daphne was a nymph who scorned Apollo's advances and ran away from him. When Apollo chased her in order to persuade her, she changed herself into a laurel tree. According to other versions, she cried for help during the chase, and Gaia helped her by taking her in and placing a laurel tree in her place. According to the Roman poet Ovid, the chase was brought about by Cupid, who hit Apollo with the golden arrow of love and Daphne with the leaden arrow of hatred. The myth explains the origin of the laurel and the connection of Apollo with the laurel and its leaves, which his priestess employed at Delphi. The leaves became the symbol of victory and laurel wreaths were given to the victors of the Pythian games. Apollo is said to have been the lover of all nine Muses, and, not being able to choose one of them, decided to remain unwed. He fathered the Corybantes by the Muse Thalia, Orpheus by Calliope, Linus of Thrace by Calliope or Urania, and Hymenaios (Hymen) by one of the Muses. Cyrene was a Thessalian princess whom Apollo loved. In her honor, he built the city Cyrene and made her its ruler. She was later granted longevity by Apollo, who turned her into a nymph. The couple had two sons, Aristaeus and Idmon. Evadne was a nymph, a daughter of Poseidon, and a lover of Apollo. She bore him a son, Iamus. During the childbirth, Apollo sent Eileithyia, the goddess of childbirth, to assist her. Rhoeo, a princess of the island of Naxos, was loved by Apollo. Out of affection for her, Apollo turned her sisters into goddesses. On the island Delos she bore Apollo a son named Anius. Not wanting to have the child, she entrusted the infant to Apollo and left. Apollo raised and educated the child on his own. Ourea, a daughter of Poseidon, fell in love with Apollo when he and Poseidon were serving the Trojan king Laomedon. They both united on the day the walls of Troy were built. She bore to Apollo a son, whom Apollo named Ileus, after the city of his birth, Ilion (Troy). Ileus was very dear to Apollo. 
Thero, daughter of Phylas, a maiden as beautiful as the moonbeams, was loved by the radiant Apollo, and she loved him in return. By their union, she became the mother of Chaeron, who was famed as "the tamer of horses". He later built the city Chaeronea. Hyrie or Thyrie was the mother of Cycnus. Apollo turned both mother and son into swans when they jumped into a lake and tried to kill themselves. Hecuba was the wife of King Priam of Troy, and Apollo had a son with her named Troilus. An oracle prophesied that Troy would not be defeated as long as Troilus reached the age of twenty alive. He was ambushed and killed by Achilles, and Apollo avenged his death by killing Achilles. After the sack of Troy, Hecuba was taken to Lycia by Apollo. Coronis was the daughter of Phlegyas, King of the Lapiths. While pregnant with Asclepius, Coronis fell in love with Ischys, son of Elatus, and slept with him. When Apollo found out about her infidelity through his prophetic powers, he sent his sister, Artemis, to kill Coronis. Apollo rescued the baby by cutting open Coronis' belly and gave it to the centaur Chiron to raise. Dryope, the daughter of Dryops, was impregnated by Apollo in the form of a snake. She gave birth to a son named Amphissus. In Euripides' play Ion, Apollo fathered Ion by Creusa, wife of Xuthus. He used his powers to conceal her pregnancy from her father. Later, when Creusa left Ion to die in the wild, Apollo asked Hermes to save the child and bring him to the oracle at Delphi, where he was raised by a priestess. Male lovers Hyacinth (or Hyacinthus), a beautiful and athletic Spartan prince, was one of Apollo's favourite lovers. The pair was practicing throwing the discus when a discus thrown by Apollo was blown off course by the jealous Zephyrus and struck Hyacinthus in the head, killing him instantly. Apollo is said to have been filled with grief. Out of Hyacinthus' blood, Apollo created a flower named after him as a memorial to his death, and his tears stained the flower petals with an interjection meaning alas. He was later resurrected and taken to heaven. The festival Hyacinthia was a national celebration of Sparta, which commemorated the death and rebirth of Hyacinthus. Another male lover was Cyparissus, a descendant of Heracles. Apollo gave him a tame deer as a companion, but Cyparissus accidentally killed it with a javelin as it lay asleep in the undergrowth. Cyparissus was so saddened by its death that he asked Apollo to let his tears fall forever. Apollo granted the request by turning him into the cypress tree named after him, which was said to be a sad tree because the sap forms droplets like tears on the trunk. Admetus, the king of Pherae, was also Apollo's lover. During his exile, which lasted either for one year or nine years, Apollo served Admetus as a herdsman. The romantic nature of their relationship was first described by Callimachus of Alexandria, who wrote that Apollo was "fired with love" for Admetus. Plutarch lists Admetus as one of Apollo's lovers and says that Apollo served Admetus because he doted upon him. The Latin poet Ovid, in his Ars Amatoria, said that even though he was a god, Apollo forsook his pride and stayed on as a servant for the sake of Admetus. Tibullus describes Apollo's love for the king as servitium amoris (slavery of love) and asserts that Apollo became his servant not by force but by choice. He would also make cheese and serve it to Admetus. His domestic actions caused embarrassment to his family. 
When Admetus wanted to marry princess Alcestis, Apollo provided a chariot pulled by a lion and a boar he had tamed. This satisfied Alcestis' father and he let Admetus marry his daughter. Further, Apollo saved the king from Artemis' wrath and also convinced the Moirai to postpone Admetus' death once. Branchus, a shepherd, one day came across Apollo in the woods. Captivated by the god's beauty, he kissed Apollo. Apollo requited his affections and, wanting to reward him, bestowed prophetic skills on him. His descendants, the Branchides, were an influential clan of prophets. Other male lovers of Apollo include: Adonis, who is said to have been the lover of both Apollo and Aphrodite, and who behaved as a man with Aphrodite and as a woman with Apollo; Atymnius, otherwise known as a beloved of Sarpedon; Boreas, the god of the north wind; Helenus, the son of Priam and a Trojan prince, who was a lover of Apollo and received from him an ivory bow with which he later wounded Achilles in the hand; Hippolytus of Sicyon (not the same as Hippolytus, the son of Theseus); Hymenaios, the son of Magnes; Iapis, to whom Apollo taught the art of healing; and Phorbas, the dragon slayer (probably the son of Triopas). Children Apollo sired many children, from mortal women and nymphs as well as the goddesses. His children grew up to be physicians, musicians, poets, seers or archers. Many of his sons founded new cities and became kings. They were usually very beautiful. Asclepius is the most famous son of Apollo. His skills as a physician surpassed those of Apollo. Zeus killed him for bringing back the dead, but upon Apollo's request, he was resurrected as a god. Aristaeus was placed under the care of Chiron after his birth. He became the god of beekeeping, cheese making, animal husbandry and more. He was ultimately given immortality for the benefits he bestowed upon humanity. The Corybantes were spear-clashing, dancing demigods. The sons of Apollo who participated in the Trojan War include the Trojan princes Hector and Troilus, as well as Tenes, the king of Tenedos, all three of whom were killed by Achilles over the course of the war. Apollo's children who became musicians and bards include Orpheus, Linus, Ialemus, Hymenaeus, Philammon, Eumolpus and Eleuther. Apollo fathered three daughters, Apollonis, Borysthenis and Cephisso, who formed a group of minor Muses, the "Musa Apollonides". They were nicknamed Nete, Mese and Hypate after the highest, middle and lowest strings of his lyre. Phemonoe was a seer and a poetess who was the inventor of the hexameter. Apis, Idmon, Iamus, Tenerus, Mopsus, Galeus, Telmessus and others were gifted seers. Anius, Pythaeus and Ismenus lived as high priests. Most of them were trained by Apollo himself. Arabus, Delphos, Dryops, Miletos, Tenes, Epidaurus, Ceos, Lycoras, Syrus, Pisus, Marathus, Megarus, Patarus, Acraepheus, Cicon, Chaeron and many other sons of Apollo, under the guidance of his words, founded eponymous cities. He also had a son named Chrysorrhoas who was a mechanic artist. His other daughters include Eurynome; Chariclo, wife of Chiron; Eurydice, the wife of Orpheus; Eriopis, famous for her beautiful hair; Melite the heroine; Pamphile the silk weaver; Parthenos; and, by some accounts, Phoebe, Hilyra and Scylla. Apollo turned Parthenos into a constellation after her early death. Additionally, Apollo fostered and educated Chiron, the centaur who later became the greatest teacher and educated many demigods, including Apollo's sons. Apollo also fostered Carnus, the son of Zeus and Europa. 
Failed love attempts Marpessa was kidnapped by Idas but was loved by Apollo as well. Zeus made her choose between them, and she chose Idas on the grounds that Apollo, being immortal, would tire of her when she grew old. Sinope, a nymph, was approached by the amorous Apollo. She made him promise that he would grant her whatever she asked for, and then cleverly asked him to let her stay a virgin. Apollo kept his promise and went back. Bolina was admired by Apollo but she refused him and jumped into the sea. To prevent her death, Apollo turned her into a nymph and let her go. Castalia was a nymph whom Apollo loved. She fled from him and dove into the spring at Delphi, at the base of Mt. Parnassos, which was then named after her. Water from this spring was sacred; it was used to clean the Delphian temples and inspire the priestesses. Cassandra was a daughter of Hecuba and Priam. Apollo wished to court her. Cassandra promised to return his love on one condition: he should give her the power to see the future. Apollo fulfilled her wish, but she went back on her word and rejected him soon after. Angered that she broke her promise, Apollo cursed her so that, even though she would see the future, no one would ever believe her prophecies. Hestia, the goddess of the hearth, rejected both Apollo's and Poseidon's marriage proposals and swore that she would always stay unmarried. Female counterparts Artemis Artemis, as the sister of Apollo, is thea apollousa; that is, as a female divinity she represented the same idea that Apollo did as a male divinity. In the pre-Hellenic period, their relationship was described as one between husband and wife, and there seems to have been a tradition which actually described Artemis as the wife of Apollo. However, this relationship was never sexual but spiritual, which is why both are seen as unmarried in the Hellenic period. Artemis, like her brother, is armed with a bow and arrows. She is the cause of sudden deaths of women. She is also the protector of the young, especially girls. Though she has nothing to do with oracles, music or poetry, she sometimes led the female chorus on Olympus while Apollo sang. The laurel (daphne) was sacred to both. Artemis Daphnaia had her temple among the Lacedemonians, at a place called Hypsoi. Apollo Daphnephoros had a temple in Eretria, a "place where the citizens are to take the oaths". In later times, when Apollo was regarded as identical with the sun or Helios, Artemis was naturally regarded as Selene or the moon. Hecate Hecate, the goddess of witchcraft and magic, is the chthonic counterpart of Apollo. They are cousins, since their mothers, Leto and Asteria, are sisters. One of Apollo's epithets, Hecatos, is the masculine form of Hecate, and both names mean "working from afar". While Apollo presided over the prophetic powers and magic of light and heaven, Hecate presided over the prophetic powers and magic of night and chthonian darkness. If Hecate is the "gate-keeper", Apollo Agyieus is the "door-keeper". Hecate is the goddess of crossroads and Apollo is the god and protector of streets. The oldest evidence found for Hecate's worship is at Apollo's temple in Miletos. There, Hecate was taken to be Apollo's sister counterpart in the absence of Artemis. Hecate's lunar nature makes her the goddess of the waning moon; it at once contrasts with and complements Apollo's solar nature. Athena As a deity of knowledge and great power, Apollo was seen as the male counterpart of Athena. 
As Zeus' favorite children, they were given more powers and duties. Apollo and Athena often took up the role of protectors of cities, and were patrons of some of the important cities. Athena was the principal goddess of Athens, while Apollo was the principal god of Sparta. As patrons of the arts, Apollo and Athena were companions of the Muses, the former a much more frequent companion than the latter. Apollo was sometimes called the son of Athena and Hephaestus. In the Trojan War, as Zeus' executive, Apollo is seen holding the aegis like Athena usually does. Apollo's decisions were usually approved by his sister Athena, and they both worked to establish the law and order set forth by Zeus. Apollo in the Oresteia In Aeschylus' Oresteia trilogy, Clytemnestra kills her husband, King Agamemnon, because he had sacrificed their daughter Iphigenia in order to proceed with the Trojan War. Apollo gives an order through the Oracle at Delphi that Agamemnon's son, Orestes, is to kill Clytemnestra and Aegisthus, her lover. Orestes and Pylades carry out the revenge, and consequently Orestes is pursued by the Erinyes or Furies (female personifications of vengeance). Apollo and the Furies argue about whether the matricide was justified; Apollo holds that the bond of marriage is sacred and Orestes was avenging his father, whereas the Erinyes say that the bond of blood between mother and son is more meaningful than the bond of marriage. They invade his temple, and he drives them away. He says that the matter should be brought before Athena. Apollo promises to protect Orestes, as Orestes has become Apollo's supplicant. Apollo advocates for Orestes at the trial, and ultimately Athena rules in favor of Apollo. Roman Apollo The Roman worship of Apollo was adopted from the Greeks. As a quintessentially Greek god, Apollo had no direct Roman equivalent, although later Roman poets often referred to him as Phoebus. There was a tradition that the Delphic oracle was consulted as early as the period of the kings of Rome during the reign of Tarquinius Superbus. On the occasion of a pestilence in the 430s BCE, Apollo's first temple at Rome was established in the Flaminian fields, replacing an older cult site there known as the "Apollinare". During the Second Punic War in 212 BCE, the Ludi Apollinares ("Apollonian Games") were instituted in his honor, on the instructions of a prophecy attributed to one Marcius. In the time of Augustus, who considered himself under the special protection of Apollo and was even said to be his son, the god's worship developed and he became one of the chief gods of Rome. After the battle of Actium, which was fought near a sanctuary of Apollo, Augustus enlarged Apollo's temple, dedicated a portion of the spoils to him, and instituted quinquennial games in his honour. He also erected a new temple to the god on the Palatine hill. Sacrifices and prayers on the Palatine to Apollo and Diana formed the culmination of the Secular Games, held in 17 BCE to celebrate the dawn of a new era. Festivals The chief Apollonian festival was the Pythian Games, held every four years at Delphi; it was one of the four great Panhellenic Games. Also of major importance was the Delia, held every four years on Delos. Athenian annual festivals included the Boedromia, Metageitnia, Pyanepsia, and Thargelia. Spartan annual festivals were the Carneia and the Hyacinthia. Every nine years, Thebes held the Daphnephoria. Attributes and symbols Apollo's most common attributes were the bow and arrow. 
Other attributes of his included the kithara (an advanced version of the common lyre), the plectrum and the sword. Another common emblem was the sacrificial tripod, representing his prophetic powers. The Pythian Games were held in Apollo's honor every four years at Delphi. The bay laurel plant was used in expiatory sacrifices and in making the crown of victory at these games. The palm tree was also sacred to Apollo because he had been born under one in Delos. Animals sacred to Apollo included wolves, dolphins, roe deer, swans, cicadas (symbolizing music and song), ravens, hawks, crows (Apollo had hawks and crows as his messengers), snakes (referencing Apollo's function as the god of prophecy), mice and griffins, mythical eagle–lion hybrids of Eastern origin. Homer and Porphyry wrote that Apollo had a hawk as his messenger. In many myths Apollo is transformed into a hawk. In addition, Claudius Aelianus wrote that in Ancient Egypt people believed that hawks were sacred to the god and that according to the ministers of Apollo in Egypt there were certain men called "hawk-keepers" (ἱερακοβοσκοί) who fed and tended the hawks belonging to the god. Eusebius wrote that the second appearance of the moon is held sacred in the city of Apollo in Egypt and that the city's symbol is a man with a hawklike face (Horus). Claudius Aelianus wrote that Egyptians called Apollo Horus in their own language. As god of colonization, Apollo gave oracular guidance on colonies, especially during the height of colonization, 750–550 BCE. According to Greek tradition, he helped Cretan or Arcadian colonists found the city of Troy. However, this story may reflect a cultural influence which had the reverse direction: Hittite cuneiform texts mention an Asia Minor god called Appaliunas or Apalunas in connection with the city of Wilusa attested in Hittite inscriptions, which is now generally regarded as being identical with the Greek Ilion by most scholars. In this interpretation, Apollo's title of Lykegenes can simply be read as "born in Lycia", which effectively severs the god's supposed link with wolves (possibly a folk etymology). In literary contexts, Apollo represents harmony, order, and reason—characteristics contrasted with those of Dionysus, god of wine, who represents ecstasy and disorder. The contrast between the roles of these gods is reflected in the adjectives Apollonian and Dionysian. However, the Greeks thought of the two qualities as complementary: the two gods are brothers, and when Apollo at winter left for Hyperborea, he would leave the Delphic oracle to Dionysus. This contrast appears to be shown on the two sides of the Borghese Vase. Apollo is often associated with the Golden Mean. This is the Greek ideal of moderation and a virtue that opposes gluttony. Apollo in the arts Apollo is a common theme in Greek and Roman art and also in the art of the Renaissance. The earliest Greek word for a statue is "delight" (, agalma), and the sculptors tried to create forms which would inspire such guiding vision. Greek art puts into Apollo the highest degree of power and beauty that can be imagined. The sculptors derived this from observations on human beings, but they also embodied in concrete form, issues beyond the reach of ordinary thought. The naked bodies of the statues are associated with the cult of the body that was essentially a religious activity. 
The muscular frames and limbs combined with slim waists indicate the Greek desire for health, and the physical capacity that was necessary in the hard Greek environment. The statues of Apollo embody beauty and balance, and inspire awe before the beauty of the world. Archaic sculpture Numerous free-standing statues of male youths from Archaic Greece exist, and were once thought to be representations of Apollo, though later discoveries indicated that many represented mortals. In 1895, V. I. Leonardos proposed the term kouros ("male youth") to refer to those from Keratea; this usage was later expanded by Henri Lechat in 1904 to cover all statues of this format. The earliest examples of life-sized statues of Apollo may be two figures from the Ionic sanctuary on the island of Delos. Such statues were found across the Greek-speaking world; the preponderance of these were found at the sanctuaries of Apollo, with more than one hundred from the sanctuary of Apollo Ptoios in Boeotia alone. Significantly rarer are the life-sized bronze statues. One of the few originals which survived into the present day—so rare that its discovery in 1959 was described as "a miracle" by Ernst Homann-Wedeking—is the masterpiece bronze, Piraeus Apollo. It was found in Piraeus, a port city close to Athens, and is believed to have come from north-eastern Peloponnesus. It is the only surviving large-scale Peloponnesian statue. Classical sculpture The famous Apollo of Mantua and its variants are early forms of the Apollo Citharoedus statue type, in which the god holds the cithara, a sophisticated seven-stringed variant of the lyre, in his left arm. While none of the Greek originals have survived, several Roman copies from approximately the late 1st or early 2nd century exist. Other notable forms are the Apollo Citharoedus and the Apollo Barberini. Hellenistic Greece-Rome Apollo, as a handsome beardless young man, is often depicted with a cithara (as Apollo Citharoedus) or bow in his hand, or reclining on a tree (the Apollo Lykeios and Apollo Sauroctonos types). The Apollo Belvedere is a marble sculpture that was rediscovered in the late 15th century; for centuries it epitomized the ideals of Classical Antiquity for Europeans, from the Renaissance through the 19th century. The marble is a Hellenistic or Roman copy of a bronze original by the Greek sculptor Leochares, made between 350 and 325 BCE. The life-size so-called "Adonis" found in 1780 on the site of a villa suburbana near the Via Labicana in the Roman suburb of Centocelle is identified as an Apollo by modern scholars. In the late 2nd century CE floor mosaic from El Djem, Roman Thysdrus, he is identifiable as Apollo Helios by his effulgent halo, though now even a god's divine nakedness is concealed by his cloak, a mark of increasing conventions of modesty in the later Empire. Another haloed Apollo in mosaic, from Hadrumentum, is in the museum at Sousse. The conventions of this representation, head tilted, lips slightly parted, large-eyed, curling hair cut in locks grazing the neck, were developed in the 3rd century BCE to depict Alexander the Great. Some time after this mosaic was executed, the earliest depictions of Christ would also be beardless and haloed. Modern reception Apollo often appears in modern and popular culture due to his status as the god of music, dance and poetry. Postclassical art and literature Dance and music Apollo has featured in dance and music in modern culture. 
Percy Bysshe Shelley composed a "Hymn of Apollo" (1820), and the god's instruction of the Muses formed the subject of Igor Stravinsky's Apollon musagète (1927–1928). In 1978, the Canadian band Rush released an album with songs "Apollo: Bringer of Wisdom"/"Dionysus: Bringer of Love". Books Apollo has been portrayed in modern literature, such as when Charles Handy, in Gods of Management (1978), uses Greek gods as a metaphor to portray various types of organizational culture. Apollo represents a 'role' culture where order, reason, and bureaucracy prevail. In 2016, author Rick Riordan published the first book in the Trials of Apollo series, publishing four other books in the series in 2017, 2018, 2019 and 2020. Film Apollo has been depicted in modern films—for instance, by Keith David in the 1997 animated feature film Hercules, by Luke Evans in the 2010 action film Clash of the Titans, and by Dimitri Lekkos in the 2010 film Percy Jackson & the Olympians: The Lightning Thief. Video games Apollo has appeared in many modern video games. Apollo appears as a minor character in Santa Monica Studio's 2010 action-adventure game God of War III, with his bow being used by Peirithous. He also appears in the 2014 Hi-Rez Studios Multiplayer Online Battle Arena game Smite as a playable character. Psychology and philosophy In philosophical discussion of the arts, a distinction is sometimes made between the Apollonian and Dionysian impulses, where the former is concerned with imposing intellectual order and the latter with chaotic creativity. Friedrich Nietzsche argued that a fusion of the two was most desirable. Psychologist Carl Jung's Apollo archetype represents what he saw as the disposition in people to over-intellectualise and maintain emotional distance. Spaceflight In spaceflight, the 1960s and 1970s NASA program for orbiting and landing astronauts on the Moon was named after Apollo, by NASA manager Abe Silverstein: "Apollo riding his chariot across the Sun was appropriate to the grand scale of the proposed program." Genealogy See also Family tree of the Greek gods Dryad Epirus Phoebus (disambiguation) Sibylline oracles Tegyra Temple of Apollo (disambiguation) Notes References Sources Primary sources Aelian, On Animals, Volume II: Books 6-11. Translated by A. F. Scholfield. Loeb Classical Library 447. Cambridge, MA: Harvard University Press, 1958. Aeschylus, The Eumenides in Aeschylus, with an English translation by Herbert Weir Smyth, Ph. D. in two volumes, Vol 2, Cambridge, Massachusetts, Harvard University Press, 1926, Online version at the Perseus Digital Library. Antoninus Liberalis, The Metamorphoses of Antoninus Liberalis translated by Francis Celoria (Routledge 1992). Online version at the Topos Text Project. Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Apollonius of Rhodes, Apollonius Rhodius: the Argonautica, translated by Robert Cooper Seaton, W. Heinemann, 1912. Internet Archive. Callimachus, Callimachus and Lycophron with an English Translation by A. W. Mair; Aratus, with an English Translation by G. R. Mair, London: W. Heinemann, New York: G. P. Putnam 1921. Online version at Harvard University Press. Internet Archive. Cicero, Marcus Tullius, De Natura Deorum in Cicero in Twenty-eight Volumes, XIX De Natura Deorum; Academica, with an English translation by H. 
Rackham, Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd, 1967. Internet Archive. Diodorus Siculus, Library of History, Volume III: Books 4.59-8, translated by C. H. Oldfather, Loeb Classical Library No. 340. Cambridge, Massachusetts, Harvard University Press, 1939. . Online version at Harvard University Press. Online version by Bill Thayer. Herodotus, Herodotus, with an English translation by A. D. Godley. Cambridge. Harvard University Press. 1920. Online version available at The Perseus Digital Library. Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library. Homeric Hymn 3 to Apollo in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library. Homeric Hymn 4 to Hermes, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library. Homer, The Iliad with an English Translation by A.T. Murray, PhD in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer; The Odyssey with an English Translation by A.T. Murray, PH.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library. Hyginus, Gaius Julius, De Astronomica, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText. Hyginus, Gaius Julius, Fabulae, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText. Livy, The History of Rome, Books I and II With An English Translation. Cambridge. Cambridge, Mass., Harvard University Press; London, William Heinemann, Ltd. 1919. Nonnus, Dionysiaca; translated by Rouse, W H D, I Books I-XV. Loeb Classical Library No. 344, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1940. Internet Archive Nonnus, Dionysiaca; translated by Rouse, W H D, II Books XVI-XXXV. Loeb Classical Library No. 345, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1940. Internet Archive Statius, Thebaid. Translated by Mozley, J H. Loeb Classical Library Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1928. Strabo, The Geography of Strabo. Edition by H.L. Jones. Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Sophocles, Oedipus Rex Palaephatus, On Unbelievable Tales 46. Hyacinthus (330 BCE) Ovid, Metamorphoses, Brookes More, Boston, Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library. 10. 162–219 (1–8 CE) Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library. Philostratus the Elder, Imagines, in Philostratus the Elder, Imagines. Philostratus the Younger, Imagines. 
Callistratus, Descriptions. Translated by Arthur Fairbanks. Loeb Classical Library No. 256. Cambridge, Massachusetts: Harvard University Press, 1931. . Online version at Harvard University Press. Internet Archive 1926 edition. i.24 Hyacinthus (170–245 CE) Philostratus the Younger, Imagines, in Philostratus the Elder, Imagines. Philostratus the Younger, Imagines. Callistratus, Descriptions. Translated by Arthur Fairbanks. Loeb Classical Library No. 256. Cambridge, Massachusetts: Harvard University Press, 1931. . Online version at Harvard University Press. Internet Archive 1926 edition. 14. Hyacinthus (170–245 CE) Pindar, Odes, translated by Diane Arnson Svarlien, 1990. Online version at the Perseus Digital Library. Plutarch. Lives, Volume I: Theseus and Romulus. Lycurgus and Numa. Solon and Publicola. Translated by Bernadotte Perrin. Loeb Classical Library No. 46. Cambridge, Massachusetts: Harvard University Press, 1914. . Online version at Harvard University Press. Numa at the Perseus Digital Library. Pseudo-Plutarch, De fluviis, in Plutarch's morals, Volume V, edited and translated by William Watson Goodwin, Boston: Little, Brown & Co., 1874. Online version at the Perseus Digital Library. Lucian, Dialogues of the Dead. Dialogues of the Sea-Gods. Dialogues of the Gods. Dialogues of the Courtesans, translated by M. D. MacLeod, Loeb Classical Library No. 431, Cambridge, Massachusetts, Harvard University Press, 1961. . Online version at Harvard University Press. Internet Archive. First Vatican Mythographer, 197. Thamyris et Musae Tzetzes, John, Chiliades, editor Gottlieb Kiessling, F.C.G. Vogel, 1826. Google Books. (English translation: Book I by Ana Untila; Books II–IV, by Gary Berkowitz; Books V–VI by Konstantino Ramiotis; Books VII–VIII by Vasiliki Dogani; Books IX–X by Jonathan Alexander; Books XII–XIII by Nikolaos Giallousis. Internet Archive). Valerius Flaccus, Argonautica, translated by J. H. Mozley, Loeb Classical Library No. 286. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1928. . Online version at Harvard University Press. Online translated text available at theoi.com. Vergil, Aeneid, translated by Theodore C. Williams, Boston, Houghton Mifflin Co., 1910. Online version at the Perseus Digital Library. Secondary sources Athanassakis, Apostolos N., and Benjamin M. Wolkow, The Orphic Hymns, Johns Hopkins University Press; first printing edition (May 29, 2013). . Google Books. M. Bieber, 1964. Alexander the Great in Greek and Roman Art. Chicago. Hugh Bowden, 2005. Classical Athens and the Delphic Oracle: Divination and Democracy. Cambridge University Press. Walter Burkert, 1985. Greek Religion (Harvard University Press) III.2.5 passim Fontenrose, Joseph Eddy, Python: A Study of Delphic Myth and Its Origins, University of California Press, 1959. . Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: (Vol. 1), (Vol. 2). Miranda J. Green, 1997. Dictionary of Celtic Myth and Legend, Thames and Hudson. Grimal, Pierre, The Dictionary of Classical Mythology, Wiley-Blackwell, 1996. . Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004, . Google Books. Karl Kerenyi, 1953. Apollon: Studien über Antiken Religion und Humanität revised edition. Kerényi, Karl 1951, The Gods of the Greeks, Thames and Hudson, London. Mertens, Dieter; Schutzenberger, Margareta. 
Città e monumenti dei Greci d'Occidente: dalla colonizzazione alla crisi di fine V secolo a.C.. Roma: L'Erma di Bretschneider, 2006. . Martin Nilsson, 1955. Die Geschichte der Griechische Religion, vol. I. C.H. Beck. Parada, Carlos, Genealogical Guide to Greek Mythology, Jonsered, Paul Åströms Förlag, 1993. . Pauly–Wissowa, Realencyclopädie der klassischen Altertumswissenschaft: II, "Apollon". The best repertory of cult sites (Burkert). Peck, Harry Thurston, Harpers Dictionary of Classical Antiquities, New York. Harper and Brothers. 1898. Online version at the Perseus Digital Library. Pfeiff, K.A., 1943. Apollon: Wandlung seines Bildes in der griechischen Kunst. Traces the changing iconography of Apollo. D. S. Robertson (1945), A Handbook of Greek and Roman Architecture, Cambridge University Press. Smith, William; Dictionary of Greek and Roman Biography and Mythology, London (1873). "Apollo" Smith, William, A Dictionary of Greek and Roman Antiquities. William Smith, LLD. William Wayte. G. E. Marindin. Albemarle Street, London. John Murray. 1890. Online version at the Perseus Digital Library. Spivey, Nigel (1997), Greek Art, Phaedon Press Ltd. External links Apollo at the Greek Mythology Link, by Carlos Parada The Warburg Institute Iconographic Database: ca 1650 images of Apollo
Apollo
Algae (singular: alga) is an informal term for a large and diverse group of photosynthetic eukaryotic organisms. It is a polyphyletic grouping that includes species from multiple distinct clades. Included organisms range from unicellular microalgae, such as Chlorella, Prototheca and the diatoms, to multicellular forms, such as the giant kelp, a large brown alga which may grow up to 50 m in length. Most are aquatic and autotrophic (they generate food internally) and lack many of the distinct cell and tissue types, such as stomata, xylem and phloem, that are found in land plants. The largest and most complex marine algae are called seaweeds, while the most complex freshwater forms are the Charophyta, a division of green algae which includes, for example, Spirogyra and stoneworts. No definition of algae is generally accepted. One definition is that algae "have chlorophyll as their primary photosynthetic pigment and lack a sterile covering of cells around their reproductive cells". However, the colorless Prototheca, placed under Chlorophyta, are devoid of any chlorophyll. Although cyanobacteria are often referred to as "blue-green algae", most authorities exclude all prokaryotes from the definition of algae. Algae constitute a polyphyletic group since they do not include a common ancestor, and although their plastids seem to have a single origin, from cyanobacteria, they were acquired in different ways. Green algae are examples of algae that have primary chloroplasts derived from endosymbiotic cyanobacteria. Diatoms and brown algae are examples of algae with secondary chloroplasts derived from an endosymbiotic red alga. Algae exhibit a wide range of reproductive strategies, from simple asexual cell division to complex forms of sexual reproduction. Algae lack the various structures that characterize land plants, such as the phyllids (leaf-like structures) of bryophytes, rhizoids of nonvascular plants, and the roots, leaves, and other organs found in tracheophytes (vascular plants). Most are phototrophic, although some are mixotrophic, deriving energy both from photosynthesis and uptake of organic carbon either by osmotrophy, myzotrophy, or phagotrophy. Some unicellular species of green algae, many golden algae, euglenids, dinoflagellates, and other algae have become heterotrophs (also called colorless or apochlorotic algae), sometimes parasitic, relying entirely on external energy sources and have limited or no photosynthetic apparatus. Some other heterotrophic organisms, such as the apicomplexans, are also derived from cells whose ancestors possessed plastids, but are not traditionally considered as algae. Algae have photosynthetic machinery ultimately derived from cyanobacteria that produce oxygen as a by-product of photosynthesis, unlike other photosynthetic bacteria such as purple and green sulfur bacteria. Fossilized filamentous algae from the Vindhya basin have been dated back to 1.6 to 1.7 billion years ago. Because of the wide range of algal types, they have increasingly diverse industrial and traditional applications in human society. Traditional seaweed farming practices have existed for thousands of years and have strong traditions in East Asian food cultures. More modern algaculture applications extend these food traditions to other uses, including cattle feed, using algae for bioremediation or pollution control, transforming sunlight into algae fuels or other chemicals used in industrial processes, and medical and scientific applications. 
A 2020 review found that these applications of algae could play an important role in carbon sequestration to mitigate climate change, while providing valuable value-added products for global economies. Etymology and study The singular alga is the Latin word for 'seaweed' and retains that meaning in English. The etymology is obscure. Although some speculate that it is related to a Latin word meaning 'be cold', no reason is known to associate seaweed with temperature. A more likely source is a word meaning 'binding, entwining'. The Ancient Greek word for 'seaweed' could mean either the seaweed itself (probably red algae) or a red dye derived from it. The Latinization, fucus, meant primarily the cosmetic rouge. The etymology is uncertain, but a strong candidate has long been some word related to the Biblical word for 'paint' (if not that word itself), a cosmetic eye-shadow used by the ancient Egyptians and other inhabitants of the eastern Mediterranean. It could be any color: black, red, green, or blue. Accordingly, the modern study of marine and freshwater algae is called either phycology or algology, depending on whether the Greek or Latin root is used. The name fucus appears in a number of taxa. Classifications The committee on the International Code of Botanical Nomenclature has recommended certain suffixes for use in the classification of algae. These are -phyta for division, -phyceae for class, -phycideae for subclass, -ales for order, -inales for suborder, -aceae for family, -oideae for subfamily, a Greek-based name for genus, and a Latin-based name for species. Algal characteristics basic to primary classification The primary classification of algae is based on certain morphological features. The chief among these are (a) pigment constitution of the cell, (b) chemical nature of stored food materials, (c) kind, number, point of insertion and relative length of the flagella on the motile cell, (d) chemical composition of the cell wall, and (e) presence or absence of a definitely organized nucleus in the cell or any other significant details of cell structure. History of classification of algae Although Carolus Linnaeus (1754) included algae along with lichens in his 25th class Cryptogamia, he did not elaborate further on the classification of algae. Jean Pierre Étienne Vaucher (1803) was perhaps the first to propose a system of classification of algae, and he recognized three groups, Conferves, Ulves, and Tremelles. While Johann Heinrich Friedrich Link (1820) classified algae on the basis of the colour of the pigment and structure, William Henry Harvey (1836) proposed a system of classification on the basis of the habitat and the pigment. J. G. Agardh (1849–1898) divided algae into six orders: Diatomaceae, Nostochineae, Confervoideae, Ulvaceae, Floriadeae and Fucoideae. Around 1880, algae along with fungi were grouped under Thallophyta, a division created by Eichler (1883). Encouraged by this, Adolf Engler and Karl A. E. Prantl (1912) proposed a revised scheme of classification of algae and included fungi in algae, as they were of the opinion that fungi had been derived from algae. The scheme proposed by Engler and Prantl is summarised as follows: Schizophyta, Phytosarcodina, Flagellata, Dinoflagellata, Bacillariophyta, Conjugatae, Chlorophyceae, Charophyta, Phaeophyceae, Rhodophyceae, and Eumycetes (Fungi). The algae contain chloroplasts that are similar in structure to cyanobacteria. Chloroplasts contain circular DNA like that in cyanobacteria and are interpreted as representing reduced endosymbiotic cyanobacteria. 
However, the exact origin of the chloroplasts is different among separate lineages of algae, reflecting their acquisition during different endosymbiotic events. The table below describes the composition of the three major groups of algae. Their lineage relationships are shown in the figure in the upper right. Many of these groups contain some members that are no longer photosynthetic. Some retain plastids, but not chloroplasts, while others have lost plastids entirely. Phylogeny based on plastid not nucleocytoplasmic genealogy: Linnaeus, in Species Plantarum (1753), the starting point for modern botanical nomenclature, recognized 14 genera of algae, of which only four are currently considered among algae. In Systema Naturae, Linnaeus described the genera Volvox and Corallina, and a species of Acetabularia (as Madrepora), among the animals. In 1768, Samuel Gottlieb Gmelin (1744–1774) published the Historia Fucorum, the first work dedicated to marine algae and the first book on marine biology to use the then new binomial nomenclature of Linnaeus. It included elaborate illustrations of seaweed and marine algae on folded leaves. W. H. Harvey (1811–1866) and Lamouroux (1813) were the first to divide macroscopic algae into four divisions based on their pigmentation. This is the first use of a biochemical criterion in plant systematics. Harvey's four divisions are: red algae (Rhodospermae), brown algae (Melanospermae), green algae (Chlorospermae), and Diatomaceae. At this time, microscopic algae were discovered and reported by a different group of workers (e.g., O. F. Müller and Ehrenberg) studying the Infusoria (microscopic organisms). Unlike macroalgae, which were clearly viewed as plants, microalgae were frequently considered animals because they are often motile. Even the nonmotile (coccoid) microalgae were sometimes merely seen as stages of the lifecycle of plants, macroalgae, or animals. Although used as a taxonomic category in some pre-Darwinian classifications, e.g., Linnaeus (1753), de Jussieu (1789), Horaninow (1843), Agassiz (1859), Wilson & Cassin (1864), in further classifications, the "algae" are seen as an artificial, polyphyletic group. Throughout the 20th century, most classifications treated the following groups as divisions or classes of algae: cyanophytes, rhodophytes, chrysophytes, xanthophytes, bacillariophytes, phaeophytes, pyrrhophytes (cryptophytes and dinophytes), euglenophytes, and chlorophytes. Later, many new groups were discovered (e.g., Bolidophyceae), and others were splintered from older groups: charophytes and glaucophytes (from chlorophytes), many heterokontophytes (e.g., synurophytes from chrysophytes, or eustigmatophytes from xanthophytes), haptophytes (from chrysophytes), and chlorarachniophytes (from xanthophytes). With the abandonment of plant-animal dichotomous classification, most groups of algae (sometimes all) were included in Protista, later also abandoned in favour of Eukaryota. However, as a legacy of the older plant life scheme, some groups that were also treated as protozoans in the past still have duplicated classifications (see ambiregnal protists). 
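To make the suffix scheme recommended for algal classification (described in the Classifications section above) easy to check programmatically, here is a minimal Python sketch; the dictionary and function names are illustrative assumptions, not part of any nomenclature standard.

```python
# Rank-to-suffix correspondences for algal taxa, as listed above
# (division -phyta, class -phyceae, subclass -phycidae, order -ales,
# suborder -inales, family -aceae, subfamily -oideae).
RECOMMENDED_SUFFIXES = {
    "division": "phyta",
    "class": "phyceae",
    "subclass": "phycidae",
    "order": "ales",
    "suborder": "inales",
    "family": "aceae",
    "subfamily": "oideae",
}

def has_recommended_suffix(taxon_name: str, rank: str) -> bool:
    """Check whether a taxon name carries the suffix recommended for its rank."""
    return taxon_name.lower().endswith(RECOMMENDED_SUFFIXES[rank.lower()])

# Examples: "Chlorophyta" is a division, "Chlorophyceae" a class.
print(has_recommended_suffix("Chlorophyta", "division"))   # True
print(has_recommended_suffix("Chlorophyceae", "class"))    # True
```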
Some parasitic algae (e.g., the green algae Prototheca and Helicosporidium, parasites of metazoans, or Cephaleuros, parasites of plants) were originally classified as fungi, sporozoans, or protistans of incertae sedis, while others (e.g., the green algae Phyllosiphon and Rhodochytrium, parasites of plants, or the red algae Pterocladiophila and Gelidiocolax mammillatus, parasites of other red algae, or the dinoflagellates Oodinium, parasites of fish) had their relationship with algae conjectured early. In other cases, some groups were originally characterized as parasitic algae (e.g., Chlorochytrium), but later were seen as endophytic algae. Some filamentous bacteria (e.g., Beggiatoa) were originally seen as algae. Furthermore, groups like the apicomplexans are also parasites derived from ancestors that possessed plastids, but are not included in any group traditionally seen as algae. Relationship to land plants The first land plants probably evolved from shallow freshwater charophyte algae much like Chara almost 500 million years ago. These probably had an isomorphic alternation of generations and were probably filamentous. Fossils of isolated land plant spores suggest land plants may have been around as long as 475 million years ago. Morphology A range of algal morphologies is exhibited, and convergence of features in unrelated groups is common. The only groups to exhibit three-dimensional multicellular thalli are the reds and browns, and some chlorophytes. Apical growth is constrained to subsets of these groups: the florideophyte reds, various browns, and the charophytes. The form of charophytes is quite different from those of reds and browns, because they have distinct nodes, separated by internode 'stems'; whorls of branches reminiscent of the horsetails occur at the nodes. Conceptacles are another polyphyletic trait; they appear in the coralline algae and the Hildenbrandiales, as well as the browns. Most of the simpler algae are unicellular flagellates or amoeboids, but colonial and nonmotile forms have developed independently among several of the groups. Some of the more common organizational levels, more than one of which may occur in the lifecycle of a species, are Colonial: small, regular groups of motile cells Capsoid: individual non-motile cells embedded in mucilage Coccoid: individual non-motile cells with cell walls Palmelloid: nonmotile cells embedded in mucilage Filamentous: a string of nonmotile cells connected together, sometimes branching Parenchymatous: cells forming a thallus with partial differentiation of tissues In three lines, even higher levels of organization have been reached, with full tissue differentiation. These are the brown algae,—some of which may reach 50 m in length (kelps)—the red algae, and the green algae. The most complex forms are found among the charophyte algae (see Charales and Charophyta), in a lineage that eventually led to the higher land plants. The innovation that defines these nonalgal plants is the presence of female reproductive organs with protective cell layers that protect the zygote and developing embryo. Hence, the land plants are referred to as the Embryophytes. Turfs The term algal turf is commonly used but poorly defined. Algal turfs are thick, carpet-like beds of seaweed that retain sediment and compete with foundation species like corals and kelps, and they are usually less than 15 cm tall. Such a turf may consist of one or more species, and will generally cover an area in the order of a square metre or more. 
Some common characteristics are listed: Algae that form aggregations that have been described as turfs include diatoms, cyanobacteria, chlorophytes, phaeophytes and rhodophytes. Turfs are often composed of numerous species at a wide range of spatial scales, but monospecific turfs are frequently reported. Turfs can be morphologically highly variable over geographic scales and even within species on local scales and can be difficult to identify in terms of the constituent species. Turfs have been defined as short algae, but this has been used to describe height ranges from less than 0.5 cm to more than 10 cm. In some regions, the descriptions approached heights which might be described as canopies (20 to 30 cm). Physiology Many algae, particularly members of the Characeae species, have served as model experimental organisms to understand the mechanisms of the water permeability of membranes, osmoregulation, turgor regulation, salt tolerance, cytoplasmic streaming, and the generation of action potentials. Phytohormones are found not only in higher plants, but in algae, too. Symbiotic algae Some species of algae form symbiotic relationships with other organisms. In these symbioses, the algae supply photosynthates (organic substances) to the host organism providing protection to the algal cells. The host organism derives some or all of its energy requirements from the algae. Examples are: Lichens Lichens are defined by the International Association for Lichenology to be "an association of a fungus and a photosynthetic symbiont resulting in a stable vegetative body having a specific structure". The fungi, or mycobionts, are mainly from the Ascomycota with a few from the Basidiomycota. In nature they do not occur separate from lichens. It is unknown when they began to associate. One mycobiont associates with the same phycobiont species, rarely two, from the green algae, except that alternatively, the mycobiont may associate with a species of cyanobacteria (hence "photobiont" is the more accurate term). A photobiont may be associated with many different mycobionts or may live independently; accordingly, lichens are named and classified as fungal species. The association is termed a morphogenesis because the lichen has a form and capabilities not possessed by the symbiont species alone (they can be experimentally isolated). The photobiont possibly triggers otherwise latent genes in the mycobiont. Trentepohlia is an example of a common green alga genus worldwide that can grow on its own or be lichenised. Lichen thus share some of the habitat and often similar appearance with specialized species of algae (aerophytes) growing on exposed surfaces such as tree trunks and rocks and sometimes discoloring them. Coral reefs Coral reefs are accumulated from the calcareous exoskeletons of marine invertebrates of the order Scleractinia (stony corals). These animals metabolize sugar and oxygen to obtain energy for their cell-building processes, including secretion of the exoskeleton, with water and carbon dioxide as byproducts. Dinoflagellates (algal protists) are often endosymbionts in the cells of the coral-forming marine invertebrates, where they accelerate host-cell metabolism by generating sugar and oxygen immediately available through photosynthesis using incident light and the carbon dioxide produced by the host. Reef-building stony corals (hermatypic corals) require endosymbiotic algae from the genus Symbiodinium to be in a healthy condition. 
The loss of Symbiodinium from the host is known as coral bleaching, a condition which leads to the deterioration of a reef. Sea sponges Endosymbiontic green algae live close to the surface of some sponges, for example, breadcrumb sponges (Halichondria panicea). The alga is thus protected from predators; the sponge is provided with oxygen and sugars which can account for 50 to 80% of sponge growth in some species. Lifecycle Rhodophyta, Chlorophyta, and Heterokontophyta, the three main algal divisions, have lifecycles which show considerable variation and complexity. In general, an asexual phase exists where the seaweed's cells are diploid, a sexual phase where the cells are haploid, followed by fusion of the male and female gametes. Asexual reproduction permits efficient population increases, but less variation is possible. Commonly, in sexual reproduction of unicellular and colonial algae, two specialized, sexually compatible, haploid gametes make physical contact and fuse to form a zygote. To ensure a successful mating, the development and release of gametes is highly synchronized and regulated; pheromones may play a key role in these processes. Sexual reproduction allows for more variation and provides the benefit of efficient recombinational repair of DNA damages during meiosis, a key stage of the sexual cycle. However, sexual reproduction is more costly than asexual reproduction. Meiosis has been shown to occur in many different species of algae. Numbers The Algal Collection of the US National Herbarium (located in the National Museum of Natural History) consists of approximately 320,500 dried specimens, which, although not exhaustive (no exhaustive collection exists), gives an idea of the order of magnitude of the number of algal species (that number remains unknown). Estimates vary widely. For example, according to one standard textbook, in the British Isles the UK Biodiversity Steering Group Report estimated there to be 20,000 algal species in the UK. Another checklist reports only about 5,000 species. Regarding the difference of about 15,000 species, the text concludes: "It will require many detailed field surveys before it is possible to provide a reliable estimate of the total number of species ..." Regional and group estimates have been made, as well: 5,000–5,500 species of red algae worldwide "some 1,300 in Australian Seas" 400 seaweed species for the western coastline of South Africa, and 212 species from the coast of KwaZulu-Natal. Some of these are duplicates, as the range extends across both coasts, and the total recorded is probably about 500 species. Most of these are listed in List of seaweeds of South Africa. These exclude phytoplankton and crustose corallines. 669 marine species from California (US) 642 in the check-list of Britain and Ireland and so on, but lacking any scientific basis or reliable sources, these numbers have no more credibility than the British ones mentioned above. Most estimates also omit microscopic algae, such as phytoplankton. The most recent estimate suggests 72,500 algal species worldwide. Distribution The distribution of algal species has been fairly well studied since the founding of phytogeography in the mid-19th century. Algae spread mainly by the dispersal of spores analogously to the dispersal of Plantae by seeds and spores. This dispersal can be accomplished by air, water, or other organisms. Due to this, spores can be found in a variety of environments: fresh and marine waters, air, soil, and in or on other organisms. 
Whether a spore is to grow into an organism depends on the combination of the species and the environmental conditions where the spore lands. The spores of freshwater algae are dispersed mainly by running water and wind, as well as by living carriers. However, not all bodies of water can carry all species of algae, as the chemical composition of certain water bodies limits the algae that can survive within them. Marine spores are often spread by ocean currents. Ocean water presents many vastly different habitats based on temperature and nutrient availability, resulting in phytogeographic zones, regions, and provinces. To some degree, the distribution of algae is subject to floristic discontinuities caused by geographical features, such as Antarctica, long distances of ocean or general land masses. It is, therefore, possible to identify species occurring by locality, such as "Pacific algae" or "North Sea algae". When they occur out of their localities, hypothesizing a transport mechanism is usually possible, such as the hulls of ships. For example, Ulva reticulata and U. fasciata travelled from the mainland to Hawaii in this manner. Mapping is possible for select species only: "there are many valid examples of confined distribution patterns." For example, Clathromorphum is an arctic genus and is not mapped far south of there. However, scientists regard the overall data as insufficient due to the "difficulties of undertaking such studies." Ecology Algae are prominent in bodies of water, common in terrestrial environments, and are found in unusual environments, such as on snow and ice. Seaweeds grow mostly in shallow marine waters, under deep; however, some such as Navicula pennata have been recorded to a depth of . A type of algae, Ancylonema nordenskioeldii, was found in Greenland in areas known as the 'Dark Zone', which caused an increase in the rate of ice sheet melting. The same alga was found in the Italian Alps after pink ice appeared on parts of the Presena glacier. The various sorts of algae play significant roles in aquatic ecology. Microscopic forms that live suspended in the water column (phytoplankton) provide the food base for most marine food chains. In very high densities (algal blooms), these algae may discolor the water and outcompete, poison, or asphyxiate other life forms. Algae can be used as indicator organisms to monitor pollution in various aquatic systems. In many cases, algal metabolism is sensitive to various pollutants. Due to this, the species composition of algal populations may shift in the presence of chemical pollutants. To detect these changes, algae can be sampled from the environment and maintained in laboratories with relative ease. On the basis of their habitat, algae can be categorized as: aquatic (planktonic, benthic, marine, freshwater, lentic, lotic), terrestrial, aerial (subaerial), lithophytic, halophytic (or euryhaline), psammon, thermophilic, cryophilic, epibiont (epiphytic, epizoic), endosymbiont (endophytic, endozoic), parasitic, calcifilic or lichenic (phycobiont). Cultural associations In classical Chinese, the word is used both for "algae" and (in the modest tradition of the imperial scholars) for "literary talent". The third island in Kunming Lake beside the Summer Palace in Beijing is known as the Zaojian Tang Dao, which thus simultaneously means "Island of the Algae-Viewing Hall" and "Island of the Hall for Reflecting on Literary Talent". 
Cultivation Seaweed farming Bioreactors Uses Agar Agar, a gelatinous substance derived from red algae, has a number of commercial uses. It is a good medium on which to grow bacteria and fungi, as most microorganisms cannot digest agar. Alginates Alginic acid, or alginate, is extracted from brown algae. Its uses range from gelling agents in food to medical dressings. Alginic acid also has been used in the field of biotechnology as a biocompatible medium for cell encapsulation and cell immobilization. Molecular cuisine also makes use of the substance for its gelling properties, by which it becomes a delivery vehicle for flavours. Between 100,000 and 170,000 wet tons of Macrocystis are harvested annually in New Mexico for alginate extraction and abalone feed. Energy source To be competitive and independent of fluctuating support from (local) policy in the long run, biofuels should equal or beat the cost level of fossil fuels. Here, algae-based fuels hold great promise, directly related to the potential to produce more biomass per unit area in a year than any other form of biomass. The break-even point for algae-based biofuels is estimated to occur by 2025. Fertilizer For centuries, seaweed has been used as a fertilizer; George Owen of Henllys, writing in the 16th century, referred to drift weed in South Wales: Today, algae are used by humans in many ways; for example, as fertilizers, soil conditioners, and livestock feed. Aquatic and microscopic species are cultured in clear tanks or ponds and are either harvested or used to treat effluents pumped through the ponds. Algaculture on a large scale is an important type of aquaculture in some places. Maerl is commonly used as a soil conditioner. Nutrition Naturally growing seaweeds are an important source of food, especially in Asia, leading some to label them as superfoods. They provide many vitamins, including A, B1, B2, B6, niacin, and C, and are rich in iodine, potassium, iron, magnesium, and calcium. In addition, commercially cultivated microalgae, including both algae and cyanobacteria, are marketed as nutritional supplements, such as spirulina, Chlorella and the vitamin-C supplement from Dunaliella, high in beta-carotene. Algae are national foods of many nations: China consumes more than 70 species, including fat choy, a cyanobacterium considered a vegetable; Japan, over 20 species such as nori and aonori; Ireland, dulse; Chile, cochayuyo. Laver is used to make laver bread in Wales, where it is known as ; in Korea, . It is also used along the west coast of North America from California to British Columbia, in Hawaii and by the Māori of New Zealand. Sea lettuce and badderlocks are salad ingredients in Scotland, Ireland, Greenland, and Iceland. Algae are being considered as a potential solution to the world hunger problem. Two popular forms of algae are used in cuisine: Chlorella: this form of alga is found in freshwater and contains photosynthetic pigments in its chloroplast. It is high in iron, zinc, magnesium, vitamin B2 and omega-3 fatty acids. Furthermore, it contains all nine of the essential amino acids the body does not produce on its own. Spirulina: otherwise known as a cyanobacterium (a prokaryote, incorrectly referred to as a "blue-green alga"), it contains 10% more protein than Chlorella as well as more thiamine and copper. The oils from some algae have high levels of unsaturated fatty acids. For example, Parietochloris incisa is very high in arachidonic acid, where it reaches up to 47% of the triglyceride pool. 
Some varieties of algae favored by vegetarians and vegans contain the long-chain, essential omega-3 fatty acids, docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA). Fish oil contains the omega-3 fatty acids, but the original source is algae (microalgae in particular), which are eaten by marine life such as copepods and are passed up the food chain. Algae have emerged in recent years as a popular source of omega-3 fatty acids for vegetarians who cannot get long-chain EPA and DHA from other vegetarian sources such as flaxseed oil, which only contains the short-chain alpha-linolenic acid (ALA). Pollution control Sewage can be treated with algae, reducing the use of large amounts of toxic chemicals that would otherwise be needed. Algae can be used to capture fertilizers in runoff from farms. When subsequently harvested, the enriched algae can be used as fertilizer. Aquaria and ponds can be filtered using algae, which absorb nutrients from the water in a device called an algae scrubber, also known as an algae turf scrubber. Agricultural Research Service scientists found that 60–90% of nitrogen runoff and 70–100% of phosphorus runoff can be captured from manure effluents using a horizontal algae scrubber, also called an algal turf scrubber (ATS). Scientists developed the ATS, which consists of shallow, 100-foot raceways of nylon netting where algae colonies can form, and studied its efficacy for three years. They found that algae can readily be used to reduce the nutrient runoff from agricultural fields and increase the quality of water flowing into rivers, streams, and oceans. Researchers collected and dried the nutrient-rich algae from the ATS and studied its potential as an organic fertilizer. They found that cucumber and corn seedlings grew just as well using ATS organic fertilizer as they did with commercial fertilizers. Algae scrubbers, using bubbling upflow or vertical waterfall versions, are now also being used to filter aquaria and ponds. Polymers Various polymers can be created from algae, which can be especially useful in the creation of bioplastics. These include hybrid plastics, cellulose-based plastics, poly-lactic acid, and bio-polyethylene. Several companies have begun to produce algae polymers commercially, including for use in flip-flops and in surfboards. Bioremediation The alga Stichococcus bacillaris has been seen to colonize silicone resins used at archaeological sites, biodegrading the synthetic substance. Pigments The natural pigments (carotenoids and chlorophylls) produced by algae can be used as alternatives to chemical dyes and coloring agents. The presence of some individual algal pigments, together with specific pigment concentration ratios, is taxon-specific: analysis of their concentrations with various analytical methods, particularly high-performance liquid chromatography, can therefore offer deep insight into the taxonomic composition and relative abundance of natural algae populations in sea water samples. Stabilizing substances Carrageenan, from the red alga Chondrus crispus, is used as a stabilizer in milk products. See also AlgaeBase AlgaePARC Eutrophication Iron fertilization Marimo algae Microbiofuels Microphyte Photobioreactor Phycotechnology Plant Toxoid – anatoxin 
Algae
The abacus (plural abaci or abacuses), also called a counting frame, is a calculating tool which has been used since ancient times. It was used in the ancient Near East, Europe, China, and Russia, centuries before the adoption of the Hindu-Arabic numeral system. The exact origin of the abacus has not yet emerged. It consists of rows of movable beads, or similar objects, strung on a wire. They represent digits. One of the two numbers is set up, and the beads are manipulated to perform an operation such as addition, or even a square or cubic root. In their earliest designs, the rows of beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation. Abacuses are still made, often as a bamboo frame with beads sliding on wires. In the ancient world, particularly before the introduction of positional notation, abacuses were a practical calculating tool. The abacus is still used to teach the fundamentals of mathematics to some children, e.g., in post-Soviet states. Designs such as the Japanese soroban have been used for practical calculations of up to multi-digit numbers. Any particular abacus design supports multiple methods to perform calculations, including the four basic operations and square and cube roots. Some of these methods work with non-natural numbers (numbers such as and ). Although calculators and computers are commonly used today instead of abacuses, abacuses remain in everyday use in some countries. Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring system in non-electronic table games. Others may use an abacus due to visual impairment that prevents the use of a calculator. Etymology The word abacus dates to at least AD 1387 when a Middle English work borrowed the word from Latin that described a sandboard abacus. The Latin word is derived from ancient Greek (abax) which means something without a base, and colloquially, any piece of rectangular material. Alternatively, without reference to ancient texts on etymology, it has been suggested that it means "a square tablet strewn with dust", or "drawing-board covered with dust (for the use of mathematics)" (the exact shape of the Latin perhaps reflects the genitive form of the Greek word, (abakos). While the table strewn with dust definition is popular, some argue evidence is insufficient for that conclusion. Greek probably borrowed from a Northwest Semitic language like Phoenician, evidenced by a cognate with the Hebrew word ʾābāq (), or “dust” (in the post-Biblical sense "sand used as a writing surface"). Both abacuses and abaci (soft or hard "c") are used as plurals. The user of an abacus is called an abacist. History Mesopotamia The Sumerian abacus appeared between 2700–2300 BC. It held a table of successive columns which delimited the successive orders of magnitude of their sexagesimal (base 60) number system. Some scholars point to a character in Babylonian cuneiform that may have been derived from a representation of the abacus. It is the belief of Old Babylonian scholars, such as Ettore Carruccio, that Old Babylonians "may have used the abacus for the operations of addition and subtraction; however, this primitive device proved difficult to use for more complex calculations". Egypt Greek historian Herodotus mentioned the abacus in Ancient Egypt. 
He wrote that the Egyptians manipulated the pebbles from right to left, opposite in direction to the Greek left-to-right method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters. However, wall depictions of this instrument are yet to be discovered. Persia At around 600 BC, Persians first began to use the abacus, during the Achaemenid Empire. Under the Parthian, Sassanian, and Iranian empires, scholars concentrated on exchanging knowledge and inventions with the countries around them – India, China, and the Roman Empire- which is how the abacus may have been exported to other countries. Greece The earliest archaeological evidence for the use of the Greek abacus dates to the 5th century BC. Demosthenes (384 BC–322 BC) complained that the need to use pebbles for calculations was too difficult. A play by Alexis from the 4th century BC mentions an abacus and pebbles for accounting, and both Diogenes and Polybius use the abacus as a metaphor for human behavior, stating "that men that sometimes stood for more and sometimes for less" like the pebbles on an abacus. The Greek abacus was a table of wood or marble, pre-set with small counters in wood or metal for mathematical calculations. This Greek abacus saw use in Achaemenid Persia, the Etruscan civilization, Ancient Rome, and the Western Christian world until the French Revolution. A tablet found on the Greek island Salamis in 1846 AD (the Salamis Tablet) dates to 300 BC, making it the oldest counting board discovered so far. It is a slab of white marble in length, wide, and thick, on which are 5 groups of markings. In the tablet's center is a set of 5 parallel lines equally divided by a vertical line, capped with a semicircle at the intersection of the bottom-most horizontal line and the single vertical line. Below these lines is a wide space with a horizontal crack dividing it. Below this crack is another group of eleven parallel lines, again divided into two sections by a line perpendicular to them, but with the semicircle at the top of the intersection; the third, sixth and ninth of these lines are marked with a cross where they intersect with the vertical line. Also from this time frame, the Darius Vase was unearthed in 1851. It was covered with pictures, including a "treasurer" holding a wax tablet in one hand while manipulating counters on a table with the other. China The earliest known written documentation of the Chinese abacus dates to the 2nd century BC. The Chinese abacus, also known as the suanpan (算盤/算盘, lit. "calculating tray"), is typically tall and comes in various widths, depending on the operator. It usually has more than seven rods. There are two beads on each rod in the upper deck and five beads each in the bottom one. The beads are usually rounded and made of hardwood. The beads are counted by moving them up or down towards the beam; beads moved toward the beam are counted, while those moved away from it are not. One of the top beads is 5, while one of the bottom beads is 1. Each rod has a number under it, showing the place value. The suanpan can be reset to the starting position instantly by a quick movement along the horizontal axis to spin all the beads away from the horizontal beam at the center. The prototype of the Chinese abacus appeared during the Han Dynasty, and the beads are oval. The Song Dynasty and earlier used the 1:4 type or four-beads abacus similar to the modern abacus including the shape of the beads commonly known as Japanese-style abacus. 
In the early Ming Dynasty, the abacus began to appear in a 1:5 ratio. The upper deck had one bead and the bottom had five beads. In the late Ming Dynasty, the abacus styles appeared in a 2:5 ratio. The upper deck had two beads, and the bottom had five. Various calculation techniques were devised for the suanpan, enabling efficient calculations. Some schools teach students how to use it. In the long scroll Along the River During the Qingming Festival painted by Zhang Zeduan during the Song dynasty (960–1279), a suanpan is clearly visible beside an account book and doctor's prescriptions on the counter of an apothecary's (Feibao). The similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, given evidence of a trade relationship between the Roman Empire and China. However, no direct connection has been demonstrated, and the similarity of the abacuses may be coincidental, both ultimately arising from counting with five fingers per hand. Where the Roman model (like most modern Korean and Japanese) has 4 plus 1 bead per decimal place, the standard suanpan has 5 plus 2. Incidentally, this allows use with a hexadecimal numeral system (or any base up to 18) which may have been used for traditional Chinese measures of weight. (Instead of running on wires as in the Chinese, Korean, and Japanese models, the Roman model used grooves, presumably making arithmetic calculations much slower.) Another possible source of the suanpan is Chinese counting rods, which operated with a decimal system but lacked the concept of zero as a placeholder. The zero was probably introduced to the Chinese in the Tang dynasty (618–907) when travel in the Indian Ocean and the Middle East would have provided direct contact with India, allowing them to acquire the concept of zero and the decimal point from Indian merchants and mathematicians. Rome The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles (calculi) were used. Later, and in medieval Europe, jetons were manufactured. Marked lines indicated units, fives, tens, etc. as in the Roman numeral system. This system of 'counter casting' continued into the late Roman empire and in medieval Europe and persisted in limited use into the nineteenth century. Due to Pope Sylvester II's reintroduction of the abacus with modifications, it became widely used in Europe again during the 11th century. This abacus used beads on wires, unlike the traditional Roman counting boards, which meant the abacus could be used much faster and was more easily moved. Writing in the 1st century BC, Horace refers to the wax abacus, a board covered with a thin layer of black wax on which columns and figures were inscribed using a stylus. One example of archaeological evidence of the Roman abacus dates to the 1st century AD. It has eight long grooves containing up to five beads in each and eight shorter grooves having either one or no beads in each. The groove marked I indicates units, X tens, and so on up to millions. The beads in the shorter grooves denote fives – five units, five tens, etc., essentially in a bi-quinary coded decimal system, related to the Roman numerals. The short grooves on the right may have been used for marking Roman "ounces" (i.e. fractions). 
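As a rough illustration of the place-value logic just described (each suanpan rod holding beads worth five in the upper deck and one in the lower deck), the following Python sketch decomposes a number into per-rod bead counts; the function name and the choice of base are assumptions made for the example, not features of any historical instrument.

```python
def suanpan_rods(n: int, base: int = 10):
    """Decompose n into (upper, lower) bead counts per rod, least significant rod first.

    Upper-deck beads count five each and lower-deck beads count one each, so a
    5+2 suanpan rod can hold digit values up to 15, enough for hexadecimal work.
    """
    if not 2 <= base <= 16:
        raise ValueError("a 5+2 rod can only represent digit values 0-15")
    rods = []
    while True:
        digit = n % base
        upper = min(digit // 5, 2)            # at most two upper beads per rod
        rods.append((upper, digit - 5 * upper))
        n //= base
        if n == 0:
            return rods

# Example: 1,234 in decimal needs four rods: 4, 3, 2 and 1 lower beads, no upper beads.
print(suanpan_rods(1234))         # [(0, 4), (0, 3), (0, 2), (0, 1)]
# Example: the hexadecimal digit value 13 on one rod uses two upper and three lower beads.
print(suanpan_rods(13, base=16))  # [(2, 3)]
```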
India The Abhidharmakośabhāṣya of Vasubandhu (316-396), a Sanskrit work on Buddhist philosophy, says that the second-century CE philosopher Vasumitra said that "placing a wick (Sanskrit vartikā) on the number one (ekāṅka) means it is a one while placing the wick on the number hundred means it is called a hundred, and on the number one thousand means it is a thousand". It is unclear exactly what this arrangement may have been. Around the 5th century, Indian clerks were already finding new ways of recording the contents of the abacus. Hindu texts used the term śūnya (zero) to indicate the empty column on the abacus. Japan In Japan, the abacus is called soroban (, lit. "counting tray"). It was imported from China in the 14th century. It was probably in use by the working class a century or more before the ruling class adopted it, as the class structure obstructed such changes. The 1:4 abacus, which removes the seldom-used second and fifth bead became popular in the 1940s. Today's Japanese abacus is a 1:4 type, four-bead abacus, introduced from China in the Muromachi era. It adopts the form of the upper deck one bead and the bottom four beads. The top bead on the upper deck was equal to five and the bottom one is similar to the Chinese or Korean abacus, and the decimal number can be expressed, so the abacus is designed as a one:four device. The beads are always in the shape of a diamond. The quotient division is generally used instead of the division method; at the same time, in order to make the multiplication and division digits consistently use the division multiplication. Later, Japan had a 3:5 abacus called 天三算盤, which is now in the Ize Rongji collection of Shansi Village in Yamagata City. Japan also used a 2:5 type abacus. The four-bead abacus spread, and became common around the world. Improvements to the Japanese abacus arose in various places. In China an aluminium frame plastic bead abacus was used. The file is next to the four beads, and pressing the "clearing" button put the upper bead in the upper position, and the lower bead in the lower position. The abacus is still manufactured in Japan even with the proliferation, practicality, and affordability of pocket electronic calculators. The use of the soroban is still taught in Japanese primary schools as part of mathematics, primarily as an aid to faster mental calculation. Using visual imagery can complete a calculation as quickly as a physical instrument. Korea The Chinese abacus migrated from China to Korea around 1400 AD. Koreans call it jupan (주판), supan (수판) or jusan (주산). The four-beads abacus (1:4) was introduced during the Goryeo Dynasty. The 5:1 abacus was introduced to Korea from China during the Ming Dynasty. Native America Some sources mention the use of an abacus called a nepohualtzintzin in ancient Aztec culture. This Mesoamerican abacus used a 5-digit base-20 system. The word Nepōhualtzintzin comes from Nahuatl, formed by the roots; Ne – personal -; pōhual or pōhualli – the account -; and tzintzin – small similar elements. Its complete meaning was taken as: counting with small similar elements. Its use was taught in the Calmecac to the temalpouhqueh , who were students dedicated to taking the accounts of skies, from childhood. The Nepōhualtzintzin was divided into two main parts separated by a bar or intermediate cord. In the left part were four beads. Beads in the first row have unitary values (1, 2, 3, and 4), and on the right side, three beads had values of 5, 10, and 15, respectively. 
To know the value of the respective beads of the upper rows, it is enough to multiply the value of the corresponding count in the first row by 20 for each row. The device featured 13 rows with 7 beads, 91 in total. This was a basic number for this culture. It had a close relation to natural phenomena, the underworld, and the cycles of the heavens. One Nepōhualtzintzin (91) represented the number of days that a season of the year lasts, two Nepōhualtzintzin (182) was the number of days of the corn's cycle, from its sowing to its harvest, three Nepōhualtzintzin (273) was the number of days of a baby's gestation, and four Nepōhualtzintzin (364) completed a cycle and approximated one year. When translated into modern computer arithmetic, the Nepōhualtzintzin covered a range of about 10 to the 18th power in floating point, which precisely calculated large and small amounts, although rounding off was not allowed. The rediscovery of the Nepōhualtzintzin was due to the Mexican engineer David Esparza Hidalgo, who in his travels throughout Mexico found diverse engravings and paintings of this instrument and reconstructed several of them in gold, jade, encrustations of shell, etc. Very old Nepōhualtzintzin are attributed to the Olmec culture, and some bracelets of Mayan origin, as well as a diversity of forms and materials in other cultures. Sanchez wrote in Arithmetic in Maya that another base 5, base 4 abacus had been found in the Yucatán Peninsula that also computed calendar data. This was a finger abacus: on one hand 0, 1, 2, 3, and 4 were used, and on the other hand 0, 1, 2, and 3 were used. Note the use of zero at the beginning and end of the two cycles. The quipu of the Incas was a system of colored knotted cords used to record numerical data, like advanced tally sticks – but not used to perform calculations. Calculations were carried out using a yupana (Quechua for "counting tool") which was still in use after the conquest of Peru. The working principle of a yupana is unknown, but in 2001 Italian mathematician De Pasquale proposed an explanation. By comparing the form of several yupanas, researchers found that calculations were based on the Fibonacci sequence 1, 1, 2, 3, 5 and on powers of 10, 20, and 40 as place values for the different fields in the instrument. Using the Fibonacci sequence would keep the number of grains within any one field at a minimum. Russia The Russian abacus, the schoty (, plural from , counting), usually has a single slanted deck, with ten beads on each wire (except one wire with four beads for quarter-ruble fractions). Older models have another 4-bead wire for quarter-kopeks, which were minted until 1916. The Russian abacus is often used vertically, with each wire running horizontally. The wires are usually bowed upward in the center, to keep the beads pinned to either side. It is cleared when all the beads are moved to the right. During manipulation, beads are moved to the left. For easy viewing, the middle 2 beads on each wire (the 5th and 6th bead) usually are of a different color from the other eight. Likewise, the left bead of the thousands wire (and the million wire, if present) may have a different color. The Russian abacus was in use in shops and markets throughout the former Soviet Union, and its usage was taught in most schools until the 1990s. According to Yakov Perelman, even the 1874 invention of the Odhner arithmometer, a mechanical calculator, did not displace the abacus in Russia. 
Some businessmen attempting to import calculators into the Russian Empire were known to leave in despair after watching a skilled abacus operator. Likewise, the mass production of Felix arithmometers since 1924 did not significantly reduce abacus use in the Soviet Union. The Russian abacus began to lose popularity only after the mass production of domestic microcalculators in 1974. The Russian abacus was brought to France around 1820 by mathematician Jean-Victor Poncelet, who had served in Napoleon's army and had been a prisoner of war in Russia. The abacus had fallen out of use in western Europe in the 16th century with the rise of decimal notation and algorismic methods. To Poncelet's French contemporaries, it was something new. Poncelet used it, not for any applied purpose, but as a teaching and demonstration aid. The Turks and the Armenian people used abacuses similar to the Russian schoty. It was named a coulba by the Turks and a choreb by the Armenians. School abacus Around the world, abacuses have been used in pre-schools and elementary schools as an aid in teaching the numeral system and arithmetic. In Western countries, a bead frame similar to the Russian abacus but with straight wires and a vertical frame is common (see image). The wireframe may be used either with positional notation like other abacuses (thus the 10-wire version may represent numbers up to 9,999,999,999), or each bead may represent one unit (e.g. 74 can be represented by shifting all beads on 7 wires and 4 beads on the 8th wire, so numbers up to 100 may be represented). In the bead frame shown, the gap between the 5th and 6th wire, corresponding to the color change between the 5th and the 6th bead on each wire, suggests the latter use. Teaching multiplication, e.g. 6 times 7, may be represented by shifting 7 beads on 6 wires. The red-and-white abacus is used in contemporary primary schools for a wide range of number-related lessons. The twenty bead version, referred to by its Dutch name rekenrek ("calculating frame"), is often used, either on a string of beads or on a rigid framework. Feynman vs the abacus Physicist Richard Feynman was noted for facility in mathematical calculations. He wrote about an encounter in Brazil with a Japanese abacus expert, who challenged him to speed contests between Feynman's pen and paper, and the abacus. The abacus was much faster for addition, somewhat faster for multiplication, but Feynman was faster at division. When the abacus was used for a really difficult challenge, i.e. cube roots, Feynman won easily. However, the number chosen at random was close to a number Feynman happened to know was an exact cube, allowing him to use approximate methods. Neurological analysis Learning how to calculate with the abacus may improve capacity for mental calculation. Abacus-based mental calculation (AMC), which was derived from the abacus, is the act of performing calculations, including addition, subtraction, multiplication, and division, in the mind by manipulating an imagined abacus. It is a high-level cognitive skill that runs calculations with an effective algorithm. People doing long-term AMC training show higher numerical memory capacity and experience more effectively connected neural pathways. They are able to retrieve memory to deal with complex processes. AMC involves both visuospatial and visuomotor processing that generate the visual abacus and move the imaginary beads. 
Since it only requires that the final position of beads be remembered, it takes less memory and less computation time. Binary abacus The binary abacus is used to explain how computers manipulate numbers. The abacus shows how numbers, letters, and signs can be stored in a binary system on a computer, or via ASCII. The device consists of a series of beads on parallel wires arranged in three separate rows. The beads represent a switch on the computer in either an "on" or "off" position. Visually impaired users An adapted abacus, invented by Tim Cranmer and called a Cranmer abacus, is commonly used by visually impaired users. A piece of soft fabric or rubber is placed behind the beads, keeping them in place while the users manipulate them. The device is then used to perform the mathematical functions of multiplication, division, addition, subtraction, square root, and cube root. Although blind students have benefited from talking calculators, the abacus is often taught to these students in early grades. Blind students can also complete mathematical assignments using a braille-writer and Nemeth code (a type of braille code for mathematics), but large multiplication and long division problems are tedious. The abacus gives these students a tool to compute mathematical problems that equals the speed and mathematical knowledge required by their sighted peers using pencil and paper. Many blind people find this number machine a useful tool throughout life. See also Chinese Zhusuan Chisanbop Logical abacus Mental abacus Napier's bones Sand table Slide rule Soroban Suanpan
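As a small sketch of the binary-abacus idea described earlier in this section (each bead standing for one bit, so a row of beads can hold an ASCII code), here is a Python example; the function name and the eight-bit row width are assumptions made only for the illustration.

```python
def binary_abacus_row(ch: str, width: int = 8):
    """Return the bead states (1 = bead counted as 'on', 0 = 'off') that encode one
    ASCII character on a row of a binary abacus, most significant bit first."""
    code = ord(ch)
    if code >= 2 ** width:
        raise ValueError("character does not fit in the given row width")
    return [(code >> bit) & 1 for bit in range(width - 1, -1, -1)]

# Example: the letter 'A' has ASCII code 65, i.e. 01000001 in binary.
print(binary_abacus_row("A"))   # [0, 1, 0, 0, 0, 0, 0, 1]
```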
Abacus
An acid is a molecule or ion capable of either donating a proton (i.e., hydrogen ion, H+), known as a Brønsted–Lowry acid, or, capable of forming a covalent bond with an electron pair, known as a Lewis acid. The first category of acids are the proton donors, or Brønsted–Lowry acids. In the special case of aqueous solutions, proton donors form the hydronium ion H3O+ and are known as Arrhenius acids. Brønsted and Lowry generalized the Arrhenius theory to include non-aqueous solvents. A Brønsted or Arrhenius acid usually contains a hydrogen atom bonded to a chemical structure that is still energetically favorable after loss of H+. Aqueous Arrhenius acids have characteristic properties which provide a practical description of an acid. Acids form aqueous solutions with a sour taste, can turn blue litmus red, and react with bases and certain metals (like calcium) to form salts. The word acid is derived from the Latin acidus/acēre, meaning 'sour'. An aqueous solution of an acid has a pH less than 7 and is colloquially also referred to as "acid" (as in "dissolved in acid"), while the strict definition refers only to the solute. A lower pH means a higher acidity, and thus a higher concentration of positive hydrogen ions in the solution. Chemicals or substances having the property of an acid are said to be acidic. Common aqueous acids include hydrochloric acid (a solution of hydrogen chloride which is found in gastric acid in the stomach and activates digestive enzymes), acetic acid (vinegar is a dilute aqueous solution of this liquid), sulfuric acid (used in car batteries), and citric acid (found in citrus fruits). As these examples show, acids (in the colloquial sense) can be solutions or pure substances, and can be derived from acids (in the strict sense) that are solids, liquids, or gases. Strong acids and some concentrated weak acids are corrosive, but there are exceptions such as carboranes and boric acid. The second category of acids are Lewis acids, which form a covalent bond with an electron pair. An example is boron trifluoride (BF3), whose boron atom has a vacant orbital which can form a covalent bond by sharing a lone pair of electrons on an atom in a base, for example the nitrogen atom in ammonia (NH3). Lewis considered this as a generalization of the Brønsted definition, so that an acid is a chemical species that accepts electron pairs either directly or by releasing protons (H+) into the solution, which then accept electron pairs. However, hydrogen chloride, acetic acid, and most other Brønsted–Lowry acids cannot form a covalent bond with an electron pair and are therefore not Lewis acids. Conversely, many Lewis acids are not Arrhenius or Brønsted–Lowry acids. In modern terminology, an acid is implicitly a Brønsted acid and not a Lewis acid, since chemists almost always refer to a Lewis acid explicitly as a Lewis acid. Definitions and concepts Modern definitions are concerned with the fundamental chemical reactions common to all acids. Most acids encountered in everyday life are aqueous solutions, or can be dissolved in water, so the Arrhenius and Brønsted–Lowry definitions are the most relevant. The Brønsted–Lowry definition is the most widely used definition; unless otherwise specified, acid–base reactions are assumed to involve the transfer of a proton (H+) from an acid to a base. Hydronium ions are acids according to all three definitions. 
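As a worked illustration of the pH scale mentioned above, the sketch below computes pH as the negative base-10 logarithm of the hydronium-ion concentration; the function name is an assumption for the example, and the sample concentration assumes a strong monoprotic acid that dissociates completely.

```python
import math

def pH_from_hydronium(h3o_concentration_mol_per_L: float) -> float:
    """pH is the negative base-10 logarithm of the hydronium-ion concentration."""
    return -math.log10(h3o_concentration_mol_per_L)

# Example: 0.01 mol/L of a fully dissociated strong monoprotic acid (such as HCl)
# gives [H3O+] = 0.01 mol/L and therefore pH = 2.0, well below the neutral value of 7.
print(pH_from_hydronium(0.01))  # 2.0
```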
Although alcohols and amines can be Brønsted–Lowry acids, they can also function as Lewis bases due to the lone pairs of electrons on their oxygen and nitrogen atoms. Arrhenius acids In 1884, Svante Arrhenius attributed the properties of acidity to hydrogen ions (H+), later described as protons or hydrons. An Arrhenius acid is a substance that, when added to water, increases the concentration of H+ ions in the water. Note that chemists often write H+(aq) and refer to the hydrogen ion when describing acid–base reactions, but the free hydrogen nucleus, a proton, does not exist alone in water; it exists as the hydronium ion (H3O+) or other forms (H5O2+, H9O4+). Thus, an Arrhenius acid can also be described as a substance that increases the concentration of hydronium ions when added to water. Examples include molecular substances such as hydrogen chloride and acetic acid. An Arrhenius base, on the other hand, is a substance which increases the concentration of hydroxide (OH−) ions when dissolved in water. This decreases the concentration of hydronium because the ions react to form H2O molecules: H3O+ + OH− ⇌ H2O(liq) + H2O(liq). Due to this equilibrium, any increase in the concentration of hydronium is accompanied by a decrease in the concentration of hydroxide. Thus, an Arrhenius acid could also be said to be one that decreases hydroxide concentration, while an Arrhenius base increases it. In an acidic solution, the concentration of hydronium ions is greater than 10−7 moles per liter. Since pH is defined as the negative logarithm of the concentration of hydronium ions, acidic solutions thus have a pH of less than 7. Brønsted–Lowry acids While the Arrhenius concept is useful for describing many reactions, it is also quite limited in its scope. In 1923, chemists Johannes Nicolaus Brønsted and Thomas Martin Lowry independently recognized that acid–base reactions involve the transfer of a proton. A Brønsted–Lowry acid (or simply Brønsted acid) is a species that donates a proton to a Brønsted–Lowry base. Brønsted–Lowry acid–base theory has several advantages over Arrhenius theory. Consider the following reactions of acetic acid (CH3COOH), the organic acid that gives vinegar its characteristic taste: CH3COOH + H2O ⇌ CH3COO− + H3O+ and CH3COOH + NH3 ⇌ CH3COO− + NH4+. Both theories easily describe the first reaction: CH3COOH acts as an Arrhenius acid because it acts as a source of H3O+ when dissolved in water, and it acts as a Brønsted acid by donating a proton to water. In the second example CH3COOH undergoes the same transformation, in this case donating a proton to ammonia (NH3), but does not relate to the Arrhenius definition of an acid because the reaction does not produce hydronium. Nevertheless, CH3COOH is both an Arrhenius and a Brønsted–Lowry acid. Brønsted–Lowry theory can be used to describe reactions of molecular compounds in nonaqueous solution or the gas phase. Hydrogen chloride (HCl) and ammonia combine under several different conditions to form ammonium chloride, NH4Cl. In aqueous solution HCl behaves as hydrochloric acid and exists as hydronium and chloride ions. The following reactions illustrate the limitations of Arrhenius's definition: H3O+(aq) + Cl−(aq) + NH3 → Cl−(aq) + NH4+(aq) + H2O; HCl(benzene) + NH3(benzene) → NH4Cl(s); HCl(g) + NH3(g) → NH4Cl(s). As with the acetic acid reactions, both definitions work for the first example, where water is the solvent and hydronium ion is formed by the HCl solute. The next two reactions do not involve the formation of ions but are still proton-transfer reactions. 
In the second reaction hydrogen chloride and ammonia (dissolved in benzene) react to form solid ammonium chloride in a benzene solvent, and in the third, gaseous HCl and NH3 combine to form the solid. Lewis acids A third concept, only marginally related to the first two, was proposed in 1923 by Gilbert N. Lewis; it includes reactions with acid–base characteristics that do not involve a proton transfer. A Lewis acid is a species that accepts a pair of electrons from another species; in other words, it is an electron pair acceptor. Brønsted acid–base reactions are proton transfer reactions, while Lewis acid–base reactions are electron pair transfers. Many Lewis acids are not Brønsted–Lowry acids. Contrast how the following reactions are described in terms of acid–base chemistry: BF3 + F− → BF4− and NH3 + H+ → NH4+. In the first reaction a fluoride ion, F−, gives up an electron pair to boron trifluoride to form the product tetrafluoroborate. Fluoride "loses" a pair of valence electrons because the electrons shared in the B—F bond are located in the region of space between the two atomic nuclei and are therefore more distant from the fluoride nucleus than they are in the lone fluoride ion. BF3 is a Lewis acid because it accepts the electron pair from fluoride. This reaction cannot be described in terms of Brønsted theory because there is no proton transfer. The second reaction can be described using either theory. A proton is transferred from an unspecified Brønsted acid to ammonia, a Brønsted base; alternatively, ammonia acts as a Lewis base and transfers a lone pair of electrons to form a bond with a hydrogen ion. The species that gains the electron pair is the Lewis acid; for example, the oxygen atom in H3O+ gains a pair of electrons when one of the H—O bonds is broken and the electrons shared in the bond become localized on oxygen. Depending on the context, a Lewis acid may also be described as an oxidizer or an electrophile. Organic Brønsted acids, such as acetic, citric, or oxalic acid, are not Lewis acids. They dissociate in water to produce a Lewis acid, H+, but at the same time also yield an equal amount of a Lewis base (acetate, citrate, or oxalate, respectively, for the acids mentioned). This article deals mostly with Brønsted acids rather than Lewis acids. Dissociation and equilibrium Reactions of acids are often generalized in the form HA ⇌ H+ + A−, where HA represents the acid and A− is the conjugate base. This reaction is referred to as protolysis. The protonated form (HA) of an acid is also sometimes referred to as the free acid. Acid–base conjugate pairs differ by one proton, and can be interconverted by the addition or removal of a proton (protonation and deprotonation, respectively). Note that the acid can be the charged species and the conjugate base can be neutral, in which case the generalized reaction scheme could be written as HA+ ⇌ H+ + A. In solution there exists an equilibrium between the acid and its conjugate base. The equilibrium constant K is an expression of the equilibrium concentrations of the molecules or the ions in solution. Brackets indicate concentration, such that [H2O] means the concentration of H2O. The acid dissociation constant Ka is generally used in the context of acid–base reactions. The numerical value of Ka is equal to the product of the concentrations of the products divided by the concentration of the reactants, where the reactant is the acid (HA) and the products are the conjugate base and H+. 
The stronger of two acids will have a higher Ka than the weaker acid; the ratio of hydrogen ions to acid will be higher for the stronger acid as the stronger acid has a greater tendency to lose its proton. Because the range of possible values for Ka spans many orders of magnitude, a more manageable constant, pKa is more frequently used, where pKa = −log10 Ka. Stronger acids have a smaller pKa than weaker acids. Experimentally determined pKa at 25 °C in aqueous solution are often quoted in textbooks and reference material. Nomenclature Arrhenius acids are named according to their anions. In the classical naming system, the ionic suffix is dropped and replaced with a new suffix, according to the table following. The prefix "hydro-" is used when the acid is made up of just hydrogen and one other element. For example, HCl has chloride as its anion, so the hydro- prefix is used, and the -ide suffix makes the name take the form hydrochloric acid. Classical naming system: In the IUPAC naming system, "aqueous" is simply added to the name of the ionic compound. Thus, for hydrogen chloride, as an acid solution, the IUPAC name is aqueous hydrogen chloride. Acid strength The strength of an acid refers to its ability or tendency to lose a proton. A strong acid is one that completely dissociates in water; in other words, one mole of a strong acid HA dissolves in water yielding one mole of H+ and one mole of the conjugate base, A−, and none of the protonated acid HA. In contrast, a weak acid only partially dissociates and at equilibrium both the acid and the conjugate base are in solution. Examples of strong acids are hydrochloric acid (HCl), hydroiodic acid (HI), hydrobromic acid (HBr), perchloric acid (HClO4), nitric acid (HNO3) and sulfuric acid (H2SO4). In water each of these essentially ionizes 100%. The stronger an acid is, the more easily it loses a proton, H+. Two key factors that contribute to the ease of deprotonation are the polarity of the H—A bond and the size of atom A, which determines the strength of the H—A bond. Acid strengths are also often discussed in terms of the stability of the conjugate base. Stronger acids have a larger acid dissociation constant, Ka and a more negative pKa than weaker acids. Sulfonic acids, which are organic oxyacids, are a class of strong acids. A common example is toluenesulfonic acid (tosylic acid). Unlike sulfuric acid itself, sulfonic acids can be solids. In fact, polystyrene functionalized into polystyrene sulfonate is a solid strongly acidic plastic that is filterable. Superacids are acids stronger than 100% sulfuric acid. Examples of superacids are fluoroantimonic acid, magic acid and perchloric acid. Superacids can permanently protonate water to give ionic, crystalline hydronium "salts". They can also quantitatively stabilize carbocations. While Ka measures the strength of an acid compound, the strength of an aqueous acid solution is measured by pH, which is an indication of the concentration of hydronium in the solution. The pH of a simple solution of an acid compound in water is determined by the dilution of the compound and the compound's Ka. Lewis acid strength in non-aqueous solutions Lewis acids have been classified in the ECW model and it has been shown that there is no one order of acid strengths. The relative acceptor strength of Lewis acids toward a series of bases, versus other Lewis acids, can be illustrated by C-B plots. It has been shown that to define the order of Lewis acid strength at least two properties must be considered. 
For Pearson's qualitative HSAB theory the two properties are hardness and strength while for Drago's quantitative ECW model the two properties are electrostatic and covalent. Chemical characteristics Monoprotic acids Monoprotic acids, also known as monobasic acids, are those acids that are able to donate one proton per molecule during the process of dissociation (sometimes called ionization) as shown below (symbolized by HA): HA(aq) + H2O(l) ⇌ H3O+(aq) + A−(aq)     Ka Common examples of monoprotic acids in mineral acids include hydrochloric acid (HCl) and nitric acid (HNO3). On the other hand, for organic acids the term mainly indicates the presence of one carboxylic acid group, and sometimes these acids are known as monocarboxylic acids. Examples in organic acids include formic acid (HCOOH), acetic acid (CH3COOH) and benzoic acid (C6H5COOH). Polyprotic acids Polyprotic acids, also known as polybasic acids, are able to donate more than one proton per acid molecule, in contrast to monoprotic acids that only donate one proton per molecule. Specific types of polyprotic acids have more specific names, such as diprotic (or dibasic) acid (two potential protons to donate), and triprotic (or tribasic) acid (three potential protons to donate). Some macromolecules such as proteins and nucleic acids can have a very large number of acidic protons. A diprotic acid (here symbolized by H2A) can undergo one or two dissociations depending on the pH. Each dissociation has its own dissociation constant, Ka1 and Ka2: H2A(aq) + H2O(l) ⇌ H3O+(aq) + HA−(aq)     Ka1 and HA−(aq) + H2O(l) ⇌ H3O+(aq) + A²−(aq)     Ka2. The first dissociation constant is typically greater than the second (i.e., Ka1 > Ka2). For example, sulfuric acid (H2SO4) can donate one proton to form the bisulfate anion (HSO4−), for which Ka1 is very large; then it can donate a second proton to form the sulfate anion (SO4²−), for which Ka2 is of intermediate strength. The large Ka1 for the first dissociation makes sulfuric a strong acid. In a similar manner, the weak unstable carbonic acid can lose one proton to form the bicarbonate anion (HCO3−) and lose a second to form the carbonate anion (CO3²−). Both Ka values are small, but Ka1 > Ka2. A triprotic acid (H3A) can undergo one, two, or three dissociations and has three dissociation constants, where Ka1 > Ka2 > Ka3: H3A(aq) + H2O(l) ⇌ H3O+(aq) + H2A−(aq)     Ka1; H2A−(aq) + H2O(l) ⇌ H3O+(aq) + HA²−(aq)     Ka2; HA²−(aq) + H2O(l) ⇌ H3O+(aq) + A³−(aq)     Ka3. An inorganic example of a triprotic acid is orthophosphoric acid (H3PO4), usually just called phosphoric acid. All three protons can be successively lost to yield H2PO4−, then HPO4²−, and finally PO4³−, the orthophosphate ion, usually just called phosphate. Even though the positions of the three protons on the original phosphoric acid molecule are equivalent, the successive Ka values differ since it is energetically less favorable to lose a proton if the conjugate base is more negatively charged. An organic example of a triprotic acid is citric acid, which can successively lose three protons to finally form the citrate ion. Although the subsequent loss of each hydrogen ion is less favorable, all of the conjugate bases are present in solution. The fractional concentration, α (alpha), for each species can be calculated. For example, a generic diprotic acid will generate three species in solution: H2A, HA−, and A²−. The fractional concentrations can be calculated as below when given either the pH (which can be converted to the [H+]) or the concentrations of the acid with all its conjugate bases: α(H2A) = [H+]² / ([H+]² + [H+]Ka1 + Ka1Ka2), α(HA−) = [H+]Ka1 / ([H+]² + [H+]Ka1 + Ka1Ka2), and α(A²−) = Ka1Ka2 / ([H+]² + [H+]Ka1 + Ka1Ka2). A plot of these fractional concentrations against pH, for given Ka1 and Ka2, is known as a Bjerrum plot. 
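The following Python sketch evaluates these fractional concentrations for a hypothetical diprotic acid over a range of pH values, which is exactly the data one would plot in a Bjerrum plot; the Ka1 and Ka2 values are arbitrary illustrative choices, not constants for any particular acid.

# Fractional species distribution for a generic diprotic acid H2A.
# Ka1 and Ka2 below are arbitrary example values, not data for a real acid.
Ka1 = 1.0e-3
Ka2 = 1.0e-7

def fractions(pH):
    """Return (alpha_H2A, alpha_HA, alpha_A) at the given pH."""
    h = 10.0 ** (-pH)                    # [H+] computed from the pH
    denom = h * h + h * Ka1 + Ka1 * Ka2  # common denominator of all three fractions
    return (h * h / denom, h * Ka1 / denom, Ka1 * Ka2 / denom)

# Tabulate the Bjerrum-plot data from pH 0 to 14 in whole-pH steps.
for pH in range(0, 15):
    a_h2a, a_ha, a_a = fractions(pH)
    print(f"pH {pH:2d}: H2A {a_h2a:.3f}  HA- {a_ha:.3f}  A2- {a_a:.3f}")

At each pH the three fractions sum to 1, and the crossover points fall near pH = pKa1 and pH = pKa2, which is the feature a Bjerrum plot makes visible.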
A pattern is observed in the above equations and can be expanded to the general n-protic acid that has been deprotonated i times: αi = [H+]^(n−i) K0K1⋯Ki / (Σ over j from 0 to n of [H+]^(n−j) K0K1⋯Kj), where K0 = 1 and the other K-terms are the dissociation constants for the acid. Neutralization Neutralization is the reaction between an acid and a base, producing a salt and water; for example, hydrochloric acid and sodium hydroxide form sodium chloride and water: HCl(aq) + NaOH(aq) → H2O(l) + NaCl(aq) Neutralization is the basis of titration, where a pH indicator shows the equivalence point when the equivalent number of moles of a base have been added to an acid. It is often wrongly assumed that neutralization should result in a solution with pH 7.0, which is the case only when the acid and base have similar strengths. Neutralization with a base weaker than the acid results in a weakly acidic salt. An example is the weakly acidic ammonium chloride, which is produced from the strong acid hydrogen chloride and the weak base ammonia. Conversely, neutralizing a weak acid with a strong base gives a weakly basic salt (e.g., sodium fluoride from hydrogen fluoride and sodium hydroxide). Weak acid–weak base equilibrium In order for a protonated acid to lose a proton, the pH of the system must rise above the pKa of the acid. The decreased concentration of H+ in that basic solution shifts the equilibrium towards the conjugate base form (the deprotonated form of the acid). In lower-pH (more acidic) solutions, there is a high enough H+ concentration in the solution to cause the acid to remain in its protonated form. Solutions of weak acids and salts of their conjugate bases form buffer solutions. Titration To determine the concentration of an acid in an aqueous solution, an acid–base titration is commonly performed. A strong base solution with a known concentration, usually NaOH or KOH, is added to neutralize the acid solution, and the amount of base required is judged from the color change of the indicator. The titration curve of an acid titrated by a base has two axes, with the base volume on the x-axis and the solution's pH value on the y-axis. The pH of the solution always goes up as the base is added to the solution. Example: Diprotic acid For each diprotic acid titration curve, from left to right, there are two midpoints, two equivalence points, and two buffer regions. Equivalence points Due to the successive dissociation processes, there are two equivalence points in the titration curve of a diprotic acid. The first equivalence point occurs when all the hydrogen ions from the first ionization have been titrated. In other words, the amount of OH− added equals the original amount of H2A at the first equivalence point. The second equivalence point occurs when all hydrogen ions are titrated. Therefore, the amount of OH− added equals twice the amount of H2A at this time. For a weak diprotic acid titrated by a strong base, the second equivalence point must occur at pH above 7 due to the hydrolysis of the resulting salts in the solution. At either equivalence point, adding a drop of base will cause the steepest rise of the pH value in the system. Buffer regions and midpoints A titration curve for a diprotic acid contains two midpoints where pH = pKa. Since there are two different Ka values, the first midpoint occurs at pH = pKa1 and the second one occurs at pH = pKa2. Each segment of the curve which contains a midpoint at its center is called the buffer region. 
Because the buffer regions consist of the acid and its conjugate base, the solution can resist pH changes when base is added, until the next equivalence point is approached. Applications of acids Acids are ubiquitous in everyday life. There are numerous natural acid compounds with biological functions, as well as many synthetic acids that are used in a wide variety of ways. In industry Acids are fundamental reagents in a great many industrial processes. Sulfuric acid, a diprotic acid, is the most widely used acid in industry and is also the most-produced industrial chemical in the world. It is mainly used in producing fertilizers, detergents, batteries and dyes, as well as in processing many products, for example by removing impurities. In 2011, the annual world production of sulfuric acid was around 200 million tonnes. For example, phosphate minerals react with sulfuric acid to produce phosphoric acid for the production of phosphate fertilizers, and zinc is produced by dissolving zinc oxide in sulfuric acid, purifying the solution and electrowinning. In the chemical industry, acids react in neutralization reactions to produce salts. For example, nitric acid reacts with ammonia to produce ammonium nitrate, a fertilizer. Additionally, carboxylic acids can be esterified with alcohols to produce esters. Acids are often used to remove rust and other corrosion from metals in a process known as pickling. They may be used as an electrolyte in a wet cell battery, such as sulfuric acid in a car battery. In food Tartaric acid is an important component of some commonly used foods like unripened mangoes and tamarind. Natural fruits and vegetables also contain acids. Citric acid is present in oranges, lemons and other citrus fruits. Oxalic acid is present in tomatoes, spinach, and especially in carambola and rhubarb; rhubarb leaves and unripe carambolas are toxic because of high concentrations of oxalic acid. Ascorbic acid (Vitamin C) is an essential vitamin for the human body and is present in such foods as amla (Indian gooseberry), lemons, citrus fruits, and guava. Many acids can be found in various kinds of food as additives, as they alter taste and serve as preservatives. Phosphoric acid, for example, is a component of cola drinks. Acetic acid is used in day-to-day life as vinegar. Citric acid is used as a preservative in sauces and pickles. Carbonic acid is one of the most common acid additives that are widely added to soft drinks. During the manufacturing process, CO2 is usually pressurized to dissolve in these drinks to generate carbonic acid. Carbonic acid is very unstable and tends to decompose into water and CO2 at room temperature and pressure. Therefore, when bottles or cans of these kinds of soft drinks are opened, the soft drinks fizz and effervesce as CO2 bubbles come out. Certain acids are used as drugs. Acetylsalicylic acid (Aspirin) is used as a pain killer and for bringing down fevers. In human bodies Acids play important roles in the human body. The hydrochloric acid present in the stomach aids digestion by breaking down large and complex food molecules. Amino acids are required for the synthesis of proteins required for growth and repair of body tissues. Fatty acids are also required for growth and repair of body tissues. Nucleic acids are important for the manufacture of DNA and RNA and the transmission of traits to offspring through genes. Carbonic acid is important for maintenance of pH equilibrium in the body. 
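As a rough numerical sketch of how a buffer such as the bicarbonate system holds pH near a set point, the Python snippet below applies the Henderson–Hasselbalch relation, pH = pKa + log10([conjugate base]/[acid]); the effective pKa of about 6.1 for the carbonic acid/bicarbonate pair and the roughly 20:1 bicarbonate-to-carbonic-acid ratio are the values commonly quoted for blood plasma and are used here only as illustrative assumptions.

import math

# Henderson-Hasselbalch: pH = pKa + log10([conjugate base] / [acid]).
def buffer_pH(pKa, base_conc, acid_conc):
    return pKa + math.log10(base_conc / acid_conc)

# Commonly quoted textbook values for the bicarbonate buffer in blood plasma,
# treated here as illustrative assumptions: effective pKa about 6.1, with
# roughly 24 mmol/L bicarbonate against 1.2 mmol/L dissolved CO2/carbonic acid.
pKa_carbonic = 6.1
bicarbonate = 24.0   # mmol/L
carbonic = 1.2       # mmol/L

print(f"Estimated blood pH: {buffer_pH(pKa_carbonic, bicarbonate, carbonic):.2f}")
# Prints approximately 7.40, close to the normal physiological pH.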
Human bodies contain a variety of organic and inorganic compounds; among these, dicarboxylic acids play an essential role in many biological behaviors. Many of those acids are amino acids, which mainly serve as materials for the synthesis of proteins. Other weak acids serve as buffers with their conjugate bases to keep the body's pH from undergoing large scale changes which would be harmful to cells. The rest of the dicarboxylic acids also participate in the synthesis of various biologically important compounds in human bodies. Acid catalysis Acids are used as catalysts in industrial and organic chemistry; for example, sulfuric acid is used in very large quantities in the alkylation process to produce gasoline. Some acids, such as sulfuric, phosphoric, and hydrochloric acids, also effect dehydration and condensation reactions. In biochemistry, many enzymes employ acid catalysis. Biological occurrence Many biologically important molecules are acids. Nucleic acids, which contain acidic phosphate groups, include DNA and RNA. Nucleic acids contain the genetic code that determines many of an organism's characteristics, and is passed from parents to offspring. DNA contains the chemical blueprint for the synthesis of proteins which are made up of amino acid subunits. Cell membranes contain fatty acid esters such as phospholipids. An α-amino acid has a central carbon (the α or alpha carbon) which is covalently bonded to a carboxyl group (thus they are carboxylic acids), an amino group, a hydrogen atom and a variable group. The variable group, also called the R group or side chain, determines the identity and many of the properties of a specific amino acid. In glycine, the simplest amino acid, the R group is a hydrogen atom, but in all other amino acids it contains one or more carbon atoms bonded to hydrogens, and may contain other elements such as sulfur, oxygen or nitrogen. With the exception of glycine, naturally occurring amino acids are chiral and almost invariably occur in the L-configuration. Peptidoglycan, found in some bacterial cell walls, contains some D-amino acids. At physiological pH, typically around 7, free amino acids exist in a charged form, where the acidic carboxyl group (-COOH) loses a proton (-COO−) and the basic amine group (-NH2) gains a proton (-NH3+). The entire molecule has a net neutral charge and is a zwitterion, with the exception of amino acids with basic or acidic side chains. Aspartic acid, for example, possesses one protonated amine and two deprotonated carboxyl groups, for a net charge of −1 at physiological pH. Fatty acids and fatty acid derivatives are another group of carboxylic acids that play a significant role in biology. These contain long hydrocarbon chains and a carboxylic acid group on one end. The cell membrane of nearly all organisms is primarily made up of a phospholipid bilayer, a micelle of hydrophobic fatty acid esters with polar, hydrophilic phosphate "head" groups. Membranes contain additional components, some of which can participate in acid–base reactions. In humans and many other animals, hydrochloric acid is a part of the gastric acid secreted within the stomach to help hydrolyze proteins and polysaccharides, as well as converting the inactive pro-enzyme pepsinogen into the enzyme pepsin. Some organisms produce acids for defense; for example, ants produce formic acid. Acid–base equilibrium plays a critical role in regulating mammalian breathing. 
Oxygen gas (O2) drives cellular respiration, the process by which animals release the chemical potential energy stored in food, producing carbon dioxide (CO2) as a byproduct. Oxygen and carbon dioxide are exchanged in the lungs, and the body responds to changing energy demands by adjusting the rate of ventilation. For example, during periods of exertion the body rapidly breaks down stored carbohydrates and fat, releasing CO2 into the blood stream. In aqueous solutions such as blood CO2 exists in equilibrium with carbonic acid and bicarbonate ion. It is the decrease in pH that signals the brain to breathe faster and deeper, expelling the excess CO2 and resupplying the cells with O2. Cell membranes are generally impermeable to charged or large, polar molecules because of the lipophilic fatty acyl chains comprising their interior. Many biologically important molecules, including a number of pharmaceutical agents, are organic weak acids which can cross the membrane in their protonated, uncharged form but not in their charged form (i.e., as the conjugate base). For this reason the activity of many drugs can be enhanced or inhibited by the use of antacids or acidic foods. The charged form, however, is often more soluble in blood and cytosol, both aqueous environments. When the extracellular environment is more acidic than the neutral pH within the cell, certain acids will exist in their neutral form and will be membrane soluble, allowing them to cross the phospholipid bilayer. Acids that lose a proton at the intracellular pH will exist in their soluble, charged form and are thus able to diffuse through the cytosol to their target. Ibuprofen, aspirin and penicillin are examples of drugs that are weak acids. Common acids Mineral acids (inorganic acids) Hydrogen halides and their solutions: hydrofluoric acid (HF), hydrochloric acid (HCl), hydrobromic acid (HBr), hydroiodic acid (HI) Halogen oxoacids: hypochlorous acid (HClO), chlorous acid (HClO2), chloric acid (HClO3), perchloric acid (HClO4), and corresponding analogs for bromine and iodine Hypofluorous acid (HFO), the only known oxoacid for fluorine. Sulfuric acid (H2SO4) Fluorosulfuric acid (HSO3F) Nitric acid (HNO3) Phosphoric acid (H3PO4) Fluoroantimonic acid (HSbF6) Fluoroboric acid (HBF4) Hexafluorophosphoric acid (HPF6) Chromic acid (H2CrO4) Boric acid (H3BO3) Sulfonic acids A sulfonic acid has the general formula RS(=O)2–OH, where R is an organic radical. Methanesulfonic acid (or mesylic acid, CH3SO3H) Ethanesulfonic acid (or esylic acid, CH3CH2SO3H) Benzenesulfonic acid (or besylic acid, C6H5SO3H) p-Toluenesulfonic acid (or tosylic acid, CH3C6H4SO3H) Trifluoromethanesulfonic acid (or triflic acid, CF3SO3H) Polystyrene sulfonic acid (sulfonated polystyrene, [CH2CH(C6H4)SO3H]n) Carboxylic acids A carboxylic acid has the general formula R-C(O)OH, where R is an organic radical. The carboxyl group -C(O)OH contains a carbonyl group, C=O, and a hydroxyl group, O-H. Acetic acid (CH3COOH) Citric acid (C6H8O7) Formic acid (HCOOH) Gluconic acid HOCH2-(CHOH)4-COOH Lactic acid (CH3-CHOH-COOH) Oxalic acid (HOOC-COOH) Tartaric acid (HOOC-CHOH-CHOH-COOH) Halogenated carboxylic acids Halogenation at alpha position increases acid strength, so that the following acids are all stronger than acetic acid. Fluoroacetic acid Trifluoroacetic acid Chloroacetic acid Dichloroacetic acid Trichloroacetic acid Vinylogous carboxylic acids Normal carboxylic acids are the direct union of a carbonyl group and a hydroxyl group. 
In vinylogous carboxylic acids, a carbon-carbon double bond separates the carbonyl and hydroxyl groups. Ascorbic acid Nucleic acids Deoxyribonucleic acid (DNA) Ribonucleic acid (RNA)
Acid
The American National Standards Institute (ANSI ) is a private non-profit organization that oversees the development of voluntary consensus standards for products, services, processes, systems, and personnel in the United States. The organization also coordinates U.S. standards with international standards so that American products can be used worldwide. ANSI accredits standards that are developed by representatives of other standards organizations, government agencies, consumer groups, companies, and others. These standards ensure that the characteristics and performance of products are consistent, that people use the same definitions and terms, and that products are tested the same way. ANSI also accredits organizations that carry out product or personnel certification in accordance with requirements defined in international standards. The organization's headquarters are in Washington, D.C. ANSI's operations office is located in New York City. The ANSI annual operating budget is funded by the sale of publications, membership dues and fees, accreditation services, fee-based programs, and international standards programs. History ANSI was most likely originally formed in 1918, when five engineering societies and three government agencies founded the American Engineering Standards Committee (AESC). In 1928, the AESC became the American Standards Association (ASA). In 1966, the ASA was reorganized and became United States of America Standards Institute (USASI). The present name was adopted in 1969. Prior to 1918, these five founding engineering societies: American Institute of Electrical Engineers (AIEE, now IEEE) American Society of Mechanical Engineers (ASME) American Society of Civil Engineers (ASCE) American Institute of Mining Engineers (AIME, now American Institute of Mining, Metallurgical, and Petroleum Engineers) American Society for Testing and Materials (now ASTM International) had been members of the United Engineering Society (UES). At the behest of the AIEE, they invited the U.S. government Departments of War, Navy (combined in 1947 to become the Department of Defense or DOD) and Commerce to join in founding a national standards organization. According to Adam Stanton, the first permanent secretary and head of staff in 1919, AESC started as an ambitious program and little else. Staff for the first year consisted of one executive, Clifford B. LePage, who was on loan from a founding member, ASME. An annual budget of $7,500 was provided by the founding bodies. In 1931, the organization (renamed ASA in 1928) became affiliated with the U.S. National Committee of the International Electrotechnical Commission (IEC), which had been formed in 1904 to develop electrical and electronics standards. Members ANSI's members are government agencies, organizations, academic and international bodies, and individuals. In total, the Institute represents the interests of more than 270,000 companies and organizations and 30 million professionals worldwide. Process Although ANSI itself does not develop standards, the Institute oversees the development and use of standards by accrediting the procedures of standards developing organizations. ANSI accreditation signifies that the procedures used by standards developing organizations meet the institute's requirements for openness, balance, consensus, and due process. 
ANSI also designates specific standards as American National Standards, or ANS, when the Institute determines that the standards were developed in an environment that is equitable, accessible and responsive to the requirements of various stakeholders. Voluntary consensus standards quicken the market acceptance of products while making clear how to improve the safety of those products for the protection of consumers. There are approximately 9,500 American National Standards that carry the ANSI designation. The American National Standards process involves: consensus by a group that is open to representatives from all interested parties broad-based public review and comment on draft standards consideration of and response to comments incorporation of submitted changes that meet the same consensus requirements into a draft standard availability of an appeal by any participant alleging that these principles were not respected during the standards-development process. International activities In addition to facilitating the formation of standards in the United States, ANSI promotes the use of U.S. standards internationally, advocates U.S. policy and technical positions in international and regional standards organizations, and encourages the adoption of international standards as national standards where appropriate. The institute is the official U.S. representative to the two major international standards organizations, the International Organization for Standardization (ISO), as a founding member, and the International Electrotechnical Commission (IEC), via the U.S. National Committee (USNC). ANSI participates in almost the entire technical program of both the ISO and the IEC, and administers many key committees and subgroups. In many instances, U.S. standards are taken forward to ISO and IEC, through ANSI or the USNC, where they are adopted in whole or in part as international standards. Adoption of ISO and IEC standards as American standards increased from 0.2% in 1986 to 15.5% in May 2012. Standards panels The Institute administers nine standards panels: ANSI Homeland Defense and Security Standardization Collaborative (HDSSC) ANSI Nanotechnology Standards Panel (ANSI-NSP) ID Theft Prevention and ID Management Standards Panel (IDSP) ANSI Energy Efficiency Standardization Coordination Collaborative (EESCC) Nuclear Energy Standards Coordination Collaborative (NESCC) Electric Vehicles Standards Panel (EVSP) ANSI-NAM Network on Chemical Regulation ANSI Biofuels Standards Coordination Panel Healthcare Information Technology Standards Panel (HITSP) Each of the panels works to identify, coordinate, and harmonize voluntary standards relevant to these areas. In 2009, ANSI and the National Institute of Standards and Technology (NIST) formed the Nuclear Energy Standards Coordination Collaborative (NESCC). NESCC is a joint initiative to identify and respond to the current need for standards in the nuclear industry. American national standards The ASA (as for American Standards Association) photographic exposure system, originally defined in ASA Z38.2.1 (since 1943) and ASA PH2.5 (since 1954), together with the DIN system (DIN 4512 since 1934), became the basis for the ISO system (since 1974), currently used worldwide (ISO 6, ISO 2240, ISO 5800, ISO 12232). A standard for the set of values used to represent characters in digital computers. 
The ANSI code standard extended the previously created ASCII seven bit code standard (ASA X3.4-1963), with additional codes for European alphabets (see also Extended Binary Coded Decimal Interchange Code or EBCDIC). In Microsoft Windows, the phrase "ANSI" refers to the Windows ANSI code pages (even though they are not ANSI standards). Most of these are fixed width, though some characters for ideographic languages are variable width. Since these characters are based on a draft of the ISO-8859 series, some of Microsoft's symbols are visually very similar to the ISO symbols, leading many to falsely assume that they are identical. The first computer programming language standard was "American Standard Fortran" (informally known as "FORTRAN 66"), approved in March 1966 and published as ASA X3.9-1966. The programming language COBOL had ANSI standards in 1968, 1974, and 1985. The COBOL 2002 standard was issued by ISO. The original standard implementation of the C programming language was standardized as ANSI X3.159-1989, becoming the well-known ANSI C. The X3J13 committee was created in 1986 to formalize the ongoing consolidation of Common Lisp, culminating in 1994 with the publication of ANSI's first object-oriented programming standard. A popular Unified Thread Standard for nuts and bolts is ANSI/ASME B1.1 which was defined in 1935, 1949, 1989, and 2003. The ANSI-NSF International standards used for commercial kitchens, such as restaurants, cafeterias, delis, etc. The ANSI/APSP (Association of Pool & Spa Professionals) standards used for pools, spas, hot tubs, barriers, and suction entrapment avoidance. The ANSI/HI (Hydraulic Institute) standards used for pumps. The ANSI for eye protection is Z87.1, which gives a specific impact resistance rating to the eyewear. This standard is commonly used for shop glasses, shooting glasses, and many other examples of protective eyewear. The ANSI paper sizes (ANSI/ASME Y14.1). Other initiatives In 2008, ANSI, in partnership with Citation Technologies, created the first dynamic, online web library for ISO 14000 standards. On June 23, 2009, ANSI announced a product and services agreement with Citation Technologies to deliver all ISO Standards on a web-based platform. Through the ANSI-Citation partnership, 17,765 International Standards developed by more than 3,000 ISO technical bodies will be made available on the citation platform, arming subscribers with powerful search tools and collaboration, notification, and change-management functionality. ANSI, in partnership with Citation Technologies, AAMI, ASTM, and DIN, created a single, centralized database for medical device standards on September 9, 2009. In early 2009, ANSI launched a new Certificate Accreditation Program (ANSI-CAP) to provide neutral, third-party attestation that a given certificate program meets the American National Standard ASTM E2659-09. In 2009, ANSI began accepting applications for certification bodies seeking accreditation according to requirements defined under the Toy Safety Certification Program (TSCP) as the official third-party accreditor of TSCP's product certification bodies. In 2006, ANSI launched www.StandardsPortal.org, an online resource for facilitating more open and efficient trade between international markets in the areas of standards, conformity assessment, and technical regulations. The site currently features content for the United States, China, India, Korea, and Brazil, with additional countries and regions planned for future content. 
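As a small illustration of the difference between the 7-bit ASCII standard and the Windows "ANSI" code pages mentioned above, the Python snippet below encodes a string containing a non-ASCII character with both encodings; the sample string is arbitrary and the snippet is only a sketch of the behavior, not part of any ANSI specification.

text = "café"  # arbitrary sample string containing one non-ASCII character

# Windows code page 1252 (often loosely called "ANSI") encodes é as a single byte.
print(text.encode("cp1252"))   # b'caf\xe9'

# Strict 7-bit ASCII cannot represent é and raises an error instead.
try:
    text.encode("ascii")
except UnicodeEncodeError as err:
    print("ASCII cannot encode this string:", err)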
ANSI design standards have also been incorporated into building codes encompassing several specific building sub-sets, such as the ANSI/SPRI ES-1, which pertains to "Wind Design Standard for Edge Systems Used With Low Slope Roofing Systems", for example. See also Accredited Crane Operator Certification ANSI ASC X9 ANSI ASC X12 ANSI C Institute of Environmental Sciences and Technology (IEST) Institute of Nuclear Materials Management (INMM) ISO (to which ANSI is the official US representative) National Information Standards Organization (NISO) National Institute of Standards and Technology (NIST) Open standards
American National Standards Institute
The atomic number or proton number (symbol Z) of a chemical element is the number of protons found in the nucleus of every atom of that element. The atomic number uniquely identifies a chemical element. It is identical to the charge number of the nucleus. In an uncharged atom, the atomic number is also equal to the number of electrons. The sum of the atomic number Z and the number of neutrons N gives the mass number A of an atom. Since protons and neutrons have approximately the same mass (and the mass of the electrons is negligible for many purposes) and the mass defect of nucleon binding is always small compared to the nucleon mass, the atomic mass of any atom, when expressed in unified atomic mass units (making a quantity called the "relative isotopic mass"), is within 1% of the whole number A. Atoms with the same atomic number but different neutron numbers, and hence different mass numbers, are known as isotopes. A little more than three-quarters of naturally occurring elements exist as a mixture of isotopes (see monoisotopic elements), and the average isotopic mass of an isotopic mixture for an element (called the relative atomic mass) in a defined environment on Earth, determines the element's standard atomic weight. Historically, it was these atomic weights of elements (in comparison to hydrogen) that were the quantities measurable by chemists in the 19th century. The conventional symbol Z comes from the German word 'number', which, before the modern synthesis of ideas from chemistry and physics, merely denoted an element's numerical place in the periodic table, whose order was then approximately, but not completely, consistent with the order of the elements by atomic weights. Only after 1915, with the suggestion and evidence that this Z number was also the nuclear charge and a physical characteristic of atoms, did the word (and its English equivalent atomic number) come into common use in this context. History The periodic table and a natural number for each element Loosely speaking, the existence or construction of a periodic table of elements creates an ordering of the elements, and so they can be numbered in order. Dmitri Mendeleev claimed that he arranged his first periodic tables (first published on March 6, 1869) in order of atomic weight ("Atomgewicht"). However, in consideration of the elements' observed chemical properties, he changed the order slightly and placed tellurium (atomic weight 127.6) ahead of iodine (atomic weight 126.9). This placement is consistent with the modern practice of ordering the elements by proton number, Z, but that number was not known or suspected at the time. A simple numbering based on periodic table position was never entirely satisfactory, however. Besides the case of iodine and tellurium, later several other pairs of elements (such as argon and potassium, cobalt and nickel) were known to have nearly identical or reversed atomic weights, thus requiring their placement in the periodic table to be determined by their chemical properties. However the gradual identification of more and more chemically similar lanthanide elements, whose atomic number was not obvious, led to inconsistency and uncertainty in the periodic numbering of elements at least from lutetium (element 71) onward (hafnium was not known at this time). 
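As a minimal worked illustration of the relation A = Z + N described above, the short Python function below computes the mass number from the proton and neutron counts; the isotopes used are just familiar examples, not part of the source text.

def mass_number(protons, neutrons):
    """Mass number A is simply the sum of protons (Z) and neutrons (N)."""
    return protons + neutrons

# Familiar examples: carbon-12 (Z = 6, N = 6) and gold-197 (Z = 79, N = 118).
print(mass_number(6, 6))     # 12
print(mass_number(79, 118))  # 197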
The Rutherford-Bohr model and van den Broek In 1911, Ernest Rutherford gave a model of the atom in which a central nucleus held most of the atom's mass and a positive charge which, in units of the electron's charge, was to be approximately equal to half of the atom's atomic weight, expressed in numbers of hydrogen atoms. This central charge would thus be approximately half the atomic weight (though it was almost 25% different from the atomic number of gold (Z = 79, A = 197), the single element from which Rutherford made his guess). Nevertheless, in spite of Rutherford's estimation that gold had a central charge of about 100 (but was element Z = 79 on the periodic table), a month after Rutherford's paper appeared, Antonius van den Broek first formally suggested that the central charge and number of electrons in an atom was exactly equal to its place in the periodic table (also known as element number, atomic number, and symbolized Z). This proved eventually to be the case. Moseley's 1913 experiment The experimental position improved dramatically after research by Henry Moseley in 1913. Moseley, after discussions with Bohr who was at the same lab (and who had used Van den Broek's hypothesis in his Bohr model of the atom), decided to test Van den Broek's and Bohr's hypothesis directly, by seeing if spectral lines emitted from excited atoms fitted the Bohr theory's postulation that the frequency of the spectral lines be proportional to the square of Z. To do this, Moseley measured the wavelengths of the innermost photon transitions (K and L lines) produced by the elements from aluminum (Z = 13) to gold (Z = 79) used as a series of movable anodic targets inside an x-ray tube. The square root of the frequency of these photons increased from one target to the next in an arithmetic progression. This led to the conclusion (Moseley's law) that the atomic number does closely correspond (with an offset of one unit for K-lines, in Moseley's work) to the calculated electric charge of the nucleus, i.e. the element number Z. Among other things, Moseley demonstrated that the lanthanide series (from lanthanum to lutetium inclusive) must have 15 members—no fewer and no more—which was far from obvious from known chemistry at that time. Missing elements After Moseley's death in 1915, the atomic numbers of all known elements from hydrogen to uranium (Z = 92) were examined by his method. There were seven elements (with Z < 92) which were not found and therefore identified as still undiscovered, corresponding to atomic numbers 43, 61, 72, 75, 85, 87 and 91. From 1918 to 1947, all seven of these missing elements were discovered. By this time, the first four transuranium elements had also been discovered, so that the periodic table was complete with no gaps as far as curium (Z = 96). The proton and the idea of nuclear electrons In 1915, the reason for nuclear charge being quantized in units of Z, which were now recognized to be the same as the element number, was not understood. An old idea called Prout's hypothesis had postulated that the elements were all made of residues (or "protyles") of the lightest element hydrogen, which in the Bohr-Rutherford model had a single electron and a nuclear charge of one. However, as early as 1907, Rutherford and Thomas Royds had shown that alpha particles, which had a charge of +2, were the nuclei of helium atoms, which had a mass four times that of hydrogen, not two times. 
If Prout's hypothesis were true, something had to be neutralizing some of the charge of the hydrogen nuclei present in the nuclei of heavier atoms. In 1917, Rutherford succeeded in generating hydrogen nuclei from a nuclear reaction between alpha particles and nitrogen gas, and believed he had proven Prout's law. He called the new heavy nuclear particles protons in 1920 (alternate names being proutons and protyles). It had been immediately apparent from the work of Moseley that the nuclei of heavy atoms have more than twice as much mass as would be expected from their being made of hydrogen nuclei, and thus there was required a hypothesis for the neutralization of the extra protons presumed present in all heavy nuclei. A helium nucleus was presumed to be composed of four protons plus two "nuclear electrons" (electrons bound inside the nucleus) to cancel two of the charges. At the other end of the periodic table, a nucleus of gold with a mass 197 times that of hydrogen was thought to contain 118 nuclear electrons in the nucleus to give it a residual charge of +79, consistent with its atomic number. The discovery of the neutron makes Z the proton number All consideration of nuclear electrons ended with James Chadwick's discovery of the neutron in 1932. An atom of gold now was seen as containing 118 neutrons rather than 118 nuclear electrons, and its positive charge now was realized to come entirely from a content of 79 protons. After 1932, therefore, an element's atomic number Z was also realized to be identical to the proton number of its nuclei. Chemical properties Each element has a specific set of chemical properties as a consequence of the number of electrons present in the neutral atom, which is Z (the atomic number). The configuration of these electrons follows from the principles of quantum mechanics. The number of electrons in each element's electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. Hence, it is the atomic number alone that determines the chemical properties of an element; and it is for this reason that an element can be defined as consisting of any mixture of atoms with a given atomic number. New elements The quest for new elements is usually described using atomic numbers. As of , all elements with atomic numbers 1 to 118 have been observed. Synthesis of new elements is accomplished by bombarding target atoms of heavy elements with ions, such that the sum of the atomic numbers of the target and ion elements equals the atomic number of the element being created. In general, the half-life of a nuclide becomes shorter as atomic number increases, though undiscovered nuclides with certain "magic" numbers of protons and neutrons may have relatively longer half-lives and comprise an island of stability. A hypothetical element composed only of neutrons has also been proposed and would have atomic number 0. See also Effective atomic number Mass number Neutron number Atomic theory Chemical element History of the periodic table List of elements by atomic number Prout's hypothesis References Chemical properties Nuclear physics Atoms Dimensionless numbers of chemistry Numbers
Atomic number
Anatomy (Greek anatomē, 'dissection') is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science which deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine. The discipline of anatomy is divided into macroscopic and microscopic. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells. The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th century medical imaging techniques including X-ray, ultrasound, and magnetic resonance imaging. Definition Derived from the Greek anatomē "dissection" (from anatémnō "I cut up, cut open" from ἀνά aná "up", and τέμνω témnō "I cut"), anatomy is the scientific study of the structure of organisms including their systems, organs and tissues. It includes the appearance and position of the various parts, the materials from which they are composed, their locations and their relationships with other parts. Anatomy is quite distinct from physiology and biochemistry, which deal respectively with the functions of those parts and the chemical processes involved. For example, an anatomist is concerned with the shape, size, position, structure, blood supply and innervation of an organ such as the liver; while a physiologist is interested in the production of bile, the role of the liver in nutrition and the regulation of bodily functions. The discipline of anatomy can be subdivided into a number of branches including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition). Anatomy can be studied using both invasive and non-invasive methods with the goal of obtaining information about the structure and organization of organs and systems. Methods used include dissection, in which a body is opened and its organs studied, and endoscopy, in which a video camera-equipped instrument is inserted through a small incision in the body wall and used to explore the internal organs and other structures. Angiography using X-rays or magnetic resonance angiography are methods to visualize blood vessels. The term "anatomy" is commonly taken to refer to human anatomy. 
However, substantially the same structures and tissues are found throughout the rest of the animal kingdom and the term also includes the anatomy of other animals. The term zootomy is also sometimes used to specifically refer to non-human animals. The structure and tissues of plants are of a dissimilar nature and they are studied in plant anatomy. Animal tissues The kingdom Animalia contains multicellular organisms that are heterotrophic and motile (although some have secondarily adopted a sessile lifestyle). Most animals have bodies differentiated into separate tissues and these animals are also known as eumetazoans. They have an internal digestive chamber, with one or two openings; the gametes are produced in multicellular sex organs, and the zygotes include a blastula stage in their embryonic development. Metazoans do not include the sponges, which have undifferentiated cells. Unlike plant cells, animal cells have neither a cell wall nor chloroplasts. Vacuoles, when present, are more in number and much smaller than those in the plant cell. The body tissues are composed of numerous types of cell, including those found in muscles, nerves and skin. Each typically has a cell membrane formed of phospholipids, cytoplasm and a nucleus. All of the different cells of an animal are derived from the embryonic germ layers. Those simpler invertebrates which are formed from two germ layers of ectoderm and endoderm are called diploblastic and the more developed animals whose structures and organs are formed from three germ layers are called triploblastic. All of a triploblastic animal's tissues and organs are derived from the three germ layers of the embryo, the ectoderm, mesoderm and endoderm. Animal tissues can be grouped into four basic types: connective, epithelial, muscle and nervous tissue. Connective tissue Connective tissues are fibrous and made up of cells scattered among inorganic material called the extracellular matrix. Connective tissue gives shape to organs and holds them in place. The main types are loose connective tissue, adipose tissue, fibrous connective tissue, cartilage and bone. The extracellular matrix contains proteins, the chief and most abundant of which is collagen. Collagen plays a major part in organizing and maintaining tissues. The matrix can be modified to form a skeleton to support or protect the body. An exoskeleton is a thickened, rigid cuticle which is stiffened by mineralization, as in crustaceans or by the cross-linking of its proteins as in insects. An endoskeleton is internal and present in all developed animals, as well as in many of those less developed. Epithelium Epithelial tissue is composed of closely packed cells, bound to each other by cell adhesion molecules, with little intercellular space. Epithelial cells can be squamous (flat), cuboidal or columnar and rest on a basal lamina, the upper layer of the basement membrane, the lower layer is the reticular lamina lying next to the connective tissue in the extracellular matrix secreted by the epithelial cells. There are many different types of epithelium, modified to suit a particular function. In the respiratory tract there is a type of ciliated epithelial lining; in the small intestine there are microvilli on the epithelial lining and in the large intestine there are intestinal villi. Skin consists of an outer layer of keratinized stratified squamous epithelium that covers the exterior of the vertebrate body. Keratinocytes make up to 95% of the cells in the skin. 
The epithelial cells on the external surface of the body typically secrete an extracellular matrix in the form of a cuticle. In simple animals this may just be a coat of glycoproteins. In more advanced animals, many glands are formed of epithelial cells. Muscle tissue Muscle cells (myocytes) form the active contractile tissue of the body. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs. Muscle is formed of contractile filaments and is separated into three main types; smooth muscle, skeletal muscle and cardiac muscle. Smooth muscle has no striations when examined microscopically. It contracts slowly but maintains contractibility over a wide range of stretch lengths. It is found in such organs as sea anemone tentacles and the body wall of sea cucumbers. Skeletal muscle contracts rapidly but has a limited range of extension. It is found in the movement of appendages and jaws. Obliquely striated muscle is intermediate between the other two. The filaments are staggered and this is the type of muscle found in earthworms that can extend slowly or make rapid contractions. In higher animals striated muscles occur in bundles attached to bone to provide movement and are often arranged in antagonistic sets. Smooth muscle is found in the walls of the uterus, bladder, intestines, stomach, oesophagus, respiratory airways, and blood vessels. Cardiac muscle is found only in the heart, allowing it to contract and pump blood round the body. Nervous tissue Nervous tissue is composed of many nerve cells known as neurons which transmit information. In some slow-moving radially symmetrical marine animals such as ctenophores and cnidarians (including sea anemones and jellyfish), the nerves form a nerve net, but in most animals they are organized longitudinally into bundles. In simple animals, receptor neurons in the body wall cause a local reaction to a stimulus. In more complex animals, specialized receptor cells such as chemoreceptors and photoreceptors are found in groups and send messages along neural networks to other parts of the organism. Neurons can be connected together in ganglia. In higher animals, specialized receptors are the basis of sense organs and there is a central nervous system (brain and spinal cord) and a peripheral nervous system. The latter consists of sensory nerves that transmit information from sense organs and motor nerves that influence target organs. The peripheral nervous system is divided into the somatic nervous system which conveys sensation and controls voluntary muscle, and the autonomic nervous system which involuntarily controls smooth muscle, certain glands and internal organs, including the stomach. Vertebrate anatomy All vertebrates have a similar basic body plan and at some point in their lives, mostly in the embryonic stage, share the major chordate characteristics; a stiffening rod, the notochord; a dorsal hollow tube of nervous material, the neural tube; pharyngeal arches; and a tail posterior to the anus. The spinal cord is protected by the vertebral column and is above the notochord and the gastrointestinal tract is below it. Nervous tissue is derived from the ectoderm, connective tissues are derived from mesoderm, and gut is derived from the endoderm. At the posterior end is a tail which continues the spinal cord and vertebrae but not the gut. The mouth is found at the anterior end of the animal, and the anus at the base of the tail. 
The defining characteristic of a vertebrate is the vertebral column, formed in the development of the segmented series of vertebrae. In most vertebrates the notochord becomes the nucleus pulposus of the intervertebral discs. However, a few vertebrates, such as the sturgeon and the coelacanth retain the notochord into adulthood. Jawed vertebrates are typified by paired appendages, fins or legs, which may be secondarily lost. The limbs of vertebrates are considered to be homologous because the same underlying skeletal structure was inherited from their last common ancestor. This is one of the arguments put forward by Charles Darwin to support his theory of evolution. Fish anatomy The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage, in cartilaginous fish, or bone in bony fish. The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays, which with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk. The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and on round the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, and these respond to nearby movements and to changes in water pressure. Sharks and rays are basal fish with numerous primitive anatomical features similar to those of ancient fish, including skeletons composed of cartilage. Their bodies tend to be dorso-ventrally flattened, they usually have five pairs of gill slits and a large mouth set on the underside of the head. The dermis is covered with separate dermal placoid scales. They have a cloaca into which the urinary and genital passages open, but not a swim bladder. Cartilaginous fish produce a small number of large, yolky eggs. Some species are ovoviviparous and the young develop internally but others are oviparous and the larvae develop externally in egg cases. The bony fish lineage shows more derived anatomical traits, often with major evolutionary changes from the features of ancient fish. They have a bony skeleton, are generally laterally flattened, have five pairs of gills protected by an operculum, and a mouth at or near the tip of the snout. The dermis is covered with overlapping scales. Bony fish have a swim bladder which helps them maintain a constant depth in the water column, but not a cloaca. They mostly spawn a large number of small eggs with little yolk which they broadcast into the water column. Amphibian anatomy Amphibians are a class of animals comprising frogs, salamanders and caecilians. They are tetrapods, but the caecilians and a few species of salamander have either no limbs or their limbs are much reduced in size. Their main bones are hollow and lightweight and are fully ossified and the vertebrae interlock with each other and have articular processes. Their ribs are usually short and may be fused to the vertebrae. 
Their skulls are mostly broad and short, and are often incompletely ossified. Their skin contains little keratin and lacks scales, but contains many mucous glands and in some species, poison glands. The hearts of amphibians have three chambers, two atria and one ventricle. They have a urinary bladder and nitrogenous waste products are excreted primarily as urea. Amphibians breathe by means of buccal pumping, a pump action in which air is first drawn into the buccopharyngeal region through the nostrils. These are then closed and the air is forced into the lungs by contraction of the throat. They supplement this with gas exchange through the skin which needs to be kept moist. In frogs the pelvic girdle is robust and the hind legs are much longer and stronger than the forelimbs. The feet have four or five digits and the toes are often webbed for swimming or have suction pads for climbing. Frogs have large eyes and no tail. Salamanders resemble lizards in appearance; their short legs project sideways, the belly is close to or in contact with the ground and they have a long tail. Caecilians superficially resemble earthworms and are limbless. They burrow by means of zones of muscle contractions which move along the body and they swim by undulating their body from side to side. Reptile anatomy Reptiles are a class of animals comprising turtles, tuataras, lizards, snakes and crocodiles. They are tetrapods, but the snakes and a few species of lizard either have no limbs or their limbs are much reduced in size. Their bones are better ossified and their skeletons stronger than those of amphibians. The teeth are conical and mostly uniform in size. The surface cells of the epidermis are modified into horny scales which create a waterproof layer. Reptiles are unable to use their skin for respiration as do amphibians and have a more efficient respiratory system drawing air into their lungs by expanding their chest walls. The heart resembles that of the amphibian but there is a septum which more completely separates the oxygenated and deoxygenated bloodstreams. The reproductive system has evolved for internal fertilization, with a copulatory organ present in most species. The eggs are surrounded by amniotic membranes which prevents them from drying out and are laid on land, or develop internally in some species. The bladder is small as nitrogenous waste is excreted as uric acid. Turtles are notable for their protective shells. They have an inflexible trunk encased in a horny carapace above and a plastron below. These are formed from bony plates embedded in the dermis which are overlain by horny ones and are partially fused with the ribs and spine. The neck is long and flexible and the head and the legs can be drawn back inside the shell. Turtles are vegetarians and the typical reptile teeth have been replaced by sharp, horny plates. In aquatic species, the front legs are modified into flippers. Tuataras superficially resemble lizards but the lineages diverged in the Triassic period. There is one living species, Sphenodon punctatus. The skull has two openings (fenestrae) on either side and the jaw is rigidly attached to the skull. There is one row of teeth in the lower jaw and this fits between the two rows in the upper jaw when the animal chews. The teeth are merely projections of bony material from the jaw and eventually wear down. The brain and heart are more primitive than those of other reptiles, and the lungs have a single chamber and lack bronchi. 
The tuatara has a well-developed parietal eye on its forehead. Lizards have skulls with only one fenestra on each side, the lower bar of bone below the second fenestra having been lost. This results in the jaws being less rigidly attached which allows the mouth to open wider. Lizards are mostly quadrupeds, with the trunk held off the ground by short, sideways-facing legs, but a few species have no limbs and resemble snakes. Lizards have moveable eyelids, eardrums are present and some species have a central parietal eye. Snakes are closely related to lizards, having branched off from a common ancestral lineage during the Cretaceous period, and they share many of the same features. The skeleton consists of a skull, a hyoid bone, spine and ribs though a few species retain a vestige of the pelvis and rear limbs in the form of pelvic spurs. The bar under the second fenestra has also been lost and the jaws have extreme flexibility allowing the snake to swallow its prey whole. Snakes lack moveable eyelids, the eyes being covered by transparent "spectacle" scales. They do not have eardrums but can detect ground vibrations through the bones of their skull. Their forked tongues are used as organs of taste and smell and some species have sensory pits on their heads enabling them to locate warm-blooded prey. Crocodilians are large, low-slung aquatic reptiles with long snouts and large numbers of teeth. The head and trunk are dorso-ventrally flattened and the tail is laterally compressed. It undulates from side to side to force the animal through the water when swimming. The tough keratinized scales provide body armour and some are fused to the skull. The nostrils, eyes and ears are elevated above the top of the flat head enabling them to remain above the surface of the water when the animal is floating. Valves seal the nostrils and ears when it is submerged. Unlike other reptiles, crocodilians have hearts with four chambers allowing complete separation of oxygenated and deoxygenated blood. Bird anatomy Birds are tetrapods but though their hind limbs are used for walking or hopping, their front limbs are wings covered with feathers and adapted for flight. Birds are endothermic, have a high metabolic rate, a light skeletal system and powerful muscles. The long bones are thin, hollow and very light. Air sac extensions from the lungs occupy the centre of some bones. The sternum is wide and usually has a keel and the caudal vertebrae are fused. There are no teeth and the narrow jaws are adapted into a horn-covered beak. The eyes are relatively large, particularly in nocturnal species such as owls. They face forwards in predators and sideways in ducks. The feathers are outgrowths of the epidermis and are found in localized bands from where they fan out over the skin. Large flight feathers are found on the wings and tail, contour feathers cover the bird's surface and fine down occurs on young birds and under the contour feathers of water birds. The only cutaneous gland is the single uropygial gland near the base of the tail. This produces an oily secretion that waterproofs the feathers when the bird preens. There are scales on the legs, feet and claws on the tips of the toes. Mammal anatomy Mammals are a diverse class of animals, mostly terrestrial but some are aquatic and others have evolved flapping or gliding flight. They mostly have four limbs but some aquatic mammals have no limbs or limbs modified into fins and the forelimbs of bats are modified into wings. 
The legs of most mammals are situated below the trunk, which is held well clear of the ground. The bones of mammals are well ossified and their teeth, which are usually differentiated, are coated in a layer of prismatic enamel. The teeth are shed once (milk teeth) during the animal's lifetime or not at all, as is the case in cetaceans. Mammals have three bones in the middle ear and a cochlea in the inner ear. They are clothed in hair and their skin contains glands which secrete sweat. Some of these glands are specialized as mammary glands, producing milk to feed the young. Mammals breathe with lungs and have a muscular diaphragm separating the thorax from the abdomen which helps them draw air into the lungs. The mammalian heart has four chambers and oxygenated and deoxygenated blood are kept entirely separate. Nitrogenous waste is excreted primarily as urea. Mammals are amniotes, and most are viviparous, giving birth to live young. The exception to this are the egg-laying monotremes, the platypus and the echidnas of Australia. Most other mammals have a placenta through which the developing foetus obtains nourishment, but in marsupials, the foetal stage is very short and the immature young is born and finds its way to its mother's pouch where it latches on to a nipple and completes its development. Human anatomy Humans have the overall body plan of a mammal. Humans have a head, neck, trunk (which includes the thorax and abdomen), two arms and hands, and two legs and feet. Generally, students of certain biological sciences, paramedics, prosthetists and orthotists, physiotherapists, occupational therapists, nurses, podiatrists, and medical students learn gross anatomy and microscopic anatomy from anatomical models, skeletons, textbooks, diagrams, photographs, lectures and tutorials and in addition, medical students generally also learn gross anatomy through practical experience of dissection and inspection of cadavers. The study of microscopic anatomy (or histology) can be aided by practical experience examining histological preparations (or slides) under a microscope. Human anatomy, physiology and biochemistry are complementary basic medical sciences, which are generally taught to medical students in their first year at medical school. Human anatomy can be taught regionally or systemically; that is, respectively, studying anatomy by bodily regions such as the head and chest, or studying by specific systems, such as the nervous or respiratory systems. The major anatomy textbook, Gray's Anatomy, has been reorganized from a systems format to a regional format, in line with modern teaching methods. A thorough working knowledge of anatomy is required by physicians, especially surgeons and doctors working in some diagnostic specialties, such as histopathology and radiology. Academic anatomists are usually employed by universities, medical schools or teaching hospitals. They are often involved in teaching anatomy, and research into certain systems, organs, tissues or cells. Invertebrate anatomy Invertebrates constitute a vast array of living organisms ranging from the simplest unicellular eukaryotes such as Paramecium to such complex multicellular animals as the octopus, lobster and dragonfly. They constitute about 95% of the animal species. By definition, none of these creatures has a backbone. The cells of single-cell protozoans have the same basic structure as those of multicellular animals but some parts are specialized into the equivalent of tissues and organs. 
Locomotion is often provided by cilia or flagella or may proceed via the advance of pseudopodia; food may be gathered by phagocytosis; energy needs may be supplied by photosynthesis; and the cell may be supported by an endoskeleton or an exoskeleton. Some protozoans can form multicellular colonies. Metazoans are multicellular organisms, with different groups of cells serving different functions. The most basic types of metazoan tissues are epithelium and connective tissue, both of which are present in nearly all invertebrates. The outer surface of the epidermis is normally formed of epithelial cells and secretes an extracellular matrix which provides support to the organism. An endoskeleton derived from the mesoderm is present in echinoderms, sponges and some cephalopods. Exoskeletons are derived from the epidermis and are composed of chitin in arthropods (insects, spiders, ticks, shrimps, crabs, lobsters). Calcium carbonate constitutes the shells of molluscs, brachiopods and some tube-building polychaete worms, and silica forms the exoskeleton of the microscopic diatoms and radiolaria. Other invertebrates may have no rigid structures but the epidermis may secrete a variety of surface coatings such as the pinacoderm of sponges, the gelatinous cuticle of cnidarians (polyps, sea anemones, jellyfish) and the collagenous cuticle of annelids. The outer epithelial layer may include cells of several types including sensory cells, gland cells and stinging cells. There may also be protrusions such as microvilli, cilia, bristles, spines and tubercles. Marcello Malpighi, the father of microscopical anatomy, discovered that plants had tubules similar to those he saw in insects like the silk worm. He observed that when a ring-like portion of bark was removed from a trunk, a swelling occurred in the tissues above the ring, and he correctly interpreted this as growth stimulated by food coming down from the leaves and being captured above the ring. Arthropod anatomy Arthropods comprise the largest phylum in the animal kingdom, with over a million known invertebrate species. Insects possess segmented bodies supported by a hard, jointed outer covering, the exoskeleton, made mostly of chitin. The segments of the body are organized into three distinct parts, a head, a thorax and an abdomen. The head typically bears a pair of sensory antennae, a pair of compound eyes, one to three simple eyes (ocelli) and three sets of modified appendages that form the mouthparts. The thorax has three pairs of segmented legs, one pair for each of the three segments that compose the thorax, and one or two pairs of wings. The abdomen is composed of eleven segments, some of which may be fused, and houses the digestive, respiratory, excretory and reproductive systems. There is considerable variation between species and many adaptations to the body parts, especially wings, legs, antennae and mouthparts. Spiders, an order of arachnids, have four pairs of legs and a body of two segments—a cephalothorax and an abdomen. Spiders have no wings and no antennae. They have mouthparts called chelicerae which are often connected to venom glands, as most spiders are venomous. They have a second pair of appendages called pedipalps attached to the cephalothorax. These have similar segmentation to the legs and function as taste and smell organs. At the end of each male pedipalp is a spoon-shaped cymbium that acts to support the copulatory organ. 
Other branches of anatomy Superficial or surface anatomy is important as the study of anatomical landmarks that can be readily seen from the exterior contours of the body. It enables physicians or veterinary surgeons to gauge the position and anatomy of the associated deeper structures. Superficial is a directional term that indicates that structures are located relatively close to the surface of the body. Comparative anatomy relates to the comparison of anatomical structures (both gross and microscopic) in different animals. Artistic anatomy relates to anatomic studies for artistic reasons. History Ancient In about 1600 BCE, the Edwin Smith Papyrus, an Ancient Egyptian medical text, described the heart, its vessels, liver, spleen, kidneys, hypothalamus, uterus and bladder, and showed the blood vessels diverging from the heart. The Ebers Papyrus (c. 1550 BCE) features a "treatise on the heart", with vessels carrying all the body's fluids to or from every member of the body. Ancient Greek anatomy and physiology underwent great changes and advances throughout the early medieval world. Over time, this medical practice expanded by a continually developing understanding of the functions of organs and structures in the body. Remarkable anatomical observations of the human body were made, which have contributed towards the understanding of the brain, eye, liver, reproductive organs and the nervous system. The Hellenistic Egyptian city of Alexandria was the stepping-stone for Greek anatomy and physiology. Alexandria not only housed the biggest library for medical records and books of the liberal arts in the world during the time of the Greeks, but was also home to many medical practitioners and philosophers. Generous patronage of the arts and sciences by the Ptolemaic rulers helped Alexandria rival the cultural and scientific achievements of other Greek states. Some of the most striking advances in early anatomy and physiology took place in Hellenistic Alexandria. Two of the most famous anatomists and physiologists of the third century BCE were Herophilus and Erasistratus. These two physicians helped pioneer human dissection for medical research. They also conducted vivisections on condemned criminals, work that was considered taboo until the Renaissance—Herophilus was recognized as the first person to perform systematic dissections. Herophilus became known for his anatomical works, making impressive contributions to many branches of anatomy and to many other aspects of medicine. His works included a classification of the pulse, the discovery that human arteries have thicker walls than veins, and the recognition that the atria are parts of the heart. Herophilus's knowledge of the human body has provided vital input towards understanding the brain, eye, liver, reproductive organs and nervous system, and characterizing the course of disease. Erasistratus accurately described the structure of the brain, including the cavities and membranes, and made a distinction between its cerebrum and cerebellum. During his study in Alexandria, Erasistratus was particularly concerned with studies of the circulatory and nervous systems. He was able to distinguish the sensory and the motor nerves in the human body and believed that air entered the lungs and heart, which was then carried throughout the body. His distinction between the arteries and veins—the arteries carrying the air through the body, while the veins carried the blood from the heart—was a great anatomical discovery. 
Erasistratus was also responsible for naming and describing the function of the epiglottis and the valves of the heart, including the tricuspid. During the third century BCE, Greek physicians were able to differentiate nerves from blood vessels and tendons and to realize that the nerves convey neural impulses. It was Herophilus who made the point that damage to motor nerves induced paralysis. Herophilus named the meninges and ventricles in the brain, appreciated the division between cerebellum and cerebrum and recognized that the brain was the "seat of intellect" and not a "cooling chamber" as propounded by Aristotle. Herophilus is also credited with describing the optic, oculomotor, motor division of the trigeminal, facial, vestibulocochlear and hypoglossal nerves. Great advances were also made during the third century BCE in understanding the digestive and reproductive systems. Herophilus was able to discover and describe not only the salivary glands but also the small intestine and liver. He showed that the uterus is a hollow organ and described the ovaries and uterine tubes. He recognized that spermatozoa were produced by the testes and was the first to identify the prostate gland. The anatomy of the muscles and skeleton is described in the Hippocratic Corpus, an Ancient Greek medical work written by unknown authors. Aristotle described vertebrate anatomy based on animal dissection. Praxagoras identified the difference between arteries and veins. In the 3rd century BCE, Herophilus and Erasistratus produced more accurate anatomical descriptions based on vivisection of criminals in Alexandria during the Ptolemaic dynasty. In the 2nd century, Galen of Pergamum, an anatomist, clinician, writer and philosopher, wrote the final and highly influential anatomy treatise of ancient times. He compiled existing knowledge and studied anatomy through dissection of animals. He was one of the first experimental physiologists through his vivisection experiments on animals. Galen's account, based mostly on dog anatomy, effectively served as the only anatomical textbook for the next thousand years. His work was known to Renaissance doctors only through Islamic Golden Age medicine until it was translated from the Greek some time in the 15th century. Medieval to early modern Anatomy developed little from classical times until the sixteenth century; as the historian Marie Boas writes, "Progress in anatomy before the sixteenth century is as mysteriously slow as its development after 1500 is startlingly rapid". Between 1275 and 1326, the anatomists Mondino de Luzzi, Alessandro Achillini and Antonio Benivieni at Bologna carried out the first systematic human dissections since ancient times. Mondino's Anatomy of 1316 was the first textbook in the medieval rediscovery of human anatomy. It describes the body in the order followed in Mondino's dissections, starting with the abdomen, then the thorax, then the head and limbs. It was the standard anatomy textbook for the next century. Leonardo da Vinci (1452–1519) was trained in anatomy by Andrea del Verrocchio. He made use of his anatomical knowledge in his artwork, making many sketches of skeletal structures, muscles and organs of humans and other vertebrates that he dissected. Andreas Vesalius (1514–1564), professor of anatomy at the University of Padua, is considered the founder of modern human anatomy. Originally from Brabant, Vesalius published the influential book De humani corporis fabrica ("the structure of the human body"), a large format book in seven volumes, in 1543. 
The accurate and intricately detailed illustrations, often in allegorical poses against Italianate landscapes, are thought to have been made by the artist Jan van Calcar, a pupil of Titian. In England, anatomy was the subject of the first public lectures given in any science; these were given by the Company of Barbers and Surgeons in the 16th century, joined in 1583 by the Lumleian lectures in surgery at the Royal College of Physicians. Late modern In the United States, medical schools began to be set up towards the end of the 18th century. Classes in anatomy needed a continual stream of cadavers for dissection and these were difficult to obtain. Philadelphia, Baltimore and New York were all renowned for body snatching activity as criminals raided graveyards at night, removing newly buried corpses from their coffins. A similar problem existed in Britain where demand for bodies became so great that grave-raiding and even anatomy murder were practised to obtain cadavers. Some graveyards were in consequence protected with watchtowers. The practice was halted in Britain by the Anatomy Act of 1832, while in the United States, similar legislation was enacted after the physician William S. Forbes of Jefferson Medical College was found guilty in 1882 of "complicity with resurrectionists in the despoliation of graves in Lebanon Cemetery". The teaching of anatomy in Britain was transformed by Sir John Struthers, Regius Professor of Anatomy at the University of Aberdeen from 1863 to 1889. He was responsible for setting up the system of three years of "pre-clinical" academic teaching in the sciences underlying medicine, including especially anatomy. This system lasted until the reform of medical training in 1993 and 2003. As well as teaching, he collected many vertebrate skeletons for his museum of comparative anatomy, published over 70 research papers, and became famous for his public dissection of the Tay Whale. From 1822 the Royal College of Surgeons regulated the teaching of anatomy in medical schools. Medical museums provided examples in comparative anatomy, and were often used in teaching. Ignaz Semmelweis investigated puerperal fever and he discovered how it was caused. He noticed that the frequently fatal fever occurred more often in mothers examined by medical students than by midwives. The students went from the dissecting room to the hospital ward and examined women in childbirth. Semmelweis showed that when the trainees washed their hands in chlorinated lime before each clinical examination, the incidence of puerperal fever among the mothers could be reduced dramatically. Before the modern medical era, the main means for studying the internal structures of the body were dissection of the dead and inspection, palpation and auscultation of the living. It was the advent of microscopy that opened up an understanding of the building blocks that constituted living tissues. Technical advances in the development of achromatic lenses increased the resolving power of the microscope and around 1839, Matthias Jakob Schleiden and Theodor Schwann identified that cells were the fundamental unit of organization of all living things. Study of small structures involved passing light through them and the microtome was invented to provide sufficiently thin slices of tissue to examine. Staining techniques using artificial dyes were established to help distinguish between different types of tissue. 
Advances in the fields of histology and cytology began in the late 19th century along with advances in surgical techniques allowing for the painless and safe removal of biopsy specimens. The invention of the electron microscope brought a great advance in resolution power and allowed research into the ultrastructure of cells and the organelles and other structures within them. About the same time, in the 1950s, the use of X-ray diffraction for studying the crystal structures of proteins, nucleic acids and other biological molecules gave rise to a new field of molecular anatomy. Equally important advances have occurred in non-invasive techniques for examining the interior structures of the body. X-rays can be passed through the body and used in medical radiography and fluoroscopy to differentiate interior structures that have varying degrees of opaqueness. Magnetic resonance imaging, computed tomography, and ultrasound imaging have all enabled examination of internal structures in unprecedented detail to a degree far beyond the imagination of earlier generations.
Anatomy
Ambiguity is a type of meaning in which a phrase, statement or resolution is not explicitly defined, making several interpretations plausible. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved according to a rule or process with a finite number of steps. (The ambi- part of the term reflects an idea of "two", as in "two meanings".) The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with information that is vague, it is difficult to form any interpretation at the desired level of specificity. Linguistic forms Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness. Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance. Lexical ambiguity The lexical ambiguity of a word or phrase pertains to its having more than one meaning in the language to which the word belongs. "Meaning" here refers to whatever should be captured by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary". One could say "I bought herbs from the apothecary". This could mean one actually spoke to the apothecary (pharmacist) or went to the apothecary (pharmacy). The context in which an ambiguous word is used often makes it evident which of the meanings is intended. If, for instance, someone says "I buried $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to disambiguate a used word. Lexical ambiguity can be addressed by algorithmic methods that automatically associate the appropriate meaning with a word in context, a task referred to as word sense disambiguation. The use of multi-defined words requires the author or speaker to clarify their context, and sometimes elaborate on their specific intended meaning (in which case, a less ambiguous term should have been used). The goal of clear concise communication is that the receiver(s) have no misunderstanding about what was meant to be conveyed. An exception to this could include a politician whose "weasel words" and obfuscation are necessary to gain support from multiple constituents with mutually exclusive conflicting desires from their candidate of choice. Ambiguity is a powerful tool of political science. More problematic are words whose senses express closely related concepts. "Good", for example, can mean "useful" or "functional" (That's a good hammer), "exemplary" (She's a good student), "pleasing" (This is good soup), "moral" (a good person versus the lesson to be learned from a story), "righteous", etc. "I have a good daughter" is not clear about which sense is intended. The various ways to apply prefixes and suffixes can also create ambiguity ("unlockable" can mean "capable of being unlocked" or "impossible to lock"). 
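The word sense disambiguation task mentioned above can be illustrated with a minimal Python sketch using NLTK's implementation of the classic Lesk heuristic, which simply picks the WordNet sense whose dictionary gloss shares the most words with the surrounding context. This is only a sketch under stated assumptions: it assumes NLTK and its WordNet and tokenizer data are installed, and the example sentence and variable names are purely illustrative.

    # Minimal word sense disambiguation with the Lesk algorithm.
    # Assumes: pip install nltk, plus nltk.download('punkt') and nltk.download('wordnet').
    from nltk import word_tokenize
    from nltk.wsd import lesk

    sentence = "I moored the boat on the muddy bank of the river"
    context = word_tokenize(sentence)

    # lesk() compares the context words with the gloss of each WordNet sense of "bank"
    # and returns the Synset with the largest overlap (or None if nothing matches).
    sense = lesk(context, "bank", pos="n")
    if sense is not None:
        print(sense.name(), "-", sense.definition())

Because the heuristic is purely overlap-based, its choice can be unintuitive on short sentences, but it captures the idea of automatically associating a sense with a word in context; practical systems apply the same idea with much richer context models. 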
Semantic and syntactic ambiguity Semantic ambiguity occurs when a word, phrase or sentence, taken out of context, has more than one interpretation. In "We saw her duck" (example due to Richard Nordquist), the words "her duck" can refer either to the person's bird (the noun "duck", modified by the possessive pronoun "her"), or to a motion she made (the verb "duck", the subject of which is the objective pronoun "her", object of the verb "saw"). Syntactic ambiguity arises when a sentence can have two (or more) different meanings because of the structure of the sentence—its syntax. This is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch (as opposed to those that were on the table), or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your drivers' license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license. Or it could mean that you need your license AND you need EITHER ten dollars OR a voucher. Only rewriting the sentence, or placing appropriate punctuation can resolve a syntactic ambiguity. For the notion of, and theoretic results about, syntactic ambiguity in artificial, formal languages (such as computer programming languages), see Ambiguous grammar. Usually, semantic and syntactic ambiguity go hand in hand. The sentence "We saw her duck" is also syntactically ambiguous. Conversely, a sentence like "He ate the cookies on the couch" is also semantically ambiguous. Rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. For example, the command "Cook, cook!" can be parsed as "Cook (noun used as vocative), cook (imperative verb form)!", but also as "Cook (imperative verb form), cook (noun used as vocative)!". It is more common that a syntactically unambiguous phrase has a semantic ambiguity; for example, the lexical ambiguity in "Your boss is a funny man" is purely semantic, leading to the response "Funny ha-ha or funny peculiar?" Spoken language can contain many more types of ambiguities which are called phonological ambiguities, where there is more than one way to compose a set of sounds into words. For example, "ice cream" and "I scream". Such ambiguity is generally resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called a mondegreen. Metonymy involves referring to one entity by the name of a different but closely related entity (for example, using "wheels" to refer to a car, or "Wall Street" to refer to the stock exchanges located on that street or even the entire US financial sector). In the modern vocabulary of critical semiotics, metonymy encompasses any potentially ambiguous word substitution that is based on contextual contiguity (located close together), or a function or process that an object performs, such as "sweet ride" to refer to a nice car. Metonym miscommunication is considered a primary mechanism of linguistic humor. Philosophy Philosophers (and other users of logic) spend a lot of time and effort searching for and removing (or intentionally adding) ambiguity in arguments because it can lead to incorrect conclusions and can be used to deliberately conceal bad arguments. 
For example, a politician might say, "I oppose taxes which hinder economic growth", an example of a glittering generality. Some will think they oppose taxes in general because they hinder economic growth. Others may think they oppose only those taxes that they believe will hinder economic growth. In writing, the sentence can be rewritten to reduce possible misinterpretation, either by adding a comma after "taxes" (to convey the first sense) or by changing "which" to "that" (to convey the second sense) or by rewriting it in other ways. The devious politician hopes that each constituent will interpret the statement in the most desirable way, and think the politician supports everyone's opinion. However, the opposite can also be true—an opponent can turn a positive statement into a bad one if the speaker uses ambiguity (intentionally or not). The logical fallacies of amphiboly and equivocation rely heavily on the use of ambiguous words and phrases. In continental philosophy (particularly phenomenology and existentialism), there is much greater tolerance of ambiguity, as it is generally seen as an integral part of the human condition. Martin Heidegger argued that the relation between the subject and object is ambiguous, as is the relation of mind and body, and part and whole. In Heidegger's phenomenology, Dasein is always in a meaningful world, but there is always an underlying background for every instance of signification. Thus, although some things may be certain, they have little to do with Dasein's sense of care and existential anxiety, e.g., in the face of death. In calling his work Being and Nothingness an "essay in phenomenological ontology" Jean-Paul Sartre follows Heidegger in defining the human essence as ambiguous, or relating fundamentally to such ambiguity. Simone de Beauvoir tries to base an ethics on Heidegger's and Sartre's writings (The Ethics of Ambiguity), where she highlights the need to grapple with ambiguity: "as long as there have been philosophers and they have thought, most of them have tried to mask it... And the ethics which they have proposed to their disciples has always pursued the same goal. It has been a matter of eliminating the ambiguity by making oneself pure inwardness or pure externality, by escaping from the sensible world or being engulfed by it, by yielding to eternity or enclosing oneself in the pure moment." Ethics cannot be based on the authoritative certainty given by mathematics and logic, or prescribed directly from the empirical findings of science. She states: "Since we do not succeed in fleeing it, let us, therefore, try to look the truth in the face. Let us try to assume our fundamental ambiguity. It is in the knowledge of the genuine conditions of our life that we must draw our strength to live and our reason for acting". Other continental philosophers suggest that concepts such as life, nature, and sex are ambiguous. Corey Anton has argued that we cannot be certain what is separate from or unified with something else: language, he asserts, divides what is not, in fact, separate. Following Ernest Becker, he argues that the desire to 'authoritatively disambiguate' the world and existence has led to numerous ideologies and historical events such as genocide. On this basis, he argues that ethics must focus on 'dialectically integrating opposites' and balancing tension, rather than seeking a priori validation or certainty. Like the existentialists and phenomenologists, he sees the ambiguity of life as the basis of creativity. 
Literature and rhetoric In literature and rhetoric, ambiguity can be a useful tool. Groucho Marx's classic joke depends on a grammatical ambiguity for its humor, for example: "Last night I shot an elephant in my pajamas. How he got in my pajamas, I'll never know". Songs and poetry often rely on ambiguous words for artistic effect, as in the song title "Don't It Make My Brown Eyes Blue" (where "blue" can refer to the color, or to sadness). In narrative, ambiguity can be introduced in several ways: motive, plot, character. F. Scott Fitzgerald uses the latter type of ambiguity with notable effect in his novel The Great Gatsby. Mathematical notation Mathematical notation, widely used in physics and other sciences, avoids many ambiguities compared to expression in natural language. However, for various reasons, several lexical, syntactic and semantic ambiguities remain. Names of functions The ambiguity in the style of writing a function should not be confused with a multivalued function, which can (and should) be defined in a deterministic and unambiguous way. Several special functions still do not have established notations. Usually, the conversion to another notation requires scaling the argument or the resulting value; sometimes, the same name of the function is used, causing confusion. Examples of such functions with unsettled notations include the sinc function, the elliptic integral of the third kind (when translating an elliptic integral from MAPLE to Mathematica, one should replace the second argument by its square, see Talk:Elliptic integral#List of notations; dealing with complex values, this may cause problems), the exponential integral and the Hermite polynomials. Expressions Ambiguous expressions often appear in physical and mathematical texts. It is common practice to omit multiplication signs in mathematical expressions. Also, it is common to give the same name to a variable and a function, for example, f = f(x). Then, if one sees f(y+1), there is no way to distinguish whether it means f multiplied by (y+1), or the function f evaluated at the argument y+1. In each case of use of such notations, the reader is supposed to be able to perform the deduction and reveal the true meaning. Creators of algorithmic languages try to avoid ambiguities. Many programming languages (for example C++ and Fortran) require the character * as the symbol of multiplication. The Wolfram Language used in Mathematica allows the user to omit the multiplication symbol, but requires square brackets to indicate the argument of a function; square brackets are not allowed for grouping of expressions. Fortran, in addition, does not allow use of the same name (identifier) for different objects, for example, function and variable; in particular, the expression f=f(x) is qualified as an error. The order of operations may depend on the context. In most programming languages, the operations of division and multiplication have equal priority and are executed from left to right. Until the last century, many editorial conventions assumed that multiplication is performed first, so that, for example, a/bc is interpreted as a/(bc); in this case, the insertion of parentheses is required when translating the formulas to an algorithmic language. In addition, it is common to write an argument of a function without parentheses, which also may lead to ambiguity. 
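The left-to-right convention just described can be checked directly in code. The short Python sketch below is only an illustration (the numeric values are arbitrary): like most programming languages, Python parses a/b*c as (a/b)*c, so the "multiplication first" reading a/(bc) has to be written with explicit parentheses.

    # Division and multiplication have equal precedence and associate left to right.
    a, b, c = 12.0, 4.0, 3.0
    left_to_right = a / b * c            # parsed as (a / b) * c, giving 9.0
    multiplication_first = a / (b * c)   # the "a/bc" reading, giving 1.0
    print(left_to_right, multiplication_first)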
In the scientific journal style, one uses roman letters to denote elementary functions, whereas variables are written using italics. For example, in mathematical journals the expression sin written in italics does not denote the sine function, but the product of the three variables s, i and n, although in the informal notation of a slide presentation it may stand for sin(x). Commas in multi-component subscripts and superscripts are sometimes omitted; this is also potentially ambiguous notation. For example, in a notation such as T_mnk, the reader can only infer from the context whether it means a single-index object, taken with the subscript equal to the product of the variables m, n and k, or an indication of a trivalent tensor. Examples of potentially confusing ambiguous mathematical expressions An expression such as sin^2 α/2 can be understood to mean either sin^2(α/2) or (sin^2 α)/2. Often the author's intention can be understood from the context, in cases where only one of the two makes sense, but an ambiguity like this should be avoided, for example by writing sin^2(α/2) or (sin^2 α)/2 explicitly. The expression sin^-1 α means arcsin(α) in several texts, though it might be thought to mean (sin α)^-1, since sin^n α commonly means (sin α)^n. Conversely, sin^2 α might seem to mean sin(sin α), as this exponentiation notation usually denotes function iteration: in general, f^2(x) means f(f(x)). However, for trigonometric and hyperbolic functions, this notation conventionally means exponentiation of the result of function application. The expression a/2b can be interpreted as meaning (a/2)b; however, it is more commonly understood to mean a/(2b). Notations in quantum optics and quantum mechanics It is common to define the coherent states in quantum optics with |α⟩ and states with a fixed number of photons with |n⟩. Then, there is an "unwritten rule": the state is coherent if there are more Greek characters than Latin characters in the argument, and a photon-number state if the Latin characters dominate. The ambiguity becomes even worse if |x⟩ is used for the states with a certain value of the coordinate, and |p⟩ means the state with a certain value of the momentum, as may be done in books on quantum mechanics. Such ambiguities easily lead to confusion, especially if some normalized, dimensionless variables are used. The expression |1⟩ may mean a state with a single photon, or the coherent state with mean amplitude equal to 1, or a state with momentum equal to unity, and so on. The reader is supposed to guess from the context. Ambiguous terms in physics and mathematics Some physical quantities do not yet have established notations; their value (and sometimes even dimension, as in the case of the Einstein coefficients) depends on the system of notation. Many terms are ambiguous. Each use of an ambiguous term should be preceded by a definition suitable for the specific case. As Ludwig Wittgenstein states in the Tractatus Logico-Philosophicus: "...Only in the context of a proposition has a name meaning." A highly confusing term is gain. For example, the sentence "the gain of a system should be doubled", without context, means close to nothing. It may mean that the ratio of the output voltage of an electric circuit to the input voltage should be doubled. It may mean that the ratio of the output power of an electric or optical circuit to the input power should be doubled. It may mean that the gain of the laser medium should be doubled, for example, doubling the population of the upper laser level in a quasi-two level system (assuming negligible absorption of the ground-state). The term intensity is ambiguous when applied to light. The term can refer to any of irradiance, luminous intensity, radiant intensity, or radiance, depending on the background of the person using the term. 
Confusion may also arise over the use of atomic percent as a measure of the concentration of a dopant, or of resolution of an imaging system as a measure of the size of the smallest detail which can still be resolved against the background of statistical noise. See also Accuracy and precision. The Berry paradox arises as a result of systematic ambiguity in the meaning of terms such as "definable" or "nameable". Terms of this kind give rise to vicious circle fallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal. Mathematical interpretation of ambiguity In mathematics and logic, ambiguity can be considered to be an instance of the logical concept of underdetermination—for example, the equation X = Y leaves open what the value of X is—while its opposite is a self-contradiction, also called inconsistency, paradoxicalness, or oxymoron, or in mathematics an inconsistent system—such as the pair of equations X = 2 and X = 3, which has no solution. Logical ambiguity and self-contradiction are analogous to visual ambiguity and impossible objects, such as the Necker cube and impossible cube, or many of the drawings of M. C. Escher. Constructed language Some languages have been created with the intention of avoiding ambiguity, especially lexical ambiguity. Lojban and Loglan are two related languages which have been created for this purpose, addressing syntactic ambiguity as well. The languages can be both spoken and written. These languages are intended to provide greater technical precision than large natural languages, although historically, such attempts at language improvement have been criticized. Languages composed from many diverse sources contain much ambiguity and inconsistency. The many exceptions to syntax and semantic rules are time-consuming and difficult to learn. Biology In structural biology, ambiguity has been recognized as a problem for studying protein conformations. The analysis of a protein three-dimensional structure consists in dividing the macromolecule into subunits called domains. The difficulty of this task arises from the fact that different definitions of what a domain is can be used (e.g. folding autonomy, function, thermodynamic stability, or domain motions), which sometimes results in a single protein having different—yet equally valid—domain assignments. Christianity and Judaism Christianity and Judaism employ the concept of paradox synonymously with "ambiguity". Many Christians and Jews endorse Rudolf Otto's description of the sacred as 'mysterium tremendum et fascinans', the awe-inspiring mystery which fascinates humans. The orthodox Catholic writer G. K. Chesterton regularly employed paradox to tease out the meanings in common concepts which he found ambiguous or to reveal meaning often overlooked or forgotten in common phrases. (The title of one of his most famous books, Orthodoxy, itself employs such a paradox.) Music In music, pieces or sections which confound expectations and may be or are interpreted simultaneously in different ways are ambiguous, such as some polytonality, polymeter, other ambiguous meters or rhythms, and ambiguous phrasing, or (Stein 2005, p.79) any aspect of music. The music of Africa is often purposely ambiguous. To quote Sir Donald Francis Tovey (1935, p.195), "Theorists are apt to vex themselves with vain efforts to remove uncertainty just where it has a high aesthetic value." 
Visual art In visual art, certain images are visually ambiguous, such as the Necker cube, which can be interpreted in two ways. Perceptions of such objects remain stable for a time, then may flip, a phenomenon called multistable perception. The opposite of such ambiguous images are impossible objects. Pictures or photographs may also be ambiguous at the semantic level: the visual image is unambiguous, but the meaning and narrative may be ambiguous: is a certain facial expression one of excitement or fear, for instance? Social psychology and the bystander effect In social psychology, ambiguity is a factor used in determining people's responses to various situations. High levels of ambiguity in an emergency (e.g. an unconscious man lying on a park bench) make witnesses less likely to offer any sort of assistance, due to the fear that they may have misinterpreted the situation and acted unnecessarily. Alternatively, non-ambiguous emergencies (e.g. an injured person verbally asking for help) elicit more consistent intervention and assistance. With regard to the bystander effect, studies have shown that emergencies deemed ambiguous trigger the appearance of the classic bystander effect (wherein more witnesses decrease the likelihood of any of them helping) far more than non-ambiguous emergencies. Computer science In computer science, the SI prefixes kilo-, mega- and giga- were historically used in certain contexts to mean the first three powers of 1024 (1024, 1024^2 and 1024^3), contrary to the metric system in which these prefixes unambiguously mean one thousand, one million, and one billion. This usage is particularly prevalent with electronic memory devices (e.g. DRAM) addressed directly by a binary machine register where a decimal interpretation makes no practical sense. Subsequently, the Ki, Mi, and Gi prefixes were introduced so that binary prefixes could be written explicitly, also rendering k, M, and G unambiguous in texts conforming to the new standard—this led to a new ambiguity in engineering documents lacking outward trace of the binary prefixes (necessarily indicating the new style) as to whether the usage of k, M, and G remains ambiguous (old style) or not (new style). 1 M (where M is ambiguously 1,000,000 or 1,048,576) is less uncertain than the engineering value 1.0e6 (defined to designate the interval 950,000 to 1,050,000). As non-volatile storage devices begin to exceed 1 GB in capacity (where the ambiguity begins to routinely impact the second significant digit), GB and TB almost always mean 10^9 and 10^12 bytes.
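The decimal/binary ambiguity above is easy to make concrete. The short Python sketch below is only an illustration (the 500 GB figure is an arbitrary example): it converts a capacity advertised in decimal gigabytes into binary gibibytes, which is why software that counts in powers of 1024 reports a smaller number than the label on the drive.

    # Decimal (SI) versus binary (IEC) interpretations of storage prefixes.
    SI_GIGA = 10 ** 9    # 1 GB  = 1,000,000,000 bytes
    IEC_GIBI = 2 ** 30   # 1 GiB = 1,073,741,824 bytes

    advertised_gb = 500                      # a nominally "500 GB" drive
    total_bytes = advertised_gb * SI_GIGA
    reported_gib = total_bytes / IEC_GIBI    # about 465.7 GiB

    print(f"{advertised_gb} GB = {total_bytes:,} bytes = {reported_gib:.1f} GiB")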
Ambiguity
Asia is Earth's largest and most populous continent, located primarily in the Eastern and Northern Hemispheres. It shares the continental landmass of Eurasia with the continent of Europe, and the continental landmass of Afro-Eurasia with Africa and Europe. Asia covers about 30% of Earth's total land area and 8.7% of the Earth's total surface area. The continent, which has long been home to the majority of the human population, was the site of many of the first civilizations. Its 4.7 billion people constitute roughly 60% of the world's population. In general terms, Asia is bounded on the east by the Pacific Ocean, on the south by the Indian Ocean, and on the north by the Arctic Ocean. The border of Asia with Europe is a historical and cultural construct, as there is no clear physical and geographical separation between them. It is somewhat arbitrary and has moved since its first conception in classical antiquity. The division of Eurasia into two continents reflects East–West cultural, linguistic, and ethnic differences, some of which vary on a spectrum rather than with a sharp dividing line. The most commonly accepted boundaries place Asia to the east of the Suez Canal, separating it from Africa; and to the east of the Turkish Straits, the Ural Mountains and Ural River, and to the south of the Caucasus Mountains and the Caspian and Black Seas, separating it from Europe. China and India alternated in being the largest economies in the world from 1 to 1800 CE. China was a major economic power and attracted many to the east, and for many the legendary wealth and prosperity of the ancient culture of India personified Asia, attracting European commerce, exploration and colonialism. The accidental discovery of a trans-Atlantic route from Europe to America by Columbus while in search of a route to India demonstrates this deep fascination. The Silk Road became the main east–west trading route in the Asian hinterlands while the Straits of Malacca stood as a major sea route. Asia has exhibited economic dynamism (particularly East Asia) as well as robust population growth during the 20th century, but overall population growth has since fallen. Asia was the birthplace of most of the world's mainstream religions, including Hinduism, Zoroastrianism, Judaism, Jainism, Buddhism, Confucianism, Taoism, Christianity, Islam and Sikhism, as well as many other religions. Given its size and diversity, the concept of Asia—a name dating back to classical antiquity—may actually have more to do with human geography than physical geography. Asia varies greatly across and within its regions with regard to ethnic groups, cultures, environments, economics, historical ties and government systems. It also has a mix of many different climates, ranging from the equatorial south, via the hot deserts of the Middle East and the temperate areas in the east and the continental centre, to the vast subarctic and polar areas of Siberia. Definition and boundaries Asia–Africa boundary The boundary between Asia and Africa is the Red Sea, the Gulf of Suez, and the Suez Canal. This makes Egypt a transcontinental country, with the Sinai peninsula in Asia and the remainder of the country in Africa. Asia–Europe boundary The threefold division of the Old World into Europe, Asia and Africa has been in use since the 6th century BC, due to Greek geographers such as Anaximander and Hecataeus. 
Anaximander placed the boundary between Asia and Europe along the Phasis River (the modern Rioni river) in Georgia of Caucasus (from its mouth by Poti on the Black Sea coast, through the Surami Pass and along the Kura River to the Caspian Sea), a convention still followed by Herodotus in the 5th century BC. During the Hellenistic period, this convention was revised, and the boundary between Europe and Asia was now considered to be the Tanais (the modern Don River). This is the convention used by Roman era authors such as Posidonius, Strabo and Ptolemy. The border between Asia and Europe was historically defined by European academics. The Don River became unsatisfactory to northern Europeans when Peter the Great, king of the Tsardom of Russia, defeating rival claims of Sweden and the Ottoman Empire to the eastern lands, and armed resistance by the tribes of Siberia, synthesized a new Russian Empire extending to the Ural Mountains and beyond, founded in 1721. The major geographical theorist of the empire was a former Swedish prisoner-of-war, taken at the Battle of Poltava in 1709 and assigned to Tobolsk, where he associated with Peter's Siberian official, Vasily Tatishchev, and was allowed freedom to conduct geographical and anthropological studies in preparation for a future book. In Sweden, five years after Peter's death, in 1730 Philip Johan von Strahlenberg published a new atlas proposing the Ural Mountains as the border of Asia. Tatishchev announced that he had proposed the idea to von Strahlenberg. The latter had suggested the Emba River as the lower boundary. Over the next century various proposals were made until the Ural River prevailed in the mid-19th century. The border had been moved perforce from the Black Sea to the Caspian Sea into which the Ural River projects. The border between the Black Sea and the Caspian is usually placed along the crest of the Caucasus Mountains, although it is sometimes placed further north. Asia–Oceania boundary The border between Asia and the region of Oceania is usually placed somewhere in the Malay Archipelago. The Maluku Islands in Indonesia are often considered to lie on the border of southeast Asia, with New Guinea, to the east of the islands, being wholly part of Oceania. The terms Southeast Asia and Oceania, devised in the 19th century, have had several vastly different geographic meanings since their inception. The chief factor in determining which islands of the Malay Archipelago are Asian has been the location of the colonial possessions of the various empires there (not all European). Lewis and Wigen assert, "The narrowing of 'Southeast Asia' to its present boundaries was thus a gradual process." Ongoing definition Geographical Asia is a cultural artifact of European conceptions of the world, beginning with the Ancient Greeks, being imposed onto other cultures, an imprecise concept causing endemic contention about what it means. Asia does not exactly correspond to the cultural borders of its various types of constituents. From the time of Herodotus a minority of geographers have rejected the three-continent system (Europe, Africa, Asia) on the grounds that there is no substantial physical separation between them. For example, Sir Barry Cunliffe, the emeritus professor of European archeology at Oxford, argues that Europe has been geographically and culturally merely "the western excrescence of the continent of Asia". 
Geographically, Asia is the major eastern constituent of the continent of Eurasia with Europe being a northwestern peninsula of the landmass. Asia, Europe and Africa make up a single continuous landmass—Afro-Eurasia (except for the Suez Canal)—and share a common continental shelf. Almost all of Europe and a major part of Asia sit atop the Eurasian Plate, adjoined on the south by the Arabian and Indian Plate and with the easternmost part of Siberia (east of the Chersky Range) on the North American Plate. Etymology The idea of a place called "Asia" was originally a concept of Greek civilization, though this might not correspond to the entire continent currently known by that name. The English word comes from Latin literature, where it has the same form, "Asia". Whether "Asia" in other languages comes from Latin of the Roman Empire is much less certain, and the ultimate source of the Latin word is uncertain, though several theories have been published. One of the first classical writers to use Asia as a name of the whole continent was Pliny. This metonymical change in meaning is common and can be observed in some other geographical names, such as Scandinavia (from Scania). Bronze Age Before Greek poetry, the Aegean Sea area was in a Greek Dark Age, at the beginning of which syllabic writing was lost and alphabetic writing had not begun. Prior to then in the Bronze Age the records of the Assyrian Empire, the Hittite Empire and the various Mycenaean states of Greece mention a region undoubtedly Asia, certainly in Anatolia, including if not identical to Lydia. These records are administrative and do not include poetry. The Mycenaean states were destroyed about 1200 BCE by unknown agents, though one school of thought assigns the Dorian invasion to this time. The burning of the palaces caused the clay tablets holding the Mycenaean administrative records to be preserved by baking. These tablets were written in a Greek syllabic script called Linear B. This script was deciphered by a number of interested parties, most notably by a young World War II cryptographer, Michael Ventris, subsequently assisted by the scholar, John Chadwick. A major cache discovered by Carl Blegen at the site of ancient Pylos included hundreds of male and female names formed by different methods. Some of these are of women held in servitude (as study of the society implied by the content reveals). They were used in trades, such as cloth-making, and usually came with children. The epithet lawiaiai, "captives", associated with some of them identifies their origin. Some are ethnic names. One in particular, aswiai, identifies "women of Asia". Perhaps they were captured in Asia, but some others, Milatiai, appear to have been of Miletus, a Greek colony, which would not have been raided for slaves by Greeks. Chadwick suggests that the names record the locations where these foreign women were purchased. The name is also in the singular, Aswia, which refers both to the name of a country and to a female from there. There is a masculine form, . This Aswia appears to have been a remnant of a region known to the Hittites as Assuwa, centered on Lydia, or "Roman Asia". This name, Assuwa, has been suggested as the origin for the name of the continent "Asia". The Assuwa league was a confederation of states in western Anatolia, defeated by the Hittites under Tudhaliya I around 1400 BCE. Classical antiquity Latin Asia and Greek Ἀσία appear to be the same word. Roman authors translated Ἀσία as Asia. 
The Romans named a province Asia, located in western Anatolia (in modern-day Turkey). There was an Asia Minor and an Asia Major located in modern-day Iraq. As the earliest evidence of the name is Greek, it is likely circumstantially that Asia came from Ἀσία, but ancient transitions, due to the lack of literary contexts, are difficult to catch in the act. The most likely vehicles were the ancient geographers and historians, such as Herodotus, who were all Greek. Ancient Greek certainly evidences early and rich uses of the name. The first continental use of Asia is attributed to Herodotus (about 440 BCE), not because he innovated it, but because his Histories are the earliest surviving prose to describe it in any detail. He defines it carefully, mentioning the previous geographers whom he had read, but whose works are now missing. By it he means Anatolia and the Persian Empire, in contrast to Greece and Egypt. Herodotus comments that he is puzzled as to why three women's names were "given to a tract which is in reality one" (Europa, Asia, and Libya, referring to Africa), stating that most Greeks assumed that Asia was named after the wife of Prometheus (i.e. Hesione), but that the Lydians say it was named after Asies, son of Cotys, who passed the name on to a tribe at Sardis. In Greek mythology, "Asia" (Ἀσία) or "Asie" (Ἀσίη) was the name of a "Nymph or Titan goddess of Lydia". In ancient Greek religion, places were under the care of female divinities, parallel to guardian angels. The poets detailed their doings and generations in allegoric language salted with entertaining stories, which subsequently playwrights transformed into classical Greek drama and became "Greek mythology". For example, Hesiod mentions the daughters of Tethys and Ocean, among whom are a "holy company", "who with the Lord Apollo and the Rivers have youths in their keeping". Many of these are geographic: Doris, Rhodea, Europa, Asia. Hesiod explains: The Iliad (attributed by the ancient Greeks to Homer) mentions two Phrygians (the tribe that replaced the Luvians in Lydia) in the Trojan War named Asios (an adjective meaning "Asian"); and also a marsh or lowland containing a marsh in Lydia as . According to many Muslims, the term came from Ancient Egypt's Queen Asiya, the adoptive mother of Moses. History The history of Asia can be seen as the distinct histories of several peripheral coastal regions: East Asia, South Asia, Southeast Asia and the Middle East, linked by the interior mass of the Central Asian steppes. The coastal periphery was home to some of the world's earliest known civilizations, each of them developing around fertile river valleys. The civilizations in Mesopotamia, the Indus Valley and the Yellow River shared many similarities. These civilizations may well have exchanged technologies and ideas such as mathematics and the wheel. Other innovations, such as writing, seem to have been developed individually in each area. Cities, states and empires developed in these lowlands. The central steppe region had long been inhabited by horse-mounted nomads who could reach all areas of Asia from the steppes. The earliest postulated expansion out of the steppe is that of the Indo-Europeans, who spread their languages into the Middle East, South Asia, and the borders of China, where the Tocharians resided. The northernmost part of Asia, including much of Siberia, was largely inaccessible to the steppe nomads, owing to the dense forests, climate and tundra. These areas remained very sparsely populated. 
The center and the peripheries were mostly kept separated by mountains and deserts. The Caucasus and Himalaya mountains and the Karakum and Gobi deserts formed barriers that the steppe horsemen could cross only with difficulty. While the urban city dwellers were more advanced technologically and socially, in many cases they could do little militarily to defend against the mounted hordes of the steppe. However, the lowlands did not have enough open grasslands to support a large horsebound force; for this and other reasons, the nomads who conquered states in China, India, and the Middle East often found themselves adapting to the local, more affluent societies. The Islamic Caliphate's defeats of the Byzantine and Persian empires brought West Asia, southern parts of Central Asia and western parts of South Asia under its control during its conquests of the 7th century. The Mongol Empire conquered a large part of Asia in the 13th century, an area extending from China to Europe. Before the Mongol invasion, the Song dynasty reportedly had approximately 120 million citizens; the 1300 census which followed the invasion reported roughly 60 million people. The Black Death, one of the most devastating pandemics in human history, is thought to have originated in the arid plains of central Asia, where it then travelled along the Silk Road. The Russian Empire began to expand into Asia from the 17th century, and would eventually take control of all of Siberia and most of Central Asia by the end of the 19th century. The Ottoman Empire controlled Anatolia, most of the Middle East, North Africa and the Balkans from the mid-16th century onwards. In the 17th century, the Manchu conquered China and established the Qing dynasty. The Islamic Mughal Empire and the Hindu Maratha Empire controlled much of India in the 16th and 18th centuries respectively. The Empire of Japan controlled most of East Asia and much of Southeast Asia, New Guinea and the Pacific islands until the end of World War II. Geography and climate Asia is the largest continent on Earth. It covers 9% of the Earth's total surface area (or 30% of its land area), and has the longest coastline, at . Asia is generally defined as comprising the eastern four-fifths of Eurasia. It is located to the east of the Suez Canal and the Ural Mountains, and south of the Caucasus Mountains (or the Kuma–Manych Depression) and the Caspian and Black Seas. It is bounded on the east by the Pacific Ocean, on the south by the Indian Ocean and on the north by the Arctic Ocean. Asia is subdivided into 49 countries, five of which (Georgia, Azerbaijan, Russia, Kazakhstan and Turkey) are transcontinental countries lying partly in Europe. Geographically, Russia is partly in Asia, but is considered a European nation, both culturally and politically. The Gobi Desert is in Mongolia and the Arabian Desert stretches across much of the Middle East. The Yangtze River in China is the longest river on the continent. The Himalayas between Nepal and China are the tallest mountain range in the world. Tropical rainforests stretch across much of southern Asia, and coniferous and deciduous forests lie farther north. Main regions There are various approaches to the regional division of Asia. The following subdivision into regions is used, among others, by the UN statistics agency UNSD. This division of Asia into regions by the United Nations is done solely for statistical reasons and does not imply any assumption about political or other affiliations of countries and territories.
North Asia (Siberia)
Central Asia (The 'stans)
Western Asia (The Middle East or Near East)
South Asia (Indian subcontinent)
East Asia (Far East)
Southeast Asia (East Indies and Indochina)
Climate Asia has extremely diverse climate features. Climates range from arctic and subarctic in Siberia to tropical in southern India and Southeast Asia. It is moist across southeast sections, and dry across much of the interior. Some of the largest daily temperature ranges on Earth occur in western sections of Asia. The monsoon circulation dominates across southern and eastern sections, due to the presence of the Himalayas forcing the formation of a thermal low which draws in moisture during the summer. Southwestern sections of the continent are hot. Siberia is one of the coldest places in the Northern Hemisphere, and can act as a source of arctic air masses for North America. The most active place on Earth for tropical cyclone activity lies northeast of the Philippines and south of Japan. A survey carried out in 2010 by the global risk analysis firm Maplecroft identified 16 countries that are extremely vulnerable to climate change. Each nation's vulnerability was calculated using 42 socioeconomic and environmental indicators, which identified the likely climate change impacts during the next 30 years. The Asian countries of Bangladesh, India, the Philippines, Vietnam, Thailand, Pakistan, China and Sri Lanka were among the 16 countries facing extreme risk from climate change. Some shifts are already occurring. For example, in tropical parts of India with a semi-arid climate, the temperature increased by 0.4 °C between 1901 and 2003. A 2013 study by the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) aimed to find science-based, pro-poor approaches and techniques that would enable Asia's agricultural systems to cope with climate change, while benefitting poor and vulnerable farmers. The study's recommendations ranged from improving the use of climate information in local planning and strengthening weather-based agro-advisory services, to stimulating diversification of rural household incomes and providing incentives to farmers to adopt natural resource conservation measures to enhance forest cover, replenish groundwater and use renewable energy. The ten countries of the Association of Southeast Asian Nations (ASEAN) - Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand, and Vietnam - are among the most vulnerable to the effects of climate change in the world; however, ASEAN's climate mitigation efforts are not commensurate with the climate threats and risks it faces. Economy Asia has the largest continental economy in the world by both nominal GDP and PPP, and is the fastest growing economic region. The largest economies in Asia are China, Japan, India, South Korea, Indonesia and Turkey, based on GDP in both nominal and PPP terms. Based on Global Office Locations 2011, Asia dominated the office locations, with 4 of the top 5 being in Asia: Hong Kong, Singapore, Tokyo and Seoul. Around 68 percent of international firms have an office in Hong Kong. In the late 1990s and early 2000s, the economies of China and India grew rapidly, both with an average annual growth rate of more than 8%.
Other recent very-high-growth nations in Asia include Israel, Malaysia, Indonesia, Bangladesh, Thailand, Vietnam, and the Philippines, and mineral-rich nations such as Kazakhstan, Turkmenistan, Iran, Brunei, the United Arab Emirates, Qatar, Kuwait, Saudi Arabia, Bahrain and Oman. According to economic historian Angus Maddison in his book The World Economy: A Millennial Perspective, India had the world's largest economy in 1 CE and in 1000 CE. Historically, India was the largest economy in the world for most of the two millennia from the 1st until the 19th century, contributing 25% of the world's industrial output. China was the largest and most advanced economy on earth for much of recorded history and shared the mantle with India. For several decades in the late twentieth century Japan was the largest economy in Asia and the second-largest of any single nation in the world, after surpassing Germany in 1968 and the Soviet Union (measured in net material product) in 1990. (NB: A number of supranational economies are larger, such as the European Union (EU), the North American Free Trade Agreement (NAFTA) or APEC). This ended in 2010 when China overtook Japan to become the world's second largest economy. In the late 1980s and early 1990s, Japan's GDP was almost as large (current exchange rate method) as that of the rest of Asia combined. In 1995, Japan's economy nearly equaled that of the US as the largest economy in the world for a day, after the Japanese currency reached a record high of 79 yen/US$. Economic growth in Asia from World War II to the 1990s was concentrated in Japan as well as the four regions of South Korea, Taiwan, Hong Kong and Singapore located in the Pacific Rim, known as the Asian tigers, which have now all received developed country status, having the highest GDP per capita in Asia. It is forecasted that India will overtake Japan in terms of nominal GDP by 2025. By 2027, according to Goldman Sachs, China will have the largest economy in the world. Several trade blocs exist, with the most developed being the Association of Southeast Asian Nations. Asia is the largest continent in the world by a considerable margin, and it is rich in natural resources, such as petroleum, forests, fish, water, rice, copper and silver. Manufacturing in Asia has traditionally been strongest in East and Southeast Asia, particularly in China, Taiwan, South Korea, Japan, India, the Philippines, and Singapore. Japan and South Korea continue to dominate in the area of multinational corporations, but increasingly the PRC and India are making significant inroads. Many companies from Europe, North America, South Korea and Japan have operations in Asia's developing countries to take advantage of its abundant supply of cheap labour and relatively developed infrastructure. According to Citigroup, 9 of its 11 Global Growth Generators countries are in Asia, driven by population and income growth. They are Bangladesh, China, India, Indonesia, Iraq, Mongolia, the Philippines, Sri Lanka and Vietnam. Asia has three main financial centers: Hong Kong, Tokyo and Singapore. Call centers and business process outsourcing (BPO) operations are becoming major employers in India and the Philippines due to the availability of a large pool of highly skilled, English-speaking workers. The increased use of outsourcing has assisted the rise of India and China as financial centers. Due to its large and extremely competitive information technology industry, India has become a major hub for outsourcing.
Trade between Asian countries and countries on other continents is largely carried out on sea routes that are important for Asia, and several main routes have emerged. The main route leads from the Chinese coast south via Hanoi to Jakarta, Singapore and Kuala Lumpur, through the Strait of Malacca, via Colombo in Sri Lanka to the southern tip of India, via Malé to Mombasa in East Africa, from there to Djibouti, then through the Red Sea and the Suez Canal into the Mediterranean, and from there via Haifa, Istanbul and Athens to the upper Adriatic and the northern Italian hub of Trieste, with its rail connections to Central and Eastern Europe, or further to Barcelona and around Spain and France to the northern European ports. A far smaller part of the goods traffic runs via South Africa to Europe. A particularly significant part of the Asian goods traffic is carried out across the Pacific towards Los Angeles and Long Beach. In contrast to the sea routes, the Silk Road land route to Europe is, on the one hand, still under construction and, on the other, much smaller in scope. Intra-Asian trade, including sea trade, is growing rapidly. In 2010, Asia had 3.3 million millionaires (people with net worth over US$1 million excluding their homes), slightly below North America with 3.4 million millionaires. The previous year, Asia had overtaken Europe. Citigroup stated in The Wealth Report 2012 that the combined wealth of Asian centa-millionaires overtook North America's for the first time as the world's "economic center of gravity" continued moving east. At the end of 2011, there were 18,000 Asian people, mainly in Southeast Asia, China and Japan, who had at least $100 million in disposable assets, compared with 17,000 people in North America and 14,000 in Western Europe. Tourism With regional tourism growing and dominated by Chinese visitors, MasterCard's Global Destination Cities Index 2013 placed 10 Asia-Pacific cities among its top 20, and for the first time a city from an Asian country (Bangkok) was ranked first, with 15.98 million international visitors. Demographics East Asia had by far the strongest overall Human Development Index (HDI) improvement of any region in the world, nearly doubling average HDI attainment over the past 40 years, according to the report's analysis of health, education and income data. China, the second highest achiever in the world in terms of HDI improvement since 1970, is the only country on the "Top 10 Movers" list due to income rather than health or education achievements. Its per capita income increased a stunning 21-fold over the last four decades, also lifting hundreds of millions out of income poverty. Yet it was not among the region's top performers in improving school enrollment and life expectancy. Nepal, a South Asian country, emerges as one of the world's fastest movers since 1970, mainly due to health and education achievements. Its present life expectancy is 25 years longer than in the 1970s. More than four of every five children of school age in Nepal now attend primary school, compared to just one in five 40 years ago. Hong Kong ranked highest among the countries grouped on the HDI (number 7 in the world, which is in the "very high human development" category), followed by Singapore (9), Japan (19) and South Korea (22). Afghanistan (155) ranked lowest amongst Asian countries out of the 169 countries assessed. Languages Asia is home to several language families and many language isolates.
Most Asian countries have more than one language that is natively spoken. For instance, according to Ethnologue, more than 600 languages are spoken in Indonesia, more than 800 are spoken in India, and more than 100 are spoken in the Philippines. China has many languages and dialects in different provinces. Religions Many of the world's major religions have their origins in Asia, including the five most practiced in the world (excluding irreligion), which are Christianity, Islam, Hinduism, Chinese folk religion (classified as Confucianism and Taoism), and Buddhism. Asian mythology is complex and diverse. The story of the Great Flood, for example, as presented to Jews in the Hebrew Bible in the narrative of Noah—and later to Christians in the Old Testament, and to Muslims in the Quran—is earliest found in Mesopotamian mythology, in the Enûma Eliš and Epic of Gilgamesh. Hindu mythology similarly tells about an avatar of Vishnu in the form of a fish who warned Manu of a terrible flood. Ancient Chinese mythology also tells of a Great Flood spanning generations, one that required the combined efforts of emperors and divinities to control. Abrahamic The Abrahamic religions, including Judaism, Christianity, Islam, the Druze faith, and the Baháʼí Faith, originated in West Asia. Judaism, the oldest of the Abrahamic faiths, is practiced primarily in Israel, the indigenous homeland and historical birthplace of the Hebrew nation, which today consists both of those Jews who remained in the Middle East and those who returned from the diaspora in Europe, North America, and other regions, though various diaspora communities persist worldwide. Jews are the predominant ethnic group in Israel (75.6%), numbering about 6.1 million, although levels of adherence to the Jewish religion vary. Outside of Israel there are small ancient Jewish communities in Turkey (17,400), Azerbaijan (9,100), Iran (8,756), India (5,000) and Uzbekistan (4,000), among many other places. In total, there are 14.4–17.5 million (2016, est.) Jews alive in the world today, making them one of the smallest Asian minorities, at roughly 0.3 to 0.4 percent of the total population of the continent. Christianity is a widespread religion in Asia with more than 286 million adherents according to Pew Research Center in 2010, and nearly 364 million according to Britannica Book of the Year 2014, constituting around 12.6% of the total population of Asia. In the Philippines and East Timor, Roman Catholicism is the predominant religion; it was introduced by the Spaniards and the Portuguese, respectively. In Armenia and Georgia, Eastern Orthodoxy is the predominant religion. In the Middle East, such as in the Levant, Anatolia and Fars, Syriac Christianity (Church of the East) and Oriental Orthodoxy are prevalent minority denominations, which are both Eastern Christian sects mainly adhered to by Assyrian people or Syriac Christians. Vibrant indigenous minorities in Western Asia adhere to the Eastern Catholic Churches and Eastern Orthodoxy. Saint Thomas Christians in India trace their origins to the evangelistic activity of Thomas the Apostle in the 1st century. Significant Christian communities are also found in Central Asia, South Asia, Southeast Asia and East Asia. Islam, which originated in the Hejaz located in modern-day Saudi Arabia, is the second-largest and most widespread religion in Asia, with at least 1 billion Muslims constituting around 23.8% of the total population of Asia.
Indonesia, with 12.7% of the world's Muslim population, is currently the country with the largest Muslim population in the world, followed by Pakistan (11.5%), India (10%), Bangladesh, Iran and Turkey. Mecca, Medina and Jerusalem are the three holiest cities of Islam. The Hajj and Umrah attract large numbers of Muslim devotees from all over the world to Mecca and Medina. Iran is the largest Shi'a country. The Druze Faith or Druzism, which originated in Western Asia, is a monotheistic religion based on the teachings of figures like Hamza ibn-'Ali ibn-Ahmad and Al-Hakim bi-Amr Allah, and Greek philosophers such as Plato and Aristotle. The number of Druze people worldwide is around one million, with about 45% to 50% living in Syria, 35% to 40% in Lebanon, and less than 10% in Israel; recently there has also been a growing Druze diaspora. The Baháʼí Faith originated in Asia, in Iran (Persia), and spread from there to the Ottoman Empire, Central Asia, India, and Burma during the lifetime of Bahá'u'lláh. Since the middle of the 20th century, growth has particularly occurred in other Asian countries, because Baháʼí activities in many Muslim countries have been severely suppressed by authorities. The Lotus Temple is a large Baháʼí temple in India. Indian and East Asian religions Almost all Asian religions have a philosophical character, and Asian philosophical traditions cover a large spectrum of philosophical thoughts and writings. Indian philosophy includes Hindu philosophy and Buddhist philosophy. They include elements of nonmaterial pursuits, whereas another school of thought from India, Cārvāka, preached the enjoyment of the material world. The religions of Hinduism, Buddhism, Jainism and Sikhism originated in India, South Asia. In East Asia, particularly in China and Japan, Confucianism, Taoism and Zen Buddhism took shape. Hinduism has around 1.1 billion adherents. The faith represents around 25% of Asia's population and is the largest religion in Asia. However, it is mostly concentrated in South Asia. Over 80% of the populations of both India and Nepal adhere to Hinduism, alongside significant communities in Bangladesh, Pakistan, Bhutan, Sri Lanka and Bali, Indonesia. Many overseas Indians in countries such as Burma, Singapore and Malaysia also adhere to Hinduism. Buddhism has a great following in mainland Southeast Asia and East Asia. Buddhism is the religion of the majority of the populations of Cambodia (96%), Thailand (95%), Burma (80–89%), Japan (36–96%), Bhutan (75–84%), Sri Lanka (70%), Laos (60–67%) and Mongolia (53–93%). Large Buddhist populations also exist in Singapore (33–51%), Taiwan (35–93%), South Korea (23–50%), Malaysia (19–21%), Nepal (9–11%), Vietnam (10–75%), China (20–50%) and North Korea (2–14%), with small communities in India and Bangladesh. The Communist-governed countries of China, Vietnam and North Korea are officially atheist; thus the number of Buddhists and other religious adherents may be under-reported. Jainism is found mainly in India and in overseas Indian communities in countries such as the United States and Malaysia. Sikhism is found in Northern India and amongst overseas Indian communities in other parts of Asia, especially Southeast Asia. Confucianism is found predominantly in Mainland China, South Korea, Taiwan and in overseas Chinese populations. Taoism is found mainly in Mainland China, Taiwan, Malaysia and Singapore.
In many Chinese communities, Taoism is easily syncretized with Mahayana Buddhism; thus exact religious statistics are difficult to obtain and may be understated or overstated. Modern conflicts Some of the pivotal events in Asia relating to its relationship with the outside world in the period after the Second World War were:
The Partition of India
The Chinese Civil War
The Kashmir conflict
The Balochistan Conflict
The Naxalite–Maoist insurgency in India
The Korean War
The French-Indochina War
The Vietnam War
The Indonesia–Malaysia confrontation
The 1959 Tibetan uprising
The Sino-Vietnamese War
The Bangladesh Liberation War
The Yom Kippur War
The Xinjiang conflict
The Iranian Revolution
The Soviet–Afghan War
The Iran–Iraq War
The Cambodian Killing Fields
The Insurgency in Laos
The Lebanese Civil War
The Sri Lankan Civil War
The 1988 Maldives coup d'état
The Dissolution of the Soviet Union
The Gulf War
The Nepalese Civil War
The Indo-Pakistani wars and conflicts
The West Papua conflict
The First Nagorno-Karabakh War
The 1989 Tiananmen Square protests
The Indonesian occupation of East Timor
The 1999 Pakistani coup d'état
The War in Afghanistan
The Iraq War
The South Thailand insurgency
The 2006 Thai coup d'état
The Burmese Civil War
The Saffron Revolution
The Kurdish-Turkish conflict
The Arab Spring
The Arab–Israeli conflict
The Syrian Civil War
The Sino-Indian War
The 2014 Thai coup d'état
The Moro conflict in the Philippines
The Islamic State of Iraq and the Levant
The Turkish invasion of Syria
The Rohingya crisis in Myanmar
The Saudi Arabian-led intervention in Yemen
The Hong Kong protests
The 2020 China–India skirmishes
The 1969 inter-ethnic violence in Kuala Lumpur
Culture Nobel prizes The polymath Rabindranath Tagore, a Bengali poet, dramatist, and writer from Santiniketan, now in West Bengal, India, became in 1913 the first Asian Nobel laureate. He won his Nobel Prize in Literature for the notable impact his prose works and poetic thought had on English, French, and other national literatures of Europe and the Americas. He is also the writer of the national anthems of Bangladesh and India. Other Asian writers who have won the Nobel Prize in Literature include Yasunari Kawabata (Japan, 1968), Kenzaburō Ōe (Japan, 1994), Gao Xingjian (China, 2000), Orhan Pamuk (Turkey, 2006), and Mo Yan (China, 2012). Some may consider the American writer Pearl S. Buck an honorary Asian Nobel laureate; she spent considerable time in China as the daughter of missionaries and based many of her novels there, namely The Good Earth (1931) and The Mother (1933), as well as the biographies of her parents about their time in China, The Exile and Fighting Angel, all of which earned her the Literature prize in 1938. Also, Mother Teresa of India and Shirin Ebadi of Iran were awarded the Nobel Peace Prize for their significant and pioneering efforts for democracy and human rights, especially for the rights of women and children. Ebadi is the first Iranian and the first Muslim woman to receive the prize. Another Nobel Peace Prize winner is Aung San Suu Kyi from Burma, awarded for her peaceful and non-violent struggle under the country's military dictatorship. She is a nonviolent pro-democracy activist and leader of the National League for Democracy in Burma (Myanmar) and a noted prisoner of conscience. She is a Buddhist and was awarded the Nobel Peace Prize in 1991. Chinese dissident Liu Xiaobo was awarded the Nobel Peace Prize for "his long and non-violent struggle for fundamental human rights in China" on 8 October 2010.
He is the first Chinese citizen to be awarded a Nobel Prize of any kind while residing in China. In 2014, Kailash Satyarthi from India and Malala Yousafzai from Pakistan were awarded the Nobel Peace Prize "for their struggle against the suppression of children and young people and for the right of all children to education". Sir C. V. Raman was the first Asian to receive a Nobel Prize in the sciences. He won the Nobel Prize in Physics "for his work on the scattering of light and for the discovery of the effect named after him". Japan has won the most Nobel Prizes of any Asian nation, with 24, followed by India, which has won 13. Amartya Sen (born 3 November 1933) is an Indian economist who was awarded the 1998 Nobel Memorial Prize in Economic Sciences for his contributions to welfare economics and social choice theory, and for his interest in the problems of society's poorest members. Other Asian Nobel Prize winners include Subrahmanyan Chandrasekhar, Abdus Salam, Malala Yousafzai, Robert Aumann, Menachem Begin, Aaron Ciechanover, Avram Hershko, Daniel Kahneman, Shimon Peres, Yitzhak Rabin, Ada Yonath, Yasser Arafat, José Ramos-Horta and Bishop Carlos Filipe Ximenes Belo of Timor Leste, Kim Dae-jung, and 13 Japanese scientists. Most of these awardees are from Japan and Israel, except for Chandrasekhar and Raman (India), Abdus Salam and Malala Yousafzai (Pakistan), Arafat (Palestinian Territories), Kim (South Korea), and Ramos-Horta and Belo (Timor Leste). In 2006, Dr. Muhammad Yunus of Bangladesh was awarded the Nobel Peace Prize for the establishment of Grameen Bank, a community development bank that lends money to poor people, especially women, in Bangladesh. Dr. Yunus received his PhD in economics from Vanderbilt University, United States. He is internationally known for the concept of microcredit, which allows poor and destitute people with little or no collateral to borrow money. The borrowers typically pay back the money within the specified period, and the incidence of default is very low. The Dalai Lama has received approximately eighty-four awards over his spiritual and political career. On 22 June 2006, he became one of only four people ever to be recognized with Honorary Citizenship by the Governor General of Canada. On 28 May 2005, he received the Christmas Humphreys Award from the Buddhist Society in the United Kingdom. Most notable was the Nobel Peace Prize, presented in Oslo, Norway on 10 December 1989. Political geography Within the above-mentioned states are several partially recognized countries with limited to no international recognition. None of them are members of the UN.
The Atlantic Ocean is the second-largest of the world's five oceans, with an area of about . It covers approximately 20% of Earth's surface and about 29% of its water surface area. In the European perception of the world, it separates the "Old World" of Africa, Europe and Asia from the "New World" of the Americas. The Atlantic Ocean occupies an elongated, S-shaped basin extending longitudinally between Europe and Africa to the east, and the Americas to the west. As one component of the interconnected World Ocean, it is connected in the north to the Arctic Ocean, to the Pacific Ocean in the southwest, the Indian Ocean in the southeast, and the Southern Ocean in the south (other definitions describe the Atlantic as extending southward to Antarctica). The Atlantic Ocean is divided into two parts, the North(ern) Atlantic Ocean and the South(ern) Atlantic Ocean, by the Equatorial Counter Current at about 8°N. Scientific explorations of the Atlantic include the Challenger expedition, the German Meteor expedition, Columbia University's Lamont-Doherty Earth Observatory and the United States Navy Hydrographic Office. Etymology The oldest known mentions of an "Atlantic" sea come from Stesichorus around the mid-sixth century BC (Sch. A. R. 1. 211): (Greek: ; English: 'the Atlantic sea'; etym. 'Sea of Atlas') and in The Histories of Herodotus around 450 BC (Hdt. 1.202.4): (Greek: ; English: 'Sea of Atlas' or 'the Atlantic sea'), where the name refers to "the sea beyond the pillars of Heracles", which is said to be part of the sea that surrounds all land. In these uses, the name refers to Atlas, the Titan in Greek mythology, who supported the heavens and who later appeared as a frontispiece in medieval maps and also lent his name to modern atlases. On the other hand, to early Greek sailors and in Ancient Greek mythological literature such as the Iliad and the Odyssey, this all-encompassing ocean was instead known as Oceanus, the gigantic river that encircled the world; in contrast to the enclosed seas well known to the Greeks: the Mediterranean and the Black Sea. In contrast, the term "Atlantic" originally referred specifically to the Atlas Mountains in Morocco and the sea off the Strait of Gibraltar and the North African coast. The Greek word has been reused by scientists for the huge Panthalassa ocean that surrounded the supercontinent Pangaea hundreds of millions of years ago. The term "Aethiopian Ocean", derived from Ancient Ethiopia, was applied to the Southern Atlantic as late as the mid-19th century. During the Age of Discovery, the Atlantic was also known to English cartographers as the Great Western Ocean. The pond is a term often used by British and American speakers in reference to the Northern Atlantic Ocean, as a form of meiosis, or ironic understatement. It is used mostly when referring to events or circumstances "on this side of the pond" or "on the other side of the pond", rather than to discuss the ocean itself. The term dates to 1640, first appearing in print in a pamphlet released during the reign of Charles I, and reproduced in 1869 in Nehemiah Wallington's Historical Notices of Events Occurring Chiefly in The Reign of Charles I, where "great Pond" is used in reference to the Atlantic Ocean by Francis Windebank, Charles I's Secretary of State.
Extent and data The International Hydrographic Organization (IHO) defined the limits of the oceans and seas in 1953, but some of these definitions have been revised since then and some are not used by various authorities, institutions, and countries, see for example the CIA World Factbook. Correspondingly, the extent and number of oceans and seas vary. The Atlantic Ocean is bounded on the west by North and South America. It connects to the Arctic Ocean through the Denmark Strait, Greenland Sea, Norwegian Sea and Barents Sea. To the east, the boundaries of the ocean proper are Europe: the Strait of Gibraltar (where it connects with the Mediterranean Sea—one of its marginal seas—and, in turn, the Black Sea, both of which also touch upon Asia) and Africa. In the southeast, the Atlantic merges into the Indian Ocean. The 20° East meridian, running south from Cape Agulhas to Antarctica defines its border. In the 1953 definition it extends south to Antarctica, while in later maps it is bounded at the 60° parallel by the Southern Ocean. The Atlantic has irregular coasts indented by numerous bays, gulfs and seas. These include the Baltic Sea, Black Sea, Caribbean Sea, Davis Strait, Denmark Strait, part of the Drake Passage, Gulf of Mexico, Labrador Sea, Mediterranean Sea, North Sea, Norwegian Sea, almost all of the Scotia Sea, and other tributary water bodies. Including these marginal seas the coast line of the Atlantic measures compared to for the Pacific. Including its marginal seas, the Atlantic covers an area of or 23.5% of the global ocean and has a volume of or 23.3% of the total volume of the earth's oceans. Excluding its marginal seas, the Atlantic covers and has a volume of . The North Atlantic covers (11.5%) and the South Atlantic (11.1%). The average depth is and the maximum depth, the Milwaukee Deep in the Puerto Rico Trench, is . Biggest seas in Atlantic Ocean Top large seas: Sargasso Sea - 3.5 million km2 Caribbean Sea - 2.754 million km2 Mediterranean Sea - 2.510 million km2 Gulf of Guinea - 2.35 million km2 Gulf of Mexico - 1.550 million km2 Norwegian Sea - 1.383 million km2 Hudson Bay - 1.23 million km2 Greenland Sea - 1.205 million km2 Argentine Sea - 1 million km2 Labrador Sea - 841,000 km2 Irminger Sea - 780,000 km2 Baffin Bay - 689,000 km2 North Sea - 575,000 km2 Black Sea - 436,000 km2 Baltic Sea - 377,000 km2 Libyan Sea - 350,000 km2 Levantine Sea - 320,000 km2 Celtic Sea - 300,000 km2 Tyrrhenian Sea - 275,000 km2 Gulf of Saint Lawrence - 226,000 km2 Bay of Biscay - 223,000 km2 Aegean Sea - 214,000 km2 Ionian Sea - 169,000 km2 Balearic Sea - 150,000 km2 Adriatic Sea - 138,000 km2 Gulf of Bothnia - 116,300 km2 Sea of Crete - 95,000 km2 Gulf of Maine - 93,000 km2 Ligurian Sea - 80,000 km2 English Channel - 75,000 km2 James Bay - 68,300 km2 Bothnian Sea - 66,000 km2 Gulf of Sidra - 57,000 km2 Sea of the Hebrides - 47,000 km2 Irish Sea - 46,000 km2 Sea of Azov - 39,000 km2 Bothnian Bay - 36,800 km2 Gulf of Venezuela - 17,840 km2 Bay of Campeche - 16,000 km2 Gulf of Lion - 15,000 km2 Sea of Marmara - 11,350 km2 Wadden Sea - 10,000 km2 Archipelago Sea - 8,300 km2 Bathymetry The bathymetry of the Atlantic is dominated by a submarine mountain range called the Mid-Atlantic Ridge (MAR). It runs from 87°N or south of the North Pole to the subantarctic Bouvet Island at 54°S. Mid-Atlantic Ridge The MAR divides the Atlantic longitudinally into two halves, in each of which a series of basins are delimited by secondary, transverse ridges. 
The MAR reaches above along most of its length, but is interrupted by larger transform faults at two places: the Romanche Trench near the Equator and the Gibbs Fracture Zone at 53°N. The MAR is a barrier for bottom water, but at these two transform faults deep water currents can pass from one side to the other. The MAR rises above the surrounding ocean floor, and its rift valley is the divergent boundary between the North American and Eurasian plates in the North Atlantic and the South American and African plates in the South Atlantic. The MAR produces basaltic volcanoes in Eyjafjallajökull, Iceland, and pillow lava on the ocean floor. The depth of water at the apex of the ridge is less than in most places, while the bottom of the ridge is three times as deep. The MAR is intersected by two perpendicular ridges: the Azores–Gibraltar Transform Fault, the boundary between the Nubian and Eurasian plates, intersects the MAR at the Azores Triple Junction, on either side of the Azores microplate, near 40°N. A much vaguer, nameless boundary, between the North American and South American plates, intersects the MAR near or just north of the Fifteen-Twenty Fracture Zone, approximately at 16°N. In the 1870s, the Challenger expedition discovered parts of what is now known as the Mid-Atlantic Ridge. The remainder of the ridge was discovered in the 1920s by the German Meteor expedition using echo-sounding equipment. The exploration of the MAR in the 1950s led to the general acceptance of seafloor spreading and plate tectonics. Most of the MAR runs under water, but where it reaches the surface it has produced volcanic islands. While nine of these have collectively been nominated a World Heritage Site for their geological value, four of them are considered of "Outstanding Universal Value" based on their cultural and natural criteria: Þingvellir, Iceland; Landscape of the Pico Island Vineyard Culture, Portugal; Gough and Inaccessible Islands, United Kingdom; and Brazilian Atlantic Islands: Fernando de Noronha and Atol das Rocas Reserves, Brazil.
In the South Atlantic the Walvis Ridge and Rio Grande Rise form barriers to ocean currents. The Laurentian Abyss is found off the eastern coast of Canada. Water characteristics Surface water temperatures, which vary with latitude, current systems, and season and reflect the latitudinal distribution of solar energy, range from below to over . Maximum temperatures occur north of the equator, and minimum values are found in the polar regions. In the middle latitudes, the area of maximum temperature variations, values may vary by . From October to June the surface is usually covered with sea ice in the Labrador Sea, Denmark Strait, and Baltic Sea. The Coriolis effect circulates North Atlantic water in a clockwise direction, whereas South Atlantic water circulates counter-clockwise. The south tides in the Atlantic Ocean are semi-diurnal; that is, two high tides occur every 24 lunar hours. In latitudes above 40° North some east–west oscillation, known as the North Atlantic oscillation, occurs. Salinity On average, the Atlantic is the saltiest major ocean; surface water salinity in the open ocean ranges from 33 to 37 parts per thousand (3.3–3.7%) by mass and varies with latitude and season. Evaporation, precipitation, river inflow and sea ice melting influence surface salinity values. Although the lowest salinity values are just north of the equator (because of heavy tropical rainfall), in general, the lowest values are in the high latitudes and along coasts where large rivers enter. Maximum salinity values occur at about 25° north and south, in subtropical regions with low rainfall and high evaporation. The high surface salinity in the Atlantic, on which the Atlantic thermohaline circulation is dependent, is maintained by two processes: the Agulhas Leakage/Rings, which brings salty Indian Ocean waters into the South Atlantic, and the "Atmospheric Bridge", which evaporates subtropical Atlantic waters and exports it to the Pacific. Water masses The Atlantic Ocean consists of four major, upper water masses with distinct temperature and salinity. The Atlantic Subarctic Upper Water in the northernmost North Atlantic is the source for Subarctic Intermediate Water and North Atlantic Intermediate Water. North Atlantic Central Water can be divided into the Eastern and Western North Atlantic central Water since the western part is strongly affected by the Gulf Stream and therefore the upper layer is closer to underlying fresher subpolar intermediate water. The eastern water is saltier because of its proximity to Mediterranean Water. North Atlantic Central Water flows into South Atlantic Central Water at 15°N. There are five intermediate waters: four low-salinity waters formed at subpolar latitudes and one high-salinity formed through evaporation. Arctic Intermediate Water, flows from north to become the source for North Atlantic Deep Water south of the Greenland-Scotland sill. These two intermediate waters have different salinity in the western and eastern basins. The wide range of salinities in the North Atlantic is caused by the asymmetry of the northern subtropical gyre and the large number of contributions from a wide range of sources: Labrador Sea, Norwegian-Greenland Sea, Mediterranean, and South Atlantic Intermediate Water. 
The North Atlantic Deep Water (NADW) is a complex of four water masses, two that form by deep convection in the open ocean — Classical and Upper Labrador Sea Water — and two that form from the inflow of dense water across the Greenland-Iceland-Scotland sill — Denmark Strait and Iceland-Scotland Overflow Water. Along its path across Earth the composition of the NADW is affected by other water masses, especially Antarctic Bottom Water and Mediterranean Overflow Water. The NADW is fed by a flow of warm shallow water into the northern North Atlantic which is responsible for the anomalous warm climate in Europe. Changes in the formation of NADW have been linked to global climate changes in the past. Since man-made substances were introduced into the environment, the path of the NADW can be traced throughout its course by measuring tritium and radiocarbon from nuclear weapon tests in the 1960s and CFCs. Gyres The clockwise warm-water North Atlantic Gyre occupies the northern Atlantic, and the counter-clockwise warm-water South Atlantic Gyre appears in the southern Atlantic. In the North Atlantic, surface circulation is dominated by three inter-connected currents: the Gulf Stream which flows north-east from the North American coast at Cape Hatteras; the North Atlantic Current, a branch of the Gulf Stream which flows northward from the Grand Banks; and the Subpolar Front, an extension of the North Atlantic Current, a wide, vaguely defined region separating the subtropical gyre from the subpolar gyre. This system of currents transport warm water into the North Atlantic, without which temperatures in the North Atlantic and Europe would plunge dramatically. North of the North Atlantic Gyre, the cyclonic North Atlantic Subpolar Gyre plays a key role in climate variability. It is governed by ocean currents from marginal seas and regional topography, rather than being steered by wind, both in the deep ocean and at sea level. The subpolar gyre forms an important part of the global thermohaline circulation. Its eastern portion includes eddying branches of the North Atlantic Current which transport warm, saline waters from the subtropics to the north-eastern Atlantic. There this water is cooled during winter and forms return currents that merge along the eastern continental slope of Greenland where they form an intense (40–50 Sv) current which flows around the continental margins of the Labrador Sea. A third of this water becomes part of the deep portion of the North Atlantic Deep Water (NADW). The NADW, in its turn, feeds the meridional overturning circulation (MOC), the northward heat transport of which is threatened by anthropogenic climate change. Large variations in the subpolar gyre on a decade-century scale, associated with the North Atlantic oscillation, are especially pronounced in Labrador Sea Water, the upper layers of the MOC. The South Atlantic is dominated by the anti-cyclonic southern subtropical gyre. The South Atlantic Central Water originates in this gyre, while Antarctic Intermediate Water originates in the upper layers of the circumpolar region, near the Drake Passage and the Falkland Islands. Both these currents receive some contribution from the Indian Ocean. On the African east coast, the small cyclonic Angola Gyre lies embedded in the large subtropical gyre. The southern subtropical gyre is partly masked by a wind-induced Ekman layer. The residence time of the gyre is 4.4–8.5 years. North Atlantic Deep Water flows southward below the thermocline of the subtropical gyre. 
Sargasso Sea The Sargasso Sea in the western North Atlantic can be defined as the area where two species of Sargassum (S. fluitans and S. natans) float, an area wide and encircled by the Gulf Stream, North Atlantic Drift, and North Equatorial Current. This population of seaweed probably originated from Tertiary ancestors on the European shores of the former Tethys Ocean and has, if so, maintained itself by vegetative growth, floating in the ocean for millions of years. Other species endemic to the Sargasso Sea include the sargassum fish, a predator with algae-like appendages which hovers motionless among the Sargassum. Fossils of similar fishes have been found in fossil bays of the former Tethys Ocean, in what is now the Carpathian region, bays that were similar to the Sargasso Sea. It is possible that the population in the Sargasso Sea migrated to the Atlantic as the Tethys closed at the end of the Miocene around 17 Ma. The origin of the Sargasso fauna and flora remained enigmatic for centuries. The fossils found in the Carpathians in the mid-20th century, often called the "quasi-Sargasso assemblage", finally showed that this assemblage originated in the Carpathian Basin, from where it migrated over Sicily to the Central Atlantic, where it evolved into the modern species of the Sargasso Sea. The location of the spawning ground for European eels remained unknown for decades. In the early 20th century it was discovered that the southern Sargasso Sea is the spawning ground for both the European and American eel and that the former migrate more than and the latter . Ocean currents such as the Gulf Stream transport eel larvae from the Sargasso Sea to foraging areas in North America, Europe, and Northern Africa. Recent but disputed research suggests that eels possibly use Earth's magnetic field to navigate through the ocean both as larvae and as adults. Climate Climate is influenced by the temperatures of the surface waters and water currents as well as winds. Because of the ocean's great capacity to store and release heat, maritime climates are more moderate and have less extreme seasonal variations than inland climates. Precipitation can be approximated from coastal weather data and air temperature from water temperatures. The oceans are the major source of the atmospheric moisture that is obtained through evaporation. Climatic zones vary with latitude; the warmest zones stretch across the Atlantic north of the equator. The coldest zones are in high latitudes, with the coldest regions corresponding to the areas covered by sea ice. Ocean currents influence the climate by transporting warm and cold waters to other regions. The winds that are cooled or warmed when blowing over these currents influence adjacent land areas. The Gulf Stream and its northern extension towards Europe, the North Atlantic Drift, are thought to have at least some influence on climate. For example, the Gulf Stream helps moderate winter temperatures along the coastline of southeastern North America, keeping it warmer in winter along the coast than inland areas. The Gulf Stream also keeps extreme temperatures from occurring on the Florida Peninsula. In the higher latitudes, the North Atlantic Drift warms the atmosphere over the oceans, keeping the British Isles and north-western Europe mild and cloudy, and not severely cold in winter like other locations at the same high latitude. The cold water currents contribute to heavy fog off the coast of eastern Canada (the Grand Banks of Newfoundland area) and Africa's north-western coast.
In general, winds transport moisture and air over land areas. Natural hazards Every winter, the Icelandic Low produces frequent storms. Icebergs are common from early February to the end of July across the shipping lanes near the Grand Banks of Newfoundland. The ice season is longer in the polar regions, but there is little shipping in those areas. Hurricanes are a hazard in the western parts of the North Atlantic during the summer and autumn. Due to a consistently strong wind shear and a weak Intertropical Convergence Zone, South Atlantic tropical cyclones are rare. Geology and plate tectonics The Atlantic Ocean is underlain mostly by dense mafic oceanic crust made up of basalt and gabbro and overlain by fine clay, silt and siliceous ooze on the abyssal plain. The continental margins and continental shelf mark lower-density but thicker felsic continental rock that is often much older than that of the seafloor. The oldest oceanic crust in the Atlantic is up to 145 million years old and is situated off the west coast of Africa and the east coast of North America, or on either side of the South Atlantic. In many places, the continental shelf and continental slope are covered in thick sedimentary layers. For instance, on the North American side of the ocean, large carbonate deposits formed in warm shallow waters such as those around Florida and the Bahamas, while coarse river outwash sands and silt are common in shallow shelf areas like the Georges Bank. Coarse sand, boulders, and rocks were transported into some areas, such as off the coast of Nova Scotia or the Gulf of Maine, during the Pleistocene ice ages. Central Atlantic The break-up of Pangaea began in the Central Atlantic, between North America and Northwest Africa, where rift basins opened during the Late Triassic and Early Jurassic. This period also saw the first stages of the uplift of the Atlas Mountains. The exact timing is controversial, with estimates ranging from 200 to 170 Ma. The opening of the Atlantic Ocean coincided with the initial break-up of the supercontinent Pangaea, both of which were initiated by the eruption of the Central Atlantic Magmatic Province (CAMP), one of the most extensive and voluminous large igneous provinces in Earth's history, associated with the Triassic–Jurassic extinction event, one of Earth's major extinction events. Tholeiitic dikes, sills, and lava flows from the CAMP eruption at 200 Ma have been found in West Africa, eastern North America, and northern South America. The extent of the volcanism has been estimated to of which covered what is now northern and central Brazil. The formation of the Central American Isthmus closed the Central American Seaway at the end of the Pliocene, 2.8 Ma ago. The formation of the isthmus resulted in the migration and extinction of many land-living animals, known as the Great American Interchange, but the closure of the seaway resulted in a "Great American Schism" as it affected ocean currents, salinity, and temperatures in both the Atlantic and Pacific. Marine organisms on both sides of the isthmus became isolated and either diverged or went extinct. Geologically, the Northern Atlantic is the area delimited to the south by two conjugate margins, Newfoundland and Iberia, and to the north by the Arctic Eurasian Basin. The opening of the Northern Atlantic closely followed the margins of its predecessor, the Iapetus Ocean, and spread from the Central Atlantic in six stages: Iberia–Newfoundland, Porcupine–North America, Eurasia–Greenland, Eurasia–North America.
Active and inactive spreading systems in this area are marked by the interaction with the Iceland hotspot. Seafloor spreading led to the extension of the crust and the formation of troughs and sedimentary basins. The Rockall Trough opened between 105 and 84 million years ago, although spreading along the rift failed, along with a rift leading into the Bay of Biscay. Spreading began opening the Labrador Sea around 61 million years ago, continuing until 36 million years ago. Geologists distinguish two magmatic phases. One from 62 to 58 million years ago predates the separation of Greenland from northern Europe, while the second from 56 to 52 million years ago happened as the separation occurred. Iceland began to form 62 million years ago due to a particularly concentrated mantle plume. Large quantities of basalt erupted during this period are found on Baffin Island, Greenland, the Faroe Islands, and Scotland, with ash falls in Western Europe acting as a stratigraphic marker. The opening of the North Atlantic caused significant uplift of continental crust along the coast. For instance, in spite of the 7 km thick basalt, Gunnbjørn Fjeld in East Greenland is the highest point on the island, elevated enough that it exposes older Mesozoic sedimentary rocks at its base, similar to old lava fields above sedimentary rocks in the uplifted Hebrides of western Scotland. The North Atlantic Ocean contains about 810 seamounts, most of them situated along the Mid-Atlantic Ridge. The OSPAR database (Convention for the Protection of the Marine Environment of the North-East Atlantic) mentions 104 seamounts, 74 of them within national Exclusive Economic Zones. Of these seamounts, 46 are located close to the Iberian Peninsula. South Atlantic West Gondwana (South America and Africa) broke up in the Early Cretaceous to form the South Atlantic. The apparent fit between the coastlines of the two continents was noted on the first maps that included the South Atlantic, and it was also the subject of the first computer-assisted plate tectonic reconstructions in 1965. This magnificent fit, however, has since then proven problematic, and later reconstructions have introduced various deformation zones along the shorelines to accommodate the northward-propagating break-up. Intra-continental rifts and deformations have also been introduced to subdivide both continental plates into sub-plates. Geologically, the South Atlantic can be divided into four segments: the Equatorial segment, from 10°N to the Romanche Fracture Zone (RFZ); the Central segment, from the RFZ to the Florianopolis Fracture Zone (FFZ, north of the Walvis Ridge and Rio Grande Rise); the Southern segment, from the FFZ to the Agulhas-Falkland Fracture Zone (AFFZ); and the Falkland segment, south of the AFFZ. In the southern segment the Early Cretaceous (133–130 Ma) intensive magmatism of the Paraná–Etendeka Large Igneous Province, produced by the Tristan hotspot, resulted in an estimated volume of . It covered an area of in Brazil, Paraguay, and Uruguay and in Africa. Dyke swarms in Brazil, Angola, eastern Paraguay, and Namibia, however, suggest the LIP originally covered a much larger area and also indicate failed rifts in all these areas. Associated offshore basaltic flows reach as far south as the Falkland Islands and South Africa. Traces of magmatism in both offshore and onshore basins in the central and southern segments have been dated to 147–49 Ma, with two peaks between 143 and 121 Ma and 90–60 Ma.
In the Falkland segment rifting began with dextral movements between the Patagonia and Colorado sub-plates between the Early Jurassic (190 Ma) and the Early Cretaceous (126.7 Ma). Around 150 Ma sea-floor spreading propagated northward into the southern segment. No later than 130 Ma rifting had reached the Walvis Ridge–Rio Grande Rise. In the central segment rifting started to break Africa in two by opening the Benue Trough around 118 Ma. Rifting in the central segment, however, coincided with the Cretaceous Normal Superchron (also known as the Cretaceous quiet period), a 40 Ma period without magnetic reversals, which makes it difficult to date sea-floor spreading in this segment. The equatorial segment is the last phase of the break-up, but, because it is located on the Equator, magnetic anomalies cannot be used for dating. Various estimates date the propagation of sea-floor spreading in this segment to the period 120–96 Ma. This final stage, nevertheless, coincided with or resulted in the end of continental extension in Africa. About 50 Ma the opening of the Drake Passage resulted from a change in the motions and separation rate of the South American and Antarctic plates. First small ocean basins opened and a shallow gateway appeared during the Middle Eocene. 34–30 Ma a deeper seaway developed, followed by an Eocene–Oligocene climatic deterioration and the growth of the Antarctic ice sheet. Closure of the Atlantic An embryonic subduction margin is potentially developing west of Gibraltar. The Gibraltar Arc in the western Mediterranean is migrating westward into the Central Atlantic where it joins the converging African and Eurasian plates. Together these three tectonic forces are slowly developing into a new subduction system in the eastern Atlantic Basin. Meanwhile, the Scotia Arc and Caribbean Plate in the western Atlantic Basin are eastward-propagating subduction systems that might, together with the Gibraltar system, represent the beginning of the closure of the Atlantic Ocean and the final stage of the Atlantic Wilson cycle. History Human origin Humans evolved in Africa; first by diverging from other apes around 7 mya; then developing stone tools around 2.6 mya; to finally evolve as modern humans around 200 kya. The earliest evidence for the complex behavior associated with this behavioral modernity has been found in the Greater Cape Floristic Region (GCFR) along the coast of South Africa. During the latest glacial stages, the now-submerged plains of the Agulhas Bank were exposed above sea level, extending the South African coastline farther south by hundreds of kilometers. A small population of modern humans — probably fewer than a thousand reproducing individuals — survived glacial maxima by exploring the high diversity offered by these Palaeo-Agulhas plains. The GCFR is delimited to the north by the Cape Fold Belt and the limited space south of it resulted in the development of social networks out of which complex Stone Age technologies emerged. Human history thus begins on the coasts of South Africa where the Atlantic Benguela Upwelling and Indian Ocean Agulhas Current meet to produce an intertidal zone on which shellfish, fur seal, fish and sea birds provided the necessary protein sources. The African origin of this modern behaviour is evidenced by 70,000 years-old engravings from Blombos Cave, South Africa. 
Old World Mitochondrial DNA (mtDNA) studies indicate that 80–60,000 years ago a major demographic expansion within Africa, derived from a single, small population, coincided with the emergence of behavioral complexity and the rapid MIS 5–4 environmental changes. This group of people not only expanded over the whole of Africa, but also started to disperse out of Africa into Asia, Europe, and Australasia around 65,000 years ago and quickly replaced the archaic humans in these regions. During the Last Glacial Maximum (LGM) 20,000 years ago humans had to abandon their initial settlements along the European North Atlantic coast and retreat to the Mediterranean. Following rapid climate changes at the end of the LGM this region was repopulated by Magdalenian culture. Other hunter-gatherers followed in waves interrupted by large-scale hazards such as the Laacher See volcanic eruption, the inundation of Doggerland (now the North Sea), and the formation of the Baltic Sea. The European coasts of the North Atlantic were permanently populated about 9–8.5 thousand years ago. This human dispersal left abundant traces along the coasts of the Atlantic Ocean. 50 kya-old, deeply stratified shell middens found in Ysterfontein on the western coast of South Africa are associated with the Middle Stone Age (MSA). The MSA population was small and dispersed and the rate of their reproduction and exploitation was less intense than those of later generations. While their middens resemble 12–11 kya-old Late Stone Age (LSA) middens found on every inhabited continent, the 50–45 kya-old Enkapune Ya Muto in Kenya probably represents the oldest traces of the first modern humans to disperse out of Africa. The same development can be seen in Europe. In La Riera Cave (23–13 kya) in Asturias, Spain, only some 26,600 molluscs were deposited over 10 kya. In contrast, 8–7 kya-old shell middens in Portugal, Denmark, and Brazil generated thousands of tons of debris and artefacts. The Ertebølle middens in Denmark, for example, accumulated of shell deposits representing some 50 million molluscs over only a thousand years. This intensification in the exploitation of marine resources has been described as accompanied by new technologies — such as boats, harpoons, and fish-hooks — because many caves found in the Mediterranean and on the European Atlantic coast have increased quantities of marine shells in their upper levels and reduced quantities in their lower. The earliest exploitation, however, took place on the now submerged shelves, and most settlements now excavated were then located several kilometers from these shelves. The reduced quantities of shells in the lower levels can represent the few shells that were exported inland. New World During the LGM the Laurentide Ice Sheet covered most of northern North America while Beringia connected Siberia to Alaska. In 1973, late American geoscientist Paul S. Martin proposed a "blitzkrieg" colonization of the Americas by which Clovis hunters migrated into North America around 13,000 years ago in a single wave through an ice-free corridor in the ice sheet and "spread southward explosively, briefly attaining a density sufficiently large to overkill much of their prey." Others later proposed a "three-wave" migration over the Bering Land Bridge. 
These hypotheses remained the long-held view regarding the settlement of the Americas, a view challenged by more recent archaeological discoveries: the oldest archaeological sites in the Americas have been found in South America; sites in north-east Siberia report virtually no human presence there during the LGM; and most Clovis artefacts have been found in eastern North America along the Atlantic coast. Furthermore, colonisation models based on mtDNA, yDNA, and atDNA data respectively support neither the "blitzkrieg" nor the "three-wave" hypotheses, but they also deliver mutually ambiguous results. Contradictory data from archaeology and genetics will most likely deliver future hypotheses that will, eventually, confirm each other. A proposed route across the Pacific to South America could explain early South American finds, and another hypothesis proposes a northern path, through the Canadian Arctic and down the North American Atlantic coast. Early settlements across the Atlantic have been suggested by alternative theories, ranging from purely hypothetical to mostly disputed, including the Solutrean hypothesis and some of the Pre-Columbian trans-oceanic contact theories. The Norse settlement of the Faroe Islands and Iceland began during the 9th and 10th centuries. A settlement on Greenland was established before 1000 CE, but contact with it was lost in 1409 and it was finally abandoned during the early Little Ice Age. This setback was caused by a range of factors: an unsustainable economy resulted in erosion and denudation, while conflicts with the local Inuit resulted in the failure to adopt their Arctic technologies; a colder climate resulted in starvation, and the colony became economically marginalized as the Great Plague and Barbary pirates took their victims on Iceland in the 15th century. Iceland was initially settled 865–930 CE following a warm period when winter temperatures hovered around which made farming favorable at high latitudes. This did not last, however, and temperatures quickly dropped; by 1080 CE summer temperatures had reached a maximum of . The Landnámabók (Book of Settlement) records disastrous famines during the first century of settlement — "men ate foxes and ravens" and "the old and helpless were killed and thrown over cliffs" — and by the early 1200s hay had to be abandoned for short-season crops such as barley. Atlantic World Christopher Columbus reached the Americas in 1492 under the Spanish flag. Six years later Vasco da Gama reached India under the Portuguese flag, by navigating south around the Cape of Good Hope, thus proving that the Atlantic and Indian Oceans are connected. In 1500, on his voyage to India following Vasco da Gama, Pedro Alvares Cabral reached Brazil, taken by the currents of the South Atlantic Gyre. Following these explorations, Spain and Portugal quickly conquered and colonized large territories in the New World and forced the Amerindian population into slavery in order to exploit the vast quantities of silver and gold they found. Spain and Portugal monopolized this trade in order to keep other European nations out, but conflicting interests nevertheless led to a series of Spanish-Portuguese wars. A peace treaty mediated by the Pope divided the conquered territories into Spanish and Portuguese sectors while keeping other colonial powers away. England, France, and the Dutch Republic enviously watched the Spanish and Portuguese wealth grow and allied themselves with pirates such as Henry Mainwaring and Alexandre Exquemelin. 
They could prey on the convoys leaving the Americas because prevailing winds and currents made the transport of heavy metals slow and predictable. In the colonies of the Americas, depredation, smallpox and other diseases, and slavery quickly reduced the indigenous population of the Americas to the extent that the Atlantic slave trade had to be introduced to replace them — a trade that became the norm and an integral part of the colonization. Between the 15th century and 1888, when Brazil became the last part of the Americas to end the slave trade, an estimated ten million Africans were exported as slaves, most of them destined for agricultural labour. The slave trade was officially abolished in the British Empire and the United States in 1808, and slavery itself was abolished in the British Empire in 1838 and in the United States in 1865 after the Civil War. From Columbus to the Industrial Revolution Trans-Atlantic trade, including colonialism and slavery, became crucial for Western Europe. For European countries with direct access to the Atlantic (including Britain, France, the Netherlands, Portugal, and Spain), 1500–1800 was a period of sustained growth during which these countries grew richer than those in Eastern Europe and Asia. Colonialism evolved as part of the Trans-Atlantic trade, but this trade also strengthened the position of merchant groups at the expense of monarchs. Growth was more rapid in non-absolutist countries, such as Britain and the Netherlands, and more limited in absolutist monarchies, such as Portugal, Spain, and France, where profit mostly or exclusively benefited the monarchy and its allies. Trans-Atlantic trade also resulted in increasing urbanization: in European countries facing the Atlantic, urbanization grew from 8% in 1300 and 10.1% in 1500 to 24.5% in 1850; in other European countries it grew from 10% in 1300 and 11.4% in 1500 to 17% in 1850. Likewise, GDP doubled in Atlantic countries but rose by only 30% in the rest of Europe. By the end of the 17th century, the volume of the Trans-Atlantic trade had surpassed that of the Mediterranean trade. The Atlantic Ocean became the scene of one of the longest continuous naval military campaigns of World War II, lasting from 1939 to 1945. Economy The Atlantic has contributed significantly to the development and economy of surrounding countries. Besides major transatlantic transportation and communication routes, the Atlantic offers abundant petroleum deposits in the sedimentary rocks of the continental shelves. The Atlantic harbors petroleum and gas fields, fish, marine mammals (seals and whales), sand and gravel aggregates, placer deposits, polymetallic nodules, and precious stones. Gold deposits lie a mile or two under water on the ocean floor; however, the deposits are encased in rock that must be mined through, and there is currently no cost-effective way to mine or extract gold from the ocean at a profit. Various international treaties attempt to reduce pollution caused by environmental threats such as oil spills, marine debris, and the incineration of toxic wastes at sea. Fisheries The shelves of the Atlantic host one of the world's richest fishing resources. The most productive areas include the Grand Banks of Newfoundland, the Scotian Shelf, Georges Bank off Cape Cod, the Bahama Banks, the waters around Iceland, the Irish Sea, the Bay of Fundy, the Dogger Bank of the North Sea, and the Falkland Banks. 
Fisheries have, however, undergone significant changes since the 1950s, and global catches can now be divided into three groups, of which only two are observed in the Atlantic: fisheries in the Eastern Central and South-West Atlantic oscillate around a globally stable value, while the rest of the Atlantic is in overall decline following historical peaks. The third group, a "continuously increasing trend since 1950", is only found in the Indian Ocean and Western Pacific. In the North-East Atlantic total catches decreased between the mid-1970s and the 1990s and reached 8.7 million tons in 2013. Blue whiting reached a 2.4 million tons peak in 2004 but was down to 628,000 tons in 2013. Recovery plans for cod, sole, and plaice have reduced mortality in these species. Arctic cod reached its lowest levels in the 1960s–1980s but has now recovered. Arctic saithe and haddock are considered fully fished; sand eel is overfished, as was capelin, which has now recovered to fully fished. Limited data makes the state of redfishes and deep-water species difficult to assess, but most likely they remain vulnerable to overfishing. Stocks of northern shrimp and Norwegian lobster are in good condition. In the North-East Atlantic 21% of stocks are considered overfished. In the North-West Atlantic landings have decreased from 4.2 million tons in the early 1970s to 1.9 million tons in 2013. During the 21st century some species have shown weak signs of recovery, including Greenland halibut, yellowtail flounder, Atlantic halibut, haddock, and spiny dogfish, while other stocks have shown no such signs, including cod, witch flounder, and redfish. Stocks of invertebrates, in contrast, remain at record levels of abundance. 31% of stocks are overfished in the North-West Atlantic. In 1497, John Cabot became the first Western European since the Vikings to explore mainland North America, and one of his major discoveries was the abundant resources of Atlantic cod off Newfoundland. Referred to as "Newfoundland Currency", this discovery yielded some 200 million tons of fish over five centuries. In the late 19th and early 20th centuries new fisheries started to exploit haddock, mackerel, and lobster. From the 1950s to the 1970s the introduction of European and Asian distant-water fleets in the area dramatically increased the fishing capacity and the number of exploited species. It also expanded the exploited areas from near-shore to the open sea and to great depths, to include deep-water species such as redfish, Greenland halibut, witch flounder, and grenadiers. Overfishing in the area was recognised as early as the 1960s, but, because this was occurring in international waters, it took until the late 1970s before any attempts at regulation were made. In the early 1990s, this finally resulted in the collapse of the Atlantic northwest cod fishery. The populations of a number of deep-sea fishes also collapsed in the process, including American plaice, redfish, and Greenland halibut, together with flounder and grenadier. In the Eastern Central Atlantic small pelagic fishes constitute about 50% of landings, with sardine reaching 0.6–1.0 million tons per year. Pelagic fish stocks are considered fully fished or overfished, with sardines south of Cape Bojador the notable exception. Almost half of the stocks are fished at biologically unsustainable levels. Total catches have been fluctuating since the 1970s, reaching 3.9 million tons in 2013, slightly less than the peak production in 2010. 
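The scale of some of these regional declines can be made explicit with simple arithmetic. The short Python sketch below is purely illustrative: it takes two of the catch figures quoted above at face value (North-West Atlantic landings and North-East Atlantic blue whiting) and computes their percentage drop; it is not an analysis of the underlying fisheries data, and the function name is arbitrary.

```python
# Illustrative arithmetic on two catch series quoted in this section.

def percent_decline(start, end):
    """Return the percentage drop from start to end."""
    return (start - end) / start * 100

# North-West Atlantic landings: 4.2 million tons (early 1970s) -> 1.9 million tons (2013)
nw_atlantic = percent_decline(4.2, 1.9)

# North-East Atlantic blue whiting: 2.4 million tons (2004 peak) -> 0.628 million tons (2013)
blue_whiting = percent_decline(2.4, 0.628)

print(f"North-West Atlantic landings fell by about {nw_atlantic:.0f}%")  # ~55%
print(f"Blue whiting catches fell by about {blue_whiting:.0f}%")         # ~74%
```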
In the Western Central Atlantic, catches have been decreasing since 2000 and reached 1.3 million tons in 2013. The most important species in the area, Gulf menhaden, reached a million tons in the mid-1980s but only half a million tons in 2013 and is now considered fully fished. Round sardinella was an important species in the 1990s but is now considered overfished. Groupers and snappers are overfished, and northern brown shrimp and American cupped oyster are considered fully fished, approaching overfished. 44% of stocks are being fished at unsustainable levels. In the South-East Atlantic catches have decreased from 3.3 million tons in the early 1970s to 1.3 million tons in 2013. Horse mackerel and hake are the most important species, together representing almost half of the landings. Off South Africa and Namibia, deep-water hake and shallow-water Cape hake have recovered to sustainable levels since regulations were introduced in 2006, and the status of Southern African pilchard and anchovy improved to fully fished in 2013. In the South-West Atlantic, a peak was reached in the mid-1980s and catches now fluctuate between 1.7 and 2.6 million tons. The most important species, the Argentine shortfin squid, which reached half a million tons in 2013, or half the peak value, is considered fully fished to overfished. Another important species was the Brazilian sardinella; with a production of 100,000 tons in 2013 it is now considered overfished. Half the stocks in this area are being fished at unsustainable levels: Whitehead's round herring has not yet reached fully fished, but Cunene horse mackerel is overfished. The sea snail perlemoen abalone is targeted by illegal fishing and remains overfished. Environmental issues Endangered species Endangered marine species include the manatee, seals, sea lions, turtles, and whales. Drift net fishing can kill dolphins, albatrosses and other seabirds (petrels, auks), hastening the fish stock decline and contributing to international disputes. Waste and pollution Marine pollution is a generic term for the entry into the ocean of potentially hazardous chemicals or particles. The biggest culprits are rivers, which carry with them agricultural fertilizer chemicals as well as livestock and human waste. The excess of oxygen-depleting chemicals leads to hypoxia and the creation of a dead zone. Marine debris, which is also known as marine litter, describes human-created waste floating in a body of water. Oceanic debris tends to accumulate at the center of gyres and along coastlines, frequently washing aground, where it is known as beach litter. The North Atlantic garbage patch is estimated to be hundreds of kilometers across. Other pollution concerns include agricultural and municipal waste. Municipal pollution comes from the eastern United States, southern Brazil, and eastern Argentina; oil pollution occurs in the Caribbean Sea, Gulf of Mexico, Lake Maracaibo, Mediterranean Sea, and North Sea; and industrial waste and municipal sewage pollution affect the Baltic Sea, North Sea, and Mediterranean Sea. A USAF C-124 aircraft from Dover Air Force Base, Delaware, was carrying three nuclear bombs over the Atlantic Ocean when it experienced a loss of power. For their own safety, the crew jettisoned two nuclear bombs, which were never recovered. 
Climate change North Atlantic hurricane activity has increased over past decades because of increased sea surface temperature (SST) at tropical latitudes, changes that can be attributed either to the natural Atlantic Multidecadal Oscillation (AMO) or to anthropogenic climate change. A 2005 report indicated that the Atlantic meridional overturning circulation (AMOC) slowed down by 30% between 1957 and 2004. If the AMO were responsible for SST variability, the AMOC would have increased in strength, which is apparently not the case. Furthermore, it is clear from statistical analyses of annual tropical cyclones that these changes do not display multidecadal cyclicity. Therefore, these changes in SST must be caused by human activities. The ocean mixed layer plays an important role in heat storage over seasonal and decadal time-scales, whereas deeper layers are affected over millennia and have a heat capacity about 50 times that of the mixed layer. This heat uptake provides a time-lag for climate change, but it also results in thermal expansion of the oceans, which contributes to sea-level rise. 21st-century global warming will probably result in an equilibrium sea-level rise five times greater than today, whilst the melting of glaciers, including that of the Greenland ice sheet (expected to have virtually no effect during the 21st century), will probably result in a sea-level rise of 3–6 m over a millennium. See also List of countries and territories bordering the Atlantic Ocean List of rivers of the Americas by coastline#Atlantic Ocean coast Seven Seas Gulf Stream shutdown Shipwrecks in the Atlantic Ocean Atlantic hurricanes Atlantic history Piracy in the Atlantic World Transatlantic crossing Atlantic Revolutions Natural delimitation between the Pacific and South Atlantic oceans by the Scotia Arc
Atlantic Ocean
An android is a humanoid robot or other artificial being often made from a flesh-like material. Historically, androids were completely within the domain of science fiction and frequently seen in film and television, but recent advances in robot technology now allow the design of functional and realistic humanoid robots. While the term "android" is used in reference to human-looking robots in general (not necessarily male-looking humanoid robots), a robot with a female appearance can also be referred to as a gynoid. Besides one can refer to robots without alluding to their sexual appearance by calling them anthrobots (merging the radical anthrōpos and the word robot; see anthrobotics) or anthropoids (short for anthropoid robots; the term humanoids is not appropriate because it is already commonly used to refer to human-like organic species in the context of scientific fiction, futurism and speculative astrobiology). Etymology The Oxford English Dictionary traces the earliest use (as "Androides") to Ephraim Chambers' 1728 Cyclopaedia, in reference to an automaton that St. Albertus Magnus allegedly created. By the late 1700s, "androides", elaborate mechanical devices resembling humans performing human activities, were displayed in exhibit halls. The term "android" appears in US patents as early as 1863 in reference to miniature human-like toy automatons. The term android was used in a more modern sense by the French author Auguste Villiers de l'Isle-Adam in his work Tomorrow's Eve (1886). This story features an artificial humanlike robot named Hadaly. As said by the officer in the story, "In this age of Realien advancement, who knows what goes on in the mind of those responsible for these mechanical dolls." The term made an impact into English pulp science fiction starting from Jack Williamson's The Cometeers (1936) and the distinction between mechanical robots and fleshy androids was popularized by Edmond Hamilton's Captain Future stories (1940–1944). Although Karel Čapek's robots in R.U.R. (Rossum's Universal Robots) (1921)—the play that introduced the word robot to the world—were organic artificial humans, the word "robot" has come to primarily refer to mechanical humans, animals, and other beings. The term "android" can mean either one of these, while a cyborg ("cybernetic organism" or "bionic man") would be a creature that is a combination of organic and mechanical parts. The term "droid", popularized by George Lucas in the original Star Wars film and now used widely within science fiction, originated as an abridgment of "android", but has been used by Lucas and others to mean any robot, including distinctly non-human form machines like R2-D2. The word "android" was used in Star Trek: The Original Series episode "What Are Little Girls Made Of?" The abbreviation "andy", coined as a pejorative by writer Philip K. Dick in his novel Do Androids Dream of Electric Sheep?, has seen some further usage, such as within the TV series Total Recall 2070. Authors have used the term android in more diverse ways than robot or cyborg. In some fictional works, the difference between a robot and android is only superficial, with androids being made to look like humans on the outside but with robot-like internal mechanics. In other stories, authors have used the word "android" to mean a wholly organic, yet artificial, creation. Other fictional depictions of androids fall somewhere in between. Eric G. 
Wilson, who defines an android as a "synthetic human being", distinguishes between three types of android, based on their body's composition: the mummy type – made of "dead things" or "stiff, inanimate, natural material", such as mummies, puppets, dolls and statues; the golem type – made from flexible, possibly organic material, including golems and homunculi; and the automaton type – made from a mix of dead and living parts, including automatons and robots. Although human morphology is not necessarily the ideal form for working robots, the fascination with developing robots that can mimic it can be found historically in the assimilation of two concepts: simulacra (devices that exhibit likeness) and automata (devices that have independence). Projects Several projects aiming to create androids that look, and to a certain degree speak or act, like a human being have been launched or are underway. Japan Japanese robotics has been leading the field since the 1970s. Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the first android, a full-scale humanoid intelligent robot. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears. Its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth. In 1984, WABOT-2 was revealed, incorporating a number of improvements. It was capable of playing the organ. WABOT-2 had ten fingers and two feet, and was able to read a score of music. It was also able to accompany a person. In 1986, Honda began its humanoid research and development program, to create humanoid robots capable of interacting successfully with humans. The Intelligent Robotics Lab, directed by Hiroshi Ishiguro at Osaka University, and the Kokoro company demonstrated the Actroid at Expo 2005 in Aichi Prefecture, Japan, and released the Telenoid R1 in 2010. In 2006, Kokoro developed a new DER 2 android. The height of the human body part of DER2 is 165 cm. There are 47 mobile points. DER2 can not only change its expression but also move its hands and feet and twist its body. The "air servosystem", which Kokoro developed originally, is used for the actuator. As a result of having an actuator controlled precisely with air pressure via a servosystem, the movement is very fluid and there is very little noise. By using a smaller cylinder, DER2 achieved a slimmer body than that of the former version, and outwardly it has more refined proportions. Compared to the previous model, DER2 has thinner arms and a wider repertoire of expressions. Once programmed, it is able to choreograph its motions and gestures with its voice. The Intelligent Mechatronics Lab, directed by Hiroshi Kobayashi at the Tokyo University of Science, has developed an android head called Saya, which was exhibited at Robodex 2002 in Yokohama, Japan. There are several other initiatives around the world involving humanoid research and development at this time, which will hopefully introduce a broader spectrum of realized technology in the near future. Saya is now working at the Tokyo University of Science as a guide. Researchers at Waseda University and NTT Docomo have succeeded in creating a shape-shifting robot, the WD-2. It is capable of changing its face. 
First, the creators determine the positions of the points needed to express the outline, eyes, nose, and other features of a given person. The robot then reproduces that face by moving all of its points to the determined positions. The first version of the robot was developed in 2003, and a year later the team made a couple of major improvements to the design. The robot features an elastic mask modeled on an average head dummy. It uses a driving system with a 3DOF unit. The WD-2 robot can change its facial features by activating specific facial points on a mask, with each point possessing three degrees of freedom. This one has 17 facial points, for a total of 56 degrees of freedom. As for materials, the WD-2's mask is fabricated from a highly elastic material called Septom, with bits of steel wool mixed in for added strength. Each desired facial point is driven by a shaft behind the mask, powered by a DC motor with a simple pulley and a slide screw. The researchers can also modify the shape of the mask based on actual human faces. To "copy" a face, they need only a 3D scanner to determine the locations of an individual's 17 facial points; the points are then driven into position using a laptop and 56 motor control boards. The researchers also note that the shifting robot can even display an individual's hair style and skin color if a photo of their face is projected onto the 3D mask (a schematic sketch of this face-copying idea appears below). Singapore Prof Nadia Thalmann, a Nanyang Technological University scientist, directed efforts of the Institute for Media Innovation along with the School of Computer Engineering in the development of a social robot, Nadine. Nadine is powered by software similar to Apple's Siri or Microsoft's Cortana. Nadine may become a personal assistant in offices and homes in the future, or she may become a companion for the young and the elderly. Assoc Prof Gerald Seet from the School of Mechanical & Aerospace Engineering and the BeingThere Centre led a three-year R&D effort in tele-presence robotics, creating EDGAR. A remote user can control EDGAR, with the user's face and expressions displayed on the robot's face in real time. The robot also mimics the user's upper-body movements. South Korea KITECH researched and developed EveR-1, an android interpersonal communications model capable of emulating human emotional expression via facial "musculature" and capable of rudimentary conversation, having a vocabulary of around 400 words. She is tall and weighs , matching the average figure of a Korean woman in her twenties. EveR-1's name derives from the Biblical Eve, plus the letter r for robot. EveR-1's advanced processing power enables speech recognition and vocal synthesis, while simultaneously processing lip synchronization and visual recognition by 90-degree micro-CCD cameras with face recognition technology. An independent microchip inside her artificial brain handles gesture expression, body coordination, and emotion expression. Her whole body is made of highly advanced synthetic jelly silicon, and with 60 artificial joints in her face, neck, and lower body she is able to demonstrate realistic facial expressions and sing while simultaneously dancing. In South Korea, the Ministry of Information and Communication has an ambitious plan to put a robot in every household by 2020. 
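Returning to the WD-2 face-copying procedure described above, the following Python sketch is a hypothetical abstraction of that idea: named control points are given target coordinates taken from a 3D scan and are then stepped into position. The class names, the 0.2 gain, and the three-point example are invented for illustration only and do not describe the robot's actual control software or motor interfaces.

```python
# Hypothetical abstraction of a scan-driven facial control scheme.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ControlPoint:
    """One actuated point on the elastic mask (nominally three degrees of freedom)."""
    name: str
    position: Vec3
    target: Vec3 = (0.0, 0.0, 0.0)

    def step_toward_target(self, gain: float = 0.2) -> None:
        # Move a fraction of the remaining distance on each control cycle.
        self.position = tuple(p + gain * (t - p) for p, t in zip(self.position, self.target))

@dataclass
class FaceMask:
    points: List[ControlPoint] = field(default_factory=list)

    def load_scan(self, scan: Dict[str, Vec3]) -> None:
        # Assign scanned target coordinates to the matching control points.
        for pt in self.points:
            if pt.name in scan:
                pt.target = scan[pt.name]

    def update(self) -> None:
        for pt in self.points:
            pt.step_toward_target()

# Example with three of the (nominally 17) facial points:
mask = FaceMask([ControlPoint("brow_left", (0.0, 0.0, 0.0)),
                 ControlPoint("nose_tip", (0.0, 0.0, 0.0)),
                 ControlPoint("mouth_corner_right", (0.0, 0.0, 0.0))])
mask.load_scan({"brow_left": (1.0, 2.0, 0.5), "nose_tip": (0.0, 3.5, 1.0)})
for _ in range(20):                 # run a few control cycles
    mask.update()
print(mask.points[0].position)      # approaches (1.0, 2.0, 0.5)
```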
Several robot cities have been planned for the country: the first will be built in 2016 at a cost of 500 billion won (US$440 million), of which 50 billion is direct government investment. The new robot city will feature research and development centers for manufacturers and part suppliers, as well as exhibition halls and a stadium for robot competitions. The country's new Robotics Ethics Charter will establish ground rules and laws for human interaction with robots in the future, setting standards for robotics users and manufacturers, as well as guidelines on ethical standards to be programmed into robots to prevent human abuse of robots and vice versa. United States Walt Disney and a staff of Imagineers created Great Moments with Mr. Lincoln, which debuted at the 1964 New York World's Fair. Dr. William Barry, an education futurist and former visiting professor of philosophy and ethical reasoning at the United States Military Academy at West Point, created an AI android character named "Maria Bot". This Interface AI android was named after the infamous fictional robot Maria in the 1927 film Metropolis, as a well-behaved distant relative. Maria Bot is the first AI android teaching assistant at the university level. Maria Bot appeared as a keynote speaker, as a duo with Barry, at a TEDx talk in Everett, Washington, in February 2020. Resembling a human from the shoulders up, Maria Bot is a virtual-being android that has complex facial expressions and head movement and engages in conversation about a variety of subjects. She uses AI to process and synthesize information to make her own decisions on how to talk and engage. She collects data through conversations, direct data inputs such as books or articles, and through internet sources. Maria Bot was built by an international high-tech company for Barry to help improve education quality and eliminate education poverty. Maria Bot is designed to create new ways for students to engage and discuss ethical issues raised by the increasing presence of robots and artificial intelligence. Barry also uses Maria Bot to demonstrate that programming a robot with a life-affirming ethical framework makes it more likely to help humans do the same. Maria Bot is an ambassador robot for good and ethical AI technology. Hanson Robotics, Inc., of Texas and KAIST produced an android portrait of Albert Einstein, using Hanson's facial android technology mounted on KAIST's life-size walking bipedal robot body. This Einstein android, also called "Albert Hubo", thus represents the first full-body walking android in history. Hanson Robotics, the FedEx Institute of Technology, and the University of Texas at Arlington also developed the android portrait of sci-fi author Philip K. Dick (creator of Do Androids Dream of Electric Sheep?, the basis for the film Blade Runner), with full conversational capabilities that incorporated thousands of pages of the author's works. In 2005, the PKD android won a first-place artificial intelligence award from AAAI. Use in fiction Androids are a staple of science fiction. Isaac Asimov pioneered the fictionalization of the science of robotics and artificial intelligence, notably in his 1950s series I, Robot. One thing common to most fictional androids is that the real-life technological challenges associated with creating thoroughly human-like robots—such as the creation of strong artificial intelligence—are assumed to have been solved. 
Fictional androids are often depicted as mentally and physically equal or superior to humans—moving, thinking and speaking as fluidly as them. The tension between the nonhuman substance and the human appearance—or even human ambitions—of androids is the dramatic impetus behind most of their fictional depictions. Some android heroes seek, like Pinocchio, to become human, as in the film Bicentennial Man, or Data in Star Trek: The Next Generation. Others, as in the film Westworld, rebel against abuse by careless humans. Android hunter Deckard in Do Androids Dream of Electric Sheep? and its film adaptation Blade Runner discovers that his targets appear to be, in some ways, more "human" than he is. Android stories, therefore, are not essentially stories "about" androids; they are stories about the human condition and what it means to be human. One aspect of writing about the meaning of humanity is to use discrimination against androids as a mechanism for exploring racism in society, as in Blade Runner. Perhaps the clearest example of this is John Brunner's 1968 novel Into the Slave Nebula, where the blue-skinned android slaves are explicitly shown to be fully human. More recently, the androids Bishop and Annalee Call in the films Aliens and Alien Resurrection are used as vehicles for exploring how humans deal with the presence of an "Other". The 2018 video game Detroit: Become Human also explores how androids are treated as second-class citizens in a near-future society. Female androids, or "gynoids", are often seen in science fiction, and can be viewed as a continuation of the long tradition of men attempting to create the stereotypical "perfect woman". Examples include the Greek myth of Pygmalion and the female robot Maria in Fritz Lang's Metropolis. Some gynoids, like Pris in Blade Runner, are designed as sex-objects, with the intent of "pleasing men's violent sexual desires", or as submissive, servile companions, such as in The Stepford Wives. Fiction about gynoids has therefore been described as reinforcing "essentialist ideas of femininity", although others have suggested that the treatment of androids is a way of exploring racism and misogyny in society. The 2015 Japanese film Sayonara, starring Geminoid F, was promoted as "the first movie to feature an android performing opposite a human actor". 
Android (robot)
Albert Einstein ( ; ; 14 March 1879 – 18 April 1955) was a German-born theoretical physicist, widely acknowledged to be one of the greatest physicists of all time. Einstein is best known for developing the theory of relativity, but he also made important contributions to the development of the theory of quantum mechanics. Relativity and quantum mechanics are together the two pillars of modern physics. His mass–energy equivalence formula E = mc², which arises from relativity theory, has been dubbed "the world's most famous equation". His work is also known for its influence on the philosophy of science. He received the 1921 Nobel Prize in Physics "for his services to theoretical physics, and especially for his discovery of the law of the photoelectric effect", a pivotal step in the development of quantum theory. His intellectual achievements and originality resulted in "Einstein" becoming synonymous with "genius". In 1905, a year sometimes described as his annus mirabilis ('miracle year'), Einstein published four groundbreaking papers. These outlined the theory of the photoelectric effect, explained Brownian motion, introduced special relativity, and demonstrated mass-energy equivalence. Einstein thought that the laws of classical mechanics could no longer be reconciled with those of the electromagnetic field, which led him to develop his special theory of relativity. He then extended the theory to gravitational fields; he published a paper on general relativity in 1916, introducing his theory of gravitation. In 1917, he applied the general theory of relativity to model the structure of the universe. He continued to deal with problems of statistical mechanics and quantum theory, which led to his explanations of particle theory and the motion of molecules. He also investigated the thermal properties of light and the quantum theory of radiation, which laid the foundation of the photon theory of light. However, for much of the later part of his career, he worked on two ultimately unsuccessful endeavors. First, despite his great contributions to quantum mechanics, he opposed what it evolved into, objecting that nature "does not play dice". Second, he attempted to devise a unified field theory by generalizing his geometric theory of gravitation to include electromagnetism. As a result, he became increasingly isolated from the mainstream of modern physics. Einstein was born in the German Empire, but moved to Switzerland in 1895, forsaking his German citizenship (as a subject of the Kingdom of Württemberg) the following year. In 1897, at the age of 17, he enrolled in the mathematics and physics teaching diploma program at the Swiss Federal polytechnic school in Zürich, graduating in 1900. In 1901, he acquired Swiss citizenship, which he kept for the rest of his life, and in 1903 he secured a permanent position at the Swiss Patent Office in Bern. In 1905, he was awarded a PhD by the University of Zurich. In 1914, Einstein moved to Berlin in order to join the Prussian Academy of Sciences and the Humboldt University of Berlin. In 1917, Einstein became director of the Kaiser Wilhelm Institute for Physics; he also became a German citizen again, this time Prussian. In 1933, while Einstein was visiting the United States, Adolf Hitler came to power in Germany. Einstein, of Jewish origin, objected to the policies of the newly elected Nazi government; he settled in the United States and became an American citizen in 1940. On the eve of World War II, he endorsed a letter to President Franklin D. 
Roosevelt alerting him to the potential German nuclear weapons program and recommending that the US begin similar research. Einstein supported the Allies but generally denounced the idea of nuclear weapons. Life and career Early life and education Albert Einstein was born in Ulm, in the Kingdom of Württemberg in the German Empire, on 14 March 1879 into a family of secular Ashkenazi Jews. His parents were Hermann Einstein, a salesman and engineer, and Pauline Koch. In 1880, the family moved to Munich, where Einstein's father and his uncle Jakob founded Elektrotechnische Fabrik J. Einstein & Cie, a company that manufactured electrical equipment based on direct current. Albert attended a Catholic elementary school in Munich, from the age of five, for three years. At the age of eight, he was transferred to the Luitpold Gymnasium (now known as the Albert Einstein Gymnasium), where he received advanced primary and secondary school education until he left the German Empire seven years later. In 1894, Hermann and Jakob's company lost a bid to supply the city of Munich with electrical lighting because they lacked the capital to convert their equipment from the direct current (DC) standard to the more efficient alternating current (AC) standard. The loss forced the sale of the Munich factory. In search of business, the Einstein family moved to Italy, first to Milan and a few months later to Pavia. When the family moved to Pavia, Einstein, then 15, stayed in Munich to finish his studies at the Luitpold Gymnasium. His father intended for him to pursue electrical engineering, but Einstein clashed with the authorities and resented the school's regimen and teaching method. He later wrote that the spirit of learning and creative thought was lost in strict rote learning. At the end of December 1894, he traveled to Italy to join his family in Pavia, convincing the school to let him go by using a doctor's note. During his time in Italy he wrote a short essay with the title "On the Investigation of the State of the Ether in a Magnetic Field". Einstein excelled at math and physics from a young age, reaching a mathematical level years ahead of his peers. The 12-year-old Einstein taught himself algebra and Euclidean geometry over a single summer. Einstein also independently discovered his own original proof of the Pythagorean theorem at age 12. A family tutor Max Talmud says that after he had given the 12-year-old Einstein a geometry textbook, after a short time "[Einstein] had worked through the whole book. He thereupon devoted himself to higher mathematics... Soon the flight of his mathematical genius was so high I could not follow." His passion for geometry and algebra led the 12-year-old to become convinced that nature could be understood as a "mathematical structure". Einstein started teaching himself calculus at 12, and as a 14-year-old he says he had "mastered integral and differential calculus". At age 13, when he had become more seriously interested in philosophy (and music), Einstein was introduced to Kant's Critique of Pure Reason. Kant became his favorite philosopher, his tutor stating: "At the time he was still a child, only thirteen years old, yet Kant's works, incomprehensible to ordinary mortals, seemed to be clear to him." In 1895, at the age of 16, Einstein took the entrance examinations for the Swiss Federal polytechnic school in Zürich (later the Eidgenössische Technische Hochschule, ETH). 
He failed to reach the required standard in the general part of the examination, but obtained exceptional grades in physics and mathematics. On the advice of the principal of the polytechnic school, he attended the Argovian cantonal school (gymnasium) in Aarau, Switzerland, in 1895 and 1896 to complete his secondary schooling. While lodging with the family of Professor Jost Winteler, he fell in love with Winteler's daughter, Marie. Albert's sister Maja later married Winteler's son Paul. In January 1896, with his father's approval, Einstein renounced his citizenship in the German Kingdom of Württemberg to avoid military service. In September 1896 he passed the Swiss Matura with mostly good grades, including a top grade of 6 in physics and mathematical subjects, on a scale of 1–6. At 17, he enrolled in the four-year mathematics and physics teaching diploma program at the Federal polytechnic school. Marie Winteler, who was a year older, moved to Olsberg, Switzerland, for a teaching post. Einstein's future wife, a 20-year-old Serbian named Mileva Marić, also enrolled at the polytechnic school that year. She was the only woman among the six students in the mathematics and physics section of the teaching diploma course. Over the next few years, Einstein's and Marić's friendship developed into a romance, and they spent countless hours debating and reading books together on extra-curricular physics in which they were both interested. Einstein wrote in his letters to Marić that he preferred studying alongside her. In 1900, Einstein passed the exams in Maths and Physics and was awarded a Federal teaching diploma. There is eyewitness evidence and several letters over many years that indicate Marić might have collaborated with Einstein prior to his landmark 1905 papers, known as the Annus Mirabilis papers, and that they developed some of the concepts together during their studies, although some historians of physics who have studied the issue disagree that she made any substantive contributions. Marriages and children Early correspondence between Einstein and Marić was discovered and published in 1987 which revealed that the couple had a daughter named "Lieserl", born in early 1902 in Novi Sad where Marić was staying with her parents. Marić returned to Switzerland without the child, whose real name and fate are unknown. The contents of Einstein's letter in September 1903 suggest that the girl was either given up for adoption or died of scarlet fever in infancy. Einstein and Marić married in January 1903. In May 1904, their son Hans Albert Einstein was born in Bern, Switzerland. Their son Eduard was born in Zürich in July 1910. The couple moved to Berlin in April 1914, but Marić returned to Zürich with their sons after learning that, despite their close relationship before, Einstein's chief romantic attraction was now his cousin Elsa Löwenthal; she was his first cousin maternally and second cousin paternally. Einstein and Marić divorced on 14 February 1919, having lived apart for five years. As part of the divorce settlement, Einstein agreed to give Marić his future (in the event, 1921) Nobel Prize money. In letters revealed in 2015, Einstein wrote to his early love Marie Winteler about his marriage and his strong feelings for her. He wrote in 1910, while his wife was pregnant with their second child: "I think of you in heartfelt love every spare minute and am so unhappy as only a man can be." He spoke about a "misguided love" and a "missed life" regarding his love for Marie. 
Einstein married Löwenthal in 1919, after having had a relationship with her since 1912. They emigrated to the United States in 1933. Elsa was diagnosed with heart and kidney problems in 1935 and died in December 1936. In 1923, Einstein fell in love with a secretary named Betty Neumann, the niece of a close friend, Hans Mühsam. In a volume of letters released by Hebrew University of Jerusalem in 2006, Einstein described about six women, including Margarete Lebach (a blonde Austrian), Estella Katzenellenbogen (the rich owner of a florist business), Toni Mendel (a wealthy Jewish widow) and Ethel Michanowski (a Berlin socialite), with whom he spent time and from whom he received gifts while being married to Elsa. Later, after the death of his second wife Elsa, Einstein was briefly in a relationship with Margarita Konenkova. Konenkova was a Russian spy who was married to the noted Russian sculptor Sergei Konenkov (who created the bronze bust of Einstein at the Institute for Advanced Study at Princeton). Einstein's son Eduard had a breakdown at about age 20 and was diagnosed with schizophrenia. His mother cared for him and he was also committed to asylums for several periods, finally being committed permanently after her death. Patent office After graduating in 1900, Einstein spent almost two frustrating years searching for a teaching post. He acquired Swiss citizenship in February 1901, but was not conscripted for medical reasons. With the help of Marcel Grossmann's father, he secured a job in Bern at the Swiss Patent Office, as an assistant examiner – level III. Einstein evaluated patent applications for a variety of devices including a gravel sorter and an electromechanical typewriter. In 1903, his position at the Swiss Patent Office became permanent, although he was passed over for promotion until he "fully mastered machine technology". Much of his work at the patent office related to questions about transmission of electric signals and electrical-mechanical synchronization of time, two technical problems that show up conspicuously in the thought experiments that eventually led Einstein to his radical conclusions about the nature of light and the fundamental connection between space and time. With a few friends he had met in Bern, Einstein started a small discussion group in 1902, self-mockingly named "The Olympia Academy", which met regularly to discuss science and philosophy. Sometimes they were joined by Mileva who attentively listened but did not participate. Their readings included the works of Henri Poincaré, Ernst Mach, and David Hume, which influenced his scientific and philosophical outlook. First scientific papers In 1900, Einstein's paper "Folgerungen aus den Capillaritätserscheinungen" ("Conclusions from the Capillarity Phenomena") was published in the journal Annalen der Physik. On 30 April 1905, Einstein completed his dissertation, A New Determination of Molecular Dimensions with Alfred Kleiner, Professor of Experimental Physics at the University of Zürich, serving as pro-forma advisor. His work was accepted in July, and Einstein was awarded a Ph.D. Also in 1905, which has been called Einstein's annus mirabilis (amazing year), he published four groundbreaking papers, on the photoelectric effect, Brownian motion, special relativity, and the equivalence of mass and energy, which were to bring him to the notice of the academic world, at the age of 26. Academic career By 1908, he was recognized as a leading scientist and was appointed lecturer at the University of Bern. 
The following year, after he gave a lecture on electrodynamics and the relativity principle at the University of Zurich, Alfred Kleiner recommended him to the faculty for a newly created professorship in theoretical physics. Einstein was appointed associate professor in 1909. Einstein became a full professor at the German Charles-Ferdinand University in Prague in April 1911, accepting Austrian citizenship in the Austro-Hungarian Empire to do so. During his Prague stay, he wrote 11 scientific works, five of them on radiation mathematics and on the quantum theory of solids. In July 1912, he returned to his alma mater in Zürich. From 1912 until 1914, he was a professor of theoretical physics at the ETH Zurich, where he taught analytical mechanics and thermodynamics. He also studied continuum mechanics, the molecular theory of heat, and the problem of gravitation, on which he worked with mathematician and friend Marcel Grossmann. When the "Manifesto of the Ninety-Three" was published in October 1914—a document signed by a host of prominent German intellectuals that justified Germany's militarism and position during the First World War—Einstein was one of the few German intellectuals to rebut its contents and sign the pacifistic "Manifesto to the Europeans". In the spring of 1913, Einstein was enticed to move to Berlin with an offer that included membership in the Prussian Academy of Sciences, and a linked University of Berlin professorship, enabling him to concentrate exclusively on research. On 3 July 1913, he became a member of the Prussian Academy of Sciences in Berlin. Max Planck and Walther Nernst visited him the next week in Zurich to persuade him to join the academy, additionally offering him the post of director at the Kaiser Wilhelm Institute for Physics, which was soon to be established. Membership in the academy included paid salary and professorship without teaching duties at Humboldt University of Berlin. He was officially elected to the academy on 24 July, and he moved to Berlin the following year. His decision to move to Berlin was also influenced by the prospect of living near his cousin Elsa, with whom he had started a romantic affair. Einstein assumed his position with the academy, and Berlin University, after moving into his Dahlem apartment on 1 April 1914. As World War I broke out that year, the plan for Kaiser Wilhelm Institute for Physics was aborted. The institute was established on 1 October 1917, with Einstein as its director. In 1916, Einstein was elected president of the German Physical Society (1916–1918). In 1911, Einstein used his 1907 Equivalence principle to calculate the deflection of light from another star by the Sun's gravity. In 1913, Einstein improved upon those calculations by using Riemannian space-time to represent the gravity field. By the fall of 1915, Einstein had successfully completed his general theory of relativity, which he used to calculate that deflection, and the perihelion precession of Mercury. In 1919, that deflection prediction was confirmed by Sir Arthur Eddington during the solar eclipse of 29 May 1919. Those observations were published in the international media, making Einstein world-famous. On 7 November 1919, the leading British newspaper The Times printed a banner headline that read: "Revolution in Science – New Theory of the Universe – Newtonian Ideas Overthrown". In 1920, he became a Foreign Member of the Royal Netherlands Academy of Arts and Sciences. 
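For reference, the light-deflection prediction discussed above can be written compactly. The expression below is the standard textbook form of the full 1915 general-relativistic result for a ray grazing the Sun; the symbols and the numerical value are common physical constants and are not taken from this article.

```latex
% Deflection of starlight grazing the solar limb (general relativity, 1915):
%   G = gravitational constant, M_sun = solar mass, R_sun = solar radius, c = speed of light
\delta\theta \;=\; \frac{4 G M_{\odot}}{c^{2} R_{\odot}} \;\approx\; 1.75''
```

Einstein's earlier 1911 calculation, based on the equivalence principle alone, gave about half this value, which is why the 1919 eclipse measurements were read as confirming the completed 1915 theory rather than the preliminary one.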
In 1922, he was awarded the 1921 Nobel Prize in Physics "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect". While the general theory of relativity was still considered somewhat controversial, the citation also does not treat even the cited photoelectric work as an explanation but merely as a discovery of the law, as the idea of photons was considered outlandish and did not receive universal acceptance until the 1924 derivation of the Planck spectrum by S. N. Bose. Einstein was elected a Foreign Member of the Royal Society (ForMemRS) in 1921. He also received the Copley Medal from the Royal Society in 1925. Einstein resigned from the Prussian Academy in March 1933. Einstein's scientific accomplishments while in Berlin, included finishing the general theory of relativity, proving the gyromagnetic effect, contributing to the quantum theory of radiation, and Bose–Einstein statistics. 1921–1922: Travels abroad Einstein visited New York City for the first time on 2 April 1921, where he received an official welcome by Mayor John Francis Hylan, followed by three weeks of lectures and receptions. He went on to deliver several lectures at Columbia University and Princeton University, and in Washington, he accompanied representatives of the National Academy of Sciences on a visit to the White House. On his return to Europe he was the guest of the British statesman and philosopher Viscount Haldane in London, where he met several renowned scientific, intellectual, and political figures, and delivered a lecture at King's College London. He also published an essay, "My First Impression of the U.S.A.", in July 1921, in which he tried briefly to describe some characteristics of Americans, much as had Alexis de Tocqueville, who published his own impressions in Democracy in America (1835). For some of his observations, Einstein was clearly surprised: "What strikes a visitor is the joyous, positive attitude to life ... The American is friendly, self-confident, optimistic, and without envy." In 1922, his travels took him to Asia and later to Palestine, as part of a six-month excursion and speaking tour, as he visited Singapore, Ceylon and Japan, where he gave a series of lectures to thousands of Japanese. After his first public lecture, he met the emperor and empress at the Imperial Palace, where thousands came to watch. In a letter to his sons, he described his impression of the Japanese as being modest, intelligent, considerate, and having a true feel for art. In his own travel diaries from his 1922–23 visit to Asia, he expresses some views on the Chinese, Japanese and Indian people, which have been described as xenophobic and racist judgments when they were rediscovered in 2018. Because of Einstein's travels to the Far East, he was unable to personally accept the Nobel Prize for Physics at the Stockholm award ceremony in December 1922. In his place, the banquet speech was made by a German diplomat, who praised Einstein not only as a scientist but also as an international peacemaker and activist. On his return voyage, he visited Palestine for 12 days, his only visit to that region. He was greeted as if he were a head of state, rather than a physicist, which included a cannon salute upon arriving at the home of the British high commissioner, Sir Herbert Samuel. During one reception, the building was stormed by people who wanted to see and hear him. 
In Einstein's talk to the audience, he expressed happiness that the Jewish people were beginning to be recognized as a force in the world. Einstein visited Spain for two weeks in 1923, where he briefly met Santiago Ramón y Cajal and also received a diploma from King Alfonso XIII naming him a member of the Spanish Academy of Sciences. From 1922 to 1932, Einstein was a member of the International Committee on Intellectual Cooperation of the League of Nations in Geneva (with a few months of interruption in 1923–1924), a body created to promote international exchange between scientists, researchers, teachers, artists, and intellectuals. Originally slated to serve as the Swiss delegate, Secretary-General Eric Drummond was persuaded by Catholic activists Oskar Halecki and Giuseppe Motta to instead have him become the German delegate, thus allowing Gonzague de Reynold to take the Swiss spot, from which he promoted traditionalist Catholic values. Einstein's former physics professor Hendrik Lorentz and the Polish chemist Marie Curie were also members of the committee. 1925: Visit to South America In the months of March and April 1925, Einstein visited South America, where he spent about a month in Argentina, a week in Uruguay, and a week in Rio de Janeiro, Brazil. Einstein's visit was initiated by Jorge Duclout (1856–1927) and Mauricio Nirenstein (1877–1935) with the support of several Argentine scholars, including Julio Rey Pastor, Jakob Laub, and Leopoldo Lugones. The visit by Einstein and his wife was financed primarily by the Council of the University of Buenos Aires and the Asociación Hebraica Argentina (Argentine Hebraic Association) with a smaller contribution from the Argentine-Germanic Cultural Institution. 1930–1931: Travel to the US In December 1930, Einstein visited America for the second time, originally intended as a two-month working visit as a research fellow at the California Institute of Technology. After the national attention, he received during his first trip to the US, he and his arrangers aimed to protect his privacy. Although swamped with telegrams and invitations to receive awards or speak publicly, he declined them all. After arriving in New York City, Einstein was taken to various places and events, including Chinatown, a lunch with the editors of The New York Times, and a performance of Carmen at the Metropolitan Opera, where he was cheered by the audience on his arrival. During the days following, he was given the keys to the city by Mayor Jimmy Walker and met the president of Columbia University, who described Einstein as "the ruling monarch of the mind". Harry Emerson Fosdick, pastor at New York's Riverside Church, gave Einstein a tour of the church and showed him a full-size statue that the church made of Einstein, standing at the entrance. Also during his stay in New York, he joined a crowd of 15,000 people at Madison Square Garden during a Hanukkah celebration. Einstein next traveled to California, where he met Caltech president and Nobel laureate Robert A. Millikan. His friendship with Millikan was "awkward", as Millikan "had a penchant for patriotic militarism", where Einstein was a pronounced pacifist. During an address to Caltech's students, Einstein noted that science was often inclined to do more harm than good. This aversion to war also led Einstein to befriend author Upton Sinclair and film star Charlie Chaplin, both noted for their pacifism. Carl Laemmle, head of Universal Studios, gave Einstein a tour of his studio and introduced him to Chaplin. 
They had an instant rapport, with Chaplin inviting Einstein and his wife, Elsa, to his home for dinner. Chaplin said Einstein's outward persona, calm and gentle, seemed to conceal a "highly emotional temperament", from which came his "extraordinary intellectual energy". Chaplin's film, City Lights, was to premiere a few days later in Hollywood, and Chaplin invited Einstein and Elsa to join him as his special guests. Walter Isaacson, Einstein's biographer, described this as "one of the most memorable scenes in the new era of celebrity". Chaplin visited Einstein at his home on a later trip to Berlin and recalled his "modest little flat" and the piano at which he had begun writing his theory. Chaplin speculated that it was "possibly used as kindling wood by the Nazis". 1933: Emigration to the US In February 1933, while on a visit to the United States, Einstein knew he could not return to Germany with the rise to power of the Nazis under Germany's new chancellor, Adolf Hitler. While at American universities in early 1933, he undertook his third two-month visiting professorship at the California Institute of Technology in Pasadena. In February and March 1933, the Gestapo repeatedly raided his family's apartment in Berlin. He and his wife Elsa returned to Europe in March, and during the trip, they learned that the German Reichstag had passed the Enabling Act on 23 March, transforming Hitler's government into a de facto legal dictatorship, and that they would not be able to proceed to Berlin. Later on, they heard that their cottage had been raided by the Nazis and Einstein's personal sailboat confiscated. Upon landing in Antwerp, Belgium on 28 March, Einstein immediately went to the German consulate and surrendered his passport, formally renouncing his German citizenship. The Nazis later sold his boat and converted his cottage into a Hitler Youth camp. Refugee status In April 1933, Einstein discovered that the new German government had passed laws barring Jews from holding any official positions, including teaching at universities. Historian Gerald Holton describes how, with "virtually no audible protest being raised by their colleagues", thousands of Jewish scientists were suddenly forced to give up their university positions and their names were removed from the rolls of institutions where they were employed. A month later, Einstein's works were among those targeted by the German Student Union in the Nazi book burnings, with Nazi propaganda minister Joseph Goebbels proclaiming, "Jewish intellectualism is dead." One German magazine included him in a list of enemies of the German regime with the phrase, "not yet hanged", offering a $5,000 bounty on his head. In a subsequent letter to physicist and friend Max Born, who had already emigrated from Germany to England, Einstein wrote, "... I must confess that the degree of their brutality and cowardice came as something of a surprise." After moving to the US, he described the book burnings as a "spontaneous emotional outburst" by those who "shun popular enlightenment", and "more than anything else in the world, fear the influence of men of intellectual independence". Einstein was now without a permanent home, unsure where he would live and work, and equally worried about the fate of countless other scientists still in Germany. He rented a house in De Haan, Belgium, where he lived for a few months. 
In late July 1933, he went to England for about six weeks at the personal invitation of British naval officer Commander Oliver Locker-Lampson, who had become friends with Einstein in the preceding years. Locker-Lampson invited him to stay near his home in a wooden cabin on Roughton Heath in the Parish of . To protect Einstein, Locker-Lampson had two bodyguards watch over him at his secluded cabin; a photo of them carrying shotguns and guarding Einstein was published in the Daily Herald on 24 July 1933. Locker-Lampson took Einstein to meet Winston Churchill at his home, and later, Austen Chamberlain and former Prime Minister Lloyd George. Einstein asked them to help bring Jewish scientists out of Germany. British historian Martin Gilbert notes that Churchill responded immediately, and sent his friend, physicist Frederick Lindemann, to Germany to seek out Jewish scientists and place them in British universities. Churchill later observed that as a result of Germany having driven the Jews out, they had lowered their "technical standards" and put the Allies' technology ahead of theirs. Einstein later contacted leaders of other nations, including Turkey's Prime Minister, İsmet İnönü, to whom he wrote in September 1933 requesting placement of unemployed German-Jewish scientists. As a result of Einstein's letter, Jewish invitees to Turkey eventually totaled over "1,000 saved individuals". Locker-Lampson also submitted a bill to parliament to extend British citizenship to Einstein, during which period Einstein made a number of public appearances describing the crisis brewing in Europe. In one of his speeches he denounced Germany's treatment of Jews, while at the same time he introduced a bill promoting Jewish citizenship in Palestine, as they were being denied citizenship elsewhere. In his speech he described Einstein as a "citizen of the world" who should be offered a temporary shelter in the UK. Both bills failed, however, and Einstein then accepted an earlier offer from the Institute for Advanced Study, in Princeton, New Jersey, US, to become a resident scholar. Resident scholar at the Institute for Advanced Study In October 1933, Einstein returned to the US and took up a position at the Institute for Advanced Study, noted for having become a refuge for scientists fleeing Nazi Germany. At the time, most American universities, including Harvard, Princeton and Yale, had minimal or no Jewish faculty or students, as a result of their Jewish quotas, which lasted until the late 1940s. Einstein was still undecided on his future. He had offers from several European universities, including Christ Church, Oxford, where he stayed for three short periods between May 1931 and June 1933 and was offered a five-year studentship, but in 1935, he arrived at the decision to remain permanently in the United States and apply for citizenship. Einstein's affiliation with the Institute for Advanced Study would last until his death in 1955. He was one of the four first selected (along with John von Neumann and Kurt Gödel) at the new Institute, where he soon developed a close friendship with Gödel. The two would take long walks together discussing their work. Bruria Kaufman, his assistant, later became a physicist. During this period, Einstein tried to develop a unified field theory and to refute the accepted interpretation of quantum physics, both unsuccessfully. 
World War II and the Manhattan Project In 1939, a group of Hungarian scientists that included émigré physicist Leó Szilárd attempted to alert Washington to ongoing Nazi atomic bomb research. The group's warnings were discounted. Einstein and Szilárd, along with other refugees such as Edward Teller and Eugene Wigner, "regarded it as their responsibility to alert Americans to the possibility that German scientists might win the race to build an atomic bomb, and to warn that Hitler would be more than willing to resort to such a weapon." To make certain the US was aware of the danger, in July 1939, a few months before the beginning of World War II in Europe, Szilárd and Wigner visited Einstein to explain the possibility of atomic bombs, which Einstein, a pacifist, said he had never considered. He was asked to lend his support by writing a letter, with Szilárd, to President Roosevelt, recommending the US pay attention and engage in its own nuclear weapons research. The letter is believed to be "arguably the key stimulus for the U.S. adoption of serious investigations into nuclear weapons on the eve of the U.S. entry into World War II". In addition to the letter, Einstein used his connections with the Belgian Royal Family and the Belgian queen mother to get access with a personal envoy to the White House's Oval Office. Some say that as a result of Einstein's letter and his meetings with Roosevelt, the US entered the "race" to develop the bomb, drawing on its "immense material, financial, and scientific resources" to initiate the Manhattan Project. For Einstein, "war was a disease ... [and] he called for resistance to war." By signing the letter to Roosevelt, some argue he went against his pacifist principles. In 1954, a year before his death, Einstein said to his old friend, Linus Pauling, "I made one great mistake in my life—when I signed the letter to President Roosevelt recommending that atom bombs be made; but there was some justification—the danger that the Germans would make them ..." In 1955, Einstein and ten other intellectuals and scientists, including British philosopher Bertrand Russell, signed a manifesto highlighting the danger of nuclear weapons. US citizenship Einstein became an American citizen in 1940. Not long after settling into his career at the Institute for Advanced Study in Princeton, New Jersey, he expressed his appreciation of the meritocracy in American culture when compared to Europe. He recognized the "right of individuals to say and think what they pleased", without social barriers, and as a result, individuals were encouraged, he said, to be more creative, a trait he valued from his own early education. Einstein joined the National Association for the Advancement of Colored People (NAACP) in Princeton, where he campaigned for the civil rights of African Americans. He considered racism America's "worst disease", seeing it as "handed down from one generation to the next". As part of his involvement, he corresponded with civil rights activist W. E. B. Du Bois and was prepared to testify on his behalf during his trial in 1951. When Einstein offered to be a character witness for Du Bois, the judge decided to drop the case. In 1946, Einstein visited Lincoln University in Pennsylvania, a historically black college, where he was awarded an honorary degree. Lincoln was the first university in the United States to grant college degrees to African Americans; alumni include Langston Hughes and Thurgood Marshall. 
Einstein gave a speech about racism in America, adding, "I do not intend to be quiet about it." A resident of Princeton recalls that Einstein had once paid the college tuition for a black student. Einstein said, "Being a Jew myself, perhaps I can understand and empathize with how black people feel as victims of discrimination." Personal life Assisting Zionist causes Einstein was a figurehead leader in helping establish the Hebrew University of Jerusalem, which opened in 1925, and he was among its first Board of Governors. Earlier, in 1921, he was asked by the biochemist and president of the World Zionist Organization, Chaim Weizmann, to help raise funds for the planned university. He also submitted various suggestions as to its initial programs. Among those, he advised first creating an Institute of Agriculture in order to settle the undeveloped land. That should be followed, he suggested, by a Chemical Institute and an Institute of Microbiology, to fight the various ongoing epidemics such as malaria, which he called an "evil" that was undermining a third of the country's development. Establishing an Oriental Studies Institute, to include language courses given in both Hebrew and Arabic, for scientific exploration of the country and its historical monuments, was also important. Einstein was not a nationalist; he was against the creation of an independent Jewish state, which would be established without his help as Israel in 1948. Einstein felt that the waves of arriving Jews of the Aliyah could live alongside existing Arabs in Palestine. His views were not shared by the majority of Jews seeking to form a new country; as a result, Einstein was limited to a marginal role in the Zionist movement. Chaim Weizmann later became Israel's first president. Upon Weizmann's death while in office in November 1952, and at the urging of Ezriel Carlebach, Prime Minister David Ben-Gurion offered Einstein the position of President of Israel, a mostly ceremonial post. The offer was presented by Israel's ambassador in Washington, Abba Eban, who explained that the offer "embodies the deepest respect which the Jewish people can repose in any of its sons". Einstein declined, and wrote in his response that he was "deeply moved", and "at once saddened and ashamed" that he could not accept it. Love of music Einstein developed an appreciation for music at an early age. In his late journals he wrote: "If I were not a physicist, I would probably be a musician. I often think in music. I live my daydreams in music. I see my life in terms of music... I get most joy in life out of music." His mother played the piano reasonably well and wanted her son to learn the violin, not only to instill in him a love of music but also to help him assimilate into German culture. According to conductor Leon Botstein, Einstein began playing when he was 5. However, he did not enjoy it at that age. When he turned 13, he discovered the violin sonatas of Mozart, whereupon he became enamored of Mozart's compositions and studied music more willingly. Einstein taught himself to play without "ever practicing systematically". He said that "love is a better teacher than a sense of duty." At age 17, he was heard by a school examiner in Aarau while playing Beethoven's violin sonatas. The examiner stated afterward that his playing was "remarkable and revealing of 'great insight'". What struck the examiner, writes Botstein, was that Einstein "displayed a deep love of the music, a quality that was and remains in short supply. 
Music possessed an unusual meaning for this student." Music took on a pivotal and permanent role in Einstein's life from that period on. Although the idea of becoming a professional musician himself was not on his mind at any time, among those with whom Einstein played chamber music were a few professionals, and he performed for private audiences and friends. Chamber music had also become a regular part of his social life while living in Bern, Zürich, and Berlin, where he played with Max Planck and his son, among others. He is sometimes erroneously credited as the editor of the 1937 edition of the Köchel catalog of Mozart's work; that edition was prepared by Alfred Einstein, who may have been a distant relation. In 1931, while engaged in research at the California Institute of Technology, he visited the Zoellner family conservatory in Los Angeles, where he played some of Beethoven and Mozart's works with members of the Zoellner Quartet. Near the end of his life, when the young Juilliard Quartet visited him in Princeton, he played his violin with them, and the quartet was "impressed by Einstein's level of coordination and intonation". Political views In 1918, Einstein was one of the founding members of the German Democratic Party, a liberal party. Later in his life, Einstein's political view was in favor of socialism and critical of capitalism, which he detailed in his essays such as "Why Socialism?" His opinions on the Bolsheviks also changed with time. In 1925, he criticized them for not having a 'well-regulated system of government' and called their rule a 'regime of terror and a tragedy in human history'. He later adopted a more moderated view, criticizing their methods but praising them, which is shown by his 1929 remark on Vladimir Lenin: "In Lenin I honor a man, who in total sacrifice of his own person has committed his entire energy to realizing social justice. I do not find his methods advisable. One thing is certain, however: men like him are the guardians and renewers of mankind's conscience." Einstein offered and was called on to give judgments and opinions on matters often unrelated to theoretical physics or mathematics. He strongly advocated the idea of a democratic global government that would check the power of nation-states in the framework of a world federation. He wrote "I advocate world government because I am convinced that there is no other possible way of eliminating the most terrible danger in which man has ever found himself." The FBI created a secret dossier on Einstein in 1932, and by the time of his death his FBI file was 1,427 pages long. Einstein was deeply impressed by Mahatma Gandhi, with whom he exchanged written letters. He described Gandhi as "a role model for the generations to come". The initial connection was established on 27 September 1931, when Wilfrid Israel took his Indian guest V. A. Sundaram to meet his friend Einstein at his summer home in the town of Caputh. Sundaram was Gandhi's disciple and special envoy, whom Wilfrid Israel met while visiting India and visiting the Indian leader's home in 1925. During the visit, Einstein wrote a short letter to Gandhi that was delivered to him through his envoy, and Gandhi responded quickly with his own letter. Although in the end Einstein and Gandhi were unable to meet as they had hoped, the direct connection between them was established through Wilfrid Israel. Religious and philosophical views Einstein spoke of his spiritual outlook in a wide array of original writings and interviews. 
He said he had sympathy for the impersonal pantheistic God of Baruch Spinoza's philosophy. He did not believe in a personal god who concerns himself with fates and actions of human beings, a view which he described as naïve. He clarified, however, that "I am not an atheist", preferring to call himself an agnostic, or a "deeply religious nonbeliever". When asked if he believed in an afterlife, Einstein replied, "No. And one life is enough for me." Einstein was primarily affiliated with non-religious humanist and Ethical Culture groups in both the UK and US. He served on the advisory board of the First Humanist Society of New York, and was an honorary associate of the Rationalist Association, which publishes New Humanist in Britain. For the 75th anniversary of the New York Society for Ethical Culture, he stated that the idea of Ethical Culture embodied his personal conception of what is most valuable and enduring in religious idealism. He observed, "Without 'ethical culture' there is no salvation for humanity." In a German-language letter to philosopher Eric Gutkind, dated 3 January 1954, Einstein wrote:The word God is for me nothing more than the expression and product of human weaknesses, the Bible a collection of honorable, but still primitive legends which are nevertheless pretty childish. No interpretation no matter how subtle can (for me) change this. ... For me the Jewish religion like all other religions is an incarnation of the most childish superstitions. And the Jewish people to whom I gladly belong and with whose mentality I have a deep affinity have no different quality for me than all other people. ... I cannot see anything 'chosen' about them. Death On 17 April 1955, Einstein experienced internal bleeding caused by the rupture of an abdominal aortic aneurysm, which had previously been reinforced surgically by Rudolph Nissen in 1948. He took the draft of a speech he was preparing for a television appearance commemorating the state of Israel's seventh anniversary with him to the hospital, but he did not live to complete it. Einstein refused surgery, saying, "I want to go when I want. It is tasteless to prolong life artificially. I have done my share; it is time to go. I will do it elegantly." He died in Penn Medicine Princeton Medical Center early the next morning at the age of 76, having continued to work until near the end. During the autopsy, the pathologist Thomas Stoltz Harvey removed Einstein's brain for preservation without the permission of his family, in the hope that the neuroscience of the future would be able to discover what made Einstein so intelligent. Einstein's remains were cremated in Trenton, New Jersey, and his ashes were scattered at an undisclosed location. In a memorial lecture delivered on 13 December 1965 at UNESCO headquarters, nuclear physicist J. Robert Oppenheimer summarized his impression of Einstein as a person: "He was almost wholly without sophistication and wholly without worldliness ... There was always with him a wonderful purity at once childlike and profoundly stubborn." Einstein bequeathed his personal archives, library and intellectual assets to the Hebrew University of Jerusalem in Israel. Scientific career Throughout his life, Einstein published hundreds of books and articles. He published more than 300 scientific papers and 150 non-scientific ones. On 5 December 2014, universities and archives announced the release of Einstein's papers, comprising more than 30,000 unique documents. 
Einstein's intellectual achievements and originality have made the word "Einstein" synonymous with "genius". In addition to the work he did by himself, he also collaborated with other scientists on additional projects including the Bose–Einstein statistics, the Einstein refrigerator and others. 1905 – Annus Mirabilis papers The Annus Mirabilis papers are four articles pertaining to the photoelectric effect (which gave rise to quantum theory), Brownian motion, the special theory of relativity, and E = mc2 that Einstein published in the Annalen der Physik scientific journal in 1905. These four works contributed substantially to the foundation of modern physics and changed views on space, time, and matter. The four papers treated, respectively, the photoelectric effect, Brownian motion, special relativity, and mass–energy equivalence. Statistical mechanics Thermodynamic fluctuations and statistical physics Einstein's first paper submitted in 1900 to Annalen der Physik was on capillary attraction. It was published in 1901 with the title "Folgerungen aus den Capillaritätserscheinungen", which translates as "Conclusions from the capillarity phenomena". Two papers he published in 1902–1903 (thermodynamics) attempted to interpret atomic phenomena from a statistical point of view. These papers were the foundation for the 1905 paper on Brownian motion, which showed that Brownian movement can be construed as firm evidence that molecules exist. His research in 1903 and 1904 was mainly concerned with the effect of finite atomic size on diffusion phenomena. Theory of critical opalescence Einstein returned to the problem of thermodynamic fluctuations, giving a treatment of the density variations in a fluid at its critical point. Ordinarily the density fluctuations are controlled by the second derivative of the free energy with respect to the density. At the critical point, this derivative is zero, leading to large fluctuations. The effect of density fluctuations is that light of all wavelengths is scattered, making the fluid look milky white. Einstein relates this to Rayleigh scattering, which is what happens when the fluctuation size is much smaller than the wavelength, and which explains why the sky is blue. Einstein quantitatively derived critical opalescence from a treatment of density fluctuations, and demonstrated how both the effect and Rayleigh scattering originate from the atomistic constitution of matter. Special relativity Einstein's "Zur Elektrodynamik bewegter Körper" ("On the Electrodynamics of Moving Bodies") was received on 30 June 1905 and published 26 September of that same year. It reconciled conflicts between Maxwell's equations (the laws of electricity and magnetism) and the laws of Newtonian mechanics by introducing changes to the laws of mechanics. Observationally, the effects of these changes are most apparent at high speeds (where objects are moving at speeds close to the speed of light). The theory developed in this paper later became known as Einstein's special theory of relativity. There is evidence from Einstein's writings that he collaborated with his first wife, Mileva Marić, on this work. The decision to publish only under his name seems to have been mutual, but the exact reason is unknown. This paper predicted that, when measured in the frame of a relatively moving observer, a clock carried by a moving body would appear to slow down, and the body itself would contract in its direction of motion. This paper also argued that the idea of a luminiferous aether—one of the leading theoretical entities in physics at the time—was superfluous. 
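The kinematic predictions just described can be summarized compactly. In modern notation (a standard textbook form rather than the notation of the 1905 paper), a clock moving at speed v relative to an observer runs slow and a body contracts along its direction of motion by the Lorentz factor:

\[ \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad \Delta t = \gamma\,\Delta\tau, \qquad L = \frac{L_{0}}{\gamma}, \]

where \(\Delta\tau\) is the proper time read by the moving clock and \(L_{0}\) is the body's rest length; both effects become appreciable only as v approaches the speed of light c.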
In his paper on mass–energy equivalence, Einstein produced E = mc2 as a consequence of his special relativity equations. Einstein's 1905 work on relativity remained controversial for many years, but was accepted by leading physicists, starting with Max Planck. Einstein originally framed special relativity in terms of kinematics (the study of moving bodies). In 1908, Hermann Minkowski reinterpreted special relativity in geometric terms as a theory of spacetime. Einstein adopted Minkowski's formalism in his 1915 general theory of relativity. General relativity General relativity and the equivalence principle General relativity (GR) is a theory of gravitation that was developed by Einstein between 1907 and 1915. According to general relativity, the observed gravitational attraction between masses results from the warping of space and time by those masses. General relativity has developed into an essential tool in modern astrophysics. It provides the foundation for the current understanding of black holes, regions of space where gravitational attraction is so strong that not even light can escape. As Einstein later said, the reason for the development of general relativity was that the preference of inertial motions within special relativity was unsatisfactory, while a theory which from the outset prefers no state of motion (even accelerated ones) should appear more satisfactory. Consequently, in 1907 he published an article on acceleration under special relativity. In that article titled "On the Relativity Principle and the Conclusions Drawn from It", he argued that free fall is really inertial motion, and that for a free-falling observer the rules of special relativity must apply. This argument is called the equivalence principle. In the same article, Einstein also predicted the phenomena of gravitational time dilation, gravitational redshift and deflection of light. In 1911, Einstein published another article "On the Influence of Gravitation on the Propagation of Light" expanding on the 1907 article, in which he estimated the amount of deflection of light by massive bodies. Thus, the theoretical prediction of general relativity could for the first time be tested experimentally. Gravitational waves In 1916, Einstein predicted gravitational waves, ripples in the curvature of spacetime which propagate as waves, traveling outward from the source, transporting energy as gravitational radiation. The existence of gravitational waves is possible under general relativity due to its Lorentz invariance which brings the concept of a finite speed of propagation of the physical interactions of gravity with it. By contrast, gravitational waves cannot exist in the Newtonian theory of gravitation, which postulates that the physical interactions of gravity propagate at infinite speed. The first, indirect, detection of gravitational waves came in the 1970s through observation of a pair of closely orbiting neutron stars, PSR B1913+16. The explanation of the decay in their orbital period was that they were emitting gravitational waves. Einstein's prediction was confirmed on 11 February 2016, when researchers at LIGO published the first observation of gravitational waves, detected on Earth on 14 September 2015, nearly one hundred years after the prediction. Hole argument and Entwurf theory While developing general relativity, Einstein became confused about the gauge invariance in the theory. He formulated an argument that led him to conclude that a general relativistic field theory is impossible. 
He gave up looking for fully generally covariant tensor equations and searched for equations that would be invariant under general linear transformations only. In June 1913, the Entwurf ('draft') theory was the result of these investigations. As its name suggests, it was a sketch of a theory, less elegant and more difficult than general relativity, with the equations of motion supplemented by additional gauge fixing conditions. After more than two years of intensive work, Einstein realized that the hole argument was mistaken and abandoned the theory in November 1915. Physical cosmology In 1917, Einstein applied the general theory of relativity to the structure of the universe as a whole. He discovered that the general field equations predicted a universe that was dynamic, either contracting or expanding. As observational evidence for a dynamic universe was not known at the time, Einstein introduced a new term, the cosmological constant, to the field equations, in order to allow the theory to predict a static universe. The modified field equations predicted a static universe of closed curvature, in accordance with Einstein's understanding of Mach's principle in these years. This model became known as the Einstein World or Einstein's static universe. Following the discovery of the recession of the nebulae by Edwin Hubble in 1929, Einstein abandoned his static model of the universe, and proposed two dynamic models of the cosmos, The Friedmann-Einstein universe of 1931 and the Einstein–de Sitter universe of 1932. In each of these models, Einstein discarded the cosmological constant, claiming that it was "in any case theoretically unsatisfactory". In many Einstein biographies, it is claimed that Einstein referred to the cosmological constant in later years as his "biggest blunder". The astrophysicist Mario Livio has recently cast doubt on this claim, suggesting that it may be exaggerated. In late 2013, a team led by the Irish physicist Cormac O'Raifeartaigh discovered evidence that, shortly after learning of Hubble's observations of the recession of the nebulae, Einstein considered a steady-state model of the universe. In a hitherto overlooked manuscript, apparently written in early 1931, Einstein explored a model of the expanding universe in which the density of matter remains constant due to a continuous creation of matter, a process he associated with the cosmological constant. As he stated in the paper, "In what follows, I would like to draw attention to a solution to equation (1) that can account for Hubbel's [sic] facts, and in which the density is constant over time" ... "If one considers a physically bounded volume, particles of matter will be continually leaving it. For the density to remain constant, new particles of matter must be continually formed in the volume from space." It thus appears that Einstein considered a steady-state model of the expanding universe many years before Hoyle, Bondi and Gold. However, Einstein's steady-state model contained a fundamental flaw and he quickly abandoned the idea. Energy momentum pseudotensor General relativity includes a dynamical spacetime, so it is difficult to see how to identify the conserved energy and momentum. Noether's theorem allows these quantities to be determined from a Lagrangian with translation invariance, but general covariance makes translation invariance into something of a gauge symmetry. The energy and momentum derived within general relativity by Noether's prescriptions do not make a real tensor for this reason. 
Einstein argued that this is true for a fundamental reason: the gravitational field could be made to vanish by a choice of coordinates. He maintained that the non-covariant energy momentum pseudotensor was, in fact, the best description of the energy momentum distribution in a gravitational field. This approach has been echoed by Lev Landau and Evgeny Lifshitz, and others, and has become standard. The use of non-covariant objects like pseudotensors was heavily criticized in 1917 by Erwin Schrödinger and others. Wormholes In 1935, Einstein collaborated with Nathan Rosen to produce a model of a wormhole, often called Einstein–Rosen bridges. His motivation was to model elementary particles with charge as a solution of gravitational field equations, in line with the program outlined in the paper "Do Gravitational Fields play an Important Role in the Constitution of the Elementary Particles?". These solutions cut and pasted Schwarzschild black holes to make a bridge between two patches. If one end of a wormhole was positively charged, the other end would be negatively charged. These properties led Einstein to believe that pairs of particles and antiparticles could be described in this way. Einstein–Cartan theory In order to incorporate spinning point particles into general relativity, the affine connection needed to be generalized to include an antisymmetric part, called the torsion. This modification was made by Einstein and Cartan in the 1920s. Equations of motion The theory of general relativity has a fundamental law: the Einstein field equations, which describe how space curves. The geodesic equation, which describes how particles move, may be derived from the Einstein field equations. Since the equations of general relativity are non-linear, a lump of energy made out of pure gravitational fields, like a black hole, would move on a trajectory which is determined by the Einstein field equations themselves, not by a new law. So Einstein proposed that the path of a singular solution, like a black hole, would be determined to be a geodesic from general relativity itself. This was established by Einstein, Infeld, and Hoffmann for pointlike objects without angular momentum, and by Roy Kerr for spinning objects. Old quantum theory Photons and energy quanta In a 1905 paper, Einstein postulated that light itself consists of localized particles (quanta). Einstein's light quanta were rejected by nearly all physicists, including Max Planck and Niels Bohr. This idea only became universally accepted in 1919, with Robert Millikan's detailed experiments on the photoelectric effect, and with the measurement of Compton scattering. Einstein concluded that each wave of frequency f is associated with a collection of photons with energy hf each, where h is Planck's constant. He does not say much more, because he is not sure how the particles are related to the wave. But he does suggest that this idea would explain certain experimental results, notably the photoelectric effect. Quantized atomic vibrations In 1907, Einstein proposed a model of matter where each atom in a lattice structure is an independent harmonic oscillator. In the Einstein model, each atom oscillates independently—a series of equally spaced quantized states for each oscillator. 
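A compact statement of the model's prediction (the standard textbook form, not a quotation from Einstein's 1907 paper) is the molar heat capacity

\[ C_{V} = 3R\left(\frac{\theta_{E}}{T}\right)^{2} \frac{e^{\theta_{E}/T}}{\left(e^{\theta_{E}/T}-1\right)^{2}}, \qquad \theta_{E} = \frac{h\nu}{k_{B}}, \]

where \(\nu\) is the single vibration frequency assumed for every atom; this recovers the classical Dulong–Petit value 3R at high temperature and falls toward zero as the temperature decreases, in qualitative agreement with the measured specific heats that classical mechanics could not explain.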
Einstein was aware that getting the frequency of the actual oscillations would be difficult, but he nevertheless proposed this theory because it was a particularly clear demonstration that quantum mechanics could solve the specific heat problem in classical mechanics. Peter Debye refined this model. Adiabatic principle and action-angle variables Throughout the 1910s, quantum mechanics expanded in scope to cover many different systems. After Ernest Rutherford discovered the nucleus and proposed that electrons orbit like planets, Niels Bohr was able to show that the same quantum mechanical postulates introduced by Planck and developed by Einstein would explain the discrete motion of electrons in atoms, and the periodic table of the elements. Einstein contributed to these developments by linking them with the 1898 arguments Wilhelm Wien had made. Wien had shown that the hypothesis of adiabatic invariance of a thermal equilibrium state allows all the blackbody curves at different temperature to be derived from one another by a simple shifting process. Einstein noted in 1911 that the same adiabatic principle shows that the quantity which is quantized in any mechanical motion must be an adiabatic invariant. Arnold Sommerfeld identified this adiabatic invariant as the action variable of classical mechanics. Bose–Einstein statistics In 1924, Einstein received a description of a statistical model from Indian physicist Satyendra Nath Bose, based on a counting method that assumed that light could be understood as a gas of indistinguishable particles. Einstein noted that Bose's statistics applied to some atoms as well as to the proposed light particles, and submitted his translation of Bose's paper to the Zeitschrift für Physik. Einstein also published his own articles describing the model and its implications, among them the Bose–Einstein condensate phenomenon that some particulates should appear at very low temperatures. It was not until 1995 that the first such condensate was produced experimentally by Eric Allin Cornell and Carl Wieman using ultra-cooling equipment built at the NIST–JILA laboratory at the University of Colorado at Boulder. Bose–Einstein statistics are now used to describe the behaviors of any assembly of bosons. Einstein's sketches for this project may be seen in the Einstein Archive in the library of the Leiden University. Wave–particle duality Although the patent office promoted Einstein to Technical Examiner Second Class in 1906, he had not given up on academia. In 1908, he became a Privatdozent at the University of Bern. In "Über die Entwicklung unserer Anschauungen über das Wesen und die Konstitution der Strahlung" ("The Development of our Views on the Composition and Essence of Radiation"), on the quantization of light, and in an earlier 1909 paper, Einstein showed that Max Planck's energy quanta must have well-defined momenta and act in some respects as independent, point-like particles. This paper introduced the photon concept (although the name photon was introduced later by Gilbert N. Lewis in 1926) and inspired the notion of wave–particle duality in quantum mechanics. Einstein saw this wave–particle duality in radiation as concrete evidence for his conviction that physics needed a new, unified foundation. Zero-point energy In a series of works completed from 1911 to 1913, Planck reformulated his 1900 quantum theory and introduced the idea of zero-point energy in his "second quantum theory". 
Soon, this idea attracted the attention of Einstein and his assistant Otto Stern. Assuming the energy of rotating diatomic molecules contains zero-point energy, they then compared the theoretical specific heat of hydrogen gas with the experimental data. The numbers matched nicely. However, after publishing the findings, they promptly withdrew their support, because they no longer had confidence in the correctness of the idea of zero-point energy. Stimulated emission In 1917, at the height of his work on relativity, Einstein published an article in Physikalische Zeitschrift that proposed the possibility of stimulated emission, the physical process that makes possible the maser and the laser. This article showed that the statistics of absorption and emission of light would only be consistent with Planck's distribution law if the emission of light into a mode with n photons would be enhanced statistically compared to the emission of light into an empty mode. This paper was enormously influential in the later development of quantum mechanics, because it was the first paper to show that the statistics of atomic transitions had simple laws. Matter waves Einstein discovered Louis de Broglie's work and supported his ideas, which were received skeptically at first. In another major paper from this era, Einstein gave a wave equation for de Broglie waves, which Einstein suggested was the Hamilton–Jacobi equation of mechanics. This paper would inspire Schrödinger's work of 1926. Quantum mechanics Einstein's objections to quantum mechanics Einstein played a major role in developing quantum theory, beginning with his 1905 paper on the photoelectric effect. However, he became displeased with modern quantum mechanics as it had evolved after 1925, despite its acceptance by other physicists. He was skeptical that the randomness of quantum mechanics was fundamental rather than the result of determinism, stating that God "is not playing at dice". Until the end of his life, he continued to maintain that quantum mechanics was incomplete. Bohr versus Einstein The Bohr–Einstein debates were a series of public disputes about quantum mechanics between Einstein and Niels Bohr, who were two of its founders. Their debates are remembered because of their importance to the philosophy of science. Their debates would influence later interpretations of quantum mechanics. Einstein–Podolsky–Rosen paradox In 1935, Einstein returned to quantum mechanics, in particular to the question of its completeness, in the "EPR paper". In a thought experiment, he considered two particles, which had interacted such that their properties were strongly correlated. No matter how far the two particles were separated, a precise position measurement on one particle would result in equally precise knowledge of the position of the other particle; likewise, a precise momentum measurement of one particle would result in equally precise knowledge of the momentum of the other particle, without needing to disturb the other particle in any way. Given Einstein's concept of local realism, there were two possibilities: (1) either the other particle had these properties already determined, or (2) the process of measuring the first particle instantaneously affected the reality of the position and momentum of the second particle. Einstein rejected this second possibility (popularly called "spooky action at a distance"). 
Einstein's belief in local realism led him to assert that, while the correctness of quantum mechanics was not in question, it must be incomplete. But as a physical principle, local realism was shown to be incorrect when the Aspect experiment of 1982 confirmed Bell's theorem, which J. S. Bell had delineated in 1964. The results of these and subsequent experiments demonstrate that quantum physics cannot be represented by any version of the picture of physics in which "particles are regarded as unconnected independent classical-like entities, each one being unable to communicate with the other after they have separated." Although Einstein was wrong about local realism, his clear prediction of the unusual properties of its opposite, entangled quantum states, has resulted in the EPR paper becoming among the top ten papers published in Physical Review. It is considered a centerpiece of the development of quantum information theory. Unified field theory Following his research on general relativity, Einstein attempted to generalize his theory of gravitation to include electromagnetism as aspects of a single entity. In 1950, he described his "unified field theory" in a Scientific American article titled "On the Generalized Theory of Gravitation". Although he was lauded for this work, his efforts were ultimately unsuccessful. Notably, Einstein's unification project did not accommodate the strong and weak nuclear forces, neither of which were well understood until many years after his death. Although mainstream physics long ignored Einstein's approaches to unification, Einstein's work has motivated modern quests for a theory of everything, in particular string theory, where geometrical fields emerge in a unified quantum-mechanical setting. Other investigations Einstein conducted other investigations that were unsuccessful and abandoned. These pertain to force, superconductivity, and other research. Collaboration with other scientists In addition to longtime collaborators Leopold Infeld, Nathan Rosen, Peter Bergmann and others, Einstein also had some one-shot collaborations with various scientists. Einstein–de Haas experiment Einstein and De Haas demonstrated that magnetization is due to the motion of electrons, nowadays known to be the spin. In order to show this, they reversed the magnetization in an iron bar suspended on a torsion pendulum. They confirmed that this leads the bar to rotate, because the electron's angular momentum changes as the magnetization changes. This experiment needed to be sensitive because the angular momentum associated with electrons is small, but it definitively established that electron motion of some kind is responsible for magnetization. Schrödinger gas model Einstein suggested to Erwin Schrödinger that he might be able to reproduce the statistics of a Bose–Einstein gas by considering a box. Then to each possible quantum motion of a particle in a box associate an independent harmonic oscillator. Quantizing these oscillators, each level will have an integer occupation number, which will be the number of particles in it. This formulation is a form of second quantization, but it predates modern quantum mechanics. Erwin Schrödinger applied this to derive the thermodynamic properties of a semiclassical ideal gas. Schrödinger urged Einstein to add his name as co-author, although Einstein declined the invitation. Einstein refrigerator In 1926, Einstein and his former student Leó Szilárd co-invented (and in 1930, patented) the Einstein refrigerator. 
This absorption refrigerator was then revolutionary for having no moving parts and using only heat as an input. On 11 November 1930, a patent for the refrigerator was awarded to Einstein and Leó Szilárd. Their invention was not immediately put into commercial production, and the most promising of their patents were acquired by the Swedish company Electrolux. Non-scientific legacy While traveling, Einstein wrote daily to his wife Elsa and adopted stepdaughters Margot and Ilse. The letters were included in the papers bequeathed to the Hebrew University of Jerusalem. Margot Einstein permitted the personal letters to be made available to the public, but requested that it not be done until twenty years after her death (she died in 1986). Barbara Wolff, of the Hebrew University's Albert Einstein Archives, told the BBC that there are about 3,500 pages of private correspondence written between 1912 and 1955. Einstein's right of publicity was litigated in 2015 in a federal district court in California. Although the court initially held that the right had expired, that ruling was immediately appealed, and the decision was later vacated in its entirety. The underlying claims between the parties in that lawsuit were ultimately settled. The right is enforceable, and the Hebrew University of Jerusalem is the exclusive representative of that right. Corbis, successor to The Roger Richman Agency, licenses the use of his name and associated imagery, as agent for the university. In popular culture Einstein became one of the most famous scientific celebrities, beginning with the confirmation of his theory of general relativity in 1919. Despite the general public having little understanding of his work, he was widely recognized and received adulation and publicity. In the period before World War II, The New Yorker published a vignette in their "The Talk of the Town" feature saying that Einstein was so well known in America that he would be stopped on the street by people wanting him to explain "that theory". He finally figured out a way to handle the incessant inquiries. He told his inquirers "Pardon me, sorry! Always I am mistaken for Professor Einstein." Einstein has been the subject of or inspiration for many novels, films, plays, and works of music. He is a favorite model for depictions of absent-minded professors; his expressive face and distinctive hairstyle have been widely copied and exaggerated. Time magazine's Frederic Golden wrote that Einstein was "a cartoonist's dream come true". Many popular quotations are often misattributed to him. Awards and honors Einstein received numerous awards and honors, and in 1922, he was awarded the 1921 Nobel Prize in Physics "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect". None of the nominations in 1921 met the criteria set by Alfred Nobel, so the 1921 prize was carried forward and awarded to Einstein in 1922. Publications Einstein published a large body of scientific and non-scientific writings; further information about the volumes of his collected papers published so far can be found on the webpages of the Einstein Papers Project and on the Princeton University Press Einstein Page. 
See also
Albert Einstein House in Princeton
Einstein's thought experiments
Einstein notation
The Einstein Theory of Relativity, an educational film
Frist Campus Center at Princeton University, room 302 of which is associated with Einstein (the center was once the Palmer Physical Laboratory)
Heinrich Burkhardt
Bern Historical Museum (Einstein Museum)
History of gravitational theory
List of coupled cousins
List of German inventors and discoverers
Jewish Nobel laureates
List of peace activists
Relativity priority dispute
Sticky bead argument
Albert Einstein
In mathematics and computer science, an algorithm () is a finite sequence of well-defined instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. By making use of artificial intelligence, algorithms can perform automated deductions (referred to as automated reasoning) and use mathematical and logical tests to divert the code through various routes (referred to as automated decision-making). Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as "memory", "search" and "stimulus". In contrast, a heuristic is an approach to problem solving that may not be fully specified or may not guarantee correct or optimal results, especially in problem domains where there is no well-defined correct or optimal result. As an effective method, an algorithm can be expressed within a finite amount of space and time, and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. History The concept of algorithm has existed since antiquity. Arithmetic algorithms, such as a division algorithm, were used by ancient Babylonian mathematicians c. 2500 BC and Egyptian mathematicians c. 1550 BC. Greek mathematicians later used algorithms in 240 BC in the sieve of Eratosthenes for finding prime numbers, and the Euclidean algorithm for finding the greatest common divisor of two numbers. Arabic mathematicians such as al-Kindi in the 9th century used cryptographic algorithms for code-breaking, based on frequency analysis. The word algorithm is derived from the name of the 9th-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī, whose nisba (identifying him as from Khwarazm) was Latinized as Algoritmi (Arabized Persian الخوارزمی c. 780–850). Muḥammad ibn Mūsā al-Khwārizmī was a mathematician, astronomer, geographer, and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century. The manuscript starts with the phrase Dixit Algorizmi ('Thus spake Al-Khwarizmi'), where "Algorizmi" was the translator's Latinization of Al-Khwarizmi's name. Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, algorismus, English 'algorism', the corruption of his name, simply meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός (arithmos), 'number' (cf. 'arithmetic'), the Latin word was altered to algorithmus, and the corresponding English term 'algorithm' is first attested in the 17th century; the modern sense was introduced in the 19th century. Indian mathematics was predominantly algorithmic. 
Algorithms that are representative of the Indian mathematical tradition range from the ancient Śulbasūtrās to the medieval texts of the Kerala School. In English, the word algorithm was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it was not until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. The poem is a few hundred lines long and summarizes the art of calculating with the new-styled Indian dice (Tali Indorum), or Hindu numerals. A partial formalization of the modern concept of algorithm began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert in 1928. Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. Informal definition An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and (for example) any prescribed bureaucratic procedure or cook-book recipe. In general, a program is only an algorithm if it stops eventually—even though infinite loops may sometimes prove desirable. A prototypical example of an algorithm is the Euclidean algorithm, which is used to determine the greatest common divisor of two integers; one version (there are others) is described by the flowchart above and as an example in a later section. Boolos and Jeffrey offer an informal meaning of the word "algorithm" in the following quotation: No human being can write fast enough, or long enough, or small enough† ( †"smaller and smaller without limit ... you'd be trying to write on molecules, on atoms, on electrons") to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only very elementary operations on symbols. An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large.
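To make the Boolos–Jeffrey idea of "explicit instructions for determining the nth member of the set" concrete, the following is a minimal C sketch; the function name and the choice of set (the even natural numbers) are illustrative assumptions, not taken from the source.

    #include <stdio.h>

    /* Return the nth member (counting from 0) of the enumerably infinite
       set of even natural numbers {0, 2, 4, 6, ...}. The same finite list
       of instructions works for arbitrary finite n. */
    unsigned long nth_even(unsigned long n) {
        return 2UL * n;   /* the explicit rule: the nth member is 2*n */
    }

    int main(void) {
        for (unsigned long n = 0; n < 5; n++)
            printf("member %lu = %lu\n", n, nth_even(n));
        return 0;
    }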
For example, an algorithm can be an algebraic equation such as y = m + n (i.e., two arbitrary "input variables" m and n that produce an output y), but various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of (for the addition example): Precise instructions (in a language understood by "the computer") for a fast, efficient, "good" process that specifies the "moves" of "the computer" (machine or human, equipped with the necessary internally contained information and capabilities) to find, decode, and then process arbitrary input integers/symbols m and n, symbols + and = ... and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format. The concept of algorithm is also used to define the notion of decidability—a notion that is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to the customary physical dimension. From such uncertainties, that characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term. Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device. Formalization Algorithms are essential to the way computers process data. Many computer programs contain algorithms that detail the specific instructions a computer should perform—in a specific order—to carry out a specified task, such as calculating employees' paychecks or printing students' report cards. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Authors who assert this thesis include Minsky (1967), Savage (1987) and Gurevich (2000): Minsky: "But we will also maintain, with Turing ... that any procedure which could "naturally" be called effective, can, in fact, be realized by a (simple) machine. Although this may seem extreme, the arguments ... in its favor are hard to refute". Gurevich: "… Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine … according to Savage [1987], an algorithm is a computational process defined by a Turing machine".Turing machines can define computational processes that do not terminate. The informal definitions of algorithms generally require that the algorithm always terminates. This requirement renders the task of deciding whether a formal procedure is an algorithm impossible in the general case—due to a major theorem of computability theory known as the halting problem. Typically, when an algorithm is associated with processing information, data can be read from an input source, written to an output device and stored for further processing. Stored data are regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures. For some of these computational processes, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. 
This means that any conditional steps must be systematically dealt with, case-by-case; the criteria for each case must be clear (and computable). Because an algorithm is a precise list of precise steps, the order of computation is always crucial to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting "from the top" and going "down to the bottom"—an idea that is described more formally by flow of control. So far, the discussion on the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception—one which attempts to describe a task in discrete, "mechanical" means. Unique to this conception of formalized algorithms is the assignment operation, which sets the value of a variable. It derives from the intuition of "memory" as a scratchpad. An example of such an assignment can be found below. For some alternate conceptions of what constitutes an algorithm, see functional programming and logic programming. Expressing algorithms Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts and control tables are structured ways to express algorithms that avoid many of the ambiguities common in the statements based on natural language. Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are also often used as a way to define or document algorithms. There is a wide variety of representations possible and one can express a given Turing machine program as a sequence of machine tables (see finite-state machine, state transition table and control table for more), as flowcharts and drakon-charts (see state diagram for more), or as a form of rudimentary machine code or assembly code called "sets of quadruples" (see Turing machine for more). Representations of algorithms can be classed into three accepted levels of Turing machine description, as follows: 1 High-level description "...prose to describe an algorithm, ignoring the implementation details. At this level, we do not need to mention how the machine manages its tape or head." 2 Implementation description "...prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level, we do not give details of states or transition function." 3 Formal description Most detailed, "lowest level", gives the Turing machine's "state table". For an example of the simple algorithm "Add m+n" described in all three levels, see Examples. Design Algorithm design refers to a method or a mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories of operation research, such as dynamic programming and divide-and-conquer. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe e.g. an algorithm's run-time growth as the size of its input increases. 
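As a sketch of the run-time growth that big O notation describes, the following C fragment contrasts a sequential scan, whose number of comparisons grows linearly with the input size, with binary search on a sorted array, whose number of comparisons grows only logarithmically. The function names and the sample data are illustrative assumptions, not taken from the source.

    #include <stdio.h>

    /* Sequential search: may examine every one of the n elements -> O(n). */
    int linear_search(const int *a, int n, int key) {
        for (int i = 0; i < n; i++)
            if (a[i] == key) return i;
        return -1;
    }

    /* Binary search on a sorted array: halves the remaining range
       at every step -> O(log n). */
    int binary_search(const int *a, int n, int key) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == key) return mid;
            if (a[mid] < key)  lo = mid + 1;
            else               hi = mid - 1;
        }
        return -1;
    }

    int main(void) {
        int sorted[] = {2, 3, 5, 7, 11, 13, 17, 19};
        int n = sizeof sorted / sizeof sorted[0];
        printf("linear search: 13 at index %d\n", linear_search(sorted, n, 13));
        printf("binary search: 13 at index %d\n", binary_search(sorted, n, 13));
        return 0;
    }

On eight elements the difference is negligible, but as the input grows the logarithmic version pulls ahead, which is what the O(n) versus O(log n) comparison later in the article expresses.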
Typical steps in the development of algorithms:
Problem definition
Development of a model
Specification of the algorithm
Designing an algorithm
Checking the correctness of the algorithm
Analysis of algorithm
Implementation of algorithm
Program testing
Documentation preparation
Computer algorithms "Elegant" (compact) programs, "good" (fast) programs: The notion of "simplicity and elegance" appears informally in Knuth and precisely in Chaitin: Knuth: " ... we want good algorithms in some loosely defined aesthetic sense. One criterion ... is the length of time taken to perform the algorithm .... Other criteria are adaptability of the algorithm to computers, its simplicity and elegance, etc." Chaitin: " ... a program is 'elegant,' by which I mean that it's the smallest possible program for producing the output that it does" Chaitin prefaces his definition with: "I'll show you can't prove that a program is 'elegant'"—such a proof would solve the Halting problem (ibid). Algorithm versus function computable by an algorithm: For a given function multiple algorithms may exist. This is true even without expanding the instruction set available to the programmer. Rogers observes that "It is ... important to distinguish between the notion of algorithm, i.e. procedure and the notion of function computable by algorithm, i.e. mapping yielded by procedure. The same function may have several different algorithms". Unfortunately, there may be a tradeoff between goodness (speed) and elegance (compactness)—an elegant program may take more steps to complete a computation than one less elegant. An example that uses Euclid's algorithm appears below. Computers (and computors), models of computation: A computer (or human "computor") is a restricted type of machine, a "discrete deterministic mechanical device" that blindly follows its instructions. Melzak's and Lambek's primitive models reduced this notion to four elements: (i) discrete, distinguishable locations, (ii) discrete, indistinguishable counters, (iii) an agent, and (iv) a list of instructions that are effective relative to the capability of the agent. Minsky describes a more congenial variation of Lambek's "abacus" model in his "Very Simple Bases for Computability". Minsky's machine proceeds sequentially through its five (or six, depending on how one counts) instructions unless either a conditional IF-THEN GOTO or an unconditional GOTO changes program flow out of sequence. Besides HALT, Minsky's machine includes three assignment (replacement, substitution) operations: ZERO (e.g. the contents of location replaced by 0: L ← 0), SUCCESSOR (e.g. L ← L+1), and DECREMENT (e.g. L ← L − 1). Rarely must a programmer write "code" with such a limited instruction set. But Minsky shows (as do Melzak and Lambek) that his machine is Turing complete with only four general types of instructions: conditional GOTO, unconditional GOTO, assignment/replacement/substitution, and HALT. However, a few different assignment instructions (e.g. DECREMENT, INCREMENT, and ZERO/CLEAR/EMPTY for a Minsky machine) are also required for Turing-completeness; their exact specification is somewhat up to the designer. The unconditional GOTO is a convenience; it can be constructed by initializing a dedicated location to zero e.g. the instruction " Z ← 0 "; thereafter the instruction IF Z=0 THEN GOTO xxx is unconditional. Simulation of an algorithm: computer (computor) language: Knuth advises the reader that "the best way to learn an algorithm is to try it . . .
immediately take pen and paper and work through an example". But what about a simulation or execution of the real thing? The programmer must translate the algorithm into a language that the simulator/computer/computor can effectively execute. Stone gives an example of this: when computing the roots of a quadratic equation the computor must know how to take a square root. If they don't, then the algorithm, to be effective, must provide a set of rules for extracting a square root. This means that the programmer must know a "language" that is effective relative to the target computing agent (computer/computor). But what model should be used for the simulation? Van Emde Boas observes "even if we base complexity theory on abstract instead of concrete machines, arbitrariness of the choice of a model remains. It is at this point that the notion of simulation enters". When speed is being measured, the instruction set matters. For example, the subprogram in Euclid's algorithm to compute the remainder would execute much faster if the programmer had a "modulus" instruction available rather than just subtraction (or worse: just Minsky's "decrement"). Structured programming, canonical structures: Per the Church–Turing thesis, any algorithm can be computed by a model known to be Turing complete, and per Minsky's demonstrations, Turing completeness requires only four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction. Canonical flowchart symbols: The graphical aide called a flowchart, offers a way to describe and document an algorithm (and a computer program of one). Like the program flow of a Minsky machine, a flowchart always starts at the top of a page and proceeds down. Its primary symbols are only four: the directed arrow showing program flow, the rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-ELSE), and the dot (OR-tie). The Böhm–Jacopini canonical structures are made of these primitive shapes. Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. The symbols, and their use to build the canonical structures are shown in the diagram. Examples Algorithm example One of the simplest algorithms is to find the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be stated in a high-level description in English prose, as: High-level description: If there are no numbers in the set then there is no highest number. Assume the first number in the set is the largest number in the set. For each remaining number in the set: if this number is larger than the current largest number, consider this number to be the largest number in the set. When there are no numbers left in the set to iterate over, consider the current largest number to be the largest number of the set. 
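The article's own (quasi-)formal pseudocode for this procedure follows below; purely for comparison, the same high-level description can also be sketched as a C-family function. The helper name and calling convention are illustrative assumptions (the empty-list case is reported through the return value, since C has no "null" number).

    #include <stdio.h>

    /* Return 1 and store the largest element of list[0..size-1] in *largest,
       or return 0 if the list is empty (no numbers: no highest number). */
    int find_largest(const int *list, int size, int *largest) {
        if (size == 0) return 0;
        *largest = list[0];              /* assume the first number is the largest */
        for (int i = 1; i < size; i++)   /* examine each remaining number */
            if (list[i] > *largest)
                *largest = list[i];      /* found a larger one */
        return 1;                        /* *largest now holds the answer */
    }

    int main(void) {
        int numbers[] = {7, 1, 42, 9, 3};
        int largest;
        if (find_largest(numbers, 5, &largest))
            printf("largest = %d\n", largest);
        return 0;
    }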
(Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:
Input: A list of numbers L.
Output: The largest number in the list L.
if L.size = 0 return null
largest ← L[0]
for each item in L, do
    if item > largest, then
        largest ← item
return largest
Euclid's algorithm In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC). It is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations. Euclid poses the problem thus: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including zero. To "measure" is to place a shorter measuring length s successively (q times) along longer length l until the remaining portion r is less than the shorter length s. In modern words, remainder r = l − q×s, q being the quotient, or remainder r is the "modulus", the integer-fractional part left over after the division. For Euclid's method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be zero, AND (ii) the subtraction must be "proper"; i.e., a test must guarantee that the smaller of the two numbers is subtracted from the larger (or the two can be equal so their subtraction yields zero). Euclid's original proof adds a third requirement: the two lengths must not be prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest. While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another, it yields the number "1" for their common measure. So, to be precise, the following is really Nicomachus' algorithm. Computer language for Euclid's algorithm Only a few instruction types are required to execute Euclid's algorithm—some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction. A location is symbolized by upper case letter(s), e.g. S, A, etc. The varying quantity (number) in a location is written in lower case letter(s) and (usually) associated with the location's name. For example, location L at the start might contain the number l = 3009. An inelegant program for Euclid's algorithm The following algorithm is framed as Knuth's four-step version of Euclid's and Nicomachus', but, rather than using division to find the remainder, it uses successive subtractions of the shorter length s from the remaining length r until r is less than s. The high-level description, shown in boldface, is adapted from Knuth 1973:2–4:
INPUT: [Into two locations L and S put the numbers l and s that represent the two lengths]: INPUT L, S
[Initialize R: make the remaining length r equal to the starting/initial/input length l]: R ← L
E0: [Ensure r ≥ s.] [Ensure the smaller of the two numbers is in S and the larger in R]:
IF R > S THEN the contents of L is the larger number so skip over the exchange-steps 4, 5 and 6: GOTO step 7
ELSE swap the contents of R and S.
L ← R (this first step is redundant, but is useful for later discussion).
R ← S
S ← L
E1: [Find remainder]: Until the remaining length r in R is less than the shorter length s in S, repeatedly subtract the measuring number s in S from the remaining length r in R.
IF S > R THEN done measuring so GOTO 10 ELSE measure again,
R ← R − S
[Remainder-loop]: GOTO 7.
E2: [Is the remainder zero?]: EITHER (i) the last measure was exact, the remainder in R is zero, and the program can halt, OR (ii) the algorithm must continue: the last measure left a remainder in R less than measuring number in S.
IF R = 0 THEN done so GOTO step 15
ELSE CONTINUE TO step 11,
E3: [Interchange s and r]: The nut of Euclid's algorithm. Use remainder r to measure what was previously smaller number s; L serves as a temporary location.
L ← R
R ← S
S ← L
[Repeat the measuring process]: GOTO 7
OUTPUT: [Done. S contains the greatest common divisor]: PRINT S
DONE: HALT, END, STOP.
An elegant program for Euclid's algorithm The flowchart of "Elegant" can be found at the top of this article. In the (unstructured) Basic language, the steps are numbered, and the instruction LET [] = [] is the assignment instruction symbolized by ←.
5 REM Euclid's algorithm for greatest common divisor
6 PRINT "Type two integers greater than 0"
10 INPUT A,B
20 IF B=0 THEN GOTO 80
30 IF A > B THEN GOTO 60
40 LET B=B-A
50 GOTO 20
60 LET A=A-B
70 GOTO 20
80 PRINT A
90 END
How "Elegant" works: In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops", an A > B loop that computes A ← A − B, and a B ≤ A loop that computes B ← B − A. This works because, when at last the minuend M is less than or equal to the subtrahend S (Difference = Minuend − Subtrahend), the minuend can become s (the new measuring length) and the subtrahend can become the new r (the length to be measured); in other words the "sense" of the subtraction reverses. The following version can be used with programming languages from the C-family:
#include <stdlib.h>   /* for abs() */

// Euclid's algorithm for greatest common divisor
int euclidAlgorithm (int A, int B) {
    A = abs(A);
    B = abs(B);
    while (B != 0) {
        while (A > B)
            A = A - B;   // subtract the smaller value from the larger ...
        B = B - A;       // ... switching roles once A is no longer larger
    }
    return A;
}
Testing the Euclid algorithms Does an algorithm do what its author wants it to do? A few test cases usually give some confidence in the core functionality. But tests are not enough. For test cases, one source uses 3009 and 884. Knuth suggested 40902, 24140. Another interesting case is the two relatively prime numbers 14157 and 5950. But "exceptional cases" must be identified and tested. Will "Inelegant" perform properly when R > S, S > R, R = S? Ditto for "Elegant": B > A, A > B, A = B? (Yes to all). What happens when one number is zero, both numbers are zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if negative numbers are entered? Fractional numbers? If the input numbers, i.e. the domain of the function computed by the algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due to exceptions is the Ariane 5 Flight 501 rocket failure (June 4, 1996). Proof of program correctness by use of mathematical induction: Knuth demonstrates the application of mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method applicable to proving the validity of any algorithm".
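Complementing such proofs, the informal test cases above can also be run mechanically. The following harness is a sketch, not from the source: it reproduces the article's C-family euclidAlgorithm so that it compiles on its own, and exercises it on the values named in the text (3009 and 884, Knuth's 40902 and 24140, and the relatively prime pair 14157 and 5950, whose GCD should be 1). As the text warns, A = 0 with B ≠ 0 makes the subtraction-only version loop forever, so that case is deliberately left out.

    #include <stdio.h>
    #include <stdlib.h>   /* for abs() */

    /* The article's subtraction-only Euclid, reproduced here so the
       harness is self-contained. */
    int euclidAlgorithm(int A, int B) {
        A = abs(A);
        B = abs(B);
        while (B != 0) {
            while (A > B)
                A = A - B;
            B = B - A;
        }
        return A;
    }

    int main(void) {
        int cases[][2] = { {3009, 884}, {40902, 24140}, {14157, 5950} };
        for (int i = 0; i < 3; i++)
            printf("gcd(%d, %d) = %d\n",
                   cases[i][0], cases[i][1],
                   euclidAlgorithm(cases[i][0], cases[i][1]));
        /* Not exercised: A = 0 with B != 0, which loops forever here. */
        return 0;
    }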
Tausworthe proposes that a measure of the complexity of a program be the length of its correctness proof. Measuring and improving the Euclid algorithms Elegance (compactness) versus goodness (speed): With only six core instructions, "Elegant" is the clear winner, compared to "Inelegant" at thirteen instructions. However, "Inelegant" is faster (it arrives at HALT in fewer steps). Algorithm analysis indicates why this is the case: "Elegant" does two conditional tests in every subtraction loop, whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, on average much time is wasted doing a "B = 0?" test that is needed only after the remainder is computed. Can the algorithms be improved?: Once the programmer judges a program "fit" and "effective"—that is, it computes the function intended by its author—then the question becomes, can it be improved? The compactness of "Inelegant" can be improved by the elimination of five steps. But Chaitin proved that compacting an algorithm cannot be automated by a generalized algorithm; rather, it can only be done heuristically; i.e., by exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. Comparison with "Elegant" provides a hint that these steps, together with steps 2 and 3, can be eliminated. This reduces the number of core instructions from thirteen to eight, which makes it "more elegant" than "Elegant", at nine steps. The speed of "Elegant" can be improved by moving the "B=0?" test outside of the two subtraction loops. This change calls for the addition of three instructions (B = 0?, A = 0?, GOTO). Now "Elegant" computes the example-numbers faster; whether this is always the case for any given A, B, and R, S would require a detailed analysis. Algorithmic analysis It is frequently important to know how much of a particular resource (such as time or storage) is theoretically required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm which adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. At all times the algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. Therefore, it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted. Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n) ) when used for table lookups on sorted lists or arrays. Formal versus empirical The analysis, and study of algorithms is a discipline of computer science, and is often practiced abstractly without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation. Usually pseudocode is used for analysis as it is the simplest and most general representation. 
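The list-summing analysis described above can be written out as a short C sketch (illustrative, not from the source): the loop visits each of the n input elements once, giving O(n) time, while only the running sum and the current position are kept, giving O(1) working space beyond the input itself.

    #include <stdio.h>

    /* Sum a list of n numbers: O(n) time, O(1) extra space
       (only the running sum and the loop index are remembered). */
    long sum_list(const int *a, int n) {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }

    int main(void) {
        int data[] = {3, 1, 4, 1, 5, 9, 2, 6};
        printf("sum = %ld\n", sum_list(data, 8));
        return 0;
    }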
However, ultimately, most algorithms are usually implemented on particular hardware/software platforms and their algorithmic efficiency is eventually put to the test using real code. For the solution of a "one off" problem, the efficiency of a particular algorithm may not have significant consequences (unless n is extremely large) but for algorithms designed for fast interactive, commercial or long life scientific usage it may be critical. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign. Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization. Empirical tests cannot replace formal analysis, though, and are not trivial to perform in a fair manner. Execution efficiency To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power. Classification There are various ways to classify algorithms, each with its own merits. By implementation One way to classify algorithms is by implementation means. Recursion A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition (also known as termination condition) matches, which is a method common to functional programming. Iterative algorithms use repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems. Some problems are naturally suited for one implementation or the other. For example, towers of Hanoi is well understood using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa. Logical An algorithm may be viewed as controlled logical deduction. This notion may be expressed as: Algorithm = logic + control. The logic component expresses the axioms that may be used in the computation and the control component determines the way in which deduction is applied to the axioms. This is the basis for the logic programming paradigm. In pure logic programming languages, the control component is fixed and algorithms are specified by supplying only the logic component. The appeal of this approach is the elegant semantics: a change in the axioms produces a well-defined change in the algorithm. Serial, parallel or distributed Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures where several processors can work on a problem at the same time, whereas distributed algorithms utilize multiple machines connected with a computer network. 
Parallel or distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and collect the results back together. The resource consumption in such algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable. Some problems have no parallel algorithms and are called inherently serial problems. Deterministic or non-deterministic Deterministic algorithms solve the problem with exact decision at every step of the algorithm whereas non-deterministic algorithms solve problems via guessing although typical guesses are made more accurate through the use of heuristics. Exact or approximate While many algorithms reach an exact solution, approximation algorithms seek an approximation that is closer to the true solution. The approximation can be reached by either using a deterministic or a random strategy. Such algorithms have practical value for many hard problems. One of the examples of an approximate algorithm is the Knapsack problem, where there is a set of given items. Its goal is to pack the knapsack to get the maximum total value. Each item has some weight and some value. Total weight that can be carried is no more than some fixed number X. So, the solution must consider weights of items as well as their value. Quantum algorithm They run on a realistic model of quantum computation. The term is usually used for those algorithms which seem inherently quantum, or use some essential feature of Quantum computing such as quantum superposition or quantum entanglement. By design paradigm Another way of classifying algorithms is by their design methodology or paradigm. There is a certain number of paradigms, each different from the other. Furthermore, each of these categories includes many different types of algorithms. Some common paradigms are: Brute-force or exhaustive search This is the naive method of trying every possible solution to see which is best. Divide and conquer A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily. One such example of divide and conquer is merge sorting. Sorting can be done on each segment of data after dividing data into segments and sorting of entire data can be obtained in the conquer phase by merging the segments. A simpler variant of divide and conquer is called a decrease and conquer algorithm, which solves an identical subproblem and uses the solution of this subproblem to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems and so the conquer stage is more complex than decrease and conquer algorithms. An example of a decrease and conquer algorithm is the binary search algorithm. Search and enumeration Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration and backtracking. Randomized algorithm Such algorithms make some choices randomly (or pseudo-randomly). They can be very useful in finding approximate solutions for problems where finding exact solutions can be impractical (see heuristic method below). 
For some of these problems, it is known that the fastest approximations must involve some randomness. Whether randomized algorithms with polynomial time complexity can be the fastest algorithms for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms: Monte Carlo algorithms return a correct answer with high-probability. E.g. RP is the subclass of these that run in polynomial time. Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bound, e.g. ZPP. Reduction of complexity This technique involves solving a difficult problem by transforming it into a better-known problem for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithm's. For example, one selection algorithm for finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer. Back tracking In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution. Optimization problems For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following: Linear programming When searching for optimal solutions to a linear function bound to linear equality and inequality constraints, the constraints of the problem can be used directly in producing the optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem additionally requires that one or more of the unknowns must be an integer then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem. Dynamic programming When a problem shows optimal substructures—meaning the optimal solution to a problem can be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions that have already been computed. For example, Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is in caching or memoization of recursive calls. When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems. 
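As a concrete sketch of the caching of recursive calls described here (the function and the Fibonacci example are illustrative assumptions, not taken from the source), the memoized version below stores each subproblem's answer the first time it is computed, so the overlapping subproblems of the naive recursion are solved only once.

    #include <stdio.h>

    #define N 50

    /* Table of subproblems already solved; 0 means "not computed yet". */
    static long long memo[N + 1];

    /* Memoized Fibonacci: each overlapping subproblem fib(k) is computed
       once and then looked up, turning exponential recursion into linear work. */
    long long fib(int n) {
        if (n <= 1) return n;
        if (memo[n] != 0) return memo[n];    /* reuse a cached subproblem */
        memo[n] = fib(n - 1) + fib(n - 2);   /* solve it once and remember it */
        return memo[n];
    }

    int main(void) {
        printf("fib(%d) = %lld\n", N, fib(N));
        return 0;
    }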
By using memoization or maintaining a table of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity. The greedy method A greedy algorithm is similar to a dynamic programming algorithm in that it works by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution, which may be given or have been constructed in some way, and improve it by making small modifications. For some problems they can find the optimal solution while for others they stop at local optima, that is, at solutions that cannot be improved by the algorithm but are not optimum. The most popular use of greedy algorithms is for finding the minimal spanning tree where finding the optimal solution is possible with this method. Huffman Tree, Kruskal, Prim, Sollin are greedy algorithms that can solve this optimization problem. The heuristic method In optimization problems, heuristic algorithms can be used to find a solution close to the optimal solution in cases where finding the optimal solution is impractical. These algorithms work by getting closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. Their merit is that they can find a solution very close to the optimal solution in a relatively short time. Such algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some of them, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm. By field of study Every field of science has its own problems and needs efficient algorithms. Related problems in one field are often studied together. Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms, medical algorithms, machine learning, cryptography, data compression algorithms and parsing techniques. Fields tend to overlap with each other, and algorithm advances in one field may improve those of other, sometimes completely unrelated, fields. For example, dynamic programming was invented for optimization of resource consumption in industry but is now used in solving a broad range of problems in many fields. By complexity Algorithms can be classified by the amount of time they need to complete compared to their input size: Constant time: if the time needed by the algorithm is the same, regardless of the input size. E.g. an access to an array element. Logarithmic time: if the time is a logarithmic function of the input size. E.g. binary search algorithm. Linear time: if the time is proportional to the input size. E.g. the traverse of a list. Polynomial time: if the time is a power of the input size. E.g. the bubble sort algorithm has quadratic time complexity. Exponential time: if the time is an exponential function of the input size. E.g. Brute-force search. Some problems may have multiple algorithms of differing complexity, while other problems might have no algorithms or no known efficient algorithms. There are also mappings from some problems to other problems. 
Owing to this, it was found to be more suitable to classify the problems themselves instead of the algorithms into equivalence classes based on the complexity of the best possible algorithms for them. Continuous algorithms The adjective "continuous" when applied to the word "algorithm" can mean: An algorithm operating on data that represents continuous quantities, even though this data is represented by discrete approximations—such algorithms are studied in numerical analysis; or An algorithm in the form of a differential equation that operates continuously on the data, running on an analog computer. Legal issues Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), and hence algorithms are not patentable (as in Gottschalk v. Benson). However practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is highly controversial, and there are highly criticized patents involving algorithms, especially data compression algorithms, such as Unisys' LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography). History: Development of the notion of "algorithm" Ancient Near East The earliest evidence of algorithms is found in the Babylonian mathematics of ancient Mesopotamia (modern Iraq). A Sumerian clay tablet found in Shuruppak near Baghdad and dated to circa 2500 BC described the earliest division algorithm. During the Hammurabi dynasty circa 1800-1600 BC, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events. Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus circa 1550 BC. Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC). Discrete and distinguishable symbols Tally-marks: To keep track of their flocks, their sacks of grain and their money the ancients used tallying: accumulating stones or marks scratched on sticks or making discrete symbols in clay. Through the Babylonian and Egyptian use of marks and symbols, eventually Roman numerals and the abacus evolved (Dilson, p. 16–41). Tally marks appear prominently in unary numeral system arithmetic used in Turing machine and Post–Turing machine computations. Manipulation of symbols as "place holders" for numbers: algebra Muhammad ibn Mūsā al-Khwārizmī, a Persian mathematician, wrote the Al-jabr in the 9th century. The terms "algorism" and "algorithm" are derived from the name al-Khwārizmī, while the term "algebra" is derived from the book Al-jabr. In Europe, the word "algorithm" was originally used to refer to the sets of rules and techniques used by Al-Khwarizmi to solve algebraic equations, before later being generalized to refer to any set of rules or techniques. 
This eventually culminated in Leibniz's notion of the calculus ratiocinator (ca 1680): Cryptographic algorithms The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm. Mechanical contrivances with discrete states The clock: Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the Middle Ages]", in particular, the verge escapement that provides us with the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" beginning in the 13th century and finally to "computational machines"—the difference engine and analytical engines of Charles Babbage and Countess Ada Lovelace, mid-19th century. Lovelace is credited with the first creation of an algorithm intended for processing on a computer—Babbage's analytical engine, the first device considered a real Turing-complete computer instead of just a calculator—and is sometimes called "history's first programmer" as a result, though a full implementation of Babbage's second device would not be realized until decades after her lifetime. Logical machines 1870 – Stanley Jevons' "logical abacus" and "logical machine": The technical problem was to reduce Boolean equations when presented in a form similar to what is now known as Karnaugh maps. Jevons (1880) describes first a simple "abacus" of "slips of wood furnished with pins, contrived so that any part or class of the [logical] combinations can be picked out mechanically ... More recently, however, I have reduced the system to a completely mechanical form, and have thus embodied the whole of the indirect process of inference in what may be called a Logical Machine" His machine came equipped with "certain moveable wooden rods" and "at the foot are 21 keys like those of a piano [etc.] ...". With this machine he could analyze a "syllogism or any other simple logical argument". This machine he displayed in 1870 before the Fellows of the Royal Society. Another logician John Venn, however, in his 1881 Symbolic Logic, turned a jaundiced eye to this effort: "I have no high estimate myself of the interest or importance of what are sometimes called logical machines ... it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines"; see more at Algorithm characterizations. But not to be outdone he too presented "a plan somewhat analogous, I apprehend, to Prof. Jevon's abacus ... [And] [a]gain, corresponding to Prof. Jevons's logical machine, the following contrivance may be described. I prefer to call it merely a logical-diagram machine ... but I suppose that it could do very completely all that can be rationally expected of any logical machine". Jacquard loom, Hollerith punch cards, telegraphy and telephony – the electromechanical relay: Bell and Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards (punch cards, 1887), and "telephone switching technologies" were the roots of a tree leading to the development of the first computers. By the mid-19th century the telegraph, the precursor of the telephone, was in use throughout the world, its discrete and distinguishable encoding of letters as "dots and dashes" a common sound. 
By the late 19th century the ticker tape (ca 1870s) was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the teleprinter (ca. 1910) with its punched-paper use of Baudot code on tape. Telephone-switching networks of electromechanical relays (invented 1835) were behind the work of George Stibitz (1937), the inventor of the digital adding device. As he worked in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device". Davis (2000) observes the particular importance of the electromechanical relay (with its two "binary states" open and closed): "It was only with the development, beginning in the 1930s, of electromechanical calculators using electrical relays, that machines were built having the scope Babbage had envisioned." Mathematics during the 19th century up to the mid-20th century Symbols and rules: In rapid succession, the mathematics of George Boole (1847, 1854), Gottlob Frege (1879), and Giuseppe Peano (1888–1889) reduced arithmetic to a sequence of symbols manipulated by rules. Peano's The principles of arithmetic, presented by a new method (1888) was "the first attempt at an axiomatization of mathematics in a symbolic language". But van Heijenoort gives Frege (1879) this kudos: Frege's is "perhaps the most important single work ever written in logic. ... in which we see a "'formula language', that is a lingua characterica, a language written with special symbols, "for pure thought", that is, free from rhetorical embellishments ... constructed from specific symbols that are manipulated according to definite rules". The work of Frege was further simplified and amplified by Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913). The paradoxes: At the same time a number of disturbing paradoxes appeared in the literature, in particular, the Burali-Forti paradox (1897), the Russell paradox (1902–03), and the Richard Paradox. The resultant considerations led to Kurt Gödel's paper (1931)—he specifically cites the paradox of the liar—that completely reduces rules of recursion to numbers. Effective calculability: In an effort to solve the Entscheidungsproblem defined precisely by Hilbert in 1928, mathematicians first set about to define what was meant by an "effective method" or "effective calculation" or "effective calculability" (i.e., a calculation that would succeed). In rapid succession the following appeared:
Alonzo Church, Stephen Kleene and J.B. Rosser's λ-calculus,
a finely honed definition of "general recursion" from the work of Gödel acting on suggestions of Jacques Herbrand (cf. Gödel's Princeton lectures of 1934) and subsequent simplifications by Kleene.
Church's proof that the Entscheidungsproblem was unsolvable,
Emil Post's definition of effective calculability as a worker mindlessly following a list of instructions to move left or right through a sequence of rooms and while there either mark or erase a paper or observe the paper and make a yes-no decision about the next instruction.
Alan Turing's proof that the Entscheidungsproblem was unsolvable by use of his "a- [automatic-] machine"—in effect almost identical to Post's "formulation",
J. Barkley Rosser's definition of "effective method" in terms of "a machine".
Kleene's proposal of a precursor to "Church thesis" that he called "Thesis I", and a few years later Kleene's renaming his Thesis "Church's Thesis" and proposing "Turing's Thesis". Emil Post (1936) and Alan Turing (1936–37, 1939) Emil Post (1936) described the actions of a "computer" (human being) as follows: "...two concepts are involved: that of a symbol space in which the work leading from problem to answer is to be carried out, and a fixed unalterable set of directions. His symbol space would be "a two-way infinite sequence of spaces or boxes... The problem solver or worker is to move and work in this symbol space, being capable of being in, and operating in but one box at a time.... a box is to admit of but two possible conditions, i.e., being empty or unmarked, and having a single mark in it, say a vertical stroke. "One box is to be singled out and called the starting point. ...a specific problem is to be given in symbolic form by a finite number of boxes [i.e., INPUT] being marked with a stroke. Likewise, the answer [i.e., OUTPUT] is to be given in symbolic form by such a configuration of marked boxes... "A set of directions applicable to a general problem sets up a deterministic process when applied to each specific problem. This process terminates only when it comes to the direction of type (C ) [i.e., STOP]". See more at Post–Turing machine Alan Turing's work preceded that of Stibitz (1937); it is unknown whether Stibitz knew of the work of Turing. Turing's biographer believed that Turing's use of a typewriter-like model derived from a youthful interest: "Alan had dreamt of inventing typewriters as a boy; Mrs. Turing had a typewriter, and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'". Given the prevalence of Morse code and telegraphy, ticker tape machines, and teletypewriters we might conjecture that all were influences. Turing—his model of computation is now called a Turing machine—begins, as did Post, with an analysis of a human computer that he whittles down to a simple set of basic motions and "states of mind". But he continues a step further and creates a machine as a model of computation of numbers. "Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book...I assume then that the computation is carried out on one-dimensional paper, i.e., on a tape divided into squares. I shall also suppose that the number of symbols which may be printed is finite... "The behavior of the computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite... "Let us imagine that the operations performed by the computer to be split up into 'simple operations' which are so elementary that it is not easy to imagine them further divided." Turing's reduction yields the following: "The simple operations must therefore include: "(a) Changes of the symbol on one of the observed squares "(b) Changes of one of the squares observed to another square within L squares of one of the previously observed squares. "It may be that some of these change necessarily invoke a change of state of mind. 
The most general single operation must, therefore, be taken to be one of the following: "(A) A possible change (a) of symbol together with a possible change of state of mind. "(B) A possible change (b) of observed squares, together with a possible change of state of mind" "We may now construct a machine to do the work of this computer." A few years later, Turing expanded his analysis (thesis, definition) with this forceful expression of it: "A function is said to be "effectively calculable" if its values can be found by some purely mechanical process. Though it is fairly easy to get an intuitive grasp of this idea, it is nevertheless desirable to have some more definite, mathematical expressible definition ... [he discusses the history of the definition pretty much as presented above with respect to Gödel, Herbrand, Kleene, Church, Turing, and Post] ... We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability † with effective calculability ... . "† We shall use the expression "computable function" to mean a function calculable by a machine, and we let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions". J.B. Rosser (1939) and S.C. Kleene (1943) J. Barkley Rosser defined an 'effective [mathematical] method' in the following manner (italicization added): "'Effective method' is used here in the rather special sense of a method each step of which is precisely determined and which is certain to produce the answer in a finite number of steps. With this special meaning, three different precise definitions have been given to date. [his footnote #5; see discussion immediately below]. The simplest of these to state (due to Post and Turing) says essentially that an effective method of solving certain sets of problems exists if one can build a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer. All three definitions are equivalent, so it doesn't matter which one is used. Moreover, the fact that all three are equivalent is a very strong argument for the correctness of any one." (Rosser 1939:225–226) Rosser's footnote No. 5 references the work of (1) Church and Kleene and their definition of λ-definability, in particular, Church's use of it in his An Unsolvable Problem of Elementary Number Theory (1936); (2) Herbrand and Gödel and their use of recursion, in particular, Gödel's use in his famous paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I (1931); and (3) Post (1936) and Turing (1936–37) in their mechanism-models of computation. Stephen C. Kleene defined as his now-famous "Thesis I" known as the Church–Turing thesis. But he did this in the following context (boldface in original): "12. Algorithmic theories... 
In setting up a complete algorithmic theory, what we do is to describe a procedure, performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, "yes" or "no," to the question, "is the predicate value true?"" (Kleene 1943:273) History after 1950 A number of efforts have been directed toward further refinement of the definition of "algorithm", and activity is on-going because of issues surrounding, in particular, foundations of mathematics (especially the Church–Turing thesis) and philosophy of mind (especially arguments about artificial intelligence). For more, see Algorithm characterizations. See also Abstract machine Algorithm engineering Algorithm characterizations Algorithmic bias Algorithmic composition Algorithmic entities Algorithmic synthesis Algorithmic technique Algorithmic topology Garbage in, garbage out Introduction to Algorithms (textbook) List of algorithms List of algorithm general topics List of important publications in theoretical computer science – Algorithms Regulation of algorithms Theory of computation Computability theory Computational complexity theory Notes Bibliography Bell, C. Gordon and Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw–Hill Book Company, New York. Includes an excellent bibliography of 56 references. Cf. Chapter 3 Turing machines where they discuss "certain enumerable sets not effectively (mechanically) enumerable". Campagnolo, M.L., Moore, C., and Costa, J.F. (2000) An analog characterization of the subrecursive functions. In Proc. of the 4th Conference on Real Numbers and Computers, Odense University, pp. 91–109. Reprinted in The Undecidable, p. 89ff. The first expression of "Church's Thesis". See in particular page 100 (The Undecidable) where he defines the notion of "effective calculability" in terms of "an algorithm", and he uses the word "terminates", etc. Reprinted in The Undecidable, p. 110ff. Church shows that the Entscheidungsproblem is unsolvable in about 3 pages of text and 3 pages of footnotes. Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included; those cited in the article are listed here by author's name. Davis offers concise biographies of Leibniz, Boole, Frege, Cantor, Hilbert, Gödel and Turing with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage, Ada Lovelace, Claude Shannon, Howard Aiken, etc. Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pp. 77–111. Includes bibliography of 33 sources. 3rd edition 1976[?], (pbk.). Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof. Presented to the American Mathematical Society, September 1935. Reprinted in The Undecidable, p. 237ff. Kleene's definition of "general recursion" (known now as mu-recursion) was used by Church in his 1935 paper An Unsolvable Problem of Elementary Number Theory that proved the "decision problem" to be "undecidable" (i.e., a negative result). Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis" (Kleene 1952:317) (i.e., the Church thesis). Kosovsky, N.K. 
Elements of Mathematical Logic and its Application to the theory of Subrecursive Algorithms, LSU Publ., Leningrad, 1981 A.A. Markov (1954) Theory of algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e., Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444 p. 28 cm. Added t.p. in Russian. Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS .] Minsky expands his "...idea of an algorithm – an effective procedure..." in chapter 5.1 Computability, Effective Procedures and Algorithms. Infinite machines. Reprinted in The Undecidable, pp. 289ff. Post defines a simple algorithmic-like process of a man writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple instructions. This is cited by Kleene as one source of his "Thesis I", the so-called Church–Turing thesis. Reprinted in The Undecidable, p. 223ff. Herein is Rosser's famous definition of "effective method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps... a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer" (p. 225–226, The Undecidable) Cf. in particular the first chapter titled: Algorithms, Turing Machines, and Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4). Corrections, ibid., vol. 43 (1937) pp. 544–546. Reprinted in The Undecidable, p. 116ff. Turing's famous paper completed as a Master's dissertation while at King's College Cambridge UK. Reprinted in The Undecidable, pp. 155ff. Turing's paper that defined "the oracle" was his PhD thesis while at Princeton. United States Patent and Trademark Office (2006), 2106.02 Mathematical Algorithms: 2100 Patentability, Manual of Patent Examining Procedure (MPEP). Latest revision August 2006 Further reading Knuth, Donald E. (2000). Selected Papers on Analysis of Algorithms. Stanford, California: Center for the Study of Language and Information. Knuth, Donald E. (2010). Selected Papers on Design of Algorithms. Stanford, California: Center for the Study of Language and Information. External links Dictionary of Algorithms and Data Structures – National Institute of Standards and Technology Algorithm repositories The Stony Brook Algorithm Repository – State University of New York at Stony Brook Collected Algorithms of the ACM – Association for Computing Machinery The Stanford GraphBase – Stanford University Articles with example pseudocode Mathematical logic Theoretical computer science
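The box-and-marks model that Post describes above, and that Turing refines into his machine, can be made concrete with a very short program. The following Python sketch is purely illustrative and is not drawn from any of the works cited above: the "symbol space" is an unbounded row of boxes that are either blank or marked, and a fixed, numbered list of directions tells the worker to mark, erase, move left or right, branch on the contents of the observed box, or stop. The instruction names and the example program, which appends one mark to a block of marks (unary n + 1), are assumptions chosen only for the demonstration.

from collections import defaultdict

def run_post_machine(directions, marked_boxes, start=0, max_steps=10000):
    """Follow a Post-style list of directions over a two-way infinite
    sequence of boxes, each of which is either blank (0) or marked (1)."""
    tape = defaultdict(int, {i: 1 for i in marked_boxes})   # the symbol space
    box, step = start, 0                                    # worker's box and current direction
    for _ in range(max_steps):
        op = directions[step]
        if op[0] == "MARK":
            tape[box] = 1
        elif op[0] == "ERASE":
            tape[box] = 0
        elif op[0] == "RIGHT":
            box += 1
        elif op[0] == "LEFT":
            box -= 1
        elif op[0] == "IF_MARKED":      # ("IF_MARKED", target): branch on the observed box
            if tape[box]:
                step = op[1]
                continue
        elif op[0] == "STOP":           # the direction of type (C): terminate
            return sorted(i for i, v in tape.items() if v)
        step += 1
    raise RuntimeError("no STOP reached within max_steps")

# Example: starting on the leftmost mark of a block of marks, move right past
# the block and mark the first blank box, i.e. compute n + 1 in unary.
program = [
    ("IF_MARKED", 2),   # 0: non-empty input? skip the STOP below
    ("STOP",),          # 1: empty input: halt at once
    ("RIGHT",),         # 2: move one box to the right
    ("IF_MARKED", 2),   # 3: still inside the block, keep moving
    ("MARK",),          # 4: first blank box found: mark it
    ("STOP",),          # 5: halt
]
print(run_post_machine(program, marked_boxes=[0, 1, 2]))    # prints [0, 1, 2, 3]

The branching direction is what makes this a "set of directions applicable to a general problem" rather than a fixed sequence of moves: the same six directions work for an input block of any length.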
Algorithm
Alexander Graham Bell (born Alexander Bell; March 3, 1847 – August 2, 1922) was a Scottish-born inventor, scientist, and engineer who is credited with patenting the first practical telephone. He also co-founded the American Telephone and Telegraph Company (AT&T) in 1885. Bell's father, grandfather, and brother had all been associated with work on elocution and speech, and both his mother and wife were deaf, profoundly influencing Bell's life's work. His research on hearing and speech further led him to experiment with hearing devices, which eventually culminated in Bell being awarded the first U.S. patent for the telephone, on March 7, 1876. Bell considered his invention an intrusion on his real work as a scientist and refused to have a telephone in his study. Many other inventions marked Bell's later life, including groundbreaking work in optical telecommunications, hydrofoils, and aeronautics. Although Bell was not one of the 33 founders of the National Geographic Society, he had a strong influence on the magazine while serving as the second president from January 7, 1898, until 1903. Beyond his work in engineering, Bell had a deep interest in the emerging science of heredity. Early life Alexander Bell was born in Edinburgh, Scotland, on March 3, 1847. The family home was at South Charlotte Street, and has a stone inscription marking it as Alexander Graham Bell's birthplace. He had two brothers: Melville James Bell (1845–1870) and Edward Charles Bell (1848–1867), both of whom would die of tuberculosis. His father was Professor Alexander Melville Bell, a phonetician, and his mother was Eliza Grace Bell (née Symonds). Born as just "Alexander Bell", at age 10, he made a plea to his father to have a middle name like his two brothers. For his 11th birthday, his father acquiesced and allowed him to adopt the name "Graham", chosen out of respect for Alexander Graham, a Canadian being treated by his father who had become a family friend. To close relatives and friends he remained "Aleck". First invention As a child, young Bell displayed a curiosity about his world; he gathered botanical specimens and ran experiments at an early age. His best friend was Ben Herdman, a neighbour whose family operated a flour mill. At the age of 12, Bell built a homemade device that combined rotating paddles with sets of nail brushes, creating a simple dehusking machine that was put into operation at the mill and used steadily for a number of years. In return, Ben's father John Herdman gave both boys the run of a small workshop in which to "invent". From his early years, Bell showed a sensitive nature and a talent for art, poetry, and music that was encouraged by his mother. With no formal training, he mastered the piano and became the family's pianist. Despite being normally quiet and introspective, he revelled in mimicry and "voice tricks" akin to ventriloquism that continually entertained family guests during their occasional visits. Bell was also deeply affected by his mother's gradual deafness (she began to lose her hearing when he was 12), and learned a manual finger language so he could sit at her side and tap out silently the conversations swirling around the family parlour. He also developed a technique of speaking in clear, modulated tones directly into his mother's forehead wherein she would hear him with reasonable clarity. Bell's preoccupation with his mother's deafness led him to study acoustics. 
His family was long associated with the teaching of elocution: his grandfather, Alexander Bell, in London, his uncle in Dublin, and his father, in Edinburgh, were all elocutionists. His father published a variety of works on the subject, several of which are still well known, especially his The Standard Elocutionist (1860), which appeared in Edinburgh in 1868. The Standard Elocutionist appeared in 168 British editions and sold over a quarter of a million copies in the United States alone. In this treatise, his father explains his methods of how to instruct deaf-mutes (as they were then known) to articulate words and read other people's lip movements to decipher meaning. Bell's father taught him and his brothers not only to write Visible Speech but to identify any symbol and its accompanying sound. Bell became so proficient that he became a part of his father's public demonstrations and astounded audiences with his abilities. He could decipher Visible Speech representing virtually every language, including Latin, Scottish Gaelic, and even Sanskrit, accurately reciting written tracts without any prior knowledge of their pronunciation. Education As a young child, Bell, like his brothers, received his early schooling at home from his father. At an early age, he was enrolled at the Royal High School, Edinburgh, Scotland, which he left at the age of 15, having completed only the first four forms. His school record was undistinguished, marked by absenteeism and lacklustre grades. His main interest remained in the sciences, especially biology, while he treated other school subjects with indifference, to the dismay of his father. Upon leaving school, Bell travelled to London to live with his grandfather, Alexander Bell, on Harrington Square. During the year he spent with his grandfather, a love of learning was born, with long hours spent in serious discussion and study. The elder Bell took great efforts to have his young pupil learn to speak clearly and with conviction, the attributes that his pupil would need to become a teacher himself. At the age of 16, Bell secured a position as a "pupil-teacher" of elocution and music, in Weston House Academy at Elgin, Moray, Scotland. Although he was enrolled as a student in Latin and Greek, he instructed classes himself in return for board and £10 per session. The following year, he attended the University of Edinburgh, joining his older brother Melville who had enrolled there the previous year. In 1868, not long before he departed for Canada with his family, Bell completed his matriculation exams and was accepted for admission to University College London. First experiments with sound His father encouraged Bell's interest in speech and, in 1863, took his sons to see a unique automaton developed by Sir Charles Wheatstone based on the earlier work of Baron Wolfgang von Kempelen. The rudimentary "mechanical man" simulated a human voice. Bell was fascinated by the machine and after he obtained a copy of von Kempelen's book, published in German, and had laboriously translated it, he and his older brother Melville built their own automaton head. Their father, highly interested in their project, offered to pay for any supplies and spurred the boys on with the enticement of a "big prize" if they were successful. While his brother constructed the throat and larynx, Bell tackled the more difficult task of recreating a realistic skull. His efforts resulted in a remarkably lifelike head that could "speak", albeit only a few words. 
The boys would carefully adjust the "lips" and when a bellows forced air through the windpipe, a very recognizable "Mama" ensued, to the delight of neighbours who came to see the Bell invention. Intrigued by the results of the automaton, Bell continued to experiment with a live subject, the family's Skye Terrier, "Trouve". After he taught it to growl continuously, Bell would reach into its mouth and manipulate the dog's lips and vocal cords to produce a crude-sounding "Ow ah oo ga ma ma". With little convincing, visitors believed his dog could articulate "How are you, grandmama?" Indicative of his playful nature, his experiments convinced onlookers that they saw a "talking dog". These initial forays into experimentation with sound led Bell to undertake his first serious work on the transmission of sound, using tuning forks to explore resonance. At age 19, Bell wrote a report on his work and sent it to philologist Alexander Ellis, a colleague of his father. Ellis immediately wrote back indicating that the experiments were similar to existing work in Germany, and also lent Bell a copy of Hermann von Helmholtz's work, The Sensations of Tone as a Physiological Basis for the Theory of Music. Dismayed to find that groundbreaking work had already been undertaken by Helmholtz who had conveyed vowel sounds by means of a similar tuning fork "contraption", Bell pored over the German scientist's book. Working from his own erroneous mistranslation of a French edition, Bell fortuitously then made a deduction that would be the underpinning of all his future work on transmitting sound, reporting: "Without knowing much about the subject, it seemed to me that if vowel sounds could be produced by electrical means, so could consonants, so could articulate speech." He also later remarked: "I thought that Helmholtz had done it ... and that my failure was due only to my ignorance of electricity. It was a valuable blunder ... If I had been able to read German in those days, I might never have commenced my experiments!" Family tragedy In 1865, when the Bell family moved to London, Bell returned to Weston House as an assistant master and, in his spare hours, continued experiments on sound using a minimum of laboratory equipment. Bell concentrated on experimenting with electricity to convey sound and later installed a telegraph wire from his room in Somerset College to that of a friend. Throughout late 1867, his health faltered mainly through exhaustion. His younger brother, Edward "Ted," was similarly bed-ridden, suffering from tuberculosis. While Bell recovered (by then referring to himself in correspondence as "A. G. Bell") and served the next year as an instructor at Somerset College, Bath, England, his brother's condition deteriorated. Edward would never recover. Upon his brother's death, Bell returned home in 1867. His older brother Melville had married and moved out. With aspirations to obtain a degree at University College London, Bell considered his next years as preparation for the degree examinations, devoting his spare time at his family's residence to studying. Helping his father in Visible Speech demonstrations and lectures brought Bell to Susanna E. Hull's private school for the deaf in South Kensington, London. His first two pupils were deaf-mute girls who made remarkable progress under his tutelage. While his older brother seemed to achieve success on many fronts including opening his own elocution school, applying for a patent on an invention, and starting a family, Bell continued as a teacher. 
However, in May 1870, Melville died from complications due to tuberculosis, causing a family crisis. His father had also suffered a debilitating illness earlier in life and had been restored to health by a convalescence in Newfoundland. Bell's parents embarked upon a long-planned move when they realized that their remaining son was also sickly. Acting decisively, Alexander Melville Bell asked Bell to arrange for the sale of all the family property, conclude all of his brother's affairs (Bell took over his last student, curing a pronounced lisp), and join his father and mother in setting out for the "New World". Reluctantly, Bell also had to conclude a relationship with Marie Eccleston, who, as he had surmised, was not prepared to leave England with him. Canada In 1870, 23-year-old Bell travelled with his parents and his brother's widow, Caroline Margaret Ottaway, to Paris, Ontario, to stay with Thomas Henderson, a Baptist minister and family friend. The Bell family soon purchased a farm at Tutelo Heights (now called Tutela Heights), near Brantford, Ontario. The property consisted of an orchard, large farmhouse, stable, pigsty, hen-house, and a carriage house, which bordered the Grand River. At the homestead, Bell set up his own workshop in the converted carriage house near what he called his "dreaming place", a large hollow nestled in trees at the back of the property above the river. Despite his frail condition upon arriving in Canada, Bell found the climate and environs to his liking, and rapidly improved. He continued his interest in the study of the human voice and when he discovered the Six Nations Reserve across the river at Onondaga, he learned the Mohawk language and translated its unwritten vocabulary into Visible Speech symbols. For his work, Bell was awarded the title of Honorary Chief and participated in a ceremony where he donned a Mohawk headdress and danced traditional dances. After setting up his workshop, Bell continued experiments based on Helmholtz's work with electricity and sound. He also modified a melodeon (a type of pump organ) so that it could transmit its music electrically over a distance. Once the family was settled in, both Bell and his father made plans to establish a teaching practice and in 1871, he accompanied his father to Montreal, where the elder Bell was offered a position to teach his System of Visible Speech. Work with the deaf Bell's father was invited by Sarah Fuller, principal of the Boston School for Deaf Mutes (which continues today as the public Horace Mann School for the Deaf), in Boston, Massachusetts, United States, to introduce the Visible Speech System by providing training for Fuller's instructors, but he declined the post in favour of his son. Travelling to Boston in April 1871, Bell proved successful in training the school's instructors. He was subsequently asked to repeat the programme at the American Asylum for Deaf-mutes in Hartford, Connecticut, and the Clarke School for the Deaf in Northampton, Massachusetts. Returning home to Brantford after six months abroad, Bell continued his experiments with his "harmonic telegraph". The basic concept behind his device was that messages could be sent through a single wire if each message was transmitted at a different pitch, but work on both the transmitter and receiver was needed. Unsure of his future, he first contemplated returning to London to complete his studies, but decided to return to Boston as a teacher. 
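The concept just described, several messages sharing one wire because each is keyed at its own pitch, can be illustrated numerically. The sketch below illustrates the principle only and is not a model of Bell's instruments: two Morse-like bit streams are keyed onto tones of different frequencies, summed onto a single simulated wire, and then recovered by measuring how strongly each signalling interval resonates with each pitch, roughly as a reed tuned to that pitch would. The sampling rate, frequencies, symbol length, and threshold are assumptions chosen for the example.

import numpy as np

FS = 8000                  # samples per second on the shared "wire"
SYMBOL = 0.05              # seconds per keyed symbol
N = int(FS * SYMBOL)       # samples per symbol

def key_tone(bits, freq):
    """On-off key a bit stream onto a tone of the given pitch."""
    t = np.arange(N) / FS
    tone = np.sin(2 * np.pi * freq * t)
    return np.concatenate([bit * tone for bit in bits])

def detect(wire, freq):
    """Recover a bit stream by measuring how strongly each symbol
    interval correlates with (resonates at) the given pitch."""
    t = np.arange(N) / FS
    probe = np.exp(-2j * np.pi * freq * t)
    bits = []
    for k in range(len(wire) // N):
        segment = wire[k * N:(k + 1) * N]
        energy = abs(np.dot(segment, probe)) / N
        bits.append(1 if energy > 0.1 else 0)
    return bits

msg_a = [1, 0, 1, 1, 0, 0, 1, 0]    # first "message", keyed at 400 Hz
msg_b = [0, 1, 1, 0, 1, 0, 0, 1]    # second "message", keyed at 700 Hz

wire = key_tone(msg_a, 400) + key_tone(msg_b, 700)   # one wire, two pitches

print(detect(wire, 400) == msg_a)   # True: the 400 Hz "reed" hears only message A
print(detect(wire, 700) == msg_b)   # True: the 700 Hz "reed" hears only message B

Because each pitch completes a whole number of cycles per signalling interval, neither detector responds to the other's tone; a working harmonic telegraph needed its tuned reeds to approximate exactly this kind of selectivity.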
His father helped him set up his private practice by contacting Gardiner Greene Hubbard, the president of the Clarke School for the Deaf, for a recommendation. Teaching his father's system, in October 1872, Alexander Bell opened his "School of Vocal Physiology and Mechanics of Speech" in Boston, which attracted a large number of deaf pupils, with his first class numbering 30 students. While he was working as a private tutor, one of his pupils was Helen Keller, who came to him as a young child unable to see, hear, or speak. She was later to say that Bell dedicated his life to the penetration of that "inhuman silence which separates and estranges". In 1893, Keller performed the sod-breaking ceremony for the construction of Bell's new Volta Bureau, dedicated to "the increase and diffusion of knowledge relating to the deaf". Throughout his lifetime, Bell sought to integrate the deaf and hard of hearing with the hearing world. To achieve complete assimilation in society, Bell encouraged speech therapy and lip reading as well as sign language. He outlined this in an 1898 paper detailing his belief that with resources and effort, the deaf could be taught to read lips and speak (known as oralism), thus enabling their integration within the wider society from which many were often excluded. Owing to his efforts to balance oralism with the teaching of sign language, Bell is often viewed negatively by those embracing Deaf culture. Ironically, Bell's last words to his deaf wife, Mabel, were signed. Continuing experimentation In 1872, Bell became professor of Vocal Physiology and Elocution at the Boston University School of Oratory. During this period, he alternated between Boston and Brantford, spending summers in his Canadian home. At Boston University, Bell was "swept up" by the excitement engendered by the many scientists and inventors residing in the city. He continued his research in sound and endeavored to find a way to transmit musical notes and articulate speech, but although absorbed by his experiments, he found it difficult to devote enough time to experimentation. While days and evenings were occupied by his teaching and private classes, Bell began to stay awake late into the night, running experiment after experiment in rented facilities at his boarding house. Keeping "night owl" hours, he worried that his work would be discovered and took great pains to lock up his notebooks and laboratory equipment. Bell had a specially made table where he could place his notes and equipment inside a locking cover. Worse still, his health deteriorated as he suffered severe headaches. Returning to Boston in fall 1873, Bell made a far-reaching decision to concentrate on his experiments in sound. Deciding to give up his lucrative private Boston practice, Bell retained only two students, six-year-old "Georgie" Sanders, deaf from birth, and 15-year-old Mabel Hubbard. Each pupil would play an important role in the next developments. George's father, Thomas Sanders, a wealthy businessman, offered Bell a place to stay in nearby Salem with Georgie's grandmother, complete with a room to "experiment". Although the offer was made by George's mother and followed the year-long arrangement in 1872 where her son and his nurse had moved to quarters next to Bell's boarding house, it was clear that Mr. Sanders was backing the proposal. The arrangement was for teacher and student to continue their work together, with free room and board thrown in. 
Mabel was a bright, attractive girl who was ten years Bell's junior but became the object of his affection. Having lost her hearing after a near-fatal bout of scarlet fever close to her fifth birthday, she had learned to read lips but her father, Gardiner Greene Hubbard, Bell's benefactor and personal friend, wanted her to work directly with her teacher. The telephone By 1874, Bell's initial work on the harmonic telegraph had entered a formative stage, with successful progress made both at his new Boston "laboratory" (a rented facility) and at his family home in Canada. While working that summer in Brantford, Bell experimented with a "phonautograph", a pen-like machine that could draw shapes of sound waves on smoked glass by tracing their vibrations. Bell thought it might be possible to generate undulating electrical currents that corresponded to sound waves. Bell also thought that multiple metal reeds tuned to different frequencies like a harp would be able to convert the undulating currents back into sound. But he had no working model to demonstrate the feasibility of these ideas. In 1874, telegraph message traffic was rapidly expanding and in the words of Western Union President William Orton, had become "the nervous system of commerce". Orton had contracted with inventors Thomas Edison and Elisha Gray to find a way to send multiple telegraph messages on each telegraph line to avoid the great cost of constructing new lines. When Bell mentioned to Gardiner Hubbard and Thomas Sanders that he was working on a method of sending multiple tones on a telegraph wire using a multi-reed device, the two wealthy patrons began to financially support Bell's experiments. Patent matters would be handled by Hubbard's patent attorney, Anthony Pollok. In March 1875, Bell and Pollok visited the scientist Joseph Henry, who was then director of the Smithsonian Institution, and asked Henry's advice on the electrical multi-reed apparatus that Bell hoped would transmit the human voice by telegraph. Henry replied that Bell had "the germ of a great invention". When Bell said that he did not have the necessary knowledge, Henry replied, "Get it!" That declaration greatly encouraged Bell to keep trying, even though he did not have the equipment needed to continue his experiments, nor the ability to create a working model of his ideas. However, a chance meeting in 1874 between Bell and Thomas A. Watson, an experienced electrical designer and mechanic at the electrical machine shop of Charles Williams, changed all that. With financial support from Sanders and Hubbard, Bell hired Thomas Watson as his assistant, and the two of them experimented with acoustic telegraphy. On June 2, 1875, Watson accidentally plucked one of the reeds and Bell, at the receiving end of the wire, heard the overtones of the reed; overtones that would be necessary for transmitting speech. That demonstrated to Bell that only one reed or armature was necessary, not multiple reeds. This led to the "gallows" sound-powered telephone, which could transmit indistinct, voice-like sounds, but not clear speech. The race to the patent office In 1875, Bell developed an acoustic telegraph and drew up a patent application for it. Since he had agreed to share U.S. profits with his investors Gardiner Hubbard and Thomas Sanders, Bell requested that an associate in Ontario, George Brown, attempt to patent it in Britain, instructing his lawyers to apply for a patent in the U.S. 
only after they received word from Britain (Britain would issue patents only for discoveries not previously patented elsewhere). Meanwhile, Elisha Gray was also experimenting with acoustic telegraphy and thought of a way to transmit speech using a water transmitter. On February 14, 1876, Gray filed a caveat with the U.S. Patent Office for a telephone design that used a water transmitter. That same morning, Bell's lawyer filed Bell's application with the patent office. There is considerable debate about who arrived first and Gray later challenged the primacy of Bell's patent. Bell was in Boston on February 14 and did not arrive in Washington until February 26. Bell's patent 174,465 was issued to Bell on March 7, 1876, by the U.S. Patent Office. Bell's patent covered "the method of, and apparatus for, transmitting vocal or other sounds telegraphically ... by causing electrical undulations, similar in form to the vibrations of the air accompanying the said vocal or other sound." Bell returned to Boston the same day and the next day resumed work, drawing in his notebook a diagram similar to that in Gray's patent caveat. On March 10, 1876, three days after his patent was issued, Bell succeeded in getting his telephone to work, using a liquid transmitter similar to Gray's design. Vibration of the diaphragm caused a needle to vibrate in the water, varying the electrical resistance in the circuit. When Bell spoke the sentence "Mr. Watson—Come here—I want to see you" into the liquid transmitter, Watson, listening at the receiving end in an adjoining room, heard the words clearly. Although Bell was, and still is, accused of stealing the telephone from Gray, Bell used Gray's water transmitter design only after Bell's patent had been granted, and only as a proof of concept scientific experiment, to prove to his own satisfaction that intelligible "articulate speech" (Bell's words) could be electrically transmitted. After March 1876, Bell focused on improving the electromagnetic telephone and never used Gray's liquid transmitter in public demonstrations or commercial use. The question of priority for the variable resistance feature of the telephone was raised by the examiner before he approved Bell's patent application. He told Bell that his claim for the variable resistance feature was also described in Gray's caveat. Bell pointed to a variable resistance device in his previous application in which he described a cup of mercury, not water. He had filed the mercury application at the patent office a year earlier on February 25, 1875, long before Elisha Gray described the water device. In addition, Gray abandoned his caveat, and because he did not contest Bell's priority, the examiner approved Bell's patent on March 3, 1876. Gray had reinvented the variable resistance telephone, but Bell was the first to write down the idea and the first to test it in a telephone. The patent examiner, Zenas Fisk Wilber, later stated in an affidavit that he was an alcoholic who was much in debt to Bell's lawyer, Marcellus Bailey, with whom he had served in the Civil War. He claimed he showed Gray's patent caveat to Bailey. Wilber also claimed (after Bell arrived in Washington D.C. from Boston) that he showed Gray's caveat to Bell and that Bell paid him $100. Bell claimed they discussed the patent only in general terms, although in a letter to Gray, Bell admitted that he learned some of the technical details. Bell denied in an affidavit that he ever gave Wilber any money. 
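The liquid transmitter mentioned above rests on a simple relationship: if the needle's depth in the conducting liquid makes the circuit's resistance rise and fall with the sound pressure on the diaphragm, a steady battery drives an "undulating current" whose ripple follows the speech waveform. The following minimal numerical sketch shows that relationship; the battery voltage, base resistance, and sensitivity are arbitrary illustrative values, not measurements of Bell's or Gray's apparatus.

import numpy as np

FS = 8000                          # samples per second
t = np.arange(0, 0.05, 1 / FS)     # 50 ms of signal

# A toy "voice" waveform: two superimposed tones, roughly within +/- 1.
voice = 0.7 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)

V = 6.0        # battery voltage (illustrative)
R0 = 100.0     # circuit resistance with the diaphragm at rest, in ohms (illustrative)
k = 10.0       # how strongly the sound moves the needle and hence the resistance

resistance = R0 + k * voice        # needle depth, and so resistance, follows the sound
current = V / resistance           # Ohm's law gives the resulting line current

# The varying ("undulating") part of the current tracks the voice waveform,
# which is what the electromagnet at the receiving end turns back into sound.
ripple = current - current.mean()
similarity = np.corrcoef(ripple, voice)[0, 1]
print(f"correlation between voice and current ripple: {similarity:.3f}")   # close to -1

For small ripples the current is very nearly proportional to the sound (inverted in sign), and this same variable-resistance idea, realized far more practically with carbon granules, is what later made the carbon microphone mentioned below so effective over longer lines.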
Later developments On March 10, 1876, Bell used "the instrument" in Boston to call Thomas Watson who was in another room but out of earshot. He said, "Mr. Watson, come here – I want to see you" and Watson soon appeared at his side. Continuing his experiments in Brantford, Bell brought home a working model of his telephone. On August 3, 1876, from the telegraph office in Brantford, Ontario, Bell sent a tentative telegram to the village of Mount Pleasant, indicating that he was ready. He made a telephone call via telegraph wires and faint voices were heard replying. The following night, he amazed guests as well as his family with a call between the Bell Homestead and the office of the Dominion Telegraph Company in Brantford along an improvised wire strung up along telegraph lines and fences, and laid through a tunnel. This time, guests at the household distinctly heard people in Brantford reading and singing. The third test on August 10, 1876, was made via the telegraph line between Brantford and Paris, Ontario. This test was said by many sources to be the "world's first long-distance call". The final test certainly proved that the telephone could work over long distances, at least as a one-way call. The first two-way (reciprocal) conversation over a line occurred between Cambridge and Boston (roughly 2.5 miles) on October 9, 1876. During that conversation, Bell was on Kilby Street in Boston and Watson was at the offices of the Walworth Manufacturing Company. Bell and his partners, Hubbard and Sanders, offered to sell the patent outright to Western Union for $100,000. The president of Western Union balked, countering that the telephone was nothing but a toy. Two years later, he told colleagues that if he could get the patent for $25 million he would consider it a bargain. By then, the Bell company no longer wanted to sell the patent. Bell's investors would become millionaires while he fared well from residuals and at one point had assets of nearly one million dollars. Bell began a series of public demonstrations and lectures to introduce the new invention to the scientific community as well as the general public. A short time later, his demonstration of an early telephone prototype at the 1876 Centennial Exposition in Philadelphia brought the telephone to international attention. Influential visitors to the exhibition included Emperor Pedro II of Brazil. One of the judges at the Exhibition, Sir William Thomson (later, Lord Kelvin), a renowned Scottish scientist, described the telephone as "the greatest by far of all the marvels of the electric telegraph". On January 14, 1878, at Osborne House, on the Isle of Wight, Bell demonstrated the device to Queen Victoria, placing calls to Cowes, Southampton and London. These were the first publicly witnessed long-distance telephone calls in the UK. The queen considered the process to be "quite extraordinary" although the sound was "rather faint". She later asked to buy the equipment that was used, but Bell offered to make "a set of telephones" specifically for her. The Bell Telephone Company was created in 1877, and by 1886, more than 150,000 people in the U.S. owned telephones. Bell Company engineers made numerous other improvements to the telephone, which emerged as one of the most successful products ever. In 1879, the Bell company acquired Edison's patents for the carbon microphone from Western Union. 
This made the telephone practical for longer distances, and it was no longer necessary to shout to be heard at the receiving telephone. Emperor Pedro II of Brazil was the first person to buy stock in Bell's company, the Bell Telephone Company. One of the first telephones in a private residence was installed in his palace in Petrópolis, his summer retreat from Rio de Janeiro. In January 1915, Bell made the first ceremonial transcontinental telephone call. Calling from the AT&T head office at 15 Dey Street in New York City, Bell was heard by Thomas Watson at 333 Grant Avenue in San Francisco. The New York Times reported: Competitors As is sometimes common in scientific discoveries, simultaneous developments can occur, as evidenced by a number of inventors who were at work on the telephone. Over a period of 18 years, the Bell Telephone Company faced 587 court challenges to its patents, including five that went to the U.S. Supreme Court, but none was successful in establishing priority over the original Bell patent and the Bell Telephone Company never lost a case that had proceeded to a final trial stage. Bell's laboratory notes and family letters were the key to establishing a long lineage to his experiments. The Bell company lawyers successfully fought off myriad lawsuits generated initially around the challenges by Elisha Gray and Amos Dolbear. In personal correspondence to Bell, both Gray and Dolbear had acknowledged his prior work, which considerably weakened their later claims. On January 13, 1887, the U.S. Government moved to annul the patent issued to Bell on the grounds of fraud and misrepresentation. After a series of decisions and reversals, the Bell company won a decision in the Supreme Court, though a couple of the original claims from the lower court cases were left undecided. By the time that the trial wound its way through nine years of legal battles, the U.S. prosecuting attorney had died and the two Bell patents (No. 174,465 dated March 7, 1876, and No. 186,787 dated January 30, 1877) were no longer in effect, although the presiding judges agreed to continue the proceedings due to the case's importance as a precedent. With a change in administration and charges of conflict of interest (on both sides) arising from the original trial, the US Attorney General dropped the lawsuit on November 30, 1897, leaving several issues undecided on the merits. During a deposition filed for the 1887 trial, Italian inventor Antonio Meucci also claimed to have created the first working model of a telephone in Italy in 1834. In 1886, in the first of three cases in which he was involved, Meucci took the stand as a witness in the hope of establishing his invention's priority. Meucci's testimony in this case was disputed due to a lack of material evidence for his inventions, as his working models were purportedly lost at the laboratory of American District Telegraph (ADT) of New York, which was later incorporated as a subsidiary of Western Union in 1901. Meucci's work, like many other inventors of the period, was based on earlier acoustic principles and despite evidence of earlier experiments, the final case involving Meucci was eventually dropped upon Meucci's death. However, due to the efforts of Congressman Vito Fossella, the U.S. House of Representatives on June 11, 2002, stated that Meucci's "work in the invention of the telephone should be acknowledged". This did not put an end to the still-contentious issue. 
Some modern scholars do not agree with the claims that Bell's work on the telephone was influenced by Meucci's inventions. The value of the Bell patent was acknowledged throughout the world, and patent applications were made in most major countries, but when Bell delayed the German patent application, the electrical firm of Siemens & Halske set up a rival manufacturer of Bell telephones under their own patent. The Siemens company produced near-identical copies of the Bell telephone without having to pay royalties. The establishment of the International Bell Telephone Company in Brussels, Belgium in 1880, as well as a series of agreements in other countries eventually consolidated a global telephone operation. The strain put on Bell by his constant appearances in court, necessitated by the legal battles, eventually resulted in his resignation from the company. Family life On July 11, 1877, a few days after the Bell Telephone Company was established, Bell married Mabel Hubbard (1857–1923) at the Hubbard estate in Cambridge, Massachusetts. His wedding present to his bride was to turn over 1,487 of his 1,497 shares in the newly formed Bell Telephone Company. Shortly thereafter, the newlyweds embarked on a year-long honeymoon in Europe. During that excursion, Bell took a handmade model of his telephone with him, making it a "working holiday". The courtship had begun years earlier; however, Bell waited until he was more financially secure before marrying. Although the telephone appeared to be an "instant" success, it was not initially a profitable venture and Bell's main sources of income were from lectures until after 1897. One unusual request exacted by his fiancée was that he use "Alec" rather than the family's earlier familiar name of "Aleck". From 1876, he would sign his name "Alec Bell". They had four children: Elsie May Bell (1878–1964) who married Gilbert Hovey Grosvenor of National Geographic fame. Marian Hubbard Bell (1880–1962) who was referred to as "Daisy". Married David Fairchild. Two sons who died in infancy (Edward in 1881 and Robert in 1883). The Bell family home was in Cambridge, Massachusetts, until 1880 when Bell's father-in-law bought a house in Washington, D.C.; in 1882 he bought a home in the same city for Bell's family, so they could be with him while he attended to the numerous court cases involving patent disputes. Bell was a British subject throughout his early life in Scotland and later in Canada until 1882 when he became a naturalized citizen of the United States. In 1915, he characterized his status as: "I am not one of those hyphenated Americans who claim allegiance to two countries." Despite this declaration, Bell has been proudly claimed as a "native son" by all three countries he resided in: the United States, Canada, and the United Kingdom. By 1885, a new summer retreat was contemplated. That summer, the Bells had a vacation on Cape Breton Island in Nova Scotia, spending time at the small village of Baddeck. Returning in 1886, Bell started building an estate on a point across from Baddeck, overlooking Bras d'Or Lake. By 1889, a large house, christened The Lodge was completed and two years later, a larger complex of buildings, including a new laboratory, were begun that the Bells would name Beinn Bhreagh (Gaelic: Beautiful Mountain) after Bell's ancestral Scottish highlands. 
Bell also built the Bell Boatyard on the estate, employing up to 40 people building experimental craft as well as wartime lifeboats and workboats for the Royal Canadian Navy and pleasure craft for the Bell family. He was an enthusiastic boater, and Bell and his family sailed or rowed a long series of vessels on Bras d'Or Lake, ordering additional vessels from the H.W. Embree and Sons boatyard in Port Hawkesbury, Nova Scotia. In his final, and some of his most productive years, Bell split his residency between Washington, D.C., where he and his family initially resided for most of the year, and Beinn Bhreagh, where they spent increasing amounts of time. Until the end of his life, Bell and his family would alternate between the two homes, but Beinn Bhreagh would, over the next 30 years, become more than a summer home as Bell became so absorbed in his experiments that his annual stays lengthened. Both Mabel and Bell became immersed in the Baddeck community and were accepted by the villagers as "their own". The Bells were still in residence at Beinn Bhreagh when the Halifax Explosion occurred on December 6, 1917. Mabel and Bell mobilized the community to help victims in Halifax. Later inventions Although Alexander Graham Bell is most often associated with the invention of the telephone, his interests were extremely varied. According to one of his biographers, Charlotte Gray, Bell's work ranged "unfettered across the scientific landscape" and he often went to bed voraciously reading the Encyclopædia Britannica, scouring it for new areas of interest. The range of Bell's inventive genius is represented only in part by the 18 patents granted in his name alone and the 12 he shared with his collaborators. These included 14 for the telephone and telegraph, four for the photophone, one for the phonograph, five for aerial vehicles, four for "hydroairplanes", and two for selenium cells. Bell's inventions spanned a wide range of interests and included a metal jacket to assist in breathing, the audiometer to detect minor hearing problems, a device to locate icebergs, investigations on how to separate salt from seawater, and work on finding alternative fuels. Bell worked extensively in medical research and invented techniques for teaching speech to the deaf. During his Volta Laboratory period, Bell and his associates considered impressing a magnetic field on a record as a means of reproducing sound. Although the trio briefly experimented with the concept, they could not develop a workable prototype. They abandoned the idea, never realizing they had glimpsed a basic principle which would one day find its application in the tape recorder, the hard disc and floppy disc drive, and other magnetic media. Bell's own home used a primitive form of air conditioning, in which fans blew currents of air across great blocks of ice. He also anticipated modern concerns with fuel shortages and industrial pollution. Methane gas, he reasoned, could be produced from the waste of farms and factories. At his Canadian estate in Nova Scotia, he experimented with composting toilets and devices to capture water from the atmosphere. In a magazine interview published shortly before his death, he reflected on the possibility of using solar panels to heat houses. Photophone Bell and his assistant Charles Sumner Tainter jointly invented a wireless telephone, named a photophone, which allowed for the transmission of both sounds and normal human conversations on a beam of light. 
Both men later became full associates in the Volta Laboratory Association. On June 21, 1880, Bell's assistant transmitted a wireless voice telephone message a considerable distance, from the roof of the Franklin School in Washington, D.C., to Bell at the window of his laboratory, 19 years before the first voice radio transmissions. Bell believed the photophone's principles were his life's "greatest achievement", telling a reporter shortly before his death that the photophone was "the greatest invention [I have] ever made, greater than the telephone". The photophone was a precursor to the fiber-optic communication systems which achieved popular worldwide usage in the 1980s. Its master patent was issued in December 1880, many decades before the photophone's principles came into popular use. Metal detector Bell is also credited with developing one of the early versions of a metal detector through the use of an induction balance, after the shooting of U.S. President James A. Garfield in 1881. According to some accounts, the metal detector worked flawlessly in tests but did not find the bullet fired by the assassin, Charles Guiteau, partly because the metal bed frame on which the President was lying disturbed the instrument, resulting in static. Garfield's surgeons, led by self-appointed chief physician Doctor Willard Bliss, were skeptical of the device, and ignored Bell's requests to move the President to a bed not fitted with metal springs. Alternatively, although Bell had detected a slight sound on his first test, the bullet may have been lodged too deeply to be detected by the crude apparatus. Bell's own detailed account, presented to the American Association for the Advancement of Science in 1882, differs in several particulars from most of the many and varied versions now in circulation, by concluding that extraneous metal was not to blame for failure to locate the bullet. Perplexed by the peculiar results he had obtained during an examination of Garfield, Bell "proceeded to the Executive Mansion the next morning ... to ascertain from the surgeons whether they were perfectly sure that all metal had been removed from the neighborhood of the bed. It was then recollected that underneath the horse-hair mattress on which the President lay was another mattress composed of steel wires. Upon obtaining a duplicate, the mattress was found to consist of a sort of net of woven steel wires, with large meshes. The extent of the [area that produced a response from the detector] having been so small, as compared with the area of the bed, it seemed reasonable to conclude that the steel mattress had produced no detrimental effect." In a footnote, Bell adds, "The death of President Garfield and the subsequent post-mortem examination, however, proved that the bullet was at too great a distance from the surface to have affected our apparatus." Hydrofoils The March 1906 Scientific American article by American pioneer William E. Meacham explained the basic principle of hydrofoils and hydroplanes. Bell considered the invention of the hydroplane a very significant achievement. Based on information gained from that article, he began to sketch concepts of what is now called a hydrofoil boat. Bell and assistant Frederick W. "Casey" Baldwin began hydrofoil experimentation in the summer of 1908 as a possible aid to airplane takeoff from water. Baldwin studied the work of the Italian inventor Enrico Forlanini and began testing models. This led him and Bell to the development of practical hydrofoil watercraft. 
During his world tour of 1910–11, Bell and Baldwin met with Forlanini in France. They had rides in the Forlanini hydrofoil boat over Lake Maggiore. Baldwin described it as being as smooth as flying. On returning to Baddeck, a number of initial concepts were built as experimental models, including the Dhonnas Beag (Scottish Gaelic for little devil), the first self-propelled Bell-Baldwin hydrofoil. The experimental boats were essentially proof-of-concept prototypes that culminated in the more substantial HD-4, powered by Renault engines. A high top speed was achieved, with the hydrofoil exhibiting rapid acceleration, good stability, and steering, along with the ability to take waves without difficulty. In 1913, Dr. Bell hired Walter Pinaud, a Sydney yacht designer and builder as well as the proprietor of Pinaud's Yacht Yard in Westmount, Nova Scotia, to work on the pontoons of the HD-4. Pinaud soon took over the boatyard at Bell Laboratories on Beinn Bhreagh, Bell's estate near Baddeck, Nova Scotia. Pinaud's experience in boat-building enabled him to make useful design changes to the HD-4. After the First World War, work began again on the HD-4. Bell's report to the U.S. Navy permitted him to obtain two engines in July 1919. On September 9, 1919, the HD-4 set a world marine speed record, which stood for ten years. Aeronautics In 1891, Bell had begun experiments to develop motor-powered heavier-than-air aircraft. The AEA was first formed as Bell shared the vision to fly with his wife, who advised him to seek "young" help as Bell was at the age of 60. In 1898, Bell experimented with tetrahedral box kites and wings constructed of multiple compound tetrahedral kites covered in maroon silk. The tetrahedral wings were named Cygnet I, II, and III, and were flown both unmanned and manned (Cygnet I crashed during a flight carrying Selfridge) in the period from 1907 to 1912. Some of Bell's kites are on display at the Alexander Graham Bell National Historic Site. Bell was a supporter of aerospace engineering research through the Aerial Experiment Association (AEA), officially formed at Baddeck, Nova Scotia, in October 1907 at the suggestion of his wife Mabel and with her financial support after the sale of some of her real estate. The AEA was headed by Bell and the founding members were four young men: American Glenn H. Curtiss, a motorcycle manufacturer at the time, who held the title "world's fastest man", having ridden his self-constructed motor bicycle at record speed, who was later awarded the Scientific American Trophy for the first official one-kilometre flight in the Western hemisphere, and who later became a world-renowned airplane manufacturer; Lieutenant Thomas Selfridge, an official observer from the U.S. Federal government and one of the few people in the army who believed that aviation was the future; Frederick W. Baldwin, the first Canadian and first British subject to pilot a public flight in Hammondsport, New York; and J. A. D. McCurdy; Baldwin and McCurdy were new engineering graduates from the University of Toronto. The AEA's work progressed to heavier-than-air machines, applying their knowledge of kites to gliders. Moving to Hammondsport, the group then designed and built the Red Wing, framed in bamboo and covered in red silk and powered by a small air-cooled engine. On March 12, 1908, over Keuka Lake, the biplane lifted off on the first public flight in North America. 
The innovations that were incorporated into this design included a cockpit enclosure and tail rudder (later variations on the original design would add ailerons as a means of control). One of the AEA's inventions, a practical wingtip form of the aileron, was to become a standard component on all aircraft. The White Wing and June Bug were to follow and by the end of 1908, over 150 flights without mishap had been accomplished. However, the AEA had depleted its initial reserves and only a $15,000 grant from Mrs. Bell allowed it to continue with experiments. Lt. Selfridge had also become the first person killed in a powered heavier-than-air flight in a crash of the Wright Flyer at Fort Myer, Virginia, on September 17, 1908. Their final aircraft design, the Silver Dart, embodied all of the advancements found in the earlier machines. On February 23, 1909, Bell was present as the Silver Dart flown by J. A. D. McCurdy from the frozen ice of Bras d'Or made the first aircraft flight in Canada. Bell had worried that the flight was too dangerous and had arranged for a doctor to be on hand. With the successful flight, the AEA disbanded and the Silver Dart would revert to Baldwin and McCurdy, who began the Canadian Aerodrome Company and would later demonstrate the aircraft to the Canadian Army. Heredity and genetics Bell, along with many members of the scientific community at the time, took an interest in the popular science of heredity which grew out of the publication of Charles Darwin's book On the Origin of Species in 1859. On his estate in Nova Scotia, Bell conducted meticulously recorded breeding experiments with rams and ewes. Over the course of more than 30 years, Bell sought to produce a breed of sheep with multiple nipples that would bear twins. He specifically wanted to see if selective breeding could produce sheep with four functional nipples with enough milk for twin lambs. This interest in animal breeding caught the attention of scientists focused on the study of heredity and genetics in humans. In November 1883, Bell presented a paper at a meeting of the National Academy of Sciences titled "Upon the Formation of a Deaf Variety of the Human Race". The paper is a compilation of data on the hereditary aspects of deafness. Bell's research indicated that a hereditary tendency toward deafness, as indicated by the possession of deaf relatives, was an important element in determining the production of deaf offspring. He noted that the proportion of deaf children born to deaf parents was many times greater than the proportion of deaf children born to the general population. In the paper, Bell delved into social commentary and discussed hypothetical public policies to bring an end to deafness. He also criticized educational practices that segregated deaf children rather than integrating them fully into mainstream classrooms. The paper did not propose sterilization of deaf people or prohibition on intermarriage, noting that “We cannot dictate to men and women whom they should marry and natural selection no longer influences mankind to any great extent.” A review of Bell's "Memoir upon the Formation of a Deaf Variety of the Human Race" appearing in an 1885 issue of the "American Annals of the Deaf and Dumb" states that "Dr. Bell does not advocate legislative interference with the marriages of the deaf for several reasons one of which is that the results of such marriages have not yet been sufficiently investigated." 
The article goes on to say that "the editorial remarks based thereon did injustice to the author." The paper's author concludes by saying “A wiser way to prevent the extension of hereditary deafness, it seems to us, would be to continue the investigations which Dr. Bell has so admirable begun until the laws of the transmission of the tendency to deafness are fully understood, and then by explaining those laws to the pupils of our schools to lead them to choose their partners in marriage in such a way that deaf-mute offspring will not be the result." Historians have noted that Bell explicitly opposed laws regulating marriage, and never mentioned sterilization in any of his writings. Even after Bell agreed to engage with scientists conducting eugenic research, he consistently refused to support public policy that limited the rights or privileges of the deaf. Bell's interest and research on heredity attracted the interest of Charles Davenport, a Harvard professor and head of the Cold Spring Harbor Laboratory. In 1906, Davenport, who was also the founder of the American Breeder's Association, approached Bell about joining a new committee on eugenics chaired by David Starr Jordan. In 1910, Davenport opened the Eugenics Records office at Cold Spring Harbor. To give the organization scientific credibility, Davenport set up a Board of Scientific Directors naming Bell as chairman. Other members of the board included Luther Burbank, Roswell H. Johnson, Vernon L. Kellogg, and William E. Castle. In 1921, a Second International Congress of Eugenics was held in New York at the Museum of Natural History and chaired by Davenport. Although Bell did not present any research or speak as part of the proceedings, he was named as honorary president as a means to attract other scientists to attend the event. A summary of the event notes that Bell was a "pioneering investigator in the field of human heredity". Death Bell died of complications arising from diabetes on August 2, 1922, at his private estate in Cape Breton, Nova Scotia, at age 75. Bell had also been afflicted with pernicious anemia. His last view of the land he had inhabited was by moonlight on his mountain estate at 2:00 a.m. While tending to him after his long illness, Mabel, his wife, whispered, "Don't leave me." By way of reply, Bell signed "no...", lost consciousness, and died shortly after. On learning of Bell's death, the Canadian Prime Minister, Mackenzie King, cabled Mrs. Bell, saying: Bell's coffin was constructed of Beinn Bhreagh pine by his laboratory staff, lined with the same red silk fabric used in his tetrahedral kite experiments. To help celebrate his life, his wife asked guests not to wear black (the traditional funeral color) while attending his service, during which soloist Jean MacDonald sang a verse of Robert Louis Stevenson's "Requiem": Upon the conclusion of Bell's funeral, for one minute at 6:25 p.m. Eastern Time, "every phone on the continent of North America was silenced in honor of the man who had given to mankind the means for direct communication at a distance". Alexander Graham Bell was buried atop Beinn Bhreagh mountain, on his estate where he had resided increasingly for the last 35 years of his life, overlooking Bras d'Or Lake. He was survived by his wife Mabel, his two daughters, Elsie May and Marian, and nine of his grandchildren. Legacy and honors Honors and tributes flowed to Bell in increasing numbers as his invention became ubiquitous and his personal fame grew. 
Bell received numerous honorary degrees from colleges and universities to the point that the requests almost became burdensome. During his life, he also received dozens of major awards, medals, and other tributes. These included statuary monuments to both him and the new form of communication his telephone created, including the Bell Telephone Memorial erected in his honor in Alexander Graham Bell Gardens in Brantford, Ontario, in 1917. A large number of Bell's writings, personal correspondence, notebooks, papers, and other documents reside in both the United States Library of Congress Manuscript Division (as the Alexander Graham Bell Family Papers), and at the Alexander Graham Bell Institute, Cape Breton University, Nova Scotia; major portions of which are available for online viewing. A number of historic sites and other marks commemorate Bell in North America and Europe, including the first telephone companies in the United States and Canada. Among the major sites are: The Alexander Graham Bell National Historic Site, maintained by Parks Canada, which incorporates the Alexander Graham Bell Museum, in Baddeck, Nova Scotia, close to the Bell estate Beinn Bhreagh The Bell Homestead National Historic Site, includes the Bell family home, "Melville House", and farm overlooking Brantford, Ontario and the Grand River. It was their first home in North America; Canada's first telephone company building, the "Henderson Home" of the late 1870s, a predecessor of the Bell Telephone Company of Canada (officially chartered in 1880). In 1969, the building was carefully moved to the historic Bell Homestead National Historic Site in Brantford, Ontario, and was refurbished to become a telephone museum. The Bell Homestead, the Henderson Home telephone museum, and the National Historic Site's reception centre are all maintained by the Bell Homestead Society; The Alexander Graham Bell Memorial Park, which features a broad neoclassical monument built in 1917 by public subscription. The monument depicts mankind's ability to span the globe through telecommunications; The Alexander Graham Bell Museum (opened in 1956), part of the Alexander Graham Bell National Historic Site which was completed in 1978 in Baddeck, Nova Scotia. Many of the museum's artifacts were donated by Bell's daughters; In 1880, Bell received the Volta Prize with a purse of 50,000 French francs (approximately US$ in today's dollars) for the invention of the telephone from the French government. Among the luminaries who judged were Victor Hugo and Alexandre Dumas, fils. The Volta Prize was conceived by Napoleon III in 1852, and named in honor of Alessandro Volta, with Bell becoming the second recipient of the grand prize in its history. Since Bell was becoming increasingly affluent, he used his prize money to create endowment funds (the 'Volta Fund') and institutions in and around the United States capital of Washington, D.C.. These included the prestigious 'Volta Laboratory Association' (1880), also known as the Volta Laboratory and as the 'Alexander Graham Bell Laboratory', and which eventually led to the Volta Bureau (1887) as a center for studies on deafness which is still in operation in Georgetown, Washington, D.C. The Volta Laboratory became an experimental facility devoted to scientific discovery, and the very next year it improved Edison's phonograph by substituting wax for tinfoil as the recording medium and incising the recording rather than indenting it, key upgrades that Edison himself later adopted. 
The laboratory was also the site where he and his associate invented his "proudest achievement", "the photophone", the "optical telephone" which presaged fibre optical telecommunications while the Volta Bureau would later evolve into the Alexander Graham Bell Association for the Deaf and Hard of Hearing (the AG Bell), a leading center for the research and pedagogy of deafness. In partnership with Gardiner Greene Hubbard, Bell helped establish the publication Science during the early 1880s. In 1898, Bell was elected as the second president of the National Geographic Society, serving until 1903, and was primarily responsible for the extensive use of illustrations, including photography, in the magazine. He also served for many years as a Regent of the Smithsonian Institution (1898–1922). The French government conferred on him the decoration of the Légion d'honneur (Legion of Honor); the Royal Society of Arts in London awarded him the Albert Medal in 1902; the University of Würzburg, Bavaria, granted him a PhD, and he was awarded the Franklin Institute's Elliott Cresson Medal in 1912. He was one of the founders of the American Institute of Electrical Engineers in 1884 and served as its president from 1891 to 1892. Bell was later awarded the AIEE's Edison Medal in 1914 "For meritorious achievement in the invention of the telephone". The bel (B) and the smaller decibel (dB) are units of measurement of sound pressure level (SPL) invented by Bell Labs and named after him. Since 1976, the IEEE's Alexander Graham Bell Medal has been awarded to honor outstanding contributions in the field of telecommunications. In 1936, the US Patent Office declared Bell first on its list of the country's greatest inventors, leading to the US Post Office issuing a commemorative stamp honoring Bell in 1940 as part of its 'Famous Americans Series'. The First Day of Issue ceremony was held on October 28 in Boston, Massachusetts, the city where Bell spent considerable time on research and working with the deaf. The Bell stamp became very popular and sold out in little time. The stamp became, and remains to this day, the most valuable one of the series. The 150th anniversary of Bell's birth in 1997 was marked by a special issue of commemorative £1 banknotes from the Royal Bank of Scotland. The illustrations on the reverse of the note include Bell's face in profile, his signature, and objects from Bell's life and career: users of the telephone over the ages; an audio wave signal; a diagram of a telephone receiver; geometric shapes from engineering structures; representations of sign language and the phonetic alphabet; the geese which helped him to understand flight; and the sheep which he studied to understand genetics. Additionally, the Government of Canada honored Bell in 1997 with a C$100 gold coin, in tribute also to the 150th anniversary of his birth, and with a silver dollar coin in 2009 in honor of the 100th anniversary of flight in Canada. That first flight was made by an airplane designed under Dr. Bell's tutelage, named the Silver Dart. Bell's image, and also those of his many inventions have graced paper money, coinage, and postal stamps in numerous countries worldwide for many dozens of years. Alexander Graham Bell was ranked 57th among the 100 Greatest Britons (2002) in an official BBC nationwide poll, and among the Top Ten Greatest Canadians (2004), and the 100 Greatest Americans (2005). 
In 2006, Bell was also named as one of the 10 greatest Scottish scientists in history after having been listed in the National Library of Scotland's 'Scottish Science Hall of Fame'. Bell's name is still widely known and used as part of the names of dozens of educational institutes, corporate namesakes, street and place names around the world. Honorary degrees Alexander Graham Bell, who could not complete the university program of his youth, received at least a dozen honorary degrees from academic institutions, including eight honorary LL.D.s (Doctorate of Laws), two Ph.D.s, a D.Sc., and an M.D.: Gallaudet College (then named National Deaf-Mute College) in Washington, D.C. (Ph.D.) in 1880 University of Würzburg in Würzburg, Bavaria (Ph.D.) in 1882 Heidelberg University in Heidelberg, Germany (M.D.) in 1886 Harvard University in Cambridge, Massachusetts (LL.D.) in 1896 Illinois College, in Jacksonville, Illinois (LL.D.) in 1896, possibly 1881 Amherst College in Amherst, Massachusetts (LL.D.) in 1901 St. Andrew's University in St Andrews, Scotland (LL.D) in 1902 University of Oxford in Oxford, England (D.Sc.) in 1906 University of Edinburgh in Edinburgh, Scotland (LL.D.) in 1906 George Washington University in Washington, D.C. (LL.D.) in 1913 Queen's University at Kingston in Kingston, Ontario, Canada (LL.D.) in 1908 Dartmouth College in Hanover, New Hampshire (LL.D.) in 1913, possibly 1914 Portrayal in film and television The 1939 film The Story of Alexander Graham Bell was based on his life and works. The 1992 film The Sound and the Silence was a TV film. Biography aired an episode Alexander Graham Bell: Voice of Invention on August 6, 1996. Eyewitness No. 90 A Great Inventor Is Remembered, a 1957 NFB short about Bell. Bibliography Also published as: See also Alexander Graham Bell Association for the Deaf and Hard of Hearing Alexander Graham Bell National Historic Site Bell Boatyard Bell Homestead National Historic Site Bell Telephone Memorial Berliner, Emile Bourseul, Charles IEEE Alexander Graham Bell Medal John Peirce, submitted telephone ideas to Bell Manzetti, Innocenzo Meucci, Antonio Oriental Telephone Company People on Scottish banknotes Pioneers, a Volunteer Network Reis, Philipp The Story of Alexander Graham Bell, a 1939 movie of his life The Telephone Cases Volta Laboratory and Bureau William Francis Channing, submitted telephone ideas to Bell References Notes Citations Further reading Mullett, Mary B. The Story of A Famous Inventor. New York: Rogers and Fowle, 1921. Walters, Eric. The Hydrofoil Mystery. Toronto, Ontario, Canada: Puffin Books, 1999. . Winzer, Margret A. The History Of Special Education: From Isolation To Integration. Washington, D.C.: Gallaudet University Press, 1993. . 
External links Alexander and Mabel Bell Legacy Foundation Alexander Graham Bell Institute at Cape Breton University Bell Telephone Memorial, Brantford, Ontario Bell Homestead National Historic Site, Brantford, Ontario Alexander Graham Bell National Historic Site of Canada, Baddeck, Nova Scotia Alexander Graham Bell Family Papers at the Library of Congress Biography at the Dictionary of Canadian Biography Online Science.ca profile: Alexander Graham Bell Alexander Graham Bell's notebooks at the Internet Archive "Téléphone et photophone : les contributions indirectes de Graham Bell à l'idée de la vision à distance par l'électricité" at the Histoire de la télévision Multimedia Alexander Graham Bell at The Biography Channel Shaping The Future, from the Heritage Minutes and Radio Minutes collection at HistoricaCanada.ca (1:31 audio drama, Adobe Flash required) 1847 births 1922 deaths 19th-century Scottish scientists Alumni of the University of Edinburgh Alumni of University College London American agnostics American educational theorists American eugenicists American physicists American Unitarians Aviation pioneers Canadian agnostics Canadian Aviation Hall of Fame inductees Canadian emigrants to the United States Canadian eugenicists 19th-century Canadian inventors Canadian physicists Canadian Unitarians Deaths from diabetes Fellows of the American Academy of Arts and Sciences History of telecommunications IEEE Edison Medal recipients Language teachers Members of the American Philosophical Society Members of the American Antiquarian Society Members of the United States National Academy of Sciences National Aviation Hall of Fame inductees National Geographic Society Officiers of the Légion d'honneur People educated at the Royal High School, Edinburgh People from Baddeck, Nova Scotia Businesspeople from Boston People from Brantford Scientists from Edinburgh People from Washington, D.C. Scottish agnostics 19th-century Scottish businesspeople Scottish emigrants to Canada Scottish eugenicists Scottish inventors Scottish Unitarians Smithsonian Institution people Hall of Fame for Great Americans inductees George Washington University trustees Canadian activists Gardiner family Articles containing video clips 19th-century British inventors Scottish emigrants to the United States John Fritz Medal recipients 20th-century American scientists 20th-century American inventors Canadian educational theorists Scottish physicists 19th-century Canadian scientists 20th-century Canadian scientists Scottish Engineering Hall of Fame inductees
Alexander Graham Bell
Anatolia, also known as Asia Minor, is a large peninsula in Western Asia and the westernmost protrusion of the Asian continent. It constitutes the major part of modern-day Turkey. The region is bounded by the Turkish Straits to the northwest, the Black Sea to the north, the Armenian Highlands to the east, the Mediterranean Sea to the south, and the Aegean Sea to the west. The Sea of Marmara forms a connection between the Black and Aegean seas through the Bosporus and Dardanelles straits and separates Anatolia from Thrace on the Balkan peninsula of Southeast Europe. The eastern border of Anatolia has been held to be a line between the Gulf of Alexandretta and the Black Sea, bounded by the Armenian Highlands to the east and Mesopotamia to the southeast. By this definition Anatolia comprises approximately the western two-thirds of the Asian part of Turkey. Today, Anatolia is sometimes considered to be synonymous with Asian Turkey, thereby including the western part of the Armenian Highlands and northern Mesopotamia; its eastern and southern borders are coterminous with Turkey's borders. The ancient Anatolian peoples spoke the now-extinct Anatolian languages of the Indo-European language family, which were largely replaced by the Greek language during classical antiquity as well as during the Hellenistic, Roman, and Byzantine periods. The major Anatolian languages included Hittite, Luwian, and Lydian, while other, poorly attested local languages included Phrygian and Mysian. Hurro-Urartian languages were spoken in the southeastern kingdom of Mitanni, while Galatian, a Celtic language, was spoken in Galatia, central Anatolia. The Turkification of Anatolia began under the rule of the Seljuk Empire in the late 11th century and it continued under the rule of the Ottoman Empire between the late 13th and the early 20th century and it has continued under the rule of today's Republic of Turkey. However, various non-Turkic languages continue to be spoken by minorities in Anatolia today, including Kurdish, Neo-Aramaic, Armenian, Arabic, Laz, Georgian and Greek. Other ancient peoples in the region included Galatians, Hurrians, Assyrians, Hattians, Cimmerians, as well as Ionian, Dorian, and Aeolic Greeks. Geography Traditionally, Anatolia is considered to extend in the east to an indefinite line running from the Gulf of Alexandretta to the Black Sea, coterminous with the Anatolian Plateau. This traditional geographical definition is used, for example, in the latest edition of Merriam-Webster's Geographical Dictionary. Under this definition, Anatolia is bounded to the east by the Armenian Highlands, and the Euphrates before that river bends to the southeast to enter Mesopotamia. To the southeast, it is bounded by the ranges that separate it from the Orontes valley in Syria and the Mesopotamian plain. Following the Armenian genocide, Western Armenia was renamed the Eastern Anatolia Region by the newly established Turkish government. In 1941, with the First Geography Congress which divided Turkey into seven geographical regions based on differences in climate and landscape, the eastern provinces of Turkey were placed into the Eastern Anatolia Region, which largely corresponds to the historical region of Western Armenia (named as such after the division of Greater Armenia between the Roman/Byzantine Empire (Western Armenia) and Sassanid Persia (Eastern Armenia) in 387 AD). 
Vazken Davidian terms the expanded use of "Anatolia" to apply to territory in eastern Turkey that was formerly referred to as Armenia (which had a sizeable Armenian population before the Armenian genocide) an "ahistorical imposition" and notes that a growing body of literature is uncomfortable with referring to the Ottoman East as "Eastern Anatolia." The highest mountain in the Eastern Anatolia Region (also the highest peak in the Armenian Highlands) is Mount Ararat (5123 m). The Euphrates, Araxes, Karasu and Murat rivers connect the Armenian Highlands to the South Caucasus and the Upper Euphrates Valley. Along with the Çoruh, these rivers are the longest in the Eastern Anatolia Region. Etymology The English-language name Anatolia derives from the Greek () meaning "the East" and designating (from a Greek point of view) eastern regions in general. The Greek word refers to the direction where the sun rises, coming from ἀνατέλλω anatello '(Ι) rise up,' comparable to terms in other languages such as "levant" from Latin levo 'to rise,' "orient" from Latin orior 'to arise, to originate,' Hebrew מִזְרָח mizraḥ 'east' from זָרַח zaraḥ 'to rise, to shine,' Aramaic מִדְנָח midnaḥ from דְּנַח denaḥ 'to rise, to shine.' The use of Anatolian designations has varied over time, perhaps originally referring to the Aeolian, Ionian and Dorian colonies situated along the eastern coasts of the Aegean Sea, but also encompassing eastern regions in general. Such use of Anatolian designations was employed during the reign of Roman Emperor Diocletian (284–305), who created the Diocese of the East, known in Greek as the Eastern (Ανατολής / Anatolian) Diocese, but completely unrelated to the regions of Asia Minor. In their widest territorial scope, Anatolian designations were employed during the reign of Roman Emperor Constantine I (306–337), who created the Praetorian prefecture of the East, known in Greek as the Eastern (Ανατολής / Anatolian) Prefecture, encompassing all eastern regions of the Late Roman Empire and spanning from Thrace to Egypt. Only after the loss of other eastern regions during the 7th century and the reduction of Byzantine eastern domains to Asia Minor did that region become the only remaining part of the Byzantine East, and thus it came to be commonly referred to (in Greek) as the Eastern (Ανατολής / Anatolian) part of the Empire. At the same time, the Anatolic Theme (Ἀνατολικὸν θέμα / "the Eastern theme") was created, as a province (theme) covering the western and central parts of Turkey's present-day Central Anatolia Region, centered around Iconium, but ruled from the city of Amorium. The Latinized form "Anatolia", with its -ia ending, is probably a Medieval Latin innovation. The modern Turkish form Anadolu derives directly from the Greek name Aνατολή (Anatolḗ). The Russian male name Anatoly, the French Anatole and plain Anatol, all stemming from saints Anatolius of Laodicea (d. 283) and Anatolius of Constantinople (d. 458; the first Patriarch of Constantinople), share the same linguistic origin. Names The oldest known name for any region within Anatolia is related to its central area, known as the "Land of Hatti" – a designation that was initially used for the land of ancient Hattians, but later became the most common name for the entire territory under the rule of ancient Hittites.
The first recorded name the Greeks used for the Anatolian peninsula, though not particularly popular at the time, was Ἀσία (Asía), perhaps from an Akkadian expression for the "sunrise" or possibly echoing the name of the Assuwa league in western Anatolia. The Romans used it as the name of their province, comprising the west of the peninsula plus the nearby Aegean Islands. As the name "Asia" broadened its scope to apply to the vaster region east of the Mediterranean, some Greeks in Late Antiquity came to use the name Asia Minor (Μικρὰ Ἀσία, Mikrà Asía), meaning "Lesser Asia" to refer to present-day Anatolia, whereas the administration of the Empire preferred the description Ἀνατολή (Anatolḗ "the East"). The endonym Ῥωμανία (Rōmanía "the land of the Romans, i.e. the Eastern Roman Empire") was understood as another name for the province by the invading Seljuq Turks, who founded a Sultanate of Rûm in 1077. Thus (land of the) Rûm became another name for Anatolia. By the 12th century Europeans had started referring to Anatolia as Turchia. During the era of the Ottoman Empire, mapmakers outside the Empire referred to the mountainous plateau in eastern Anatolia as Armenia. Other contemporary sources called the same area Kurdistan. Geographers have variously used the terms East Anatolian Plateau and Armenian Plateau to refer to the region, although the territory encompassed by each term largely overlaps with the other. According to archaeologist Lori Khatchadourian, this difference in terminology "primarily result[s] from the shifting political fortunes and cultural trajectories of the region since the nineteenth century." Turkey's First Geography Congress in 1941 created two geographical regions of Turkey to the east of the Gulf of Iskenderun-Black Sea line, the Eastern Anatolia Region and the Southeastern Anatolia Region, the former largely corresponding to the western part of the Armenian Highlands, the latter to the northern part of the Mesopotamian plain. According to Richard Hovannisian, this changing of toponyms was "necessary to obscure all evidence" of the Armenian presence as part of the policy of Armenian genocide denial embarked upon by the newly established Turkish government and what Hovannisian calls its "foreign collaborators." History Prehistoric Anatolia Human habitation in Anatolia dates back to the Paleolithic. Neolithic settlements include Çatalhöyük, Çayönü, Nevali Cori, Aşıklı Höyük, Boncuklu Höyük Hacilar, Göbekli Tepe, Norşuntepe, Kosk, and Mersin. Çatalhöyük (7.000 BCE) is considered the most advanced of these. Neolithic Anatolia has been proposed as the homeland of the Indo-European language family, although linguists tend to favour a later origin in the steppes north of the Black Sea. However, it is clear that the Anatolian languages, the earliest attested branch of Indo-European, have been spoken in Anatolia since at least the 19th century BCE. Ancient Anatolia The earliest historical data related to Anatolia appear during the Bronze Age and continue throughout the Iron Age. The most ancient period in the history of Anatolia spans from the emergence of ancient Hattians, up to the conquest of Anatolia by the Achaemenid Empire in the 6th century BCE. Hattians and Hurrians The earliest historically attested populations of Anatolia were the Hattians in central Anatolia, and Hurrians further to the east. The Hattians were an indigenous people, whose main center was the city of Hattush. 
Affiliation of Hattian language remains unclear, while Hurrian language belongs to a distinctive family of Hurro-Urartian languages. All of those languages are extinct; relationships with indigenous languages of the Caucasus have been proposed, but are not generally accepted. The region became famous for exporting raw materials. Organized trade between Anatolia and Mesopotamia started to emerge during the period of the Akkadian Empire, and was continued and intensified during the period of the Old Assyrian Empire, between the 21st and the 18th centuries BCE. Assyrian traders were bringing tin and textiles in exchange for copper, silver or gold. Cuneiform records, dated circa 20th century BCE, found in Anatolia at the Assyrian colony of Kanesh, use an advanced system of trading computations and credit lines. Hittite Anatolia (18th–12th century BCE) Unlike the Akkadians and Assyrians, whose Anatolian trading posts were peripheral to their core lands in Mesopotamia, the Hittites were centered at Hattusa (modern Boğazkale) in north-central Anatolia by the 17th century BCE. They were speakers of an Indo-European language, the Hittite language, or nesili (the language of Nesa) in Hittite. The Hittites originated from local ancient cultures that grew in Anatolia, in addition to the arrival of Indo-European languages. Attested for the first time in the Assyrian tablets of Nesa around 2000 BCE, they conquered Hattusa in the 18th century BCE, imposing themselves over Hattian- and Hurrian-speaking populations. According to the widely accepted Kurgan theory on the Proto-Indo-European homeland, however, the Hittites (along with the other Indo-European ancient Anatolians) were themselves relatively recent immigrants to Anatolia from the north. However, they did not necessarily displace the population genetically; they assimilated into the former peoples' culture, preserving the Hittite language. The Hittites adopted the Mesopotamian cuneiform script. In the Late Bronze Age, Hittite New Kingdom (c. 1650 BCE) was founded, becoming an empire in the 14th century BCE after the conquest of Kizzuwatna in the south-east and the defeat of the Assuwa league in western Anatolia. The empire reached its height in the 13th century BCE, controlling much of Asia Minor, northwestern Syria, and northwest upper Mesopotamia. However, the Hittite advance toward the Black Sea coast was halted by the semi-nomadic pastoralist and tribal Kaskians, a non-Indo-European people who had earlier displaced the Palaic-speaking Indo-Europeans. Much of the history of the Hittite Empire concerned war with the rival empires of Egypt, Assyria and the Mitanni. The Egyptians eventually withdrew from the region after failing to gain the upper hand over the Hittites and becoming wary of the power of Assyria, which had destroyed the Mitanni Empire. The Assyrians and Hittites were then left to battle over control of eastern and southern Anatolia and colonial territories in Syria. The Assyrians had better success than the Egyptians, annexing much Hittite (and Hurrian) territory in these regions. Post-Hittite Anatolia (12th–6th century BCE) After 1180 BCE, during the Late Bronze Age collapse, the Hittite empire disintegrated into several independent Syro-Hittite states, subsequent to losing much territory to the Middle Assyrian Empire and being finally overrun by the Phrygians, another Indo-European people who are believed to have migrated from the Balkans. 
The Phrygian expansion into southeast Anatolia was eventually halted by the Assyrians, who controlled that region. Luwians Another Indo-European people, the Luwians, rose to prominence in central and western Anatolia circa 2000 BCE. Their language belonged to the same linguistic branch as Hittite. The general consensus amongst scholars is that Luwian was spoken across a large area of western Anatolia, including (possibly) Wilusa (Troy), the Seha River Land (to be identified with the Hermos and/or Kaikos valley), and the kingdom of Mira-Kuwaliya with its core territory of the Maeander valley. From the 9th century BCE, Luwian regions coalesced into a number of states such as Lydia, Caria, and Lycia, all of which had Hellenic influence. Arameans Arameans encroached over the borders of south-central Anatolia in the century or so after the fall of the Hittite empire, and some of the Syro-Hittite states in this region became an amalgam of Hittites and Arameans. These became known as Syro-Hittite states. Neo-Assyrian Empire From the 10th to late 7th centuries BCE, much of Anatolia (particularly the southeastern regions) fell to the Neo-Assyrian Empire, including all of the Syro-Hittite states, Tabal, Kingdom of Commagene, the Cimmerians and Scythians and swathes of Cappadocia. The Neo-Assyrian empire collapsed due to a bitter series of civil wars followed by a combined attack by Medes, Persians, Scythians and their own Babylonian relations. The last Assyrian city to fall was Harran in southeast Anatolia. This city was the birthplace of the last king of Babylon, the Assyrian Nabonidus and his son and regent Belshazzar. Much of the region then fell to the short-lived Iran-based Median Empire, with the Babylonians and Scythians briefly appropriating some territory. Cimmerian and Scythian invasions From the late 8th century BCE, a new wave of Indo-European-speaking raiders entered northern and northeast Anatolia: the Cimmerians and Scythians. The Cimmerians overran Phrygia and the Scythians threatened to do the same to Urartu and Lydia, before both were finally checked by the Assyrians. Early Greek presence The north-western coast of Anatolia was inhabited by Greeks of the Achaean/Mycenaean culture from the 20th century BCE, related to the Greeks of southeastern Europe and the Aegean. Beginning with the Bronze Age collapse at the end of the 2nd millennium BCE, the west coast of Anatolia was settled by Ionian Greeks, usurping the area of the related but earlier Mycenaean Greeks. Over several centuries, numerous Ancient Greek city-states were established on the coasts of Anatolia. Greeks started Western philosophy on the western coast of Anatolia (Pre-Socratic philosophy). Classical Anatolia In classical antiquity, Anatolia was described by Herodotus and later historians as divided into regions that were diverse in culture, language and religious practices. The northern regions included Bithynia, Paphlagonia and Pontus; to the west were Mysia, Lydia and Caria; and Lycia, Pamphylia and Cilicia belonged to the southern shore. There were also several inland regions: Phrygia, Cappadocia, Pisidia and Galatia. Languages spoken included the late surviving Anatolic languages Isaurian and Pisidian, Greek in Western and coastal regions, Phrygian spoken until the 7th century CE, local variants of Thracian in the Northwest, the Galatian variant of Gaulish in Galatia until the 6th century CE, Cappadocian and Armenian in the East, and Kartvelian languages in the Northeast. 
Anatolia is known as the birthplace of minted coinage (as opposed to unminted coinage, which first appears in Mesopotamia at a much earlier date) as a medium of exchange, some time in the 7th century BCE in Lydia. The use of minted coins continued to flourish during the Greek and Roman eras. During the 6th century BCE, all of Anatolia was conquered by the Persian Achaemenid Empire, the Persians having usurped the Medes as the dominant dynasty in Iran. In 499 BCE, the Ionian city-states on the west coast of Anatolia rebelled against Persian rule. The Ionian Revolt, as it became known, though quelled, initiated the Greco-Persian Wars, which ended in a Greek victory in 449 BCE, and the Ionian cities regained their independence. By the Peace of Antalcidas (387 BCE), which ended the Corinthian War, Persia regained control over Ionia. In 334 BCE, the Macedonian Greek king Alexander the Great conquered the peninsula from the Achaemenid Persian Empire. Alexander's conquest opened up the interior of Asia Minor to Greek settlement and influence. Following the death of Alexander and the breakup of his empire, Anatolia was ruled by a series of Hellenistic kingdoms, such as the Attalids of Pergamum and the Seleucids, the latter controlling most of Anatolia. A period of peaceful Hellenization followed, such that the local Anatolian languages had been supplanted by Greek by the 1st century BCE. In 133 BCE the last Attalid king bequeathed his kingdom to the Roman Republic, and western and central Anatolia came under Roman control, but Hellenistic culture remained predominant. Further annexations by Rome, in particular of the Kingdom of Pontus by Pompey, brought all of Anatolia under Roman control, except for the eastern frontier with the Parthian Empire, which remained unstable for centuries, causing a series of wars, culminating in the Roman-Parthian Wars. Early Christian Period After the division of the Roman Empire, Anatolia became part of the East Roman, or Byzantine Empire. Anatolia was one of the first places where Christianity spread, so that by the 4th century CE, western and central Anatolia were overwhelmingly Christian and Greek-speaking. For the next 600 years, while Imperial possessions in Europe were subjected to barbarian invasions, Anatolia would be the center of the Hellenic world. It was one of the wealthiest and most densely populated places in the Late Roman Empire. Anatolia's wealth grew during the 4th and 5th centuries thanks, in part, to the Pilgrim's Road that ran through the peninsula. Literary evidence about the rural landscape stems from the hagiographies of 6th century Nicholas of Sion and 7th century Theodore of Sykeon. Large urban centers included Ephesus, Pergamum, Sardis and Aphrodisias. Scholars continue to debate the cause of urban decline in the 6th and 7th centuries variously attributing it to the Plague of Justinian (541), and the 7th century Persian incursion and Arab conquest of the Levant. In the ninth and tenth century a resurgent Byzantine Empire regained its lost territories, including even long lost territory such as Armenia and Syria (ancient Aram). Medieval Period In the 10 years following the Battle of Manzikert in 1071, the Seljuk Turks from Central Asia migrated over large areas of Anatolia, with particular concentrations around the northwestern rim. 
The Turkish language and the Islamic religion were gradually introduced as a result of the Seljuk conquest, and this period marks the start of Anatolia's slow transition from predominantly Christian and Greek-speaking, to predominantly Muslim and Turkish-speaking (although ethnic groups such as Armenians, Greeks, and Assyrians remained numerous and retained Christianity and their native languages). In the following century, the Byzantines managed to reassert their control in western and northern Anatolia. Control of Anatolia was then split between the Byzantine Empire and the Seljuk Sultanate of Rûm, with the Byzantine holdings gradually being reduced. In 1255, the Mongols swept through eastern and central Anatolia, and would remain until 1335. The Ilkhanate garrison was stationed near Ankara. After the decline of the Ilkhanate from 1335 to 1353, the Mongol Empire's legacy in the region was the Uyghur Eretna Dynasty that was overthrown by Kadi Burhan al-Din in 1381. By the end of the 14th century, most of Anatolia was controlled by various Anatolian beyliks. Smyrna fell in 1330, and the last Byzantine stronghold in Anatolia, Philadelphia, fell in 1390. The Turkmen Beyliks were under the control of the Mongols, at least nominally, through declining Seljuk sultans. The Beyliks did not mint coins in the names of their own leaders while they remained under the suzerainty of the Mongol Ilkhanids. The Osmanli ruler Osman I was the first Turkish ruler who minted coins in his own name in 1320s; they bear the legend "Minted by Osman son of Ertugrul". Since the minting of coins was a prerogative accorded in Islamic practice only to a sovereign, it can be considered that the Osmanli, or Ottoman Turks, had become formally independent from the Mongol Khans. Ottoman Empire Among the Turkish leaders, the Ottomans emerged as great power under Osman I and his son Orhan I. The Anatolian beyliks were successively absorbed into the rising Ottoman Empire during the 15th century. It is not well understood how the Osmanlı, or Ottoman Turks, came to dominate their neighbours, as the history of medieval Anatolia is still little known. The Ottomans completed the conquest of the peninsula in 1517 with the taking of Halicarnassus (modern Bodrum) from the Knights of Saint John. Modern times With the acceleration of the decline of the Ottoman Empire in the early 19th century, and as a result of the expansionist policies of the Russian Empire in the Caucasus, many Muslim nations and groups in that region, mainly Circassians, Tatars, Azeris, Lezgis, Chechens and several Turkic groups left their homelands and settled in Anatolia. As the Ottoman Empire further shrank in the Balkan regions and then fragmented during the Balkan Wars, much of the non-Christian populations of its former possessions, mainly Balkan Muslims (Bosnian Muslims, Albanians, Turks, Muslim Bulgarians and Greek Muslims such as the Vallahades from Greek Macedonia), were resettled in various parts of Anatolia, mostly in formerly Christian villages throughout Anatolia. A continuous reverse migration occurred since the early 19th century, when Greeks from Anatolia, Constantinople and Pontus area migrated toward the newly independent Kingdom of Greece, and also towards the United States, the southern part of the Russian Empire, Latin America, and the rest of Europe. 
Following the Russo-Persian Treaty of Turkmenchay (1828) and the incorporation of Eastern Armenia into the Russian Empire, another migration involved the large Armenian population of Anatolia, which recorded significant migration rates from Western Armenia (Eastern Anatolia) toward the Russian Empire, especially toward its newly established Armenian provinces. Anatolia remained multi-ethnic until the early 20th century (see the rise of nationalism under the Ottoman Empire). During World War I, the Armenian genocide, the Greek genocide (especially in Pontus), and the Assyrian genocide almost entirely removed the ancient indigenous communities of Armenian, Greek, and Assyrian populations in Anatolia and surrounding regions. Following the Greco-Turkish War of 1919–1922, most remaining ethnic Anatolian Greeks were forced out during the 1923 population exchange between Greece and Turkey. Of the remainder, most have left Turkey since then, leaving fewer than 5,000 Greeks in Anatolia today. Geology Anatolia's terrain is structurally complex. A central massif composed of uplifted blocks and downfolded troughs, covered by recent deposits and giving the appearance of a plateau with rough terrain, is wedged between two folded mountain ranges that converge in the east. True lowland is confined to a few narrow coastal strips along the Aegean, Mediterranean, and the Black Sea coasts. Flat or gently sloping land is rare and largely confined to the deltas of the Kızıl River, the coastal plains of Çukurova and the valley floors of the Gediz River and the Büyük Menderes River as well as some interior high plains in Anatolia, mainly around Lake Tuz (Salt Lake) and the Konya Basin (Konya Ovasi). There are two mountain ranges in southern Anatolia: the Taurus and the Zagros mountains. Climate Anatolia has a varied range of climates. The central plateau is characterized by a continental climate, with hot summers and cold snowy winters. The south and west coasts enjoy a typical Mediterranean climate, with mild rainy winters, and warm dry summers. The Black Sea and Marmara coasts have a temperate oceanic climate, with cool foggy summers and much rainfall throughout the year. Ecoregions There is a diverse number of plant and animal communities. The mountains and coastal plain of northern Anatolia experience a humid and mild climate. There are temperate broadleaf, mixed and coniferous forests. The central and eastern plateau, with its drier continental climate, has deciduous forests and forest steppes. Western and southern Anatolia, which have a Mediterranean climate, contain Mediterranean forests, woodlands, and scrub ecoregions. Euxine-Colchic deciduous forests: These temperate broadleaf and mixed forests extend across northern Anatolia, lying between the mountains of northern Anatolia and the Black Sea. They include the enclaves of temperate rainforest lying along the southeastern coast of the Black Sea in eastern Turkey and Georgia. Northern Anatolian conifer and deciduous forests: These forests occupy the mountains of northern Anatolia, running east and west between the coastal Euxine-Colchic forests and the drier, continental climate forests of central and eastern Anatolia. Central Anatolian deciduous forests: These forests of deciduous oaks and evergreen pines cover the plateau of central Anatolia. Central Anatolian steppe: These dry grasslands cover the drier valleys and surround the saline lakes of central Anatolia, and include halophytic (salt tolerant) plant communities. 
Eastern Anatolian deciduous forests: This ecoregion occupies the plateau of eastern Anatolia. The drier and more continental climate is beneficial for steppe-forests dominated by deciduous oaks, with areas of shrubland, montane forest, and valley forest. Anatolian conifer and deciduous mixed forests: These forests occupy the western, Mediterranean-climate portion of the Anatolian plateau. Pine forests and mixed pine and oak woodlands and shrublands are predominant. Aegean and Western Turkey sclerophyllous and mixed forests: These Mediterranean-climate forests occupy the coastal lowlands and valleys of western Anatolia bordering the Aegean Sea. The ecoregion has forests of Turkish pine (Pinus brutia), oak forests and woodlands, and maquis shrubland of Turkish pine and evergreen sclerophyllous trees and shrubs, including Olive (Olea europaea), Strawberry Tree (Arbutus unedo), Arbutus andrachne, Kermes Oak (Quercus coccifera), and Bay Laurel (Laurus nobilis). Southern Anatolian montane conifer and deciduous forests: These mountain forests occupy the Mediterranean-climate Taurus Mountains of southern Anatolia. Conifer forests are predominant, chiefly Anatolian black pine (Pinus nigra), Cedar of Lebanon (Cedrus libani), Taurus fir (Abies cilicica), and juniper (Juniperus foetidissima and J. excelsa). Broadleaf trees include oaks, hornbeam, and maples. Eastern Mediterranean conifer-sclerophyllous-broadleaf forests: This ecoregion occupies the coastal strip of southern Anatolia between the Taurus Mountains and the Mediterranean Sea. Plant communities include broadleaf sclerophyllous maquis shrublands, forests of Aleppo Pine (Pinus halepensis) and Turkish Pine (Pinus brutia), and dry oak (Quercus spp.) woodlands and steppes. Demographics See also Aeolis Anatolian hypothesis Anatolianism Anatolian leopard Anatolian Plate Anatolian Shepherd Ancient kingdoms of Anatolia Antigonid dynasty Doris (Asia Minor) Empire of Nicaea Empire of Trebizond Gordium Lycaonia Midas Miletus Myra Pentarchy Pontic Greeks Rumi Saint Anatolia Saint John Saint Nicholas Saint Paul Seleucid Empire Seven churches of Asia Seven Sleepers Tarsus Troad Turkic migration Notes References Citations Sources Further reading Akat, Uücel, Neşe Özgünel, and Aynur Durukan. 1991. Anatolia: A World Heritage. Ankara: Kültür Bakanliǧi. Brewster, Harry. 1993. Classical Anatolia: The Glory of Hellenism. London: I.B. Tauris. Donbaz, Veysel, and Şemsi Güner. 1995. The Royal Roads of Anatolia. Istanbul: Dünya. Dusinberre, Elspeth R. M. 2013. Empire, Authority, and Autonomy In Achaemenid Anatolia. Cambridge: Cambridge University Press. Gates, Charles, Jacques Morin, and Thomas Zimmermann. 2009. Sacred Landscapes In Anatolia and Neighboring Regions. Oxford: Archaeopress. Mikasa, Takahito, ed. 1999. Essays On Ancient Anatolia. Wiesbaden: Harrassowitz. Takaoğlu, Turan. 2004. Ethnoarchaeological Investigations In Rural Anatolia. İstanbul: Ege Yayınları. Taracha, Piotr. 2009. Religions of Second Millennium Anatolia. Wiesbaden: Harrassowitz. Taymaz, Tuncay, Y. Yilmaz, and Yildirim Dilek. 2007. The Geodynamics of the Aegean and Anatolia. London: Geological Society. External links Peninsulas of Asia Geography of Western Asia Geography of the Middle East Near East Geography of Armenia Geography of Turkey Peninsulas of Turkey Regions of Turkey Regions of Asia Ancient Near East Ancient Greek geography Physiographic provinces Historical regions Eurasia
Anatolia
Apple Inc. is an American multinational technology company that specializes in consumer electronics, software and online services. Apple is the largest information technology company by revenue (totaling in 2021) and, as of January 2021, it is the world's most valuable company, the fourth-largest personal computer vendor by unit sales and second-largest mobile phone manufacturer. It is one of the Big Five American information technology companies, alongside Alphabet, Amazon, Meta, and Microsoft. Apple was founded as Apple Computer Company on April 1, 1976, by Steve Jobs, Steve Wozniak and Ronald Wayne to develop and sell Wozniak's Apple I personal computer. It was incorporated by Jobs and Wozniak as Apple Computer, Inc. in 1977, and the company's next computer, the Apple II, became a best seller. Apple went public in 1980, to instant financial success. The company went on to develop new computers featuring innovative graphical user interfaces, including the original Macintosh, announced in a critically acclaimed advertisement, "1984", directed by Ridley Scott. By 1985, the high cost of its products and power struggles between executives caused problems. Wozniak stepped back from Apple amicably, while Jobs resigned to found NeXT, taking some Apple employees with him. As the market for personal computers expanded and evolved throughout the 1990s, Apple lost considerable market share to the lower-priced duopoly of the Microsoft Windows operating system on Intel-powered PC clones (also known as "Wintel"). In 1997, weeks away from bankruptcy, the company bought NeXT to resolve Apple's unsuccessful operating system strategy and entice Jobs back to the company. Over the next decade, Jobs guided Apple back to profitability through a number of tactics including introducing the iMac, iPod, iPhone and iPad to critical acclaim, launching memorable advertising campaigns, opening the Apple Store retail chain, and acquiring numerous companies to broaden the company's product portfolio. Jobs resigned in 2011 for health reasons, and died two months later. He was succeeded as CEO by Tim Cook. Apple became the first publicly traded U.S. company to be valued at over $1 trillion in August 2018, then $2 trillion in August 2020, and most recently $3 trillion in January 2022. The company receives criticism regarding the labor practices of its contractors, its environmental practices, and its business ethics, including anti-competitive practices and materials sourcing. The company enjoys a high level of brand loyalty, and is ranked as one of the world's most valuable brands. History 1976–1980: Founding and incorporation Apple Computer Company was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne as a business partnership. The company's first product was the Apple I, a computer designed and hand-built entirely by Wozniak. To finance its creation, Jobs sold his only motorized means of transportation, a VW Bus, for a few hundred dollars, and Wozniak sold his HP-65 calculator for . Wozniak debuted the first prototype Apple I at the Homebrew Computer Club in July 1976. The Apple I was sold as a motherboard with CPU, RAM, and basic textual-video chips—a base kit concept which would not yet be marketed as a complete personal computer. It went on sale soon after debut for . Wozniak later said he was unaware of the coincidental mark of the beast in the number 666, and that he came up with the price because he liked "repeating digits". Apple Computer, Inc.
was incorporated on January 3, 1977, without Wayne, who had left and sold his share of the company back to Jobs and Wozniak for $800 only twelve days after having co-founded Apple. Multimillionaire Mike Markkula provided essential business expertise and funding to Jobs and Wozniak during the incorporation of Apple. During the first five years of operations, revenues grew exponentially, doubling about every four months. Between September 1977 and September 1980, yearly sales grew from $775,000 to $118 million, an average annual growth rate of 533% (a brief arithmetic check of this figure appears below). The Apple II, also invented by Wozniak, was introduced on April 16, 1977, at the first West Coast Computer Faire. It differed from its major rivals, the TRS-80 and Commodore PET, because of its character cell-based color graphics and open architecture. While the Apple I and early Apple II models used ordinary audio cassette tapes as storage devices, they were superseded by the introduction of a -inch floppy disk drive and interface called the Disk II in 1978. The Apple II was chosen to be the desktop platform for the first "killer application" of the business world: VisiCalc, a spreadsheet program released in 1979. VisiCalc created a business market for the Apple II and gave home users an additional reason to buy an Apple II: compatibility with the office. Before VisiCalc, Apple had been a distant third-place competitor to Commodore and Tandy. By the end of the 1970s, Apple had become the leading computer manufacturer in the United States. On December 12, 1980, Apple (ticker symbol "AAPL") went public selling 4.6 million shares at $22 per share ($.39 per share when adjusting for stock splits), generating over $100 million, which was more capital than any IPO since Ford Motor Company in 1956. By the end of the day, 300 millionaires were created, from a stock price of $29 per share and a market cap of $1.778 billion. 1980–1990: Success with Macintosh A critical moment in the company's history came in December 1979 when Jobs and several Apple employees, including human–computer interface expert Jef Raskin, visited Xerox PARC to see a demonstration of the Xerox Alto, a computer using a graphical user interface. Xerox granted Apple engineers three days of access to the PARC facilities in return for the option to buy 100,000 shares (5.6 million split-adjusted shares) of Apple at the pre-IPO price of $10 a share. After the demonstration, Jobs was immediately convinced that all future computers would use a graphical user interface, and development of a GUI began for the Apple Lisa, named after Jobs's daughter. The Lisa division would be plagued by infighting, and in 1982 Jobs was pushed off the project. The Lisa launched in 1983 and became the first personal computer sold to the public with a GUI, but was a commercial failure due to its high price and limited software titles. Jobs, angered by being pushed off the Lisa team, took over the company's Macintosh division. Wozniak and Raskin had envisioned the Macintosh as a low-cost computer with a text-based interface like the Apple II, but a plane crash in 1981 forced Wozniak to step back from the project. Jobs quickly redefined the Macintosh as a graphical system that would be cheaper than the Lisa, undercutting his former division. Jobs was also hostile to the Apple II division, which, at the time, generated most of the company's revenue. In 1984, Apple launched the Macintosh, the first personal computer to be sold without a programming language.
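The growth figure quoted above can be verified with a few lines of arithmetic. The following is a minimal sketch, assuming the "average annual growth rate of 533%" is read as the year-over-year sales multiple (each year's revenue as a share of the prior year's), compounded over the three years between the two revenue figures given in the text; the variable names are illustrative only.

```python
# Quick arithmetic check of the revenue growth quoted above.
# Assumption: "533%" is interpreted as the average year-over-year multiple,
# compounded over the three years from September 1977 to September 1980.
start_sales = 775_000        # 1977 yearly sales (USD), as given in the text
end_sales = 118_000_000      # 1980 yearly sales (USD), as given in the text
years = 3

annual_multiple = (end_sales / start_sales) ** (1 / years)
print(f"Average yearly multiple: {annual_multiple:.2f}x "
      f"(about {annual_multiple:.0%} of the prior year's sales)")
# Prints roughly 5.34x, i.e. about 534%, consistent with the 533% figure.
```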
Its debut was signified by "1984", a $1.5 million television advertisement directed by Ridley Scott that aired during the third quarter of Super Bowl XVIII on January 22, 1984. This is now hailed as a watershed event for Apple's success and was called a "masterpiece" by CNN and one of the greatest TV advertisements of all time by TV Guide. The advertisement created great interest in the original Macintosh, and sales were initially good, but began to taper off dramatically after the first three months as reviews started to come in. Jobs had made the decision to equip the original Macintosh with 128 kilobytes of RAM, attempting to reach a price point, which limited its speed and the software that could be used. The Macintosh would eventually ship for , a price panned by critics in light of its slow performance. In early 1985, this sales slump triggered a power struggle between Steve Jobs and CEO John Sculley, who had been hired away from Pepsi two years earlier by Jobs using the famous line, "Do you want to sell sugar water for the rest of your life or come with me and change the world?" Sculley decided to remove Jobs as the head of the Macintosh division, with unanimous support from the Apple board of directors. The board of directors instructed Sculley to contain Jobs and his ability to launch expensive forays into untested products. Rather than submit to Sculley's direction, Jobs attempted to oust him from his leadership role at Apple. Informed by Jean-Louis Gassée, Sculley found out that Jobs had been attempting to organize a boardroom coup and called an emergency meeting at which Apple's executive staff sided with Sculley and stripped Jobs of all operational duties. Jobs resigned from Apple in September 1985 and took a number of Apple employees with him to found NeXT. Wozniak had also quit his active employment at Apple earlier in 1985 to pursue other ventures, expressing his frustration with Apple's treatment of the Apple II division and stating that the company had "been going in the wrong direction for the last five years". Despite Wozniak's grievances, he officially remained employed by Apple, and to this day continues to work for the company as a representative, receiving a stipend estimated to be $120,000 per year for this role. Both Jobs and Wozniak remained Apple shareholders after their departures. After the departures of Jobs and Wozniak, Sculley worked to improve the Macintosh in 1985 by quadrupling the RAM and introducing the LaserWriter, the first reasonably priced PostScript laser printer. PageMaker, an early desktop publishing application taking advantage of the PostScript language, was also released by Aldus Corporation in July 1985. It has been suggested that the combination of Macintosh, LaserWriter and PageMaker was responsible for the creation of the desktop publishing market. This dominant position in the desktop publishing market allowed the company to focus on higher price points, the so-called "high-right policy" named for the position on a chart of price vs. profits. Newer models selling at higher price points offered higher profit margin, and appeared to have no effect on total sales as power users snapped up every increase in speed. Although some worried about pricing themselves out of the market, the high-right policy was in full force by the mid-1980s, notably due to Jean-Louis Gassée's mantra of "fifty-five or die", referring to the 55% profit margins of the Macintosh II. 
This policy began to backfire in the last years of the decade as desktop publishing programs appeared on PC clones that offered some or much of the same functionality of the Macintosh, but at far lower price points. The company lost its dominant position in the desktop publishing market and estranged much of its original consumer customer base, who could no longer afford their high-priced products. The Christmas season of 1989 was the first in the company's history to have declining sales, which led to a 20% drop in Apple's stock price. During this period, the relationship between Sculley and Gassée deteriorated, leading Sculley to effectively demote Gassée in January 1990 by appointing Michael Spindler as the chief operating officer. Gassée left the company later that year. 1990–1997: Decline and restructuring The company pivoted strategy and in October 1990 introduced three lower-cost models, the Macintosh Classic, the Macintosh LC, and the Macintosh IIsi, all of which saw significant sales due to pent-up demand. In 1991, Apple introduced the hugely successful PowerBook with a design that set the current shape for almost all modern laptops. The same year, Apple introduced System 7, a major upgrade to the Macintosh operating system, adding color to the interface and introducing new networking capabilities. The success of the lower-cost Macs and PowerBook brought increasing revenue. For some time, Apple was doing incredibly well, introducing fresh new products and generating increasing profits in the process. The magazine MacAddict named the period between 1989 and 1991 as the "first golden age" of the Macintosh. The success of Apple's lower-cost consumer models, especially the LC, also led to the cannibalization of their higher-priced machines. To address this, management introduced several new brands, selling largely identical machines at different price points, aimed at different markets: the high-end Quadra models, the mid-range Centris line, and the consumer-marketed Performa series. This led to significant market confusion, as customers did not understand the difference between models. The early 1990s also saw the discontinuation of the Apple II series, which was expensive to produce and which the company felt was still taking sales away from lower-cost Macintosh models. After the launch of the LC, Apple began encouraging developers to create applications for Macintosh rather than Apple II, and authorized salespersons to direct consumers towards Macintosh and away from Apple II. The Apple IIe was discontinued in 1993. Throughout this period, Microsoft continued to gain market share with its Windows graphical user interface that it sold to manufacturers of generally less expensive PC clones. While the Macintosh was more expensive, it offered a more tightly integrated user experience, but the company struggled to make the case to consumers. Apple also experimented with a number of other unsuccessful consumer-targeted products during the 1990s, including digital cameras, portable CD audio players, speakers, video game consoles, the eWorld online service, and TV appliances. Most notably, enormous resources were invested in the problem-plagued Newton tablet division, based on John Sculley's unrealistic market forecasts. Meanwhile, Windows machines continued to win over buyers of inexpensive personal computers, while Apple was delivering a richly engineered but expensive experience. Apple relied on high profit margins and never developed a clear response; instead, they sued Microsoft for using a GUI similar to the Apple Lisa in Apple Computer, Inc. v.
Microsoft Corp. The lawsuit dragged on for years before it was finally dismissed. The major product flops and the rapid loss of market share to Windows sullied Apple's reputation, and in 1993 Sculley was replaced as CEO by Michael Spindler.

With Spindler at the helm, Apple, IBM, and Motorola formed the AIM alliance in 1994 with the goal of creating a new computing platform (the PowerPC Reference Platform, or PReP), which would use IBM and Motorola hardware coupled with Apple software. The AIM alliance hoped that PReP's performance and Apple's software would leave the PC far behind and thus counter the dominance of Windows. The same year, Apple introduced the Power Macintosh, the first of many Apple computers to use Motorola's PowerPC processor. In the wake of the alliance, Apple opened up to the idea of allowing Motorola and other companies to build Macintosh clones. Over the next two years, 75 distinct Macintosh clone models were introduced. However, by 1996 Apple executives were worried that the clones were cannibalizing sales of their own high-end computers, where profit margins were highest.

In 1996, Spindler was replaced by Gil Amelio as CEO. Hired for his reputation as a corporate rehabilitator, Amelio made deep changes, including extensive layoffs and cost-cutting. This period was also marked by numerous failed attempts to modernize the Macintosh operating system (Mac OS). The original Macintosh operating system (System 1) was not built for multitasking (running several applications at once). Apple attempted to correct this by introducing cooperative multitasking in System 5, but the company still felt it needed a more modern approach. This led to the Pink project in 1988, A/UX that same year, Copland in 1994, and the attempted purchase of BeOS in 1996. Talks with Be stalled when its CEO, former Apple executive Jean-Louis Gassée, demanded $300 million instead of the $125 million Apple wanted to pay. Only weeks away from bankruptcy, Apple's board decided NeXTSTEP was a better choice for its next operating system and purchased NeXT in late 1996 for $429 million, bringing back Apple co-founder Steve Jobs.

1997–2007: Return to profitability

The NeXT acquisition was finalized on February 9, 1997, and the board brought Jobs back to Apple as an advisor. On July 9, 1997, Jobs staged a boardroom coup that resulted in the resignation of Amelio, who had overseen a three-year record-low stock price and crippling financial losses. The board named Jobs interim CEO, and he immediately began a review of the company's products. Jobs ordered 70% of the company's products to be cancelled, resulting in the loss of 3,000 jobs and taking Apple back to the core of its computer offerings. The next month, in August 1997, Jobs convinced Microsoft to make a $150 million investment in Apple and a commitment to continue developing software for the Mac. The investment was seen as an "antitrust insurance policy" for Microsoft, which had recently settled with the Department of Justice over anti-competitive practices. Jobs also ended the Mac clone deals and, in September 1997, purchased the largest clone maker, Power Computing. On November 10, 1997, Apple introduced the Apple Store website, which was tied to a new build-to-order manufacturing strategy that had been used successfully by PC manufacturer Dell. The moves paid off for Jobs: at the end of his first year as CEO, the company turned a $309 million profit. On May 6, 1998, Apple introduced a new all-in-one computer reminiscent of the original Macintosh: the iMac.
The iMac was a huge success for Apple, selling 800,000 units in its first five months, and it ushered in major shifts in the industry by abandoning legacy technologies like the 3½-inch diskette, being an early adopter of the USB connector, and coming pre-installed with internet connectivity (the "i" in iMac) via Ethernet and a dial-up modem. The device also had a striking teardrop shape and translucent materials, designed by Jonathan Ive, who, although hired by Amelio, would go on to work collaboratively with Jobs for the next decade to chart a new course for the design of Apple's products. A little more than a year later, on July 21, 1999, Apple introduced the iBook, a laptop for consumers. It was the culmination of a strategy established by Jobs to produce only four products: refined versions of the Power Macintosh G3 desktop and PowerBook G3 laptop for professionals, along with the iMac desktop and iBook laptop for consumers. Jobs felt the small product line allowed for a greater focus on quality and innovation.

At around the same time, Apple also completed numerous acquisitions to create a portfolio of digital media production software for both professionals and consumers. Apple acquired Macromedia's Key Grip digital video editing software project, which was renamed Final Cut Pro when it was launched on the retail market in April 1999. The development of Key Grip also led to Apple's release of the consumer video-editing product iMovie in October 1999. Next, in April 2000, Apple acquired the German company Astarte, which had developed the DVD-authoring software DVDirector; Apple sold it as the professional-oriented DVD Studio Pro and used the same technology to create iDVD for the consumer market. In 2000, Apple purchased the SoundJam MP audio player software from Casady & Greene. Apple renamed the program iTunes, simplifying the user interface and adding the ability to burn CDs.

2001 was a pivotal year for Apple, with the company making three announcements that would change its course. The first came on March 24, 2001, with the release of a modern new operating system, Mac OS X, after numerous failed attempts in the early 1990s and several years of development. Mac OS X was based on NeXTSTEP, OPENSTEP, and BSD Unix, with Apple aiming to combine the stability, reliability, and security of Unix with the ease of use afforded by an overhauled user interface heavily influenced by NeXTSTEP. To aid users in migrating from Mac OS 9, the new operating system allowed the use of OS 9 applications within Mac OS X via the Classic Environment.

In May 2001, the company opened its first two Apple Store retail locations in Virginia and California, offering an improved presentation of the company's products. At the time, many speculated that the stores would fail, but they went on to become highly successful, and the first of more than 500 stores around the world. On October 23, 2001, Apple debuted the iPod portable digital audio player. The product, which was first sold on November 10, 2001, was phenomenally successful, with over 100 million units sold within six years. In 2003, Apple's iTunes Store was introduced. The service offered music downloads for $0.99 a song and integration with the iPod. The iTunes Store quickly became the market leader in online music services, with over five billion downloads by June 19, 2008.
Two years later, the iTunes Store was the world's largest music retailer. In 2002, Apple purchased Nothing Real for their advanced digital compositing application Shake, as well as Emagic for the music productivity application Logic. The purchase of Emagic made Apple the first computer manufacturer to own a music software company. The acquisition was followed by the development of Apple's consumer-level GarageBand application. The release of iPhoto in the same year completed the iLife suite.

At the Worldwide Developers Conference keynote address on June 6, 2005, Jobs announced that Apple would move away from PowerPC processors, and the Mac would transition to Intel processors in 2006. On January 10, 2006, the new MacBook Pro and iMac became the first Apple computers to use Intel's Core Duo CPU. By August 7, 2006, Apple had made the transition to Intel chips for the entire Mac product line, over one year sooner than announced. The Power Mac, iBook, and PowerBook brands were retired during the transition; the Mac Pro, MacBook, and MacBook Pro became their respective successors. On April 29, 2009, The Wall Street Journal reported that Apple was building its own team of engineers to design microchips. Apple also introduced Boot Camp in 2006 to help users install Windows XP or Windows Vista on their Intel Macs alongside Mac OS X.

Apple's success during this period was evident in its stock price. Between early 2003 and 2006, the price of Apple's stock increased more than tenfold, from around $6 per share (split-adjusted) to over $80. When Apple surpassed Dell's market cap in January 2006, Jobs sent an email to Apple employees saying Dell's CEO Michael Dell should eat his words. Nine years prior, Dell had said that if he ran Apple he would "shut it down and give the money back to the shareholders".

2007–2011: Success with mobile devices

During his keynote speech at the Macworld Expo on January 9, 2007, Jobs announced that Apple Computer, Inc. would thereafter be known as "Apple Inc.", because the company had shifted its emphasis from computers to consumer electronics. This event also saw the announcement of the iPhone and the Apple TV. The company sold 270,000 iPhone units during the first 30 hours of sales, and the device was called "a game changer for the industry". In an article posted on Apple's website on February 6, 2007, Jobs wrote that Apple would be willing to sell music on the iTunes Store without digital rights management (DRM), thereby allowing tracks to be played on third-party players, if record labels would agree to drop the technology. On April 2, 2007, Apple and EMI jointly announced the removal of DRM technology from EMI's catalog in the iTunes Store, effective in May 2007. Other record labels eventually followed suit, and Apple published a press release in January 2009 announcing that all songs on the iTunes Store were available without their FairPlay DRM.

In July 2008, Apple launched the App Store to sell third-party applications for the iPhone and iPod Touch. Within a month, the store sold 60 million applications and registered an average daily revenue of $1 million, with Jobs speculating in August 2008 that the App Store could become a billion-dollar business for Apple. By October 2008, Apple was the third-largest mobile handset supplier in the world due to the popularity of the iPhone.
On January 14, 2009, Jobs announced in an internal memo that he would be taking a six-month medical leave of absence from Apple until the end of June 2009 and would spend the time focusing on his health. In the email, Jobs stated that "the curiosity over my personal health continues to be a distraction not only for me and my family, but everyone else at Apple as well", and explained that the break would allow the company "to focus on delivering extraordinary products". Though Jobs was absent, Apple recorded its best non-holiday quarter (Q1 FY 2009) during the recession, with revenue of $8.16 billion and profit of $1.21 billion.

After years of speculation and multiple rumored "leaks", Apple unveiled a large-screen, tablet-like media device known as the iPad on January 27, 2010. The iPad ran the same touch-based operating system as the iPhone, and all iPhone apps were compatible with the iPad. This gave the iPad a large app catalog at launch, despite the very little development time before its release. The iPad was launched in the US on April 3, 2010. It sold more than 300,000 units on its first day, and 500,000 by the end of the first week. In May of the same year, Apple's market cap exceeded that of competitor Microsoft for the first time since 1989. In June 2010, Apple released the iPhone 4, which introduced video calling using FaceTime, multitasking, and a new uninsulated stainless steel design that acted as the phone's antenna. Later that year, Apple again refreshed its iPod line of MP3 players by introducing a multi-touch iPod Nano, an iPod Touch with FaceTime, and an iPod Shuffle that brought back the clickwheel buttons of earlier generations. It also introduced the smaller, cheaper second-generation Apple TV, which allowed renting of movies and shows.

On January 17, 2011, Jobs announced in an internal Apple memo that he would take another medical leave of absence for an indefinite period to allow him to focus on his health. Chief Operating Officer Tim Cook assumed Jobs's day-to-day operations at Apple, although Jobs would still remain "involved in major strategic decisions". Apple became the most valuable consumer-facing brand in the world. In June 2011, Jobs surprisingly took the stage and unveiled iCloud, an online storage and syncing service for music, photos, files, and software which replaced MobileMe, Apple's previous attempt at content syncing. This would be the last product launch Jobs would attend before his death. On August 24, 2011, Jobs resigned his position as CEO of Apple. He was replaced by Cook, and Jobs became Apple's chairman. Apple did not have a chairman at the time and instead had two co-lead directors, Andrea Jung and Arthur D. Levinson, who continued with those titles until Levinson replaced Jobs as chairman of the board in November, after Jobs's death.

2011–present: Post–Jobs era, Tim Cook's leadership

On October 5, 2011, Steve Jobs died, marking the end of an era for Apple. The first major product announcement by Apple following Jobs's passing occurred on January 19, 2012, when Apple's Phil Schiller introduced iBooks Textbooks for iOS and iBooks Author for Mac OS X in New York City. Jobs had stated in his biography that he wanted to reinvent the textbook industry and education.
From 2011 to 2012, Apple released the iPhone 4S and iPhone 5, which featured improved cameras, an intelligent software assistant named Siri, and cloud-synced data with iCloud; the third and fourth generation iPads, which featured Retina displays; and the iPad Mini, which featured a 7.9-inch screen in contrast to the iPad's 9.7-inch screen. These launches were successful, with the iPhone 5 (released September 21, 2012) becoming Apple's biggest iPhone launch, with over two million pre-orders; three million iPads were sold in the three days following the launch of the iPad Mini and fourth generation iPad (released November 3, 2012). Apple also released a third-generation 13-inch MacBook Pro with a Retina display and new iMac and Mac Mini computers.

On August 20, 2012, Apple's rising stock price increased the company's market capitalization to a then-record $624 billion. This beat the non-inflation-adjusted record for market capitalization previously set by Microsoft in 1999. On August 24, 2012, a US jury ruled that Samsung should pay Apple $1.05 billion (£665m) in damages in an intellectual property lawsuit. Samsung appealed the damages award, which the court reduced by $450 million; the court also granted Samsung's request for a new trial. On November 10, 2012, Apple confirmed a global settlement that dismissed all existing lawsuits between Apple and HTC up to that date, in favor of a ten-year license agreement for current and future patents between the two companies. It was predicted that Apple would make $280 million a year from this deal with HTC.

In May 2014, the company confirmed its intent to acquire Dr. Dre and Jimmy Iovine's audio company Beats Electronics (producer of the "Beats by Dr. Dre" line of headphones and speaker products, and operator of the music streaming service Beats Music) for $3 billion, and to sell Beats products through Apple's retail outlets and resellers. Iovine believed that Beats had always "belonged" with Apple, as the company modeled itself after Apple's "unmatched ability to marry culture and technology." The acquisition was the largest purchase in Apple's history.

During a press event on September 9, 2014, Apple introduced a smartwatch, the Apple Watch. Initially, Apple marketed the device as a fashion accessory and a complement to the iPhone that would allow people to look at their smartphones less. Over time, the company has focused on developing health and fitness-oriented features on the watch, in an effort to compete with dedicated activity trackers. In January 2016, it was announced that one billion Apple devices were in active use worldwide.

On June 6, 2016, Fortune released its Fortune 500 list of companies ranked by revenue. Based on the trailing fiscal year (2015), Apple was the top tech company on the list, ranking third overall with $233 billion in revenue, up two spots from the previous year.

In June 2017, Apple announced the HomePod, its smart speaker intended to compete with Sonos, Google Home, and Amazon Echo. Towards the end of the year, TechCrunch reported that Apple was acquiring Shazam, a company specializing in music, TV, film and advertising recognition that had introduced its products at WWDC. The acquisition was confirmed a few days later, reportedly costing Apple $400 million, with media reports noting that the purchase looked like a move to acquire data and tools bolstering the Apple Music streaming service. The purchase was approved by the European Union in September 2018.
Also in June 2017, Apple appointed Jamie Erlicht and Zack Van Amburg to head the newly formed worldwide video unit. In November 2017, Apple announced it was branching out into original scripted programming: a drama series starring Jennifer Aniston and Reese Witherspoon, and a reboot of the anthology series Amazing Stories with Steven Spielberg. In June 2018, Apple signed the Writers Guild of America's minimum basic agreement and signed Oprah Winfrey to a multi-year content partnership. Additional partnerships for original series included Sesame Workshop and DHX Media and its subsidiary Peanuts Worldwide, as well as a partnership with A24 to create original films.

On August 19, 2020, Apple's share price briefly topped $467.77, making Apple the first US company with a market capitalization of $2 trillion. During its annual WWDC keynote speech on June 22, 2020, Apple announced it would move away from Intel processors, and the Mac would transition to processors developed in-house. The announcement was expected by industry analysts, and it was noted that Macs featuring Apple's processors would allow for big increases in performance over the Intel-based models they replaced. On November 10, 2020, the MacBook Air, MacBook Pro, and the Mac Mini became the first Mac devices powered by an Apple-designed processor, the Apple M1.

Products

Macintosh

Macintosh, commonly known as Mac, is Apple's line of personal computers that use the company's proprietary macOS operating system. Personal computers were Apple's original business line, but they account for only about 10 percent of the company's revenue. The company is in the process of switching Mac computers from Intel processors to Apple silicon, a custom-designed system-on-a-chip platform. There are five Macintosh computer families in production:

iMac: Consumer all-in-one desktop computer, introduced in 1998.
Mac Mini: Consumer sub-desktop computer, introduced in 2005.
MacBook Pro: Professional notebook, introduced in 2006.
Mac Pro: Professional workstation, introduced in 2006.
MacBook Air: Consumer ultra-thin notebook, introduced in 2008.

Apple also sells a variety of accessories for Macs, including the Pro Display XDR, Magic Mouse, Magic Trackpad, and Magic Keyboard. The company also develops several pieces of software that are included in the purchase price of a Mac, including the Safari web browser, the iMovie video editor, the GarageBand audio editor and the iWork productivity suite. Additionally, the company sells several professional software applications, including the Final Cut Pro video editor, Motion for video animations, the Logic Pro audio editor, MainStage for live audio production, and Compressor for media compression and encoding.

iPhone

iPhone is Apple's line of smartphones that use the company's proprietary iOS operating system, derived from macOS. The first-generation iPhone was announced by then-Apple CEO Steve Jobs on January 9, 2007. Since then, Apple has annually released new iPhone models and iOS updates. The iPhone has a user interface built around a multi-touch screen, which at the time of its introduction was described as "revolutionary" and a "game-changer" for the mobile phone industry. The device has been credited with popularizing the smartphone and slate form factor, and with creating a large market for smartphone apps, or the "app economy". iOS is one of the two largest smartphone platforms in the world alongside Android.
The iPhone has generated large profits for the company, and is credited with helping to make Apple one of the world's most valuable publicly traded companies. The iPhone accounts for more than half of the company's revenue. In total, 33 iPhone models have been produced, with five smartphone families in production:

iPhone 13
iPhone 13 Pro
iPhone 12
iPhone SE (2nd generation)
iPhone 11

iPad

iPad is Apple's line of tablet computers that use the company's proprietary iPadOS operating system, derived from macOS and iOS. The first-generation iPad was announced on January 27, 2010. The iPad took the multi-touch user interface first introduced in the iPhone and adapted it to a larger screen, marketed for interaction with multimedia formats including newspapers, books, photos, videos, music, documents, video games, and most existing iPhone apps. Earlier generations of the iPad used the same iOS operating system as the company's smartphones before iPadOS was split off in 2019. Apple has sold more than 500 million iPads, though sales peaked in 2013. However, the iPad remains the most popular tablet computer by sales and accounts for about nine percent of the company's revenue. In recent years, Apple has started offering more powerful versions of the device, with the current iPad Pro sharing the same Apple silicon as Macintosh computers, along with a smaller version called the iPad mini and an upgraded version called the iPad Air. There are four iPad families in production:

iPad (9th generation)
iPad mini (6th generation)
iPad Pro (5th generation)
iPad Air (4th generation)

Wearables, Home and Accessories

Apple also makes several other products that it categorizes as "Wearables, Home and Accessories." These products include the AirPods line of wireless headphones, Apple TV digital media players, Apple Watch smartwatches, Beats headphones, HomePod Mini smart speakers, and the iPod touch, the last remaining device in Apple's successful line of iPod portable media players. This broad line of products comprises about 11% of the company's revenue.

Services

Apple also offers a broad line of services from which it earns revenue, including advertising in the App Store and Apple News app, the AppleCare+ extended warranty plan, the iCloud+ cloud-based data storage service, payment services through the Apple Card credit card and the Apple Pay processing platform, and digital content services including Apple Books, Apple Fitness+, Apple Music, Apple News+, Apple TV+, and the iTunes Store. Services comprise about 19% of the company's revenue. Many of the services have been launched since 2019, when Apple announced it would be making a concerted effort to expand its service revenues.

Corporate identity

Logo

According to Steve Jobs, the company's name was inspired by his visit to an apple farm while on a fruitarian diet. Jobs thought the name "Apple" was "fun, spirited and not intimidating". Apple's first logo, designed by Ron Wayne, depicts Sir Isaac Newton sitting under an apple tree. It was almost immediately replaced by Rob Janoff's "rainbow Apple", the now-familiar rainbow-colored silhouette of an apple with a bite taken out of it. Janoff presented Jobs with several different monochromatic themes for the "bitten" logo, and Jobs immediately took a liking to it. However, Jobs insisted that the logo be colorized to humanize the company. The logo was designed with a bite so that it would not be confused with a cherry.
The colored stripes were conceived to make the logo more accessible and to represent the fact that the Apple II could generate graphics in color. This logo is often erroneously referred to as a tribute to Alan Turing, with the bite mark a reference to his method of suicide. Both Janoff and Apple deny any homage to Turing in the design of the logo. On August 27, 1999 (the year following the introduction of the iMac G3), Apple officially dropped the rainbow scheme and began to use monochromatic logos nearly identical in shape to the previous rainbow incarnation. An Aqua-themed version of the monochrome logo was used from 1998 to 2003, and a glass-themed version was used from 2007 to 2013.

Steve Jobs and Steve Wozniak were fans of the Beatles, but Apple Inc. had name and logo trademark issues with Apple Corps Ltd., a multimedia company started by the Beatles in 1968. This resulted in a series of lawsuits and tension between the two companies. These issues ended with the settling of their lawsuit in 2007.

Advertising

Apple's first slogan, "Byte into an Apple", was coined in the late 1970s. From 1997 to 2002, the slogan "Think Different" was used in advertising campaigns, and is still closely associated with Apple. Apple also has slogans for specific product lines; for example, "iThink, therefore iMac" was used in 1998 to promote the iMac, and "Say hello to iPhone" has been used in iPhone advertisements. "Hello" was also used to introduce the original Macintosh, Newton, iMac ("hello (again)"), and iPod. From the introduction of the Macintosh in 1984 with the "1984" Super Bowl advertisement to the more modern "Get a Mac" adverts, Apple has been recognized for its efforts towards effective advertising and marketing for its products. However, claims made by later campaigns were criticized, particularly the 2005 Power Mac ads. Apple's product advertisements gained a lot of attention as a result of their eye-popping graphics and catchy tunes. Musicians who benefited from an improved profile as a result of their songs being included in Apple advertisements include Canadian singer Feist with the song "1234" and Yael Naïm with the song "New Soul".

Brand loyalty

Apple customers gained a reputation for devotion and loyalty early in the company's history; in 1984, BYTE remarked on this devotion. Apple evangelists were actively engaged by the company at one time, but this was after the phenomenon had already been firmly established. Apple evangelist Guy Kawasaki has called the brand fanaticism "something that was stumbled upon," while Ive explained in 2014 that "People have an incredibly personal relationship" with Apple's products. Apple Store openings and new product releases can draw crowds of hundreds, with some waiting in line as much as a day before the opening. The opening of New York City's Apple Fifth Avenue store in 2006 was highly attended, and had visitors from Europe who flew in for the event. In June 2017, a newlywed couple took their wedding photos inside the then-recently opened Orchard Road Apple Store in Singapore. The high level of brand loyalty has also been criticized and ridiculed, with detractors applying the epithet "Apple fanboy" and mocking the lengthy lines before a product launch. An internal memo leaked in 2015 suggested the company planned to discourage long lines and direct customers to purchase its products on its website. Fortune magazine named Apple the most admired company in the United States in 2008, and in the world from 2008 to 2012.
On September 30, 2013, Apple surpassed Coca-Cola to become the world's most valuable brand in the Omnicom Group's "Best Global Brands" report. Boston Consulting Group has ranked Apple as the world's most innovative brand every year since 2005. The New York Times in 1985 stated that "Apple above all else is a marketing company". John Sculley agreed, telling The Guardian newspaper in 1997 that "People talk about technology, but Apple was a marketing company. It was the marketing company of the decade." Research in 2002 by NetRatings indicated that the average Apple consumer was usually more affluent and better educated than consumers of other PC companies. The research indicated that this correlation could stem from the fact that, on average, Apple Inc. products were more expensive than other PC products. In response to a query about the devotion of loyal Apple consumers, Jonathan Ive pointed to the personal relationship people have with Apple's products. More than 1.65 billion Apple products are in active use.

Headquarters and major facilities

Apple Inc.'s world corporate headquarters are located in Cupertino, in the middle of California's Silicon Valley, at Apple Park, a massive circular "groundscraper" building. The building opened in April 2017 and houses more than 12,000 employees. Apple co-founder Steve Jobs wanted Apple Park to look less like a business park and more like a nature refuge, and personally appeared before the Cupertino City Council in June 2011 to make the proposal, in his final public appearance before his death.

Apple also operates from the Apple Campus (also known by its address, 1 Infinite Loop), a grouping of six buildings in Cupertino located to the west of Apple Park. The Apple Campus was the company's headquarters from its opening in 1993 until the opening of Apple Park in 2017. The buildings, located at 1–6 Infinite Loop, are arranged in a circular pattern around a central green space, in a design that has been compared to that of a university. In addition to Apple Park and the Apple Campus, Apple occupies an additional thirty office buildings scattered throughout the city of Cupertino, including three buildings that also served as prior headquarters: "Stephens Creek Three" (1977–1978), "Bandley One" (1978–1982), and "Mariani One" (1982–1993). In total, Apple occupies almost 40% of the available office space in the city.

Apple's headquarters for Europe, the Middle East and Africa (EMEA), known as the Hollyhill campus, are located in Cork in the south of Ireland. The facility, which opened in 1980, houses 5,500 people and was Apple's first location outside of the United States. Apple's international sales and distribution arms operate out of the campus in Cork. Apple has two campuses near Austin, Texas: a campus opened in 2014 that houses 500 engineers working on Apple silicon, and a campus opened in 2021 where 6,000 people work in technical support, supply chain management, online store curation, and Apple Maps data management. The company also has several other locations, in Boulder, Colorado; Culver City, California; Herzliya, Israel; London; New York; Pittsburgh; San Diego; and Seattle, that each employ hundreds of people.

Stores

The first Apple Stores were originally opened as two locations in May 2001 by then-CEO Steve Jobs, after years of attempting but failing store-within-a-store concepts. Seeing a need for improved retail presentation of the company's products, he began an effort in 1997 to revamp the retail program and build a better relationship with consumers, and hired Ron Johnson in 2000.
Jobs relaunched Apple's online store in 1997, and opened the first two physical stores in 2001. The media initially speculated that Apple would fail, but its stores were highly successful, bypassing the sales numbers of competing nearby stores, and within three years they reached US$1 billion in annual sales, becoming the fastest retailer in history to do so. Over the years, Apple has expanded the number of retail locations and its geographical coverage, with 499 stores across 22 countries worldwide. Strong product sales have placed Apple among the top-tier retail stores, with sales over $16 billion globally in 2011.

In May 2016, Angela Ahrendts, Apple's then-Senior Vice President of Retail, unveiled a significantly redesigned Apple Store in Union Square, San Francisco, featuring large glass doors for the entry, open spaces, and re-branded rooms. In addition to purchasing products, consumers can get advice and help from "Creative Pros", individuals with specialized knowledge of creative arts; get product support in a tree-lined Genius Grove; and attend sessions, conferences and community events, with Ahrendts commenting that the goal is to make Apple Stores into "town squares", a place where people naturally meet up and spend time. The new design is being applied to all Apple Stores worldwide, a process that has seen stores temporarily relocate or close.

Many Apple Stores are located inside shopping malls, but Apple has built several stand-alone "flagship" stores in high-profile locations. It has been granted design patents and received architectural awards for its stores' designs and construction, specifically for its use of glass staircases and cubes. The success of Apple Stores has had significant influence over other consumer electronics retailers, who have lost traffic, control and profits due to a perceived higher quality of service and products at Apple Stores. Apple's notable brand loyalty among consumers causes long lines of hundreds of people at new Apple Store openings or product releases. Due to the popularity of the brand, Apple receives a large number of job applications, many of which come from young workers. Although Apple Store employees receive above-average pay, are offered money toward education and health care, and receive product discounts, there are limited or no paths of career advancement. A May 2016 report with an anonymous retail employee highlighted a hostile work environment with harassment from customers, intense internal criticism, and a lack of significant bonuses for securing major business contracts.

Due to the COVID-19 pandemic, Apple closed its stores outside China until March 27, 2020. Despite the stores being closed, hourly workers continued to be paid. Workers across the company were allowed to work remotely if their jobs permitted it. On March 24, 2020, in a memo, Senior Vice President of People and Retail Deirdre O'Brien announced that some retail stores were expected to reopen at the beginning of April.

Corporate affairs

Corporate culture

Apple is one of several highly successful companies founded in the 1970s that bucked the traditional notions of corporate culture. Jobs often walked around the office barefoot even after Apple became a Fortune 500 company. By the time of the "1984" television advertisement, Apple's informal culture had become a key trait that differentiated it from its competitors. According to a 2011 report in Fortune, this has resulted in a corporate culture more akin to a startup than a multinational corporation.
In a 2017 interview, Wozniak credited watching Star Trek and attending Star Trek conventions in his youth as a source of inspiration for co-founding Apple. As the company has grown and been led by a series of differently opinionated chief executives, it has arguably lost some of its original character. Nonetheless, it has maintained a reputation for fostering individuality and excellence that reliably attracts talented workers, particularly after Jobs returned to the company. Numerous Apple employees have stated that projects without Jobs's involvement often took longer than projects with it. To recognize the best of its employees, Apple created the Apple Fellows program, which awards individuals who make extraordinary technical or leadership contributions to personal computing while at the company. The Apple Fellowship has so far been awarded to individuals including Bill Atkinson, Steve Capps, Rod Holt, Alan Kay, Guy Kawasaki, Al Alcorn, Don Norman, Rich Page, Steve Wozniak, and Phil Schiller.

At Apple, employees are intended to be specialists who are not exposed to functions outside their area of expertise. Jobs saw this as a means of having "best-in-class" employees in every role. For instance, Ron Johnson, Senior Vice President of Retail Operations until November 1, 2011, was responsible for site selection, in-store service, and store layout, yet had no control over the inventory in his stores; that was managed by Tim Cook, who had a background in supply-chain management. Apple is known for strictly enforcing accountability. Each project has a "directly responsible individual", or "DRI" in Apple jargon. As an example, when iOS senior vice president Scott Forstall refused to sign Apple's official apology for numerous errors in the redesigned Maps app, he was forced to resign. Unlike other major U.S. companies, Apple provides a relatively simple compensation policy for executives that does not include perks enjoyed by other CEOs like country club fees or private use of company aircraft. The company typically grants stock options to executives every other year.

In 2015, Apple had 110,000 full-time employees. This increased to 116,000 full-time employees the next year, a notable slowdown in hiring, largely due to its first revenue decline. Apple does not specify how many of its employees work in retail, though its 2014 SEC filing put the number at approximately half of its employee base. In September 2017, Apple announced that it had over 123,000 full-time employees. Apple has a strong culture of corporate secrecy, and has an anti-leak Global Security team that recruits from the National Security Agency, the Federal Bureau of Investigation, and the United States Secret Service. In December 2017, Glassdoor said Apple was the 48th best place to work, having originally entered at rank 19 in 2009, peaking at rank 10 in 2012, and falling down the ranks in subsequent years.

Lack of innovation

An editorial article in The Verge in September 2016 by technology journalist Thomas Ricker explored the public perception of a lack of innovation at Apple in recent years, specifically stating that Samsung has "matched and even surpassed Apple in terms of smartphone industrial design" and citing the belief that Apple is incapable of producing another breakthrough moment in technology with its products. He goes on to write that the criticism focuses on individual pieces of hardware rather than the ecosystem as a whole, stating "Yes, iteration is boring. But it's also how Apple does business. [...]
It enters a new market and then refines and refines and continues refining until it yields a success". He acknowledges that people are wishing for the "excitement of revolution", but argues that people want "the comfort that comes with harmony". Furthermore, he writes that "a device is only the starting point of an experience that will ultimately be ruled by the ecosystem in which it was spawned", referring to how decent hardware products can still fail without a proper ecosystem (specifically mentioning that the Walkman did not have an ecosystem to keep users from leaving once something better came along), and to how Apple devices in different hardware segments are able to communicate and cooperate through the iCloud service, with features including Universal Clipboard (in which text copied on one device can be pasted on a different device) as well as inter-connected device functionality such as Auto Unlock (in which an Apple Watch can unlock a Mac in close proximity). He argues that Apple's ecosystem is its greatest innovation.

The Wall Street Journal reported in June 2017 that Apple's increased reliance on Siri, its virtual personal assistant, had raised questions about how much Apple can actually accomplish in terms of functionality. Whereas Google and Amazon make use of big data and analyze customer information to personalize results, Apple has a strong pro-privacy stance, intentionally not retaining user data. "Siri is a textbook of leading on something in tech and then losing an edge despite having all the money and the talent and sitting in Silicon Valley", Holger Mueller, a technology analyst, told the Journal. The report further claims that development on Siri has suffered due to team members and executives leaving the company for competitors, a lack of ambitious goals, and shifting strategies. Though Apple switched Siri's functions to machine learning and algorithms, which dramatically cut its error rate, the company reportedly still failed to anticipate the popularity of Amazon's Echo, which features the Alexa personal assistant. Improvements to Siri stalled, executives clashed, and there were disagreements over the restrictions imposed on third-party app interactions. While Apple acquired an England-based startup specializing in conversational assistants, Google's Assistant had already become capable of helping users select Wi-Fi networks by voice, and Siri was lagging in functionality.

In December 2017, two articles from The Verge and ZDNet debated what had been a particularly devastating week for Apple's macOS and iOS software platforms. The former had experienced a severe security vulnerability: Macs running the then-latest macOS High Sierra software were vulnerable to a bug that let anyone gain full administrator access by entering "root" as the username in system prompts, leaving the password field empty, and clicking "unlock" twice. The bug was publicly disclosed on Twitter, rather than through proper bug bounty programs. Apple released a security fix within a day and issued an apology, stating that "regrettably we stumbled" with regard to the security of the latest updates. The security patch, however, broke file sharing for users, and Apple released a support document with instructions to fix that issue separately.
Though Apple publicly promised that it was "auditing our development processes to help prevent this from happening again", users who installed the security update while running the older 10.13.0 version of the High Sierra operating system, rather than the then-newest 10.13.1 release, found that the "root" security vulnerability was re-introduced and persisted even after they fully updated their systems. On iOS, a date bug caused iOS devices that received local app notifications at 12:15am on December 2, 2017, to repeatedly restart. Users were advised to turn off notifications for their apps. Apple quickly released an update during the night in Cupertino, California, outside its usual software release window; one of the headlining features of the update had to be delayed for a few days. The combined problems of the week on both macOS and iOS caused The Verge's Tom Warren to call it a "nightmare" for Apple's software engineers and to describe it as a significant lapse in Apple's ability to protect its more than 1 billion devices. ZDNet's Adrian Kingsley-Hughes wrote that "it's hard to not come away from the last week with the feeling that Apple is slipping". Kingsley-Hughes also concluded his piece by referencing an earlier article, in which he wrote that "As much as I don't want to bring up the tired old 'Apple wouldn't have done this under Steve Jobs's watch' trope, a lot of what's happening at Apple lately is different from what they came to expect under Jobs. Not to say that things didn't go wrong under his watch, but product announcements and launches felt a lot tighter for sure, as did the overall quality of what Apple was releasing." He did, however, also acknowledge that such failures "may indeed have happened" with Jobs in charge, though he returned to his previous praise of Jobs's demands for quality, stating "it's almost guaranteed that given his personality that heads would have rolled, which limits future failures".

Manufacturing and assembling

The company's manufacturing, procurement, and logistics enable it to execute massive product launches without having to maintain large, profit-sapping inventories. In 2011, Apple's profit margins were 40 percent, compared with between 10 and 20 percent for most other hardware companies. Cook's catchphrase to describe his focus on the company's operational arm is: "Nobody wants to buy sour milk". In May 2017, the company announced a $1 billion funding project for "advanced manufacturing" in the United States, and subsequently invested $200 million in Corning Inc., a manufacturer of toughened Gorilla Glass technology used in its iPhone devices. The following December, Apple's chief operating officer, Jeff Williams, told CNBC that the "$1 billion" amount was "absolutely not" the final limit on its spending, elaborating that "We're not thinking in terms of a fund limit. ... We're thinking about, where are the opportunities across the U.S. to help nurture companies that are making the advanced technology — and the advanced manufacturing that goes with that — that quite frankly is essential to our innovation". As of 2021, Apple uses components from 43 different countries. The majority of assembly is done by Taiwanese original design manufacturers Foxconn, Pegatron, Wistron and Compal Electronics, mostly in factories located in China, but also in Brazil and India.

During the Mac's early history, Apple generally refused to adopt prevailing industry standards for hardware, instead creating its own.
This trend was largely reversed in the late 1990s, beginning with Apple's adoption of the PCI bus in the 7500/8500/9500 Power Macs. Apple has since joined industry standards groups to influence the future direction of technology standards such as USB, AGP, HyperTransport, Wi-Fi, NVMe, PCIe and others in its products. FireWire is an Apple-originated standard that was widely adopted across the industry after it was standardized as IEEE 1394, and it is a legally mandated port in all cable TV boxes in the United States.

Apple has gradually expanded its efforts in getting its products into the Indian market. In July 2012, during a conference call with investors, CEO Tim Cook said that he "[loves] India", but that Apple saw larger opportunities outside the region. India's requirement that 30% of products sold be manufactured in the country was described as something that "really adds cost to getting product to market". In May 2016, Apple opened an iOS app development center in Bangalore and a maps development office for 4,000 staff in Hyderabad. In March 2017, The Wall Street Journal reported that Apple would begin manufacturing iPhone models in India "over the next two months", and in May, the Journal wrote that an Apple manufacturer had begun production of the iPhone SE in the country, while Apple told CNBC that the manufacturing was for a "small number" of units. In April 2019, Apple initiated manufacturing of the iPhone 7 at its Bengaluru facility, keeping in mind demand from local customers even as it sought more incentives from the government of India. At the beginning of 2020, Tim Cook announced that Apple planned to open its first physical outlet in India in 2021, with an online store to be launched by the end of the year.

Labor practices

The company advertised its products as being made in America until the late 1990s; however, as a result of outsourcing initiatives in the 2000s, almost all of its manufacturing is now handled abroad. According to a report by The New York Times, Apple insiders "believe the vast scale of overseas factories, as well as the flexibility, diligence and industrial skills of foreign workers, have so outpaced their American counterparts that "Made in the USA" is no longer a viable option for most Apple products".

In 2006, one complex of factories in Shenzhen, China that assembled the iPod and other items had over 200,000 workers living and working within it. Employees regularly worked more than 60 hours per week and made around $100 per month. A little over half of the workers' earnings was required to pay for rent and food from the company. Apple immediately launched an investigation after the 2006 media report, and worked with its manufacturers to ensure acceptable working conditions. In 2007, Apple started yearly audits of all its suppliers regarding workers' rights, slowly raising standards and pruning suppliers that did not comply. Yearly progress reports have been published since 2008. In 2011, Apple admitted that its suppliers' child labor practices in China had worsened.

The Foxconn suicides occurred between January and November 2010, when 18 Foxconn (Chinese: 富士康) employees attempted suicide, resulting in 14 deaths; at the time, the company was the world's largest contract electronics manufacturer, with clients including Apple. The suicides drew media attention, and employment practices at Foxconn were investigated by Apple.
Apple issued a public statement about the suicides through company spokesperson Steven Dowling. The statement was released after the results of the company's probe into its suppliers' labor practices were published in early 2010. Foxconn was not specifically named in the report, but Apple identified a series of serious violations of labor laws, including of Apple's own rules, and found that child labor existed in a number of factories. Apple committed to the implementation of changes following the suicides.

Also in 2010, workers in China planned to sue iPhone contractors over poisoning by a cleaner used to clean LCD screens. One worker claimed that he and his coworkers had not been informed of possible occupational illnesses. After a high suicide rate in a Foxconn facility in China making iPads and iPhones, albeit a lower rate than that of China as a whole, workers were forced to sign a legally binding document guaranteeing that they would not kill themselves. Workers in factories producing Apple products have also been exposed to hexane, a neurotoxin that is a cheaper alternative to alcohol for cleaning the products.

A 2014 BBC investigation found that excessive hours and other problems persisted, despite Apple's promise to reform factory practice after the 2010 Foxconn suicides. The Pegatron factory was once again the subject of review, as reporters gained access to the working conditions inside through recruitment as employees. While the BBC maintained that the experiences of its reporters showed that labor violations had continued since 2010, Apple publicly disagreed with the BBC and stated: "We are aware of no other company doing as much as Apple to ensure fair and safe working conditions".

In December 2014, the Institute for Global Labour and Human Rights published a report which documented inhumane conditions for the 15,000 workers at a Zhen Ding Technology factory in Shenzhen, China, which serves as a major supplier of circuit boards for Apple's iPhone and iPad. According to the report, workers are pressured into 65-hour work weeks, which leave them so exhausted that they often sleep during lunch breaks. They are also made to reside in "primitive, dark and filthy dorms" where they sleep "on plywood, with six to ten workers in each crowded room." Omnipresent security personnel also routinely harass and beat the workers. In 2019, there were reports stating that some of Foxconn's managers had used rejected parts to build iPhones and that Apple was investigating the issue.

Environmental practices and initiatives

Apple Energy

Apple Energy, LLC is a wholly owned subsidiary of Apple Inc. that sells solar energy. Apple's solar farms in California and Nevada have been declared to provide 217.9 megawatts of solar generation capacity. In addition to the company's solar energy production, Apple has received regulatory approval to construct a landfill gas energy plant in North Carolina; Apple will use the methane emissions to generate electricity. Apple's North Carolina data center is already powered entirely with energy from renewable sources.

Energy and resources

Following a Greenpeace protest, Apple released a statement on April 17, 2012, committing to ending its use of coal and shifting to 100% renewable clean energy. By 2013, Apple was using 100% renewable energy to power its data centers. Overall, 75% of the company's power came from clean renewable sources.
In 2010, Climate Counts, a nonprofit organization dedicated to directing consumers toward the greenest companies, gave Apple a score of 52 points out of a possible 100, which put Apple in its top category, "Striding". This was an increase from May 2008, when Climate Counts gave Apple only 11 points out of 100, placing the company last among electronics companies; at that time, Climate Counts also labeled Apple with a "stuck icon", adding that Apple was "a choice to avoid for the climate-conscious consumer". In May 2015, Greenpeace evaluated the state of the Green Internet and commended Apple on its environmental practices, saying, "Apple's commitment to renewable energy has helped set a new bar for the industry, illustrating in very concrete terms that a 100% renewable Internet is within its reach, and providing several models of intervention for other companies that want to build a sustainable Internet."

Apple states that 100% of its U.S. operations, 100% of its data centers, and 93% of its global operations run on renewable energy. However, the facilities are connected to the local grid, which usually contains a mix of fossil and renewable sources, so Apple carbon-offsets its electricity use. The Electronic Product Environmental Assessment Tool (EPEAT) allows consumers to see the effect a product has on the environment. Each product receives a Gold, Silver, or Bronze rank depending on its efficiency and sustainability. Every Apple tablet, notebook, desktop computer, and display that EPEAT ranks achieves a Gold rating, the highest possible. Although Apple's data centers recycle water 35 times, the increased activity in retail, corporate and data centers also increased the company's water use in 2015.

During an event on March 21, 2016, Apple provided a status update on its environmental initiative to be 100% renewable in all of its worldwide operations. Lisa P. Jackson, Apple's vice president of Environment, Policy and Social Initiatives, who reports directly to CEO Tim Cook, announced that 93% of Apple's worldwide operations were powered with renewable energy. Also featured were the company's efforts to use sustainable paper in its product packaging; 99% of all paper used by Apple in product packaging comes from post-consumer recycled paper or sustainably managed forests, as the company continues its move to all-paper packaging for all of its products. Apple, working in partnership with the Conservation Fund, has preserved 36,000 acres of working forests in Maine and North Carolina. Another announced partnership, with the World Wildlife Fund, aims to preserve forests in China. Also featured was the company's installation of a 40 MW solar power plant in the Sichuan province of China that was tailor-made to coexist with the indigenous yaks that eat hay produced on the land: the panels are raised several feet off the ground so the yaks and their feed are unharmed as they graze beneath the array. This installation alone compensates for more than all of the energy used in Apple's stores and offices in the whole of China, negating the company's energy carbon footprint in the country. In Singapore, Apple has worked with the Singaporean government to cover the rooftops of 800 buildings in the city-state with solar panels, allowing Apple's Singapore operations to be run on 100% renewable energy.
Apple also introduced Liam, an advanced robotic disassembler and sorter designed by Apple engineers in California specifically for recycling outdated or broken iPhones, so that parts from traded-in products can be reused and recycled. Apple announced on August 16, 2016, that Lens Technology, one of its major suppliers in China, had committed to powering all of its glass production for Apple with 100 percent renewable energy by 2018. The commitment is a large step in Apple's efforts to help manufacturers lower their carbon footprint in China. Apple also announced that all 14 of its final assembly sites in China are now compliant with UL's Zero Waste to Landfill validation. The standard, which started in January 2015, certifies that all manufacturing waste is reused, recycled, composted, or converted into energy (when necessary). Since the program began, nearly 140,000 metric tons of waste have been diverted from landfills.

On July 21, 2020, Apple announced its plan to become carbon neutral across its entire business, manufacturing supply chain, and product life cycle by 2030. Over the following 10 years, Apple will try to lower emissions with a series of innovative actions, including low-carbon product design, expanding energy efficiency, renewable energy, process and material innovations, and carbon removal. In April 2021, Apple said that it had started a $200 million fund in order to combat climate change by removing 1 million metric tons of carbon dioxide from the atmosphere each year.

Toxins

Following further campaigns by Greenpeace, in 2008, Apple became the first electronics manufacturer to fully eliminate all polyvinyl chloride (PVC) and brominated flame retardants (BFRs) in its complete product line. In June 2007, Apple began replacing the cold cathode fluorescent lamp (CCFL) backlit LCD displays in its computers with mercury-free LED-backlit LCD displays and arsenic-free glass, starting with the upgraded MacBook Pro. Apple offers comprehensive and transparent information about the CO2e emissions, materials, and electricity usage of every product it currently produces or has sold in the past (for which it has enough data to produce a report) in a portfolio on its homepage, allowing consumers to make informed purchasing decisions about the products it offers for sale. In June 2009, Apple's iPhone 3GS was free of PVC, arsenic, and BFRs. All Apple products now have mercury-free LED-backlit LCD displays, arsenic-free glass, and non-PVC cables. All Apple products have EPEAT Gold status and beat the latest Energy Star guidelines in each product's respective regulatory category.

In November 2011, Apple was featured in Greenpeace's Guide to Greener Electronics, which ranks electronics manufacturers on sustainability, climate and energy policy, and how "green" their products are. The company ranked fourth of fifteen electronics companies (moving up five places from the previous year) with a score of 4.6/10. Greenpeace praised Apple's sustainability, noting that the company exceeded its 70% global recycling goal in 2010, and that it continued to score well on the products rating, with all Apple products now being free of PVC plastic and BFRs. However, the guide criticized Apple on the energy criteria for not seeking external verification of its greenhouse gas emissions data and for not setting out any targets to reduce emissions. In January 2012, Apple requested that its cable maker, Volex, begin producing halogen-free USB and power cables.
Green bonds In February 2016, Apple issued a US$1.5 billion green bond (climate bond), the first ever of its kind by a U.S. tech company. The green bond proceeds are dedicated to the financing of environmental projects. Racial Justice and Equality Initiatives In June 2020, Apple committed $100 million for its Racial Equity and Justice initiative (REJI) and in Jan 2021 announced various projects as part of the initiative. Finance Apple is the world's largest information technology company by revenue, the world's largest technology company by total assets, and the world's second-largest mobile phone manufacturer after Samsung. In its fiscal year ending in September 2011, Apple Inc. reported a total of $108 billion in annual revenues—a significant increase from its 2010 revenues of $65 billion—and nearly $82 billion in cash reserves. On March 19, 2012, Apple announced plans for a $2.65-per-share dividend beginning in fourth quarter of 2012, per approval by their board of directors. The company's worldwide annual revenue in 2013 totaled $170 billion. In May 2013, Apple entered the top ten of the Fortune 500 list of companies for the first time, rising 11 places above its 2012 ranking to take the sixth position. , Apple has around US$234 billion of cash and marketable securities, of which 90% is located outside the United States for tax purposes. Apple amassed 65% of all profits made by the eight largest worldwide smartphone manufacturers in quarter one of 2014, according to a report by Canaccord Genuity. In the first quarter of 2015, the company garnered 92% of all earnings. On April 30, 2017, The Wall Street Journal reported that Apple had cash reserves of $250 billion, officially confirmed by Apple as specifically $256.8 billion a few days later. , Apple was the largest publicly traded corporation in the world by market capitalization. On August 2, 2018, Apple became the first publicly traded U.S. company to reach a $1 trillion market value. Apple was ranked No. 4 on the 2018 Fortune 500 rankings of the largest United States corporations by total revenue. Tax practices Apple has created subsidiaries in low-tax places such as Ireland, the Netherlands, Luxembourg, and the British Virgin Islands to cut the taxes it pays around the world. According to The New York Times, in the 1980s Apple was among the first tech companies to designate overseas salespeople in high-tax countries in a manner that allowed the company to sell on behalf of low-tax subsidiaries on other continents, sidestepping income taxes. In the late 1980s, Apple was a pioneer of an accounting technique known as the "Double Irish with a Dutch sandwich," which reduces taxes by routing profits through Irish subsidiaries and the Netherlands and then to the Caribbean. British Conservative Party Member of Parliament Charlie Elphicke published research on October 30, 2012, which showed that some multinational companies, including Apple Inc., were making billions of pounds of profit in the UK, but were paying an effective tax rate to the UK Treasury of only 3 percent, well below standard corporation tax. He followed this research by calling on the Chancellor of the Exchequer George Osborne to force these multinationals, which also included Google and The Coca-Cola Company, to state the effective rate of tax they pay on their UK revenues. Elphicke also said that government contracts should be withheld from multinationals who do not pay their fair share of UK tax. Apple Inc. 
claims to be the single largest taxpayer to the Department of the Treasury of the United States of America, with an effective tax rate of approximately 26% as of the second quarter of the Apple fiscal year 2016. In an interview with the German newspaper FAZ in October 2017, Tim Cook stated that Apple is the biggest taxpayer worldwide. In 2015, Reuters reported that Apple had earnings abroad of $54.4 billion which were untaxed by the IRS of the United States. Under U.S. tax law governed by the IRC, corporations do not pay income tax on overseas profits unless the profits are repatriated into the United States; as such, Apple argues that, to benefit its shareholders, it will leave the money overseas until a repatriation holiday or comprehensive tax reform takes place in the United States. On July 12, 2016, the Central Statistics Office of Ireland announced that 2015 Irish GDP had grown by 26.3%, and 2015 Irish GNP had grown by 18.7%. The figures attracted international scorn and were labelled by the Nobel-prize-winning economist Paul Krugman as "leprechaun economics". It was not until 2018 that Irish economists could definitively prove that the 2015 growth was due to Apple restructuring its controversial double Irish subsidiaries (Apple Sales International), which Apple converted into a new Irish capital-allowances-for-intangible-assets tax scheme (expiring in January 2020). The affair required the Central Bank of Ireland to create a new measure of Irish economic growth, Modified GNI*, to replace Irish GDP, given the distortion caused by Apple's tax schemes; Irish GDP is 143% of Irish Modified GNI*. On August 30, 2016, after a two-year investigation, the EU Competition Commissioner concluded that Apple had received "illegal state aid" from Ireland. The EU ordered Apple to pay 13 billion euros ($14.5 billion), plus interest, in unpaid Irish taxes for 2004–2014, the largest tax fine in history. The Commission found that Apple had benefited from a private Irish Revenue Commissioners tax ruling regarding its double Irish tax structure, Apple Sales International (ASI). Instead of using two companies for its double Irish structure, Apple was given a ruling to split ASI into two internal "branches". The Chancellor of Austria, Christian Kern, put this decision into perspective by stating that "every Viennese cafe, every sausage stand pays more tax in Austria than a multinational corporation". Apple agreed to start paying €13 billion in back taxes to the Irish government; the repayments will be held in an escrow account while Apple and the Irish government continue their appeals in EU courts. On July 15, 2020, the EU General Court annulled the European Commission's decision in the Apple state aid case, meaning that Apple would not have to repay €13 billion to Ireland. Board of directors The following individuals sit on the board of Apple Inc.: Arthur D. Levinson (chairman) Tim Cook (executive director and CEO) James A. Bell (non-executive director) Al Gore (non-executive director) Andrea Jung (non-executive director) Ronald Sugar (non-executive director) Susan Wagner (non-executive director) Executive management The management of Apple Inc. includes: Tim Cook (chief executive officer) Jeff Williams (chief operating officer) Luca Maestri (senior vice president and chief financial officer) Katherine L. 
Adams (senior vice president and general counsel) Eddy Cue (senior vice president – Internet Software and Services) Craig Federighi (senior vice president – Software Engineering) John Giannandrea (senior vice president – Machine Learning and AI Strategy) Deirdre O'Brien (senior vice president – Retail + People) John Ternus (senior vice president – Hardware Engineering) Greg Joswiak (senior vice president – Worldwide Marketing) Johny Srouji (senior vice president – Hardware Technologies) Sabih Khan (senior vice president – Operations) Lisa P. Jackson (vice president – Environment, Policy, and Social Initiatives) Isabel Ge Mahe (vice president and managing director – Greater China) Tor Myhren (vice president – Marketing Communications) Adrian Perica (vice president – Corporate Development) List of chief executives Michael Scott (1977–1981) Mike Markkula (1981–1983) John Sculley (1983–1993) Michael Spindler (1993–1996) Gil Amelio (1996–1997) Steve Jobs (1997–2011) Tim Cook (2011–present) List of chairmen The role of chairman of the board has not always been in use, notably between 1981 and 1985 and between 1997 and 2011. Mike Markkula (1977–1981) Steve Jobs (1985) Mike Markkula (1985–1993); second term John Sculley (1993) Mike Markkula (1993–1997); third term Steve Jobs (2011); second term Arthur D. Levinson (2011–present) Litigation Apple has been a participant in various legal proceedings and claims since it began operation. In particular, Apple is known for and promotes itself as actively and aggressively enforcing its intellectual property interests. Some litigation examples include Apple v. Samsung, Apple v. Microsoft, Motorola Mobility v. Apple Inc., and Apple Corps v. Apple Computer. Apple has also had to defend itself on numerous occasions against charges of violating intellectual property rights. Most of these claims have been brought by shell companies known as patent trolls and have been dismissed in the courts, with no evidence of actual use of the patents in question. On December 21, 2016, Nokia announced that it had filed suit against Apple in the U.S. and Germany, claiming that the latter's products infringe on Nokia's patents. Most recently, in November 2017, the United States International Trade Commission announced an investigation into allegations of patent infringement with regard to Apple's remote desktop technology; Aqua Connect, a company that builds remote desktop software, has claimed that Apple infringed on two of its patents. Privacy stance Apple has a notable pro-privacy stance, actively making privacy-conscious features and settings part of its conferences, promotional campaigns, and public image. With its iOS 8 mobile operating system in 2014, the company started encrypting all contents of iOS devices through users' passcodes, making it impossible at the time for the company to provide customer data to law enforcement requests seeking such information. With the rise in popularity of cloud storage solutions, in 2016 Apple began using a technique that performs deep-learning scans for facial data in photos on the user's local device and encrypts the content before uploading it to Apple's iCloud storage system. It also introduced "differential privacy", a way to collect crowdsourced data from many users, while keeping individual users anonymous, in a system that Wired described as "trying to learn as much as possible about a group while learning as little as possible about any individual in it". Users are explicitly asked if they want to participate, and can actively opt in or opt out. 
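A classic way to illustrate this kind of local differential privacy is the randomized-response mechanism, in which each device randomly perturbs its own answer before reporting it, so that aggregate statistics can still be recovered without trusting any single report. The following Python sketch is a generic, simplified illustration of that idea only; it is not Apple's actual implementation (which relies on more elaborate mechanisms such as hashing and sketching), and the epsilon value and simulated usage rate are invented for the example.

```python
import math
import random

def randomized_response(true_value: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1); otherwise flip it."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_value if random.random() < p_truth else not true_value

def estimate_rate(reports, epsilon: float) -> float:
    """Unbiased estimate of the true fraction of 'yes' answers from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    # observed = p * f + (1 - p) * (1 - f), solved here for the true fraction f
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# Simulate 100,000 opted-in users, 30% of whom actually use some feature.
random.seed(0)
epsilon = 2.0
truth = [random.random() < 0.30 for _ in range(100_000)]
reports = [randomized_response(t, epsilon) for t in truth]
print(round(estimate_rate(reports, epsilon), 3))  # ~0.30, recovered from deniable individual reports
```

The design idea is that each individual report is plausibly deniable, while the aggregator's estimate converges on the population rate as the number of participants grows.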
With Apple's release of an update to iOS 14, Apple required all developers of iPhone, iPad, and iPod touch applications to directly ask iPhone users permission to track them. The feature, titled "App Tracking Transparency", received heavy criticism from Facebook, whose primary business model revolves around the tracking of users' data and sharing such data with advertisers so users can see more relevant ads, a technique commonly known as targeted advertising. Despite Facebook's measures, including purchasing full-page newspaper advertisements protesting App Tracking Transparency, Apple released the update in mid-spring 2021. A study by Verizon subsidiary Flurry Analytics reported only 4% of iOS users in the United States and 12% worldwide have opted into tracking. However, Apple aids law enforcement in criminal investigations by providing iCloud backups of users' devices, and the company's commitment to privacy has been questioned by its efforts to promote biometric authentication technology in its newer iPhone models, which don't have the same level of constitutional privacy as a passcode in the United States. Prior to the release of iOS 15, Apple announced new efforts at combating child sexual abuse material on iOS and Mac platforms. Parents of minor iMessage users can now be alerted if their child sends or receives nude photographs. Additionally, on-device hashing would take place on media destined for upload to iCloud, and hashes would be compared to a list of known abusive images provided by law enforcement; if enough matches were found, Apple would be alerted and authorities informed. The new features received praise from law enforcement and victims rights advocates, however privacy advocates, including the Electronic Frontier Foundation, condemned the new features as invasive and highly prone to abuse by authoritarian governments. Charitable causes Apple is a partner of (PRODUCT)RED, a fundraising campaign for AIDS charity. In November 2014, Apple arranged for all App Store revenue in a two-week period to go to the fundraiser, generating more than US$20 million, and in March 2017, it released an iPhone 7 with a red color finish. Apple contributes financially to fundraisers in times of natural disasters. In November 2012, it donated $2.5 million to the American Red Cross to aid relief efforts after Hurricane Sandy, and in 2017 it donated $5 million to relief efforts for both Hurricane Irma and Hurricane Harvey, as well as for the 2017 Central Mexico earthquake. The company has also used its iTunes platform to encourage donations in the wake of environmental disasters and humanitarian crises, such as the 2010 Haiti earthquake, the 2011 Japan earthquake, Typhoon Haiyan in the Philippines in November 2013, and the 2015 European migrant crisis. Apple emphasizes that it does not incur any processing or other fees for iTunes donations, sending 100% of the payments directly to relief efforts, though it also acknowledges that the Red Cross does not receive any personal information on the users donating and that the payments may not be tax deductible. On April 14, 2016, Apple and the World Wide Fund for Nature (WWF) announced that they have engaged in a partnership to, "help protect life on our planet." Apple released a special page in the iTunes App Store, Apps for Earth. In the arrangement, Apple has committed that through April 24, WWF will receive 100% of the proceeds from the applications participating in the App Store via both the purchases of any paid apps and the In-App Purchases. 
Apple and WWF's Apps for Earth campaign raised more than $8 million in total proceeds to support WWF's conservation work. WWF announced the results at WWDC 2016 in San Francisco. During the COVID-19 pandemic, Apple's CEO Cook announced that the company will be donating "millions" of masks to health workers in the United States and Europe. On January 13, 2021, Apple announced a $100 million "Racial Equity and Justice Initiative" to help combat institutional racism worldwide. Criticism and controversies Apple has been criticized for alleged unethical business practices such as anti-competitive behavior, rash litigation, dubious tax tactics, production methods involving the use of sweatshop labor, customer service issues involving allegedly misleading warranties and insufficient data security, and its products' environmental footprint. Apple has also received criticism for its willingness to work and conduct business with nations such as China and Russia, engaging in practices that have been criticized by human rights groups. Critics have claimed that Apple products combine stolen or purchased designs that Apple claims are its original creations. It has been criticized for its alleged collaboration with the U.S. surveillance program PRISM. The company denied any collaboration. Products and services Apple's issues regarding music over the years include those with the European Union regarding iTunes, trouble over updating the Spotify app on Apple devices and collusion with record labels. In 2018–19, Apple faced criticism for its failure to approve NVIDIA web drivers for GPUs installed on legacy Mac Pro machines (up to mid 2012 5,1 running macOS Mojave 10.14). Without access to Apple-approved NVIDIA web drivers, Apple users faced replacing their NVIDIA cards with graphic cards produced by supported brands (such as the AMD Radeon), from a list of recommendations provided by Apple to its consumers. In June 2019, Apple issued a recall for its 2015 MacBook Pro Retina 15" following reports of batteries catching fire. The recall affected 432,000 units, and Apple was criticized for the long waiting periods consumers experienced, sometimes extending up to 3 weeks for replacements to arrive; the company also did not provide alternative replacements or repair options. In July 2019, following a campaign by the "right to repair" movement, challenging Apple's tech repair restrictions on devices, the FTC held a workshop to establish the framework of a future nationwide Right to Repair rule. The movement argues Apple is preventing consumers from legitimately fixing their devices at local repair shops which is having a negative impact on consumers. On November 19, 2020, it was announced that Apple will be paying out $113 million related to lawsuits stemming from their iPhone's battery problems and subsequent performance slow-downs. Apple continues to face litigation related to the performance throttling of iPhone 6 and 7 devices, an action that Apple argued was done in order to balance the functionality of the software with the impacts of a chemically aged battery. On January 25, 2021, Apple was hit with another lawsuit from an Italian consumer group, with more groups to follow, despite the rationale for the throttling. On November 30, 2020, the Italian antitrust authority AGCM fined Apple $12 million for misleading trade practices. 
AGCM stated that Apple's claims of the iPhone's water resistance weren't true as the phones could only resist water up to 4 meters deep in ideal laboratory conditions and not in regular circumstances. The authority added that Apple provided no assistance to customers with water-damaged phones, which it said constituted an aggressive trade practice. Privacy Ireland's Data Protection Commission also launched a privacy investigation to examine whether Apple complied with the EU's GDPR law following an investigation into how the company processes personal data with targeted ads on its platform. In December 2019, a report found that the iPhone 11 Pro continues tracking location and collecting user data even after users have disabled location services. In response, an Apple engineer said the Location Services icon "appears for system services that do not have a switch in settings." Antitrust The United States Department of Justice also began a review of Big Tech firms to establish whether they could be unlawfully stifling competition in a broad antitrust probe in 2019. On March 16, 2020, France fined Apple €1.1 billion for colluding with two wholesalers to stifle competition and keep prices high by handicapping independent resellers. The arrangement created aligned prices for Apple products such as iPads and personal computers for about half the French retail market. According to the French regulators, the abuses occurred between 2005 and 2017 but were first discovered after a complaint by an independent reseller, eBizcuss, in 2012. On August 13, 2020, Epic Games, the maker of the popular game Fortnite, sued Apple and Google after its hugely popular video game was removed from Apple and Google's App Store. The suits come after both Apple and Google blocked the game after it introduced a direct payment system, effectively shutting out the tech titans from collecting fees. In September 2020 Epic Games founded the Coalition for App Fairness together with other thirteen companies, which aims for better conditions for the inclusion of apps in the app stores. Later in December 2020, Facebook agreed to assist Epic in their legal game against Apple, planning to support the company by providing materials and documents to Epic. Facebook had, however, stated that the company will not participate directly with the lawsuit, although did commit to helping with the discovery of evidence relating to the trial of 2021. In the months prior to their agreement, Facebook had been dealing with feuds against Apple relating to the prices of paid apps as well as privacy rule changes. Head of ad products for Facebook Dan Levy commented, saying that "this is not really about privacy for them, this is about an attack on personalized ads and the consequences it's going to have on small-business owners," commenting on the full-page ads placed by Facebook in various newspapers in December 2020. Politics In January 2020, US President Donald Trump and attorney general William P. Barr criticized Apple for refusing to unlock two iPhones of a Saudi national, Mohammed Saeed Alshamrani, who shot and killed three American sailors and injured eight others in the Naval Air Station Pensacola. The shooting was declared an "act of terrorism" by the FBI, but Apple denied the request to crack the phones to reveal possible terrorist information citing its data privacy policy. 
In early September 2020, Apple Inc. shareholders increased pressure on the company to publicly commit “to respect freedom of expression as a human right”, upon which Apple committed to freedom of expression and information in its human rights policy document, which it said is based on the United Nations guidelines on business and human rights. In 2021, Apple complied with a request by the Chinese government to ban a Quran app from its devices and platforms. The request occurred in the context of the Chinese government's ongoing mass repression of Muslims, particularly Uyghurs, in Xinjiang, which some have labeled a genocide. In December 2021, The Information reported that CEO Tim Cook had negotiated in 2016 a five-year agreement with the Chinese government, motivated in part to allay regulatory issues that had harmed the company's business in China. The agreement entailed promised investments totaling $275 billion. In September 2021, Apple removed an app from its App Store created by Alexei Navalny meant to coordinate protest voting during the 2021 Russian legislative election. The Russian government had threatened to arrest individual Apple employees working in the country unless Apple complied. Patents In January 2022, Ericsson sued Apple over the payment of royalties for 5G technology. See also List of Apple Inc. media events Pixar
Apple Inc.
Argon is a chemical element with the symbol Ar and atomic number 18. It is in group 18 of the periodic table and is a noble gas. Argon is the third-most abundant gas in the Earth's atmosphere, at 0.934% (9340 ppmv). It is more than twice as abundant as water vapor (which averages about 4000 ppmv, but varies greatly), 23 times as abundant as carbon dioxide (400 ppmv), and more than 500 times as abundant as neon (18 ppmv). Argon is the most abundant noble gas in Earth's crust, comprising 0.00015% of the crust. Nearly all of the argon in the Earth's atmosphere is radiogenic argon-40, derived from the decay of potassium-40 in the Earth's crust. In the universe, argon-36 is by far the most common argon isotope, as it is the most easily produced by stellar nucleosynthesis in supernovas. The name "argon" is derived from the Greek word , neuter singular form of meaning 'lazy' or 'inactive', as a reference to the fact that the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990. Argon is extracted industrially by the fractional distillation of liquid air. Argon is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily unreactive substances become reactive; for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. Argon is also used in incandescent, fluorescent lighting, and other gas-discharge tubes. Argon makes a distinctive blue-green gas laser. Argon is also used in fluorescent glow starters. Characteristics Argon has approximately the same solubility in water as oxygen and is 2.5 times more soluble in water than nitrogen. Argon is colorless, odorless, nonflammable and nontoxic as a solid, liquid or gas. Argon is chemically inert under most conditions and forms no confirmed stable compounds at room temperature. Although argon is a noble gas, it can form some compounds under various extreme conditions. Argon fluorohydride (HArF), a compound of argon with fluorine and hydrogen that is stable below , has been demonstrated. Although the neutral ground-state chemical compounds of argon are presently limited to HArF, argon can form clathrates with water when atoms of argon are trapped in a lattice of water molecules. Ions, such as , and excited-state complexes, such as ArF, have been demonstrated. Theoretical calculation predicts several more argon compounds that should be stable but have not yet been synthesized. History Argon (Greek , neuter singular form of meaning "lazy" or "inactive") is named in reference to its chemical inactivity. This chemical property of this first noble gas to be discovered impressed the namers. An unreactive gas was suspected to be a component of air by Henry Cavendish in 1785. Argon was first isolated from air in 1894 by Lord Rayleigh and Sir William Ramsay at University College London by removing oxygen, carbon dioxide, water, and nitrogen from a sample of clean air. They first accomplished this by replicating an experiment of Henry Cavendish's. 
They trapped a mixture of atmospheric air with additional oxygen in a test-tube (A) upside-down over a large quantity of dilute alkali solution (B), which in Cavendish's original experiment was potassium hydroxide, and conveyed a current through wires insulated by U-shaped glass tubes (CC) which sealed around the platinum wire electrodes, leaving the ends of the wires (DD) exposed to the gas and insulated from the alkali solution. The arc was powered by a battery of five Grove cells and a Ruhmkorff coil of medium size. The alkali absorbed the oxides of nitrogen produced by the arc and also carbon dioxide. They operated the arc until no more reduction of volume of the gas could be seen for at least an hour or two and the spectral lines of nitrogen disappeared when the gas was examined. The remaining oxygen was reacted with alkaline pyrogallate to leave behind an apparently non-reactive gas which they called argon. Before isolating the gas, they had determined that nitrogen produced from chemical compounds was 0.5% lighter than nitrogen from the atmosphere. The difference was slight, but it was important enough to attract their attention for many months. They concluded that there was another gas in the air mixed in with the nitrogen. Argon was also encountered in 1882 through the independent research of H. F. Newall and W. N. Hartley. Each observed new lines in the emission spectrum of air that did not match known elements. Until 1957, the symbol for argon was "A", but now it is "Ar". Occurrence Argon constitutes 0.934% by volume and 1.288% by mass of the Earth's atmosphere. Air is the primary industrial source of purified argon products. Argon is isolated from air by fractionation, most commonly by cryogenic fractional distillation, a process that also produces purified nitrogen, oxygen, neon, krypton and xenon. The Earth's crust and seawater contain 1.2 ppm and 0.45 ppm of argon, respectively. Isotopes The main isotopes of argon found on Earth are 40Ar (99.6%), 36Ar (0.34%), and 38Ar (0.06%). Naturally occurring 40K, with a half-life of 1.25 billion years, decays to stable 40Ar (11.2%) by electron capture or positron emission, and also to stable 40Ca (88.8%) by beta decay. These properties and ratios are used to determine the age of rocks by K–Ar dating. In the Earth's atmosphere, 39Ar is made by cosmic ray activity, primarily by neutron capture of 40Ar followed by two-neutron emission. In the subsurface environment, it is also produced through neutron capture by 39K, followed by proton emission. 37Ar is created from the neutron capture by 40Ca followed by an alpha particle emission as a result of subsurface nuclear explosions. It has a half-life of 35 days. Between locations in the Solar System, the isotopic composition of argon varies greatly. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope, as it is on Earth. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. This contrasts with the low abundance of primordial 36Ar in Earth's atmosphere, which is only 31.5 ppmv (= 9340 ppmv × 0.337%), comparable with that of neon (18.18 ppmv) on Earth and with interplanetary gases, measured by probes. The atmospheres of Mars, Mercury and Titan (the largest moon of Saturn) contain argon, predominantly as 40Ar, and its content may be as high as 1.93% (Mars). 
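As a rough illustration of the K–Ar dating mentioned above, the age of a rock follows from the measured ratio of radiogenic 40Ar to the 40K remaining in the sample, given the decay constant and the fraction of decays that yield 40Ar. The Python sketch below uses only the half-life and branching figures quoted in this section; real K–Ar dating requires corrections (for example, for atmospheric argon contamination) that are omitted here, and the sample ratio is invented for the example.

```python
import math

# Figures quoted in this section: 40K half-life of 1.25 billion years,
# with 11.2% of decays yielding 40Ar (the rest yield 40Ca).
HALF_LIFE_40K_YEARS = 1.25e9
LAMBDA_TOTAL = math.log(2) / HALF_LIFE_40K_YEARS  # total decay constant, per year
AR_BRANCH_FRACTION = 0.112                        # fraction of 40K decays producing 40Ar

def k_ar_age_years(ar40_over_k40: float) -> float:
    """Age from the ratio of radiogenic 40Ar to remaining 40K.

    Accumulation follows 40Ar/40K = (branch fraction) * (exp(lambda * t) - 1),
    which is solved here for t.
    """
    return math.log(1.0 + ar40_over_k40 / AR_BRANCH_FRACTION) / LAMBDA_TOTAL

# A hypothetical rock whose radiogenic 40Ar amounts to 10% of its remaining 40K:
print(f"{k_ar_age_years(0.10):.2e} years")  # roughly 1.1e9 years
```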
The predominance of radiogenic 40Ar is the reason the standard atomic weight of terrestrial argon is greater than that of the next element, potassium, a fact that was puzzling when argon was discovered. Mendeleev positioned the elements on his periodic table in order of atomic weight, but the inertness of argon suggested a placement before the reactive alkali metal. Henry Moseley later solved this problem by showing that the periodic table is actually arranged in order of atomic number (see History of the periodic table). Compounds Argon's complete octet of electrons indicates full s and p subshells. This full valence shell makes argon very stable and extremely resistant to bonding with other elements. Before 1962, argon and the other noble gases were considered to be chemically inert and unable to form compounds; however, compounds of the heavier noble gases have since been synthesized. The first argon compound with tungsten pentacarbonyl, W(CO)5Ar, was isolated in 1975. However, it was not widely recognised at that time. In August 2000, another argon compound, argon fluorohydride (HArF), was formed by researchers at the University of Helsinki, by shining ultraviolet light onto frozen argon containing a small amount of hydrogen fluoride with caesium iodide. This discovery caused the recognition that argon could form weakly bound compounds, even though it was not the first. It is stable up to 17 kelvins (−256 °C). The metastable ArCF22+ dication, which is valence-isoelectronic with carbonyl fluoride and phosgene, was observed in 2010. Argon-36, in the form of argon hydride (argonium) ions, has been detected in the interstellar medium associated with the Crab Nebula supernova; this was the first noble-gas molecule detected in outer space. Solid argon hydride (Ar(H2)2) has the same crystal structure as the MgZn2 Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the H2 molecules in Ar(H2)2 dissociate above 175 GPa. Production Industrial Argon is extracted industrially by the fractional distillation of liquid air in a cryogenic air separation unit, a process that separates liquid nitrogen, which boils at 77.3 K, from argon, which boils at 87.3 K, and liquid oxygen, which boils at 90.2 K. About 700,000 tonnes of argon are produced worldwide every year. In radioactive decays 40Ar, the most abundant isotope of argon, is produced by the decay of 40K, with a half-life of 1.25 billion years, by electron capture or positron emission. Because of this, it is used in potassium–argon dating to determine the age of rocks. Applications Argon has several desirable properties: Argon is a chemically inert gas. Argon is the cheapest alternative when nitrogen is not sufficiently inert. Argon has low thermal conductivity. Argon has electronic properties (ionization and/or the emission spectrum) desirable for some applications. Other noble gases would be equally suitable for most of these applications, but argon is by far the cheapest. Argon is inexpensive, since it occurs naturally in air and is readily obtained as a byproduct of cryogenic air separation in the production of liquid oxygen and liquid nitrogen: the primary constituents of air are used on a large industrial scale. The other noble gases (except helium) are produced this way as well, but argon is the most plentiful by far. The bulk of argon applications arise simply because it is inert and relatively cheap. 
Industrial processes Argon is used in some high-temperature industrial processes where ordinarily non-reactive substances become reactive. For example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. For some of these processes, the presence of nitrogen or oxygen gases might cause defects within the material. Argon is used in some types of arc welding such as gas metal arc welding and gas tungsten arc welding, as well as in the processing of titanium and other reactive elements. An argon atmosphere is also used for growing crystals of silicon and germanium. Argon is used in the poultry industry to asphyxiate birds, either for mass culling following disease outbreaks, or as a means of slaughter more humane than electric stunning. Argon is denser than air and displaces oxygen close to the ground during inert gas asphyxiation. Its non-reactive nature makes it suitable in a food product, and since it replaces oxygen within the dead bird, argon also enhances shelf life. Argon is sometimes used for extinguishing fires where valuable equipment may be damaged by water or foam. Scientific research Liquid argon is used as the target for neutrino experiments and direct dark matter searches. The interaction between the hypothetical WIMPs and an argon nucleus produces scintillation light that is detected by photomultiplier tubes. Two-phase detectors containing argon gas are used to detect the ionized electrons produced during the WIMP–nucleus scattering. As with most other liquefied noble gases, argon has a high scintillation light yield (about 51 photons/keV), is transparent to its own scintillation light, and is relatively easy to purify. Compared to xenon, argon is cheaper and has a distinct scintillation time profile, which allows the separation of electronic recoils from nuclear recoils. On the other hand, its intrinsic beta-ray background is larger due to 39Ar contamination, unless one uses argon from underground sources, which has much less 39Ar contamination. Most of the argon in the Earth's atmosphere was produced by electron capture of long-lived 40K (40K + e− → 40Ar + ν) present in natural potassium within the Earth. The 39Ar activity in the atmosphere is maintained by cosmogenic production through the knockout reaction 40Ar(n,2n)39Ar and similar reactions. The half-life of 39Ar is only 269 years. As a result, the underground argon, shielded by rock and water, has much less 39Ar contamination. Dark-matter detectors currently operating with liquid argon include DarkSide, WArP, ArDM, microCLEAN and DEAP. Neutrino experiments include ICARUS and MicroBooNE, both of which use high-purity liquid argon in a time projection chamber for fine-grained three-dimensional imaging of neutrino interactions. At Linköping University, Sweden, the inert gas is being utilized in a vacuum chamber in which plasma is introduced to ionize metallic films. This process results in a film usable for manufacturing computer processors. The new process would eliminate the need for chemical baths and use of expensive, dangerous and rare materials. Preservative Argon is used to displace oxygen- and moisture-containing air in packaging material to extend the shelf-lives of the contents (argon has the European food additive code E938). Aerial oxidation, hydrolysis, and other chemical reactions that degrade the products are retarded or prevented entirely. High-purity chemicals and pharmaceuticals are sometimes packed and sealed in argon. 
In winemaking, argon is used in a variety of activities to provide a barrier against oxygen at the liquid surface, which can spoil wine by fueling both microbial metabolism (as with acetic acid bacteria) and standard redox chemistry. Argon is sometimes used as the propellant in aerosol cans. Argon is also used as a preservative for such products as varnish, polyurethane, and paint, by displacing air to prepare a container for storage. Since 2002, the American National Archives stores important national documents such as the Declaration of Independence and the Constitution within argon-filled cases to inhibit their degradation. Argon is preferable to the helium that had been used in the preceding five decades, because helium gas escapes through the intermolecular pores in most containers and must be regularly replaced. Laboratory equipment Argon may be used as the inert gas within Schlenk lines and gloveboxes. Argon is preferred to less expensive nitrogen in cases where nitrogen may react with the reagents or apparatus. Argon may be used as the carrier gas in gas chromatography and in electrospray ionization mass spectrometry; it is the gas of choice for the plasma used in ICP spectroscopy. Argon is preferred for the sputter coating of specimens for scanning electron microscopy. Argon gas is also commonly used for sputter deposition of thin films as in microelectronics and for wafer cleaning in microfabrication. Medical use Cryosurgery procedures such as cryoablation use liquid argon to destroy tissue such as cancer cells. It is used in a procedure called "argon-enhanced coagulation", a form of argon plasma beam electrosurgery. The procedure carries a risk of producing gas embolism and has resulted in the death of at least one patient. Blue argon lasers are used in surgery to weld arteries, destroy tumors, and correct eye defects. Argon has also been used experimentally to replace nitrogen in the breathing or decompression mix known as Argox, to speed the elimination of dissolved nitrogen from the blood. Lighting Incandescent lights are filled with argon, to preserve the filaments at high temperature from oxidation. It is used for the specific way it ionizes and emits light, such as in plasma globes and calorimetry in experimental particle physics. Gas-discharge lamps filled with pure argon provide lilac/violet light; with argon and some mercury, blue light. Argon is also used for blue and green argon-ion lasers. Miscellaneous uses Argon is used for thermal insulation in energy-efficient windows. Argon is also used in technical scuba diving to inflate a dry suit because it is inert and has low thermal conductivity. Argon is used as a propellant in the development of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR). Compressed argon gas is allowed to expand, to cool the seeker heads of some versions of the AIM-9 Sidewinder missile and other missiles that use cooled thermal seeker heads. The gas is stored at high pressure. Argon-39, with a half-life of 269 years, has been used for a number of applications, primarily ice core and ground water dating. Also, potassium–argon dating and related argon-argon dating is used to date sedimentary, metamorphic, and igneous rocks. Argon has been used by athletes as a doping agent to simulate hypoxic conditions. In 2014, the World Anti-Doping Agency (WADA) added argon and xenon to the list of prohibited substances and methods, although at this time there is no reliable test for abuse. 
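The argon-39 dating mentioned above works in the opposite direction from potassium–argon dating: rather than measuring a decay product that accumulates, it measures how much cosmogenic 39Ar a sample of ice or groundwater has lost since it was sealed off from the atmosphere. The sketch below is a minimal illustration using only the 269-year half-life quoted in this article; practical 39Ar dating involves detection limits and corrections that are not shown, and the example fractions are invented.

```python
import math

HALF_LIFE_39AR_YEARS = 269.0  # half-life of argon-39 quoted in this article

def age_from_ar39_fraction(remaining_fraction: float) -> float:
    """Years since the sample stopped exchanging with the atmosphere.

    Once sealed off, the cosmogenic 39Ar simply decays:
    remaining = 0.5 ** (t / half_life), solved here for t.
    """
    return -HALF_LIFE_39AR_YEARS * math.log2(remaining_fraction)

for fraction in (0.9, 0.5, 0.1):
    print(f"{fraction:.0%} of atmospheric 39Ar left -> about {age_from_ar39_fraction(fraction):.0f} years")
```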
Safety Although argon is non-toxic, it is 38% more dense than air and therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because it is colorless, odorless, and tasteless. A 1994 incident, in which a man was asphyxiated after entering an argon-filled section of oil pipe under construction in Alaska, highlights the dangers of argon tank leakage in confined spaces and emphasizes the need for proper use, storage and handling. See also Industrial gas Oxygen–argon ratio, a ratio of two physically similar gases, which has importance in various sectors.
Argon
Arsenic is a chemical element with the symbol As and atomic number 33. Arsenic occurs in many minerals, usually in combination with sulfur and metals, but also as a pure elemental crystal. Arsenic is a metalloid. It has various allotropes, but only the gray form, which has a metallic appearance, is important to industry. The primary use of arsenic is in alloys of lead (for example, in car batteries and ammunition). Arsenic is a common n-type dopant in semiconductor electronic devices. It is also a component of the III-V compound semiconductor gallium arsenide. Arsenic and its compounds, especially the trioxide, are used in the production of pesticides, treated wood products, herbicides, and insecticides. These applications are declining with the increasing recognition of the toxicity of arsenic and its compounds. A few species of bacteria are able to use arsenic compounds as respiratory metabolites. Trace quantities of arsenic are an essential dietary element in rats, hamsters, goats, chickens, and presumably other species. A role in human metabolism is not known. However, arsenic poisoning occurs in multicellular life if quantities are larger than needed. Arsenic contamination of groundwater is a problem that affects millions of people across the world. The United States' Environmental Protection Agency states that all forms of arsenic are a serious risk to human health. The United States' Agency for Toxic Substances and Disease Registry ranked arsenic as number 1 in its 2001 Priority List of Hazardous Substances at Superfund sites. Arsenic is classified as a Group-A carcinogen. Characteristics Physical characteristics The three most common arsenic allotropes are gray, yellow, and black arsenic, with gray being the most common. Gray arsenic (α-As, space group Rm No. 166) adopts a double-layered structure consisting of many interlocked, ruffled, six-membered rings. Because of weak bonding between the layers, gray arsenic is brittle and has a relatively low Mohs hardness of 3.5. Nearest and next-nearest neighbors form a distorted octahedral complex, with the three atoms in the same double-layer being slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 5.73 g/cm3. Gray arsenic is a semimetal, but becomes a semiconductor with a bandgap of 1.2–1.4 eV if amorphized. Gray arsenic is also the most stable form. Yellow arsenic is soft and waxy, and somewhat similar to tetraphosphorus (). Both have four atoms arranged in a tetrahedral structure in which each atom is bound to each of the other three atoms by a single bond. This unstable allotrope, being molecular, is the most volatile, least dense, and most toxic. Solid yellow arsenic is produced by rapid cooling of arsenic vapor, . It is rapidly transformed into gray arsenic by light. The yellow form has a density of 1.97 g/cm3. Black arsenic is similar in structure to black phosphorus. Black arsenic can also be formed by cooling vapor at around 100–220 °C and by crystallization of amorphous arsenic in the presence of mercury vapors. It is glassy and brittle. It is also a poor electrical conductor. As arsenic's triple point is at 3.628 MPa (35.81 atm), it does not have a melting point at standard pressure but instead sublimes from solid to vapor at 887 K (615 °C or 1137 °F). Isotopes Arsenic occurs in nature as a monoisotopic element, composed of one stable isotope, 75As. As of 2003, at least 33 radioisotopes have also been synthesized, ranging in atomic mass from 60 to 92. 
The most stable of these is 73As with a half-life of 80.30 days. All other isotopes have half-lives of under one day, with the exception of 71As (t1/2=65.30 hours), 72As (t1/2=26.0 hours), 74As (t1/2=17.77 days), 76As (t1/2=1.0942 days), and 77As (t1/2=38.83 hours). Isotopes that are lighter than the stable 75As tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. At least 10 nuclear isomers have been described, ranging in atomic mass from 66 to 84. The most stable of arsenic's isomers is 68mAs with a half-life of 111 seconds. Chemistry Arsenic has a similar electronegativity and ionization energies to its lighter congener phosphorus and accordingly readily forms covalent molecules with most of the nonmetals. Though stable in dry air, arsenic forms a golden-bronze tarnish upon exposure to humidity which eventually becomes a black surface layer. When heated in air, arsenic oxidizes to arsenic trioxide; the fumes from this reaction have an odor resembling garlic. This odor can be detected on striking arsenide minerals such as arsenopyrite with a hammer. It burns in oxygen to form arsenic trioxide and arsenic pentoxide, which have the same structure as the more well-known phosphorus compounds, and in fluorine to give arsenic pentafluoride. Arsenic (and some arsenic compounds) sublimes upon heating at atmospheric pressure, converting directly to a gaseous form without an intervening liquid state at . The triple point is 3.63 MPa and . Arsenic makes arsenic acid with concentrated nitric acid, arsenous acid with dilute nitric acid, and arsenic trioxide with concentrated sulfuric acid; however, it does not react with water, alkalis, or non-oxidising acids. Arsenic reacts with metals to form arsenides, though these are not ionic compounds containing the As3− ion as the formation of such an anion would be highly endothermic and even the group 1 arsenides have properties of intermetallic compounds. Like germanium, selenium, and bromine, which like arsenic succeed the 3d transition series, arsenic is much less stable in the group oxidation state of +5 than its vertical neighbors phosphorus and antimony, and hence arsenic pentoxide and arsenic acid are potent oxidizers. Compounds Compounds of arsenic resemble in some respects those of phosphorus which occupies the same group (column) of the periodic table. The most common oxidation states for arsenic are: −3 in the arsenides, which are alloy-like intermetallic compounds, +3 in the arsenites, and +5 in the arsenates and most organoarsenic compounds. Arsenic also bonds readily to itself as seen in the square As ions in the mineral skutterudite. In the +3 oxidation state, arsenic is typically pyramidal owing to the influence of the lone pair of electrons. Inorganic compounds One of the simplest arsenic compound is the trihydride, the highly toxic, flammable, pyrophoric arsine (AsH3). This compound is generally regarded as stable, since at room temperature it decomposes only slowly. At temperatures of 250–300 °C decomposition to arsenic and hydrogen is rapid. Several factors, such as humidity, presence of light and certain catalysts (namely aluminium) facilitate the rate of decomposition. It oxidises readily in air to form arsenic trioxide and water, and analogous reactions take place with sulfur and selenium instead of oxygen. Arsenic forms colorless, odorless, crystalline oxides As2O3 ("white arsenic") and As2O5 which are hygroscopic and readily soluble in water to form acidic solutions. 
Arsenic(V) acid is a weak acid and the salts are called arsenates, the most common arsenic contamination of groundwater, and a problem that affects many people. Synthetic arsenates include Scheele's Green (cupric hydrogen arsenate, acidic copper arsenate), calcium arsenate, and lead hydrogen arsenate. These three have been used as agricultural insecticides and poisons. The protonation steps between the arsenate and arsenic acid are similar to those between phosphate and phosphoric acid. Unlike phosphorous acid, arsenous acid is genuinely tribasic, with the formula As(OH)3. A broad variety of sulfur compounds of arsenic are known. Orpiment (As2S3) and realgar (As4S4) are somewhat abundant and were formerly used as painting pigments. In As4S10, arsenic has a formal oxidation state of +2 in As4S4 which features As-As bonds so that the total covalency of As is still 3. Both orpiment and realgar, as well as As4S3, have selenium analogs; the analogous As2Te3 is known as the mineral kalgoorlieite, and the anion As2Te− is known as a ligand in cobalt complexes. All trihalides of arsenic(III) are well known except the astatide, which is unknown. Arsenic pentafluoride (AsF5) is the only important pentahalide, reflecting the lower stability of the +5 oxidation state; even so, it is a very strong fluorinating and oxidizing agent. (The pentachloride is stable only below −50 °C, at which temperature it decomposes to the trichloride, releasing chlorine gas.) Alloys Arsenic is used as the group 5 element in the III-V semiconductors gallium arsenide, indium arsenide, and aluminium arsenide. The valence electron count of GaAs is the same as a pair of Si atoms, but the band structure is completely different which results in distinct bulk properties. Other arsenic alloys include the II-V semiconductor cadmium arsenide. Organoarsenic compounds A large variety of organoarsenic compounds are known. Several were developed as chemical warfare agents during World War I, including vesicants such as lewisite and vomiting agents such as adamsite. Cacodylic acid, which is of historic and practical interest, arises from the methylation of arsenic trioxide, a reaction that has no analogy in phosphorus chemistry. Cacodyl was the first organometallic compound known (even though arsenic is not a true metal) and was named from the Greek κακωδία "stink" for its offensive odor; it is very poisonous. Occurrence and production Arsenic comprises about 1.5 ppm (0.00015%) of the Earth's crust, and is the 53rd most abundant element. Typical background concentrations of arsenic do not exceed 3 ng/m3 in the atmosphere; 100 mg/kg in soil; 400 μg/kg in vegetation; 10 μg/L in freshwater and 1.5 μg/L in seawater. Minerals with the formula MAsS and MAs2 (M = Fe, Ni, Co) are the dominant commercial sources of arsenic, together with realgar (an arsenic sulfide mineral) and native (elemental) arsenic. An illustrative mineral is arsenopyrite (FeAsS), which is structurally related to iron pyrite. Many minor As-containing minerals are known. Arsenic also occurs in various organic forms in the environment. In 2014, China was the top producer of white arsenic with almost 70% world share, followed by Morocco, Russia, and Belgium, according to the British Geological Survey and the United States Geological Survey. Most arsenic refinement operations in the US and Europe have closed over environmental concerns. Arsenic is found in the smelter dust from copper, gold, and lead smelters, and is recovered primarily from copper refinement dust. 
On roasting arsenopyrite in air, arsenic sublimes as arsenic(III) oxide leaving iron oxides, while roasting without air results in the production of gray arsenic. Further purification from sulfur and other chalcogens is achieved by sublimation in vacuum, in a hydrogen atmosphere, or by distillation from molten lead-arsenic mixture. History The word arsenic has its origin in the Syriac word (al) zarniqa, from Arabic al-zarnīḵ 'the orpiment’, based on Persian zar 'gold' from the word zarnikh, meaning "yellow" (literally "gold-colored") and hence "(yellow) orpiment". It was adopted into Greek as arsenikon (), a form that is folk etymology, being the neuter form of the Greek word arsenikos (), meaning "male", "virile". The Greek word was adopted in Latin as arsenicum, which in French became arsenic, from which the English word arsenic is taken. Arsenic sulfides (orpiment, realgar) and oxides have been known and used since ancient times. Zosimos (circa 300 AD) describes roasting sandarach (realgar) to obtain cloud of arsenic (arsenic trioxide), which he then reduces to gray arsenic. As the symptoms of arsenic poisoning are not very specific, it was frequently used for murder until the advent of the Marsh test, a sensitive chemical test for its presence. (Another less sensitive but more general test is the Reinsch test.) Owing to its use by the ruling class to murder one another and its potency and discreetness, arsenic has been called the "poison of kings" and the "king of poisons". During the Bronze Age, arsenic was often included in bronze, which made the alloy harder (so-called "arsenical bronze"). The isolation of arsenic was described by Jabir ibn Hayyan before 815 AD. Albertus Magnus (Albert the Great, 1193–1280) later isolated the element from a compound in 1250, by heating soap together with arsenic trisulfide. In 1649, Johann Schröder published two ways of preparing arsenic. Crystals of elemental (native) arsenic are found in nature, although rare. Cadet's fuming liquid (impure cacodyl), often claimed as the first synthetic organometallic compound, was synthesized in 1760 by Louis Claude Cadet de Gassicourt by the reaction of potassium acetate with arsenic trioxide. In the Victorian era, "arsenic" ("white arsenic" or arsenic trioxide) was mixed with vinegar and chalk and eaten by women to improve the complexion of their faces, making their skin paler to show they did not work in the fields. The accidental use of arsenic in the adulteration of foodstuffs led to the Bradford sweet poisoning in 1858, which resulted in 21 deaths. Wallpaper production also began to use dyes made from arsenic, which was thought to increase the pigment's brightness. Two arsenic pigments have been widely used since their discovery – Paris Green and Scheele's Green. After the toxicity of arsenic became widely known, these chemicals were used less often as pigments and more often as insecticides. In the 1860s, an arsenic byproduct of dye production, London Purple, was widely used. This was a solid mixture of arsenic trioxide, aniline, lime, and ferrous oxide, insoluble in water and very toxic by inhalation or ingestion But it was later replaced with Paris Green, another arsenic-based dye. With better understanding of the toxicology mechanism, two other compounds were used starting in the 1890s. Arsenite of lime and arsenate of lead were used widely as insecticides until the discovery of DDT in 1942. 
Applications Agricultural The toxicity of arsenic to insects, bacteria, and fungi led to its use as a wood preservative. In the 1930s, a process of treating wood with chromated copper arsenate (also known as CCA or Tanalith) was invented, and for decades, this treatment was the most extensive industrial use of arsenic. An increased appreciation of the toxicity of arsenic led to a ban of CCA in consumer products in 2004, initiated by the European Union and United States. However, CCA remains in heavy use in other countries (such as on Malaysian rubber plantations). Arsenic was also used in various agricultural insecticides and poisons. For example, lead hydrogen arsenate was a common insecticide on fruit trees, but contact with the compound sometimes resulted in brain damage among those working the sprayers. In the second half of the 20th century, monosodium methyl arsenate (MSMA) and disodium methyl arsenate (DSMA) – less toxic organic forms of arsenic – replaced lead arsenate in agriculture. These organic arsenicals were in turn phased out by 2013 in all agricultural activities except cotton farming. The biogeochemistry of arsenic is complex and includes various adsorption and desorption processes. The toxicity of arsenic is connected to its solubility and is affected by pH. Arsenite () is more soluble than arsenate () and is more toxic; however, at a lower pH, arsenate becomes more mobile and toxic. It was found that addition of sulfur, phosphorus, and iron oxides to high-arsenite soils greatly reduces arsenic phytotoxicity. Arsenic is used as a feed additive in poultry and swine production, in particular in the U.S. to increase weight gain, improve feed efficiency, and prevent disease. An example is roxarsone, which had been used as a broiler starter by about 70% of U.S. broiler growers. Alpharma, a subsidiary of Pfizer Inc., which produces roxarsone, voluntarily suspended sales of the drug in response to studies showing elevated levels of inorganic arsenic, a carcinogen, in treated chickens. A successor to Alpharma, Zoetis, continues to sell nitarsone, primarily for use in turkeys. Arsenic is intentionally added to the feed of chickens raised for human consumption. Organic arsenic compounds are less toxic than pure arsenic, and promote the growth of chickens. Under some conditions, the arsenic in chicken feed is converted to the toxic inorganic form. A 2006 study of the remains of the Australian racehorse, Phar Lap, determined that the 1932 death of the famous champion was caused by a massive overdose of arsenic. Sydney veterinarian Percy Sykes stated, "In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution) ... It was so common that I'd reckon 90 per cent of the horses had arsenic in their system." Medical use During the 17th, 18th, and 19th centuries, a number of arsenic compounds were used as medicines, including arsphenamine (by Paul Ehrlich) and arsenic trioxide (by Thomas Fowler). Arsphenamine, as well as neosalvarsan, was indicated for syphilis, but has been superseded by modern antibiotics. However, arsenicals such as melarsoprol are still used for the treatment of trypanosomiasis, since although these drugs have the disadvantage of severe toxicity, the disease is almost uniformly fatal if untreated. Arsenic trioxide has been used in a variety of ways over the past 500 years, most commonly in the treatment of cancer, but also in medications as diverse as Fowler's solution in psoriasis. 
The US Food and Drug Administration in the year 2000 approved this compound (arsenic trioxide) for the treatment of patients with acute promyelocytic leukemia that is resistant to all-trans retinoic acid. A 2008 paper reports success in locating tumors using arsenic-74 (a positron emitter). This isotope produces clearer PET scan images than the previous radioactive agent, iodine-124, because the body tends to transport iodine to the thyroid gland, producing signal noise. Nanoparticles of arsenic have shown ability to kill cancer cells with lesser cytotoxicity than other arsenic formulations. In subtoxic doses, soluble arsenic compounds act as stimulants, and were once popular in small doses as medicine in the mid-18th to 19th centuries; their use as stimulants was especially prevalent in sport animals such as race horses and work dogs. Alloys The main use of arsenic is in alloying with lead. Lead components in car batteries are strengthened by the presence of a very small percentage of arsenic. Dezincification of brass (a copper-zinc alloy) is greatly reduced by the addition of arsenic. "Phosphorus Deoxidized Arsenical Copper" with an arsenic content of 0.3% has an increased corrosion stability in certain environments. Gallium arsenide is an important semiconductor material, used in integrated circuits. Circuits made from GaAs are much faster (but also much more expensive) than those made from silicon. Unlike silicon, GaAs has a direct bandgap, and can be used in laser diodes and LEDs to convert electrical energy directly into light. Military After World War I, the United States built a stockpile of 20,000 tons of weaponized lewisite (ClCH=CHAsCl2), an organoarsenic vesicant (blister agent) and lung irritant. The stockpile was neutralized with bleach and dumped into the Gulf of Mexico in the 1950s. During the Vietnam War, the United States used Agent Blue, a mixture of sodium cacodylate and its acid form, as one of the rainbow herbicides to deprive North Vietnamese soldiers of foliage cover and rice. Other uses Copper acetoarsenite was used as a green pigment known under many names, including Paris Green and Emerald Green. It caused numerous arsenic poisonings. Scheele's Green, a copper arsenate, was used in the 19th century as a coloring agent in sweets. Arsenic is used in bronzing and pyrotechnics. As much as 2% of produced arsenic is used in lead alloys for lead shot and bullets. Arsenic is added in small quantities to alpha-brass to make it dezincification-resistant. This grade of brass is used in plumbing fittings and other wet environments. Arsenic is also used for taxonomic sample preservation. Arsenic was used as an opacifier in ceramics, creating white glazes. Until recently, arsenic was used in optical glass. Modern glass manufacturers, under pressure from environmentalists, have ceased using both arsenic and lead. Biological role Bacteria Some species of bacteria obtain their energy in the absence of oxygen by oxidizing various fuels while reducing arsenate to arsenite; the enzymes involved are known as arsenate reductases (Arr). Under oxidative environmental conditions some bacteria use arsenite as fuel, which they oxidize to arsenate. In 2008, bacteria were discovered that employ a version of photosynthesis in the absence of oxygen with arsenites as electron donors, producing arsenates (just as ordinary photosynthesis uses water as electron donor, producing molecular oxygen).
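As a rough illustration of the direct-bandgap point above, the wavelength of light emitted by a GaAs device follows from its band-gap energy. The value of about 1.42 eV for GaAs at room temperature is a standard literature figure assumed here; it is not stated in the article.

```python
# Sketch: why GaAs's direct bandgap suits LEDs and laser diodes.
# The band-gap energy sets the photon energy, and so the wavelength, of emitted light.
# The 1.42 eV figure is an assumed standard room-temperature value for GaAs.

PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm

def emission_wavelength_nm(bandgap_ev: float) -> float:
    """Approximate emission wavelength for a direct-gap semiconductor."""
    return PLANCK_EV_NM / bandgap_ev

print(f"GaAs (~1.42 eV): {emission_wavelength_nm(1.42):.0f} nm (near-infrared)")
```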
Researchers conjecture that, over the course of history, these photosynthesizing organisms produced the arsenates that allowed the arsenate-reducing bacteria to thrive. One strain PHS-1 has been isolated and is related to the gammaproteobacterium Ectothiorhodospira shaposhnikovii. The mechanism is unknown, but an encoded Arr enzyme may function in reverse to its known homologues. In 2011, it was postulated that a strain of Halomonadaceae could be grown in the absence of phosphorus if that element were substituted with arsenic, exploiting the fact that the arsenate and phosphate anions are similar structurally. The study was widely criticised and subsequently refuted by independent researcher groups. Essential trace element in higher animals Some evidence indicates that arsenic is an essential trace mineral in birds (chickens), and in mammals (rats, hamsters, and goats). However, the biological function is not known. Heredity Arsenic has been linked to epigenetic changes, heritable changes in gene expression that occur without changes in DNA sequence. These include DNA methylation, histone modification, and RNA interference. Toxic levels of arsenic cause significant DNA hypermethylation of tumor suppressor genes p16 and p53, thus increasing risk of carcinogenesis. These epigenetic events have been studied in vitro using human kidney cells and in vivo using rat liver cells and peripheral blood leukocytes in humans. Inductively coupled plasma mass spectrometry (ICP-MS) is used to detect precise levels of intracellular arsenic and other arsenic bases involved in epigenetic modification of DNA. Studies investigating arsenic as an epigenetic factor can be used to develop precise biomarkers of exposure and susceptibility. The Chinese brake fern (Pteris vittata) hyperaccumulates arsenic from the soil into its leaves and has a proposed use in phytoremediation. Biomethylation Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolized through a process of methylation. For example, the mold Scopulariopsis brevicaulis produces trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also in mushrooms in larger concentrations. The average person's intake is about 10–50 µg/day. Values about 1000 µg are not unusual following consumption of fish or mushrooms, but there is little danger in eating fish because this arsenic compound is nearly non-toxic. Environmental issues Exposure Naturally occurring sources of human exposure include volcanic ash, weathering of minerals and ores, and mineralized groundwater. Arsenic is also found in food, water, soil, and air. Arsenic is absorbed by all plants, but is more concentrated in leafy vegetables, rice, apple and grape juice, and seafood. An additional route of exposure is inhalation of atmospheric gases and dusts. During the Victorian era, arsenic was widely used in home decor, especially wallpapers. Occurrence in drinking water Extensive arsenic contamination of groundwater has led to widespread arsenic poisoning in Bangladesh and neighboring countries. It is estimated that approximately 57 million people in the Bengal basin are drinking groundwater with arsenic concentrations elevated above the World Health Organization's standard of 10 parts per billion (ppb). However, a study of cancer rates in Taiwan suggested that significant increases in cancer mortality appear only at levels above 150 ppb. 
The arsenic in the groundwater is of natural origin, and is released from the sediment into the groundwater owing to the anoxic conditions of the subsurface. This groundwater was used after local and western NGOs and the Bangladeshi government undertook a massive shallow tube well drinking-water program in the late twentieth century. This program was designed to prevent drinking of bacteria-contaminated surface waters, but failed to test for arsenic in the groundwater. Many other countries and districts in Southeast Asia, such as Vietnam and Cambodia, have geological environments that produce groundwater with a high arsenic content. Arsenicosis was reported in Nakhon Si Thammarat, Thailand in 1987, and the Chao Phraya River probably contains high levels of naturally occurring dissolved arsenic without being a public health problem because much of the public uses bottled water. In Pakistan, more than 60 million people are exposed to arsenic-polluted drinking water, according to a recent report in Science. Podgorski's team investigated more than 1,200 samples; more than 66% exceeded the WHO minimum contamination level. Since the 1980s, residents of the Ba Men region of Inner Mongolia, China have been chronically exposed to arsenic through drinking water from contaminated wells. A 2009 research study observed an elevated presence of skin lesions among residents with well water arsenic concentrations between 5 and 10 µg/L, suggesting that arsenic-induced toxicity may occur at relatively low concentrations with chronic exposure. Overall, 20 of China's 34 provinces have high arsenic concentrations in the groundwater supply, potentially exposing 19 million people to hazardous drinking water. In the United States, arsenic is most commonly found in the ground waters of the southwest. Parts of New England, Michigan, Wisconsin, Minnesota and the Dakotas are also known to have significant concentrations of arsenic in ground water. Increased levels of skin cancer have been associated with arsenic exposure in Wisconsin, even at levels below the 10 part per billion drinking water standard. According to a recent film funded by the US Superfund, millions of private wells have unknown arsenic levels, and in some areas of the US, more than 20% of the wells may contain levels that exceed established limits. Low-level exposure to arsenic at concentrations of 100 parts per billion (i.e., above the 10 parts per billion drinking water standard) compromises the initial immune response to H1N1 or swine flu infection, according to NIEHS-supported scientists. The study, conducted in laboratory mice, suggests that people exposed to arsenic in their drinking water may be at increased risk for more serious illness or death from the virus. Some Canadians are drinking water that contains inorganic arsenic. Water from privately dug wells is most at risk of containing inorganic arsenic. Preliminary well water analysis typically does not test for arsenic. Researchers at the Geological Survey of Canada have modeled relative variation in natural arsenic hazard potential for the province of New Brunswick. This study has important implications for potable water and health concerns relating to inorganic arsenic. Epidemiological evidence from Chile shows a dose-dependent connection between chronic arsenic exposure and various forms of cancer, in particular when other risk factors, such as cigarette smoking, are present. These effects have been demonstrated at contaminations less than 50 ppb.
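A minimal sketch of how the thresholds quoted above might be applied when screening well-water measurements; the 10 ppb WHO guideline and the roughly 150 ppb level from the Taiwan cancer-mortality study come from the text, while the sample readings below are invented for illustration.

```python
# Screening well-water measurements against the thresholds quoted in the text.
WHO_GUIDELINE_PPB = 10
TAIWAN_CANCER_SIGNAL_PPB = 150

def classify_well(arsenic_ppb: float) -> str:
    if arsenic_ppb <= WHO_GUIDELINE_PPB:
        return "within WHO guideline"
    if arsenic_ppb <= TAIWAN_CANCER_SIGNAL_PPB:
        return "above WHO guideline"
    return "above the level linked to excess cancer mortality in the Taiwan study"

# Hypothetical readings, for illustration only.
for well, ppb in {"well A": 4, "well B": 35, "well C": 220}.items():
    print(well, ppb, "ppb ->", classify_well(ppb))
```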
Arsenic is itself a constituent of tobacco smoke. Analyzing multiple epidemiological studies on inorganic arsenic exposure suggests a small but measurable increase in risk for bladder cancer at 10 ppb. According to Peter Ravenscroft of the Department of Geography at the University of Cambridge, roughly 80 million people worldwide consume between 10 and 50 ppb arsenic in their drinking water. If they all consumed exactly 10 ppb arsenic in their drinking water, the previously cited multiple epidemiological study analysis would predict an additional 2,000 cases of bladder cancer alone. This represents a clear underestimate of the overall impact, since it does not include lung or skin cancer, and explicitly underestimates the exposure. Those exposed to levels of arsenic above the current WHO standard should weigh the costs and benefits of arsenic remediation. Early (1973) evaluations of the processes for removing dissolved arsenic from drinking water demonstrated the efficacy of co-precipitation with either iron or aluminum oxides. In particular, iron as a coagulant was found to remove arsenic with an efficacy exceeding 90%. Several adsorptive media systems have been approved for use at point-of-service in a study funded by the United States Environmental Protection Agency (US EPA) and the National Science Foundation (NSF). A team of European and Indian scientists and engineers have set up six arsenic treatment plants in West Bengal based on in-situ remediation method (SAR Technology). This technology does not use any chemicals and arsenic is left in an insoluble form (+5 state) in the subterranean zone by recharging aerated water into the aquifer and developing an oxidation zone that supports arsenic oxidizing micro-organisms. This process does not produce any waste stream or sludge and is relatively cheap. Another effective and inexpensive method to avoid arsenic contamination is to sink wells 500 feet or deeper to reach purer waters. A recent 2011 study funded by the US National Institute of Environmental Health Sciences' Superfund Research Program shows that deep sediments can remove arsenic and take it out of circulation. In this process, called adsorption, arsenic sticks to the surfaces of deep sediment particles and is naturally removed from the ground water. Magnetic separations of arsenic at very low magnetic field gradients with high-surface-area and monodisperse magnetite (Fe3O4) nanocrystals have been demonstrated in point-of-use water purification. Using the high specific surface area of Fe3O4 nanocrystals, the mass of waste associated with arsenic removal from water has been dramatically reduced. Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of all leading causes of mortality. The literature indicates that arsenic exposure is causative in the pathogenesis of diabetes. Chaff-based filters have recently been shown to reduce the arsenic content of water to 3 µg/L. This may find applications in areas where the potable water is extracted from underground aquifers. San Pedro de Atacama For several centuries, the people of San Pedro de Atacama in Chile have been drinking water that is contaminated with arsenic, and some evidence suggests they have developed some immunity. Hazard maps for contaminated groundwater Around one-third of the world's population drinks water from groundwater resources. 
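The bladder-cancer estimate attributed to Ravenscroft above can be restated as a per-person figure using only the numbers quoted in the text (about 80 million people and roughly 2,000 predicted additional cases):

```python
# Back-of-envelope restatement of the estimate quoted above; only the division is added here.
exposed_population = 80_000_000
predicted_excess_bladder_cancers = 2_000

excess_risk_per_100k = predicted_excess_bladder_cancers / exposed_population * 100_000
print(f"Implied excess risk: ~{excess_risk_per_100k:.1f} cases per 100,000 exposed people")
# ~2.5 per 100,000: small per person, large in absolute numbers, and an underestimate
# for the reasons given in the text (lung and skin cancer excluded, exposure understated).
```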
Of this, about 10 percent, approximately 300 million people, obtain water from groundwater resources that are contaminated with unhealthy levels of arsenic or fluoride. These trace elements derive mainly from minerals and ions in the ground. Redox transformation of arsenic in natural waters Arsenic is unique among the trace metalloids and oxyanion-forming trace metals (e.g. As, Se, Sb, Mo, V, Cr, U, Re). It is sensitive to mobilization at pH values typical of natural waters (pH 6.5–8.5) under both oxidizing and reducing conditions. Arsenic can occur in the environment in several oxidation states (−3, 0, +3 and +5), but in natural waters it is mostly found in inorganic forms as oxyanions of trivalent arsenite [As(III)] or pentavalent arsenate [As(V)]. Organic forms of arsenic are produced by biological activity, mostly in surface waters, but are rarely quantitatively important. Organic arsenic compounds may, however, occur where waters are significantly impacted by industrial pollution. Arsenic may be solubilized by various processes. When pH is high, arsenic may be released from surface binding sites that lose their positive charge. When water level drops and sulfide minerals are exposed to air, arsenic trapped in sulfide minerals can be released into water. When organic carbon is present in water, bacteria are fed by directly reducing As(V) to As(III) or by reducing the element at the binding site, releasing inorganic arsenic. The aquatic transformations of arsenic are affected by pH, reduction-oxidation potential, organic matter concentration and the concentrations and forms of other elements, especially iron and manganese. The main factors are pH and the redox potential. Generally, the main forms of arsenic under oxic conditions are H3AsO4, H2AsO4−, HAsO42−, and AsO43− at pH < 2, 2–7, 7–11 and > 11, respectively. Under reducing conditions, H3AsO3 is predominant at pH 2–9. Oxidation and reduction affect the migration of arsenic in subsurface environments. Arsenite is the most stable soluble form of arsenic in reducing environments and arsenate, which is less mobile than arsenite, is dominant in oxidizing environments at neutral pH. Therefore, arsenic may be more mobile under reducing conditions. The reducing environment is also rich in organic matter which may enhance the solubility of arsenic compounds. As a result, the adsorption of arsenic is reduced and dissolved arsenic accumulates in groundwater. That is why the arsenic content is higher in reducing environments than in oxidizing environments. The presence of sulfur is another factor that affects the transformation of arsenic in natural water. Arsenic can precipitate when metal sulfides form. In this way, arsenic is removed from the water and its mobility decreases. When oxygen is present, bacteria oxidize reduced sulfur to generate energy, potentially releasing bound arsenic. Redox reactions involving Fe also appear to be essential factors in the fate of arsenic in aquatic systems. The reduction of iron oxyhydroxides plays a key role in the release of arsenic to water. So arsenic can be enriched in water with elevated Fe concentrations. Under oxidizing conditions, arsenic can be mobilized from pyrite or iron oxides especially at elevated pH. Under reducing conditions, arsenic can be mobilized by reductive desorption or dissolution when associated with iron oxides. The reductive desorption occurs under two circumstances. One is when arsenate is reduced to arsenite, which adsorbs to iron oxides less strongly.
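The speciation rules stated in this paragraph can be written as a simple lookup. This is only a restatement of the pH boundaries given above, not a thermodynamic (Pourbaix) calculation, and it ignores temperature, ionic strength and mixed redox conditions.

```python
# Dominant inorganic arsenic species as a function of pH, following the boundaries
# stated in the text for oxic vs. reducing waters.

def dominant_species(ph: float, reducing: bool) -> str:
    if reducing:
        # Arsenite: the neutral acid dominates over a wide pH range.
        return "H3AsO3" if 2 <= ph <= 9 else "other As(III) species"
    # Oxic conditions: successive deprotonation of arsenic acid.
    if ph < 2:
        return "H3AsO4"
    if ph <= 7:
        return "H2AsO4-"
    if ph <= 11:
        return "HAsO4^2-"
    return "AsO4^3-"

for ph in (1, 5, 8, 12):
    print(f"pH {ph}: oxic -> {dominant_species(ph, False)}, reducing -> {dominant_species(ph, True)}")
```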
The other results from a change in the charge on the mineral surface, which leads to the desorption of bound arsenic. Some species of bacteria catalyze redox transformations of arsenic. Dissimilatory arsenate-respiring prokaryotes (DARP) speed up the reduction of As(V) to As(III). DARP use As(V) as the electron acceptor of anaerobic respiration and obtain energy to survive. Other organic and inorganic substances can be oxidized in this process. Chemoautotrophic arsenite oxidizers (CAO) and heterotrophic arsenite oxidizers (HAO) convert As(III) into As(V). CAO combine the oxidation of As(III) with the reduction of oxygen or nitrate. They use the obtained energy to fix CO2 into organic carbon. HAO cannot obtain energy from As(III) oxidation. This process may be an arsenic detoxification mechanism for the bacteria. Equilibrium thermodynamic calculations predict that As(V) concentrations should be greater than As(III) concentrations in all but strongly reducing conditions, i.e. where SO42− reduction is occurring. However, abiotic redox reactions of arsenic are slow. Oxidation of As(III) by dissolved O2 is a particularly slow reaction. For example, Johnson and Pilson (1975) gave half-lives for the oxygenation of As(III) in seawater ranging from several months to a year. In other studies, As(V)/As(III) ratios were stable over periods of days or weeks during water sampling when no particular care was taken to prevent oxidation, again suggesting relatively slow oxidation rates. Cherry found from experimental studies that the As(V)/As(III) ratios were stable in anoxic solutions for up to 3 weeks but that gradual changes occurred over longer timescales. Sterile water samples have been observed to be less susceptible to speciation changes than non-sterile samples. Oremland found that the reduction of As(V) to As(III) in Mono Lake was rapidly catalyzed by bacteria with rate constants ranging from 0.02 to 0.3 day−1. Wood preservation in the US As of 2002, US-based industries consumed 19,600 metric tons of arsenic. Ninety percent of this was used for treatment of wood with chromated copper arsenate (CCA). In 2007, 50% of the 5,280 metric tons of consumption was still used for this purpose. In the United States, the voluntary phasing-out of arsenic in production of consumer products and residential and general consumer construction products began on 31 December 2003, and alternative chemicals are now used, such as Alkaline Copper Quaternary, borates, copper azole, cyproconazole, and propiconazole. Although discontinued, this application is also one of the most concerning to the general public. The vast majority of older pressure-treated wood was treated with CCA. CCA lumber is still in widespread use in many countries, and was heavily used during the latter half of the 20th century as a structural and outdoor building material. Although the use of CCA lumber was banned in many areas after studies showed that arsenic could leach out of the wood into the surrounding soil (from playground equipment, for instance), a risk is also presented by the burning of older CCA timber. The direct or indirect ingestion of wood ash from burnt CCA lumber has caused fatalities in animals and serious poisonings in humans; the lethal human dose is approximately 20 grams of ash. Scrap CCA lumber from construction and demolition sites may be inadvertently used in commercial and domestic fires. Protocols for safe disposal of CCA lumber are not consistent throughout the world.
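If the microbial As(V) reduction in Mono Lake is treated as a first-order process, an assumption made here for illustration, the quoted rate constants translate into half-lives as follows:

```python
# Converting the Mono Lake rate constants quoted above (0.02 to 0.3 per day) into
# half-lives, assuming simple first-order kinetics.
import math

for k_per_day in (0.02, 0.3):
    half_life_days = math.log(2) / k_per_day
    print(f"k = {k_per_day} /day  ->  half-life ~ {half_life_days:.1f} days")
# ~34.7 days at the slow end and ~2.3 days at the fast end, far quicker than the
# months-to-a-year timescale cited for abiotic oxidation by dissolved O2.
```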
Widespread landfill disposal of such timber raises some concern, but other studies have shown no arsenic contamination in the groundwater. Mapping of industrial releases in the US One tool that maps the location (and other information) of arsenic releases in the United States is TOXMAP. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) funded by the US Federal Government. With marked-up maps of the United States, TOXMAP enables users to visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and from other authoritative sources. Bioremediation Physical, chemical, and biological methods have been used to remediate arsenic-contaminated water. Bioremediation is said to be cost-effective and environmentally friendly. Bioremediation of ground water contaminated with arsenic aims to convert arsenite, the toxic form of arsenic to humans, to arsenate. Arsenate (+5 oxidation state) is the dominant form of arsenic in surface water, while arsenite (+3 oxidation state) is the dominant form in hypoxic to anoxic environments. Arsenite is more soluble and mobile than arsenate. Many species of bacteria can transform arsenite to arsenate in anoxic conditions by using arsenite as an electron donor. This is a useful method in ground water remediation. Another bioremediation strategy is to use plants that accumulate arsenic in their tissues via phytoremediation, but the disposal of contaminated plant material needs to be considered. Bioremediation requires careful evaluation and design in accordance with existing conditions. Some sites may require the addition of an electron acceptor while others require microbe supplementation (bioaugmentation). Regardless of the method used, only constant monitoring can prevent future contamination. Toxicity and precautions Arsenic and many of its compounds are especially potent poisons. Classification Elemental arsenic and arsenic sulfate and trioxide compounds are classified as "toxic" and "dangerous for the environment" in the European Union under directive 67/548/EEC. The International Agency for Research on Cancer (IARC) recognizes arsenic and inorganic arsenic compounds as group 1 carcinogens, and the EU lists arsenic trioxide, arsenic pentoxide, and arsenate salts as category 1 carcinogens. Arsenic is known to cause arsenicosis when present in drinking water, "the most common species being arsenate [HAsO42−; As(V)] and arsenite [H3AsO3; As(III)]". Legal limits, food, and drink In the United States since 2006, the maximum concentration in drinking water allowed by the Environmental Protection Agency (EPA) is 10 ppb and the FDA set the same standard in 2005 for bottled water. The Department of Environmental Protection for New Jersey set a drinking water limit of 5 ppb in 2006. The IDLH (immediately dangerous to life and health) value for arsenic metal and inorganic arsenic compounds is 5 mg/m3. The Occupational Safety and Health Administration has set the permissible exposure limit (PEL) to a time-weighted average (TWA) of 0.01 mg/m3, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) to a 15-minute constant exposure of 0.002 mg/m3.
The PEL for organic arsenic compounds is a TWA of 0.5 mg/m3. In 2008, based on its ongoing testing of a wide variety of American foods for toxic chemicals, the U.S. Food and Drug Administration set the "level of concern" for inorganic arsenic in apple and pear juices at 23 ppb, based on non-carcinogenic effects, and began blocking importation of products in excess of this level; it also required recalls for non-conforming domestic products. In 2011, the national Dr. Oz television show broadcast a program highlighting tests performed by an independent lab hired by the producers. Though the methodology was disputed (it did not distinguish between organic and inorganic arsenic), the tests showed levels of arsenic up to 36 ppb. In response, the FDA tested the worst brand from the Dr. Oz show and found much lower levels. Ongoing testing found 95% of the apple juice samples were below the level of concern. Later testing by Consumer Reports showed inorganic arsenic at levels slightly above 10 ppb, and the organization urged parents to reduce consumption. In July 2013, on consideration of consumption by children, chronic exposure, and carcinogenic effect, the FDA established an "action level" of 10 ppb for apple juice, the same as the drinking water standard. Concern about arsenic in rice in Bangladesh was raised in 2002, but at the time only Australia had a legal limit for food (one milligram per kilogram). In 2005, concern was raised about people who were eating U.S. rice exceeding WHO standards for personal arsenic intake. In 2011, the People's Republic of China set a food standard of 150 ppb for arsenic. In the United States in 2012, testing by separate groups of researchers at the Children's Environmental Health and Disease Prevention Research Center at Dartmouth College (early in the year, focusing on urinary levels in children) and Consumer Reports (in November) found levels of arsenic in rice that resulted in calls for the FDA to set limits. The FDA released some testing results in September 2012, and as of July 2013 was still collecting data in support of a new potential regulation. It has not recommended any changes in consumer behavior. Consumer Reports recommended: that the EPA and FDA eliminate arsenic-containing fertilizer, drugs, and pesticides in food production; that the FDA establish a legal limit for food; that industry change production practices to lower arsenic levels, especially in food for children; and that consumers test home water supplies, eat a varied diet, and cook rice with excess water, then drain it off (reducing inorganic arsenic by about one third along with a slight reduction in vitamin content). Evidence-based public health advocates also recommend that, given the lack of regulation or labeling for arsenic in the U.S., children should eat no more than 1.5 servings per week of rice and should not drink rice milk as part of their daily diet before age 5. They also offer recommendations for adults and infants on how to limit arsenic exposure from rice, drinking water, and fruit juice. A 2014 World Health Organization advisory conference was scheduled to consider limits of 200–300 ppb for rice. Reducing arsenic content in rice In 2020, scientists assessed multiple preparation procedures of rice for their capacity to reduce arsenic content and preserve nutrients, recommending a procedure involving parboiling and water-absorption.
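The regulatory figures quoted in this section can be collected into a small lookup table with a helper that flags a measurement against the relevant limit; the limits are as given in the text, and the sample checks are invented for illustration.

```python
# Limits quoted in this section, expressed in parts per billion (ppb).
LIMITS_PPB = {
    "US EPA drinking water (2006)": 10,
    "New Jersey drinking water (2006)": 5,
    "FDA apple juice action level (2013)": 10,
    "China rice standard (2011)": 150,
    "Australia food limit (c. 2002)": 1000,  # one milligram per kilogram = 1000 ppb by mass
}

def exceeds(limit_name: str, measured_ppb: float) -> bool:
    """Return True if a measured concentration is above the named limit."""
    return measured_ppb > LIMITS_PPB[limit_name]

# Hypothetical sample checks.
print(exceeds("US EPA drinking water (2006)", 12))   # True
print(exceeds("China rice standard (2011)", 90))      # False
```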
Occupational exposure limits Ecotoxicity Arsenic is bioaccumulative in many organisms, marine species in particular, but it does not appear to biomagnify significantly in food webs. In polluted areas, plant growth may be affected by root uptake of arsenate, which is a phosphate analog and therefore readily transported in plant tissues and cells. In polluted areas, uptake of the more toxic arsenite ion (found more particularly in reducing conditions) is likely in poorly-drained soils. Toxicity in animals Biological mechanism Arsenic's toxicity comes from the affinity of arsenic(III) oxides for thiols. Thiols, in the form of cysteine residues and cofactors such as lipoic acid and coenzyme A, are situated at the active sites of many important enzymes. Arsenic disrupts ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits lipoic acid, which is a cofactor for pyruvate dehydrogenase. By competing with phosphate, arsenate uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration and ATP synthesis. Hydrogen peroxide production is also increased, which, it is speculated, has potential to form reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure. The organ failure is presumed to be from necrotic cell death, not apoptosis, since energy reserves have been too depleted for apoptosis to occur. Exposure risks and remediation Occupational exposure and arsenic poisoning may occur in persons working in industries involving the use of inorganic arsenic and its compounds, such as wood preservation, glass production, nonferrous metal alloys, and electronic semiconductor manufacturing. Inorganic arsenic is also found in coke oven emissions associated with the smelter industry. The conversion between As(III) and As(V) is a large factor in arsenic environmental contamination. According to Croal, Gralnick, Malasarn and Newman, "[the] understanding [of] what stimulates As(III) oxidation and/or limits As(V) reduction is relevant for bioremediation of contaminated sites" (Croal). The study of chemolithoautotrophic As(III) oxidizers and the heterotrophic As(V) reducers can help the understanding of the oxidation and/or reduction of arsenic. Treatment Treatment of chronic arsenic poisoning is possible. British anti-lewisite (dimercaprol) is prescribed in doses of 5 mg/kg up to 300 mg every 4 hours for the first day, then every 6 hours for the second day, and finally every 8 hours for 8 additional days. However, the US Agency for Toxic Substances and Disease Registry (ATSDR) states that the long-term effects of arsenic exposure cannot be predicted. Blood, urine, hair, and nails may be tested for arsenic; however, these tests cannot foresee possible health outcomes from the exposure. Long-term exposure and consequent excretion through urine has been linked to bladder and kidney cancer in addition to cancer of the liver, prostate, skin, lungs, and nasal cavity. See also Aqua Tofana Arsenic and Old Lace Arsenic biochemistry Arsenic compounds Arsenic poisoning Arsenic toxicity Arsenic trioxide Fowler's solution GFAJ-1 Grainger challenge Hypothetical types of biochemistry Organoarsenic chemistry Toxic heavy metal White arsenic References Bibliography Further reading External links Arsenic Cancer Causing Substances, U.S. National Cancer Institute.
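The dimercaprol regimen quoted above (5 mg/kg up to 300 mg per dose, every 4 hours on day 1, every 6 hours on day 2, then every 8 hours for 8 more days) amounts to a fixed schedule whose arithmetic can be restated as a sketch; the code below merely tabulates the figures given in the text and is not a dosing tool.

```python
# Restating the schedule described in the text: per-dose amount and dose count over 10 days.
def bal_schedule(body_weight_kg: float):
    per_dose_mg = min(5 * body_weight_kg, 300)  # 5 mg/kg, capped at 300 mg per the text
    plan = [("day 1", 24 // 4), ("day 2", 24 // 6)] + [(f"day {d}", 24 // 8) for d in range(3, 11)]
    return per_dose_mg, plan

dose, plan = bal_schedule(70)  # 70 kg body weight assumed for illustration
total_doses = sum(n for _, n in plan)
print(f"per dose: {dose} mg, doses: {total_doses}, total: {dose * total_doses / 1000:.1f} g over 10 days")
```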
CTD's Arsenic page and CTD's Arsenicals page from the Comparative Toxicogenomics Database Arsenic intoxication: general aspects and chelating agents, by Geir Bjørklund, Massimiliano Peana et al. Archives of Toxicology (2020) 94:1879–1897. A Small Dose of Toxicology Arsenic in groundwater Book on arsenic in groundwater by IAH's Netherlands Chapter and the Netherlands Hydrological Society Contaminant Focus: Arsenic by the EPA. Environmental Health Criteria for Arsenic and Arsenic Compounds, 2001 by the WHO. National Institute for Occupational Safety and Health – Arsenic Page Arsenic at The Periodic Table of Videos (University of Nottingham)
Arsenic
Antimony is a chemical element with the symbol Sb (from Latin stibium) and atomic number 51. A lustrous gray metalloid, it is found in nature mainly as the sulfide mineral stibnite (Sb2S3). Antimony compounds have been known since ancient times and were powdered for use as medicine and cosmetics, often known by the Arabic name kohl. The earliest known description of the metal in the West was written in 1540 by Vannoccio Biringuccio. China is the largest producer of antimony and its compounds, with most production coming from the Xikuangshan Mine in Hunan. The industrial methods for refining antimony from stibnite are roasting followed by reduction with carbon, or direct reduction of stibnite with iron. The largest applications for metallic antimony are in alloys with lead and tin, which have improved properties for solders, bullets, and plain bearings. It improves the rigidity of lead-alloy plates in lead–acid batteries. Antimony trioxide is a prominent additive for halogen-containing flame retardants. Antimony is used as a dopant in semiconductor devices. Characteristics Properties Antimony is a member of group 15 of the periodic table, one of the elements called pnictogens, and has an electronegativity of 2.05. In accordance with periodic trends, it is more electronegative than tin or bismuth, and less electronegative than tellurium or arsenic. Antimony is stable in air at room temperature, but reacts with oxygen if heated to produce antimony trioxide, Sb2O3. Antimony is a silvery, lustrous gray metalloid with a Mohs scale hardness of 3, which is too soft to make hard objects. Coins of antimony were issued in China's Guizhou province in 1931; durability was poor, and minting was soon discontinued. Antimony is resistant to attack by acids. Four allotropes of antimony are known: a stable metallic form, and three metastable forms (explosive, black, and yellow). Elemental antimony is a brittle, silver-white, shiny metalloid. When slowly cooled, molten antimony crystallizes into a trigonal cell, isomorphic with the gray allotrope of arsenic. A rare explosive form of antimony can be formed from the electrolysis of antimony trichloride. When scratched with a sharp implement, an exothermic reaction occurs and white fumes are given off as metallic antimony forms; when rubbed with a pestle in a mortar, a strong detonation occurs. Black antimony is formed upon rapid cooling of antimony vapor. It has the same crystal structure as red phosphorus and black arsenic; it oxidizes in air and may ignite spontaneously. At 100 °C, it gradually transforms into the stable form. The yellow allotrope of antimony is the most unstable; it has been generated only by oxidation of stibine (SbH3) at −90 °C. Above this temperature and in ambient light, this metastable allotrope transforms into the more stable black allotrope. Elemental antimony adopts a layered structure (space group R-3m, No. 166) whose layers consist of fused, ruffled, six-membered rings. The nearest and next-nearest neighbors form an irregular octahedral complex, with the three atoms in each double layer slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 6.697 g/cm3, but the weak bonding between the layers leads to the low hardness and brittleness of antimony. Isotopes Antimony has two stable isotopes: 121Sb with a natural abundance of 57.36% and 123Sb with a natural abundance of 42.64%. It also has 35 radioisotopes, of which the longest-lived is 125Sb with a half-life of 2.75 years.
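The two stable isotopes and abundances quoted above are enough to reproduce antimony's standard atomic weight. The isotopic masses used below are standard reference values that are not stated in the article, so this is a consistency check rather than a derivation from the text alone.

```python
# Weighted mean of the stable isotopes; abundances from the text, masses assumed from
# standard reference tables (in unified atomic mass units).
isotopes = {
    "121Sb": (120.9038, 0.5736),   # (mass in u, natural abundance)
    "123Sb": (122.9042, 0.4264),
}

atomic_weight = sum(mass * abundance for mass, abundance in isotopes.values())
print(f"Weighted mean: {atomic_weight:.2f} u")   # ~121.76 u, matching the accepted value
```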
In addition, 29 metastable states have been characterized. The most stable of these is 120m1Sb with a half-life of 5.76 days. Isotopes that are lighter than the stable 123Sb tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. Occurrence The abundance of antimony in the Earth's crust is estimated to be 0.2 to 0.5 parts per million, comparable to thallium at 0.5 parts per million and silver at 0.07 ppm. Even though this element is not abundant, it is found in more than 100 mineral species. Antimony is sometimes found natively (e.g. on Antimony Peak), but more frequently it is found in the sulfide stibnite (Sb2S3) which is the predominant ore mineral. Compounds Antimony compounds are often classified according to their oxidation state: Sb(III) and Sb(V). The +5 oxidation state is more stable. Oxides and hydroxides Antimony trioxide is formed when antimony is burnt in air. In the gas phase, the molecule of the compound is Sb4O6, but it polymerizes upon condensing. Antimony pentoxide (Sb2O5) can be formed only by oxidation with concentrated nitric acid. Antimony also forms a mixed-valence oxide, antimony tetroxide (Sb2O4), which features both Sb(III) and Sb(V). Unlike oxides of phosphorus and arsenic, these oxides are amphoteric, do not form well-defined oxoacids, and react with acids to form antimony salts. Antimonous acid is unknown, but the conjugate base sodium antimonite forms upon fusing sodium oxide and Sb2O3. Transition metal antimonites are also known. Antimonic acid exists only as the hydrate HSb(OH)6, forming salts of the antimonate anion Sb(OH)6−. When a solution containing this anion is dehydrated, the precipitate contains mixed oxides. Many antimony ores are sulfides, including stibnite (Sb2S3), pyrargyrite (Ag3SbS3), zinkenite, jamesonite, and boulangerite. Antimony pentasulfide is non-stoichiometric and features antimony in the +3 oxidation state and S–S bonds. Several thioantimonides are also known. Halides Antimony forms two series of halides: SbX3 and SbX5. The trihalides SbF3, SbCl3, SbBr3, and SbI3 are all molecular compounds having trigonal pyramidal molecular geometry. The trifluoride SbF3 is prepared by the reaction of Sb2O3 with HF: Sb2O3 + 6 HF → 2 SbF3 + 3 H2O. It is Lewis acidic and readily accepts fluoride ions to form the complex anions SbF4− and SbF52−. Molten SbF3 is a weak electrical conductor. The trichloride SbCl3 is prepared by dissolving Sb2S3 in hydrochloric acid: Sb2S3 + 6 HCl → 2 SbCl3 + 3 H2S. The pentahalides SbF5 and SbCl5 have trigonal bipyramidal molecular geometry in the gas phase, but in the liquid phase, SbF5 is polymeric, whereas SbCl5 is monomeric. SbF5 is a powerful Lewis acid used to make the superacid fluoroantimonic acid ("H2SbF7"). Oxyhalides are more common for antimony than for arsenic and phosphorus. Antimony trioxide dissolves in concentrated acid to form oxoantimonyl compounds such as SbOCl. Antimonides, hydrides, and organoantimony compounds Compounds in this class generally are described as derivatives of Sb3−. Antimony forms antimonides with metals, such as indium antimonide (InSb) and silver antimonide (Ag3Sb). The alkali metal and zinc antimonides, such as Na3Sb and Zn3Sb2, are more reactive. Treating these antimonides with acid produces the highly unstable gas stibine, SbH3: Sb3− + 3 H+ → SbH3. Stibine can also be produced by treating Sb3+ salts with hydride reagents such as sodium borohydride. Stibine decomposes spontaneously at room temperature. Because stibine has a positive heat of formation, it is thermodynamically unstable and thus antimony does not react with hydrogen directly.
Organoantimony compounds are typically prepared by alkylation of antimony halides with Grignard reagents. A large variety of compounds are known with both Sb(III) and Sb(V) centers, including mixed chloro-organic derivatives, anions, and cations. Examples include Sb(C6H5)3 (triphenylstibine), Sb2(C6H5)4 (with an Sb-Sb bond), and cyclic [Sb(C6H5)]n. Pentacoordinated organoantimony compounds are common, examples being Sb(C6H5)5 and several related halides. History Antimony(III) sulfide, Sb2S3, was recognized in predynastic Egypt as an eye cosmetic (kohl) as early as about 3100 BC, when the cosmetic palette was invented. An artifact, said to be part of a vase, made of antimony dating to about 3000 BC was found at Telloh, Chaldea (part of present-day Iraq), and a copper object plated with antimony dating between 2500 BC and 2200 BC has been found in Egypt. Austen, at a lecture by Herbert Gladstone in 1892, commented that "we only know of antimony at the present day as a highly brittle and crystalline metal, which could hardly be fashioned into a useful vase, and therefore this remarkable 'find' (artifact mentioned above) must represent the lost art of rendering antimony malleable." The British archaeologist Roger Moorey was unconvinced the artifact was indeed a vase, mentioning that Selimkhanov, after his analysis of the Tello object (published in 1975), "attempted to relate the metal to Transcaucasian natural antimony" (i.e. native metal) and that "the antimony objects from Transcaucasia are all small personal ornaments." This weakens the evidence for a lost art "of rendering antimony malleable." The Roman scholar Pliny the Elder described several ways of preparing antimony sulfide for medical purposes in his treatise Natural History, around 77 AD. Pliny the Elder also made a distinction between "male" and "female" forms of antimony; the male form is probably the sulfide, while the female form, which is superior, heavier, and less friable, has been suspected to be native metallic antimony. The Greek naturalist Pedanius Dioscorides mentioned that antimony sulfide could be roasted by heating by a current of air. It is thought that this produced metallic antimony. The intentional isolation of antimony is described by Jabir ibn Hayyan before 815 AD. A description of a procedure for isolating antimony is later given in the 1540 book De la pirotechnia by Vannoccio Biringuccio, predating the more famous 1556 book by Agricola, De re metallica. In this context Agricola has been often incorrectly credited with the discovery of metallic antimony. The book Currus Triumphalis Antimonii (The Triumphal Chariot of Antimony), describing the preparation of metallic antimony, was published in Germany in 1604. It was purported to be written by a Benedictine monk, writing under the name Basilius Valentinus in the 15th century; if it were authentic, which it is not, it would predate Biringuccio. The metal antimony was known to German chemist Andreas Libavius in 1615 who obtained it by adding iron to a molten mixture of antimony sulfide, salt and potassium tartrate. This procedure produced antimony with a crystalline or starred surface. With the advent of challenges to phlogiston theory, it was recognized that antimony is an element forming sulfides, oxides, and other compounds, as do other metals. 
The first discovery of naturally occurring pure antimony in the Earth's crust was described by the Swedish scientist and local mine district engineer Anton von Swab in 1783; the type-sample was collected from the Sala Silver Mine in the Bergslagen mining district of Sala, Västmanland, Sweden. Etymology The medieval Latin form, from which the modern languages and late Byzantine Greek take their names for antimony, is antimonium. The origin of this is uncertain; all suggestions have some difficulty either of form or interpretation. The popular etymology, from ἀντίμοναχός anti-monachos or French antimoine, still has adherents; this would mean "monk-killer", and is explained by many early alchemists being monks, and antimony being poisonous. However, the low toxicity of antimony (see below) makes this unlikely. Another popular etymology is the hypothetical Greek word ἀντίμόνος antimonos, "against aloneness", explained as "not found as metal", or "not found unalloyed". Lippmann conjectured a hypothetical Greek word ανθήμόνιον anthemonion, which would mean "floret", and cites several examples of related Greek words (but not that one) which describe chemical or biological efflorescence. The early uses of antimonium include the translations, in 1050–1100, by Constantine the African of Arabic medical treatises. Several authorities believe antimonium is a scribal corruption of some Arabic form; Meyerhof derives it from ithmid; other possibilities include athimar, the Arabic name of the metalloid, and a hypothetical as-stimmi, derived from or parallel to the Greek. The standard chemical symbol for antimony (Sb) is credited to Jöns Jakob Berzelius, who derived the abbreviation from stibium. The ancient words for antimony mostly have, as their chief meaning, kohl, the sulfide of antimony. The Egyptians called antimony mśdmt; in hieroglyphs, the vowels are uncertain, but the Coptic form of the word is ⲥⲧⲏⲙ (stēm). The Greek word, στίμμι (stimmi), is used by Attic tragic poets of the 5th century BC, and is possibly a loan word from Arabic or from Egyptian stm. Later Greeks also used στἰβι stibi, as did Celsus and Pliny, writing in Latin, in the first century AD. Pliny also gives the names stimi, larbaris, alabaster, and the "very common" platyophthalmos, "wide-eye" (from the effect of the cosmetic). Later Latin authors adapted the word to Latin as stibium. The Arabic word for the substance, as opposed to the cosmetic, can appear as إثمد ithmid, athmoud, othmod, or uthmod. Littré suggests the first form, which is the earliest, derives from stimmida, an accusative for stimmi. Production Process The extraction of antimony from ores depends on the quality and composition of the ore. Most antimony is mined as the sulfide; lower-grade ores are concentrated by froth flotation, while higher-grade ores are heated to 500–600 °C, the temperature at which stibnite melts and separates from the gangue minerals. Antimony can be isolated from the crude antimony sulfide by reduction with scrap iron: Sb2S3 + 3 Fe → 2 Sb + 3 FeS. The sulfide is converted to an oxide; the product is then roasted, sometimes for the purpose of vaporizing the volatile antimony(III) oxide, which is recovered. This material is often used directly for the main applications, impurities being arsenic and sulfide. Antimony is isolated from the oxide by a carbothermal reduction: 2 Sb2O3 + 3 C → 4 Sb + 3 CO2. The lower-grade ores are reduced in blast furnaces while the higher-grade ores are reduced in reverberatory furnaces.
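Taking the carbothermal reduction shown above at face value, standard molar masses give the ideal carbon consumption and antimony yield per tonne of oxide; real furnace practice deviates from this ideal.

```python
# Stoichiometry sketch for 2 Sb2O3 + 3 C -> 4 Sb + 3 CO2, using standard molar masses (g/mol)
# and assuming complete, ideal reaction.
M_SB, M_O, M_C = 121.76, 16.00, 12.011
M_SB2O3 = 2 * M_SB + 3 * M_O                       # ~291.5 g/mol

oxide_tonnes = 1.0
mol_oxide = oxide_tonnes * 1e6 / M_SB2O3           # grams -> moles
carbon_tonnes = mol_oxide * 1.5 * M_C / 1e6        # 3 C per 2 Sb2O3
antimony_tonnes = mol_oxide * 2 * M_SB / 1e6       # 4 Sb per 2 Sb2O3

print(f"per tonne Sb2O3: ~{carbon_tonnes:.3f} t carbon, ~{antimony_tonnes:.3f} t antimony")
```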
Top producers and production volumes The British Geological Survey (BGS) reported that in 2005 China was the top producer of antimony with approximately 84% of the world share, followed at a distance by South Africa, Bolivia and Tajikistan. Xikuangshan Mine in Hunan province has the largest deposits in China with an estimated deposit of 2.1 million metric tons. In 2016, according to the US Geological Survey, China accounted for 76.9% of total antimony production, followed in second place by Russia with 6.9% and Tajikistan with 6.2%. Chinese production of antimony is expected to decline in the future as mines and smelters are closed down by the government as part of pollution control. Especially due to an environmental protection law having gone into effect in January 2015 and revised "Emission Standards of Pollutants for Stanum, Antimony, and Mercury" having gone into effect, hurdles for economic production are higher. According to the National Bureau of Statistics in China, by September 2015 50% of antimony production capacity in the Hunan province (the province with biggest antimony reserves in China) had not been used. Reported production of antimony in China has fallen and is unlikely to increase in the coming years, according to the Roskill report. No significant antimony deposits in China have been developed for about ten years, and the remaining economic reserves are being rapidly depleted. The world's largest antimony producers, according to Roskill, are listed below: Reserves Supply risk For antimony-importing regions such as Europe and the U.S., antimony is considered to be a critical mineral for industrial manufacturing that is at risk of supply chain disruption. With global production coming mainly from China (74%), Tajikistan(8%), and Russia(4%), these sources are critical to supply. European Union: Antimony is considered a critical raw material for defense, automotive, construction and textiles. The E.U. sources are 100% imported, coming mainly from Turkey (62%), Bolivia (20%) and Guatemala (7%). United Kingdom: The British Geological Survey's 2015 risk list ranks antimony second highest (after rare earth elements) on the relative supply risk index. United States: Antimony is a mineral commodity considered critical to the economic and national security. In 2021, no antimony was mined in the U.S. Applications About 60% of antimony is consumed in flame retardants, and 20% is used in alloys for batteries, plain bearings, and solders. Flame retardants Antimony is mainly used as the trioxide for flame-proofing compounds, always in combination with halogenated flame retardants except in halogen-containing polymers. The flame retarding effect of antimony trioxide is produced by the formation of halogenated antimony compounds, which react with hydrogen atoms, and probably also with oxygen atoms and OH radicals, thus inhibiting fire. Markets for these flame-retardants include children's clothing, toys, aircraft, and automobile seat covers. They are also added to polyester resins in fiberglass composites for such items as light aircraft engine covers. The resin will burn in the presence of an externally generated flame, but will extinguish when the external flame is removed. Alloys Antimony forms a highly useful alloy with lead, increasing its hardness and mechanical strength. For most applications involving lead, varying amounts of antimony are used as alloying metal. In lead–acid batteries, this addition improves plate strength and charging characteristics. 
For sailboats, lead keels are used to provide righting moment, ranging from 600 lbs to over 200 tons for the largest sailing superyachts; to improve hardness and tensile strength of the lead keel, antimony is mixed with lead between 2% and 5% by volume. Antimony is used in antifriction alloys (such as Babbitt metal), in bullets and lead shot, electrical cable sheathing, type metal (for example, for linotype printing machines), solder (some "lead-free" solders contain 5% Sb), in pewter, and in hardening alloys with low tin content in the manufacturing of organ pipes. Other applications Three other applications consume nearly all the rest of the world's supply. One application is as a stabilizer and catalyst for the production of polyethylene terephthalate. Another is as a fining agent to remove microscopic bubbles in glass, mostly for TV screens - antimony ions interact with oxygen, suppressing the tendency of the latter to form bubbles. The third application is pigments. In 1990s antimony was increasingly being used in semiconductors as a dopant in n-type silicon wafers for diodes, infrared detectors, and Hall-effect devices. In the 1950s, the emitters and collectors of n-p-n alloy junction transistors were doped with tiny beads of a lead-antimony alloy. Indium antimonide is used as a material for mid-infrared detectors. Biology and medicine have few uses for antimony. Treatments containing antimony, known as antimonials, are used as emetics. Antimony compounds are used as antiprotozoan drugs. Potassium antimonyl tartrate, or tartar emetic, was once used as an anti-schistosomal drug from 1919 on. It was subsequently replaced by praziquantel. Antimony and its compounds are used in several veterinary preparations, such as anthiomaline and lithium antimony thiomalate, as a skin conditioner in ruminants. Antimony has a nourishing or conditioning effect on keratinized tissues in animals. Antimony-based drugs, such as meglumine antimoniate, are also considered the drugs of choice for treatment of leishmaniasis in domestic animals. Besides having low therapeutic indices, the drugs have minimal penetration of the bone marrow, where some of the Leishmania amastigotes reside, and curing the disease – especially the visceral form – is very difficult. Elemental antimony as an antimony pill was once used as a medicine. It could be reused by others after ingestion and elimination. Antimony(III) sulfide is used in the heads of some safety matches. Antimony sulfides help to stabilize the friction coefficient in automotive brake pad materials. Antimony is used in bullets, bullet tracers, paint, glass art, and as an opacifier in enamel. Antimony-124 is used together with beryllium in neutron sources; the gamma rays emitted by antimony-124 initiate the photodisintegration of beryllium. The emitted neutrons have an average energy of 24 keV. Natural antimony is used in startup neutron sources. Historically, the powder derived from crushed antimony (kohl) has been applied to the eyes with a metal rod and with one's spittle, thought by the ancients to aid in curing eye infections. The practice is still seen in Yemen and in other Muslim countries. Precautions The effects of antimony and its compounds on human and environmental health differ widely. Elemental antimony metal does not affect human and environmental health. Inhalation of antimony trioxide (and similar poorly soluble Sb(III) dust particles such as antimony dust) is considered harmful and suspected of causing cancer. 
However, these effects are only observed with female rats and after long-term exposure to high dust concentrations. The effects are hypothesized to be attributed to inhalation of poorly soluble Sb particles leading to impaired lung clearance, lung overload, inflammation and ultimately tumour formation, not to exposure to antimony ions (OECD, 2008). Antimony chlorides are corrosive to skin. The effects of antimony are not comparable to those of arsenic; this might be caused by the significant differences of uptake, metabolism, and excretion between arsenic and antimony. For oral absorption, ICRP (1994) has recommended values of 10% for tartar emetic and 1% for all other antimony compounds. Dermal absorption for metals is estimated to be at most 1% (HERAG, 2007). Inhalation absorption of antimony trioxide and other poorly soluble Sb(III) substances (such as antimony dust) is estimated at 6.8% (OECD, 2008), whereas a value <1% is derived for Sb(V) substances. Antimony(V) is not quantitatively reduced to antimony(III) in the cell, and both species exist simultaneously. Antimony is mainly excreted from the human body via urine. Antimony and its compounds do not cause acute human health effects, with the exception of antimony potassium tartrate ("tartar emetic"), a prodrug that is intentionally used to treat leishmaniasis patients. Prolonged skin contact with antimony dust may cause dermatitis. However, it was agreed at the European Union level that the skin rashes observed are not substance-specific, but most probably due to a physical blocking of sweat ducts (ECHA/PR/09/09, Helsinki, 6 July 2009). Antimony dust may also be explosive when dispersed in the air; when in a bulk solid it is not combustible. Antimony is incompatible with strong acids, halogenated acids, and oxidizers; when exposed to newly formed hydrogen it may form stibine (SbH3). The 8-hour time-weighted average (TWA) is set at 0.5 mg/m3 by the American Conference of Governmental Industrial Hygienists and by the Occupational Safety and Health Administration (OSHA) as a legal permissible exposure limit (PEL) in the workplace. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.5 mg/m3 as an 8-hour TWA. Antimony compounds are used as catalysts for polyethylene terephthalate (PET) production. Some studies report minor antimony leaching from PET bottles into liquids, but levels are below drinking water guidelines. Antimony concentrations in fruit juice concentrates were somewhat higher (up to 44.7 µg/L of antimony), but juices do not fall under the drinking water regulations. The drinking water guidelines are: World Health Organization: 20 µg/L Japan: 15 µg/L United States Environmental Protection Agency, Health Canada and the Ontario Ministry of Environment: 6 µg/L EU and German Federal Ministry of Environment: 5 µg/L The tolerable daily intake (TDI) proposed by WHO is 6 µg antimony per kilogram of body weight. The immediately dangerous to life or health (IDLH) value for antimony is 50 mg/m3. Toxicity Certain compounds of antimony appear to be toxic, particularly antimony trioxide and antimony potassium tartrate. Effects may be similar to arsenic poisoning. Occupational exposure may cause respiratory irritation, pneumoconiosis, antimony spots on the skin, gastrointestinal symptoms, and cardiac arrhythmias. In addition, antimony trioxide is potentially carcinogenic to humans. 
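Relating the WHO tolerable daily intake quoted above to the drinking-water guideline values listed in the text gives a sense of how conservative the guidelines are; the 60 kg body weight below is an assumption for illustration.

```python
# TDI of 6 ug antimony per kg body weight and the drinking-water guidelines, as given in the text.
TDI_UG_PER_KG = 6
GUIDELINES_UG_PER_L = {
    "WHO": 20,
    "Japan": 15,
    "US EPA / Health Canada / Ontario": 6,
    "EU / Germany": 5,
}

body_weight_kg = 60                                   # assumed adult body weight
daily_allowance_ug = TDI_UG_PER_KG * body_weight_kg    # 360 ug/day

for source, limit in GUIDELINES_UG_PER_L.items():
    litres_to_reach_tdi = daily_allowance_ug / limit
    print(f"{source}: water at {limit} ug/L reaches the TDI only after ~{litres_to_reach_tdi:.0f} L/day")
```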
Adverse health effects have been observed in humans and animals following inhalation, oral, or dermal exposure to antimony and antimony compounds. Antimony toxicity typically occurs either due to occupational exposure, during therapy or from accidental ingestion. It is unclear if antimony can enter the body through the skin. The presence of low levels of antimony in saliva may also be associated with dental decay. See also Phase change memory Notes References Bibliography Edmund Oscar von Lippmann (1919) Entstehung und Ausbreitung der Alchemie, teil 1. Berlin: Julius Springer (in German). Public Health Statement for Antimony External links International Antimony Association vzw (i2a) Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Antimony Antimony at The Periodic Table of Videos (University of Nottingham) CDC – NIOSH Pocket Guide to Chemical Hazards – Antimony Antimony Mineral data and specimen images
Antimony
Actinium is a chemical element with the symbol Ac and atomic number 89. It was first isolated by Friedrich Oskar Giesel in 1902, who gave it the name emanium; the element received its present name because it was wrongly identified with a substance André-Louis Debierne had found in 1899 and called actinium. Actinium gave its name to the actinide series, a group of 15 similar elements between actinium and lawrencium in the periodic table. Together with polonium, radium, and radon, actinium was one of the first non-primordial radioactive elements to be isolated. A soft, silvery-white radioactive metal, actinium reacts rapidly with oxygen and moisture in air, forming a white coating of actinium oxide that prevents further oxidation. As with most lanthanides and many actinides, actinium assumes oxidation state +3 in nearly all its chemical compounds. Actinium is found only in traces in uranium and thorium ores as the isotope 227Ac, which decays with a half-life of 21.772 years, predominantly emitting beta and sometimes alpha particles, and 228Ac, which is beta active with a half-life of 6.15 hours. One tonne of natural uranium in ore contains about 0.2 milligrams of actinium-227, and one tonne of thorium contains about 5 nanograms of actinium-228. The close similarity of physical and chemical properties of actinium and lanthanum makes separation of actinium from the ore impractical. Instead, the element is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor. Owing to its scarcity, high price and radioactivity, actinium has no significant industrial use. Its current applications include a neutron source and an agent for radiation therapy. History André-Louis Debierne, a French chemist, announced the discovery of a new element in 1899. He separated it from pitchblende residues left by Marie and Pierre Curie after they had extracted radium. In 1899, Debierne described the substance as similar to titanium and (in 1900) as similar to thorium. Friedrich Oskar Giesel found in 1902 a substance similar to lanthanum and called it "emanium" in 1904. After a comparison of the substances' half-lives determined by Debierne, Harriet Brooks in 1904, and Otto Hahn and Otto Sackur in 1905, Debierne's chosen name for the new element was retained because it had seniority, despite the contradictory chemical properties he claimed for the element at different times. Articles published in the 1970s and later suggest that Debierne's results published in 1904 conflict with those reported in 1899 and 1900. Furthermore, the now-known chemistry of actinium precludes its presence as anything other than a minor constituent of Debierne's 1899 and 1900 results; in fact, the chemical properties he reported make it likely that he had, instead, accidentally identified protactinium, which would not be discovered for another fourteen years, only to have it disappear due to its hydrolysis and adsorption onto his laboratory equipment. This has led some authors to advocate that Giesel alone should be credited with the discovery. A less confrontational vision of scientific discovery is proposed by Adloff. He suggests that hindsight criticism of the early publications should be mitigated by the then nascent state of radiochemistry: highlighting the prudence of Debierne's claims in the original papers, he notes that nobody can contend that Debierne's substance did not contain actinium. Debierne, who is now considered by the vast majority of historians to be the discoverer, lost interest in the element and left the topic.
Giesel, on the other hand, can rightfully be credited with the first preparation of radiochemically pure actinium and with the identification of its atomic number 89. The name actinium originates from the Ancient Greek aktis, aktinos (ακτίς, ακτίνος), meaning beam or ray. Its symbol Ac is also used in abbreviations of other compounds that have nothing to do with actinium, such as acetyl, acetate and sometimes acetaldehyde. Properties Actinium is a soft, silvery-white, radioactive, metallic element. Its estimated shear modulus is similar to that of lead. Owing to its strong radioactivity, actinium glows in the dark with a pale blue light, which originates from the surrounding air ionized by the emitted energetic particles. Actinium has similar chemical properties to lanthanum and other lanthanides, and therefore these elements are difficult to separate when extracting from uranium ores. Solvent extraction and ion chromatography are commonly used for the separation. The first element of the actinides, actinium gave the group its name, much as lanthanum had done for the lanthanides. The group of elements is more diverse than the lanthanides and therefore it was not until 1945 that the most significant change to Dmitri Mendeleev's periodic table since the recognition of the lanthanides, the introduction of the actinides, was generally accepted after Glenn T. Seaborg's research on the transuranium elements (although it had been proposed as early as 1892 by British chemist Henry Bassett). Actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that impedes further oxidation. As with most lanthanides and actinides, actinium exists in the oxidation state +3, and the Ac3+ ions are colorless in solutions. The oxidation state +3 originates from the [Rn]6d17s2 electronic configuration of actinium, with three valence electrons that are easily donated to give the stable closed-shell structure of the noble gas radon. The rare oxidation state +2 is only known for actinium dihydride (AcH2); even this may in reality be an electride compound like its lighter congener LaH2 and thus have actinium(III). Ac3+ is the largest of all known tripositive ions and its first coordination sphere contains approximately 10.9 ± 0.5 water molecules. Chemical compounds Due to actinium's intense radioactivity, only a limited number of actinium compounds are known. These include: AcF3, AcCl3, AcBr3, AcOF, AcOCl, AcOBr, Ac2S3, Ac2O3, AcPO4 and Ac(NO3)3. Except for AcPO4, they are all similar to the corresponding lanthanum compounds. They all contain actinium in the oxidation state +3. In particular, the lattice constants of the analogous lanthanum and actinium compounds differ by only a few percent. Here a, b and c are lattice constants, No is space group number and Z is the number of formula units per unit cell. Density was not measured directly but calculated from the lattice parameters. Oxides Actinium oxide (Ac2O3) can be obtained by heating the hydroxide at 500 °C or the oxalate at 1100 °C, in vacuum. Its crystal lattice is isotypic with the oxides of most trivalent rare-earth metals. Halides Actinium trifluoride can be produced either in solution or in solid reaction. The former reaction is carried out at room temperature, by adding hydrofluoric acid to a solution containing actinium ions. In the latter method, actinium metal is treated with hydrogen fluoride vapors at 700 °C in an all-platinum setup. 
Treating actinium trifluoride with ammonium hydroxide at 900–1000 °C yields the oxyfluoride AcOF. Whereas lanthanum oxyfluoride can be easily obtained by burning lanthanum trifluoride in air at 800 °C for an hour, similar treatment of actinium trifluoride yields no AcOF and only results in melting of the initial product. AcF3 + 2 NH3 + H2O → AcOF + 2 NH4F Actinium trichloride is obtained by reacting actinium hydroxide or oxalate with carbon tetrachloride vapors at temperatures above 960 °C. Similar to the oxyfluoride, actinium oxychloride can be prepared by hydrolyzing actinium trichloride with ammonium hydroxide at 1000 °C. However, in contrast to the oxyfluoride, the oxychloride could well be synthesized by igniting a solution of actinium trichloride in hydrochloric acid with ammonia. Reaction of aluminium bromide and actinium oxide yields actinium tribromide: Ac2O3 + 2 AlBr3 → 2 AcBr3 + Al2O3 and treating it with ammonium hydroxide at 500 °C results in the oxybromide AcOBr. Other compounds Actinium hydride was obtained by reduction of actinium trichloride with potassium at 300 °C, and its structure was deduced by analogy with the corresponding LaH2 hydride. The source of hydrogen in the reaction was uncertain. Mixing monosodium phosphate (NaH2PO4) with a solution of actinium in hydrochloric acid yields white-colored actinium phosphate hemihydrate (AcPO4·0.5H2O), and heating actinium oxalate with hydrogen sulfide vapors at 1400 °C for a few minutes results in a black actinium sulfide, Ac2S3. It may possibly also be produced by treating actinium oxide with a mixture of hydrogen sulfide and carbon disulfide at 1000 °C. Isotopes Naturally occurring actinium is composed of two radioactive isotopes: 227Ac (from the radioactive family of 235U) and 228Ac (a granddaughter of 232Th). 227Ac decays mainly as a beta emitter with a very small decay energy, but in 1.38% of cases it emits an alpha particle, so it can readily be identified through alpha spectrometry. Thirty-six radioisotopes have been identified, the most stable being 227Ac with a half-life of 21.772 years, 225Ac with a half-life of 10.0 days and 226Ac with a half-life of 29.37 hours. All remaining radioactive isotopes have half-lives that are less than 10 hours, and the majority of them have half-lives shorter than one minute. The shortest-lived known isotope of actinium is 217Ac (half-life of 69 nanoseconds), which decays through alpha decay. Actinium also has two known meta states. The most significant isotopes for chemistry are 225Ac, 227Ac, and 228Ac. Purified 227Ac comes into equilibrium with its decay products after about half a year. It decays according to its 21.772-year half-life, emitting mostly beta (98.62%) and some alpha particles (1.38%); the successive decay products are part of the actinium series. Owing to the low available amounts, the low energy of its beta particles (maximum 44.8 keV) and the low intensity of its alpha radiation, 227Ac is difficult to detect directly by its emission and is therefore traced via its decay products. The isotopes of actinium range in atomic weight from 205 u (205Ac) to 236 u (236Ac). Occurrence and synthesis Actinium is found only in traces in uranium ores – one tonne of uranium in ore contains about 0.2 milligrams of 227Ac – and in thorium ores, which contain about 5 nanograms of 228Ac per tonne of thorium. The actinium isotope 227Ac is a transient member of the uranium-actinium series decay chain, which begins with the parent isotope 235U (or 239Pu) and ends with the stable lead isotope 207Pb.
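The figure of about 0.2 milligrams of 227Ac per tonne of natural uranium quoted above follows from secular equilibrium in the 235U decay chain: in an undisturbed ore every chain member has the same activity as the parent, so the atom ratio equals the ratio of half-lives. A minimal sketch in Python (the 235U half-life and its 0.72% natural abundance are standard reference values supplied for the calculation, not taken from this article):

```python
# Secular equilibrium: in undisturbed ore the activity of 227Ac equals that of 235U,
# so the atom ratio N(227Ac)/N(235U) equals the ratio of their half-lives.
T_HALF_U235_Y = 7.04e8      # half-life of 235U in years (standard reference value)
T_HALF_AC227_Y = 21.772     # half-life of 227Ac in years (from the article)
U235_ABUNDANCE = 0.0072     # natural isotopic abundance of 235U (standard reference value)

mass_u_g = 1.0e6                             # one tonne of natural uranium, in grams
mass_u235_g = mass_u_g * U235_ABUNDANCE      # about 7.2 kg of 235U

# Convert the atom ratio into a mass ratio using the mass numbers 227 and 235.
mass_ac227_mg = mass_u235_g * (T_HALF_AC227_Y / T_HALF_U235_Y) * (227.0 / 235.0) * 1e3

print(f"227Ac in one tonne of natural uranium: {mass_ac227_mg:.2f} mg")
# Prints about 0.22 mg, consistent with the ~0.2 mg quoted in the text.
```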
The isotope 228Ac is a transient member of the thorium series decay chain, which begins with the parent isotope 232Th and ends with the stable lead isotope 208Pb. Another actinium isotope (225Ac) is transiently present in the neptunium series decay chain, beginning with 237Np (or 233U) and ending with thallium (205Tl) and near-stable bismuth (209Bi); even though all primordial 237Np has decayed away, it is continuously produced by neutron knock-out reactions on natural 238U. The low natural concentration, and the close similarity of physical and chemical properties to those of lanthanum and other lanthanides, which are always abundant in actinium-bearing ores, render separation of actinium from the ore impractical, and complete separation was never achieved. Instead, actinium is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor. ^{226}_{88}Ra + ^{1}_{0}n -> ^{227}_{88}Ra ->[\beta^-][42.2 \ \ce{min}] ^{227}_{89}Ac The reaction yield is about 2% of the radium weight. 227Ac can further capture neutrons, resulting in small amounts of 228Ac. After the synthesis, actinium is separated from radium and from the products of decay and other nuclear reactions, such as thorium, polonium, lead and bismuth. The extraction can be performed with a thenoyltrifluoroacetone-benzene solution from an aqueous solution of the radiation products, and the selectivity to a certain element is achieved by adjusting the pH (to about 6.0 for actinium). An alternative procedure is anion exchange with an appropriate resin in nitric acid, which can result in a separation factor of 1,000,000 for radium and actinium vs. thorium in a two-stage process. Actinium can then be separated from radium, with a ratio of about 100, using a low cross-linking cation exchange resin and nitric acid as eluant. 225Ac was first produced artificially at the Institute for Transuranium Elements (ITU) in Germany using a cyclotron and at St George Hospital in Sydney using a linac in 2000. This rare isotope has potential applications in radiation therapy and is most efficiently produced by bombarding a radium-226 target with 20–30 MeV deuterium ions. This reaction also yields 226Ac, which, however, decays with a half-life of 29 hours and thus does not contaminate 225Ac. Actinium metal has been prepared by the reduction of actinium fluoride with lithium vapor in vacuum at a temperature between 1100 and 1300 °C. Higher temperatures resulted in evaporation of the product and lower ones led to an incomplete transformation. Lithium was chosen among the alkali metals because its fluoride is the most volatile. Applications Owing to its scarcity, high price and radioactivity, 227Ac has no significant industrial use, but 225Ac is being studied for use in cancer treatments such as targeted alpha therapies. 227Ac is highly radioactive and was therefore studied for use as an active element of radioisotope thermoelectric generators, for example in spacecraft. The oxide of 227Ac pressed with beryllium is also an efficient neutron source, with an activity exceeding that of the standard americium-beryllium and radium-beryllium pairs. In all those applications, 227Ac (a beta source) is merely a progenitor which generates alpha-emitting isotopes upon its decay.
Beryllium captures alpha particles and emits neutrons owing to its large cross-section for the (α,n) nuclear reaction: ^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma The 227AcBe neutron sources can be applied in neutron probes – standard devices for measuring the quantity of water present in soil, as well as moisture and density for quality control in highway construction. Such probes are also used in well logging applications, in neutron radiography, tomography and other radiochemical investigations. 225Ac is applied in medicine to produce 213Bi in a reusable generator, or can be used alone as an agent for radiation therapy, in particular targeted alpha therapy (TAT). This isotope has a half-life of 10 days, making it much more suitable for radiation therapy than 213Bi (half-life 46 minutes). Additionally, 225Ac decays to nontoxic 209Bi rather than stable but toxic lead, which is the final product in the decay chains of several other candidate isotopes, namely 227Th, 228Th, and 230U. Not only 225Ac itself, but also its daughters, emit alpha particles which kill cancer cells in the body. The major difficulty with application of 225Ac was that intravenous injection of simple actinium complexes resulted in their accumulation in the bones and liver for a period of tens of years. As a result, after the cancer cells were quickly killed by alpha particles from 225Ac, the radiation from the actinium and its daughters might induce new mutations. To solve this problem, 225Ac was bound to a chelating agent, such as citrate, ethylenediaminetetraacetic acid (EDTA) or diethylene triamine pentaacetic acid (DTPA). This reduced actinium accumulation in the bones, but the excretion from the body remained slow. Much better results were obtained with chelating agents such as HEHA or DOTA coupled to trastuzumab, a monoclonal antibody that interferes with the HER2/neu receptor. The latter delivery combination was tested on mice and proved to be effective against leukemia, lymphoma, breast, ovarian, neuroblastoma and prostate cancers. The intermediate half-life of 227Ac (21.77 years) makes it a very convenient radioactive isotope for modeling the slow vertical mixing of oceanic waters. The associated processes cannot be studied with the required accuracy by direct measurements of current velocities (of the order of 50 meters per year). However, evaluation of the concentration depth profiles of different isotopes allows the mixing rates to be estimated; a minimal model is sketched below. The physics behind this method is as follows: oceanic waters contain homogeneously dispersed 235U. Its decay product, 231Pa, gradually precipitates to the bottom, so that its concentration first increases with depth and then stays nearly constant. 231Pa decays to 227Ac; however, the concentration of the latter isotope does not follow the 231Pa depth profile, but instead increases toward the sea bottom. This occurs because of the mixing processes which raise some additional 227Ac from the sea bottom. Thus analysis of both 231Pa and 227Ac depth profiles allows researchers to model the mixing behavior. There are theoretical predictions that AcHx hydrides (under very high pressure) are candidates for near-room-temperature superconductivity, as they have a predicted Tc significantly higher than that of H3S, possibly near 250 K. Precautions 227Ac is highly radioactive and experiments with it are carried out in a specially designed laboratory equipped with a tight glove box.
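Returning to the ocean-mixing application described above, the shape of the excess-227Ac depth profile can be understood from a minimal steady-state balance between upward eddy diffusion and radioactive decay. The sketch below is an illustration only: the eddy diffusivity and the bottom concentration are arbitrary assumed numbers, and the real analysis uses the full 231Pa and 227Ac profiles.

```python
import math

# Minimal 1-D model: excess 227Ac released from bottom sediments is carried upward by
# eddy diffusion (coefficient K) and removed by radioactive decay (constant lam).
# At steady state K * d2C/dh2 = lam * C, so C(h) = C_bottom * exp(-h / L),
# where h is the height above the sea floor and L = sqrt(K / lam) is the e-folding height.

T_HALF_AC227_Y = 21.772                   # years, from the article
lam = math.log(2) / T_HALF_AC227_Y        # decay constant, 1/yr

K = 1.0e3        # assumed vertical eddy diffusivity, m^2/yr (illustrative value only)
c_bottom = 1.0   # excess 227Ac concentration at the sea floor (arbitrary units)

L = math.sqrt(K / lam)                    # e-folding height of the excess profile
for h in (0, 100, 200, 400):              # heights above the bottom, in metres
    c = c_bottom * math.exp(-h / L)
    print(f"h = {h:3d} m  ->  excess 227Ac = {c:.3f} (relative units)")

# Inverting the relation, a measured e-folding height L gives K = lam * L**2,
# which is how 227Ac depth profiles constrain the vertical mixing rate.
```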
When actinium trichloride is administered intravenously to rats, about 33% of the actinium is deposited in the bones and 50% in the liver. Its toxicity is comparable to, but slightly lower than, that of americium and plutonium. For trace quantities, fume hoods with good aeration suffice; for gram amounts, hot cells with shielding from the intense gamma radiation emitted by 227Ac are necessary. See also Actinium series Notes References Bibliography Meyer, Gerd and Morss, Lester R. (1991) Synthesis of lanthanide and actinide compounds, Springer. External links Actinium at The Periodic Table of Videos (University of Nottingham) NLM Hazardous Substances Databank – Actinium, Radioactive Chemical elements Actinides
Actinium
Americium is a synthetic radioactive chemical element with the symbol Am and atomic number 95. It is a transuranic member of the actinide series, in the periodic table located under the lanthanide element europium, and thus by analogy was named after the Americas. Americium was first produced in 1944 by the group of Glenn T. Seaborg from Berkeley, California, at the Metallurgical Laboratory of the University of Chicago, as part of the Manhattan Project. Although it is the third element in the transuranic series, it was discovered fourth, after the heavier curium. The discovery was kept secret and only released to the public in November 1945. Most americium is produced by uranium or plutonium being bombarded with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains about 100 grams of americium. It is widely used in commercial ionization chamber smoke detectors, as well as in neutron sources and industrial gauges. Several unusual applications, such as nuclear batteries or fuel for space ships with nuclear propulsion, have been proposed for the isotope 242mAm, but they are as yet hindered by the scarcity and high price of this nuclear isomer. Americium is a relatively soft radioactive metal with silvery appearance. Its most common isotopes are 241Am and 243Am. In chemical compounds, americium usually assumes the oxidation state +3, especially in solutions. Several other oxidation states are known, ranging from +2 to +7, and can be identified by their characteristic optical absorption spectra. The crystal lattice of solid americium and its compounds contain small intrinsic radiogenic defects, due to metamictization induced by self-irradiation with alpha particles, which accumulates with time; this can cause a drift of some material properties over time, more noticeable in older samples. History Although americium was likely produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in late autumn 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Leon O. Morgan, Ralph A. James, and Albert Ghiorso. They used a 60-inch cyclotron at the University of California, Berkeley. The element was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory) of the University of Chicago. Following the lighter neptunium, plutonium, and heavier curium, americium was the fourth transuranium element to be discovered. At the time, the periodic table had been restructured by Seaborg to its present layout, containing the actinide row below the lanthanide one. This led to americium being located right below its twin lanthanide element europium; it was thus by analogy named after the Americas: "The name americium (after the Americas) and the symbol Am are suggested for the element on the basis of its position as the sixth member of the actinide rare-earth series, analogous to europium, Eu, of the lanthanide series." The new element was isolated from its oxides in a complex, multi-step process. First plutonium-239 nitrate (239PuNO3) solution was coated on a platinum foil of about 0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium dioxide (PuO2) by calcining. After cyclotron irradiation, the coating was dissolved with nitric acid, and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid. Further separation was carried out by ion exchange, yielding a certain isotope of curium. 
The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from the Greek for all demons, or hell) and delirium (from the Latin for madness). Initial experiments yielded four americium isotopes: 241Am, 242Am, 239Am and 238Am. Americium-241 was directly obtained from plutonium upon absorption of two neutrons. It decays by emission of an α-particle to 237Np; the half-life of this decay was at first determined incorrectly and later corrected to 432.2 years. The second isotope 242Am was produced upon neutron bombardment of the already-created 241Am. Upon rapid β-decay, 242Am converts into the curium isotope 242Cm (which had been discovered previously). The half-life of this decay was initially determined at 17 hours, which was close to the presently accepted value of 16.02 h. The discovery of americium and curium in 1944 was closely related to the Manhattan Project; the results were confidential and declassified only in 1945. Seaborg revealed the synthesis of elements 95 and 96 on the U.S. children's radio show Quiz Kids five days before the official presentation at an American Chemical Society meeting on 11 November 1945, after one of the listeners asked whether any new transuranium element besides plutonium and neptunium had been discovered during the war. After the discovery of the americium isotopes 241Am and 242Am, their production and compounds were patented, listing only Seaborg as the inventor. The initial americium samples weighed a few micrograms; they were barely visible and were identified by their radioactivity. The first substantial amounts of metallic americium, weighing 40–200 micrograms, were not prepared until 1951, by reduction of americium(III) fluoride with barium metal in high vacuum at 1100 °C. Occurrence The longest-lived and most common isotopes of americium, 241Am and 243Am, have half-lives of 432.2 and 7,370 years, respectively. Therefore, any primordial americium (americium that was present on Earth during its formation) should have decayed by now. Trace amounts of americium probably occur naturally in uranium minerals as a result of nuclear reactions, though this has not been confirmed. Existing americium is concentrated in the areas used for the atmospheric nuclear weapons tests conducted between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster. For example, analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides, including americium; due to military secrecy, this result was not published until 1956. Trinitite, the glassy residue left on the desert floor near Alamogordo, New Mexico, after the plutonium-based Trinity nuclear bomb test on 16 July 1945, contains traces of americium-241. Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber, which was carrying four hydrogen bombs when it crashed in Greenland in 1968. In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries/g (0.37 mBq/g). Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed an americium concentration inside sandy soil particles about 1,900 times higher than in the water present in the soil pores; an even higher ratio was measured in loam soils.
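Before turning to artificial production, a quick order-of-magnitude check shows why, as noted above, no primordial americium survives: even the longest-lived isotope has gone through an enormous number of half-lives since the Earth formed. The age of the Earth (about 4.5 billion years) is a standard figure supplied here for the calculation, not taken from the text.

```python
# Why no primordial americium survives: count the 243Am half-lives since the Earth formed.
EARTH_AGE_Y = 4.5e9        # age of the Earth in years (standard estimate, not from the text)
T_HALF_AM243_Y = 7370.0    # half-life of 243Am, the longest-lived isotope (from the text)

n_half_lives = EARTH_AGE_Y / T_HALF_AM243_Y
print(f"Elapsed half-lives of 243Am: about {n_half_lives:,.0f}")
# Roughly 610,000 half-lives: the surviving fraction would be 2**(-610000),
# unimaginably smaller than one atom, so any primordial americium is long gone.
```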
Americium is produced mostly artificially in small quantities, for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am. Their prolonged radioactivity is undesirable for disposal, and therefore americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, where americium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure is known as nuclear transmutation, but it is still being developed for americium. The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Americium is also one of the elements that have been detected in Przybylski's Star. Synthesis and extraction Isotope nucleosynthesis Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, its price, about US$1,500 per gram of 241Am, has remained almost unchanged owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts; it is thus more difficult to separate, resulting in a higher cost of the order of 100,000–160,000 USD/g. Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, according to the following nuclear process: ^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu The capture of two neutrons by 239Pu (so-called (n,γ) reactions), followed by a β-decay, results in 241Am: ^{239}_{94}Pu ->[\ce{2(n,\gamma)}] ^{241}_{94}Pu ->[\beta^-][14.35 \ \ce{yr}] ^{241}_{95}Am The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it spontaneously converts to 241Am, 241Pu can be extracted and may be used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the 241Am amount reaches a maximum after about 70 years (a worked estimate is sketched below). The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm. Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux: ^{239}_{94}Pu ->[\ce{4(n,\gamma)}] \ ^{243}_{94}Pu ->[\beta^-][4.956 \ \ce{h}] ^{243}_{95}Am Metal generation Most synthesis routines yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of the uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon solvent. The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides.
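The ingrowth figures quoted above for 241Am produced from 241Pu (half the 241Pu converted after roughly 15 years, with the 241Am inventory peaking after roughly 70 years) follow from the standard two-member Bateman equations, using only the two half-lives given in the text. A minimal sketch:

```python
import math

# Two-member decay chain: 241Pu (14.35 yr) -> 241Am (432.2 yr) -> ...
# Bateman solution for the daughter, starting from pure 241Pu:
#   N_Am(t) = N0 * lam_pu / (lam_am - lam_pu) * (exp(-lam_pu*t) - exp(-lam_am*t))
T_HALF_PU241 = 14.35       # years (from the text)
T_HALF_AM241 = 432.2       # years (from the text)
lam_pu = math.log(2) / T_HALF_PU241
lam_am = math.log(2) / T_HALF_AM241

def am241_fraction(t_years):
    """Fraction of the initial 241Pu atoms present as 241Am after t_years."""
    return (lam_pu / (lam_am - lam_pu)
            * (math.exp(-lam_pu * t_years) - math.exp(-lam_am * t_years)))

# The 241Am inventory peaks where its time derivative vanishes:
t_max = math.log(lam_pu / lam_am) / (lam_pu - lam_am)
print(f"241Am peaks after about {t_max:.0f} years, "
      f"holding {am241_fraction(t_max):.0%} of the initial 241Pu atoms")
# Prints about 73 years, matching the "about 70 years" quoted in the text; after one
# 241Pu half-life (about 15 years) roughly half of the 241Pu has already converted.
```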
Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents. A bis-triazinyl bipyridine complex was proposed in 2009 as such a reagent is highly selective to americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone, at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes which can be washed away. Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose. The reaction was conducted using elemental barium as reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten. An alternative is the reduction of americium dioxide by metallic lanthanum or thorium: Physical properties In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but then slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3); but has a higher density than europium (5.264 g/cm3)—mostly because of its higher atomic mass. Americium is relatively soft and easily deformable and has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than that of plutonium (639 °C) and europium (826 °C), but lower than for curium (1340 °C). At ambient conditions, americium is present in its most stable α form which has a hexagonal crystal symmetry, and a space group P63/mmc with cell parameters a = 346.8 pm and c = 1124 pm, and four atoms per unit cell. The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature. When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has a face-centered cubic (fcc) symmetry, space group Fmm and lattice constant a = 489 pm. This fcc structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium. There are no further transitions observed up to 52 GPa, except for an appearance of a monoclinic phase at pressures between 10 and 15 GPa. There is no consistency on the status of this phase in the literature, which also sometimes lists the α, β and γ phases as I, II and III. The β-γ transition is accompanied by a 6% decrease in the crystal volume; although theory also predicts a significant volume change for the α-β transition, it is not observed experimentally. 
The pressure of the α-β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, at 770 °C it changes into an fcc phase which is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure-temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium. As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. It is especially noticeable at low temperatures, where the mobility of the produced structural defects is relatively low, and manifests itself as a broadening of X-ray diffraction peaks. This effect makes the temperature of americium samples and some of their properties, such as electrical resistivity, somewhat uncertain. For americium-241, for example, the resistivity at 4.2 K increases with time from about 2 µOhm·cm to 10 µOhm·cm after 40 hours, and saturates at about 16 µOhm·cm after 140 hours. This effect is less pronounced at room temperature, due to annihilation of radiation defects; heating a sample that has been kept for hours at low temperature back to room temperature also restores its resistivity. In fresh samples, the resistivity gradually increases with temperature from about 2 µOhm·cm at liquid helium temperature to 69 µOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but different from that of plutonium and curium, which show a rapid rise up to 60 K followed by saturation. The room temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than those of uranium, thorium and protactinium. Americium is paramagnetic in a wide temperature range, from that of liquid helium to room temperature and above. This behavior is markedly different from that of its neighbor curium, which exhibits an antiferromagnetic transition at 52 K. The thermal expansion coefficient of americium is slightly anisotropic, with different values along the shorter a axis and the longer c hexagonal axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions has been measured; from it, the standard enthalpy of formation (ΔfH°) of the aqueous Am3+ ion and the standard potential Am3+/Am0 have been derived. Chemical properties Americium metal readily reacts with oxygen and dissolves in aqueous acids. The most stable oxidation state for americium is +3. The chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in oxidation states 2, 4, 5, 6 and 7 have also been studied. This is the widest range that has been observed with actinide elements. The color of americium compounds in aqueous solution is as follows: Am3+ (yellow-reddish), Am4+ (yellow-reddish), AmV (yellow), AmVI (brown) and AmVII (dark green). The absorption spectra have sharp peaks, due to f-f transitions in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm. Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion (MnO4−) in acidic solutions. Whereas the Am4+ ions are unstable in solutions and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state.
The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the [AmO2]+ ion is unstable with respect to disproportionation. The reaction 3[AmO2]+ + 4H+ -> 2[AmO2]2+ + Am3+ + 2H2O is typical. The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states. In particular, compounds like Li3AmO4 and Li6AmO6 are comparable to uranates, and the ion AmO22+ is comparable to the uranyl ion, UO22+. Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate. Chemical compounds Oxygen compounds Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2). Americium(II) oxide was prepared in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium and is used in nearly all its applications. Like most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure. The oxalate of americium(III), vacuum dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C; the decomposition is complete at about 470 °C. The initial oxalate dissolves in nitric acid with a maximum solubility of 0.25 g/L. Halides Halides of americium are known for the oxidation states +2, +3 and +4, of which +3 is the most stable, especially in solution. Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2. They are very sensitive to oxygen and oxidize in water, releasing hydrogen and converting back to the Am(III) state. Lattice constants have been reported for orthorhombic AmCl2 and tetragonal AmBr2. The dihalides can also be prepared by reacting metallic americium with an appropriate mercury halide HgX2, where X = Cl, Br or I: {Am} + \underset{mercury\ halide}{HgX2} ->[{} \atop 400 - 500 ^\circ \ce C] {AmX2} + {Hg} Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon reaction of Am3+ and fluoride ions in weakly acidic solutions: Am^3+ + 3F^- -> AmF3(v) The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine: 2AmF3 + F2 -> 2AmF4 Another known form of solid tetravalent americium fluoride is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15-M NH4F at an americium concentration of 0.01 M. The resulting reddish solution had a characteristic optical absorption spectrum which is similar to that of AmF4 but different from those of other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not result in its disproportionation or reduction; however, a slow reduction to Am(III) was observed and assigned to self-irradiation of americium by alpha particles. Most americium(III) halides form hexagonal crystals with slight variations of color and exact structure between the halogens. Thus, the chloride (AmCl3) is reddish and has a structure isotypic with uranium(III) chloride (space group P63/m) and a melting point of 715 °C. The fluoride is isotypic with LaF3 (space group P63/mmc) and the iodide with BiI3 (space group R3). The bromide is an exception, with the orthorhombic PuBr3-type structure and space group Cmcm.
Crystals of americium hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid. Those crystals are hygroscopic and have yellow-reddish color and a monoclinic crystal structure. Oxyhalides of americium in the form AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be obtained by reacting the corresponding americium halide with oxygen or Sb2O3, and AmOCl can also be produced by vapor phase hydrolysis: AmCl3 + H2O -> AmOCl + 2HCl Chalcogenides and pnictides The known chalcogenides of americium include the sulfide AmS2, selenides AmSe2 and Am3Se4, and tellurides Am2Te3 and AmTe2. The pnictides of americium (243Am) of the AmX type are known for the elements phosphorus, arsenic, antimony and bismuth. They crystallize in the rock-salt lattice. Silicides and borides Americium monosilicide (AmSi) and "disilicide" (nominally AmSix with: 1.87 < x < 2.0) were obtained by reduction of americium(III) fluoride with elementary silicon in vacuum at 1050 °C (AmSi) and 1150−1200 °C (AmSix). AmSi is a black solid isomorphic with LaSi, it has an orthorhombic crystal symmetry. AmSix has a bright silvery lustre and a tetragonal crystal lattice (space group I41/amd), it is isomorphic with PuSi2 and ThSi2. Borides of americium include AmB4 and AmB6. The tetraboride can be obtained by heating an oxide or halide of americium with magnesium diboride in vacuum or inert atmosphere. Organoamericium compounds Analogous to uranocene, americium forms the organometallic compound amerocene with two cyclooctatetraene ligands, with the chemical formula (η8-C8H8)2Am. A cyclopentadienyl complex is also known that is likely to be stoichiometrically AmCp3. Formation of the complexes of the type Am(n-C3H7-BTP)3, where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Am3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with americium and therefore are useful in its selective separation from lanthanides and another actinides. Biological aspects Americium is an artificial element of recent origin, and thus does not have a biological requirement. It is harmful to life. It has been proposed to use bacteria for removal of americium and other heavy metals from rivers and streams. Thus, Enterobacteriaceae of the genus Citrobacter precipitate americium ions from aqueous solutions, binding them into a metal-phosphate complex at their cell walls. Several studies have been reported on the biosorption and bioaccumulation of americium by bacteria and fungi. Fission The isotope 242mAm (half-life 141 years) has the largest cross sections for absorption of thermal neutrons (5,700 barns), that results in a small critical mass for a sustained nuclear chain reaction. The critical mass for a bare 242mAm sphere is about 9–14 kg (the uncertainty results from insufficient knowledge of its material properties). It can be lowered to 3–5 kg with a metal reflector and should become even smaller with a water reflector. Such small critical mass is favorable for portable nuclear weapons, but those based on 242mAm are not known yet, probably because of its scarcity and high price. The critical masses of two other readily available isotopes, 241Am and 243Am, are relatively high – 57.6 to 75.6 kg for 241Am and 209 kg for 243Am. Scarcity and high price yet hinder application of americium as a nuclear fuel in nuclear reactors. There are proposals of very compact 10-kW high-flux reactors using as little as 20 grams of 242mAm. 
Such low-power reactors would be relatively safe to use as neutron sources for radiation therapy in hospitals. Isotopes About 19 isotopes and 8 nuclear isomers are known for americium. There are two long-lived alpha-emitters; 243Am has a half-life of 7,370 years and is the most stable isotope, and 241Am has a half-life of 432.2 years. The most stable nuclear isomer is 242m1Am; it has a long half-life of 141 years. The half-lives of other isotopes and isomers range from 0.64 microseconds for 245m1Am to 50.8 hours for 240Am. As with most other actinides, the isotopes of americium with odd number of neutrons have relatively high rate of nuclear fission and low critical mass. Americium-241 decays to 237Np emitting alpha particles of 5 different energies, mostly at 5.486 MeV (85.2%) and 5.443 MeV (12.8%). Because many of the resulting states are metastable, they also emit gamma rays with the discrete energies between 26.3 and 158.5 keV. Americium-242 is a short-lived isotope with a half-life of 16.02 h. It mostly (82.7%) converts by β-decay to 242Cm, but also by electron capture to 242Pu (17.3%). Both 242Cm and 242Pu transform via nearly the same decay chain through 238Pu down to 234U. Nearly all (99.541%) of 242m1Am decays by internal conversion to 242Am and the remaining 0.459% by α-decay to 238Np. The latter subsequently decays to 238Pu and then to 234U. Americium-243 transforms by α-emission into 239Np, which converts by β-decay to 239Pu, and the 239Pu changes into 235U by emitting an α-particle. Applications Ionization-type smoke detector Americium is used in the most common type of household smoke detector, which uses 241Am in the form of americium dioxide as its source of ionizing radiation. This isotope is preferred over 226Ra because it emits 5 times more alpha particles and relatively little harmful gamma radiation. The amount of americium in a typical new smoke detector is 1 microcurie (37 kBq) or 0.29 microgram. This amount declines slowly as the americium decays into neptunium-237, a different transuranic element with a much longer half-life (about 2.14 million years). With its half-life of 432.2 years, the americium in a smoke detector includes about 3% neptunium after 19 years, and about 5% after 32 years. The radiation passes through an ionization chamber, an air-filled space between two electrodes, and permits a small, constant current between the electrodes. Any smoke that enters the chamber absorbs the alpha particles, which reduces the ionization and affects this current, triggering the alarm. Compared to the alternative optical smoke detector, the ionization smoke detector is cheaper and can detect particles which are too small to produce significant light scattering; however, it is more prone to false alarms. Radionuclide As 241Am has a roughly similar half-life to 238Pu (432.2 years vs. 87 years), it has been proposed as an active element of radioisotope thermoelectric generators, for example in spacecraft. Although americium produces less heat and electricity – the power yield is 114.7 mW/g for 241Am and 6.31 mW/g for 243Am (cf. 390 mW/g for 238Pu) – and its radiation poses more threat to humans owing to neutron emission, the European Space Agency is considering using americium for its space probes. Another proposed space-related application of americium is a fuel for space ships with nuclear propulsion. It relies on the very high rate of nuclear fission of 242mAm, which can be maintained even in a micrometer-thick foil. 
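Two of the 241Am figures quoted above can be recovered from its 432.2-year half-life. The neptunium content of an aging smoke-detector source is simple exponential decay, and the specific thermal power follows from the decay rate per gram times the energy released per decay; the total decay energy of about 5.64 MeV used below is a standard reference value, not taken from the text. A minimal sketch:

```python
import math

T_HALF_AM241_Y = 432.2                  # years (from the text)
lam_per_year = math.log(2) / T_HALF_AM241_Y

# 1) Neptunium ingrowth in a smoke-detector source: simple decay of 241Am.
for t in (19, 32):
    np_fraction = 1.0 - math.exp(-lam_per_year * t)
    print(f"After {t} years: {np_fraction:.1%} of the americium has become 237Np")

# 2) Specific thermal power of 241Am from its decay rate and energy per decay.
AVOGADRO = 6.022e23
MOLAR_MASS_G = 241.0                    # g/mol
Q_DECAY_MEV = 5.64                      # assumed total energy per decay, MeV (reference value)
MEV_TO_J = 1.602e-13
SECONDS_PER_YEAR = 3.156e7

decays_per_s_per_g = (AVOGADRO / MOLAR_MASS_G) * lam_per_year / SECONDS_PER_YEAR
power_mw_per_g = decays_per_s_per_g * Q_DECAY_MEV * MEV_TO_J * 1e3
print(f"Specific power: about {power_mw_per_g:.0f} mW/g")   # cf. the 114.7 mW/g quoted
```

The first part reproduces the roughly 3% and 5% neptunium contents after 19 and 32 years, and the second gives about 115 mW/g, close to the 114.7 mW/g quoted in the text.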
Small thickness avoids the problem of self-absorption of the emitted radiation. This problem is pertinent to uranium or plutonium rods, in which only surface layers provide alpha particles. The fission products of 242mAm can either directly propel the spaceship or heat a propellant gas; they can also transfer their energy to a fluid and generate electricity through a magnetohydrodynamic generator. One more proposal that utilizes the high nuclear fission rate of 242mAm is a nuclear battery. Its design relies not on the energy of the alpha particles emitted by americium, but on their charge; that is, the americium acts as the self-sustaining "cathode". A single 3.2 kg 242mAm charge of such a battery could provide about 140 kW of power over a period of 80 days. Even with all the potential benefits, the current applications of 242mAm are as yet hindered by the scarcity and high price of this particular nuclear isomer. In 2019, researchers at the UK National Nuclear Laboratory and the University of Leicester demonstrated the use of heat generated by americium to illuminate a small light bulb. This technology could lead to systems to power missions with durations up to 400 years into interstellar space, where solar panels do not function. Neutron source The oxide of 241Am pressed with beryllium is an efficient neutron source. Here americium acts as the alpha source, and beryllium produces neutrons owing to its large cross-section for the (α,n) nuclear reaction: ^{241}_{95}Am -> ^{237}_{93}Np + ^{4}_{2}He + \gamma ^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma The most widespread use of 241AmBe neutron sources is in neutron probes – devices used to measure the quantity of water present in soil, as well as moisture and density for quality control in highway construction. 241Am neutron sources are also used in well logging applications, as well as in neutron radiography, tomography and other radiochemical investigations. Production of other elements Americium is a starting material for the production of other transuranic elements and transactinides – for example, 82.7% of 242Am decays to 242Cm and 17.3% to 242Pu. In a nuclear reactor, 242Am is also up-converted by neutron capture to 243Am and 244Am, which transforms by β-decay to 244Cm: ^{243}_{95}Am ->[\ce{(n,\gamma)}] ^{244}_{95}Am ->[\beta^-][10.1 \ \ce{h}] ^{244}_{96}Cm Irradiation of 241Am by 12C or 22Ne ions yields the isotopes 247Es (einsteinium) or 260Db (dubnium), respectively. Furthermore, the element berkelium (as the isotope 243Bk) was first intentionally produced and identified in 1949 by the same Berkeley group, by bombarding 241Am with alpha particles using the same 60-inch cyclotron. Similarly, nobelium was produced at the Joint Institute for Nuclear Research, Dubna, Russia, in 1965 in several reactions, one of which included irradiation of 243Am with 15N ions. In addition, one of the synthesis reactions for lawrencium, discovered by scientists at Berkeley and Dubna, included bombardment of 243Am with 18O ions. Spectrometer Americium-241 has been used as a portable source of both gamma rays and alpha particles for a number of medical and industrial applications. The 59.5409 keV gamma ray emissions from 241Am in such sources can be used for indirect analysis of materials in radiography and X-ray fluorescence spectroscopy, as well as for quality control in fixed nuclear density gauges and nuclear densometers. For example, the element has been employed to gauge glass thickness to help create flat glass.
Americium-241 is also suitable for calibration of gamma-ray spectrometers in the low-energy range, since its spectrum consists of nearly a single peak and negligible Compton continuum (at least three orders of magnitude lower intensity). Americium-241 gamma rays were also used to provide passive diagnosis of thyroid function. This medical application is however obsolete. Health concerns As a highly radioactive element, americium and its compounds must be handled only in an appropriate laboratory under special arrangements. Although most americium isotopes predominantly emit alpha particles which can be blocked by thin layers of common materials, many of the daughter products emit gamma-rays and neutrons which have a long penetration depth. If consumed, most of the americium is excreted within a few days, with only 0.05% absorbed in the blood, of which roughly 45% goes to the liver and 45% to the bones, and the remaining 10% is excreted. The uptake to the liver depends on the individual and increases with age. In the bones, americium is first deposited over cortical and trabecular surfaces and slowly redistributes over the bone with time. The biological half-life of 241Am is 50 years in the bones and 20 years in the liver, whereas in the gonads (testicles and ovaries) it remains permanently; in all these organs, americium promotes formation of cancer cells as a result of its radioactivity. Americium often enters landfills from discarded smoke detectors. The rules associated with the disposal of smoke detectors are relaxed in most jurisdictions. In 1994, 17-year-old David Hahn extracted the americium from about 100 smoke detectors in an attempt to build a breeder nuclear reactor. There have been a few cases of exposure to americium, the worst case being that of chemical operations technician Harold McCluskey, who at the age of 64 was exposed to 500 times the occupational standard for americium-241 as a result of an explosion in his lab. McCluskey died at the age of 75 of unrelated pre-existing disease. See also Actinides in the environment :Category:Americium compounds Notes References Bibliography Penneman, R. A. and Keenan T. K. The radiochemistry of americium and curium, University of California, Los Alamos, California, 1960 Further reading Nuclides and Isotopes – 14th Edition, GE Nuclear Energy, 1989. External links Americium at The Periodic Table of Videos (University of Nottingham) ATSDR – Public Health Statement: Americium World Nuclear Association – Smoke Detectors and Americium Chemical elements Actinides Carcinogens Synthetic elements
Americium
Astatine is a chemical element with the symbol At and atomic number 85. It is the rarest naturally occurring element in the Earth's crust, occurring only as the decay product of various heavier elements. All of astatine's isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. A sample of the pure element has never been assembled, because any macroscopic specimen would be immediately vaporized by the heat of its own radioactivity. The bulk properties of astatine are not known with certainty. Many of them have been estimated based on the element's position on the periodic table as a heavier analog of iodine, and a member of the halogens (the group of elements including fluorine, chlorine, bromine, and iodine). However, astatine also falls roughly along the dividing line between metals and nonmetals, and some metallic behavior has also been observed and predicted for it. Astatine is likely to have a dark or lustrous appearance and may be a semiconductor or possibly a metal. Chemically, several anionic species of astatine are known and most of its compounds resemble those of iodine, but it also sometimes displays metallic characteristics and shows some similarities to silver. The first synthesis of the element was in 1940 by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio G. Segrè at the University of California, Berkeley, who named it from the Ancient Greek () 'unstable'. Four isotopes of astatine were subsequently found to be naturally occurring, although much less than one gram is present at any given time in the Earth's crust. Neither the most stable isotope astatine-210, nor the medically useful astatine-211, occur naturally; they can only be produced synthetically, usually by bombarding bismuth-209 with alpha particles. Characteristics Astatine is an extremely radioactive element; all its isotopes have half-lives of 8.1 hours or less, decaying into other astatine isotopes, bismuth, polonium, or radon. Most of its isotopes are very unstable, with half-lives of one second or less. Of the first 101 elements in the periodic table, only francium is less stable, and all the astatine isotopes more stable than francium are in any case synthetic and do not occur in nature. The bulk properties of astatine are not known with any certainty. Research is limited by its short half-life, which prevents the creation of weighable quantities. A visible piece of astatine would immediately vaporize itself because of the heat generated by its intense radioactivity. It remains to be seen if, with sufficient cooling, a macroscopic quantity of astatine could be deposited as a thin film. Astatine is usually classified as either a nonmetal or a metalloid; metal formation has also been predicted. Physical Most of the physical properties of astatine have been estimated (by interpolation or extrapolation), using theoretically or empirically derived methods. For example, halogens get darker with increasing atomic weight – fluorine is nearly colorless, chlorine is yellow green, bromine is red brown, and iodine is dark gray/violet. Astatine is sometimes described as probably being a black solid (assuming it follows this trend), or as having a metallic appearance (if it is a metalloid or a metal). Astatine sublimes less readily than does iodine, having a lower vapor pressure. Even so, half of a given quantity of astatine will vaporize in approximately an hour if put on a clean glass surface at room temperature. 
The absorption spectrum of astatine in the middle ultraviolet region has lines at 224.401 and 216.225 nm, suggestive of 6p to 7s transitions. The structure of solid astatine is unknown. As an analogue of iodine it may have an orthorhombic crystalline structure composed of diatomic astatine molecules, and be a semiconductor (with a band gap of 0.7 eV). Alternatively, if condensed astatine forms a metallic phase, as has been predicted, it may have a monatomic face-centered cubic structure; in this structure it may well be a superconductor, like the similar high-pressure phase of iodine. Evidence for (or against) the existence of diatomic astatine (At2) is sparse and inconclusive. Some sources state that it does not exist, or at least has never been observed, while other sources assert or imply its existence. Despite this controversy, many properties of diatomic astatine have been predicted, including its bond length, dissociation energy, and heat of vaporization (∆Hvap, estimated at 54.39 kJ/mol). Many values have been predicted for the melting and boiling points of astatine, but only for At2. Chemical The chemistry of astatine is "clouded by the extremely low concentrations at which astatine experiments have been conducted, and the possibility of reactions with impurities, walls and filters, or radioactivity by-products, and other unwanted nano-scale interactions". Many of its apparent chemical properties have been observed using tracer studies on extremely dilute astatine solutions, typically less than 10−10 mol·L−1. Some properties, such as anion formation, align with those of the other halogens. Astatine has some metallic characteristics as well, such as plating onto a cathode and coprecipitating with metal sulfides in hydrochloric acid. It forms complexes with EDTA, a metal chelating agent, and is capable of acting as a metal in antibody radiolabeling; in some respects astatine in the +1 state is akin to silver in the same state. Most of the organic chemistry of astatine is, however, analogous to that of iodine. It has been suggested that astatine can form a stable monatomic cation in aqueous solution, but electromigration evidence suggests that the cationic At(I) species is protonated hypoastatous acid (H2OAt+), showing analogy to iodine. Astatine has an electronegativity of 2.2 on the revised Pauling scale – lower than that of iodine (2.66) and the same as hydrogen. In hydrogen astatide (HAt), the negative charge is predicted to be on the hydrogen atom, implying that this compound could be referred to as astatine hydride according to certain nomenclatures. That would be consistent with the electronegativity of astatine on the Allred–Rochow scale (1.9) being less than that of hydrogen (2.2). However, official IUPAC stoichiometric nomenclature is based on an idealized convention of determining the relative electronegativities of the elements by the mere virtue of their position within the periodic table. According to this convention, astatine is handled as though it is more electronegative than hydrogen, irrespective of its true electronegativity. The electron affinity of astatine, at 233 kJ mol−1, is 21% less than that of iodine. In comparison, the value of Cl (349) is 6.4% higher than F (328); Br (325) is 6.9% less than Cl; and I (295) is 9.2% less than Br. The marked reduction for At was predicted as being due to spin–orbit interactions.
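The percentage steps quoted above for the halogen electron affinities can be checked directly from the values given (all in kJ mol−1); a minimal sketch:

```python
# Electron affinities in kJ/mol, as quoted in the text.
ea = {"F": 328, "Cl": 349, "Br": 325, "I": 295, "At": 233}

def step(lighter, heavier):
    """Relative change in electron affinity from the lighter to the heavier halogen."""
    return (ea[heavier] - ea[lighter]) / ea[lighter]

print(f"F  -> Cl: {step('F', 'Cl'):+.1%}")   # about +6.4%
print(f"Cl -> Br: {step('Cl', 'Br'):+.1%}")  # about -6.9%
print(f"Br -> I:  {step('Br', 'I'):+.1%}")   # about -9.2%
print(f"I  -> At: {step('I', 'At'):+.1%}")   # about -21%
```

The drop from iodine to astatine is more than twice the step between any pair of lighter neighbours, which is the anomaly attributed to spin–orbit interactions.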
The first ionisation energy of astatine is about 899 kJ mol−1, which continues the trend of decreasing first ionisation energies down the halogen group (fluorine, 1681; chlorine, 1251; bromine, 1140; iodine, 1008). Compounds Less reactive than iodine, astatine is the least reactive of the halogens. Its compounds have been synthesized in microscopic amounts and studied as intensively as possible before their radioactive disintegration. The reactions involved have been typically tested with dilute solutions of astatine mixed with larger amounts of iodine. Acting as a carrier, the iodine ensures there is sufficient material for laboratory techniques (such as filtration and precipitation) to work. Like iodine, astatine has been shown to adopt odd-numbered oxidation states ranging from −1 to +7. Only a few compounds with metals have been reported, in the form of astatides of sodium, palladium, silver, thallium, and lead. Some characteristic properties of silver and sodium astatide, and the other hypothetical alkali and alkaline earth astatides, have been estimated by extrapolation from other metal halides. The formation of an astatine compound with hydrogen – usually referred to as hydrogen astatide – was noted by the pioneers of astatine chemistry. As mentioned, there are grounds for instead referring to this compound as astatine hydride. It is easily oxidized; acidification by dilute nitric acid gives the At0 or At+ forms, and the subsequent addition of silver(I) may only partially, at best, precipitate astatine as silver(I) astatide (AgAt). Iodine, in contrast, is not oxidized, and precipitates readily as silver(I) iodide. Astatine is known to bind to boron, carbon, and nitrogen. Various boron cage compounds have been prepared with At–B bonds, these being more stable than At–C bonds. Astatine can replace a hydrogen atom in benzene to form astatobenzene C6H5At; this may be oxidized to C6H5AtCl2 by chlorine. By treating this compound with an alkaline solution of hypochlorite, C6H5AtO2 can be produced. The dipyridine-astatine(I) cation, [At(C5H5N)2]+, forms ionic compounds with perchlorate (a non-coordinating anion) and with nitrate, [At(C5H5N)2]NO3. This cation exists as a coordination complex in which two dative covalent bonds separately link the astatine(I) centre with each of the pyridine rings via their nitrogen atoms. With oxygen, there is evidence of the species AtO− and AtO+ in aqueous solution, formed by the reaction of astatine with an oxidant such as elemental bromine or (in the last case) by sodium persulfate in a solution of perchloric acid: the latter species might also be protonated astatous acid, . The species previously thought to be has since been determined to be , a hydrolysis product of AtO+ (another such hydrolysis product being AtOOH). The well characterized anion can be obtained by, for example, the oxidation of astatine with potassium hypochlorite in a solution of potassium hydroxide. Preparation of lanthanum triastatate La(AtO3)3, following the oxidation of astatine by a hot Na2S2O8 solution, has been reported. Further oxidation of , such as by xenon difluoride (in a hot alkaline solution) or periodate (in a neutral or alkaline solution), yields the perastatate ion ; this is only stable in neutral or alkaline solutions. 
Astatine is also thought to be capable of forming cations in salts with oxyanions such as iodate or dichromate; this is based on the observation that, in acidic solutions, monovalent or intermediate positive states of astatine coprecipitate with the insoluble salts of metal cations such as silver(I) iodate or thallium(I) dichromate. Astatine may form bonds to the other chalcogens; these include S7At+ and with sulfur, a coordination selenourea compound with selenium, and an astatine–tellurium colloid with tellurium. Astatine is known to react with its lighter homologs iodine, bromine, and chlorine in the vapor state; these reactions produce diatomic interhalogen compounds with formulas AtI, AtBr, and AtCl. The first two compounds may also be produced in water – astatine reacts with iodine/iodide solution to form AtI, whereas AtBr requires (aside from astatine) an iodine/iodine monobromide/bromide solution. The excess of iodides or bromides may lead to and ions, or in a chloride solution, they may produce species like or via equilibrium reactions with the chlorides. Oxidation of the element with dichromate (in nitric acid solution) showed that adding chloride turned the astatine into a molecule likely to be either AtCl or AtOCl. Similarly, or may be produced. The polyhalides PdAtI2, CsAtI2, TlAtI2, and PbAtI are known or presumed to have been precipitated. In a plasma ion source mass spectrometer, the ions [AtI]+, [AtBr]+, and [AtCl]+ have been formed by introducing lighter halogen vapors into a helium-filled cell containing astatine, supporting the existence of stable neutral molecules in the plasma ion state. No astatine fluorides have been discovered yet. Their absence has been speculatively attributed to the extreme reactivity of such compounds, including the reaction of an initially formed fluoride with the walls of the glass container to form a non-volatile product. Thus, although the synthesis of an astatine fluoride is thought to be possible, it may require a liquid halogen fluoride solvent, as has already been used for the characterization of radon fluoride. History In 1869, when Dmitri Mendeleev published his periodic table, the space under iodine was empty; after Niels Bohr established the physical basis of the classification of chemical elements, it was suggested that the fifth halogen belonged there. Before its officially recognized discovery, it was called "eka-iodine" (from Sanskrit eka – "one") to imply it was one space under iodine (in the same manner as eka-silicon, eka-boron, and others). Scientists tried to find it in nature; given its extreme rarity, these attempts resulted in several false discoveries. The first claimed discovery of eka-iodine was made by Fred Allison and his associates at the Alabama Polytechnic Institute (now Auburn University) in 1931. The discoverers named element 85 "alabamine", and assigned it the symbol Ab, designations that were used for a few years. In 1934, H. G. MacPherson of University of California, Berkeley disproved Allison's method and the validity of his discovery. There was another claim in 1937, by the chemist Rajendralal De. Working in Dacca in British India (now Dhaka in Bangladesh), he chose the name "dakin" for element 85, which he claimed to have isolated as the thorium series equivalent of radium F (polonium-210) in the radium series. The properties he reported for dakin do not correspond to those of astatine; moreover, astatine is not found in the thorium series, and the true identity of dakin is not known. 
In 1936, the team of Romanian physicist Horia Hulubei and French physicist Yvette Cauchois claimed to have discovered element 85 via X-ray analysis. In 1939, they published another paper which supported and extended previous data. In 1944, Hulubei published a summary of data he had obtained up to that time, claiming it was supported by the work of other researchers. He chose the name "dor", presumably from the Romanian for "longing" [for peace], as World War II had started five years earlier. As Hulubei was writing in French, a language which does not accommodate the "ine" suffix, dor would likely have been rendered in English as "dorine", had it been adopted. In 1947, Hulubei's claim was effectively rejected by the Austrian chemist Friedrich Paneth, who would later chair the IUPAC committee responsible for recognition of new elements. Even though Hulubei's samples did contain astatine, his means to detect it were too weak, by current standards, to enable correct identification. He had also been involved in an earlier false claim as to the discovery of element 87 (francium), and this is thought to have caused other researchers to downplay his work. In 1940, the Swiss chemist Walter Minder announced the discovery of element 85 as the beta decay product of radium A (polonium-218), choosing the name "helvetium" (from Helvetia, the Latin name of Switzerland). Berta Karlik and Traude Bernert were unsuccessful in reproducing his experiments, and subsequently attributed Minder's results to contamination of his radon stream (radon-222 is the parent isotope of polonium-218). In 1942, Minder, in collaboration with the English scientist Alice Leigh-Smith, announced the discovery of another isotope of element 85, presumed to be the product of thorium A (polonium-216) beta decay. They named this substance "anglo-helvetium", but Karlik and Bernert were again unable to reproduce these results. Later in 1940, Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè isolated the element at the University of California, Berkeley. Instead of searching for the element in nature, the scientists created it by bombarding bismuth-209 with alpha particles in a cyclotron (particle accelerator) to produce, after emission of two neutrons, astatine-211. The discoverers, however, did not immediately suggest a name for the element. The reason for this was that, at the time, an element created synthetically in "invisible quantities" that had not yet been discovered in nature was not seen as a completely valid one; in addition, chemists were reluctant to recognize radioactive isotopes as being as legitimate as stable ones. In 1943, astatine was found as a product of two naturally occurring decay chains by Berta Karlik and Traude Bernert, first in the so-called uranium series, and then in the actinium series. (Since then, astatine has also been found in a third decay chain, the neptunium series.) In 1946, Friedrich Paneth called for synthetic elements to be formally recognized, citing, among other reasons, the recent confirmation of their natural occurrence, and proposed that the discoverers of the newly discovered, still unnamed elements be the ones to name them. In early 1947, Nature published the discoverers' suggestions; a letter from Corson, MacKenzie, and Segrè suggested the name "astatine", derived from the Greek astatos (αστατος) meaning "unstable", because of its propensity for radioactive decay, with the ending "-ine" found in the names of the four previously discovered halogens.
The name was also chosen to continue the tradition of the four stable halogens, where the name referred to a property of the element. Corson and his colleagues classified astatine as a metal on the basis of its analytical chemistry. Subsequent investigators reported iodine-like, cationic, or amphoteric behavior. In a 2003 retrospective, Corson wrote that "some of the properties [of astatine] are similar to iodine … it also exhibits metallic properties, more like its metallic neighbors Po and Bi." Isotopes There are 39 known isotopes of astatine, with atomic masses (mass numbers) of 191–229. Theoretical modeling suggests that 37 more isotopes could exist. No stable or long-lived astatine isotope has been observed, nor is one expected to exist. Astatine's alpha decay energies follow the same trend as for other heavy elements. Lighter astatine isotopes have quite high energies of alpha decay, which become lower as the nuclei become heavier. Astatine-211 has a significantly higher energy than the previous isotope, because it has a nucleus with 126 neutrons, and 126 is a magic number corresponding to a filled neutron shell. Despite having a similar half-life to the previous isotope (8.1 hours for astatine-210 and 7.2 hours for astatine-211), the alpha decay probability is much higher for the latter: 41.81% against only 0.18%. The two following isotopes release even more energy, with astatine-213 releasing the most energy. For this reason, it is the shortest-lived astatine isotope. Even though heavier astatine isotopes release less energy, no long-lived astatine isotope exists, because of the increasing role of beta decay (electron emission). This decay mode is especially important for astatine; as early as 1950 it was postulated that all isotopes of the element undergo beta decay, though nuclear mass measurements indicate that 215At is in fact beta-stable, as it has the lowest mass of all isobars with A = 215. A beta decay mode has been found for all other astatine isotopes except for astatine-213, astatine-214, and astatine-216m. Astatine-210 and lighter isotopes exhibit beta plus decay (positron emission), astatine-216 and heavier isotopes exhibit beta minus decay, and astatine-212 decays via both modes, while astatine-211 undergoes electron capture. The most stable isotope is astatine-210, which has a half-life of 8.1 hours. The primary decay mode is beta plus, to the relatively long-lived (in comparison to astatine isotopes) alpha emitter polonium-210. In total, only five isotopes have half-lives exceeding one hour (astatine-207 to -211). The least stable ground state isotope is astatine-213, with a half-life of 125 nanoseconds. It undergoes alpha decay to the extremely long-lived bismuth-209. Astatine has 24 known nuclear isomers, which are nuclei with one or more nucleons (protons or neutrons) in an excited state. A nuclear isomer may also be called a "meta-state", meaning the system has more internal energy than the "ground state" (the state with the lowest possible internal energy), making the former likely to decay into the latter. There may be more than one isomer for each isotope. The most stable of these nuclear isomers is astatine-202m1, which has a half-life of about 3 minutes, longer than those of all the ground states bar those of isotopes 203–211 and 220. The least stable is astatine-214m1; its half-life of 265 nanoseconds is shorter than those of all ground states except that of astatine-213. Natural occurrence Astatine is the rarest naturally occurring element. 
The total amount of astatine in the Earth's crust (quoted mass 2.36 × 10²⁵ grams) is estimated by some to be less than one gram at any given time. Other sources estimate the amount of ephemeral astatine, present on Earth at any given moment, to be up to one ounce (about 28 grams). Any astatine present at the formation of the Earth has long since disappeared; the four naturally occurring isotopes (astatine-215, -217, -218 and -219) are instead continuously produced as a result of the decay of radioactive thorium and uranium ores, and trace quantities of neptunium-237. The landmass of North and South America combined, to a depth of 16 kilometers (10 miles), contains only about one trillion astatine-215 atoms at any given time (around 3.5 × 10⁻¹⁰ grams). Astatine-217 is produced via the radioactive decay of neptunium-237. Primordial remnants of the latter isotope—due to its relatively short half-life of 2.14 million years—are no longer present on Earth. However, trace amounts occur naturally as a product of transmutation reactions in uranium ores. Astatine-218 was the first astatine isotope discovered in nature. Astatine-219, with a half-life of 56 seconds, is the longest lived of the naturally occurring isotopes. Isotopes of astatine are sometimes not listed as naturally occurring because of misconceptions that there are no such isotopes, or discrepancies in the literature. Astatine-216 has been counted as a naturally occurring isotope but reports of its observation (which were described as doubtful) have not been confirmed. Synthesis Formation Astatine was first produced by bombarding bismuth-209 with energetic alpha particles, and this is still the major route used to create the relatively long-lived isotopes astatine-209 through astatine-211. Astatine is only produced in minuscule quantities, with modern techniques allowing production runs of up to 6.6 gigabecquerels (about 86 nanograms or 2.47 × 10¹⁴ atoms). Synthesis of greater quantities of astatine using this method is constrained by the limited availability of suitable cyclotrons and the prospect of melting the target. Solvent radiolysis due to the cumulative effect of astatine decay is a related problem. With cryogenic technology, microgram quantities of astatine might be generated via proton irradiation of thorium or uranium to yield radon-211, in turn decaying to astatine-211. Contamination with astatine-210 is expected to be a drawback of this method. The most important isotope is astatine-211, the only one in commercial use. To produce the bismuth target, the metal is sputtered onto a gold, copper, or aluminium surface at 50 to 100 milligrams per square centimeter. Bismuth oxide can be used instead; this is forcibly fused with a copper plate. The target is kept under a chemically neutral nitrogen atmosphere, and is cooled with water to prevent premature astatine vaporization. In a particle accelerator, such as a cyclotron, alpha particles are collided with the bismuth. Even though only one bismuth isotope is used (bismuth-209), the reaction may occur in three possible ways, producing astatine-209, astatine-210, or astatine-211. In order to eliminate undesired nuclides, the maximum energy of the particle accelerator is set to a value (optimally 29.17 MeV) above that for the reaction producing astatine-211 (to produce the desired isotope) and below the one producing astatine-210 (to avoid producing other astatine isotopes).
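The production figures quoted above (6.6 gigabecquerels corresponding to roughly 86 nanograms, or about 2.47 × 10¹⁴ atoms) follow from the standard activity relation A = λN with λ = ln 2 / t½. The Python sketch below reproduces the arithmetic; the 7.2-hour half-life of astatine-211 and an approximate atomic mass of 211 daltons are the only inputs, and the dalton-to-gram factor is a standard constant rather than a value from the text.

import math

ACTIVITY_BQ = 6.6e9          # quoted production run, in becquerels (decays per second)
HALF_LIFE_S = 7.2 * 3600     # half-life of astatine-211, in seconds
ATOMIC_MASS_U = 211          # approximate mass of one astatine-211 atom, in daltons
DALTON_G = 1.66054e-24       # one dalton expressed in grams

decay_constant = math.log(2) / HALF_LIFE_S        # lambda, per second
atoms = ACTIVITY_BQ / decay_constant              # N = A / lambda
mass_ng = atoms * ATOMIC_MASS_U * DALTON_G * 1e9  # grams converted to nanograms

print(f"atoms in the sample: {atoms:.2e}")       # about 2.47e+14
print(f"mass of the sample : {mass_ng:.0f} ng")  # about 86 ng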
Separation methods Since astatine is the main product of the synthesis, after its formation it need only be separated from the target and any significant contaminants. Several methods are available, "but they generally follow one of two approaches—dry distillation or [wet] acid treatment of the target followed by solvent extraction." The methods summarized below are modern adaptations of older procedures, as reviewed by Kugler and Keller. Pre-1985 techniques more often addressed the elimination of co-produced toxic polonium; this requirement is now mitigated by capping the energy of the cyclotron irradiation beam. Dry The astatine-containing cyclotron target is heated to a temperature of around 650 °C. The astatine volatilizes and is condensed in (typically) a cold trap. Higher temperatures of up to around 850 °C may increase the yield, at the risk of bismuth contamination from concurrent volatilization. Redistilling the condensate may be required to minimize the presence of bismuth (as bismuth can interfere with astatine labeling reactions). The astatine is recovered from the trap using one or more low concentration solvents such as sodium hydroxide, methanol or chloroform. Astatine yields of up to around 80% may be achieved. Dry separation is the method most commonly used to produce a chemically useful form of astatine. Wet The irradiated bismuth (or sometimes bismuth trioxide) target is first dissolved in, for example, concentrated nitric or perchloric acid. Following this first step, the acid can be distilled away to leave behind a white residue that contains both bismuth and the desired astatine product. This residue is then dissolved in a concentrated acid, such as hydrochloric acid. Astatine is extracted from this acid using an organic solvent such as butyl or isopropyl ether, diisopropyl ether (DIPE), or thiosemicarbazide. Using liquid-liquid extraction, the astatine product can be repeatedly washed with an acid, such as HCl, and extracted into the organic solvent layer. A separation yield of 93% using nitric acid has been reported, falling to 72% by the time purification procedures were completed (distillation of nitric acid, purging residual nitrogen oxides, and redissolving bismuth nitrate to enable liquid–liquid extraction). Wet methods involve "multiple radioactivity handling steps" and have not been considered well suited for isolating larger quantities of astatine. However, wet extraction methods are being examined for use in production of larger quantities of astatine-211, as they are thought to provide more consistency. They can enable the production of astatine in a specific oxidation state and may have greater applicability in experimental radiochemistry. Uses and precautions Several 211At-containing molecules have been prepared and tested experimentally: [211At]astatine-tellurium colloids (compartmental tumors); 6-[211At]astato-2-methyl-1,4-naphthoquinol diphosphate (adenocarcinomas); 211At-labeled methylene blue (melanomas); meta-[211At]astatobenzyl guanidine (neuroendocrine tumors); 5-[211At]astato-2'-deoxyuridine (various tumors); 211At-labeled biotin conjugates (various pretargeting applications); 211At-labeled octreotide (somatostatin receptor targeting); 211At-labeled monoclonal antibodies and fragments (various tumors); and 211At-labeled bisphosphonates (bone metastases). Newly formed astatine-211 is the subject of ongoing research in nuclear medicine.
It must be used quickly as it decays with a half-life of 7.2 hours; this is long enough to permit multistep labeling strategies. Astatine-211 has potential for targeted alpha-particle therapy, since it decays either via emission of an alpha particle (to bismuth-207), or via electron capture (to an extremely short-lived nuclide, polonium-211, which undergoes further alpha decay), very quickly reaching its stable granddaughter lead-207. Polonium X-rays emitted as a result of the electron capture branch, in the range of 77–92 keV, enable the tracking of astatine in animals and patients. Although astatine-210 has a slightly longer half-life, it is wholly unsuitable because it usually undergoes beta plus decay to the extremely toxic polonium-210. The principal medicinal difference between astatine-211 and iodine-131 (a radioactive iodine isotope also used in medicine) is that iodine-131 emits high-energy beta particles, and astatine does not. Beta particles have much greater penetrating power through tissues than do the much heavier alpha particles. An average alpha particle released by astatine-211 can travel up to 70 µm through surrounding tissues; an average-energy beta particle emitted by iodine-131 can travel nearly 30 times as far, to about 2 mm. The short half-life and limited penetrating power of alpha radiation through tissues offers advantages in situations where the "tumor burden is low and/or malignant cell populations are located in close proximity to essential normal tissues." Significant morbidity in cell culture models of human cancers has been achieved with from one to ten astatine-211 atoms bound per cell. Several obstacles have been encountered in the development of astatine-based radiopharmaceuticals for cancer treatment. World War II delayed research for close to a decade. Results of early experiments indicated that a cancer-selective carrier would need to be developed and it was not until the 1970s that monoclonal antibodies became available for this purpose. Unlike iodine, astatine shows a tendency to dehalogenate from molecular carriers such as these, particularly at sp3 carbon sites (less so from sp2 sites). Given the toxicity of astatine accumulated and retained in the body, this emphasized the need to ensure it remained attached to its host molecule. While astatine carriers that are slowly metabolized can be assessed for their efficacy, more rapidly metabolized carriers remain a significant obstacle to the evaluation of astatine in nuclear medicine. Mitigating the effects of astatine-induced radiolysis of labeling chemistry and carrier molecules is another area requiring further development. A practical application for astatine as a cancer treatment would potentially be suitable for a "staggering" number of patients; production of astatine in the quantities that would be required remains an issue. Animal studies show that astatine, similarly to iodine – although to a lesser extent, perhaps because of its slightly more metallic nature  – is preferentially (and dangerously) concentrated in the thyroid gland. Unlike iodine, astatine also shows a tendency to be taken up by the lungs and spleen, possibly because of in-body oxidation of At– to At+. If administered in the form of a radiocolloid it tends to concentrate in the liver. Experiments in rats and monkeys suggest that astatine-211 causes much greater damage to the thyroid gland than does iodine-131, with repetitive injection of the nuclide resulting in necrosis and cell dysplasia within the gland. 
Early research suggested that injection of astatine into female rodents caused morphological changes in breast tissue; this conclusion remained controversial for many years. General agreement was later reached that this was likely caused by the effect of breast tissue irradiation combined with hormonal changes due to irradiation of the ovaries. Trace amounts of astatine can be handled safely in fume hoods if they are well-aerated; biological uptake of the element must be avoided. See also Radiation protection
Astatine
An atom is the smallest unit of ordinary matter that forms a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are extremely small, typically around 100 picometers across. They are so small that accurately predicting their behavior using classical physics—as if they were tennis balls, for example—is not possible due to quantum effects. Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and a number of neutrons. Only the most common variety of hydrogen has no neutrons. More than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge, the electrons have a negative electric charge, and the neutrons have no electric charge. If the number of protons and electrons are equal, then the atom is electrically neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively – such atoms are called ions. The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay. The number of protons in the nucleus is the atomic number and it defines to which chemical element the atom belongs. For example, any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules or crystals. The ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature. Chemistry is the discipline that studies these changes. History of atomic theory In philosophy The basic idea that matter is made up of tiny, indivisible particles appears in many ancient cultures such as those of Greece and India. The word atom is derived from the ancient Greek word atomos (a combination of the negative term "a-" and "τομή," the term for "cut") that means "uncuttable". This ancient idea was based in philosophical reasoning rather than scientific reasoning; modern atomic theory is not based on these old concepts. Nonetheless, the term "atom" was used throughout the ages by thinkers who suspected that matter was ultimately granular in nature. It has since been discovered that "atoms" can be split, but the misnomer is still used. Dalton's law of multiple proportions In the early 1800s, the English chemist John Dalton compiled experimental data gathered by himself and other scientists and discovered a pattern now known as the "law of multiple proportions". He noticed that in chemical compounds which contain a particular chemical element, the content of that element in these compounds will differ by ratios of small whole numbers. This pattern suggested to Dalton that each chemical element combines with other elements by some basic and consistent unit of mass. For example, there are two types of tin oxide: one is a black powder that is 88.1% tin and 11.9% oxygen, and the other is a white powder that is 78.7% tin and 21.3% oxygen. 
Adjusting these figures, in the black oxide there is about 13.5 g of oxygen for every 100 g of tin, and in the white oxide there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. In these oxides, for every tin atom there are one or two oxygen atoms respectively (SnO and SnO2). As a second example, Dalton considered two iron oxides: a black powder which is 78.1% iron and 21.9% oxygen, and a red powder which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black oxide there is about 28 g of oxygen for every 100 g of iron, and in the red oxide there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. In these respective oxides, for every two atoms of iron, there are two or three atoms of oxygen (Fe2O2 and Fe2O3). As a final example: nitrous oxide is 63.3% nitrogen and 36.7% oxygen, nitric oxide is 44.05% nitrogen and 55.95% oxygen, and nitrogen dioxide is 29.5% nitrogen and 70.5% oxygen. Adjusting these figures, in nitrous oxide there is 80 g of oxygen for every 140 g of nitrogen, in nitric oxide there is about 160 g of oxygen for every 140 g of nitrogen, and in nitrogen dioxide there is 320 g of oxygen for every 140 g of nitrogen. 80, 160, and 320 form a ratio of 1:2:4. The respective formulas for these oxides are N2O, NO, and NO2. Kinetic theory of gases In the late 18th century, a number of scientists found that they could better explain the behavior of gases by describing them as collections of sub-microscopic particles and modelling their behavior using statistics and probability. Unlike Dalton's atomic theory, the kinetic theory of gases describes not how gases react chemically with each other to form compounds, but how they behave physically: diffusion, viscosity, conductivity, pressure, etc. Brownian motion In 1827, botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically, a phenomenon that became known as "Brownian motion". This was thought to be caused by water molecules knocking the grains about. In 1905, Albert Einstein proved the reality of these molecules and their motions by producing the first statistical physics analysis of Brownian motion. French physicist Jean Perrin used Einstein's work to experimentally determine the mass and dimensions of molecules, thereby providing physical evidence for the particle nature of matter. Discovery of the electron In 1897, J. J. Thomson discovered that cathode rays are not electromagnetic waves but made of particles that are 1,800 times lighter than hydrogen (the lightest atom). Thomson concluded that these particles came from the atoms within the cathode — they were subatomic particles. He called these new particles corpuscles but they were later renamed electrons. Thomson also showed that electrons were identical to particles given off by photoelectric and radioactive materials. It was quickly recognized that electrons are the particles that carry electric currents in metal wires. Thomson concluded that these electrons emerged from the very atoms of the cathode in his instruments, which meant that atoms are not indivisible as the name atomos suggests. Discovery of the nucleus J. J. Thomson thought that the negatively-charged electrons were distributed throughout the atom in a sea of positive charge that was distributed across the whole volume of the atom. This model is sometimes known as the plum pudding model. 
Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford did not expect to run into this same problem because alpha particles are much heavier than electrons. According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle, and the electrons are so lightweight they should be pushed aside effortlessly by the much heavier alpha particles. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully. Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with alpha particles. They spotted alpha particles being deflected by angles greater than 90°. To explain this, Rutherford proposed that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. Only such an intense concentration of charge could produce an electric field strong enough to deflect the alpha particles as observed. Discovery of isotopes While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table. The term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J. J. Thomson created a technique for isotope separation through his work on ionized gases, which subsequently led to the discovery of stable isotopes. Bohr model In 1913, the physicist Niels Bohr proposed a model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. This quantization was used to explain why the electrons' orbits are stable (given that normally, charges in acceleration, including circular motion, lose kinetic energy which is emitted as electromagnetic radiation, see synchrotron radiation) and why elements absorb and emit electromagnetic radiation in discrete spectra. Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory. These results refined Ernest Rutherford's and Antonius van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity. That it is equal to the atomic nuclear charge remains the accepted atomic model today.
As the chemical properties of the elements were known to largely repeat themselves according to the periodic law, in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus. The Bohr model of the atom was the first complete physical model of the atom. It described the overall structure of the atom, how atoms bond to each other, and predicted the spectral lines of hydrogen. Bohr's model was not perfect and was soon superseded by the more accurate Schrödinger model, but it was sufficient to evaporate any remaining doubts that matter is composed of atoms. For chemists, the idea of the atom had been a useful heuristic tool, but physicists had doubts as to whether matter really is made up of atoms as nobody had yet developed a complete physical model of the atom. The Schrödinger model The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of atomic properties. When a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split in a way correlated with the direction of an atom's angular momentum, or spin. As this spin direction is initially random, the beam would be expected to deflect in a random direction. Instead, the beam was split into two directional components, corresponding to the atomic spin being oriented up or down with respect to the magnetic field. In 1925, Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, Louis de Broglie had proposed the de Broglie hypothesis: that all particles behave like waves to some extent, and in 1926 Erwin Schrödinger used this idea to develop the Schrödinger equation, a mathematical model of the atom (wave mechanics) that described the electrons as three-dimensional waveforms rather than point particles. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed. Discovery of the neutron The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy. The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule. The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. 
Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus. Fission, high-energy physics and condensed matter In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product. A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's result were the first experimental nuclear fission. In 1944, Hahn received the Nobel Prize in Chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized. In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies. Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks. The standard model of particle physics was developed that so far has successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions. Structure Subatomic particles Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron. The electron is by far the least massive of these particles at , with a negative electrical charge and a size that is too small to be measured using available techniques. It was the lightest particle with a positive rest mass measured, until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details. Protons have a positive charge and a mass 1,836 times that of the electron, at . The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton. Neutrons have no electrical charge and have a free mass of 1,839 times the mass of the electron, or . Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of —although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick. In the Standard Model of physics, electrons are truly elementary particles with no internal structure, whereas protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +) and one down quark (with a charge of −). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles. 
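The charge bookkeeping behind that last statement can be written out explicitly. The fractional quark charges used below (+2/3 of the elementary charge for the up quark, -1/3 for the down quark) are the standard values; the snippet is a restatement of the sentence above, not new information from the article.

from fractions import Fraction

UP = Fraction(2, 3)     # up-quark charge, in units of the elementary charge e
DOWN = Fraction(-1, 3)  # down-quark charge, in units of e

proton_charge = 2 * UP + DOWN    # proton = two up quarks + one down quark
neutron_charge = UP + 2 * DOWN   # neutron = one up quark + two down quarks

print(f"proton charge : {proton_charge} e")   # 1 e
print(f"neutron charge: {neutron_charge} e")  # 0 e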
The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces. Nucleus All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately proportional to the cube root of A, the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other. Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay. The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. A nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. As a result, atoms with matching numbers of protons and neutrons are more stable against decay, but with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus. The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3 to 10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus. Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element. If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass-energy equivalence formula, E = Δmc², where Δm is the mass loss and c is the speed of light.
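As an illustration of the mass-energy relation just stated, the sketch below works through the simplest bound nucleus, the deuteron (one proton plus one neutron). The particle masses and the 931.494 MeV-per-dalton conversion are standard reference values rather than figures from this article; the result of roughly 2.2 MeV matches the 2.23 million eV quoted a little further on for splitting a deuterium nucleus.

# Rest masses in daltons (u); widely tabulated reference values
M_PROTON = 1.007276
M_NEUTRON = 1.008665
M_DEUTERON = 2.013553

U_TO_MEV = 931.494  # energy equivalent of one dalton, in MeV (from E = mc^2)

mass_defect_u = M_PROTON + M_NEUTRON - M_DEUTERON  # mass lost on fusion, in u
binding_energy_mev = mass_defect_u * U_TO_MEV      # E = (delta m) c^2

print(f"mass defect   : {mass_defect_u:.6f} u")        # ~0.002388 u
print(f"binding energy: {binding_energy_mev:.2f} MeV")  # ~2.22 MeV
print(f"ratio to the 13.6 eV hydrogen ionization energy: "
      f"{binding_energy_mev * 1e6 / 13.6:,.0f}")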
Such a mass deficit is part of the binding energy of the new nucleus; it is the non-recoverable loss of energy that causes the fused particles to remain together in a state that requires this energy to separate. The fusion of two nuclei that create larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease. That means fusion processes producing nuclei with atomic numbers higher than about 26 and atomic masses higher than about 60 are endothermic. These more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star. Electron cloud The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations. Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured. Only a discrete (or quantized) set of these orbitals exists around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation. Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines. The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 million eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals. Properties Nuclear properties By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element.
Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form, also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element oganesson. All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible. About 339 nuclides occur naturally on Earth, of which 252 (about 74%) have not been observed to decay, and are referred to as "stable isotopes". Only 90 nuclides are stable theoretically, while another 162 (bringing the total to 252) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 34 radioactive nuclides have half-lives longer than 100 million years, and are long-lived enough to have been present since the birth of the Solar System. This collection of 286 nuclides is known as the primordial nuclides. Finally, an additional 53 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14). For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.2 stable isotopes per element. Twenty-six elements have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes. Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. Of the 252 known stable nuclides, only four have both an odd number of protons and odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10 and nitrogen-14. Also, only four naturally occurring, radioactive odd-odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138 and tantalum-180m. Most odd-odd nuclei are highly unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects. Mass The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons). The actual mass of an atom at rest is often expressed in daltons (Da), also called the unified atomic mass unit (u). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66 × 10⁻²⁷ kg.
Hydrogen-1 (the lightest isotope of hydrogen which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 Da. The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 is roughly 14 Da), but this number will not be exactly an integer except (by definition) in the case of carbon-12. The heaviest stable atom is lead-208, with a mass of . As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about ). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 Da, and so a mole of carbon-12 atoms weighs exactly 0.012 kg. Shape and size Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus. This assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin. On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right). Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm. When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have been shown to occur for sulfur ions and chalcogen ions in pyrite-type compounds. Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope, although individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width. A single drop of water contains about 2 sextillion () atoms of oxygen, and twice the number of hydrogen atoms. A single carat diamond with a mass of contains about 10 sextillion (1022) atoms of carbon. If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple. Radioactive decay Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm. 
The most common forms of radioactive decay are: Alpha decay: this process is caused when the nucleus emits an alpha particle, which is a helium nucleus consisting of two protons and two neutrons. The result of the emission is a new element with a lower atomic number. Beta decay (and electron capture): these processes are regulated by the weak force, and result from a transformation of a neutron into a proton, or a proton into a neutron. The neutron to proton transition is accompanied by the emission of an electron and an antineutrino, while proton to neutron transition (except in electron capture) causes the emission of a positron and a neutrino. The electron or positron emissions are called beta particles. Beta decay either increases or decreases the atomic number of the nucleus by one. Electron capture is more common than positron emission, because it requires less energy. In this type of decay, an electron is absorbed by the nucleus, rather than a positron emitted from the nucleus. A neutrino is still emitted in this process, and a proton changes to a neutron. Gamma decay: this process results from a change in the energy level of the nucleus to a lower state, resulting in the emission of electromagnetic radiation. The excited state of a nucleus which results in gamma emission usually occurs following the emission of an alpha or a beta particle. Thus, gamma decay usually follows alpha or beta decay. Other more rare types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission which allows excited nuclei to lose energy in a different way, is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission. Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth. Magnetic moment Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin ½ ħ, or "spin-½". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin. The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field, but the most dominant contribution comes from electron spin. Due to the nature of electrons to obey the Pauli exclusion principle, in which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin up state and the other in the opposite, spin down state. 
Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with an even number of electrons. In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field. The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. Normally nuclei with spin are aligned in random directions because of thermal equilibrium, but for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging. Energy levels The potential energy of an electron in an atom is negative relative to its value when the distance from the nucleus goes to infinity; it reaches its minimum inside the nucleus, varying roughly in inverse proportion to the distance. In the quantum-mechanical model, a bound electron can occupy only a set of states centered on the nucleus, and each state corresponds to a specific energy level; see time-independent Schrödinger equation for a theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state (a stationary state), while an electron transition to a higher level results in an excited state. The electron's energy increases along with the principal quantum number n because the (average) distance to the nucleus increases. Dependence of the energy on ℓ, the azimuthal quantum number, is caused not by the electrostatic potential of the nucleus, but by interaction between electrons. For an electron to transition between two different states, e.g. ground state to first excited state, it must absorb or emit a photon at an energy matching the difference in the potential energy of those levels, according to the Niels Bohr model; this can be precisely calculated by the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties. The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum. Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons, and other factors. When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level.
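The proportionality between photon energy and frequency noted above fixes which spectral line corresponds to a given pair of levels. A minimal Python sketch of that conversion follows, using rounded values of the Planck constant, the speed of light, and the electronvolt; the 2 eV example gap is hypothetical and chosen only because it falls in the visible range quoted earlier.

```python
# Convert an energy difference between two atomic levels (in eV)
# into the frequency and wavelength of the corresponding photon.
PLANCK = 6.626e-34         # J*s (rounded)
LIGHT_SPEED = 2.998e8      # m/s (rounded)
EV_TO_JOULE = 1.602e-19    # J per eV (rounded)

def photon_from_level_gap(delta_e_ev):
    energy_j = delta_e_ev * EV_TO_JOULE
    frequency = energy_j / PLANCK          # E = h * nu
    wavelength = LIGHT_SPEED / frequency   # lambda = c / nu
    return frequency, wavelength

nu, lam = photon_from_level_gap(2.0)       # a hypothetical 2 eV transition
print(f"{nu:.3e} Hz, {lam*1e9:.0f} nm")    # ~4.8e14 Hz, ~620 nm (visible light)
```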
Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a view that does not include the continuous spectrum in the background, instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined. Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin-orbit coupling, which is an interaction between the spin and motion of the outermost electron. When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines. The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect. If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band. Valence and bonding behavior Valency is the combining power of an element. It is determined by the number of bonds it can form to other atoms or groups. The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells. For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one-electron more than a filled shell, and others that are one-electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. Many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds. The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. 
(The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases. States Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas. Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond. Gaseous allotropes exist as well, such as dioxygen and ozone. At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale. This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior. Identification While atoms are too small to be seen, devices such as the scanning tunneling microscope (STM) enable their visualization at the surfaces of solids. The microscope uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would be insurmountable in the classical perspective. Electrons tunnel through the vacuum between two biased electrodes, providing a tunneling current that is exponentially dependent on their separation. One electrode is a sharp tip ideally ending with a single atom. At each point of the scan of the surface the tip's height is adjusted so as to keep the tunneling current at a set value. How much the tip moves to and away from the surface is interpreted as the height profile. For low bias, the microscope images the averaged electron orbitals across closely packed energy levels—the local density of the electronic states near the Fermi level. Because of the distances involved, both electrodes need to be extremely stable; only then periodicities can be observed that correspond to individual atoms. The method alone is not chemically specific, and cannot identify the atomic species present at the surface. Atoms can be easily identified by their mass. If an atom is ionized by removing one of its electrons, its trajectory when it passes through a magnetic field will bend. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis. The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry. Electron emission techniques such as X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES), which measure the binding energies of the core electrons, are used to identify the atomic species present in a sample in a non-destructive way. With proper focusing both can be made area-specific. 
Another such method is electron energy loss spectroscopy (EELS), which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample. Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. These colors can be replicated using a gas-discharge lamp containing the same element. Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth. Origin and current state Baryonic matter forms about 4% of the total energy density of the observable Universe, with an average density of about 0.25 particles/m³ (mostly protons and electrons). Within a galaxy such as the Milky Way, particles have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10⁵ to 10⁹ atoms/m³. The Sun is believed to be inside the Local Bubble, so the density in the solar neighborhood is only about 10³ atoms/m³. Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's baryonic matter is concentrated inside stars, where conditions are unfavorable for atomic matter. The total baryonic mass is about 10% of the mass of the galaxy; the remainder of the mass is an unknown dark matter. High temperature inside stars makes most "atoms" fully ionized, that is, separates all electrons from the nuclei. In stellar remnants—with the exception of their surface layers—immense pressure makes electron shells impossible. Formation Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. In about three minutes, Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron. The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly, bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei. Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple alpha process) the sequence of elements from carbon up to iron; see stellar nucleosynthesis for details. Isotopes such as lithium-6, as well as some beryllium and boron, are generated in space through cosmic ray spallation. This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected. Elements heavier than iron were produced in supernovae and colliding neutron stars through the r-process, and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei.
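The slow conversion of unstable heavy nuclei into decay products, which the next paragraph touches on, follows the half-life law quoted earlier: the surviving fraction of a parent isotope halves with every half-life. The Python sketch below is a minimal, purely illustrative restatement of that arithmetic; the numbers in the example calls are hypothetical.

```python
import math

# Exponential decay: the fraction of a parent isotope remaining after an
# elapsed time is 0.5 ** (elapsed / half_life); the rest has decayed.
def fraction_remaining(elapsed, half_life):
    return 0.5 ** (elapsed / half_life)

def elapsed_half_lives(remaining_fraction):
    """How many half-lives have passed for a given surviving fraction."""
    return -math.log2(remaining_fraction)

print(fraction_remaining(2, 1))    # after two half-lives, 0.25 of the sample remains
print(elapsed_half_lives(0.125))   # 12.5% remaining -> 3.0 half-lives have elapsed
```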
Elements such as lead formed largely through the radioactive decay of heavier elements. Earth Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating. Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay. There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere. Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions. Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth. Transuranic elements have radioactive lifetimes shorter than the current age of the Earth and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust. Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore. The Earth contains approximately atoms. Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals. This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter. Rare and theoretical forms Superheavy elements All nuclides with atomic numbers higher than 82 (lead) are known to be radioactive. No nuclide with an atomic number exceeding 92 (uranium) exists on Earth as a primordial nuclide, and heavier elements generally have shorter half-lives. Nevertheless, an "island of stability" encompassing relatively long-lived isotopes of superheavy elements with atomic numbers 110 to 114 might exist. Predictions for the half-life of the most stable nuclide on the island range from a few minutes to millions of years. In any case, superheavy elements (with Z > 104) would not exist due to increasing Coulomb repulsion (which results in spontaneous fission with increasingly short half-lives) in the absence of any stabilizing effects. Exotic matter Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The first causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature. 
In 1996, the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva. Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test fundamental predictions of physics.
Atom
Aluminium (or aluminum in American English and Canadian English) is a chemical element with the symbol Al and atomic number 13. Aluminium has a density lower than those of other common metals, at approximately one third that of steel. It has a great affinity towards oxygen, and forms a protective layer of oxide on the surface when exposed to air. Aluminium visually resembles silver, both in its color and in its great ability to reflect light. It is soft, non-magnetic and ductile. It has one stable isotope, 27Al; this isotope is very common, making aluminium the twelfth most common element in the Universe. The radioactivity of 26Al is used in radiodating. Chemically, aluminium is a post-transition metal in the boron group; as is common for the group, aluminium forms compounds primarily in the +3 oxidation state. The aluminium cation Al3+ is small and highly charged; as such, it is polarizing, and bonds aluminium forms tend towards covalency. The strong affinity towards oxygen leads to aluminium's common association with oxygen in nature in the form of oxides; for this reason, aluminium is found on Earth primarily in rocks in the crust, where it is the third most abundant element after oxygen and silicon, rather than in the mantle, and virtually never as the free metal. The discovery of aluminium was announced in 1825 by Danish physicist Hans Christian Ørsted. The first industrial production of aluminium was initiated by French chemist Henri Étienne Sainte-Claire Deville in 1856. Aluminium became much more available to the public with the Hall–Héroult process developed independently by French engineer Paul Héroult and American engineer Charles Martin Hall in 1886, and the mass production of aluminium led to its extensive use in industry and everyday life. In World Wars I and II, aluminium was a crucial strategic resource for aviation. In 1954, aluminium became the most produced non-ferrous metal, surpassing copper. In the 21st century, most aluminium was consumed in transportation, engineering, construction, and packaging in the United States, Western Europe, and Japan. Despite its prevalence in the environment, no living organism is known to use aluminium salts metabolically, but aluminium is well tolerated by plants and animals. Because of the abundance of these salts, the potential for a biological role for them is of continuing interest, and studies continue. Physical characteristics Isotopes Of aluminium isotopes, only is stable. This situation is common for elements with an odd atomic number. It is the only primordial aluminium isotope, i.e. the only one that has existed on Earth in its current form since the formation of the planet. Nearly all aluminium on Earth is present as this isotope, which makes it a mononuclidic element and means that its standard atomic weight is virtually the same as that of the isotope. This makes aluminium very useful in nuclear magnetic resonance (NMR), as its single stable isotope has a high NMR sensitivity. The standard atomic weight of aluminium is low in comparison with many other metals. All other isotopes of aluminium are radioactive. The most stable of these is 26Al: while it was present along with stable 27Al in the interstellar medium from which the Solar System formed, having been produced by stellar nucleosynthesis as well, its half-life is only 717,000 years and therefore a detectable amount has not survived since the formation of the planet. 
However, minute traces of 26Al are produced from argon in the atmosphere by spallation caused by cosmic ray protons. The ratio of 26Al to 10Be has been used for radiodating of geological processes over 105 to 106 year time scales, in particular transport, deposition, sediment storage, burial times, and erosion. Most meteorite scientists believe that the energy released by the decay of 26Al was responsible for the melting and differentiation of some asteroids after their formation 4.55 billion years ago. The remaining isotopes of aluminium, with mass numbers ranging from 22 to 43, all have half-lives well under an hour. Three metastable states are known, all with half-lives under a minute. Electron shell An aluminium atom has 13 electrons, arranged in an electron configuration of [Ne] 3s2 3p1, with three electrons beyond a stable noble gas configuration. Accordingly, the combined first three ionization energies of aluminium are far lower than the fourth ionization energy alone. Such an electron configuration is shared with the other well-characterized members of its group, boron, gallium, indium, and thallium; it is also expected for nihonium. Aluminium can relatively easily surrender its three outermost electrons in many chemical reactions (see below). The electronegativity of aluminium is 1.61 (Pauling scale). A free aluminium atom has a radius of 143 pm. With the three outermost electrons removed, the radius shrinks to 39 pm for a 4-coordinated atom or 53.5 pm for a 6-coordinated atom. At standard temperature and pressure, aluminium atoms (when not affected by atoms of other elements) form a face-centered cubic crystal system bound by metallic bonding provided by atoms' outermost electrons; hence aluminium (at these conditions) is a metal. This crystal system is shared by many other metals, such as lead and copper; the size of a unit cell of aluminium is comparable to that of those other metals. The system, however, is not shared by the other members of its group; boron has ionization energies too high to allow metallization, thallium has a hexagonal close-packed structure, and gallium and indium have unusual structures that are not close-packed like those of aluminium and thallium. The few electrons that are available for metallic bonding in aluminium metal are a probable cause for it being soft with a low melting point and low electrical resistivity. Bulk Aluminium metal has an appearance ranging from silvery white to dull gray, depending on the surface roughness. A fresh film of aluminium serves as a good reflector (approximately 92%) of visible light and an excellent reflector (as much as 98%) of medium and far infrared radiation. Aluminium mirrors are the most reflective of all metal mirrors for the near ultraviolet and far infrared light, and one of the most reflective in the visible spectrum, nearly on par with silver, and the two therefore look similar. Aluminium is also good at reflecting solar radiation, although prolonged exposure to sunlight in air adds wear to the surface of the metal; this may be prevented if aluminium is anodized, which adds a protective layer of oxide on the surface. The density of aluminium is 2.70 g/cm3, about 1/3 that of steel, much lower than other commonly encountered metals, making aluminium parts easily identifiable through their lightness. Aluminium's low density compared to most other metals arises from the fact that its nuclei are much lighter, while difference in the unit cell size does not compensate for this difference. 
The only lighter metals are the metals of groups 1 and 2, which apart from beryllium and magnesium are too reactive for structural use (and beryllium is very toxic). Aluminium is not as strong or stiff as steel, but the low density makes up for this in the aerospace industry and for many other applications where light weight and relatively high strength are crucial. Pure aluminium is quite soft and lacking in strength. In most applications various aluminium alloys are used instead because of their higher strength and hardness. The yield strength of pure aluminium is 7–11 MPa, while aluminium alloys have yield strengths ranging from 200 MPa to 600 MPa. Aluminium is ductile, with a percent elongation of 50-70%, and malleable allowing it to be easily drawn and extruded. It is also easily machined and cast. Aluminium is an excellent thermal and electrical conductor, having around 60% the conductivity of copper, both thermal and electrical, while having only 30% of copper's density. Aluminium is capable of superconductivity, with a superconducting critical temperature of 1.2 kelvin and a critical magnetic field of about 100 gauss (10 milliteslas). It is paramagnetic and thus essentially unaffected by static magnetic fields. The high electrical conductivity, however, means that it is strongly affected by alternating magnetic fields through the induction of eddy currents. Chemistry Aluminium combines characteristics of pre- and post-transition metals. Since it has few available electrons for metallic bonding, like its heavier group 13 congeners, it has the characteristic physical properties of a post-transition metal, with longer-than-expected interatomic distances. Furthermore, as Al3+ is a small and highly charged cation, it is strongly polarizing and bonding in aluminium compounds tends towards covalency; this behavior is similar to that of beryllium (Be2+), and the two display an example of a diagonal relationship. The underlying core under aluminium's valence shell is that of the preceding noble gas, whereas those of its heavier congeners gallium, indium, thallium, and nihonium also include a filled d-subshell and in some cases a filled f-subshell. Hence, the inner electrons of aluminium shield the valence electrons almost completely, unlike those of aluminium's heavier congeners. As such, aluminium is the most electropositive metal in its group, and its hydroxide is in fact more basic than that of gallium. Aluminium also bears minor similarities to the metalloid boron in the same group: AlX3 compounds are valence isoelectronic to BX3 compounds (they have the same valence electronic structure), and both behave as Lewis acids and readily form adducts. Additionally, one of the main motifs of boron chemistry is regular icosahedral structures, and aluminium forms an important part of many icosahedral quasicrystal alloys, including the Al–Zn–Mg class. Aluminium has a high chemical affinity to oxygen, which renders it suitable for use as a reducing agent in the thermite reaction. A fine powder of aluminium metal reacts explosively on contact with liquid oxygen; under normal conditions, however, aluminium forms a thin oxide layer (~5 nm at room temperature) that protects the metal from further corrosion by oxygen, water, or dilute acid, a process termed passivation. Because of its general resistance to corrosion, aluminium is one of the few metals that retains silvery reflectance in finely powdered form, making it an important component of silver-colored paints. 
Aluminium is not attacked by oxidizing acids because of its passivation. This allows aluminium to be used to store reagents such as nitric acid, concentrated sulfuric acid, and some organic acids. In hot concentrated hydrochloric acid, aluminium reacts with water, evolving hydrogen; in aqueous sodium hydroxide or potassium hydroxide at room temperature it reacts to form aluminates. Protective passivation is negligible under these conditions. Aqua regia also dissolves aluminium. Aluminium is corroded by dissolved chlorides, such as common sodium chloride, which is why household plumbing is never made from aluminium. The oxide layer on aluminium is also destroyed by contact with mercury due to amalgamation or with salts of some electropositive metals. As such, the strongest aluminium alloys are less corrosion-resistant due to galvanic reactions with alloyed copper, and aluminium's corrosion resistance is greatly reduced by aqueous salts, particularly in the presence of dissimilar metals. Aluminium reacts with most nonmetals upon heating, forming compounds such as aluminium nitride (AlN), aluminium sulfide (Al2S3), and the aluminium halides (AlX3). It also forms a wide range of intermetallic compounds involving metals from every group on the periodic table. Inorganic compounds The vast majority of compounds, including all aluminium-containing minerals and all commercially significant aluminium compounds, feature aluminium in the oxidation state 3+. The coordination number of such compounds varies, but generally Al3+ is either six- or four-coordinate. Almost all compounds of aluminium(III) are colorless. In aqueous solution, Al3+ exists as the hexaaqua cation [Al(H2O)6]3+, which has an approximate Ka of 10⁻⁵. Such solutions are acidic as this cation can act as a proton donor and progressively hydrolyze until a precipitate of aluminium hydroxide, Al(OH)3, forms. This is useful for clarification of water, as the precipitate nucleates on suspended particles in the water, hence removing them. Increasing the pH even further leads to the hydroxide dissolving again as aluminate, [Al(H2O)2(OH)4]−, is formed. Aluminium hydroxide forms both salts and aluminates and dissolves in acid and alkali, as well as on fusion with acidic and basic oxides. This behavior of Al(OH)3 is termed amphoterism and is characteristic of weakly basic cations that form insoluble hydroxides and whose hydrated species can also donate their protons. One effect of this is that aluminium salts with weak acids are hydrolyzed in water to the aquated hydroxide and the corresponding nonmetal hydride: for example, aluminium sulfide yields hydrogen sulfide. However, some salts like aluminium carbonate exist in aqueous solution but are unstable as such; and only incomplete hydrolysis takes place for salts with strong acids, such as the halides, nitrate, and sulfate. For similar reasons, anhydrous aluminium salts cannot be made by heating their "hydrates": hydrated aluminium chloride is in fact not AlCl3·6H2O but [Al(H2O)6]Cl3, and the Al–O bonds are so strong that heating is not sufficient to break them and form Al–Cl bonds instead: 2 [Al(H2O)6]Cl3 → Al2O3 + 6 HCl + 9 H2O. All four trihalides are well known. Unlike the structures of the three heavier trihalides, aluminium fluoride (AlF3) features six-coordinate aluminium, which explains its involatility and insolubility as well as high heat of formation.
Each aluminium atom is surrounded by six fluorine atoms in a distorted octahedral arrangement, with each fluorine atom being shared between the corners of two octahedra. Such {AlF6} units also exist in complex fluorides such as cryolite, Na3AlF6. AlF3 melts at and is made by reaction of aluminium oxide with hydrogen fluoride gas at . With heavier halides, the coordination numbers are lower. The other trihalides are dimeric or polymeric with tetrahedral four-coordinate aluminium centers. Aluminium trichloride (AlCl3) has a layered polymeric structure below its melting point of but transforms on melting to Al2Cl6 dimers. At higher temperatures those increasingly dissociate into trigonal planar AlCl3 monomers similar to the structure of BCl3. Aluminium tribromide and aluminium triiodide form Al2X6 dimers in all three phases and hence do not show such significant changes of properties upon phase change. These materials are prepared by treating aluminium metal with the halogen. The aluminium trihalides form many addition compounds or complexes; their Lewis acidic nature makes them useful as catalysts for the Friedel–Crafts reactions. Aluminium trichloride has major industrial uses involving this reaction, such as in the manufacture of anthraquinones and styrene; it is also often used as the precursor for many other aluminium compounds and as a reagent for converting nonmetal fluorides into the corresponding chlorides (a transhalogenation reaction). Aluminium forms one stable oxide with the chemical formula Al2O3, commonly called alumina. It can be found in nature in the mineral corundum, α-alumina; there is also a γ-alumina phase. Its crystalline form, corundum, is very hard (Mohs hardness 9), has a high melting point of , has very low volatility, is chemically inert, and a good electrical insulator, it is often used in abrasives (such as toothpaste), as a refractory material, and in ceramics, as well as being the starting material for the electrolytic production of aluminium metal. Sapphire and ruby are impure corundum contaminated with trace amounts of other metals. The two main oxide-hydroxides, AlO(OH), are boehmite and diaspore. There are three main trihydroxides: bayerite, gibbsite, and nordstrandite, which differ in their crystalline structure (polymorphs). Many other intermediate and related structures are also known. Most are produced from ores by a variety of wet processes using acid and base. Heating the hydroxides leads to formation of corundum. These materials are of central importance to the production of aluminium and are themselves extremely useful. Some mixed oxide phases are also very useful, such as spinel (MgAl2O4), Na-β-alumina (NaAl11O17), and tricalcium aluminate (Ca3Al2O6, an important mineral phase in Portland cement). The only stable chalcogenides under normal conditions are aluminium sulfide (Al2S3), selenide (Al2Se3), and telluride (Al2Te3). All three are prepared by direct reaction of their elements at about and quickly hydrolyze completely in water to yield aluminium hydroxide and the respective hydrogen chalcogenide. 
As aluminium is a small atom relative to these chalcogens, these have four-coordinate tetrahedral aluminium with various polymorphs having structures related to wurtzite, with two-thirds of the possible metal sites occupied either in an orderly (α) or random (β) fashion; the sulfide also has a γ form related to γ-alumina, and an unusual high-temperature hexagonal form where half the aluminium atoms have tetrahedral four-coordination and the other half have trigonal bipyramidal five-coordination. Four pnictides – aluminium nitride (AlN), aluminium phosphide (AlP), aluminium arsenide (AlAs), and aluminium antimonide (AlSb) – are known. They are all III-V semiconductors isoelectronic to silicon and germanium, all of which but AlN have the zinc blende structure. All four can be made by high-temperature (and possibly high-pressure) direct reaction of their component elements. Aluminium alloys well with most other metals (with the exception of most alkali metals and group 13 metals) and over 150 intermetallics with other metals are known. Preparation involves heating fixed metals together in certain proportion, followed by gradual cooling and annealing. Bonding in them is predominantly metallic and the crystal structure primarily depends on efficiency of packing. There are few compounds with lower oxidation states. A few aluminium(I) compounds exist: AlF, AlCl, AlBr, and AlI exist in the gaseous phase when the respective trihalide is heated with aluminium, and at cryogenic temperatures. A stable derivative of aluminium monoiodide is the cyclic adduct formed with triethylamine, Al4I4(NEt3)4. Al2O and Al2S also exist but are very unstable. Very simple aluminium(II) compounds are invoked or observed in the reactions of Al metal with oxidants. For example, aluminium monoxide, AlO, has been detected in the gas phase after explosion and in stellar absorption spectra. More thoroughly investigated are compounds of the formula R4Al2 which contain an Al–Al bond and where R is a large organic ligand. Organoaluminium compounds and related hydrides A variety of compounds of empirical formula AlR3 and AlR1.5Cl1.5 exist. The aluminium trialkyls and triaryls are reactive, volatile, and colorless liquids or low-melting solids. They catch fire spontaneously in air and react with water, thus necessitating precautions when handling them. They often form dimers, unlike their boron analogues, but this tendency diminishes for branched-chain alkyls (e.g. Pri, Bui, Me3CCH2); for example, triisobutylaluminium exists as an equilibrium mixture of the monomer and dimer. These dimers, such as trimethylaluminium (Al2Me6), usually feature tetrahedral Al centers formed by dimerization with some alkyl group bridging between both aluminium atoms. They are hard acids and react readily with ligands, forming adducts. In industry, they are mostly used in alkene insertion reactions, as discovered by Karl Ziegler, most importantly in "growth reactions" that form long-chain unbranched primary alkenes and alcohols, and in the low-pressure polymerization of ethene and propene. There are also some heterocyclic and cluster organoaluminium compounds involving Al–N bonds. The industrially most important aluminium hydride is lithium aluminium hydride (LiAlH4), which is used in as a reducing agent in organic chemistry. It can be produced from lithium hydride and aluminium trichloride. The simplest hydride, aluminium hydride or alane, is not as important. 
It is a polymer with the formula (AlH3)n, in contrast to the corresponding boron hydride that is a dimer with the formula (BH3)2. Natural occurrence Space Aluminium's per-particle abundance in the Solar System is 3.15 ppm (parts per million). It is the twelfth most abundant of all elements and third most abundant among the elements that have odd atomic numbers, after hydrogen and nitrogen. The only stable isotope of aluminium, 27Al, is the eighteenth most abundant nucleus in the Universe. It is created almost entirely after fusion of carbon in massive stars that will later become Type II supernovas: this fusion creates 26Mg, which, upon capturing free protons and neutrons becomes aluminium. Some smaller quantities of 27Al are created in hydrogen burning shells of evolved stars, where 26Mg can capture free protons. Essentially all aluminium now in existence is 27Al. 26Al was present in the early Solar System with abundance of 0.005% relative to 27Al but its half-life of 728,000 years is too short for any original nuclei to survive; 26Al is therefore extinct. Unlike for 27Al, hydrogen burning is the primary source of 26Al, with the nuclide emerging after a nucleus of 25Mg catches a free proton. However, the trace quantities of 26Al that do exist are the most common gamma ray emitter in the interstellar gas; if the original 26Al were still present, gamma ray maps of the Milky Way would be brighter. Earth Overall, the Earth is about 1.59% aluminium by mass (seventh in abundance by mass). Aluminium occurs in greater proportion in the Earth's crust than in the Universe at large, because aluminium easily forms the oxide and becomes bound into rocks and stays in the Earth's crust, while less reactive metals sink to the core. In the Earth's crust, aluminium is the most abundant metallic element (8.23% by mass) and the third most abundant of all elements (after oxygen and silicon). A large number of silicates in the Earth's crust contain aluminium. In contrast, the Earth's mantle is only 2.38% aluminium by mass. Aluminium also occurs in seawater at a concentration of 2 μg/kg. Because of its strong affinity for oxygen, aluminium is almost never found in the elemental state; instead it is found in oxides or silicates. Feldspars, the most common group of minerals in the Earth's crust, are aluminosilicates. Aluminium also occurs in the minerals beryl, cryolite, garnet, spinel, and turquoise. Impurities in Al2O3, such as chromium and iron, yield the gemstones ruby and sapphire, respectively. Native aluminium metal is extremely rare and can only be found as a minor phase in low oxygen fugacity environments, such as the interiors of certain volcanoes. Native aluminium has been reported in cold seeps in the northeastern continental slope of the South China Sea. It is possible that these deposits resulted from bacterial reduction of tetrahydroxoaluminate Al(OH)4−. Although aluminium is a common and widespread element, not all aluminium minerals are economically viable sources of the metal. Almost all metallic aluminium is produced from the ore bauxite (AlOx(OH)3–2x). Bauxite occurs as a weathering product of low iron and silica bedrock in tropical climatic conditions. In 2017, most bauxite was mined in Australia, China, Guinea, and India. History The history of aluminium has been shaped by usage of alum. The first written record of alum, made by Greek historian Herodotus, dates back to the 5th century BCE. The ancients are known to have used alum as a dyeing mordant and for city defense. 
After the Crusades, alum, an indispensable good in the European fabric industry, was a subject of international commerce; it was imported to Europe from the eastern Mediterranean until the mid-15th century. The nature of alum remained unknown. Around 1530, Swiss physician Paracelsus suggested alum was a salt of an earth of alum. In 1595, German doctor and chemist Andreas Libavius experimentally confirmed this. In 1722, German chemist Friedrich Hoffmann announced his belief that the base of alum was a distinct earth. In 1754, German chemist Andreas Sigismund Marggraf synthesized alumina by boiling clay in sulfuric acid and subsequently adding potash. Attempts to produce aluminium metal date back to 1760. The first successful attempt, however, was completed in 1824 by Danish physicist and chemist Hans Christian Ørsted. He reacted anhydrous aluminium chloride with potassium amalgam, yielding a lump of metal looking similar to tin. He presented his results and demonstrated a sample of the new metal in 1825. In 1827, German chemist Friedrich Wöhler repeated Ørsted's experiments but did not identify any aluminium. (The reason for this inconsistency was only discovered in 1921.) He conducted a similar experiment in the same year by mixing anhydrous aluminium chloride with potassium and produced a powder of aluminium. In 1845, he was able to produce small pieces of the metal and described some physical properties of this metal. For many years thereafter, Wöhler was credited as the discoverer of aluminium. As Wöhler's method could not yield great quantities of aluminium, the metal remained rare; its cost exceeded that of gold. The first industrial production of aluminium was established in 1856 by French chemist Henri Etienne Sainte-Claire Deville and companions. Deville had discovered that aluminium trichloride could be reduced by sodium, which was more convenient and less expensive than potassium, which Wöhler had used. Even then, aluminium was still not of great purity and produced aluminium differed in properties by sample. The first industrial large-scale production method was independently developed in 1886 by French engineer Paul Héroult and American engineer Charles Martin Hall; it is now known as the Hall–Héroult process. The Hall–Héroult process converts alumina into metal. Austrian chemist Carl Joseph Bayer discovered a way of purifying bauxite to yield alumina, now known as the Bayer process, in 1889. Modern production of the aluminium metal is based on the Bayer and Hall–Héroult processes. Prices of aluminium dropped and aluminium became widely used in jewelry, everyday items, eyeglass frames, optical instruments, tableware, and foil in the 1890s and early 20th century. Aluminium's ability to form hard yet light alloys with other metals provided the metal with many uses at the time. During World War I, major governments demanded large shipments of aluminium for light strong airframes; during World War II, demand by major governments for aviation was even higher. By the mid-20th century, aluminium had become a part of everyday life and an essential component of housewares. In 1954, production of aluminium surpassed that of copper, historically second in production only to iron, making it the most produced non-ferrous metal. During the mid-20th century, aluminium emerged as a civil engineering material, with building applications in both basic construction and interior finish work, and increasingly being used in military engineering, for both airplanes and land armor vehicle engines. 
Earth's first artificial satellite, launched in 1957, consisted of two separate aluminium semi-spheres joined together, and all subsequent space vehicles have used aluminium to some extent. The aluminium can was invented in 1956 and employed as a container for drinks in 1958. Throughout the 20th century, the production of aluminium rose rapidly: while the world production of aluminium in 1900 was 6,800 metric tons, the annual production first exceeded 100,000 metric tons in 1916; 1,000,000 tons in 1941; 10,000,000 tons in 1971. In the 1970s, the increased demand for aluminium made it an exchange commodity; it entered the London Metal Exchange, the oldest industrial metal exchange in the world, in 1978. The output continued to grow: the annual production of aluminium exceeded 50,000,000 metric tons in 2013. The real price for aluminium declined from $14,000 per metric ton in 1900 to $2,340 in 1948 (in 1998 United States dollars). Extraction and processing costs were lowered by technological progress and economies of scale. However, the need to exploit lower-grade, poorer-quality deposits and fast-increasing input costs (above all, energy) increased the net cost of aluminium; the real price began to grow in the 1970s with the rise of energy costs. Production moved from the industrialized countries to countries where production was cheaper. Production costs in the late 20th century changed because of advances in technology, lower energy prices, exchange rates of the United States dollar, and alumina prices. The BRIC countries' combined share in primary production and primary consumption grew substantially in the first decade of the 21st century. China is accumulating an especially large share of the world's production thanks to an abundance of resources, cheap energy, and governmental stimuli; it also increased its consumption share from 2% in 1972 to 40% in 2010. In the United States, Western Europe, and Japan, most aluminium was consumed in transportation, engineering, construction, and packaging. In 2021, prices for industrial metals such as aluminium soared to near-record levels as energy shortages in China drove up the cost of electricity. Etymology The names aluminium and aluminum are derived from the word alumine, an obsolete term for alumina, a naturally occurring oxide of aluminium. Alumine was borrowed from French, which in turn derived it from alumen, the classical Latin name for alum, the mineral from which it was collected. The Latin word alumen stems from the Proto-Indo-European root *alu- meaning "bitter" or "beer". Coinage British chemist Humphry Davy, who performed a number of experiments aimed at isolating the metal, is credited as the person who named the element. The first name proposed for the metal to be isolated from alum was alumium, which Davy suggested in an 1808 article on his electrochemical research, published in Philosophical Transactions of the Royal Society. It appeared that the name was coined from the English word alum and the Latin suffix -ium; however, it was customary at the time that the elements should have names originating in the Latin language, and as such, this name was not adopted universally. This name was criticized by contemporary chemists from France, Germany, and Sweden, who insisted the metal should be named for the oxide, alumina, from which it would be isolated.
The English word alum does not directly reference the Latin language, whereas alumine/alumina easily references the Latin word alumen (upon declension, alumen changes to alumin-). One example was a writing in French by Swedish chemist Jöns Jacob Berzelius titled Essai sur la Nomenclature chimique, published in July 1811; in this essay, among other things, Berzelius used the name aluminium for the element that would be synthesized from alum. (Another article in the same journal issue also refers to the metal whose oxide forms the basis of sapphire as aluminium.) A January 1811 summary of one of Davy's lectures at the Royal Society mentioned the name aluminium as a possibility. The following year, Davy published a chemistry textbook in which he used the spelling aluminum. Both spellings have coexisted since; however, their usage has split by region: aluminum is the primary spelling in the United States and Canada, while aluminium is used in the rest of the English-speaking world. Spelling In 1812, British scientist Thomas Young wrote an anonymous review of Davy's book, in which he proposed the name aluminium instead of aluminum, which he felt had a "less classical sound". This name did catch on: while the aluminum spelling was occasionally used in Britain, the American scientific language used aluminium from the start. Most scientists throughout the world used aluminium in the 19th century, and it was entrenched in many other European languages, such as French, German, or Dutch. In 1828, American lexicographer Noah Webster used exclusively the aluminum spelling in his American Dictionary of the English Language. In the 1830s, the aluminum spelling started to gain usage in the United States; by the 1860s, it had become the more common spelling there outside science. In 1892, Hall used the aluminum spelling in his advertising handbill for his new electrolytic method of producing the metal, despite his constant use of the aluminium spelling in all the patents he filed between 1886 and 1903. It remains unknown whether this spelling was introduced by mistake or intentionally; however, Hall preferred aluminum since its introduction because it resembled platinum, the name of a prestigious metal. By 1890, both spellings had been common in the U.S. overall, the aluminium spelling being slightly more common; by 1895, the situation had reversed; by 1900, aluminum had become twice as common as aluminium; during the following decade, the aluminum spelling dominated American usage. In 1925, the American Chemical Society adopted this spelling. The International Union of Pure and Applied Chemistry (IUPAC) adopted aluminium as the standard international name for the element in 1990. In 1993, they recognized aluminum as an acceptable variant; the most recent 2005 edition of the IUPAC nomenclature of inorganic chemistry acknowledges this spelling as well. IUPAC official publications use the aluminium spelling as primary but list both where appropriate. Production and refinement The production of aluminium starts with the extraction of bauxite rock from the ground. The bauxite is processed and transformed using the Bayer process into alumina, which is then processed using the Hall–Héroult process, resulting in the final aluminium metal. Aluminium production is highly energy-consuming, and so the producers tend to locate smelters in places where electric power is both plentiful and inexpensive.
As of 2019, the world's largest smelters of aluminium are located in China, India, Russia, Canada, and the United Arab Emirates, while China is by far the top producer of aluminium with a world share of fifty-five percent. According to the International Resource Panel's Metal Stocks in Society report, the global per capita stock of aluminium in use in society (i.e. in cars, buildings, electronics, etc.) is . Much of this is in more-developed countries ( per capita) rather than less-developed countries ( per capita). Bayer process Bauxite is converted to alumina by the Bayer process. Bauxite is blended for uniform composition and then ground. The resulting slurry is mixed with a hot solution of sodium hydroxide; the mixture is then treated in a digester vessel at a pressure well above atmospheric, dissolving the aluminium hydroxide in bauxite while converting impurities into relatively insoluble compounds: Al(OH)3 + NaOH → Na[Al(OH)4]. After this reaction, the slurry is at a temperature above its atmospheric boiling point. It is cooled by removing steam as pressure is reduced. The bauxite residue is separated from the solution and discarded. The solution, free of solids, is seeded with small crystals of aluminium hydroxide; this causes decomposition of the [Al(OH)4]− ions to aluminium hydroxide. After about half of the aluminium has precipitated, the mixture is sent to classifiers. Small crystals of aluminium hydroxide are collected to serve as seeding agents; coarse particles are converted to alumina by heating; the excess solution is removed by evaporation, (if needed) purified, and recycled. Hall–Héroult process The conversion of alumina to aluminium metal is achieved by the Hall–Héroult process. In this energy-intensive process, a solution of alumina in a molten () mixture of cryolite (Na3AlF6) with calcium fluoride is electrolyzed to produce metallic aluminium. The liquid aluminium metal sinks to the bottom of the solution and is tapped off, and usually cast into large blocks called aluminium billets for further processing. Anodes of the electrolysis cell are made of carbon—the most resistant material against fluoride corrosion—and are either baked in the process or prebaked. The former, also called Söderberg anodes, are less power-efficient and fumes released during baking are costly to collect, which is why they are being replaced by prebaked anodes even though they save the power, energy, and labor needed to prebake the anodes. Carbon for anodes should preferably be pure so that neither aluminium nor the electrolyte is contaminated with ash. Despite carbon's resistance to corrosion, it is still consumed at a rate of 0.4–0.5 kg per kilogram of produced aluminium. Cathodes are made of anthracite; high purity for them is not required because impurities leach only very slowly. The cathode is consumed at a rate of 0.02–0.04 kg per kilogram of produced aluminium. A cell is usually terminated after 2–6 years following a failure of the cathode. The Hall–Héroult process produces aluminium with a purity of above 99%. Further purification can be done by the Hoopes process. This process involves the electrolysis of molten aluminium with a sodium, barium, and aluminium fluoride electrolyte. The resulting aluminium has a purity of 99.99%. Electric power represents about 20 to 40% of the cost of producing aluminium, depending on the location of the smelter. Aluminium production consumes roughly 5% of electricity generated in the United States.
Because of this, alternatives to the Hall–Héroult process have been researched, but none has turned out to be economically feasible. Recycling Recovery of the metal through recycling has become an important task of the aluminium industry. Recycling was a low-profile activity until the late 1960s, when the growing use of aluminium beverage cans brought it to public awareness. Recycling involves melting the scrap, a process that requires only 5% of the energy used to produce aluminium from ore, though a significant part (up to 15% of the input material) is lost as dross (ash-like oxide). An aluminium stack melter produces significantly less dross, with values reported below 1%. White dross from primary aluminium production and from secondary recycling operations still contains useful quantities of aluminium that can be extracted industrially. The process produces aluminium billets, together with a highly complex waste material. This waste is difficult to manage. It reacts with water, releasing a mixture of gases (including, among others, hydrogen, acetylene, and ammonia), which spontaneously ignites on contact with air; contact with damp air results in the release of copious quantities of ammonia gas. Despite these difficulties, the waste is used as a filler in asphalt and concrete. Applications Metal The global production of aluminium in 2016 was 58.8 million metric tons. It exceeded that of any other metal except iron (1,231 million metric tons). Aluminium is almost always alloyed, which markedly improves its mechanical properties, especially when tempered. For example, the common aluminium foils and beverage cans are alloys of 92% to 99% aluminium. The main alloying agents are copper, zinc, magnesium, manganese, and silicon (e.g., duralumin) with the levels of other metals in a few percent by weight. Aluminium, both wrought and cast, has been alloyed with manganese, silicon, magnesium, copper and zinc, among others. For example, the Kynal family of alloys was developed by the British chemical manufacturer Imperial Chemical Industries. The major uses for aluminium metal are in:
Transportation (automobiles, aircraft, trucks, railway cars, marine vessels, bicycles, spacecraft, etc.). Aluminium is used because of its low density;
Packaging (cans, foil, frame, etc.). Aluminium is used because it is non-toxic (see below), non-adsorptive, and splinter-proof;
Building and construction (windows, doors, siding, building wire, sheathing, roofing, etc.). Since steel is cheaper, aluminium is used when lightness, corrosion resistance, or engineering features are important;
Electricity-related uses (conductor alloys, motors, and generators, transformers, capacitors, etc.). Aluminium is used because it is relatively cheap, highly conductive, has adequate mechanical strength and low density, and resists corrosion;
A wide range of household items, from cooking utensils to furniture. Low density, good appearance, ease of fabrication, and durability are the key factors of aluminium usage;
Machinery and equipment (processing equipment, pipes, tools). Aluminium is used because of its corrosion resistance, non-pyrophoricity, and mechanical strength;
Portable computer cases. Currently rarely used without alloying, but aluminium can be recycled and clean aluminium has residual market value: for example, the used beverage can (UBC) material was used to encase the electronic components of MacBook Air laptop, Pixel 5 smartphone or Summit Lite smartwatch.
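The recycling figures quoted above (remelting scrap takes about 5% of the primary-production energy, with up to 15% of the input lost as dross) can be turned into a small back-of-the-envelope calculation. The Python sketch below is illustrative only; the one-tonne scrap batch is a hypothetical example, and the worst-case dross figure is used for simplicity.

```python
# Back-of-the-envelope recycling arithmetic using the percentages above:
# remelting scrap needs ~5% of the primary-production energy, and up to
# 15% of the input material can be lost as dross.
ENERGY_FRACTION_RECYCLING = 0.05
MAX_DROSS_FRACTION = 0.15

def recycle_batch(scrap_tonnes):
    recovered = scrap_tonnes * (1 - MAX_DROSS_FRACTION)  # assume worst-case dross loss
    energy_saving = 1 - ENERGY_FRACTION_RECYCLING         # relative to the primary route
    return recovered, energy_saving

metal, saving = recycle_batch(1.0)   # a hypothetical one-tonne batch of scrap
print(f"{metal:.2f} t of metal recovered, ~{saving:.0%} energy saved vs. primary")
# -> 0.85 t of metal recovered, ~95% energy saved vs. primary
```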
Compounds The great majority (about 90%) of aluminium oxide is converted to metallic aluminium. Being a very hard material (Mohs hardness 9), alumina is widely used as an abrasive; being extraordinarily chemically inert, it is useful in highly reactive environments such as high-pressure sodium lamps. Aluminium oxide is commonly used as a catalyst for industrial processes; e.g. the Claus process to convert hydrogen sulfide to sulfur in refineries and to alkylate amines. Many industrial catalysts are supported by alumina, meaning that the expensive catalyst material is dispersed over a surface of the inert alumina. Another principal use is as a drying agent or absorbent. Several sulfates of aluminium have industrial and commercial application. Aluminium sulfate (in its hydrate form) is produced on an annual scale of several million metric tons. About two-thirds is consumed in water treatment. The next major application is in the manufacture of paper. It is also used as a mordant in dyeing, in pickling seeds, in deodorizing mineral oils, in leather tanning, and in the production of other aluminium compounds. Two kinds of alum, ammonium alum and potassium alum, were formerly used as mordants and in leather tanning, but their use has significantly declined following the availability of high-purity aluminium sulfate. Anhydrous aluminium chloride is used as a catalyst in the chemical and petrochemical industries, the dyeing industry, and in the synthesis of various inorganic and organic compounds. Aluminium hydroxychlorides are used in purifying water, in the paper industry, and as antiperspirants. Sodium aluminate is used in treating water and as an accelerator of the solidification of cement. Many aluminium compounds have niche applications, for example: Aluminium acetate in solution is used as an astringent. Aluminium phosphate is used in the manufacture of glass, ceramics, pulp and paper products, cosmetics, paints, varnishes, and dental cement. Aluminium hydroxide is used as an antacid and a mordant; it is also used in water purification, the manufacture of glass and ceramics, and the waterproofing of fabrics. Lithium aluminium hydride is a powerful reducing agent used in organic chemistry. Organoaluminiums are used as Lewis acids and co-catalysts. Methylaluminoxane is a co-catalyst for Ziegler–Natta olefin polymerization to produce vinyl polymers such as polyethene. Aqueous aluminium ions (such as aqueous aluminium sulfate) are used against fish parasites such as Gyrodactylus salaris. In many vaccines, certain aluminium salts serve as an immune adjuvant (immune response booster) to allow the protein in the vaccine to achieve sufficient potency as an immune stimulant. Biology Despite its widespread occurrence in the Earth's crust, aluminium has no known function in biology. At pH 6–9 (relevant for most natural waters), aluminium precipitates out of water as the hydroxide and is hence not available; most elements behaving this way have no biological role or are toxic. Aluminium salts have low acute toxicity: aluminium sulfate has an LD50 of 6207 mg/kg (oral, mouse), which corresponds to about 435 grams for a 70 kg person, though acute lethality is a separate question from the chronic and neurological concerns discussed below. Andrási et al. reported "significantly higher Aluminum" content in some brain regions when autopsy samples from subjects with Alzheimer's disease were compared with those from subjects without. Aluminium chelates with glyphosate. Toxicity Aluminium is classified as a non-carcinogen by the United States Department of Health and Human Services. 
A review published in 1988 said that there was little evidence that normal exposure to aluminium presents a risk to healthy adults, and a 2014 multi-element toxicology review was unable to find deleterious effects of aluminium consumed in amounts not greater than 40 mg/day per kg of body mass. Most aluminium consumed will leave the body in feces; most of the small part that enters the bloodstream will be excreted via urine; nevertheless some aluminium does pass the blood–brain barrier and is lodged preferentially in the brains of Alzheimer's patients. Evidence published in 1989 indicates that, for Alzheimer's patients, aluminium may act by electrostatically crosslinking proteins, thus down-regulating genes in the superior temporal gyrus. Effects Although it does so only rarely, aluminium can cause vitamin D-resistant osteomalacia, erythropoietin-resistant microcytic anemia, and central nervous system alterations. People with kidney insufficiency are especially at risk. Chronic ingestion of hydrated aluminium silicates (for excess gastric acidity control) may result in aluminium binding to intestinal contents and increased elimination of other metals, such as iron or zinc; sufficiently high doses (>50 g/day) can cause anemia. During the 1988 Camelford water pollution incident, people in Camelford had their drinking water contaminated with aluminium sulfate for several weeks. A final report into the incident in 2013 concluded it was unlikely that this had caused long-term health problems. Aluminium has been suspected of being a possible cause of Alzheimer's disease, but research into this for over 40 years has found no good evidence of a causal effect. Aluminium increases estrogen-related gene expression in human breast cancer cells cultured in the laboratory. In very high doses, aluminium is associated with altered function of the blood–brain barrier. A small percentage of people have contact allergies to aluminium and experience itchy red rashes, headache, muscle pain, joint pain, poor memory, insomnia, depression, asthma, irritable bowel syndrome, or other symptoms upon contact with products containing aluminium. Exposure to powdered aluminium or aluminium welding fumes can cause pulmonary fibrosis. Fine aluminium powder can ignite or explode, posing another workplace hazard. Exposure routes Food is the main source of aluminium. Drinking water contains more aluminium than solid food; however, aluminium in food may be absorbed more than aluminium from water. Major sources of human oral exposure to aluminium include food (due to its use in food additives, food and beverage packaging, and cooking utensils), drinking water (due to its use in municipal water treatment), and aluminium-containing medications (particularly antacid/antiulcer and buffered aspirin formulations). Dietary exposure in Europeans averages 0.2–1.5 mg/kg/week but can be as high as 2.3 mg/kg/week. Higher exposure levels of aluminium are mostly limited to miners, aluminium production workers, and dialysis patients. Use of antacids, antiperspirants, vaccines, and cosmetics provides possible routes of exposure. Consumption of acidic foods or liquids with aluminium enhances aluminium absorption, and maltol has been shown to increase the accumulation of aluminium in nerve and bone tissues. Treatment In case of suspected sudden intake of a large amount of aluminium, the only treatment is deferoxamine mesylate, which may be given to help eliminate aluminium from the body by chelation. 
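To put the dietary exposure figures above into perspective, a small worked conversion follows; the 70 kg adult body mass is an assumed illustrative value, not taken from the text.

\[
2.3\ \mathrm{mg\,kg^{-1}\,week^{-1}} \;=\; \tfrac{2.3}{7}\ \mathrm{mg\,kg^{-1}\,day^{-1}} \;\approx\; 0.33\ \mathrm{mg\,kg^{-1}\,day^{-1}},
\]

so the upper end of typical European dietary exposure corresponds to roughly 23 mg of aluminium per day for a 70 kg adult, about two orders of magnitude below the 40 mg/day per kg of body mass level up to which the 2014 review cited above found no deleterious effects.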
However, deferoxamine treatment should be applied with caution, as it reduces not only aluminium body levels but also those of other metals such as copper or iron. Environmental effects High levels of aluminium occur near mining sites; small amounts of aluminium are released to the environment at coal-fired power plants or incinerators. Aluminium in the air is washed out by rain or normally settles down, but small particles of aluminium remain in the air for a long time. Acidic precipitation is the main natural factor that mobilizes aluminium from natural sources and the main reason for the environmental effects of aluminium; however, the aluminium present in salt and fresh water comes mainly from the industrial processes that also release aluminium into the air. In water, aluminium acts as a toxic agent on gill-breathing animals such as fish when the water is acidic: aluminium may precipitate on the gills, causing loss of plasma and hemolymph ions and leading to osmoregulatory failure. Organic complexes of aluminium may be easily absorbed and interfere with metabolism in mammals and birds, even though this rarely happens in practice. Aluminium is primary among the factors that reduce plant growth on acidic soils. Although it is generally harmless to plant growth in pH-neutral soils, in acid soils the concentration of toxic Al3+ cations increases and disturbs root growth and function. Wheat has developed a tolerance to aluminium, releasing organic compounds that bind to harmful aluminium cations. Sorghum is believed to have the same tolerance mechanism. Aluminium production poses its own challenges to the environment at each step of the production process. The major challenge is greenhouse gas emissions. These gases result from the electricity consumed by the smelters and from the byproducts of processing. The most potent of these gases are perfluorocarbons from the smelting process. Released sulfur dioxide is one of the primary precursors of acid rain. A Spanish scientific report from 2001 claimed that the fungus Geotrichum candidum consumes the aluminium in compact discs. Other reports all refer back to that report and there is no supporting original research. Better documented, the bacterium Pseudomonas aeruginosa and the fungus Cladosporium resinae are commonly detected in aircraft fuel tanks that use kerosene-based fuels (not avgas), and laboratory cultures can degrade aluminium. However, these life forms do not directly attack or consume the aluminium; rather, the metal is corroded by microbe waste products. See also Aluminium granules Aluminium joining Aluminium–air battery Panel edge staining Quantum clock Notes References Bibliography Further reading Mimi Sheller, Aluminum Dream: The Making of Light Modernity. Cambridge, Mass.: Massachusetts Institute of Technology Press, 2014. External links Aluminium at The Periodic Table of Videos (University of Nottingham) Toxic Substances Portal – Aluminum – from the Agency for Toxic Substances and Disease Registry, United States Department of Health and Human Services CDC – NIOSH Pocket Guide to Chemical Hazards – Aluminum World production of primary aluminium, by country Price history of aluminum, according to the IMF History of Aluminium – from the website of the International Aluminium Institute Emedicine – Aluminium Aluminium Electrical conductors Pyrotechnic fuels Airship technology Chemical elements Post-transition metals Reducing agents E-number additives Native element minerals Chemical elements with face-centered cubic structure
Aluminium
An axiom, postulate, or assumption is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Ancient Greek word (), meaning 'that which is thought worthy or fit' or 'that which commends itself as evident'. The term has subtle differences in definition when used in the context of different fields of study. As defined in classic philosophy, an axiom is a statement that is so evident or well-established, that it is accepted without controversy or question. As used in modern logic, an axiom is a premise or starting point for reasoning. As used in mathematics, the term axiom is used in two related but distinguishable senses: "logical axioms" and "non-logical axioms". Logical axioms are usually statements that are taken to be true within the system of logic they define and are often shown in symbolic form (e.g., (A and B) implies A), while non-logical axioms (e.g., ) are actually substantive assertions about the elements of the domain of a specific mathematical theory (such as arithmetic). When used in the latter sense, "axiom", "postulate", and "assumption" may be used interchangeably. In most cases, a non-logical axiom is simply a formal logical expression used in deduction to build a mathematical theory, and might or might not be self-evident in nature (e.g., parallel postulate in Euclidean geometry). To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms), and there may be multiple ways to axiomatize a given mathematical domain. Any axiom is a statement that serves as a starting point from which other statements are logically derived. Whether it is meaningful (and, if so, what it means) for an axiom to be "true" is a subject of debate in the philosophy of mathematics. Etymology The word axiom comes from the Greek word (axíōma), a verbal noun from the verb (axioein), meaning "to deem worthy", but also "to require", which in turn comes from (áxios), meaning "being in balance", and hence "having (the same) value (as)", "worthy", "proper". Among the ancient Greek philosophers an axiom was a claim which could be seen to be self-evidently true without any need for proof. The root meaning of the word postulate is to "demand"; for instance, Euclid demands that one agree that some things can be done (e.g., any two points can be joined by a straight line). Ancient geometers maintained some distinction between axioms and postulates. While commenting on Euclid's books, Proclus remarks that "Geminus held that this [4th] Postulate should not be classed as a postulate but as an axiom, since it does not, like the first three Postulates, assert the possibility of some construction but expresses an essential property." Boethius translated 'postulate' as petitio and called the axioms notiones communes but in later manuscripts this usage was not always strictly kept. Historical development Early Greeks The logico-deductive method whereby conclusions (new knowledge) follow from premises (old knowledge) through the application of sound arguments (syllogisms, rules of inference) was developed by the ancient Greeks, and has become the core principle of modern mathematics. Tautologies excluded, nothing can be deduced if nothing is assumed. Axioms and postulates are thus the basic assumptions underlying a given body of deductive knowledge. They are accepted without demonstration. 
All other assertions (theorems, in the case of mathematics) must be proven with the aid of these basic assumptions. However, the interpretation of mathematical knowledge has changed from ancient times to the modern, and consequently the terms axiom and postulate hold a slightly different meaning for the present day mathematician, than they did for Aristotle and Euclid. The ancient Greeks considered geometry as just one of several sciences, and held the theorems of geometry on par with scientific facts. As such, they developed and used the logico-deductive method as a means of avoiding error, and for structuring and communicating knowledge. Aristotle's posterior analytics is a definitive exposition of the classical view. An "axiom", in classical terminology, referred to a self-evident assumption common to many branches of science. A good example would be the assertion that When an equal amount is taken from equals, an equal amount results. At the foundation of the various sciences lay certain additional hypotheses that were accepted without proof. Such a hypothesis was termed a postulate. While the axioms were common to many sciences, the postulates of each particular science were different. Their validity had to be established by means of real-world experience. Aristotle warns that the content of a science cannot be successfully communicated if the learner is in doubt about the truth of the postulates. The classical approach is well-illustrated by Euclid's Elements, where a list of postulates is given (common-sensical geometric facts drawn from our experience), followed by a list of "common notions" (very basic, self-evident assertions). Postulates It is possible to draw a straight line from any point to any other point. It is possible to extend a line segment continuously in both directions. It is possible to describe a circle with any center and any radius. It is true that all right angles are equal to one another. ("Parallel postulate") It is true that, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, intersect on that side on which are the angles less than the two right angles. Common notions Things which are equal to the same thing are also equal to one another. If equals are added to equals, the wholes are equal. If equals are subtracted from equals, the remainders are equal. Things which coincide with one another are equal to one another. The whole is greater than the part. Modern development A lesson learned by mathematics in the last 150 years is that it is useful to strip the meaning away from the mathematical assertions (axioms, postulates, propositions, theorems) and definitions. One must concede the need for primitive notions, or undefined terms or concepts, in any study. Such abstraction or formalization makes mathematical knowledge more general, capable of multiple different meanings, and therefore useful in multiple contexts. Alessandro Padoa, Mario Pieri, and Giuseppe Peano were pioneers in this movement. Structuralist mathematics goes further, and develops theories and axioms (e.g. field theory, group theory, topology, vector spaces) without any particular application in mind. The distinction between an "axiom" and a "postulate" disappears. The postulates of Euclid are profitably motivated by saying that they lead to a great wealth of geometric facts. The truth of these complicated facts rests on the acceptance of the basic hypotheses. 
However, by throwing out Euclid's fifth postulate, one can get theories that have meaning in wider contexts (e.g., hyperbolic geometry). As such, one must simply be prepared to use labels such as "line" and "parallel" with greater flexibility. The development of hyperbolic geometry taught mathematicians that it is useful to regard postulates as purely formal statements, and not as facts based on experience. When mathematicians employ the field axioms, the intentions are even more abstract. The propositions of field theory do not concern any one particular application; the mathematician now works in complete abstraction. There are many examples of fields; field theory gives correct knowledge about them all. It is not correct to say that the axioms of field theory are "propositions that are regarded as true without proof." Rather, the field axioms are a set of constraints. If any given system of addition and multiplication satisfies these constraints, then one is in a position to instantly know a great deal of extra information about this system. Modern mathematics formalizes its foundations to such an extent that mathematical theories can be regarded as mathematical objects, and mathematics itself can be regarded as a branch of logic. Frege, Russell, Poincaré, Hilbert, and Gödel are some of the key figures in this development. Another lesson learned in modern mathematics is to examine purported proofs carefully for hidden assumptions. In the modern understanding, a set of axioms is any collection of formally stated assertions from which other formally stated assertions follow – by the application of certain well-defined rules. In this view, logic becomes just another formal system. A set of axioms should be consistent; it should be impossible to derive a contradiction from the axioms. A set of axioms should also be non-redundant; an assertion that can be deduced from other axioms need not be regarded as an axiom. It was the early hope of modern logicians that various branches of mathematics, perhaps all of mathematics, could be derived from a consistent collection of basic axioms. An early success of the formalist program was Hilbert's formalization of Euclidean geometry, and the related demonstration of the consistency of those axioms. In a wider context, there was an attempt to base all of mathematics on Cantor's set theory. Here, the emergence of Russell's paradox and similar antinomies of naïve set theory raised the possibility that any such system could turn out to be inconsistent. The formalist project suffered a decisive setback, when in 1931 Gödel showed that it is possible, for any sufficiently large set of axioms (Peano's axioms, for example) to construct a statement whose truth is independent of that set of axioms. As a corollary, Gödel proved that the consistency of a theory like Peano arithmetic is an unprovable assertion within the scope of that theory. It is reasonable to believe in the consistency of Peano arithmetic because it is satisfied by the system of natural numbers, an infinite but intuitively accessible formal system. However, at present, there is no known way of demonstrating the consistency of the modern Zermelo–Fraenkel axioms for set theory. Furthermore, using techniques of forcing (Cohen) one can show that the continuum hypothesis (Cantor) is independent of the Zermelo–Fraenkel axioms. Thus, even this very general set of axioms cannot be regarded as the definitive foundation for mathematics. 
Other sciences Experimental sciences, as opposed to mathematics and logic, also have general founding assertions from which deductive reasoning can be built so as to express propositions that predict properties, either still general or much more specialized to a specific experimental context. For instance, Newton's laws in classical mechanics, Maxwell's equations in classical electromagnetism, Einstein's equation in general relativity, Mendel's laws of genetics, Darwin's law of natural selection, etc. These founding assertions are usually called principles or postulates so as to distinguish them from mathematical axioms. As a matter of fact, the roles of axioms in mathematics and of postulates in experimental sciences are different. In mathematics one neither "proves" nor "disproves" an axiom. A set of mathematical axioms gives a set of rules that fix a conceptual realm, in which the theorems logically follow. In contrast, in experimental sciences, a set of postulates should allow the deduction of results that can be compared with experimental results. If the postulates do not allow the deduction of experimental predictions, they do not set a scientific conceptual framework and have to be completed or made more accurate. If the postulates do allow the deduction of experimental predictions, the comparison with experiments can falsify the theory that the postulates establish. A theory is considered valid as long as it has not been falsified. The transition between mathematical axioms and scientific postulates is always slightly blurred, however, especially in physics. This is due to the heavy use of mathematical tools to support the physical theories. For instance, the introduction of Newton's laws rarely states as a prerequisite either the Euclidean geometry or the differential calculus that they imply. This became more apparent when Albert Einstein first introduced special relativity, where the invariant quantity is no longer the Euclidean length but the Minkowski spacetime interval (both are written out below), and then general relativity, where flat Minkowskian geometry is replaced with pseudo-Riemannian geometry on curved manifolds. In quantum physics, two sets of postulates have coexisted for some time, and they provide a very nice example of falsification. The 'Copenhagen school' (Niels Bohr, Werner Heisenberg, Max Born) developed an operational approach with a complete mathematical formalism that involves the description of a quantum system by vectors ('states') in a separable Hilbert space, and physical quantities as linear operators that act in this Hilbert space. This approach is fully falsifiable and has so far produced the most accurate predictions in physics. But it has the unsatisfactory aspect of not allowing answers to questions one would naturally ask. For this reason, another 'hidden variables' approach was developed for some time by Albert Einstein, Erwin Schrödinger, and David Bohm. It was created so as to try to give a deterministic explanation of phenomena such as entanglement. This approach assumed that the Copenhagen school description was not complete, and postulated that some as yet unknown variables were to be added to the theory so as to allow answering some of the questions it does not answer (the founding elements of which were discussed as the EPR paradox in 1935). Taking these ideas seriously, John Bell derived in 1964 a prediction that would lead to different experimental results (Bell's inequalities) in the Copenhagen and the hidden-variable cases. 
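For reference, the invariants and the Bell-type inequality mentioned above can be written out as follows; the sign convention for the spacetime interval and the CHSH form of the inequality are conventional textbook choices rather than expressions taken from this article.

\begin{align*}
l^2 &= dx^2 + dy^2 + dz^2 && \text{(Euclidean length element, invariant under rotations)}\\
s^2 &= c^2\,dt^2 - dx^2 - dy^2 - dz^2 && \text{(Minkowski spacetime interval, invariant under Lorentz transformations)}\\
|E(a,b) - E(a,b') + E(a',b) + E(a',b')| &\le 2 && \text{(a CHSH-type Bell inequality obeyed by local hidden-variable theories)}
\end{align*}

Quantum mechanics predicts that the left-hand side of the last expression can reach 2√2 for suitable measurement settings a, a', b, b', which is what makes the inequality experimentally decisive.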
The experiment was conducted first by Alain Aspect in the early 1980's, and the result excluded the simple hidden variable approach (sophisticated hidden variables could still exist but their properties would still be more disturbing than the problems they try to solve). This does not mean that the conceptual framework of quantum physics can be considered as complete now, since some open questions still exist (the limit between the quantum and classical realms, what happens during a quantum measurement, what happens in a completely closed quantum system such as the universe itself, etc). Mathematical logic In the field of mathematical logic, a clear distinction is made between two notions of axioms: logical and non-logical (somewhat similar to the ancient distinction between "axioms" and "postulates" respectively). Logical axioms These are certain formulas in a formal language that are universally valid, that is, formulas that are satisfied by every assignment of values. Usually one takes as logical axioms at least some minimal set of tautologies that is sufficient for proving all tautologies in the language; in the case of predicate logic more logical axioms than that are required, in order to prove logical truths that are not tautologies in the strict sense. Examples Propositional logic In propositional logic it is common to take as logical axioms all formulae of the following forms, where , , and can be any formulae of the language and where the included primitive connectives are only "" for negation of the immediately following proposition and "" for implication from antecedent to consequent propositions: Each of these patterns is an axiom schema, a rule for generating an infinite number of axioms. For example, if , , and are propositional variables, then and are both instances of axiom schema 1, and hence are axioms. It can be shown that with only these three axiom schemata and modus ponens, one can prove all tautologies of the propositional calculus. It can also be shown that no pair of these schemata is sufficient for proving all tautologies with modus ponens. Other axiom schemata involving the same or different sets of primitive connectives can be alternatively constructed. These axiom schemata are also used in the predicate calculus, but additional logical axioms are needed to include a quantifier in the calculus. First-order logic Axiom of Equality. Let be a first-order language. For each variable , the formula is universally valid. This means that, for any variable symbol the formula can be regarded as an axiom. Also, in this example, for this not to fall into vagueness and a never-ending series of "primitive notions", either a precise notion of what we mean by (or, for that matter, "to be equal") has to be well established first, or a purely formal and syntactical usage of the symbol has to be enforced, only regarding it as a string and only a string of symbols, and mathematical logic does indeed do that. Another, more interesting example axiom scheme, is that which provides us with what is known as Universal Instantiation: Axiom scheme for Universal Instantiation. Given a formula in a first-order language , a variable and a term that is substitutable for in , the formula is universally valid. Where the symbol stands for the formula with the term substituted for . (See Substitution of variables.) 
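The schemata and first-order axioms just described are not written out in the text above. One standard Hilbert-style presentation, assuming only ¬ and → as primitive connectives and modus ponens (from φ and φ → ψ, infer ψ) as the sole rule of inference, is the following; the last two lines are the equality axiom and the universal instantiation schema:

\begin{align*}
&\phi \to (\psi \to \phi) && \text{(schema 1)}\\
&(\phi \to (\psi \to \chi)) \to ((\phi \to \psi) \to (\phi \to \chi)) && \text{(schema 2)}\\
&(\lnot\phi \to \lnot\psi) \to (\psi \to \phi) && \text{(schema 3)}\\
&x = x && \text{(equality, for every variable } x\text{)}\\
&(\forall x\,\phi) \to \phi[t/x] && \text{(universal instantiation, } t \text{ substitutable for } x \text{ in } \phi\text{)}
\end{align*}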
In informal terms, this example allows us to state that, if we know that a certain property holds for every and that stands for a particular object in our structure, then we should be able to claim . Again, we are claiming that the formula is valid, that is, we must be able to give a "proof" of this fact, or more properly speaking, a metaproof. These examples are metatheorems of our theory of mathematical logic since we are dealing with the very concept of proof itself. Aside from this, we can also have Existential Generalization: Axiom scheme for Existential Generalization. Given a formula in a first-order language , a variable and a term that is substitutable for in , the formula is universally valid. Non-logical axioms Non-logical axioms are formulas that play the role of theory-specific assumptions. Reasoning about two different structures, for example, the natural numbers and the integers, may involve the same logical axioms; the non-logical axioms aim to capture what is special about a particular structure (or set of structures, such as groups). Thus non-logical axioms, unlike logical axioms, are not tautologies. Another name for a non-logical axiom is postulate. Almost every modern mathematical theory starts from a given set of non-logical axioms, and it was thought that in principle every theory could be axiomatized in this way and formalized down to the bare language of logical formulas. Non-logical axioms are often simply referred to as axioms in mathematical discourse. This does not mean that it is claimed that they are true in some absolute sense. For example, in some groups, the group operation is commutative, and this can be asserted with the introduction of an additional axiom, but without this axiom, we can do quite well developing (the more general) group theory, and we can even take its negation as an axiom for the study of non-commutative groups. Thus, an axiom is an elementary basis for a formal logic system that together with the rules of inference define a deductive system. Examples This section gives examples of mathematical theories that are developed entirely from a set of non-logical axioms (axioms, henceforth). A rigorous treatment of any of these topics begins with a specification of these axioms. Basic theories, such as arithmetic, real analysis and complex analysis are often introduced non-axiomatically, but implicitly or explicitly there is generally an assumption that the axioms being used are the axioms of Zermelo–Fraenkel set theory with choice, abbreviated ZFC, or some very similar system of axiomatic set theory like Von Neumann–Bernays–Gödel set theory, a conservative extension of ZFC. Sometimes slightly stronger theories such as Morse–Kelley set theory or set theory with a strongly inaccessible cardinal allowing the use of a Grothendieck universe is used, but in fact, most mathematicians can actually prove all they need in systems weaker than ZFC, such as second-order arithmetic. The study of topology in mathematics extends all over through point set topology, algebraic topology, differential topology, and all the related paraphernalia, such as homology theory, homotopy theory. The development of abstract algebra brought with itself group theory, rings, fields, and Galois theory. This list could be expanded to include most fields of mathematics, including measure theory, ergodic theory, probability, representation theory, and differential geometry. Arithmetic The Peano axioms are the most widely used axiomatization of first-order arithmetic. 
They are a set of axioms strong enough to prove many important facts about number theory and they allowed Gödel to establish his famous second incompleteness theorem. We have a language where is a constant symbol and is a unary function and the following axioms: for any formula with one free variable. The standard structure is where is the set of natural numbers, is the successor function and is naturally interpreted as the number 0. Euclidean geometry Probably the oldest, and most famous, list of axioms are the 4 + 1 Euclid's postulates of plane geometry. The axioms are referred to as "4 + 1" because for nearly two millennia the fifth (parallel) postulate ("through a point outside a line there is exactly one parallel") was suspected of being derivable from the first four. Ultimately, the fifth postulate was found to be independent of the first four. One can assume that exactly one parallel through a point outside a line exists, or that infinitely many exist. This choice gives us two alternative forms of geometry in which the interior angles of a triangle add up to exactly 180 degrees or less, respectively, and are known as Euclidean and hyperbolic geometries. If one also removes the second postulate ("a line can be extended indefinitely") then elliptic geometry arises, where there is no parallel through a point outside a line, and in which the interior angles of a triangle add up to more than 180 degrees. Real analysis The objectives of the study are within the domain of real numbers. The real numbers are uniquely picked out (up to isomorphism) by the properties of a Dedekind complete ordered field, meaning that any nonempty set of real numbers with an upper bound has a least upper bound. However, expressing these properties as axioms requires the use of second-order logic. The Löwenheim–Skolem theorems tell us that if we restrict ourselves to first-order logic, any axiom system for the reals admits other models, including both models that are smaller than the reals and models that are larger. Some of the latter are studied in non-standard analysis. Role in mathematical logic Deductive systems and completeness A deductive system consists of a set of logical axioms, a set of non-logical axioms, and a set of rules of inference. A desirable property of a deductive system is that it be complete. A system is said to be complete if, for all formulas , that is, for any statement that is a logical consequence of there actually exists a deduction of the statement from . This is sometimes expressed as "everything that is true is provable", but it must be understood that "true" here means "made true by the set of axioms", and not, for example, "true in the intended interpretation". Gödel's completeness theorem establishes the completeness of a certain commonly used type of deductive system. Note that "completeness" has a different meaning here than it does in the context of Gödel's first incompleteness theorem, which states that no recursive, consistent set of non-logical axioms of the Theory of Arithmetic is complete, in the sense that there will always exist an arithmetic statement such that neither nor can be proved from the given set of axioms. There is thus, on the one hand, the notion of completeness of a deductive system and on the other hand that of completeness of a set of non-logical axioms. The completeness theorem and the incompleteness theorem, despite their names, do not contradict one another. 
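For concreteness, the first-order Peano axioms sketched in the arithmetic example above (over a language with the constant 0 and the unary successor function S) are commonly written as follows; this is a standard textbook formulation rather than a quotation from the article:

\begin{align*}
&\forall x\;\lnot(Sx = 0) && \text{(0 is not a successor)}\\
&\forall x\,\forall y\;(Sx = Sy \to x = y) && \text{(the successor function is injective)}\\
&\bigl(\phi(0) \land \forall x\,(\phi(x) \to \phi(Sx))\bigr) \to \forall x\,\phi(x) && \text{(induction schema, one axiom per formula } \phi \text{ with one free variable)}
\end{align*}

Full first-order Peano arithmetic adds defining axioms for addition and multiplication; it is that richer theory to which the incompleteness results discussed above apply.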
Further discussion Early mathematicians regarded axiomatic geometry as a model of physical space, and obviously, there could only be one such model. The idea that alternative mathematical systems might exist was very troubling to mathematicians of the 19th century and the developers of systems such as Boolean algebra made elaborate efforts to derive them from traditional arithmetic. Galois showed just before his untimely death that these efforts were largely wasted. Ultimately, the abstract parallels between algebraic systems were seen to be more important than the details, and modern algebra was born. In the modern view, axioms may be any set of formulas, as long as they are not known to be inconsistent. See also Axiomatic system Dogma First principle, axiom in science and philosophy List of axioms Model theory Regulæ Juris Theorem Presupposition Physical law Principle Notes References Further reading Mendelson, Elliot (1987). Introduction to mathematical logic. Belmont, California: Wadsworth & Brooks. External links Metamath axioms page Ancient Greek philosophy Concepts in ancient Greek metaphysics Concepts in epistemology Concepts in ethics Concepts in logic Concepts in metaphysics Concepts in the philosophy of science Deductive reasoning Formal systems History of logic History of mathematics History of philosophy History of science Intellectual history Logic Mathematical logic Mathematical terminology Philosophical terminology Reasoning
Axiom
Augusta Ada King, Countess of Lovelace (née Byron; 10 December 1815 – 27 November 1852) was an English mathematician and writer, chiefly known for her work on Charles Babbage's proposed mechanical general-purpose computer, the Analytical Engine. She was the first to recognise that the machine had applications beyond pure calculation, and to have published the first algorithm intended to be carried out by such a machine. As a result, she is often regarded as the first computer programmer. Ada Byron was the only child of poet Lord Byron and mathematician Lady Byron. All of Byron's other children were born out of wedlock to other women. Byron separated from his wife a month after Ada was born and left England forever. Four months later, he commemorated the parting in a poem that begins, "Is thy face like thy mother's my fair child! ADA! sole daughter of my house and heart?". He died in Greece when Ada was eight years old. Her mother remained bitter and promoted Ada's interest in mathematics and logic in an effort to prevent her from developing her father's perceived insanity. Despite this, Ada remained interested in him, naming her two sons Byron and Gordon. Upon her death, she was buried next to him at her request. Although often ill in her childhood, Ada pursued her studies assiduously. She married William King in 1835. King was made Earl of Lovelace in 1838, Ada thereby becoming Countess of Lovelace. Her educational and social exploits brought her into contact with scientists such as Andrew Crosse, Charles Babbage, Sir David Brewster, Charles Wheatstone, Michael Faraday and the author Charles Dickens, contacts which she used to further her education. Ada described her approach as "poetical science" and herself as an "Analyst (& Metaphysician)". When she was a teenager (18), her mathematical talents led her to a long working relationship and friendship with fellow British mathematician Charles Babbage, who is known as "the father of computers". She was in particular interested in Babbage's work on the Analytical Engine. Lovelace first met him in June 1833, through their mutual friend, and her private tutor, Mary Somerville. Between 1842 and 1843, Ada translated an article by Italian military engineer Luigi Menabrea about the Analytical Engine, supplementing it with an elaborate set of notes, simply called "Notes". Lovelace's notes are important in the early history of computers, containing what many consider to be the first computer program—that is, an algorithm designed to be carried out by a machine. Other historians reject this perspective and point out that Babbage's personal notes from the years 1836/1837 contain the first programs for the engine. She also developed a vision of the capability of computers to go beyond mere calculating or number-crunching, while many others, including Babbage himself, focused only on those capabilities. Her mindset of "poetical science" led her to ask questions about the Analytical Engine (as shown in her notes) examining how individuals and society relate to technology as a collaborative tool. She died of uterine cancer in 1852 at the age of 36, the same age at which her father died. Biography Childhood Lord Byron expected his child to be a "glorious boy" and was disappointed when Lady Byron gave birth to a girl. The child was named after Byron's half-sister, Augusta Leigh, and was called "Ada" by Byron himself. 
On 16 January 1816, at Lord Byron's command, Lady Byron left for her parents' home at Kirkby Mallory, taking their five-week-old daughter with her. Although English law at the time granted full custody of children to the father in cases of separation, Lord Byron made no attempt to claim his parental rights, but did request that his sister keep him informed of Ada's welfare. On 21 April, Lord Byron signed the deed of separation, although very reluctantly, and left England for good a few days later. Aside from an acrimonious separation, Lady Byron continued throughout her life to make allegations about her husband's immoral behaviour. This set of events made Lovelace infamous in Victorian society. Ada did not have a relationship with her father. He died in 1824 when she was eight years old. Her mother was the only significant parental figure in her life. Lovelace was not shown the family portrait of her father until her 20th birthday. Lovelace did not have a close relationship with her mother. She was often left in the care of her maternal grandmother Judith, Hon. Lady Milbanke, who doted on her. However, because of societal attitudes of the time—which favoured the husband in any separation, with the welfare of any child acting as mitigation—Lady Byron had to present herself as a loving mother to the rest of society. This included writing anxious letters to Lady Milbanke about her daughter's welfare, with a cover note saying to retain the letters in case she had to use them to show maternal concern. In one letter to Lady Milbanke, she referred to her daughter as "it": "I talk to it for your satisfaction, not my own, and shall be very glad when you have it under your own." Lady Byron had her teenage daughter watched by close friends for any sign of moral deviation. Lovelace dubbed these observers the "Furies" and later complained they exaggerated and invented stories about her. Lovelace was often ill, beginning in early childhood. At the age of eight, she experienced headaches that obscured her vision. In June 1829, she was paralyzed after a bout of measles. She was subjected to continuous bed rest for nearly a year, something which may have extended her period of disability. By 1831, she was able to walk with crutches. Despite the illnesses, she developed her mathematical and technological skills. Ada Byron had an affair with a tutor in early 1833. She tried to elope with him after she was caught, but the tutor's relatives recognised her and contacted her mother. Lady Byron and her friends covered the incident up to prevent a public scandal. Lovelace never met her younger half-sister, Allegra, the daughter of Lord Byron and Claire Clairmont. Allegra died in 1822 at the age of five. Lovelace did have some contact with Elizabeth Medora Leigh, the daughter of Byron's half-sister Augusta Leigh, who purposely avoided Lovelace as much as possible when introduced at court. Adult years Lovelace became close friends with her tutor Mary Somerville, who introduced her to Charles Babbage in 1833. She had a strong respect and affection for Somerville, and they corresponded for many years. Other acquaintances included the scientists Andrew Crosse, Sir David Brewster, Charles Wheatstone, Michael Faraday and the author Charles Dickens. She was presented at Court at the age of seventeen "and became a popular belle of the season" in part because of her "brilliant mind." By 1834 Ada was a regular at Court and started attending various events. 
She danced often and was able to charm many people, and was described by most people as being dainty, although John Hobhouse, Byron's friend, described her as "a large, coarse-skinned young woman but with something of my friend's features, particularly the mouth". This description followed their meeting on 24 February 1834 in which Ada made it clear to Hobhouse that she did not like him, probably due to her mother's influence, which led her to dislike all of her father's friends. This first impression was not to last, and they later became friends. On 8 July 1835, she married William, 8th Baron King, becoming Lady King. They had three homes: Ockham Park, Surrey; a Scottish estate on Loch Torridon in Ross-shire; and a house in London. They spent their honeymoon at Worthy Manor in Ashley Combe near Porlock Weir, Somerset. The Manor had been built as a hunting lodge in 1799 and was improved by King in preparation for their honeymoon. It later became their summer retreat and was further improved during this time. From 1845, the family's main house was Horsley Towers, built in the Tudorbethan fashion by the architect of the Houses of Parliament, Charles Barry, and later greatly enlarged to Lovelace's own designs. They had three children: Byron (born 1836); Anne Isabella (called Annabella, born 1837); and Ralph Gordon (born 1839). Immediately after the birth of Annabella, Lady King experienced "a tedious and suffering illness, which took months to cure." Ada was a descendant of the extinct Barons Lovelace and in 1838, her husband was made Earl of Lovelace and Viscount Ockham, meaning Ada became the Countess of Lovelace. In 1843–44, Ada's mother assigned William Benjamin Carpenter to teach Ada's children and to act as a "moral" instructor for Ada. He quickly fell for her and encouraged her to express any frustrated affections, claiming that his marriage meant he would never act in an "unbecoming" manner. When it became clear that Carpenter was trying to start an affair, Ada cut it off. In 1841, Lovelace and Medora Leigh (the daughter of Lord Byron's half-sister Augusta Leigh) were told by Ada's mother that Ada's father was also Medora's father. On 27 February 1841, Ada wrote to her mother: "I am not in the least astonished. In fact, you merely confirm what I have for years and years felt scarcely a doubt about, but should have considered it most improper in me to hint to you that I in any way suspected." She did not blame the incestuous relationship on Byron, but instead blamed Augusta Leigh: "I fear she is more inherently wicked than he ever was." In the 1840s, Ada flirted with scandals: firstly, from a relaxed approach to extra-marital relationships with men, leading to rumours of affairs; and secondly, from her love of gambling. She apparently lost more than £3,000 on the horses during the later 1840s. The gambling led to her forming a syndicate with male friends, and an ambitious attempt in 1851 to create a mathematical model for successful large bets. This went disastrously wrong, leaving her thousands of pounds in debt to the syndicate, forcing her to admit it all to her husband. She had a shadowy relationship with Andrew Crosse's son John from 1844 onwards. John Crosse destroyed most of their correspondence after her death as part of a legal agreement. She bequeathed him the only heirlooms her father had personally left to her. During her final illness, she would panic at the idea of the younger Crosse being kept from visiting her. 
Education From 1832, when she was seventeen, her mathematical abilities began to emerge, and her interest in mathematics dominated the majority of her adult life. Her mother's obsession with rooting out any of the insanity of which she accused Byron was one of the reasons that Ada was taught mathematics from an early age. She was privately educated in mathematics and science by William Frend, William King, and Mary Somerville, the noted 19th-century researcher and scientific author. In the 1840s, the mathematician Augustus De Morgan extended her "much help in her mathematical studies" including study of advanced calculus topics including the "numbers of Bernoulli" (that formed her celebrated algorithm for Babbage's Analytical Engine). In a letter to Lady Byron, De Morgan suggested that Ada's skill in mathematics might lead her to become "an original mathematical investigator, perhaps of first-rate eminence." Lovelace often questioned basic assumptions through integrating poetry and science. Whilst studying differential calculus, she wrote to De Morgan: I may remark that the curious transformations many formulae can undergo, the unsuspected and to a beginner apparently impossible identity of forms exceedingly dissimilar at first sight, is I think one of the chief difficulties in the early part of mathematical studies. I am often reminded of certain sprites and fairies one reads of, who are at one's elbows in one shape now, and the next minute in a form most dissimilar. Lovelace believed that intuition and imagination were critical to effectively applying mathematical and scientific concepts. She valued metaphysics as much as mathematics, viewing both as tools for exploring "the unseen worlds around us." Death Lovelace died at the age of 36 on 27 November 1852, from uterine cancer. The illness lasted several months, in which time Annabella took command over whom Ada saw, and excluded all of her friends and confidants. Under her mother's influence, Ada had a religious transformation and was coaxed into repenting of her previous conduct and making Annabella her executor. She lost contact with her husband after confessing something to him on 30 August which caused him to abandon her bedside. It is not known what she told him. She was buried, at her request, next to her father at the Church of St. Mary Magdalene in Hucknall, Nottinghamshire. A memorial plaque, written in Latin, to her and her father is in the chapel attached to Horsley Towers. Work Throughout her life, Lovelace was strongly interested in scientific developments and fads of the day, including phrenology and mesmerism. After her work with Babbage, Lovelace continued to work on other projects. In 1844, she commented to a friend Woronzow Greig about her desire to create a mathematical model for how the brain gives rise to thoughts and nerves to feelings ("a calculus of the nervous system"). She never achieved this, however. In part, her interest in the brain came from a long-running pre-occupation, inherited from her mother, about her "potential" madness. As part of her research into this project, she visited the electrical engineer Andrew Crosse in 1844 to learn how to carry out electrical experiments. In the same year, she wrote a review of a paper by Baron Karl von Reichenbach, Researches on Magnetism, but this was not published and does not appear to have progressed past the first draft. 
In 1851, the year before her cancer struck, she wrote to her mother mentioning "certain productions" she was working on regarding the relation of maths and music. Lovelace first met Charles Babbage in June 1833, through their mutual friend Mary Somerville. Later that month, Babbage invited Lovelace to see the prototype for his difference engine. She became fascinated with the machine and used her relationship with Somerville to visit Babbage as often as she could. Babbage was impressed by Lovelace's intellect and analytic skills. He called her "The Enchantress of Number." In 1843, he wrote to her: During a nine-month period in 1842–43, Lovelace translated the Italian mathematician Luigi Menabrea's article on Babbage's newest proposed machine, the Analytical Engine. With the article, she appended a set of notes. Explaining the Analytical Engine's function was a difficult task, as many other scientists did not really grasp the concept and the British establishment had shown little interest in it. Lovelace's notes even had to explain how the Analytical Engine differed from the original Difference Engine. Her work was well received at the time; the scientist Michael Faraday described himself as a supporter of her writing. The notes are around three times longer than the article itself and include (in Note G), in complete detail, a method for calculating a sequence of Bernoulli numbers using the Analytical Engine, which might have run correctly had it ever been built (only Babbage's Difference Engine has been built, completed in London in 2002). Based on this work, Lovelace is now considered by many to be the first computer programmer and her method has been called the world's first computer program. Others dispute this because some of Charles Babbage's earlier writings could be considered computer programs. Note G also contains Lovelace's dismissal of artificial intelligence. She wrote that "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths." This objection has been the subject of much debate and rebuttal, for example by Alan Turing in his paper "Computing Machinery and Intelligence". Lovelace and Babbage had a minor falling out when the papers were published, when he tried to leave his own statement (criticising the government's treatment of his Engine) as an unsigned preface, which could have been mistakenly interpreted as a joint declaration. When Taylor's Scientific Memoirs ruled that the statement should be signed, Babbage wrote to Lovelace asking her to withdraw the paper. This was the first that she knew he was leaving it unsigned, and she wrote back refusing to withdraw the paper. The historian Benjamin Woolley theorised that "His actions suggested he had so enthusiastically sought Ada's involvement, and so happily indulged her ... because of her 'celebrated name'." Their friendship recovered, and they continued to correspond. On 12 August 1851, when she was dying of cancer, Lovelace wrote to him asking him to be her executor, though this letter did not give him the necessary legal authority. Part of the terrace at Worthy Manor was known as Philosopher's Walk, as it was there that Lovelace and Babbage were reputed to have walked while discussing mathematical principles. First computer program In 1840, Babbage was invited to give a seminar at the University of Turin about his Analytical Engine. 
Luigi Menabrea, a young Italian engineer and the future Prime Minister of Italy, transcribed Babbage's lecture into French, and this transcript was subsequently published in the Bibliothèque universelle de Genève in October 1842. Babbage's friend Charles Wheatstone commissioned Ada Lovelace to translate Menabrea's paper into English. She then augmented the paper with notes, which were added to the translation. Ada Lovelace spent the better part of a year doing this, assisted with input from Babbage. These notes, which are more extensive than Menabrea's paper, were then published in the September 1843 edition of Taylor's Scientific Memoirs under the initialism AAL. Ada Lovelace's notes were labelled alphabetically from A to G. In note G, she describes an algorithm for the Analytical Engine to compute Bernoulli numbers. It is considered to be the first published algorithm ever specifically tailored for implementation on a computer, and Ada Lovelace has often been cited as the first computer programmer for this reason. The engine was never completed so her program was never tested. In 1953, more than a century after her death, Ada Lovelace's notes on Babbage's Analytical Engine were republished as an appendix to B. V. Bowden's Faster than Thought: A Symposium on Digital Computing Machines. The engine has now been recognised as an early model for a computer and her notes as a description of a computer and software. Insight into potential of computing devices In her notes, Ada Lovelace emphasised the difference between the Analytical Engine and previous calculating machines, particularly its ability to be programmed to solve problems of any complexity. She realised the potential of the device extended far beyond mere number crunching. In her notes, she wrote: This analysis was an important development from previous ideas about the capabilities of computing devices and anticipated the implications of modern computing one hundred years before they were realised. Walter Isaacson ascribes Ada's insight regarding the application of computing to any process based on logical symbols to an observation about textiles: "When she saw some mechanical looms that used punchcards to direct the weaving of beautiful patterns, it reminded her of how Babbage's engine used punched cards to make calculations." This insight is seen as significant by writers such as Betty Toole and Benjamin Woolley, as well as the programmer John Graham-Cumming, whose project Plan 28 has the aim of constructing the first complete Analytical Engine. According to the historian of computing and Babbage specialist Doron Swade: Ada saw something that Babbage in some sense failed to see. In Babbage's world his engines were bound by number...What Lovelace saw...was that number could represent entities other than quantity. So once you had a machine for manipulating numbers, if those numbers represented other things, letters, musical notes, then the machine could manipulate symbols of which number was one instance, according to rules. It is this fundamental transition from a machine which is a number cruncher to a machine for manipulating symbols according to rules that is the fundamental transition from calculation to computation—to general-purpose computation—and looking back from the present high ground of modern computing, if we are looking and sifting history for that transition, then that transition was made explicitly by Ada in that 1843 paper. 
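Note G's table computes its Bernoulli numbers step by step on the Analytical Engine using a particular recurrence derived in the note. As a modern illustration only, and not a reconstruction of Lovelace's notation, her recurrence, or the engine's operation cards, a short Python sketch of one standard recurrence for the same numbers might look like this:

from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0 .. B_n (convention B_1 = -1/2) as exact fractions."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        # Recurrence: sum_{k=0}^{m} C(m+1, k) * B_k = 0, solved for B_m.
        acc = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-acc / (m + 1))
    return B

print(bernoulli(8))  # [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]

The nonzero values 1/6, -1/30, 1/42, ... produced this way correspond, up to indexing and sign conventions, to the quantities that the table in Note G was designed to produce.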
Controversy over contribution Though Lovelace is often referred to as the first computer programmer, some biographers, computer scientists and historians of computing claim otherwise. Allan G. Bromley took this view in the 1990 article "Difference and Analytical Engines". Bruce Collier, who later wrote a biography of Babbage, wrote in his 1970 Harvard University PhD thesis that Lovelace "made a considerable contribution to publicizing the Analytical Engine, but there is no evidence that she advanced the design or theory of it in any way". Eugene Eric Kim and Betty Alexandra Toole consider it "incorrect" to regard Lovelace as the first computer programmer, as Babbage wrote the initial programs for his Analytical Engine, although the majority were never published. Bromley notes several dozen sample programs prepared by Babbage between 1837 and 1840, all substantially predating Lovelace's notes. Dorothy K. Stein regards Lovelace's notes as "more a reflection of the mathematical uncertainty of the author, the political purposes of the inventor, and, above all, of the social and cultural context in which it was written, than a blueprint for a scientific development." Doron Swade, a specialist in the history of computing known for his work on Babbage, discussed Lovelace during a lecture on Babbage's analytical engine. He explained that Ada was only a "promising beginner" rather than a genius in mathematics, that she began studying basic concepts of mathematics five years after Babbage conceived the analytical engine so she could not have made important contributions to it, and that she only published the first computer program instead of actually writing it. But he agrees that Ada was the only person to see the potential of the analytical engine as a machine capable of expressing entities other than quantities. In his self-published book, Idea Makers, Stephen Wolfram defends Lovelace's contributions. While acknowledging that Babbage wrote several unpublished algorithms for the Analytical Engine prior to Lovelace's notes, Wolfram argues that "there's nothing as sophisticated—or as clean—as Ada's computation of the Bernoulli numbers. Babbage certainly helped and commented on Ada's work, but she was definitely the driver of it." Wolfram then suggests that Lovelace's main achievement was to distill from Babbage's correspondence "a clear exposition of the abstract operation of the machine—something which Babbage never did." In popular culture 1810s Lord Byron wrote the poem "Fare Thee Well" to his wife Lady Byron in 1816, following their separation after the birth of Ada Lovelace. In the poem he writes: And when thou would'st solace gather— When our child's first accents flow— Wilt thou teach her to say "Father!" Though his care she must forego? When her little hands shall press thee— When her lip to thine is pressed— Think of him whose prayer shall bless thee— Think of him thy love had blessed! Should her lineaments resemble Those thou never more may'st see, Then thy heart will softly tremble With a pulse yet true to me. 1970s Lovelace is portrayed in Romulus Linney's 1977 play Childe Byron. 1990s In the 1990 steampunk novel The Difference Engine by William Gibson and Bruce Sterling, Lovelace delivers a lecture on the "punched cards" programme which proves Gödel's incompleteness theorems decades before their actual discovery. In the 1997 film Conceiving Ada, a computer scientist obsessed with Ada finds a way of communicating with her in the past by means of "undying information waves".
In Tom Stoppard's 1993 play Arcadia, the precocious teenage genius Thomasina Coverly—a character "apparently based" on Ada Lovelace (the play also involves Lord Byron)—comes to understand chaos theory, and theorises the second law of thermodynamics, before either is officially recognised. 2000s Lovelace features in John Crowley's 2005 novel, Lord Byron's Novel: The Evening Land, as an unseen character whose personality is forcefully depicted in her annotations and anti-heroic efforts to archive her father's lost novel. 2010s The 2015 play Ada and the Engine by Lauren Gunderson portrays Lovelace and Charles Babbage in unrequited love, and it imagines a post-death meeting between Lovelace and her father. Lovelace and Babbage are the main characters in Sydney Padua's webcomic and graphic novel The Thrilling Adventures of Lovelace and Babbage. The comic features extensive footnotes on the history of Ada Lovelace, and many lines of dialogue are drawn from actual correspondence. Lovelace and Mary Shelley as teenagers are the central characters in Jordan Stratford's steampunk series, The Wollstonecraft Detective Agency. Lovelace, identified as Ada Augusta Byron, is portrayed by Lily Lesser in the second season of The Frankenstein Chronicles. She is employed as an "analyst" to provide the workings of a life-sized humanoid automaton. The brass workings of the machine are reminiscent of Babbage's analytical engine. Her employment is described as keeping her occupied until she returns to her studies in advanced mathematics. Lovelace and Babbage appear as characters in the second season of the ITV series Victoria (2017). Emerald Fennell portrays Lovelace in the episode, "The Green-Eyed Monster." The Cardano cryptocurrency platform, which was launched in 2017, uses Ada as the name for their cryptocurrency and Lovelace as the smallest sub-unit of an Ada. "Lovelace" is the name given to the operating system designed by the character Cameron Howe in Halt and Catch Fire. Lovelace is a primary character in the 2019 Big Finish Doctor Who audio play The Enchantress of Numbers, starring Tom Baker as the Fourth Doctor and Jane Slavin as his current companion, WPC Ann Kelso. Lovelace is played by Finty Williams. In 2019, Lovelace is a featured character in the play STEM FEMMES by Philadelphia theater company Applied Mechanics. 2020s Lovelace features as a character in "Spyfall, Part 2", the second episode of Doctor Who, series 12, which first aired on BBC One on 5 January 2020. The character was portrayed by Sylvie Briggs, alongside characterisations of Charles Babbage and Noor Inayat Khan. In 2021, Nvidia named their upcoming GPU architecture (to be released in 2022), "Ada Lovelace", after her. Commemoration The computer language Ada, created on behalf of the United States Department of Defense, was named after Lovelace. The reference manual for the language was approved on 10 December 1980 and the Department of Defense Military Standard for the language, MIL-STD-1815, was given the number of the year of her birth. In 1981, the Association for Women in Computing inaugurated its Ada Lovelace Award. Since 1998, the British Computer Society (BCS) has awarded the Lovelace Medal, and in 2008 initiated an annual competition for women students. BCSWomen sponsors the Lovelace Colloquium, an annual conference for women undergraduates. Ada College is a further-education college in Tottenham Hale, London, focused on digital skills. 
Ada Lovelace Day is an annual event celebrated on the second Tuesday of October, which began in 2009. Its goal is to "... raise the profile of women in science, technology, engineering, and maths," and to "create new role models for girls and women" in these fields. Events have included Wikipedia edit-a-thons aimed at improving the representation of women on Wikipedia, in both articles and editors, to reduce unintended gender bias on the site. The Ada Initiative was a non-profit organisation dedicated to increasing the involvement of women in the free culture and open source movements. The Engineering in Computer Science and Telecommunications College building in Zaragoza University is called the Ada Byron Building. The computer centre in the village of Porlock, near where Lovelace lived, is named after her. Ada Lovelace House is a council-owned building in Kirkby-in-Ashfield, Nottinghamshire, near where Lovelace spent her infancy. In 2012, a Google Doodle and blog post honoured her on her birthday. In 2013, Ada Developers Academy was founded and named after her. Its mission is to diversify the tech industry by providing women and gender-diverse people with the skills, experience, and community support to become professional software developers. On 17 September 2013, an episode of Great Lives about Ada Lovelace aired. As of November 2015, all new British passports have included an illustration of Lovelace and Babbage. In 2017, a Google Doodle honoured her with other women on International Women's Day. On 2 February 2018, Satellogic, a high-resolution Earth observation imaging and analytics company, launched a ÑuSat type micro-satellite named in honour of Ada Lovelace. In March 2018, The New York Times published a belated obituary for Ada Lovelace. On 27 July 2018, Senator Ron Wyden submitted, in the United States Senate, the designation of 9 October 2018 as National Ada Lovelace Day: "To honor the life and contributions of Ada Lovelace as a leading woman in science and mathematics". The resolution (S.Res.592) was considered, and agreed to without amendment and with a preamble by unanimous consent. In November 2020 it was announced that Trinity College Dublin, whose library had previously held forty busts, all of them of men, was commissioning four new busts of women, one of whom was to be Lovelace. Bicentenary The bicentenary of Ada Lovelace's birth was celebrated with a number of events, including: The Ada Lovelace Bicentenary Lectures on Computability, Israel Institute for Advanced Studies, 20 December 2015 – 31 January 2016. Ada Lovelace Symposium, University of Oxford, 13–14 October 2015. Ada.Ada.Ada, a one-woman show about the life and work of Ada Lovelace (using an LED dress), premiered at Edinburgh International Science Festival on 11 April 2015, and continues to tour internationally to promote diversity in STEM at technology conferences, businesses, and government and educational organisations. Special exhibitions were displayed by the Science Museum in London, England and the Weston Library (part of the Bodleian Library) in Oxford, England. Publications Lovelace, Ada King. Ada, the Enchantress of Numbers: A Selection from the Letters of Lord Byron's Daughter and her Description of the First Computer. Mill Valley, CA: Strawberry Press, 1992. Publication history Six copies of the 1843 first edition of Sketch of the Analytical Engine with Ada Lovelace's "Notes" have been located.
Three are held at Harvard University, one at the University of Oklahoma, and one at the United States Air Force Academy. On 20 July 2018, the sixth copy was sold at auction to an anonymous buyer for £95,000. A digital facsimile of one of the copies in the Harvard University Library is available online. In December 2016, a letter written by Ada Lovelace was forfeited by Martin Shkreli to the New York State Department of Taxation and Finance for unpaid taxes owed by Shkreli. See also Ai-Da (robot) Code: Debugging the Gender Gap List of pioneers in computer science Timeline of women in science Women in computing Women in STEM fields Further reading Miranda Seymour, In Byron's Wake: The Turbulent Lives of Byron's Wife and Daughter: Annabella Milbanke and Ada Lovelace, Pegasus, 2018, 547 pp. Christopher Hollings, Ursula Martin, and Adrian Rice, Ada Lovelace: The Making of a Computer Scientist, Bodleian Library, 2018, 114 pp. Jenny Uglow, "Stepping Out of Byron's Shadow", The New York Review of Books, vol. LXV, no. 18 (22 November 2018), pp. 30–32. Jennifer Chiaverini, Enchantress of Numbers, Dutton, 2017, 426 pp. External links "Ada's Army gets set to rewrite history at Inspirefest 2018" by Luke Maxwell, 4 August 2018 "Untangling the Tale of Ada Lovelace" by Stephen Wolfram, December 2015
Ada Lovelace
The Alps are the highest and most extensive mountain range system that lies entirely in Europe, stretching approximately across eight Alpine countries (from west to east): France, Switzerland, Monaco, Italy, Liechtenstein, Austria, Germany, and Slovenia. The Alpine arch generally extends from Nice on the western Mediterranean to Trieste on the Adriatic and Vienna at the beginning of the Pannonian Basin. The mountains were formed over tens of millions of years as the African and Eurasian tectonic plates collided. Extreme shortening caused by the event resulted in marine sedimentary rocks rising by thrusting and folding into high mountain peaks such as Mont Blanc and the Matterhorn. Mont Blanc spans the French–Italian border, and at is the highest mountain in the Alps. The Alpine region area contains 128 peaks higher than . The altitude and size of the range affect the climate in Europe; in the mountains, precipitation levels vary greatly and climatic conditions consist of distinct zones. Wildlife such as ibex live in the higher peaks to elevations of , and plants such as Edelweiss grow in rocky areas in lower elevations as well as in higher elevations. Evidence of human habitation in the Alps goes back to the Palaeolithic era. A mummified man, determined to be 5,000 years old, was discovered on a glacier at the Austrian–Italian border in 1991. By the 6th century BC, the Celtic La Tène culture was well established. Hannibal famously crossed the Alps with a herd of elephants, and the Romans had settlements in the region. In 1800, Napoleon crossed one of the mountain passes with an army of 40,000. The 18th and 19th centuries saw an influx of naturalists, writers, and artists, in particular, the Romantics, followed by the golden age of alpinism as mountaineers began to ascend the peaks. The Alpine region has a strong cultural identity. The traditional culture of farming, cheesemaking, and woodworking still exists in Alpine villages, although the tourist industry began to grow early in the 20th century and expanded greatly after World War II to become the dominant industry by the end of the century. The Winter Olympic Games have been hosted in the Swiss, French, Italian, Austrian and German Alps. At present, the region is home to 14 million people and has 120 million annual visitors. Etymology and toponymy The English word Alps comes from the Latin Alpes. The Latin word Alpes could possibly come from the adjective albus ("white"), or could possibly come from the Greek goddess Alphito, whose name is related to alphita, the "white flour"; alphos, a dull white leprosy; and finally the Proto-Indo-European word *albʰós. Similarly, the river god Alpheus is also supposed to derive from the Greek alphos and means whitish. In his commentary on the Aeneid of Vergil, the late fourth-century grammarian Maurus Servius Honoratus says that all high mountains are called Alpes by Celts. According to the Oxford English Dictionary, the Latin Alpes might possibly derive from a pre-Indo-European word *alb "hill"; "Albania" is a related derivation. Albania, a name not native to the region known as the country of Albania, has been used as a name for a number of mountainous areas across Europe. In Roman times, "Albania" was a name for the eastern Caucasus, while in the English languages "Albania" (or "Albany") was occasionally used as a name for Scotland, although it is more likely derived from the Latin word albus, the color white. 
In modern languages the term alp, alm, albe or alpe refers to grazing pastures in the alpine regions below the glaciers, not the peaks. An alp refers to a high mountain pasture, typically near or above the tree line, where cows and other livestock are taken to be grazed during the summer months and where huts and hay barns can be found, sometimes constituting tiny hamlets. Therefore, the term "the Alps", as a reference to the mountains, is a misnomer. The term for the mountain peaks varies by nation and language: words such as Horn, Kogel, Kopf, Gipfel, Spitze, Stock, and Berg are used in German-speaking regions; Mont, Pic, Tête, Pointe, Dent, Roche, and Aiguille in French-speaking regions; and Monte, Picco, Corno, Punta, Pizzo, or Cima in Italian-speaking regions. Geography The Alps are a crescent-shaped geographic feature of central Europe that ranges in an arc from east to west. The range stretches from the Mediterranean Sea north above the Po basin, extending through France from Grenoble, and stretching eastward through mid and southern Switzerland. The range continues onward toward Vienna, Austria, and east to the Adriatic Sea and Slovenia. To the south it dips into northern Italy and to the north extends to the southern border of Bavaria in Germany. In areas like Chiasso, Switzerland, and Allgäu, Bavaria, the demarcation between the mountain range and the flatlands is clear; in other places such as Geneva, the demarcation is less clear. The countries with the greatest alpine territory are Austria (28.7% of the total area), Italy (27.2%), France (21.4%) and Switzerland (13.2%). The highest portion of the range is divided by the glacial trough of the Rhône valley, from Mont Blanc to the Matterhorn and Monte Rosa on the southern side, and the Bernese Alps on the northern. The peaks in the easterly portion of the range, in Austria and Slovenia, are smaller than those in the central and western portions. The variations in nomenclature in the region spanned by the Alps make classification of the mountains and subregions difficult, but a general classification is that of the Eastern Alps and Western Alps, with the divide between the two occurring in eastern Switzerland, according to geologist Stefan Schmid, near the Splügen Pass. The highest peaks of the Western Alps and Eastern Alps, respectively, are Mont Blanc and Piz Bernina. The second-highest major peaks are Monte Rosa and Ortler, respectively. A series of lower mountain ranges runs parallel to the main chain of the Alps, including the French Prealps in France and the Jura Mountains in Switzerland and France. The secondary chain of the Alps follows the watershed from the Mediterranean Sea to the Wienerwald, passing over many of the highest and most well-known peaks in the Alps. From the Colle di Cadibona to Col de Tende it runs westwards, before turning to the northwest and then, near the Colle della Maddalena, to the north. Upon reaching the Swiss border, the line of the main chain heads approximately east-northeast, a heading it follows until its end near Vienna. The northeast end of the Alpine arc, directly on the Danube, which flows into the Black Sea, is the Leopoldsberg near Vienna. In contrast, the southeastern part of the Alps ends on the Adriatic Sea in the area around Trieste towards Duino and Barcola. Passes The Alps have been crossed for war and commerce, and by pilgrims, students and tourists.
Crossing routes by road, train or foot are known as passes, and usually consist of depressions in the mountains in which a valley leads from the plains and hilly pre-mountainous zones. In the medieval period hospices were established by religious orders at the summits of many of the main passes. The most important passes are the Col de l'Iseran (the highest), the Col Agnel, the Brenner Pass, the Mont-Cenis, the Great St. Bernard Pass, the Col de Tende, the Gotthard Pass, the Semmering Pass, the Simplon Pass, and the Stelvio Pass. Crossing the Italian-Austrian border, the Brenner Pass separates the Ötztal Alps and Zillertal Alps and has been in use as a trading route since the 14th century. The lowest of the Alpine passes at , the Semmering crosses from Lower Austria to Styria; since the 12th century when a hospice was built there, it has seen continuous use. A railroad with a tunnel long was built along the route of the pass in the mid-19th century. With a summit of , the Great St. Bernard Pass is one of the highest in the Alps, crossing the Italian-Swiss border east of the Pennine Alps along the flanks of Mont Blanc. The pass was used by Napoleon Bonaparte to cross 40,000 troops in 1800. The Mont Cenis pass has been a major commercial and military road between Western Europe and Italy. The pass was crossed by many troops on their way to the Italian peninsula. From Constantine I, Pepin the Short and Charlemagne to Henry IV, Napoléon and more recently the German Gebirgsjägers during World War II. Now the pass has been supplanted by the Fréjus Highway Tunnel (opened 1980) and Rail Tunnel (opened 1871). The Saint Gotthard Pass crosses from Central Switzerland to Ticino; in 1882 the Saint Gotthard Railway Tunnel was opened connecting Lucerne in Switzerland, with Milan in Italy. 98 years later followed Gotthard Road Tunnel ( long) connecting the A2 motorway in Göschenen on the north side with Airolo on the south side, exactly like the railway tunnel. On 1 June 2016 the world's longest railway tunnel, the Gotthard Base Tunnel was opened, which connects Erstfeld in canton of Uri with Bodio in canton of Ticino by two single tubes of . It is the first tunnel that traverses the Alps on a flat route. From 11 December 2016, it has been part of the regular railway timetable and used hourly as standard ride between Basel/Lucerne/Zurich and Bellinzona/Lugano/Milan. The highest pass in the alps is the col de l'Iseran in Savoy (France) at , followed by the Stelvio Pass in northern Italy at ; the road was built in the 1820s. Highest mountains The Union Internationale des Associations d'Alpinisme (UIAA) has defined a list of 82 "official" Alpine summits that reach at least . The list includes not only mountains, but also subpeaks with little prominence that are considered important mountaineering objectives. Below are listed the 29 "four-thousanders" with at least of prominence. While Mont Blanc was first climbed in 1786 and the Jungfrau in 1811, most of the Alpine four-thousanders were climbed during the second half of the 19th century, notably Piz Bernina (1850), the Dom (1858), the Grand Combin (1859), the Weisshorn (1861) and the Barre des Écrins (1864); the ascent of the Matterhorn in 1865 marked the end of the golden age of alpinism. Karl Blodig (1859–1956) was among the first to successfully climb all the major 4,000 m peaks. He completed his series of ascents in 1911. 
Many of the big Alpine three-thousanders were climbed in the early 19th century, notably the Grossglockner (1800) and the Ortler (1804), although some of them were climbed only much later, such at Mont Pelvoux (1848), Monte Viso (1861) and La Meije (1877). The first British Mont Blanc ascent was in 1788; the first female ascent in 1819. By the mid-1850s Swiss mountaineers had ascended most of the peaks and were eagerly sought as mountain guides. Edward Whymper reached the top of the Matterhorn in 1865 (after seven attempts), and in 1938 the last of the six great north faces of the Alps was climbed with the first ascent of the Eiger Nordwand (north face of the Eiger). Geology and orogeny Important geological concepts were established as naturalists began studying the rock formations of the Alps in the 18th century. In the mid-19th century the now-defunct theory of geosynclines was used to explain the presence of "folded" mountain chains but by the mid-20th century the theory of plate tectonics became widely accepted. The formation of the Alps (the Alpine orogeny) was an episodic process that began about 300 million years ago. In the Paleozoic Era the Pangaean supercontinent consisted of a single tectonic plate; it broke into separate plates during the Mesozoic Era and the Tethys sea developed between Laurasia and Gondwana during the Jurassic Period. The Tethys was later squeezed between colliding plates causing the formation of mountain ranges called the Alpide belt, from Gibraltar through the Himalayas to Indonesia—a process that began at the end of the Mesozoic and continues into the present. The formation of the Alps was a segment of this orogenic process, caused by the collision between the African and the Eurasian plates that began in the late Cretaceous Period. Under extreme compressive stresses and pressure, marine sedimentary rocks were uplifted, creating characteristic recumbent folds, or nappes, and thrust faults. As the rising peaks underwent erosion, a layer of marine flysch sediments was deposited in the foreland basin, and the sediments became involved in younger nappes (folds) as the orogeny progressed. Coarse sediments from the continual uplift and erosion were later deposited in foreland areas as molasse. The molasse regions in Switzerland and Bavaria were well-developed and saw further upthrusting of flysch. The Alpine orogeny occurred in ongoing cycles through to the Paleogene causing differences in nappe structures, with a late-stage orogeny causing the development of the Jura Mountains. A series of tectonic events in the Triassic, Jurassic and Cretaceous periods caused different paleogeographic regions. The Alps are subdivided by different lithology (rock composition) and nappe structure according to the orogenic events that affected them. The geological subdivision differentiates the Western, Eastern Alps and Southern Alps: the Helveticum in the north, the Penninicum and Austroalpine system in the centre and, south of the Periadriatic Seam, the Southern Alpine system. According to geologist Stefan Schmid, because the Western Alps underwent a metamorphic event in the Cenozoic Era while the Austroalpine peaks underwent an event in the Cretaceous Period, the two areas show distinct differences in nappe formations. Flysch deposits in the Southern Alps of Lombardy probably occurred in the Cretaceous or later. Peaks in France, Italy and Switzerland lie in the "Houillière zone", which consists of basement with sediments from the Mesozoic Era. 
High "massifs" with external sedimentary cover are more common in the Western Alps and were affected by Neogene Period thin-skinned thrusting whereas the Eastern Alps have comparatively few high peaked massifs. Similarly the peaks in eastern Switzerland extending to western Austria (Helvetic nappes) consist of thin-skinned sedimentary folding that detached from former basement rock. In simple terms, the structure of the Alps consists of layers of rock of European, African and oceanic (Tethyan) origin. The bottom nappe structure is of continental European origin, above which are stacked marine sediment nappes, topped off by nappes derived from the African plate. The Matterhorn is an example of the ongoing orogeny and shows evidence of great folding. The tip of the mountain consists of gneisses from the African plate; the base of the peak, below the glaciated area, consists of European basement rock. The sequence of Tethyan marine sediments and their oceanic basement is sandwiched between rock derived from the African and European plates. The core regions of the Alpine orogenic belt have been folded and fractured in such a manner that erosion created the characteristic steep vertical peaks of the Swiss Alps that rise seemingly straight out of the foreland areas. Peaks such as Mont Blanc, the Matterhorn, and high peaks in the Pennine Alps, the Briançonnais, and Hohe Tauern consist of layers of rock from the various orogenies including exposures of basement rock. Due to the ever-present geologic instability, earthquakes continue in the Alps to this day. Typically, the largest earthquakes in the alps have been between magnitude 6 and 7 on the Richter scale. Minerals The Alps are a source of minerals that have been mined for thousands of years. In the 8th to 6th centuries BC during the Hallstatt culture, Celtic tribes mined copper; later the Romans mined gold for coins in the Bad Gastein area. Erzberg in Styria furnishes high-quality iron ore for the steel industry. Crystals, such as cinnabar, amethyst, and quartz, are found throughout much of the Alpine region. The cinnabar deposits in Slovenia are a notable source of cinnabar pigments. Alpine crystals have been studied and collected for hundreds of years, and began to be classified in the 18th century. Leonhard Euler studied the shapes of crystals, and by the 19th century crystal hunting was common in Alpine regions. David Friedrich Wiser amassed a collection of 8000 crystals that he studied and documented. In the 20th century Robert Parker wrote a well-known work about the rock crystals of the Swiss Alps; at the same period a commission was established to control and standardize the naming of Alpine minerals. Glaciers In the Miocene Epoch the mountains underwent severe erosion because of glaciation, which was noted in the mid-19th century by naturalist Louis Agassiz who presented a paper proclaiming the Alps were covered in ice at various intervals—a theory he formed when studying rocks near his Neuchâtel home which he believed originated to the west in the Bernese Oberland. Because of his work he came to be known as the "father of the ice-age concept" although other naturalists before him put forth similar ideas. Agassiz studied glacier movement in the 1840s at the Unteraar Glacier where he found the glacier moved per year, more rapidly in the middle than at the edges. His work was continued by other scientists and now a permanent laboratory exists inside a glacier under the Jungfraujoch, devoted exclusively to the study of Alpine glaciers. 
Glaciers pick up rocks and sediment as they flow. This causes erosion and the formation of valleys over time. The Inn valley is an example of a valley carved by glaciers during the ice ages, with a typical terraced structure caused by erosion. Eroded rocks from the most recent ice age lie at the bottom of the valley while the top of the valley consists of erosion from earlier ice ages. Glacial valleys have characteristically steep walls (reliefs); valleys with lower reliefs and talus slopes are remnants of glacial troughs or previously infilled valleys. Moraines, piles of rock picked up during the movement of the glacier, accumulate at the edges, centre and terminus of glaciers. Alpine glaciers can be straight rivers of ice, long sweeping rivers, spread in a fan-like shape (Piedmont glaciers), and curtains of ice that hang from vertical slopes of the mountain peaks. The stress of the movement causes the ice to break and crack loudly, perhaps explaining why the mountains were believed to be home to dragons in the medieval period. The cracking creates unpredictable and dangerous crevasses, often invisible under new snowfall, which cause the greatest danger to mountaineers. Glaciers end in ice caves (the Rhône Glacier), by trailing into a lake or river, or by shedding snowmelt on a meadow. Sometimes a piece of glacier will detach or break, resulting in flooding, property damage and loss of life. High levels of precipitation cause the glaciers to descend to permafrost levels in some areas, whereas in other, more arid regions, glaciers remain only at higher elevations. The area of the Alps covered by glaciers shrank considerably between 1876 and 1973, resulting in decreased river run-off levels. Forty percent of the glaciation in Austria has disappeared since 1850, and 30% of that in Switzerland. Rivers and lakes The Alps provide lowland Europe with drinking water, irrigation, and hydroelectric power. Although the area is only about 11% of the surface area of Europe, the Alps provide up to 90% of water to lowland Europe, particularly to arid areas and during the summer months. Cities such as Milan depend on Alpine runoff for 80% of their water. Water from the rivers is used in at least 550 hydroelectric power plants, counting only those producing at least 10 MW of electricity. Major European rivers flow from the Alps, such as the Rhine, the Rhône, the Inn, and the Po, all of which have headwaters in the Alps and flow into neighbouring countries, finally emptying into the North Sea, the Mediterranean Sea, the Adriatic Sea and the Black Sea. Other rivers such as the Danube have major tributaries flowing into them that originate in the Alps. The Rhône is second to the Nile as a freshwater source to the Mediterranean Sea; the river begins as glacial meltwater, flows into Lake Geneva, and from there to France, where one of its uses is to cool nuclear power plants. The Rhine originates in Switzerland and represents almost 60% of water exported from the country. Tributary valleys, some of which are complicated, channel water to the main valleys, which can experience flooding during the snowmelt season when rapid runoff causes debris torrents and swollen rivers. The rivers form lakes, such as Lake Geneva, a crescent-shaped lake crossing the Swiss border with Lausanne on the Swiss side and the town of Evian-les-Bains on the French side. In Germany, the medieval St. Bartholomew's chapel was built on the south side of the Königssee, accessible only by boat or by climbing over the abutting peaks.
Additionally, the Alps have led to the creation of large lakes in Italy. For instance, the Sarca, the primary inflow of Lake Garda, originates in the Italian Alps. The Italian Lakes have been a popular tourist destination since the Roman era because of their mild climate. Scientists have been studying the impact of climate change and water use. For example, each year more water is diverted from rivers for snowmaking in the ski resorts, the effect of which is yet unknown. Furthermore, the decrease of glaciated areas combined with a succession of winters with lower-than-expected precipitation may have a future impact on the rivers in the Alps as well as an effect on the water availability to the lowlands. Climate The Alps are a classic example of what happens when a temperate area at lower altitude gives way to higher-elevation terrain. Elevations around the world that have cold climates similar to those of the polar regions have been called Alpine. A rise from sea level into the upper regions of the atmosphere causes the temperature to decrease (see adiabatic lapse rate). The effect of mountain chains on prevailing winds is to carry warm air belonging to the lower region into an upper zone, where it expands in volume at the cost of a proportionate loss of temperature, often accompanied by precipitation in the form of snow or rain. The height of the Alps is sufficient to divide the weather patterns in Europe into a wet north and a dry south because moisture is sucked from the air as it flows over the high peaks. The severe weather in the Alps has been studied since the 18th century, particularly weather patterns such as the seasonal foehn wind. Numerous weather stations were placed in the mountains early in the 20th century, providing continuous data for climatologists. Some of the valleys are quite arid, such as the Aosta valley in Italy, the Maurienne in France, the Valais in Switzerland, and northern Tyrol. The areas that are not arid and receive high precipitation experience periodic flooding from rapid snowmelt and runoff. Mean precipitation in the Alps varies greatly from place to place, with the higher levels occurring at high altitudes. At higher altitudes, snowfall begins in November and accumulates through to April or May, when the melt begins. Snow lines vary with location; above them the snow is permanent and the temperatures hover around the freezing point even during July and August. High-water levels in streams and rivers peak in June and July when the snow is still melting at the higher altitudes. The Alps are split into five climatic zones, each with different vegetation. The climate, plant life and animal life vary among the different sections or zones of the mountains. The lowest zone is the colline zone, whose extent depends on the location. The montane zone lies above it, followed by the sub-Alpine zone. The Alpine zone, extending from tree line to snow line, is followed by the glacial zone, which covers the glaciated areas of the mountain. Climatic conditions vary within the same zones; for example, weather conditions at the head of a mountain valley, extending directly from the peaks, are colder and more severe than those at the mouth of a valley, which tend to be less severe and receive less snowfall. Various models of climate change have been projected into the 22nd century for the Alps, with an expectation that a trend toward increased temperatures will have an effect on snowfall, snowpack, glaciation, and river runoff.
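The temperature decrease with altitude invoked above (the adiabatic lapse rate) can be put in rough numbers. The figures below are standard-atmosphere textbook constants rather than values given in this article, so the arithmetic is only an illustrative sketch of the mechanism behind the wet-north and dry-south contrast:

```latex
% Illustrative lapse-rate arithmetic with standard-atmosphere constants
% (assumed values, not quoted from this article).
\[
\Gamma_{\mathrm{dry}} = \frac{g}{c_p}
  \approx \frac{9.81~\mathrm{m\,s^{-2}}}{1005~\mathrm{J\,kg^{-1}\,K^{-1}}}
  \approx 9.8~\mathrm{K\,km^{-1}},
\qquad
\Gamma_{\mathrm{moist}} \approx 5~\mathrm{K\,km^{-1}}
\]
\[
\Delta T \approx \Gamma\,\Delta z
  \approx (5\text{--}10~\mathrm{K\,km^{-1}}) \times 3~\mathrm{km}
  \approx 15\text{--}30~\mathrm{K}
\]
```

So air forced to rise roughly three kilometres over the main chain cools by some 15–30 °C, losing much of its moisture as precipitation on the windward side and arriving comparatively dry on the far side.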
Significant changes, of both natural and anthropogenic origins, have already been diagnosed from observations. Ecology Flora Thirteen thousand species of plants have been identified in the Alpine regions. Alpine plants are grouped by habitat and soil type which can be limestone or non-calcareous. The habitats range from meadows, bogs, woodland (deciduous and coniferous) areas to soil-less scree and moraines, and rock faces and ridges. A natural vegetation limit with altitude is given by the presence of the chief deciduous trees—oak, beech, ash and sycamore maple. These do not reach exactly to the same elevation, nor are they often found growing together; but their upper limit corresponds accurately enough to the change from a temperate to a colder climate that is further proved by a change in the presence of wild herbaceous vegetation. This limit usually lies about above the sea on the north side of the Alps, but on the southern slopes it often rises to , sometimes even to . Above the forestry, there is often a band of short pine trees (Pinus mugo), which is in turn superseded by Alpenrosen, dwarf shrubs, typically Rhododendron ferrugineum (on acid soils) or Rhododendron hirsutum (on alkaline soils). Although the Alpenrose prefers acidic soil, the plants are found throughout the region. Above the tree line is the area defined as "alpine" where in the alpine meadow plants are found that have adapted well to harsh conditions of cold temperatures, aridity, and high altitudes. The alpine area fluctuates greatly because of regional fluctuations in tree lines. Alpine plants such as the Alpine gentian grow in abundance in areas such as the meadows above the Lauterbrunnental. Gentians are named after the Illyrian king Gentius, and 40 species of the early-spring blooming flower grow in the Alps, in a range of . Writing about the gentians in Switzerland D. H. Lawrence described them as "darkening the day-time, torch-like with the smoking blueness of Pluto's gloom." Gentians tend to "appear" repeatedly as the spring blooming takes place at progressively later dates, moving from the lower altitude to the higher altitude meadows where the snow melts much later than in the valleys. On the highest rocky ledges the spring flowers bloom in the summer. At these higher altitudes, the plants tend to form isolated cushions. In the Alps, several species of flowering plants have been recorded above , including Ranunculus glacialis, Androsace alpina and Saxifraga biflora. Eritrichium nanum, commonly known as the King of the Alps, is the most elusive of the alpine flowers, growing on rocky ridges at . Perhaps the best known of the alpine plants is Edelweiss which grows in rocky areas and can be found at altitudes as low as and as high as . The plants that grow at the highest altitudes have adapted to conditions by specialization such as growing in rock screes that give protection from winds. The extreme and stressful climatic conditions give way to the growth of plant species with secondary metabolites important for medicinal purposes. Origanum vulgare, Prunella vulgaris, Solanum nigrum and Urtica dioica are some of the more useful medicinal species found in the Alps. Human interference has nearly exterminated the trees in many areas, and, except for the beech forests of the Austrian Alps, forests of deciduous trees are rarely found after the extreme deforestation between the 17th and 19th centuries. 
The vegetation has changed since the second half of the 20th century, as the high alpine meadows cease to be harvested for hay or used for grazing, which might eventually result in a regrowth of forest. In some areas, the modern practice of building ski runs by mechanical means has destroyed the underlying tundra from which the plant life cannot recover during the non-skiing months, whereas areas that still practice a natural piste type of ski slope building preserve the fragile underlayers. Fauna The Alps are a habitat for 30,000 species of wildlife, ranging from the tiniest snow fleas to brown bears, many of which have made adaptations to the harsh cold conditions and high altitudes to the point that some only survive in specific micro-climates either directly above or below the snow line. The largest mammal to live at the highest altitudes is the alpine ibex, which has been sighted at very high elevations. The ibex live in caves and descend to eat the succulent alpine grasses. Classified as antelopes, chamois are smaller than ibex, are found throughout the Alps living above the tree line, and are common across the entire alpine range. Areas of the eastern Alps are still home to brown bears. In Switzerland the canton of Bern was named for the bears, but the last bear is recorded as having been killed in 1792 above Kleine Scheidegg by three hunters from Grindelwald. Many rodents such as voles live underground. Marmots live almost exclusively above the tree line. They hibernate in large groups to provide warmth and can be found in all areas of the Alps, in large colonies that they build beneath the alpine pastures. Golden eagles and bearded vultures are the largest birds to be found in the Alps; they nest high on rocky ledges and can be found at great altitudes. The most common bird is the alpine chough, which can be found scavenging at climbers' huts or at the Jungfraujoch, a high-altitude tourist destination. Reptiles such as adders and vipers live up to the snow line; because they cannot bear the cold temperatures, they hibernate underground and soak up the warmth on rocky ledges. The high-altitude Alpine salamanders have adapted to living above the snow line by giving birth to fully developed young rather than laying eggs. Brown trout can be found in the streams up to the snow line. Molluscs such as the wood snail live up to the snow line. Popularly gathered as food, the snails are now protected. A number of species of moths live in the Alps, some of which are believed to have evolved in the same habitat up to 120 million years ago, long before the Alps were created. Blue butterflies can commonly be seen drinking from the snowmelt; some species of blues fly at remarkably high altitudes. The butterflies tend to be large, such as those from the swallowtail Parnassius family, with a habitat that ranges into the high mountains. Twelve species of beetles have habitats up to the snow line; the most beautiful, formerly collected for its colours but now protected, is Rosalia alpina. Spiders, such as the large wolf spider, live above the snow line and can be seen at high elevations. Scorpions can be found in the Italian Alps. Some of the species of moths and insects show evidence of having been indigenous to the area from as long ago as the Alpine orogeny. In Emosson in Valais, Switzerland, dinosaur tracks were found in the 1970s, dating probably from the Triassic Period.
History Prehistory to Christianity About 10,000 years ago, when the ice melted after the Würm glaciation, late Palaeolithic communities were established along the lake shores and in cave systems. Evidence of human habitation has been found in caves near Vercors, close to Grenoble; in Austria the Mondsee culture shows evidence of houses built on piles to keep them dry. Standing stones have been found in Alpine areas of France and Italy. The Rock Drawings in Valcamonica are more than 5000 years old; more than 200,000 drawings and etchings have been identified at the site. In 1991, a mummy of a neolithic body, known as Ötzi the Iceman, was discovered by hikers on the Similaun glacier. His clothing and gear indicate that he lived in an alpine farming community, while the location and manner of his death – an arrowhead was discovered in his shoulder – suggests he was travelling from one place to another. Analysis of the mitochondrial DNA of Ötzi, has shown that he belongs to the K1 subclade which cannot be categorized into any of the three modern branches of that subclade. The new subclade has provisionally been named K1ö for Ötzi. Celtic tribes settled in Switzerland between 1500 and 1000 BC. The Raetians lived in the eastern regions, while the west was occupied by the Helvetii and the Allobrogi settled in the Rhône valley and in Savoy. The Ligurians and Adriatic Veneti lived in north-west Italy and Triveneto respectively. Among the many substances Celtic tribes mined was salt in areas such as Salzburg in Austria where evidence of the Hallstatt culture was found by a mine manager in the 19th century. By the 6th century BC the La Tène culture was well established in the region, and became known for high quality decorated weapons and jewellery. The Celts were the most widespread of the mountain tribes—they had warriors that were strong, tall and fair skinned, and skilled with iron weapons, which gave them an advantage in warfare. During the Second Punic War in 218 BC, the Carthaginian general Hannibal probably crossed the Alps with an army numbering 38,000 infantry, 8,000 cavalry, and 37 war elephants. This was one of the most celebrated achievements of any military force in ancient warfare, although no evidence exists of the actual crossing or the place of crossing. The Romans, however, had built roads along the mountain passes, which continued to be used through the medieval period to cross the mountains and Roman road markers can still be found on the mountain passes. The Roman expansion brought the defeat of the Allobrogi in 121 BC and during the Gallic Wars in 58 BC Julius Caesar overcame the Helvetii. The Rhaetians continued to resist but were eventually conquered when the Romans turned northward to the Danube valley in Austria and defeated the Brigantes. The Romans built settlements in the Alps; towns such as Aosta (named for Augustus) in Italy, Martigny and Lausanne in Switzerland, and Partenkirchen in Bavaria show remains of Roman baths, villas, arenas and temples. Much of the Alpine region was gradually settled by Germanic tribes, (Lombards, Alemanni, Bavarii, and Franks) from the 6th to the 13th centuries mixing with the local Celtic tribes. Christianity, feudalism, and Napoleonic wars Christianity was established in the region by the Romans, and saw the establishment of monasteries and churches in the high regions. 
The Frankish expansion of the Carolingian Empire and the Bavarian expansion in the eastern Alps introduced feudalism and the building of castles to support the growing number of dukedoms and kingdoms. Castello del Buonconsiglio in Trento, Italy, still has intricate frescoes, excellent examples of Gothic art, in a tower room. In Switzerland, Château de Chillon is preserved as an example of medieval architecture. Much of the medieval period was a time of power struggles between competing dynasties such as the House of Savoy, the Visconti in northern Italy and the House of Habsburg in Austria and Slovenia. In 1291, to protect themselves from incursions by the Habsburgs, four cantons in the middle of Switzerland drew up a charter that is considered to be a declaration of independence from neighbouring kingdoms. After a series of battles fought in the 13th, 14th and 15th centuries, more cantons joined the confederacy and by the 16th century Switzerland was well-established as a separate state. During the Napoleonic Wars in the late 18th century and early 19th century, Napoleon annexed territory formerly controlled by the Habsburgs and Savoys. In 1798, he established the Helvetic Republic in Switzerland; two years later he led an army across the St. Bernard pass and conquered almost all of the Alpine regions. After the fall of Napoléon, many alpine countries developed heavy protections to prevent any new invasion. Thus, Savoy built a series of fortifications in the Maurienne valley in order to protect the major alpine passes, such as the col du Mont-Cenis that was even crossed by Charlemagne and his father to defeat the Lombards. The later indeed became very popular after the construction of a paved road ordered by Napoléon Bonaparte. The Barrière de l'Esseillon is a series of forts with heavy batteries, built on a cliff with a perfect view of the valley, a gorge on one side and steep mountains on the other side. In the 19th century, the monasteries built in the high Alps during the medieval period to shelter travellers and as places of pilgrimage, became tourist destinations. The Benedictines had built monasteries in Lucerne, Switzerland, and Oberammergau; the Cistercians in the Tyrol and at Lake Constance; and the Augustinians had abbeys in the Savoy and one in the centre of Interlaken, Switzerland. The Great St Bernard Hospice, built in the 9th or 10th centuries, at the summit of the Great Saint Bernard Pass was a shelter for travellers and place for pilgrims since its inception; by the 19th century it became a tourist attraction with notable visitors such as author Charles Dickens and mountaineer Edward Whymper. Exploration Radiocarbon-dated charcoal placed around 50,000 years ago was found in the Drachloch (Dragon's Hole) cave above the village of Vattis in the canton of St. Gallen, proving that the high peaks were visited by prehistoric people. Seven bear skulls from the cave may have been buried by the same prehistoric people. The peaks, however, were mostly ignored except for a few notable examples, and long left to the exclusive attention of the people of the adjoining valleys. The mountain peaks were seen as terrifying, the abode of dragons and demons, to the point that people blindfolded themselves to cross the Alpine passes. The glaciers remained a mystery and many still believed the highest areas to be inhabited by dragons. Charles VII of France ordered his chamberlain to climb Mont Aiguille in 1356. 
The knight reached the summit of Rocciamelone where he left a bronze triptych of three crosses, a feat which he conducted with the use of ladders to traverse the ice. In 1492, Antoine de Ville climbed Mont Aiguille, without reaching the summit, an experience he described as "horrifying and terrifying." Leonardo da Vinci was fascinated by variations of light in the higher altitudes, and climbed a mountain—scholars are uncertain which one; some believe it may have been Monte Rosa. From his description of a "blue like that of a gentian" sky it is thought that he reached a significantly high altitude. In the 18th century four Chamonix men almost made the summit of Mont Blanc but were overcome by altitude sickness and snowblindness. Conrad Gessner was the first naturalist to ascend the mountains in the 16th century to study them, writing that in the mountains he found the "theatre of the Lord". By the 19th century more naturalists began to arrive to explore, study and conquer the high peaks. Two men who first explored the regions of ice and snow were Horace-Bénédict de Saussure (1740–1799) in the Pennine Alps, and the Benedictine monk of Disentis Placidus a Spescha (1752–1833). Born in Geneva, Saussure was enamoured with the mountains from an early age; he left a law career to become a naturalist and spent many years trekking through the Bernese Oberland, the Savoy, the Piedmont and Valais, studying the glaciers and the geology, as he became an early proponent of the theory of rock upheaval. Saussure, in 1787, was a member of the third ascent of Mont Blanc—today the summits of all the peaks have been climbed. The Romantics and Alpinists Albrecht von Haller's poem Die Alpen (1732) described the mountains as an area of mythical purity. Jean-Jacques Rousseau was another writer who presented the Alps as a place of allure and beauty, in his novel Julie, or the New Heloise (1761). Later, the first wave of Romantics such as Goethe and Turner came to admire the scenery; Wordsworth visited the area in 1790, writing of his experiences in The Prelude (1799). Schiller later wrote the play William Tell (1804), which tells the story of the legendary Swiss marksman William Tell as part of the greater Swiss struggle for independence from the Habsburg Empire in the early 14th century. At the end of the Napoleonic Wars, the Alpine countries began to see an influx of poets, artists, and musicians, as visitors came to experience the sublime effects of monumental nature. In 1816, Byron, Percy Bysshe Shelley and his wife Mary Shelley visited Geneva and all three were inspired by the scenery in their writings. During these visits Shelley wrote the poem "Mont Blanc", Byron wrote "The Prisoner of Chillon" and the dramatic poem Manfred, and Mary Shelley, who found the scenery overwhelming, conceived the idea for the novel Frankenstein in her villa on the shores of Lake Geneva in the midst of a thunderstorm. When Coleridge travelled to Chamonix, he declaimed, in defiance of Shelley, who had signed himself "Atheos" in the guestbook of the Hotel de Londres near Montenvers, "Who would be, who could be an atheist in this valley of wonders". By the mid-19th century scientists began to arrive en masse to study the geology and ecology of the region. At the beginning of the 19th century, the tourism and mountaineering development of the Alps began.
In the early years of the "golden age of alpinism", scientific activities were mixed with sport, for example by the physicist John Tyndall, with the first ascent of the Matterhorn by Edward Whymper being the highlight. In the later years, the "silver age of alpinism", the focus was on mountain sports and climbing. The first president of the Alpine Club, John Ball, is considered the discoverer of the Dolomites, which for decades were the focus of climbers like Paul Grohmann, Michael Innerkofler and Angelo Dibona. The Nazis Austrian-born Adolf Hitler had a lifelong romantic fascination with the Alps and by the 1930s established a home at Berghof, in the Obersalzberg region outside of Berchtesgaden. His first visit to the area was in 1923 and he maintained a strong tie there until the end of his life. At the end of World War II, the US Army occupied Obersalzberg to prevent Hitler from retreating with the Wehrmacht into the mountains. By 1940 many of the Alpine countries were under the control of the Axis powers. Austria underwent a political coup that made it part of the Third Reich; France had been invaded and Italy was a fascist regime. Switzerland and Liechtenstein were the only countries to avoid an Axis takeover. The Swiss Confederation mobilized its troops—the country follows a doctrine of "armed neutrality", with all males required to have military training—a force that General Eisenhower estimated at about 850,000. The Swiss commanders wired the infrastructure leading into the country with explosives, and threatened to destroy bridges, railway tunnels and roads across passes in the event of a Nazi invasion; had an invasion occurred, the Swiss army would have retreated to the heart of the mountain peaks, where conditions were harsher and a military campaign would have involved difficult and protracted battles. German ski troops were trained for the war, and battles were waged in mountainous areas such as the battle at Riva Ridge in Italy, where the American 10th Mountain Division encountered heavy resistance in February 1945. At the end of the war, a substantial amount of Nazi plunder was found stored in Austria, where Hitler had hoped to retreat as the war drew to a close. The salt mines surrounding the Altaussee area, where American troops found gold coins stored in a single mine, were used to store looted art, jewels, and currency; vast quantities of looted art were found and returned to the owners. Largest cities The largest city within the Alps is the city of Grenoble in France. Other large and important cities within the Alps with over 100,000 inhabitants lie in Tyrol: Bolzano (Italy), Trento (Italy) and Innsbruck (Austria). Larger cities outside the Alps are Milan, Verona, Turin (Italy), Munich (Germany), Graz, Vienna, Salzburg (Austria), Ljubljana, Maribor, Kranj (Slovenia), Zurich, Geneva (Switzerland), Nice and Lyon (France). Alpine people and culture The population of the region is 14 million spread across eight countries. On the rim of the mountains, on the plateaus and the plains, the economy consists of manufacturing and service jobs, whereas at higher altitudes and in the mountains farming is still essential to the economy. Farming and forestry continue to be mainstays of Alpine culture, industries that provide for export to the cities and maintain the mountain ecology. The Alpine regions are multicultural and linguistically diverse.
Dialects are common, and vary from valley to valley and region to region. In the Slavic Alps alone 19 dialects have been identified. Some of the Romance dialects spoken in the French, Swiss and Italian Alps around the Aosta Valley derive from Arpitan, while the southern part of the western range is related to Occitan; the German dialects derive from Germanic tribal languages. Romansh, spoken by two percent of the population in southeast Switzerland, is an ancient Rhaeto-Romanic language derived from Latin, remnants of ancient Celtic languages and perhaps Etruscan. Much of the Alpine culture is unchanged since the medieval period, when skills that guaranteed survival in the mountain valleys and in the highest villages became mainstays, leading to strong traditions of carpentry, woodcarving, baking and pastry-making, and cheesemaking. Farming has been a traditional occupation for centuries, although it became less dominant in the 20th century with the advent of tourism. Grazing and pasture land are limited because of the steep and rocky topography of the Alps. In mid-June cows are moved to the highest pastures close to the snowline, where they are watched by herdsmen who stay at high altitudes, often living in stone huts or wooden barns, during the summers. Villagers celebrate the day the cows are herded up to the pastures and again when they return in mid-September. The Almabtrieb, Alpabzug, Alpabfahrt, Désalpes ("coming down from the alps") is celebrated by decorating the cows with garlands and enormous cowbells while the farmers dress in traditional costumes. Cheesemaking is an ancient tradition in most Alpine countries. Wheels of cheese such as the Emmental in Switzerland and the Beaufort in Savoy can be very large and heavy. The owners of the cows traditionally receive a share of the cheese from the cheesemakers in proportion to the milk their cows gave during the summer months in the high alps. Haymaking is an important farming activity in mountain villages that has become somewhat mechanized in recent years, although the slopes are so steep that scythes are usually necessary to cut the grass. Hay is normally brought in twice a year, often also on festival days. In the high villages, people live in homes built according to medieval designs that withstand cold winters. The kitchen is separated from the living area (called the stube, the area of the home heated by a stove), and second-floor bedrooms benefit from rising heat. The typical Swiss chalet originated in the Bernese Oberland. Chalets often face south or downhill, and are built of solid wood, with a steeply gabled roof to allow accumulated snow to slide off easily. Stairs leading to upper levels are sometimes built on the outside, and balconies are sometimes enclosed. Food is passed from the kitchen to the stube, where the dining room table is placed. Some meals are communal, such as fondue, where a pot is set in the middle of the table for each person to dip into. Other meals are still served in a traditional manner on carved wooden plates. Furniture has traditionally been elaborately carved, and in many Alpine countries carpentry skills are passed from generation to generation. Roofs are traditionally constructed from Alpine rocks such as pieces of schist, gneiss or slate. Such chalets are typically found in the higher parts of the valleys, as in the Maurienne valley in Savoy, where the amount of snow during the cold months is considerable.
The inclination of the roof cannot exceed 40%, allowing the snow to stay on top, thereby functioning as insulation from the cold. In the lower areas where the forests are widespread, wooden tiles are traditionally used. Commonly made of Norway spruce, they are called "tavaillon". In the German-speaking parts of the Alps (Austria, Bavaria, South Tyrol, Liechtenstein and Switzerland), there is a strong tradition of Alpine folk culture. Old traditions are carefully maintained among inhabitants of Alpine areas, even though this is seldom obvious to the visitor: many people are members of cultural associations where the Alpine folk culture is cultivated. At cultural events, traditional folk costume (in German Tracht) is expected: typically lederhosen for men and dirndls for women. Visitors can get a glimpse of the rich customs of the Alps at public Volksfeste. Even when large events feature only a little folk culture, all participants take part with gusto. Good opportunities to see local people celebrating the traditional culture occur at the many fairs, wine festivals and firefighting festivals which fill weekends in the countryside from spring to autumn. Alpine festivals vary from country to country. Frequently they include music (e.g. the playing of Alpenhorns), dance (e.g. Schuhplattler), sports (e.g. wrestling matches and archery), as well as traditions with pagan roots such as the lighting of fires on Walpurgis Night and Saint John's Eve. Many areas celebrate Fastnacht in the weeks before Lent. Folk costume also continues to be worn for most weddings and festivals. Tourism The Alps are one of the more popular tourist destinations in the world, with many resorts such as Oberstdorf in Bavaria, Saalbach in Austria, Davos in Switzerland, Chamonix in France, and Cortina d'Ampezzo in Italy recording more than a million annual visitors. With over 120 million visitors a year, tourism is integral to the Alpine economy, with much of it coming from winter sports, although summer visitors are also an important component. The tourism industry began in the early 19th century when foreigners visited the Alps, travelled to the bases of the mountains to enjoy the scenery, and stayed at the spa-resorts. Large hotels were built during the Belle Époque; cog-railways, built early in the 20th century, brought tourists to ever-higher elevations, with the Jungfraubahn terminating at the Jungfraujoch, well above the eternal snow-line, after going through a tunnel in the Eiger. During this period winter sports were slowly introduced: in 1882 the first figure skating championship was held in St. Moritz, and downhill skiing became a popular sport with English visitors early in the 20th century, as the first ski-lift was installed in 1908 above Grindelwald. In the first half of the 20th century the Olympic Winter Games were held three times in Alpine venues: the 1924 Winter Olympics in Chamonix, France; the 1928 Winter Olympics in St. Moritz, Switzerland; and the 1936 Winter Olympics in Garmisch-Partenkirchen, Germany. During World War II the Winter Games were cancelled, but since then they have been held in St. Moritz (1948), Cortina d'Ampezzo (1956), Innsbruck, Austria (1964 and 1976), Grenoble, France (1968), Albertville, France (1992), and Torino (2006). In 1930, the Lauberhorn Rennen (Lauberhorn Race) was run for the first time on the Lauberhorn above Wengen; the equally demanding Hahnenkamm was first run in the same year in Kitzbühel, Austria.
Both races continue to be held each January on successive weekends. The Lauberhorn is the longer and more strenuous downhill race, and it poses danger to racers, who reach very high speeds within seconds of leaving the start gate. During the post-World War I period, ski-lifts were built in Swiss and Austrian towns to accommodate winter visitors, but summer tourism continued to be important; by the mid-20th century the popularity of downhill skiing increased greatly as it became more accessible, and in the 1970s several new villages were built in France devoted almost exclusively to skiing, such as Les Menuires. Until this point, Austria and Switzerland had been the traditional and more popular destinations for winter sports, but by the end of the 20th century and into the early 21st century, France, Italy and the Tyrol began to see increases in winter visitors. From 1980 to the present, ski-lifts have been modernized and snow-making machines installed at many resorts, leading to concerns regarding the loss of traditional Alpine culture and questions regarding sustainable development. Probably due to climate change, the number of ski resorts and piste kilometres has declined since 2015. Avalanche/snow-slide 17th century French-Italian border avalanche: in the 17th century about 2500 people were killed by an avalanche in a village on the French-Italian border. 19th century Zermatt avalanche: in the 19th century, 120 homes in a village near Zermatt were destroyed by an avalanche. December 13, 1916 Marmolada-mountain-avalanche 1950–1951 winter-of-terror avalanches February 10, 1970 Val d'Isère avalanche February 9, 1999 Montroc avalanche February 21, 1999 Evolène avalanche February 23, 1999 Galtür avalanche, the deadliest avalanche in the Alps in 40 years. July 2014 Mont-Blanc avalanche January 13, 2016 Les-Deux-Alpes avalanche January 18, 2016 Valfréjus avalanche Transportation The region is serviced by an extensive network of roads used by six million vehicles per year. Train travel is well established in the Alps, with a particularly dense network of track in a country such as Switzerland. Most of Europe's highest railways are located there. In 2007, the new Lötschberg Base Tunnel was opened, which circumvents the Lötschberg Tunnel built a century earlier. The Gotthard Base Tunnel, opened on June 1, 2016, bypasses the 19th-century Gotthard Tunnel and provides the first flat route through the Alps. Some high mountain villages are car-free, either because of inaccessibility or by choice. Wengen and Zermatt (in Switzerland) are accessible only by cable car or cog-railway trains. Avoriaz (in France) is car-free, with other Alpine villages considering becoming car-free zones or limiting the number of cars for reasons of sustainability of the fragile Alpine terrain. The lower regions and larger towns of the Alps are well-served by motorways and main roads, but higher mountain passes and byroads, which are amongst the highest in Europe, can be treacherous even in summer due to steep slopes. Many passes are closed in winter. A number of airports around the Alps (and some within), as well as long-distance rail links from all neighbouring countries, afford large numbers of travellers easy access.
Alps
April is the fourth month of the year in the Gregorian calendar, the fifth in the early Julian, the first of four months to have a length of 30 days, and the second of five months to have a length of less than 31 days. April is commonly associated with the season of spring in parts of the Northern Hemisphere and with autumn in parts of the Southern Hemisphere; it is the seasonal equivalent of October in the opposite hemisphere. History The Romans gave this month the Latin name Aprilis but the derivation of this name is uncertain. The traditional etymology is from the verb aperire, "to open", in allusion to its being the season when trees and flowers begin to "open", which is supported by comparison with the modern Greek use of άνοιξη (ánixi) (opening) for spring. Since some of the Roman months were named in honor of divinities, and as April was sacred to the goddess Venus, her Veneralia being held on the first day, it has been suggested that Aprilis was originally her month Aphrilis, from her equivalent Greek goddess name Aphrodite (Aphros), or from the Etruscan name Apru. Jacob Grimm suggests the name of a hypothetical god or hero, Aper or Aprus. April was the second month of the earliest Roman calendar, before Ianuarius and Februarius were added by King Numa Pompilius about 700 BC. It became the fourth month of the calendar year (the year when twelve months are displayed in order) during the time of the decemvirs about 450 BC, when it also was given 29 days. The 30th day was added during the reform of the calendar undertaken by Julius Caesar in the mid-40s BC, which produced the Julian calendar. The Anglo-Saxons called April ēastre-monaþ. The Venerable Bede says in The Reckoning of Time that this month ēastre is the root of the word Easter. He further states that the month was named after a goddess Eostre whose feast was in that month. It is also attested by Einhard in his work, Vita Karoli Magni. St George's day is the twenty-third of the month; and St Mark's Eve, with its superstition that the ghosts of those who are doomed to die within the year will be seen to pass into the church, falls on the twenty-fourth. In China the symbolic ploughing of the earth by the emperor and princes of the blood took place in their third month, which frequently corresponds to April. In Finnish April is huhtikuu, meaning slash-and-burn moon, when gymnosperms for beat and burn clearing of farmland were felled. In Slovene, the most established traditional name is mali traven, meaning the month when plants start growing. It was first written in 1466 in the Škofja Loka manuscript. The month Aprilis had 30 days; Numa Pompilius made it 29 days long; finally Julius Caesar's calendar reform made it again 30 days long, which was not changed in the calendar revision of Augustus Caesar in 8 BC. In Ancient Rome, the festival of Cerealia was held for seven days from mid-to-late April, but exact dates are uncertain. Feriae Latinae was also held in April, with the date varying.
Other ancient Roman observances include Veneralia (April 1), Megalesia (April 10–16), Fordicidia (April 15), Parilia (April 21), and Vinalia Urbana, Robigalia, and Serapia, which were celebrated on April 25. Floralia was held April 27 during the Republican era, or April 28 on the Julian calendar, and lasted until May 3. However, these dates do not correspond to the modern Gregorian calendar. The Lyrids meteor shower appears on April 16 – April 26 each year, with the peak generally occurring on April 22. The Eta Aquariids meteor shower also appears in April. It is visible from about April 21 to about May 20 each year, with peak activity on or around May 6. The Pi Puppids appear on April 23, but only in years around the parent comet's perihelion date. The Virginids also shower at various dates in April. The "Days of April" (journées d'avril) is a name appropriated in French history to a series of insurrections at Lyons, Paris and elsewhere, against the government of Louis Philippe in 1834, which led to violent repressive measures, and to a famous trial known as the procès d'avril. April symbols April's birthstone is the diamond. The birth flower is typically listed as either the Daisy (Bellis perennis) or the Sweet Pea. The zodiac signs for the month of April are Aries (until April 20) and Taurus (April 20 onwards). April observances This list does not necessarily imply either official status or general observance. Month-long observances In Catholic, Protestant and Orthodox tradition, April is the Month of the Resurrection of the Lord. March and April are the months in which the moveable feast of Easter Sunday is celebrated. National Pet Month (United Kingdom) United States Arab American Heritage Month Autism Awareness Month Cancer Control Month Community College Awareness Month Confederate History Month (Alabama, Florida, Georgia, Louisiana, Mississippi, Texas, Virginia) Donate Life Month Financial Literacy Month Jazz Appreciation Month Mathematics and Statistics Awareness Month National Poetry Month National Poetry Writing Month Occupational Therapy Month National Prevent Child Abuse Month National Volunteer Month Parkinson's Disease Awareness Month Rosacea Awareness Month Sexual Assault Awareness Month United States Food months Fresh Florida Tomato Month National Food Month National Grilled Cheese Month National Pecan Month National Soft Pretzel Month National Soyfoods Month Non-Gregorian observances: 2021 (All Baha'i, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.)
List of observances set by the Bahá'í calendar List of observances set by the Chinese calendar List of observances set by the Hebrew calendar List of observances set by the Islamic calendar List of observances set by the Solar Hijri calendar Movable observances, 2021 dates Youth Homelessness Matters Day National Health Day (Kiribati): April 6 Oral, Head and Neck Cancer Awareness Week (United States): April 13–19 National Park Week (United States): April 18–26 Crime Victims' Rights Week (United States): April 19–25 National Volunteer Week: April 19–25 European Immunization Week: April 20–26 Day of Silence (United States): April 24 Pay It Forward Day: April 28 (International observance) Denim Day: April 29 (International observance) Day of Dialogue (United States) Vaccination Week In The Americas See: List of movable Western Christian observances See: List of movable Eastern Christian observances First Wednesday: April 1 National Day of Hope (United States) First Saturday: April 4 Ulcinj Municipality Day (Ulcinj, Montenegro) First Sunday: April 5 Daylight saving time ends (Australia and New Zealand) Geologists Day (former Soviet Union countries) Kanamara Matsuri (Kawasaki, Japan) Opening Day (United States) First full week: April 5–11 National Library Week (United States) National Library Workers Day (United States) (Tuesday of National Library week, April 4) National Bookmobile Day (Wednesday of National Library week, April 5) National Public Health Week (United States) National Public Safety Telecommunicators Week (United States) Second Wednesday: April 8 International Day of Pink Second Thursday: April 9 National Former Prisoner of War Recognition Day (United States) Second Friday of April: April 10 Fast and Prayer Day (Liberia) Air Force Day (Russia) Kamakura Matsuri at Tsurugaoka Hachiman (Kamakura, Japan), lasts until third Sunday. Second Sunday: April 12 Children's Day (Peru) Week of April 14: April 12–18 Pan-American Week (United States) Third Wednesday: April 15 Administrative Professionals' Day (New Zealand) Third Thursday: April 16 National High Five Day (United States) Third Saturday: April 18 Record Store Day (International observance) Last full week of April: April 19–25 Administrative Professionals Week (Malaysia, North America) World Immunization Week Week of April 23: April 19–25 Canada Book Week (Canada) Week of the New Moon: April 19–25 National Dark-Sky Week (United States) Third Monday: April 20 Patriots' Day (Massachusetts, Maine, United States) Queen's Official Birthday (Saint Helena, Ascension and Tristan da Cunha) Sechseläuten (Zürich, Switzerland) Wednesday of last full week of April: April 22 Administrative Professionals' Day (Hong Kong, North America) First Thursday after April 18: April 23 First Day of Summer (Iceland) Fourth Thursday: April 23 Take Our Daughters And Sons To Work Day (United States) Last Friday: April 24 Arbor Day (United States) Día de la Chupina (Rosario, Argentina) Last Friday in April to first Sunday in May: April 24-May 3 Arbour Week in Ontario Last Saturday: April 25 Children's Day (Colombia) National Rebuilding Day (United States) National Sense of Smell Day (United States) World Tai Chi and Qigong Day Last Sunday: April 26 Flag Day (Åland, Finland) Turkmen Racing Horse Festival (Turkmenistan) April 27 (moves to April 26 if April 27 is on a Sunday): April 27 Koningsdag (Netherlands) Last Monday: April 27 Confederate Memorial Day (Alabama, Georgia (U.S. 
state), and Mississippi, United States) Last Wednesday: April 29 International Noise Awareness Day Fixed observances April 1 April Fools' Day Arbor Day (Tanzania) Civil Service Day (Thailand) Cyprus National Day (Cyprus) Edible Book Day Fossil Fools Day Kha b-Nisan (Assyrian people) National Civil Service Day (Thailand) Odisha Day (Odisha, India) Start of Testicular Cancer Awareness week (United States), April 1–7 Season for Nonviolence January 30 – April 4 April 2 International Children's Book Day (International observance) Malvinas Day (Argentina) National Peanut Butter and Jelly Day (United States) Thai Heritage Conservation Day (Thailand) Unity of Peoples of Russia and Belarus Day (Belarus) World Autism Awareness Day (International observance) April 3 April 4 Children's Day (Hong Kong, Taiwan) Independence Day (Senegal) International Day for Mine Awareness and Assistance in Mine Action Peace Day (Angola) April 5 Children's Day (Palestinian territories) National Caramel Day (United States) Sikmogil (South Korea) April 6 Chakri Day (Thailand) National Beer Day (United Kingdom) New Beer's Eve (United States) Tartan Day (United States & Canada) April 7 Flag Day (Slovenia) Genocide Memorial Day (Rwanda), and its related observance: International Day of Reflection on the 1994 Rwanda Genocide (United Nations) Motherhood and Beauty Day (Armenia) National Beer Day (United States) No Housework Day Sheikh Abeid Amani Karume Day (Tanzania) Women's Day (Mozambique) World Health Day (International observance) April 8 Buddha's Birthday (Japan only, other countries follow different calendars) Feast of the First Day of the Writing of the Book of the Law (Thelema) International Romani Day (International observance) Trading Cards for Grown-ups Day April 9 Anniversary of the German Invasion of Denmark (Denmark) Baghdad Liberation Day (Iraqi Kurdistan) Constitution Day (Kosovo) Day of National Unity (Georgia) Day of the Finnish Language (Finland) Day of Valor or Araw ng Kagitingan (Philippines) Feast of the Second Day of the Writing of the Book of the Law (Thelema) International Banshtai Tsai Day Martyr's Day (Tunisia) National Former Prisoner of War Recognition Day (United States) Remembrance for Haakon Sigurdsson (The Troth) Vimy Ridge Day (Canada) April 10 Day of the Builder (Azerbaijan) Feast of the Third Day of the Writing of the Book of the Law (Thelema) Siblings Day (International observance) April 11 Juan Santamaría Day, anniversary of his death in the Second Battle of Rivas. 
(Costa Rica) International Louie Louie Day National Cheese Fondue Day (United States) World Parkinson's Day April 12 Children's Day (Bolivia and Haiti) Commemoration of first human in space by Yuri Gagarin: Cosmonautics Day (Russia) International Day of Human Space Flight Yuri's Night (International observance) Halifax Day (North Carolina) National Grilled Cheese Sandwich Day (United States) National Redemption Day (Liberia) Walk on Your Wild Side Day April 13 Jefferson's Birthday (United States) Katyn Memorial Day (Poland) Teacher's Day (Ecuador) First day of Thingyan (Myanmar) (April 13–16) Unfairly Prosecuted Persons Day (Slovakia) April 14 ʔabusibaree (Okinawa Islands, Japan) Ambedkar Jayanti (India) Black Day (South Korea) Commemoration of Anfal Genocide Against the Kurds (Iraqi Kurdistan) Dhivehi Language Day (Maldives) Day of Mologa (Yaroslavl Oblast, Russia) Day of the Georgian language (Georgia (country)) Season of Emancipation (April 14 to August 23) (Barbados) N'Ko Alphabet Day (Mande speakers) Pohela Boishakh (Bangladesh) Pana Sankranti (Odisha, India) Puthandu (Tamils) (India, Malaysia, Singapore, Sri Lanka) Second day of Songkran (Thailand) (Thailand) Pan American Day (several countries in the Americas) The first day of Takayama Spring Festival (Takayama, Gifu, Japan) Vaisakh (Punjab (region)), (India and Pakistan) Youth Day (Angola) April 15 Day of the Sun (North Korea). Hillsborough Disaster Memorial (Liverpool, England) Jackie Robinson Day (United States) National Banana Day (United States) Pohela Boishakh (West Bengal, India) (Note: celebrated on April 14 in Bangladesh) Last day of Songkran (Thailand) (Thailand) Tax Day, the official deadline for filing an individual tax return (or requesting an extension). (United States, Philippines) Universal Day of Culture World Art Day April 16 Birthday of José de Diego (Puerto Rico, United States) Birthday of Queen Margrethe II (Denmark) Emancipation Day (Washington, D.C., United States) Foursquare Day (International observance) Memorial Day for the Victims of the Holocaust (Hungary) National Healthcare Decisions Day (United States) Remembrance of Chemical Attack on Balisan and Sheikh Wasan (Iraqi Kurdistan) World Voice Day April 17 Blah Blah Blah Day Evacuation Day (Syria) FAO Day (Iraq) Flag Day (American Samoa) Malbec World Day National Cheeseball Day (United States) National Espresso Day (Italy) Women's Day (Gabon) World Hemophilia Day April 18 Anniversary of the Victory over the Teutonic Knights in the Battle of the Ice, 1242 (Russia) Army Day (Iran) Coma Patients' Day (Poland) Friend's Day (Brazil) Independence Day (Zimbabwe) International Day For Monuments and Sites Invention Day (Japan) Pet Owner's Independence Day April 19 Army Day (Brazil) Beginning of the Independence Movement (Venezuela) Bicycle Day Dutch-American Friendship Day (United States) Holocaust Remembrance Day (Poland) Indian Day (Brazil) King Mswati III's birthday (Swaziland) Landing of the 33 Patriots Day (Uruguay) National Garlic Day (United States) National Rice Ball Day (United States) Primrose Day (United Kingdom) April 20 420 (cannabis culture) (International) UN Chinese Language Day (United Nations) April 21 A&M Day (Texas A&M University) Civil Service Day (India) Day of Local Self-Government (Russia) Grounation Day (Rastafari movement) Heroic Defense of Veracruz (Mexico) Kang Pan-sok's Birthday (North Korea) Kartini Day (Indonesia) Local Self Government Day (Russia) National Tree Planting Day (Kenya) San Jacinto Day (Texas) Queen's Official Birthday 
(Falkland Islands) Tiradentes' Day (Brazil) Vietnam Book Day (Vietnam) April 22 Discovery Day (Brazil) Earth Day (International observance) and its related observance: International Mother Earth Day Holocaust Remembrance Day (Serbia) National Jelly Bean Day (United States) April 23 Castile and León Day (Castile and León, Spain) German Beer Day (Germany) Independence Day (Conch Republic, Key West, Florida) International Pixel-Stained Technopeasant Day Khongjom Day (Manipur, India) National Sovereignty and Children's Day (Turkey and Northern Cyprus) Navy Day (China) St George's Day (England) and its related observances: Canada Book Day (Canada) La Diada de Sant Jordi (Catalonia, Spain) World Book Day UN English Language Day (United Nations) April 24 Armenian Genocide Remembrance Day (Armenia) Concord Day (Niger) Children's Day (Zambia) Democracy Day (Nepal) Fashion Revolution Day Flag Day (Ireland) International Sculpture Day Kapyong Day (Australia) Labour Safety Day (Bangladesh) National Panchayati Raj Day (India) National Pigs in a Blanket Day (United States) Republic Day (The Gambia) St Mark's Eve (Western Christianity) World Day for Laboratory Animals April 25 Anniversary of the First Cabinet of Kurdish Government (Iraqi Kurdistan) Anzac Day (Australia, New Zealand) Arbor Day (Germany) DNA Day Feast of Saint Mark (Western Christianity) Flag Day (Faroe Islands) Flag Day (Swaziland) Freedom Day (Portugal) Liberation Day (Italy) Major Rogation (Western Christianity) Military Foundation Day (North Korea) National Zucchini Bread Day (United States) Parental Alienation Awareness Day Red Hat Society Day Sinai Liberation Day (Egypt) World Malaria Day April 26 Chernobyl disaster related observances: Memorial Day of Radiation Accidents and Catastrophes (Russia) Day of Remembrance of the Chernobyl tragedy (Belarus) Confederate Memorial Day (Florida, United States) Hug A Friend Day Hug an Australian Day Lesbian Visibility Day National Pretzel Day (United States) Old Permic Alphabet Day Union Day (Tanzania) World Intellectual Property Day April 27 Day of Russian Parliamentarism (Russia) Day of the Uprising Against the Occupying Forces (Slovenia) Flag Day (Moldova) Freedom Day (South Africa) UnFreedom Day Independence Day (Sierra Leone) Independence Day (Togo) National Day (Mayotte) National Day (Sierra Leone) National Prime Rib Day (United States) National Veterans' Day (Finland) April 28 Lawyers' Day (Orissa, India) Mujahideen Victory Day (Afghanistan) National Day (Sardinia, Italy) National Heroes Day (Barbados) Restoration of Sovereignty Day (Japan) Workers' Memorial Day and World Day for Safety and Health at Work (international) National Day of Mourning (Canada) April 29 Day of Remembrance for all Victims of Chemical Warfare (United Nations) International Dance Day (UNESCO) Princess Bedike's Birthday (Denmark) National Shrimp Scampi Day (United States) Shōwa Day, traditionally the start of the Golden Week holiday period, which is April 29 and May 3–5. (Japan) April 30 Armed Forces Day (Georgia (country)) Birthday of the King (Sweden) Camarón Day (French Foreign Legion) Children's Day (Mexico) Consumer Protection Day (Thailand) Honesty Day (United States) International Jazz Day (UNESCO) Martyr's Day (Pakistan) May Eve, the eve of the first day of summer in the Northern hemisphere (see May 1): Beltane begins at sunset in the Northern hemisphere, Samhain begins at sunset in the Southern hemisphere. 
(Neo-Druidic Wheel of the Year) Carodejnice (Czech Republic and Slovakia) Walpurgis Night (Central and Northern Europe) National Persian Gulf Day (Iran) Reunification Day (Vietnam) Russian State Fire Service Day (Russia) Tax Day (Canada) Teachers' Day (Paraguay) See also Germanic calendar List of historical anniversaries Sinking of the RMS Titanic
April
August is the eighth month of the year in the Julian and Gregorian calendars, and the fifth of seven months to have a length of 31 days. Its zodiac sign is Leo. The month was originally named Sextilis in Latin because it was the sixth month in the original ten-month Roman calendar under Romulus in 753 BC, with March being the first month of the year. About 700 BC, it became the eighth month when January and February were added to the year before March by King Numa Pompilius, who also gave it 29 days. Julius Caesar added two days when he created the Julian calendar in 46 BC (708 AUC), giving it its modern length of 31 days. In 8 BC, it was renamed in honor of Emperor Augustus. According to a Senatus consultum quoted by Macrobius, he chose this month because it was the time of several of his great triumphs, including the conquest of Egypt. Commonly repeated lore has it that August has 31 days because Augustus wanted his month to match the length of Julius Caesar's July, but this is an invention of the 13th century scholar Johannes de Sacrobosco. Sextilis in fact had 31 days before it was renamed, and it was not chosen for its length. In the Southern Hemisphere, August is the seasonal equivalent of February in the Northern Hemisphere. In the Northern Hemisphere, August falls in the season of summer. In the Southern Hemisphere, the month falls during the season of winter. In many European countries, August is the holiday month for most workers. Numerous religious holidays occurred during August in ancient Rome. Certain meteor showers take place in August. The Kappa Cygnids take place in August, with the dates varying each year. The Alpha Capricornids meteor shower takes place as early as July 10 and ends at around August 10, and the Southern Delta Aquariids take place from mid-July to mid-August, with the peak usually around July 28–29. The Perseids, a major meteor shower, typically take place between July 17 and August 24, with the days of the peak varying yearly. The star cluster Messier 30 is best observed around August. Among the aboriginal inhabitants of the Canary Islands, especially the Guanches of Tenerife, the month of August received the name Beñesmer or Beñesmen, which was also the name of the harvest festival held during this month. August symbols August's birthstones are the peridot, sardonyx, and spinel. Its birth flower is the gladiolus or poppy, meaning beauty, strength of character, love, marriage and family. The Western zodiac signs for the month of August are Leo (until August 22) and Virgo (from August 23 onwards). Observances This list does not necessarily imply either official status or general observance. Non-Gregorian observances: 2020 dates (All Baha'i, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.)
List of observances set by the Bahá'í calendar List of observances set by the Chinese calendar List of observances set by the Hebrew calendar List of observances set by the Islamic calendar List of observances set by the Solar Hijri calendar Month-long observances American Adventures Month (celebrating vacationing in the Americas) Children's Eye Health and Safety Month Digestive Tract Paralysis (DTP) Month Get Ready for Kindergarten Month Happiness Happens Month Month of Philippine Languages or Buwan ng Wika (Philippines) Neurosurgery Outreach Month Psoriasis Awareness Month Spinal Muscular Atrophy Awareness Month What Will Be Your Legacy Month United States month-long observances National Black Business Month National Children's Vision and Learning Month National Immunization Awareness Month National Princess Peach Month National Water Quality Month National Win with Civility Month Food Months in the United States National Catfish Month National Dippin' Dots Month Family Meals Month National Goat Cheese Month. National Panini Month Peach Month Sandwich Month Moveable Gregorian observances National Science Week (Australia) See also Movable Western Christian observances See also Movable Eastern Christian observances Second to last Sunday in July and the following two weeks Construction Holiday (Quebec) 1st Saturday Food Day (Canada) Mead Day (United States) National Mustard Day (United States) 1st Sunday Air Force Day (Ukraine) American Family Day (Arizona, United States) Children's Day (Uruguay) Friendship Day (United States) International Forgiveness Day Railway Workers' Day (Russia) First Full week of August National Farmer's Market Week 1st Monday August Public Holiday (Ireland) Children's Day (Tuvalu) Civic Holiday (Canada) British Columbia Day (British Columbia, Canada) Natal Day (Nova Scotia, Canada) New Brunswick Day (New Brunswick, Canada) Saskatchewan Day (Saskatchewan, Canada Terry Fox Day (Manitoba, Canada) Commerce Day (Iceland) Emancipation Day (Anguilla, Antigua, The Bahamas, British Virgin Islands, Dominica, Grenada, Saint Kitts and Nevis) Farmer's Day (Zambia) Kadooment Day (Barbados) Labor Day (Samoa) National Day (Jamaica) Picnic Day (Northern Territory, Australia) Somers' Day (Bermuda) Youth Day (Kiribati) 1st Tuesday National Night Out (United States) 1st Friday International Beer Day 2nd Saturday Sports Day (Russia) Sunday on or closest to August 9 National Peacekeepers' Day (Canada) 2nd Sunday Children's Day (Argentina, Chile, Uruguay) Father's Day (Brazil, Samoa) Melon Day (Turkmenistan) Navy Day (Bulgaria) National Day (Singapore) 2nd Monday Heroes' Day (Zimbabwe) Victory Day (Hawaii and Rhode Island, United States) 2nd Tuesday Defence Forces Day (Zimbabwe) 3rd Saturday National Honey Bee Day (United States) Independence Day (India) 3rd Sunday Children's Day (Argentina, Peru) Grandparents Day (Hong Kong) 3rd Monday Discovery Day (Yukon, Canada) Day of Hearts (Haarlem and Amsterdam, Netherlands) National Mourning Day (Bangladesh) 3rd Friday Hawaii Admission Day (Hawaii, United States) Last Thursday National Burger Day (United Kingdom) Last Sunday Coal Miner's Day (some former Soviet Union countries) National Grandparents Day (Taiwan) Last Monday Father's Day (South Sudan) National Heroes' Day (Philippines) Liberation Day (Hong Kong) Late Summer Bank Holiday (England, Northern Ireland and Wales) Fixed Gregorian observances Season of Emancipation (Barbados) (April 14 to August 23) International Clown Week (August 1–7) World Breastfeeding Week (August 1–7) August 1 
Armed Forces Day (China) Armed Forces Day (Lebanon) Azerbaijani Language and Alphabet Day (Azerbaijan) Emancipation Day (Barbados, Guyana, Jamaica, Saint Vincent and the Grenadines, St. Lucia, Trinidad and Tobago, Turks and Caicos Islands) Imbolc (Neopaganism, Southern Hemisphere only) Lammas (England, Scotland, Neopaganism, Northern Hemisphere only) Lughnasadh (Gaels, Ireland, Scotland, Neopaganism, Northern Hemisphere only) Minden Day (United Kingdom) National Day (Benin) National Milkshake Day (United States) Official Birthday and Coronation Day of the King of Tonga (Tonga) Pachamama Raymi (Quechua people in Ecuador and Peru) Parents' Day (Democratic Republic of the Congo) Procession of the Cross and the beginning of Dormition Fast (Eastern Orthodoxy) Statehood Day (Colorado) Swiss National Day (Switzerland) Victory Day (Cambodia, Laos, Vietnam) World Scout Scarf Day Yorkshire Day (Yorkshire, England) August 2 Airmobile Forces Day (Ukraine) Day of Azerbaijani cinema (Azerbaijan) Our Lady of the Angels Day (Costa Rica) Paratroopers Day (Russia) Republic Day (North Macedonia) August 3 Anniversary of the Killing of Pidjiguiti (Guinea-Bissau) Armed Forces Day (Equatorial Guinea) Esther Day (United States) Flag Day (Venezuela) Independence Day (Niger) Arbor Day (Niger) National Guard Day (Venezuela) National Watermelon Day (United States) National White Wine Day (United States) August 4 Coast Guard Day (United States) Constitution Day (Cook Islands) Matica slovenská Day (Slovakia) Revolution Day (Burkina Faso) August 5 Dedication of the Basilica of St Mary Major (Catholic Church) Independence Day (Burkina Faso) National Underwear Day (United States) Victory and Homeland Thanksgiving Day and the Day of Croatian defenders (Croatia) August 6 Feast of the Transfiguration Sheikh Zayed bin Sultan Al Nahyan's Accession Day. 
(United Arab Emirates) Hiroshima Peace Memorial Ceremony (Hiroshima, Japan) Independence Day (Bolivia) Independence Day (Jamaica) Russian Railway Troops Day (Russia) August 7 Assyrian Martyrs Day (Assyrian community) Battle of Boyacá Day (Colombia) Emancipation Day (Saint Kitts and Nevis) Independence Day (Ivory Coast) Republic Day (Ivory Coast) Youth Day (Kiribati) August 8 Ceasefire Day (Iraqi Kurdistan) Father's Day (Taiwan) Happiness Happens Day (International observance) International Cat Day Namesday of Queen Silvia of Sweden, (Sweden) Nane Nane Day (Tanzania) Signal Troops Day (Ukraine) August 9 Battle of Gangut Day (Russia) International Day of the World's Indigenous People (United Nations) National Day (Singapore) National Women's Day (South Africa) Remembrance for Radbod, King of the Frisians (The Troth) August 10 Argentine Air Force Day (Argentina) Constitution Day (Anguilla) Declaration of Independence of Quito (Ecuador) International Biodiesel Day National S'more Day (United States) August 11 Flag Day (Pakistan) Independence Day (Chad) Mountain Day (Japan) August 12 Glorious Twelfth (United Kingdom) HM the Queen's Birthday and National Mother's Day (Thailand) International Youth Day (United Nations) Russian Railway Troops Day (Russia) Sea Org Day (Scientology) World Elephant Day August 13 Independence Day (Central African Republic) International Lefthanders Day National Filet Mignon Day (United States) Women's Day (Tunisia) August 14 Anniversary Day (Tristan da Cunha) Commemoration of Wadi al-Dahab (Morocco) Day of the Defenders of the Fatherland (Abkhazia) Engineer's Day (Dominican Republic) Falklands Day (Falkland Islands) Independence Day (Pakistan) National Creamsicle Day (United States) Pramuka Day (Indonesia) August 15 Feast Day of the Assumption of Mary (Catholic holy days of obligation, a public holiday in many countries. Ferragosto (Italy) Māras (Latvia) Mother's Day (Antwerp and Costa Rica) National Acadian Day (Acadians) Virgin of Candelaria, patron of the Canary Islands. (Tenerife, Spain) Feast of the Dormition of the Theotokos (Eastern Orthodox, Oriental Orthodox and Eastern Catholic Churches) Navy Day (Romania) Armed Forces Day (Poland)**The first day of Flooding of the Nile, or Wafaa El-Nil (Egypt and Coptic Church) The main day of Bon Festival (Japan), and its related observances: Awa Dance Festival (Tokushima Prefecture) Constitution Day (Equatorial Guinea) End-of-war Memorial Day, when the National Memorial Service for War Dead is held. 
(Japan) Founding of Asunción (Paraguay) Independence Day (Korea) Gwangbokjeol (South Korea) Jogukhaebangui nal, "Fatherland Liberation Day" (North Korea) Independence Day (India) Independence Day (Republic of the Congo) National Day (Liechtenstein) National Mourning Day (Bangladesh) Victory over Japan Day (United Kingdom) National Lemon Meringue Pie Day (United States) August 16 Bennington Battle Day (Vermont, United States) Children's Day (Paraguay) Gozan no Okuribi (Kyoto, Japan) The first day of the Independence Days (Gabon) National Airborne Day (United States) National Rum Day (United States) Restoration Day (Dominican Republic) August 17 The Birthday of Marcus Garvey (Rastafari) Engineer's Day (Colombia) Flag Day (Bolivia) Independence Day (Indonesia) Independence Days (Gabon) National Vanilla Custard Day (United States) Prekmurje Union Day (Slovenia) San Martin Day (Argentina) August 18 Arbor Day (Pakistan) Armed Forces Day (North Macedonia) Bad Poetry Day Birthday of Virginia Dare (Roanoke Island) Constitution Day (Indonesia) Long Tan Day (Australia) National Science Day (Thailand) August 19 Feast of the Transfiguration (Julian calendar), and its related observances: Buhe (Ethiopian Orthodox Tewahedo Church) Saviour's Transfiguration, popularly known as the "Apples Feast" (Russian Orthodox Church and Georgian Orthodox Church) Afghan Independence Day (Afghanistan) August Revolution Commemoration Day (Vietnam) Birthday of Crown Princess Mette-Marit (Norway) Manuel Luis Quezón Day (Quezon City and other places in The Philippines named after Manuel L. Quezon) National Aviation Day (United States) National Potato Day (United States) World Humanitarian Day August 20 Indian Akshay Urja Day (India) Restoration of Independence Day (Estonia) Revolution of the King and People (Morocco) Saint Stephen's Day (Hungary) World Mosquito Day August 21 Ninoy Aquino Day (Philippines) Youth Day/King Mohammed VI's Birthday (Morocco) August 22 Feast of the Coronation of Mary Flag Day (Russia) Madras Day (Chennai and Tamil Nadu, India) National Eat a Peach Day (United States) National Pecan Torte Day (United States) Southern Hemisphere Hoodie-Hoo Day (Chase's Calendar of Events, Southern Hemisphere) August 23 Battle of Kursk Day (Russia) Day of the National Flag (Ukraine) European Day of Remembrance for Victims of Stalinism and Nazism or Black Ribbon Day (European Union and other countries), and related observances: Liberation from Fascist Occupation Day (Romania) International Day for the Remembrance of the Slave Trade and its Abolition Umhlanga Day (Swaziland) August 24 Flag Day (Liberia) Independence Day of Ukraine International Strange Music Day National Waffle Day (United States) Nostalgia Night (Uruguay) Willka Raymi (Cusco, Peru) August 25 Day of Songun (North Korea) Independence Day (Uruguay) Liberation Day (France) National Banana Split Day (United States) National Whiskey Sour Day (United States) Soldier's Day (Brazil) August 26 Herero Day (Namibia) Heroes' Day (Namibia) Repentance Day (Papua New Guinea) Women's Equality Day (United States) August 27 Film and Movies Day (Russia) Independence Day (Republic of Moldova) Lyndon Baines Johnson Day (Texas, United States) National Banana Lovers Day (United States) National Pots De Creme Day (United States) August 28 Assumption of Mary (Eastern Orthodox Church (Public holiday in North Macedonia, Serbia, and Georgia (country)) Crackers of the Keyboard Day Race Your Mouse Around the Icons Day National Cherry Turnover Day (United States) August 29 
International Day against Nuclear Tests Miners' Day (Ukraine) More Herbs, Less Salt Day National Lemon Juice Day (United States) National Chop Suey Day (United States) National Sports Day (India) Slovak National Uprising Anniversary (Slovakia) Telugu Language Day (India) August 30 Constitution Day (Kazakhstan) Constitution Day (Turks and Caicos Islands) Independence Day (Tatarstan, Russia, unrecognized) International Day of the Disappeared (International) Popular Consultation Day (East Timor) Saint Rose of Lima's Day (Peru) Victory Day (Turkey) August 31 Baloch-Pakhtun Unity Day (Balochs and Pashtuns, International observance) Day of Solidarity and Freedom (Poland) Independence Day (Federation of Malaya, Malaysia) Independence Day (Kyrgyzstan) Independence Day (Trinidad and Tobago) Love Litigating Lawyers Day National Language Day (Moldova) National Trail Mix Day (United States) North Borneo Self-government Day (Sabah, Borneo)
August
In chemistry, an alcohol is a type of organic compound that carries at least one hydroxyl functional group (−OH) bound to a saturated carbon atom. The term alcohol originally referred to the primary alcohol ethanol (ethyl alcohol), which is used as a drug and is the main alcohol present in alcoholic drinks. An important class of alcohols, of which methanol and ethanol are the simplest members, includes all compounds for which the general formula is CnH2n+1OH. Simple monoalcohols that are the subject of this article include primary (RCH2OH), secondary (R2CHOH) and tertiary (R3COH) alcohols. The suffix -ol appears in the IUPAC chemical name of all substances where the hydroxyl group is the functional group with the highest priority. When a higher priority group is present in the compound, the prefix hydroxy- is used in its IUPAC name. The suffix -ol in non-IUPAC names (such as paracetamol or cholesterol) also typically indicates that the substance is an alcohol. However, many substances that contain hydroxyl functional groups (particularly sugars, such as glucose and sucrose) have names which include neither the suffix -ol, nor the prefix hydroxy-. History The inflammable nature of the exhalations of wine was already known to ancient natural philosophers such as Aristotle (384–322 BCE), Theophrastus (c. 371–287 BCE), and Pliny the Elder (23/24–79 CE). However, this did not immediately lead to the isolation of alcohol, despite the development of more advanced distillation techniques in second- and third-century Roman Egypt. An important recognition, first found in one of the writings attributed to Jābir ibn Ḥayyān (ninth century CE), was that by adding salt to boiling wine, which increases the wine's relative volatility, the flammability of the resulting vapors may be enhanced. The distillation of wine is attested in Arabic works attributed to al-Kindī (c. 801–873 CE) and to al-Fārābī (c. 872–950), and in the 28th book of al-Zahrāwī's (Latin: Abulcasis, 936–1013) Kitāb al-Taṣrīf (later translated into Latin as Liber servatoris). In the twelfth century, recipes for the production of aqua ardens ("burning water", i.e., alcohol) by distilling wine with salt started to appear in a number of Latin works, and by the end of the thirteenth century it had become a widely known substance among Western European chemists. The works of Taddeo Alderotti (1223–1296) describe a method for concentrating alcohol involving repeated fractional distillation through a water-cooled still, by which an alcohol purity of 90% could be obtained. The medicinal properties of ethanol were studied by Arnald of Villanova (1240–1311 CE) and John of Rupescissa (c. 1310–1366), the latter of whom regarded it as a life-preserving substance able to prevent all diseases (the aqua vitae or "water of life", also called by John the quintessence of wine). Nomenclature Etymology The word "alcohol" is from the Arabic kohl, a powder used as an eyeliner. Al- is the Arabic definite article, equivalent to "the" in English. Alcohol was originally used for the very fine powder produced by the sublimation of the natural mineral stibnite to form antimony trisulfide, Sb2S3. It was considered to be the essence or "spirit" of this mineral. It was used as an antiseptic, eyeliner, and cosmetic. The meaning of alcohol was extended to distilled substances in general, and then narrowed to ethanol, when "spirits" was a synonym for hard liquor.
Bartholomew Traheron, in his 1543 translation of John of Vigo, introduces the word as a term used by "barbarous" authors for "fine powder." Vigo wrote: "the barbarous auctours use alcohol, or (as I fynde it sometymes wryten) alcofoll, for moost fine poudre." The 1657 Lexicon Chymicum, by William Johnson, glosses the word as "antimonium sive stibium." By extension, the word came to refer to any fluid obtained by distillation, including "alcohol of wine," the distilled essence of wine. Libavius in Alchymia (1594) refers to "vini alcohol vel vinum alcalisatum". Johnson (1657) glosses alcohol vini as "quando omnis superfluitas vini a vino separatur, ita ut accensum ardeat donec totum consumatur, nihilque fæcum aut phlegmatis in fundo remaneat." The word's meaning became restricted to "spirit of wine" (the chemical known today as ethanol) in the 18th century and was extended to the class of substances so-called as "alcohols" in modern chemistry after 1850. The term ethanol was invented in 1892, blending "ethane" with the "-ol" ending of "alcohol", which was generalized as a libfix. Systematic names IUPAC nomenclature is used in scientific publications and where precise identification of the substance is important, especially in cases where the relative complexity of the molecule does not make such a systematic name unwieldy. In naming simple alcohols, the name of the alkane chain loses the terminal e and adds the suffix -ol, e.g., as in "ethanol" from the alkane chain name "ethane". When necessary, the position of the hydroxyl group is indicated by a number between the alkane name and the -ol: propan-1-ol for CH3CH2CH2OH, propan-2-ol for (CH3)2CHOH. If a higher priority group is present (such as an aldehyde, ketone, or carboxylic acid), then the prefix hydroxy- is used, e.g., as in 1-hydroxy-2-propanone (CH3C(O)CH2OH). In cases where the hydroxy group is bonded to an sp2 carbon on an aromatic ring, the molecule is classified separately as a phenol and is named using the IUPAC rules for naming phenols. Phenols have distinct properties and are not classified as alcohols. Common names In other less formal contexts, an alcohol is often referred to by the name of the corresponding alkyl group followed by the word "alcohol", e.g., methyl alcohol, ethyl alcohol. Propyl alcohol may be n-propyl alcohol or isopropyl alcohol, depending on whether the hydroxyl group is bonded to the end or middle carbon on the straight propane chain. As described under systematic naming, if another group on the molecule takes priority, the alcohol moiety is often indicated using the "hydroxy-" prefix. Alcohols are then classified into primary, secondary (sec-, s-), and tertiary (tert-, t-), based upon the number of carbon atoms connected to the carbon atom that bears the hydroxyl functional group. (The respective numeric shorthands 1°, 2°, and 3° are also sometimes used in informal settings.) The primary alcohols have the general formula RCH2OH. The simplest primary alcohol is methanol (CH3OH), for which R = H, and the next is ethanol, for which R = CH3, the methyl group. Secondary alcohols are those of the form RR'CHOH, the simplest of which is 2-propanol ((CH3)2CHOH). For the tertiary alcohols the general form is RR'R"COH. The simplest example is tert-butanol (2-methylpropan-2-ol), for which each of R, R', and R" is CH3. In these shorthands, R, R', and R" represent substituents, alkyl or other attached, generally organic, groups. In archaic nomenclature, alcohols can be named as derivatives of methanol using "-carbinol" as the ending. For instance, (CH3)3COH can be named trimethylcarbinol.
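As a worked illustration of the naming and classification rules above (an added example, not part of the original article; the molecule is chosen arbitrarily), consider CH3CH(OH)CH2CH3. The longest chain has four carbons, so the parent name is butane; the final -e is replaced by -ol, and the hydroxyl position is numbered from the end that gives it the lowest locant. Because the carbon bearing the OH group is bonded to two other carbon atoms, it is a secondary (2°) alcohol:

\[
\mathrm{CH_3CH(OH)CH_2CH_3}:\quad \text{butan-2-ol, a secondary alcohol of the form } \mathrm{RR'CHOH} \text{ with } \mathrm{R=CH_3},\ \mathrm{R'=C_2H_5}.
\]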
Applications Alcohols have a long history of myriad uses. For the simple mono-alcohols, which are the focus of this article, the following are the most important industrial alcohols: methanol, mainly for the production of formaldehyde and as a fuel additive; ethanol, mainly for alcoholic beverages, as a fuel additive and as a solvent; 1-propanol, 1-butanol, and isobutyl alcohol, for use as solvents and precursors to solvents; C6–C11 alcohols, used for plasticizers, e.g. in polyvinylchloride; and fatty alcohols (C12–C18), precursors to detergents. Methanol is the most common industrial alcohol, with about 12 million tons/y produced in 1980. The combined capacity of the other alcohols is about the same, distributed roughly equally. Toxicity Simple alcohols have low acute toxicity, and doses of several milliliters are tolerated. For pentanols, hexanols, octanols and longer alcohols, LD50 values range from 2 to 5 g/kg (rats, oral). Methanol and ethanol are less acutely toxic. All alcohols are mild skin irritants. The metabolism of methanol (and ethylene glycol) is affected by the presence of ethanol, which has a higher affinity for liver alcohol dehydrogenase. In this way methanol will be excreted intact in urine. Physical properties In general, the hydroxyl group makes alcohols polar. Those groups can form hydrogen bonds to one another and to most other compounds. Owing to the presence of the polar OH group, alcohols are more water-soluble than simple hydrocarbons. Methanol, ethanol, and propanol are miscible with water. Butanol, with a four-carbon chain, is moderately soluble. Because of hydrogen bonding, alcohols tend to have higher boiling points than comparable hydrocarbons and ethers. The boiling point of the alcohol ethanol is 78.29 °C, compared to 69 °C for the hydrocarbon hexane and 34.6 °C for diethyl ether. Occurrence in nature Simple alcohols are found widely in nature. Ethanol is the most prominent because it is the product of fermentation, a major energy-producing pathway. Other simple alcohols, chiefly fusel alcohols, are formed in only trace amounts. More complex alcohols, however, are pervasive, as manifested in sugars, some amino acids, and fatty acids. Production Ziegler and oxo processes In the Ziegler process, linear alcohols are produced from ethylene and triethylaluminium followed by oxidation and hydrolysis; an idealized synthesis of 1-octanol is sketched below. The process generates a range of alcohols that are separated by distillation. Many higher alcohols are produced by hydroformylation of alkenes followed by hydrogenation. When applied to a terminal alkene, as is common, one typically obtains a linear alcohol. Such processes give fatty alcohols, which are useful for detergents. Hydration reactions Some low molecular weight alcohols of industrial importance are produced by the addition of water to alkenes. Ethanol, isopropanol, 2-butanol, and tert-butanol are produced by this general method. Two implementations are employed, the direct and indirect methods. The direct method avoids the formation of stable intermediates, typically using acid catalysts. In the indirect method, the alkene is converted to the sulfate ester, which is subsequently hydrolyzed. The direct hydration method uses ethylene (ethylene hydration) or other alkenes obtained from the cracking of fractions of distilled crude oil. Hydration is also used industrially to produce the diol ethylene glycol from ethylene oxide.
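The reaction schemes referred to in this section can be summarized roughly as follows. These idealized equations are added here for illustration and are reconstructed from standard descriptions of the processes rather than reproduced from the original article, so the stoichiometry and the catalysts shown should be read as a sketch:

\begin{align}
\text{Ziegler chain growth:}\quad &\mathrm{Al(C_2H_5)_3 + 9\,C_2H_4 \longrightarrow Al(C_8H_{17})_3}\\
\text{oxidation and hydrolysis:}\quad &\mathrm{Al(C_8H_{17})_3 + \tfrac{3}{2}\,O_2 + 3\,H_2O \longrightarrow 3\,C_8H_{17}OH + Al(OH)_3}\\
\text{hydroformylation:}\quad &\mathrm{RCH{=}CH_2 + CO + H_2 \longrightarrow RCH_2CH_2CHO}\\
\text{hydrogenation:}\quad &\mathrm{RCH_2CH_2CHO + H_2 \longrightarrow RCH_2CH_2CH_2OH}\\
\text{direct hydration of ethylene:}\quad &\mathrm{CH_2{=}CH_2 + H_2O \xrightarrow{\text{acid catalyst}} CH_3CH_2OH}
\end{align}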
Biological routes Ethanol is obtained by fermentation of glucose (produced from sugar or from the hydrolysis of starch) in the presence of yeast, at temperatures of less than 37 °C. For instance, such a process might proceed by the conversion of sucrose by the enzyme invertase into glucose and fructose, then the conversion of glucose by the enzyme complex zymase into ethanol and carbon dioxide. Several species of the benign bacteria in the intestine use fermentation as a form of anaerobic metabolism. This metabolic reaction produces ethanol as a waste product. Thus, human bodies contain some quantity of alcohol endogenously produced by these bacteria. In rare cases, this can be sufficient to cause "auto-brewery syndrome" in which intoxicating quantities of alcohol are produced. Like ethanol, butanol can be produced by fermentation processes. Saccharomyces yeast are known to produce these higher alcohols at temperatures above . The bacterium Clostridium acetobutylicum can feed on cellulose to produce butanol on an industrial scale. Substitution Primary alkyl halides react with aqueous NaOH or KOH mainly to give primary alcohols in nucleophilic aliphatic substitution. (Secondary and especially tertiary alkyl halides will give the elimination (alkene) product instead). Grignard reagents react with carbonyl groups to give secondary and tertiary alcohols. Related reactions are the Barbier reaction and the Nozaki-Hiyama reaction. Reduction Aldehydes or ketones are reduced with sodium borohydride or lithium aluminium hydride (after an acidic workup). Another reduction, by aluminium isopropylates, is the Meerwein-Ponndorf-Verley reduction. Noyori asymmetric hydrogenation is the asymmetric reduction of β-keto-esters. Hydrolysis Alkenes engage in an acid-catalysed hydration reaction, using concentrated sulfuric acid as a catalyst, that usually gives secondary or tertiary alcohols. The hydroboration-oxidation and oxymercuration-reduction of alkenes are more reliable in organic synthesis. Alkenes react with NBS and water in the halohydrin formation reaction. Amines can be converted to diazonium salts, which are then hydrolyzed. Secondary alcohols, for example, can be obtained either by reduction of a ketone or by hydration of an alkene. Reactions Deprotonation With aqueous pKa values of around 16–19, alcohols are, in general, slightly weaker acids than water. With strong bases such as sodium hydride or sodium metal, they form salts called alkoxides, with the general formula RO− M+. The acidity of alcohols is strongly affected by solvation. In the gas phase, alcohols are more acidic than in water. In DMSO, alcohols (and water) have a pKa of around 29–32. As a consequence, alkoxides (and hydroxide) are powerful bases and nucleophiles (e.g., for the Williamson ether synthesis) in this solvent. In particular, RO– or HO– in DMSO can be used to generate significant equilibrium concentrations of acetylide ions through the deprotonation of alkynes (see Favorskii reaction). Nucleophilic substitution The OH group is not a good leaving group in nucleophilic substitution reactions, so neutral alcohols do not react in such reactions. However, if the oxygen is first protonated to give ROH2+, the leaving group (water) is much more stable, and the nucleophilic substitution can take place. For instance, tertiary alcohols react with hydrochloric acid to produce tertiary alkyl halides, where the hydroxyl group is replaced by a chlorine atom by unimolecular nucleophilic substitution.
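As a rough numerical illustration of the Deprotonation discussion above, the sketch below (an added example, not from the article; the specific numbers are taken from the ranges quoted there) uses the standard Henderson–Hasselbalch relation to convert a pKa of about 16 into the equilibrium fraction of alkoxide present at a given pH. It shows why essentially no alkoxide exists in neutral water and why very strong bases such as sodium hydride are used to generate alkoxides.

```python
# Sketch relating the pKa values quoted in the Deprotonation discussion to the
# equilibrium fraction of alkoxide (RO-) at a given pH, via the standard
# Henderson-Hasselbalch relation  [RO-]/[ROH] = 10**(pH - pKa).

def alkoxide_fraction(pka: float, ph: float) -> float:
    """Fraction of the alcohol present as alkoxide at equilibrium."""
    ratio = 10 ** (ph - pka)          # [RO-]/[ROH]
    return ratio / (1.0 + ratio)

if __name__ == "__main__":
    # An ethanol-like alcohol (pKa ~ 16) in neutral water: essentially no alkoxide.
    print(f"pH  7.0: {alkoxide_fraction(16.0, 7.0):.2e}")   # ~1e-09
    # Even in strongly basic water (pH ~ 14) only about 1% is deprotonated,
    # which is why bases such as NaH or sodium metal are used to form alkoxides.
    print(f"pH 14.0: {alkoxide_fraction(16.0, 14.0):.2e}")  # about 1% deprotonated
```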
If primary or secondary alcohols are to be reacted with hydrochloric acid, an activator such as zinc chloride is needed. Alternatively, the conversion may be performed directly using thionyl chloride.[1] Alcohols may, likewise, be converted to alkyl bromides using hydrobromic acid or phosphorus tribromide. In the Barton-McCombie deoxygenation an alcohol is deoxygenated to an alkane with tributyltin hydride or a trimethylborane-water complex in a radical substitution reaction. Dehydration Meanwhile, the oxygen atom has lone pairs of nonbonded electrons that render it weakly basic in the presence of strong acids such as sulfuric acid; methanol, for example, is protonated to give CH3OH2+. Upon treatment with strong acids, alcohols undergo the E1 elimination reaction to produce alkenes. The reaction, in general, obeys Zaitsev's Rule, which states that the most stable (usually the most substituted) alkene is formed. Tertiary alcohols eliminate easily at just above room temperature, but primary alcohols require a higher temperature. The acid-catalysed dehydration of ethanol, for instance, produces ethylene. A more controlled elimination reaction requires the formation of the xanthate ester. Protonolysis Tertiary alcohols react with strong acids to generate carbocations. The reaction is related to their dehydration, e.g. isobutylene from tert-butyl alcohol. A special kind of dehydration reaction involves triphenylmethanol and especially its amine-substituted derivatives. When treated with acid, these alcohols lose water to give stable carbocations, which are commercial dyes. Esterification Alcohols and carboxylic acids react in the so-called Fischer esterification. The reaction usually requires a catalyst, such as concentrated sulfuric acid. Other types of ester are prepared in a similar manner; for example, tosyl (tosylate) esters are made by reaction of the alcohol with p-toluenesulfonyl chloride in pyridine. Oxidation Primary alcohols (R-CH2OH) can be oxidized either to aldehydes (R-CHO) or to carboxylic acids (R-COOH). The oxidation of secondary alcohols (R1R2CH-OH) normally terminates at the ketone (R1R2C=O) stage. Tertiary alcohols (R1R2R3C-OH) are resistant to oxidation. The direct oxidation of primary alcohols to carboxylic acids normally proceeds via the corresponding aldehyde, which is transformed via an aldehyde hydrate (R-CH(OH)2) by reaction with water before it can be further oxidized to the carboxylic acid. Reagents useful for the transformation of primary alcohols to aldehydes are normally also suitable for the oxidation of secondary alcohols to ketones. These include Collins reagent and Dess-Martin periodinane. The direct oxidation of primary alcohols to carboxylic acids can be carried out using potassium permanganate or the Jones reagent. See also Enol Ethanol fuel Fatty alcohol Index of alcohol-related articles List of alcohols Lucas test Polyol Rubbing alcohol Sugar alcohol Transesterification Citations General references Antiseptics Functional groups
Alcohol (chemistry)
Ibn Sina (), also known as Abu Ali Sina (), Pour Sina (), and often known in the West as Avicenna (;  – June 1037), was a Persian polymath who is regarded as one of the most significant physicians, astronomers, thinkers and writers of the Islamic Golden Age, and the father of early modern medicine. Sajjad H. Rizvi has called Avicenna "arguably the most influential philosopher of the pre-modern era". He was a Muslim Peripatetic philosopher influenced by Greek Aristotelian philosophy. Of the 450 works he is believed to have written, around 240 have survived, including 150 on philosophy and 40 on medicine. His most famous works are The Book of Healing, a philosophical and scientific encyclopedia, and The Canon of Medicine, a medical encyclopedia which became a standard medical text at many medieval universities and remained in use as late as 1650. Besides philosophy and medicine, Avicenna's corpus includes writings on astronomy, alchemy, geography and geology, psychology, Islamic theology, logic, mathematics, physics and works of poetry. Name Avicenna is a Latin corruption of the Arabic patronym Ibn Sīnā (), meaning "Son of Sina". However, Avicenna was not the son but the great-great-grandson of a man named Sina. His formal Arabic name was Abū ʿAlī al-Ḥusayn bin ʿAbdullāh ibn al-Ḥasan bin ʿAlī bin Sīnā al-Balkhi al-Bukhari (). Circumstances Avicenna created an extensive corpus of works during what is commonly known as the Islamic Golden Age, in which the translations of Byzantine Greco-Roman, Persian and Indian texts were studied extensively. Greco-Roman (Mid- and Neo-Platonic, and Aristotelian) texts translated by the Kindi school were commented on, redacted and developed substantially by Islamic intellectuals, who also built upon Persian and Indian mathematical systems, astronomy, algebra, trigonometry and medicine. The Samanid dynasty in the eastern part of Persia, Greater Khorasan and Central Asia as well as the Buyid dynasty in the western part of Persia and Iraq provided a thriving atmosphere for scholarly and cultural development. Under the Samanids, Bukhara rivaled Baghdad as a cultural capital of the Islamic world. There, the study of the Quran and the Hadith thrived. Philosophy, Fiqh and theology (kalaam) were further developed, most notably by Avicenna and his opponents. Al-Razi and Al-Farabi had provided methodology and knowledge in medicine and philosophy. Avicenna had access to the great libraries of Balkh, Khwarezm, Gorgan, Rey, Isfahan and Hamadan. Various texts (such as the 'Ahd with Bahmanyar) show that he debated philosophical points with the greatest scholars of the time. Aruzi Samarqandi describes how before Avicenna left Khwarezm he had met Al-Biruni (a famous scientist and astronomer), Abu Nasr Iraqi (a renowned mathematician), Abu Sahl Masihi (a respected philosopher) and Abu al-Khayr Khammar (a great physician). Biography Early life and education Avicenna was born in the village of Afshana in Transoxiana to a family of Persian stock. The village was near the Samanid capital of Bukhara, which was his mother's hometown. His father Abd Allah was a native of the city of Balkh in Tukharistan. An official of the Samanid bureaucracy, he had served as the governor of a village of the royal estate of Harmaytan (near Bukhara) during the reign of Nuh II (). Avicenna also had a younger brother. A few years later, the family settled in Bukhara, a centre of learning, which attracted many scholars.
It was there that Avicenna was educated, which early on was seemingly administered by his father. Although both Avicenna's father and brother had converted to Ismailism, he himself did not follow the faith. He was instead an adherent of the Hanafi school, which was also followed by the Samanids. Avicenna was first schooled in the Quran and literature, and by the age of 10, he had memorised the entire Quran. He was later sent by his father to an Indian greengrocer, who taught him arithmetic. Afterwards, he was schooled in Jurisprudence by the Hanafi jurist Ismail al-Zahid. Some time later, Avicenna's father invited the physician and philosopher Abu Abdallah al-Natili to their house to educate Avicenna. Together, they studied the Isagoge of Porphyry (died 305) and possibly the Categories of Aristotle (died 322 BC) as well. After Avicenna had read the Almagest of Ptolemy (died 170) and Euclid's Elements, Natili told him to continue his research independently. By the time Avicenna was eighteen, he was well-educated in Greek sciences. Although Avicenna only mentions Natili as his teacher in his autobiography, he most likely had other teachers as well, such as the physicians Abu Mansur Qumri and Abu Sahl al-Masihi. Career In Bukhara and Gurganj At the age of seventeen, Avicenna was made a physician of Nuh II. By the time Avicenna was at least 21 years old, his father died. He was subsequently given an administrative post, possibly succeeding his father as the governor of Harmaytan. Avicenna later moved to Gurganj, the capital of Khwarazm, which he reports that he did due to "necessity". The date he went to the place is uncertain, as he reports that he served the Khwarazmshah (ruler) of the region, the Ma'munid Abu al-Hasan Ali. The latter ruled from 997 to 1009, which indicates that Avicenna moved sometime during that period. He may have moved in 999, the year which the Samanid state fell after the Turkic Qarakhanids captured Bukhara and imprisoned the Samanid ruler Abd al-Malik II. Due to his high position and strong connection with the Samanids, Avicenna may have found himself in an unfavorable position after the fall of his suzerain. It was through the minister of Gurganj, Abu'l-Husayn as-Sahi, a patron of Greek sciences, that Avicenna entered into the service of Abu al-Hasan Ali. Under the Ma'munids, Gurganj became a centre of learning, attracting many prominent figures, such as Avicenna and his former teacher Abu Sahl al-Masihi, the mathematician Abu Nasr Mansur, the physician Ibn al-Khammar, and the philologist al-Tha'alibi. In Gurgan Avicenna later moved due to "necessity" once more (in 1012), this time to the west. There he travelled through the Khurasani cities of Nasa, Abivard, Tus, Samangan and Jajarm. He was planning to visit the ruler of the city of Gurgan, the Ziyarid Qabus (), a cultivated patron of writing, whose court attracted many distinguished poets and scholars. However, when Avicenna eventually arrived, he discovered that the ruler had been dead since the winter of 1013. Avicenna then left Gurgan for Dihistan, but returned after becoming ill. There he met Abu 'Ubayd al-Juzjani (died 1070) who became his pupil and companion. Avicenna stayed briefly in Gurgan, reportedly serving Qabus' son and successor Manuchihr () and resided in the house of a patron. In Ray and Hamadan In , Avicenna went to the city of Ray, where he entered into the service of the Buyid amir (ruler) Majd al-Dawla () and his mother Sayyida Shirin, the de facto ruler of the realm. 
There he served as the physician at the court, treating Majd al-Dawla, who was suffering from melancholia. Avicenna reportedly later served as the "business manager" of Sayyida Shirin in Qazvin and Hamadan, though details regarding this tenure are unclear. During this period, Avicenna finished his Canon of Medicine and started writing his Book of Healing. In 1015, during Avicenna's stay in Hamadan, he participated in a public debate, as was customary for newly arrived scholars in western Iran at that time. The purpose of the debate was to test the newcomer's reputation against that of a prominent local resident. The person whom Avicenna debated was Abu'l-Qasim al-Kirmani, a member of the school of philosophers of Baghdad. The debate became heated, resulting in Avicenna accusing Abu'l-Qasim of lacking basic knowledge of logic, while Abu'l-Qasim accused Avicenna of impoliteness. After the debate, Avicenna sent a letter to the Baghdad Peripatetics, asking if Abu'l-Qasim's claim that he shared the same opinion as them was true. Abu'l-Qasim later retaliated by writing a letter to an unknown person, in which he made accusations so serious that Avicenna wrote to a deputy of Majd al-Dawla, named Abu Sa'd, to investigate the matter. The accusation made against Avicenna may have been the same one he had received earlier, when the people of Hamadan accused him of copying the stylistic structures of the Quran in his Sermons on Divine Unity. The seriousness of this charge, in the words of the historian Peter Adamson, "cannot be underestimated in the larger Muslim culture." Not long afterwards, Avicenna shifted his allegiance to the rising Buyid amir Shams al-Dawla (the younger brother of Majd al-Dawla), which Adamson suggests was due to Abu'l-Qasim also working under Sayyida Shirin. Avicenna had been called upon by Shams al-Dawla to treat him, but after the latter's campaign in the same year against his former ally, the Annazid ruler Abu Shawk (), he forced Avicenna to become his vizier. Although Avicenna would sometimes clash with Shams al-Dawla's troops, he remained vizier until the latter died of colic in 1021. Avicenna was asked by Shams al-Dawla's son and successor Sama' al-Dawla () to stay on as vizier, but instead went into hiding with his patron Abu Ghalib al-Attar, to wait for better opportunities to emerge. It was during this period that Avicenna was secretly in contact with Ala al-Dawla Muhammad (), the Kakuyid ruler of Isfahan and uncle of Sayyida Shirin. It was during his stay at Attar's home that Avicenna completed his Book of Healing, writing fifty pages a day. The Buyid court in Hamadan, particularly the Kurdish vizier Taj al-Mulk, suspected Avicenna of correspondence with Ala al-Dawla, and as a result had the house of Attar ransacked and Avicenna imprisoned in the fortress of Fardajan, outside Hamadan. Juzjani blames one of Avicenna's informers for his capture. Avicenna was imprisoned for four months, until Ala al-Dawla captured Hamadan, thus putting an end to Sama' al-Dawla's reign. In Isfahan Avicenna was subsequently released, and went to Isfahan, where he was well received by Ala al-Dawla. In the words of Juzjani, the Kakuyid ruler gave Avicenna "the respect and esteem which someone like him deserved." Adamson also says that Avicenna's service under Ala al-Dawla "proved to be the most stable period of his life." Avicenna served as an advisor, if not vizier, of Ala al-Dawla, accompanying him on many of his military expeditions and travels.
Avicenna dedicated two Persian works to him, a philosophical treatise named Danish-nama-yi Ala'i ("Book of Science for Ala"), and a medical treatise about the pulse. During the brief occupation of Isfahan by the Ghaznavids in January 1030, Avicenna and Ala al-Dawla relocated to the southwestern Iranian region of Khuzistan, where they stayed until the death of the Ghaznavid ruler Mahmud (), which occurred two months later. It was seemingly when Avicenna returned to Isfahan that he started writing his Pointers and Reminders. In 1037, while Avicenna was accompanying Ala al-Dawla to a battle near Isfahan, he was hit by a severe colic, which he had been constantly suffering from throughout his life. He died shortly afterwards in Hamadan, where he was buried. Philosophy Avicenna wrote extensively on early Islamic philosophy, especially the subjects logic, ethics and metaphysics, including treatises named Logic and Metaphysics. Most of his works were written in Arabic—then the language of science in the Middle East—and some in Persian. Of linguistic significance even to this day are a few books that he wrote in nearly pure Persian language (particularly the Danishnamah-yi 'Ala', Philosophy for Ala' ad-Dawla'). Avicenna's commentaries on Aristotle often criticized the philosopher, encouraging a lively debate in the spirit of ijtihad. Avicenna's Neoplatonic scheme of "emanations" became fundamental in the Kalam (school of theological discourse) in the 12th century. His Book of Healing became available in Europe in partial Latin translation some fifty years after its composition, under the title Sufficientia, and some authors have identified a "Latin Avicennism" as flourishing for some time, paralleling the more influential Latin Averroism, but suppressed by the Parisian decrees of 1210 and 1215. Avicenna's psychology and theory of knowledge influenced William of Auvergne, Bishop of Paris and Albertus Magnus, while his metaphysics influenced the thought of Thomas Aquinas. Metaphysical doctrine Early Islamic philosophy and Islamic metaphysics, imbued as it is with Islamic theology, distinguishes more clearly than Aristotelianism between essence and existence. Whereas existence is the domain of the contingent and the accidental, essence endures within a being beyond the accidental. The philosophy of Avicenna, particularly that part relating to metaphysics, owes much to al-Farabi. The search for a definitive Islamic philosophy separate from Occasionalism can be seen in what is left of his work. Following al-Farabi's lead, Avicenna initiated a full-fledged inquiry into the question of being, in which he distinguished between essence (Mahiat) and existence (Wujud). He argued that the fact of existence cannot be inferred from or accounted for by the essence of existing things, and that form and matter by themselves cannot interact and originate the movement of the universe or the progressive actualization of existing things. Existence must, therefore, be due to an agent-cause that necessitates, imparts, gives, or adds existence to an essence. To do so, the cause must be an existing thing and coexist with its effect. Avicenna's consideration of the essence-attributes question may be elucidated in terms of his ontological analysis of the modalities of being; namely impossibility, contingency and necessity. Avicenna argued that the impossible being is that which cannot exist, while the contingent in itself (mumkin bi-dhatihi) has the potentiality to be or not to be without entailing a contradiction. 
When actualized, the contingent becomes a 'necessary existent due to what is other than itself' (wajib al-wujud bi-ghayrihi). Thus, contingency-in-itself is potential beingness that could eventually be actualized by an external cause other than itself. The metaphysical structures of necessity and contingency are different. Necessary being due to itself (wajib al-wujud bi-dhatihi) is true in itself, while the contingent being is 'false in itself' and 'true due to something else other than itself'. The necessary is the source of its own being without borrowed existence. It is what always exists. The Necessary exists 'due-to-Its-Self', and has no quiddity/essence (mahiyya) other than existence (wujud). Furthermore, It is 'One' (wahid ahad) since there cannot be more than one 'Necessary-Existent-due-to-Itself' without differentia (fasl) to distinguish them from each other. Yet, to require differentia entails that they exist 'due-to-themselves' as well as 'due to what is other than themselves'; and this is contradictory. However, if no differentia distinguishes them from each other, then there is no sense in which these 'Existents' are not one and the same. Avicenna adds that the 'Necessary-Existent-due-to-Itself' has no genus (jins), nor a definition (hadd), nor a counterpart (nadd), nor an opposite (did), and is detached (bari) from matter (madda), quality (kayf), quantity (kam), place (ayn), situation (wad) and time (waqt). Avicenna's theology on metaphysical issues (ilāhiyyāt) has been criticized by some Islamic scholars, among them al-Ghazali, Ibn Taymiyya and Ibn al-Qayyim. While discussing the views of the theists among the Greek philosophers, namely Socrates, Plato and Aristotle in Al-Munqidh min ad-Dalal ("Deliverance from Error"), al-Ghazali noted that the Greek philosophers "must be taxed with unbelief, as must their partisans among the Muslim philosophers, such as Avicenna and al-Farabi and their likes." He added that "None, however, of the Muslim philosophers engaged so much in transmitting Aristotle's lore as did the two men just mentioned. [...] The sum of what we regard as the authentic philosophy of Aristotle, as transmitted by al-Farabi and Avicenna, can be reduced to three parts: a part which must be branded as unbelief; a part which must be stigmatized as innovation; and a part which need not be repudiated at all." Argument for God's existence Avicenna made an argument for the existence of God which would be known as the "Proof of the Truthful" (Arabic: burhan al-siddiqin). Avicenna argued that there must be a "necessary existent" (Arabic: wajib al-wujud), an entity that cannot not exist and through a series of arguments, he identified it with the Islamic conception of God. Present-day historian of philosophy Peter Adamson called this argument one of the most influential medieval arguments for God's existence, and Avicenna's biggest contribution to the history of philosophy. Al-Biruni correspondence Correspondence between Avicenna (with his student Ahmad ibn 'Ali al-Ma'sumi) and Al-Biruni has survived in which they debated Aristotelian natural philosophy and the Peripatetic school. Abu Rayhan began by asking Avicenna eighteen questions, ten of which were criticisms of Aristotle's On the Heavens. Theology Avicenna was a devout Muslim and sought to reconcile rational philosophy with Islamic theology. His aim was to prove the existence of God and His creation of the world scientifically and through reason and logic. 
Avicenna's views on Islamic theology (and philosophy) were enormously influential, forming part of the core of the curriculum at Islamic religious schools until the 19th century. Avicenna wrote a number of short treatises dealing with Islamic theology. These included treatises on the prophets (whom he viewed as "inspired philosophers"), and also on various scientific and philosophical interpretations of the Quran, such as how Quranic cosmology corresponds to his own philosophical system. In general these treatises linked his philosophical writings to Islamic religious ideas; for example, the body's afterlife. There are occasional brief hints and allusions in his longer works, however, that Avicenna considered philosophy as the only sensible way to distinguish real prophecy from illusion. He did not state this more clearly because of the political implications of such a theory, if prophecy could be questioned, and also because most of the time he was writing shorter works which concentrated on explaining his theories on philosophy and theology clearly, without digressing to consider epistemological matters which could only be properly considered by other philosophers. Later interpretations of Avicenna's philosophy split into three different schools; those (such as al-Tusi) who continued to apply his philosophy as a system to interpret later political events and scientific advances; those (such as al-Razi) who considered Avicenna's theological works in isolation from his wider philosophical concerns; and those (such as al-Ghazali) who selectively used parts of his philosophy to support their own attempts to gain greater spiritual insights through a variety of mystical means. It was the theological interpretation championed by those such as al-Razi which eventually came to predominate in the madrasahs. Avicenna memorized the Quran by the age of ten, and as an adult, he wrote five treatises commenting on suras from the Quran. One of these texts included the Proof of Prophecies, in which he comments on several Quranic verses and holds the Quran in high esteem. Avicenna argued that the Islamic prophets should be considered higher than philosophers. Avicenna is generally understood to have been aligned with the Sunni Hanafi school of thought. Avicenna studied Hanafi law, many of his notable teachers were Hanafi jurists, and he served under the Hanafi court of Ali ibn Mamun. Avicenna said at an early age that he remained "unconvinced" by Ismaili missionary attempts to convert him. Medieval historian Ẓahīr al-dīn al-Bayhaqī (d. 1169) also believed Avicenna to be a follower of the Brethren of Purity. Thought experiments While he was imprisoned in the castle of Fardajan near Hamadhan, Avicenna wrote his famous "floating man"—literally falling man—a thought experiment to demonstrate human self-awareness and the substantiality and immateriality of the soul. Avicenna believed his "Floating Man" thought experiment demonstrated that the soul is a substance, and claimed humans cannot doubt their own consciousness, even in a situation that prevents all sensory data input. The thought experiment told its readers to imagine themselves created all at once while suspended in the air, isolated from all sensations, which includes no sensory contact with even their own bodies. He argued that, in this scenario, one would still have self-consciousness. 
Because it is conceivable that a person, suspended in air while cut off from sense experience, would still be capable of determining his own existence, the thought experiment points to the conclusions that the soul is a perfection, independent of the body, and an immaterial substance. The conceivability of this "Floating Man" indicates that the soul is perceived intellectually, which entails the soul's separateness from the body. Avicenna referred to the living human intelligence, particularly the active intellect, which he believed to be the hypostasis by which God communicates truth to the human mind and imparts order and intelligibility to nature. Following is an English translation of the argument: However, Avicenna posited the brain as the place where reason interacts with sensation. Sensation prepares the soul to receive rational concepts from the universal Agent Intellect. The first knowledge of the flying person would be "I am," affirming his or her essence. That essence could not be the body, obviously, as the flying person has no sensation. Thus, the knowledge that "I am" is the core of a human being: the soul exists and is self-aware. Avicenna thus concluded that the idea of the self is not logically dependent on any physical thing, and that the soul should not be seen in relative terms, but as a primary given, a substance. The body is unnecessary; in relation to it, the soul is its perfection. In itself, the soul is an immaterial substance. The Canon of Medicine Avicenna authored a five-volume medical encyclopedia: The Canon of Medicine (Al-Qanun fi't-Tibb). It was used as the standard medical textbook in the Islamic world and Europe up to the 18th century. The Canon still plays an important role in Unani medicine. Liber Primus Naturalium Avicenna considered whether events like rare diseases or disorders have natural causes. He used the example of polydactyly to explain his perception that causal reasons exist for all medical events. This view of medical phenomena anticipated developments in the Enlightenment by seven centuries. The Book of Healing Earth sciences Avicenna wrote on Earth sciences such as geology in The Book of Healing. While discussing the formation of mountains, he explained: Philosophy of science In the Al-Burhan (On Demonstration) section of The Book of Healing, Avicenna discussed the philosophy of science and described an early scientific method of inquiry. He discussed Aristotle's Posterior Analytics and significantly diverged from it on several points. Avicenna discussed the issue of a proper methodology for scientific inquiry and the question of "How does one acquire the first principles of a science?" He asked how a scientist would arrive at "the initial axioms or hypotheses of a deductive science without inferring them from some more basic premises?" He explained that the ideal situation is when one grasps that a "relation holds between the terms, which would allow for absolute, universal certainty". Avicenna then added two further methods for arriving at the first principles: the ancient Aristotelian method of induction (istiqra), and the method of examination and experimentation (tajriba). Avicenna criticized Aristotelian induction, arguing that "it does not lead to the absolute, universal, and certain premises that it purports to provide." In its place, he developed a "method of experimentation as a means for scientific inquiry." Logic An early formal system of temporal logic was studied by Avicenna. 
Although he did not develop a real theory of temporal propositions, he did study the relationship between temporalis and the implication. Avicenna's work was further developed by Najm al-Dīn al-Qazwīnī al-Kātibī and became the dominant system of Islamic logic until modern times. Avicennian logic also influenced several early European logicians such as Albertus Magnus and William of Ockham. Avicenna endorsed the law of non-contradiction proposed by Aristotle, that a fact could not be both true and false at the same time and in the same sense of the terminology used. He stated, "Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned." Physics In mechanics, Avicenna, in The Book of Healing, developed a theory of motion, in which he made a distinction between the inclination (tendency to motion) and force of a projectile, and concluded that motion was a result of an inclination (mayl) transferred to the projectile by the thrower, and that projectile motion in a vacuum would not cease. He viewed inclination as a permanent force whose effect is dissipated by external forces such as air resistance. The theory of motion presented by Avicenna was probably influenced by the 6th-century Alexandrian scholar John Philoponus. Avicenna's is a less sophisticated variant of the theory of impetus developed by Buridan in the 14th century. It is unclear if Buridan was influenced by Avicenna, or by Philoponus directly. In optics, Avicenna was among those who argued that light had a speed, observing that "if the perception of light is due to the emission of some sort of particles by a luminous source, the speed of light must be finite." He also provided a wrong explanation of the rainbow phenomenon. Carl Benjamin Boyer described Avicenna's ("Ibn Sīnā") theory on the rainbow as follows: In 1253, a Latin text entitled Speculum Tripartitum stated the following regarding Avicenna's theory on heat: Psychology Avicenna's legacy in classical psychology is primarily embodied in the Kitab al-nafs parts of his Kitab al-shifa (The Book of Healing) and Kitab al-najat (The Book of Deliverance). These were known in Latin under the title De Anima (treatises "on the soul"). Notably, Avicenna develops what is called the Flying Man argument in the Psychology of The Cure I.1.7 as defence of the argument that the soul is without quantitative extension, which has an affinity with Descartes's cogito argument (or what phenomenology designates as a form of an "epoche"). Avicenna's psychology requires that connection between the body and soul be strong enough to ensure the soul's individuation, but weak enough to allow for its immortality. Avicenna grounds his psychology on physiology, which means his account of the soul is one that deals almost entirely with the natural science of the body and its abilities of perception. Thus, the philosopher's connection between the soul and body is explained almost entirely by his understanding of perception; in this way, bodily perception interrelates with the immaterial human intellect. In sense perception, the perceiver senses the form of the object; first, by perceiving features of the object by our external senses. This sensory information is supplied to the internal senses, which merge all the pieces into a whole, unified conscious experience. 
This process of perception and abstraction is the nexus of the soul and body, for the material body may only perceive material objects, while the immaterial soul may only receive the immaterial, universal forms. The way the soul and body interact in the final abstraction of the universal from the concrete particular is the key to their relationship and interaction, which takes place in the physical body. The soul completes the action of intellection by accepting forms that have been abstracted from matter. This process requires a concrete particular (material) to be abstracted into the universal intelligible (immaterial). The material and immaterial interact through the Active Intellect, which is a "divine light" containing the intelligible forms. The Active Intellect reveals the universals concealed in material objects much like the sun makes colour available to our eyes. Other contributions Astronomy and astrology Avicenna wrote an attack on astrology titled Resāla fī ebṭāl aḥkām al-nojūm, in which he cited passages from the Quran to dispute the power of astrology to foretell the future. He believed that each planet had some influence on the earth, but argued against astrologers being able to determine the exact effects. Avicenna's astronomical writings had some influence on later writers, although in general his work could be considered less developed than that of Alhazen or Al-Biruni. One important feature of his writing is that he considers mathematical astronomy as a separate discipline to astrology. He criticized Aristotle's view of the stars receiving their light from the Sun, stating that the stars are self-luminous, and believed that the planets are also self-luminous. He claimed to have observed Venus as a spot on the Sun. This is possible, as there was a transit on 24 May 1032, but Avicenna did not give the date of his observation, and modern scholars have questioned whether he could have observed the transit from his location at that time; he may have mistaken a sunspot for Venus. He used his transit observation to help establish that Venus was, at least sometimes, below the Sun in Ptolemaic cosmology, i.e. the sphere of Venus comes before the sphere of the Sun when moving out from the Earth in the prevailing geocentric model. He also wrote the Summary of the Almagest (based on Ptolemy's Almagest), with an appended treatise "to bring that which is stated in the Almagest and what is understood from Natural Science into conformity". For example, Avicenna considers the motion of the solar apogee, which Ptolemy had taken to be fixed. Chemistry Avicenna was the first to derive the attar of flowers by distillation and used steam distillation to produce essential oils such as rose essence, which he used as aromatherapeutic treatments for heart conditions. Unlike al-Razi, Avicenna explicitly disputed the theory of the transmutation of substances commonly believed by alchemists. Four works on alchemy attributed to Avicenna were translated into Latin; of these, the de Anima was the most influential, having influenced later medieval chemists and alchemists such as Vincent of Beauvais. However, Anawati argues (following Ruska) that the de Anima is a fake by a Spanish author. Similarly the Declaratio is believed not to be actually by Avicenna. The third work (The Book of Minerals) is agreed to be Avicenna's writing, adapted from the Kitab al-Shifa (Book of the Remedy). Avicenna classified minerals into stones, fusible substances, sulfurs and salts, building on the ideas of Aristotle and Jabir.
The epistola de Re recta is somewhat less sceptical of alchemy; Anawati argues that it is by Avicenna, but written earlier in his career when he had not yet firmly decided that transmutation was impossible. Poetry Almost half of Avicenna's works are versified. His poems appear in both Arabic and Persian. As an example, Edward Granville Browne claims that certain Persian verses commonly attributed to Omar Khayyám were originally written by Ibn Sīnā. Legacy Classical Islamic civilization Robert Wisnovsky, a scholar of Avicenna attached to McGill University, says that "Avicenna was the central figure in the long history of the rational sciences in Islam, particularly in the fields of metaphysics, logic and medicine" but that his works did not have an influence only in these "secular" fields of knowledge, as "these works, or portions of them, were read, taught, copied, commented upon, quoted, paraphrased and cited by thousands of post-Avicennian scholars—not only philosophers, logicians, physicians and specialists in the mathematical or exact sciences, but also by those who specialized in the disciplines of ʿilm al-kalām (rational theology, but understood to include natural philosophy, epistemology and philosophy of mind) and usūl al-fiqh (jurisprudence, but understood to include philosophy of law, dialectic, and philosophy of language)." Middle Ages and Renaissance As early as the 14th century, Dante Alighieri depicted him in Limbo in his Divine Comedy alongside virtuous non-Christian thinkers such as Virgil, Averroes, Homer, Horace, Ovid, Lucan, Socrates, Plato and Saladin. Avicenna has been recognized by both East and West as one of the great figures in intellectual history. George Sarton, the author of The History of Science, described Avicenna as "one of the greatest thinkers and medical scholars in history" and called him "the most famous scientist of Islam and one of the most famous of all races, places, and times". He was one of the Islamic world's leading writers in the field of medicine. Along with Rhazes, Abulcasis, Ibn al-Nafis and al-Ibadi, Avicenna is considered an important compiler of early Muslim medicine. He is remembered in the Western history of medicine as a major historical figure who made important contributions to medicine and the European Renaissance. His medical texts were unusual in that where controversy existed between Galen and Aristotle's views on medical matters (such as anatomy), he preferred to side with Aristotle, where necessary updating Aristotle's position to take into account post-Aristotelian advances in anatomical knowledge. Aristotle's dominant intellectual influence among medieval European scholars meant that Avicenna's linking of Galen's medical writings with Aristotle's philosophical writings in the Canon of Medicine (along with its comprehensive and logical organisation of knowledge) significantly increased Avicenna's importance in medieval Europe in comparison to other Islamic writers on medicine. His influence following translation of the Canon was such that from the early fourteenth to the mid-sixteenth centuries he was ranked with Hippocrates and Galen as one of the acknowledged authorities ("prince of physicians"). Modern reception In present-day Iran, Afghanistan and Tajikistan, he is considered a national icon, and is often regarded as among the greatest Persians. A monument was erected outside the Bukhara museum. The Avicenna Mausoleum and Museum in Hamadan was built in 1952.
Bu-Ali Sina University in Hamadan (Iran), the biotechnology Avicenna Research Institute in Tehran (Iran), the ibn Sīnā Tajik State Medical University in Dushanbe, Ibn Sina Academy of Medieval Medicine and Sciences at Aligarh, India, Avicenna School in Karachi and Avicenna Medical College in Lahore, Pakistan, Ibn Sina Balkh Medical School in his native province of Balkh in Afghanistan, Ibni Sina Faculty Of Medicine of Ankara University Ankara, Turkey, the main classroom building (the Avicenna Building) of the Sharif University of Technology, and Ibn Sina Integrated School in Marawi City (Philippines) are all named in his honour. His portrait hangs in the Hall of the Avicenna Faculty of Medicine in the University of Paris. There is a crater on the Moon named Avicenna, and a mangrove genus (Avicennia) is also named after him. In 1980, the Soviet Union, which then ruled his birthplace Bukhara, celebrated the thousandth anniversary of Avicenna's birth by circulating various commemorative stamps with artistic illustrations, and by erecting a bust of Avicenna based on anthropological research by Soviet scholars. Near his birthplace in Qishlak Afshona, some north of Bukhara, a training college for medical staff has been named for him. On the grounds is a museum dedicated to his life, times and work. The Avicenna Prize, established in 2003, is awarded every two years by UNESCO and rewards individuals and groups for their achievements in the field of ethics in science. The aim of the award is to promote ethical reflection on issues raised by advances in science and technology, and to raise global awareness of the importance of ethics in science. The Avicenna Directories (2008–15; now the World Directory of Medical Schools) list universities and schools where doctors, public health practitioners, pharmacists and others are educated. The original project team stated "Why Avicenna? Avicenna ... was ... noted for his synthesis of knowledge from both east and west. He has had a lasting influence on the development of medicine and health sciences. The use of Avicenna's name symbolises the worldwide partnership that is needed for the promotion of health services of high quality." In June 2009, Iran donated a "Persian Scholars Pavilion" to the United Nations Office in Vienna, which is placed in the central Memorial Plaza of the Vienna International Center. The pavilion features the statues of four prominent Iranian figures; highlighting Iranian architectural features, it is adorned with Persian art forms and includes the statues of the renowned Iranian scientists Avicenna, Al-Biruni, Zakariya Razi (Rhazes) and Omar Khayyam. The 1982 Soviet film Youth of Genius () recounts Avicenna's younger years. The film is set in Bukhara at the turn of the millennium. In Louis L'Amour's 1985 historical novel The Walking Drum, Kerbouchard studies and discusses Avicenna's The Canon of Medicine. In his book The Physician (1988) Noah Gordon tells the story of a young English medical apprentice who disguises himself as a Jew to travel from England to Persia and learn from Avicenna, the great master of his time. The novel was adapted into a feature film, The Physician, in 2013. Avicenna was played by Ben Kingsley. List of works The treatises of Avicenna influenced later Muslim thinkers in many areas including theology, philology, mathematics, astronomy, physics and music. His works numbered almost 450 volumes on a wide range of subjects, of which around 240 have survived.
In particular, 150 volumes of his surviving works concentrate on philosophy and 40 of them concentrate on medicine. His most famous works are The Book of Healing, and The Canon of Medicine. Avicenna wrote at least one treatise on alchemy, but several others have been falsely attributed to him. His Logic, Metaphysics, Physics, and De Caelo, are treatises giving a synoptic view of Aristotelian doctrine, though Metaphysics demonstrates a significant departure from the brand of Neoplatonism known as Aristotelianism in Avicenna's world; Arabic philosophers have hinted at the idea that Avicenna was attempting to "re-Aristotelianise" Muslim philosophy in its entirety, unlike his predecessors, who accepted the conflation of Platonic, Aristotelian, Neo- and Middle-Platonic works transmitted into the Muslim world. The Logic and Metaphysics have been extensively reprinted, the latter, e.g., at Venice in 1493, 1495 and 1546. Some of his shorter essays on medicine, logic, etc., take a poetical form (the poem on logic was published by Schmoelders in 1836). Two encyclopedic treatises, dealing with philosophy, are often mentioned. The larger, Al-Shifa' (Sanatio), exists nearly complete in manuscript in the Bodleian Library and elsewhere; part of it on the De Anima appeared at Pavia (1490) as the Liber Sextus Naturalium, and the long account of Avicenna's philosophy given by Muhammad al-Shahrastani seems to be mainly an analysis, and in many places a reproduction, of the Al-Shifa'. A shorter form of the work is known as the An-najat (Liberatio). The Latin editions of part of these works have been modified by the corrections which the monastic editors confess that they applied. There is also a (hikmat-al-mashriqqiyya, in Latin Philosophia Orientalis), mentioned by Roger Bacon, the majority of which is lost in antiquity, which according to Averroes was pantheistic in tone. Avicenna's works further include: Sirat al-shaykh al-ra'is (The Life of Avicenna), ed. and trans. WE. Gohlman, Albany, NY: State University of New York Press, 1974. (The only critical edition of Avicenna's autobiography, supplemented with material from a biography by his student Abu 'Ubayd al-Juzjani. A more recent translation of the Autobiography appears in D. Gutas, Avicenna and the Aristotelian Tradition: Introduction to Reading Avicenna's Philosophical Works, Leiden: Brill, 1988; second edition 2014.) Al-isharat wa al-tanbihat (Remarks and Admonitions), ed. S. Dunya, Cairo, 1960; parts translated by S.C. Inati, Remarks and Admonitions, Part One: Logic, Toronto, Ont.: Pontifical Institute for Mediaeval Studies, 1984, and Ibn Sina and Mysticism, Remarks and Admonitions: Part 4, London: Kegan Paul International, 1996. Al-Qanun fi'l-tibb (The Canon of Medicine), ed. I. a-Qashsh, Cairo, 1987. (Encyclopedia of medicine.) manuscript, Latin translation, Flores Avicenne, Michael de Capella, 1508, Modern text. Ahmed Shawkat Al-Shatti, Jibran Jabbur. Risalah fi sirr al-qadar (Essay on the Secret of Destiny), trans. G. Hourani in Reason and Tradition in Islamic Ethics, Cambridge: Cambridge University Press, 1985. Danishnama-i 'ala'i (The Book of Scientific Knowledge), ed. and trans. P. Morewedge, The Metaphysics of Avicenna, London: Routledge and Kegan Paul, 1973. Kitab al-Shifa''' (The Book of Healing). (Avicenna's major work on philosophy. He probably began to compose al-Shifa' in 1014, and completed it in 1020.) Critical editions of the Arabic text have been published in Cairo, 1952–83, originally under the supervision of I. Madkour. 
Kitab al-Najat (The Book of Salvation), trans. F. Rahman, Avicenna's Psychology: An English Translation of Kitab al-Najat, Book II, Chapter VI with Historical-philosophical Notes and Textual Improvements on the Cairo Edition, Oxford: Oxford University Press, 1952. (The psychology of al-Shifa'.) (Digital version of the Arabic text) Risala fi'l-Ishq (A Treatise on Love). Translated by Emil L. Fackenheim. Persian works Avicenna's most important Persian work is the Danishnama-i 'Alai (, "the Book of Knowledge for [Prince] 'Ala ad-Daulah"). Avicenna created new scientific vocabulary that had not previously existed in Persian. The Danishnama covers such topics as logic, metaphysics, music theory and other sciences of his time. It has been translated into English by Parwiz Morewedge in 1977. The book is also important in respect to Persian scientific works.Andar Danesh-e Rag (, "On the Science of the Pulse") contains nine chapters on the science of the pulse and is a condensed synopsis. Persian poetry from Avicenna is recorded in various manuscripts and later anthologies such as Nozhat al-Majales. See also Al-Qumri (possibly Avicenna's teacher) Abdol Hamid Khosro Shahi (Iranian theologian) Mummia (Persian medicine) Namesakes of Ibn Sina Ibn Sina Academy of Medieval Medicine and Sciences in Aligarh Avicenna Bay in Antarctica Avicenna (crater) on the far side of the Moon Avicenna Cultural and Scientific Foundation Avicenne Hospital in Paris, France Avicenna International College in Budapest, Hungary Avicenna Mausoleum (complex dedicated to Avicenna) in Hamadan, Iran Avicenna Research Institute in Tehran, Iran Avicenna Tajik State Medical University in Dushanbe, Tajikistan Bu-Ali Sina University in Hamedan, Iran Ibn Sina Peak – named after the Scientist, on the Kyrgyzstan–Tajikistan border Ibn Sina Foundation in Houston, Texas Ibn Sina Hospital, Baghdad, Iraq Ibn Sina Hospital, Istanbul, Turkey Ibn Sina Medical College Hospital, Dhaka, Bangladesh Ibn Sina University Hospital of Rabat-Salé at Mohammed V University in Rabat, Morocco Ibne Sina Hospital, Multan, Punjab, Pakistan International Ibn Sina Clinic, Dushanbe, Tajikistan Philosophy Eastern philosophy Iranian philosophy Islamic philosophy Contemporary Islamic philosophy Science in the medieval Islamic world List of scientists in medieval Islamic world Sufi philosophy Science and technology in Iran Ancient Iranian medicine List of pre-modern Iranian scientists and scholars References Sources cited Further reading Encyclopedic articles (PDF version) Avicenna entry by Sajjad H. Rizvi in the Internet Encyclopedia of Philosophy Primary literature For an old list of other extant works, C. Brockelmann's Geschichte der arabischen Litteratur (Weimar 1898), vol. i. pp. 452–458. (XV. W.; G. W. T.) For a current list of his works see A. Bertolacci (2006) and D. Gutas (2014) in the section "Philosophy". Avicenne: Réfutation de l'astrologie. Edition et traduction du texte arabe, introduction, notes et lexique par Yahya Michot. Préface d'Elizabeth Teissier (Beirut-Paris: Albouraq, 2006) . William E. Gohlam (ed.), The Life of Ibn Sina. A Critical Edition and Annotated Translation, Albany, State of New York University Press, 1974. For Ibn Sina's life, see Ibn Khallikan's Biographical Dictionary, translated by de Slane (1842); F. Wüstenfeld's Geschichte der arabischen Aerzte und Naturforscher (Göttingen, 1840). Madelung, Wilferd and Toby Mayer (ed. and tr.), Struggling with the Philosopher: A Refutation of Avicenna's Metaphysics. 
A New Arabic Edition and English Translation of Shahrastani's Kitab al-Musara'a. Secondary literature This is, on the whole, an informed and good account of the life and accomplishments of one of the greatest influences on the development of thought both Eastern and Western. ... It is not as philosophically thorough as the works of D. Saliba, A.M. Goichon, or L. Gardet, but it is probably the best essay in English on this important thinker of the Middle Ages. (Julius R. Weinberg, The Philosophical Review, Vol. 69, No. 2, Apr. 1960, pp. 255–259) This is a distinguished work which stands out from, and above, many of the books and articles which have been written in this century on Avicenna (Ibn Sīnā) (980–1037). It has two main features on which its distinction as a major contribution to Avicennan studies may be said to rest: the first is its clarity and readability; the second is the comparative approach adopted by the author. ... (Ian Richard Netton, Journal of the Royal Asiatic Society, Third Series, Vol. 4, No. 2, July 1994, pp. 263–264) Y.T. Langermann (ed.), Avicenna and his Legacy. A Golden Age of Science and Philosophy, Brepols Publishers, 2010, For a new understanding of his early career, based on a newly discovered text, see also: Michot, Yahya, Ibn Sînâ: Lettre au vizir Abû Sa'd. Editio princeps d'après le manuscrit de Bursa, traduction de l'arabe, introduction, notes et lexique (Beirut-Paris: Albouraq, 2000) . This German publication is both one of the most comprehensive general introductions to the life and works of the philosopher and physician Avicenna (Ibn Sīnā, d. 1037) and an extensive and careful survey of his contribution to the history of science. Its author is a renowned expert in Greek and Arabic medicine who has paid considerable attention to Avicenna in his recent studies. ... (Amos Bertolacci, Isis, Vol. 96, No. 4, December 2005, p. 649) Shaikh al Rais Ibn Sina (Special number) 1958–59, Ed. Hakim Syed Zillur Rahman, Tibbia College Magazine, Aligarh Muslim University, Aligarh, India. Medicine Browne, Edward G. Islamic Medicine. Fitzpatrick Lectures Delivered at the Royal College of Physicians in 1919–1920, reprint: New Delhi: Goodword Books, 2001. Pormann, Peter & Savage-Smith, Emilie. Medieval Islamic Medicine, Washington: Georgetown University Press, 2007. Prioreschi, Plinio. Byzantine and Islamic Medicine, A History of Medicine, Vol. 4, Omaha: Horatius Press, 2001. Syed Ziaur Rahman. Pharmacology of Avicennian Cardiac Drugs (Metaanalysis of researches and studies in Avicennian Cardiac Drugs along with English translation of Risalah al Adwiya al Qalbiyah), Ibn Sina Academy of Medieval Medicine and Sciences, Aligarh, India, 2020 Philosophy Amos Bertolacci, The Reception of Aristotle's Metaphysics in Avicenna's Kitab al-Sifa'. A Milestone of Western Metaphysical Thought, Leiden: Brill 2006, (Appendix C contains an Overview of the Main Works by Avicenna on Metaphysics in Chronological Order). Dimitri Gutas, Avicenna and the Aristotelian Tradition: Introduction to Reading Avicenna's Philosophical Works, Leiden, Brill 2014, second revised and expanded edition (first edition: 1988), including an inventory of Avicenna' Authentic Works. Andreas Lammer: The Elements of Avicenna's Physics. Greek Sources and Arabic Innovations. Scientia graeco-arabica 20. Berlin / Boston: Walter de Gruyter, 2018. Jon McGinnis and David C. Reisman (eds.) 
Interpreting Avicenna: Science and Philosophy in Medieval Islam: Proceedings of the Second Conference of the Avicenna Study Group, Leiden: Brill, 2004. Michot, Jean R., La destinée de l'homme selon Avicenne, Louvain: Aedibus Peeters, 1986, . Nader El-Bizri, The Phenomenological Quest between Avicenna and Heidegger, Binghamton, N.Y.: Global Publications SUNY, 2000 (reprinted by SUNY Press in 2014 with a new Preface). Nader El-Bizri, "Avicenna and Essentialism," Review of Metaphysics, Vol. 54 (June 2001), pp. 753–778. Nader El-Bizri, "Avicenna's De Anima between Aristotle and Husserl," in The Passions of the Soul in the Metamorphosis of Becoming, ed. Anna-Teresa Tymieniecka, Dordrecht: Kluwer, 2003, pp. 67–89. Nader El-Bizri, "Being and Necessity: A Phenomenological Investigation of Avicenna's Metaphysics and Cosmology," in Islamic Philosophy and Occidental Phenomenology on the Perennial Issue of Microcosm and Macrocosm, ed. Anna-Teresa Tymieniecka, Dordrecht: Kluwer, 2006, pp. 243–261. Nader El-Bizri, 'Ibn Sīnā's Ontology and the Question of Being', Ishrāq: Islamic Philosophy Yearbook 2 (2011), 222–237 Nader El-Bizri, 'Philosophising at the Margins of 'Sh'i Studies': Reflections on Ibn Sīnā's Ontology', in The Study of Sh'i Islam. History, Theology and Law, eds. F. Daftary and G. Miskinzoda (London: I.B. Tauris, 2014), pp. 585–597. Reisman, David C. (ed.), Before and After Avicenna: Proceedings of the First Conference of the Avicenna Study Group'', Leiden: Brill, 2003. External links Avicenna (Ibn-Sina) on the Subject and the Object of Metaphysics with a list of translations of the logical and philosophical works and an annotated bibliography 980s births 1037 deaths 10th-century Iranian people 11th-century astronomers 11th-century Persian writers 11th-century philosophers Alchemists of medieval Islam Aristotelian philosophers Burials in Iran Buyid viziers Classical humanists Critics of atheism Cultural critics Epistemologists Founders of philosophical traditions Iranian music theorists Islamic philosophers Transoxanian Islamic scholars Logicians People from Bukhara Region Pharmacologists of medieval Iran Medieval Persian poets Medieval Persian writers Metaphysicians Moral philosophers Musical theorists of medieval Islam Ontologists People from Khorasan Persian physicists Philosophers of ethics and morality Philosophers of logic Philosophers of mind Philosophers of psychology Philosophers of religion Philosophers of science Physicians of medieval Islam Samanid scholars Unani medicine Medieval Persian philosophers Iranian logicians Iranian ethicists People who memorized the Quran Samanid officials
Avicenna
Augustin-Jean Fresnel ( ; ; or ; ; 10 May 1788 – 14 July 1827) was a French civil engineer and physicist whose research in optics led to the almost unanimous acceptance of the wave theory of light, excluding any remnant of Newton's corpuscular theory, from the late 1830s until the end of the 19th century. He is perhaps better known for inventing the catadioptric (reflective/refractive) Fresnel lens and for pioneering the use of "stepped" lenses to extend the visibility of lighthouses, saving countless lives at sea. The simpler dioptric (purely refractive) stepped lens, first proposed by Count Buffon and independently reinvented by Fresnel, is used in screen magnifiers and in condenser lenses for overhead projectors. By expressing Huygens's principle of secondary waves and Young's principle of interference in quantitative terms, and supposing that simple colors consist of sinusoidal waves, Fresnel gave the first satisfactory explanation of diffraction by straight edges, including the first satisfactory wave-based explanation of rectilinear propagation. Part of his argument was a proof that the addition of sinusoidal functions of the same frequency but different phases is analogous to the addition of forces with different directions. By further supposing that light waves are purely transverse, Fresnel explained the nature of polarization, the mechanism of chromatic polarization, and the transmission and reflection coefficients at the interface between two transparent isotropic media. Then, by generalizing the direction-speed-polarization relation for calcite, he accounted for the directions and polarizations of the refracted rays in doubly-refractive crystals of the biaxial class (those for which Huygens's secondary wavefronts are not axisymmetric). The period between the first publication of his pure-transverse-wave hypothesis, and the submission of his first correct solution to the biaxial problem, was less than a year. Later, he coined the terms linear polarization, circular polarization, and elliptical polarization, explained how optical rotation could be understood as a difference in propagation speeds for the two directions of circular polarization, and (by allowing the reflection coefficient to be complex) accounted for the change in polarization due to total internal reflection, as exploited in the Fresnel rhomb. Defenders of the established corpuscular theory could not match his quantitative explanations of so many phenomena on so few assumptions. Fresnel had a lifelong battle with tuberculosis, to which he succumbed at the age of 39. Although he did not become a public celebrity in his lifetime, he lived just long enough to receive due recognition from his peers, including (on his deathbed) the Rumford Medal of the Royal Society of London, and his name is ubiquitous in the modern terminology of optics and waves. After the wave theory of light was subsumed by Maxwell's electromagnetic theory in the 1860s, some attention was diverted from the magnitude of Fresnel's contribution. In the period between Fresnel's unification of physical optics and Maxwell's wider unification, a contemporary authority, Humphrey Lloyd, described Fresnel's transverse-wave theory as "the noblest fabric which has ever adorned the domain of physical science, Newton's system of the universe alone excepted." 
Early life Family Augustin-Jean Fresnel (also called Augustin Jean or simply Augustin), born in Broglie, Normandy, on 10 May 1788, was the second of four sons of the architect Jacques Fresnel (1755–1805) and his wife Augustine, née Mérimée (1755–1833). In 1790, following the Revolution, Broglie became part of the département of Eure. The family moved twice – in 1789/90 to Cherbourg, and in 1794 to Jacques's home town of Mathieu, where Madame Fresnel would spend 25 years as a widow, outliving two of her sons. The first son, Louis (1786–1809), was admitted to the École Polytechnique, became a lieutenant in the artillery, and was killed in action at Jaca, Spain, the day before his 23rd birthday. The third, Léonor (1790–1869), followed Augustin into civil engineering, succeeded him as secretary of the Lighthouse Commission, and helped to edit his collected works. The fourth, Fulgence Fresnel (1795–1855), became a noted linguist, diplomat, and orientalist, and occasionally assisted Augustin with negotiations. Fulgence died in Baghdad in 1855, having led a mission to explore Babylon. Léonor apparently was the only one of the four who married. Their mother's younger brother, Jean François "Léonor" Mérimée (1757–1836), father of the writer Prosper Mérimée (1803–1870), was a painter who turned his attention to the chemistry of painting. He became the Permanent Secretary of the École des Beaux-Arts and (until 1814) a professor at the École Polytechnique, and was the initial point of contact between Augustin and the leading optical physicists of the day. Education The Fresnel brothers were initially home-schooled by their mother. The sickly Augustin was considered the slow one, not inclined to memorization; but the popular story that he hardly began to read until the age of eight is disputed. At the age of nine or ten he was undistinguished except for his ability to turn tree-branches into toy bows and guns that worked far too well, earning himself the title l'homme de génie (the man of genius) from his accomplices, and a united crackdown from their elders. In 1801, Augustin was sent to the École Centrale at Caen, as company for Louis. But Augustin lifted his performance: in late 1804 he was accepted into the École Polytechnique, being placed 17th in the entrance examination. As the detailed records of the École Polytechnique begin in 1808, we know little of Augustin's time there, except that he made few if any friends and – in spite of continuing poor health – excelled in drawing and geometry: in his first year he took a prize for his solution to a geometry problem posed by Adrien-Marie Legendre. Graduating in 1806, he then enrolled at the École Nationale des Ponts et Chaussées (National School of Bridges and Roads, also known as "ENPC" or "École des Ponts"), from which he graduated in 1809, entering the service of the Corps des Ponts et Chaussées as an ingénieur ordinaire aspirant (ordinary engineer in training). Directly or indirectly, he was to remain in the employment of the "Corps des Ponts" for the rest of his life. Religious formation Augustin Fresnel's parents were Roman Catholics of the Jansenist sect, characterized by an extreme Augustinian view of original sin. Religion took first place in the boys' home-schooling. In 1802, Mme Fresnel reportedly said: Augustin remained a Jansenist. He indeed regarded his intellectual talents as gifts from God, and considered it his duty to use them for the benefit of others. 
Plagued by poor health, and determined to do his duty before death thwarted him, he shunned pleasures and worked to the point of exhaustion. According to his fellow engineer Alphonse Duleau, who helped to nurse him through his final illness, Fresnel saw the study of nature as part of the study of the power and goodness of God. He placed virtue above science and genius. Yet in his last days he needed "strength of soul," not against death alone, but against "the interruption of discoveries… from which he hoped to derive useful applications." Jansenism is considered heretical by the Roman Catholic Church, and this may be part of the explanation why Fresnel, in spite of his scientific achievements and his royalist credentials, never gained a permanent academic teaching post; his only teaching appointment was at the Athénée in the winter of 1819–20. Be that as it may, the brief article on Fresnel in the old Catholic Encyclopedia does not mention his Jansenism, but describes him as "a deeply religious man and remarkable for his keen sense of duty." Engineering assignments Fresnel was initially posted to the western département of Vendée. There, in 1811, he anticipated what became known as the Solvay process for producing soda ash, except that recycling of the ammonia was not considered. That difference may explain why leading chemists, who learned of his discovery through his uncle Léonor, eventually thought it uneconomic. About 1812, Fresnel was sent to Nyons, in the southern département of Drôme, to assist with the imperial highway that was to connect Spain and Italy. It is from Nyons that we have the first evidence of his interest in optics. On 15 May 1814, while work was slack due to Napoleon's defeat, Fresnel wrote a "P.S." to his brother Léonor, saying in part: As late as 28 December he was still waiting for information, but he had received Biot's memoir by 10 February 1815. (The Institut de France had taken over the functions of the French Académie des Sciences and other académies in 1795. In 1816 the Académie des Sciences regained its name and autonomy, but remained part of the institute.) In March 1815, perceiving Napoleon's return from Elba as "an attack on civilization", Fresnel departed without leave, hastened to Toulouse and offered his services to the royalist resistance, but soon found himself on the sick list. Returning to Nyons in defeat, he was threatened and had his windows broken. During the Hundred Days he was placed on suspension, which he was eventually allowed to spend at his mother's house in Mathieu. There he used his enforced leisure to begin his optical experiments. Contributions to physical optics Historical context: From Newton to Biot The appreciation of Fresnel's reconstruction of physical optics might be assisted by an overview of the fragmented state in which he found the subject. In this subsection, optical phenomena that were unexplained or whose explanations were disputed are named in bold type. The corpuscular theory of light, favored by Isaac Newton and accepted by nearly all of Fresnel's seniors, easily explained rectilinear propagation: the corpuscles obviously moved very fast, so that their paths were very nearly straight. The wave theory, as developed by Christiaan Huygens in his Treatise on Light (1690), explained rectilinear propagation on the assumption that each point crossed by a traveling wavefront becomes the source of a secondary wavefront. 
Given the initial position of a traveling wavefront, any later position (according to Huygens) was the common tangent surface (envelope) of the secondary wavefronts emitted from the earlier position. As the extent of the common tangent was limited by the extent of the initial wavefront, the repeated application of Huygens's construction to a plane wavefront of limited extent (in a uniform medium) gave a straight, parallel beam. While this construction indeed predicted rectilinear propagation, it was difficult to reconcile with the common observation that wavefronts on the surface of water can bend around obstructions, and with the similar behavior of sound waves – causing Newton to maintain, to the end of his life, that if light consisted of waves it would "bend and spread every way" into the shadows. Huygens's theory neatly explained the law of ordinary reflection and the law of ordinary refraction ("Snell's law"), provided that the secondary waves traveled slower in denser media (those of higher refractive index). The corpuscular theory, with the hypothesis that the corpuscles were subject to forces acting perpendicular to surfaces, explained the same laws equally well, albeit with the implication that light traveled faster in denser media; that implication was wrong, but could not be directly disproven with the technology of Newton's time or even Fresnel's time. Similarly inconclusive was stellar aberration—that is, the apparent change in the position of a star due to the velocity of the earth across the line of sight (not to be confused with stellar parallax, which is due to the displacement of the earth across the line of sight). Identified by James Bradley in 1728, stellar aberration was widely taken as confirmation of the corpuscular theory. But it was equally compatible with the wave theory, as Euler noted in 1746 – tacitly assuming that the aether (the supposed wave-bearing medium) near the earth was not disturbed by the motion of the earth. The outstanding strength of Huygens's theory was his explanation of the birefringence (double refraction) of "Iceland crystal" (transparent calcite), on the assumption that the secondary waves are spherical for the ordinary refraction (which satisfies Snell's law) and spheroidal for the extraordinary refraction (which does not). In general, Huygens's common-tangent construction implies that rays are paths of least time between successive positions of the wavefront, in accordance with Fermat's principle. In the special case of isotropic media, the secondary wavefronts must be spherical, and Huygens's construction then implies that the rays are perpendicular to the wavefront; indeed, the law of ordinary refraction can be separately derived from that premise, as Ignace-Gaston Pardies did before Huygens. Although Newton rejected the wave theory, he noticed its potential to explain colors, including the colors of "thin plates" (e.g., "Newton's rings", and the colors of skylight reflected in soap bubbles), on the assumption that light consists of periodic waves, with the lowest frequencies (longest wavelengths) at the red end of the spectrum, and the highest frequencies (shortest wavelengths) at the violet end. In 1672 he published a heavy hint to that effect, but contemporary supporters of the wave theory failed to act on it: Robert Hooke treated light as a periodic sequence of pulses but did not use frequency as the criterion of color, while Huygens treated the waves as individual pulses without any periodicity; and Pardies died young in 1673. 
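The competing claims about the speed of light in denser media connect directly to the least-time reading of Huygens's construction mentioned above, and that reading is easy to check numerically: minimizing the travel time of a ray crossing a flat interface, with light slower in the denser medium, reproduces Snell's law. The indices and geometry in the sketch below are arbitrary illustrative choices, not values from the sources discussed here.

```python
import numpy as np

# Illustrative check of the least-time (Fermat) reading of Huygens's construction:
# the least-time path across a flat interface obeys Snell's law, with light
# travelling *slower* in the denser medium. Indices and geometry are arbitrary.
n1, n2 = 1.0, 1.5                 # refractive indices of the two media
v1, v2 = 1.0 / n1, 1.0 / n2       # wave speeds in units of c
A = np.array([0.0, 1.0])          # source, 1 unit above the interface (y = 0)
B = np.array([1.0, -1.0])         # destination, 1 unit below the interface

x = np.linspace(0.0, 1.0, 200001)                                   # candidate crossing points
t = np.hypot(x - A[0], A[1]) / v1 + np.hypot(B[0] - x, -B[1]) / v2  # travel times
xc = x[np.argmin(t)]                                                # least-time crossing point

theta1 = np.arctan2(xc - A[0], A[1])     # angle of incidence (from the normal)
theta2 = np.arctan2(B[0] - xc, -B[1])    # angle of refraction
print(round(n1 * np.sin(theta1), 4), round(n2 * np.sin(theta2), 4))  # equal: Snell's law
```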
Newton himself tried to explain colors of thin plates using the corpuscular theory, by supposing that his corpuscles had the wavelike property of alternating between "fits of easy transmission" and "fits of easy reflection", the distance between like "fits" depending on the color and the medium and, awkwardly, on the angle of refraction or reflection into that medium. More awkwardly still, this theory required thin plates to reflect only at the back surface, although thick plates manifestly reflected also at the front surface. It was not until 1801 that Thomas Young, in the Bakerian Lecture for that year, cited Newton's hint, and accounted for the colors of a thin plate as the combined effect of the front and back reflections, which reinforce or cancel each other according to the wavelength and the thickness. Young similarly explained the colors of "striated surfaces" (e.g., gratings) as the wavelength-dependent reinforcement or cancellation of reflections from adjacent lines. He described this reinforcement or cancellation as interference. Neither Newton nor Huygens satisfactorily explained diffraction—the blurring and fringing of shadows where, according to rectilinear propagation, they ought to be sharp. Newton, who called diffraction "inflexion", supposed that rays of light passing close to obstacles were bent ("inflected"); but his explanation was only qualitative. Huygens's common-tangent construction, without modifications, could not accommodate diffraction at all. Two such modifications were proposed by Young in the same 1801 Bakerian Lecture: first, that the secondary waves near the edge of an obstacle could diverge into the shadow, but only weakly, due to limited reinforcement from other secondary waves; and second, that diffraction by an edge was caused by interference between two rays: one reflected off the edge, and the other inflected while passing near the edge. The latter ray would be undeviated if sufficiently far from the edge, but Young did not elaborate on that case. These were the earliest suggestions that the degree of diffraction depends on wavelength. Later, in the 1803 Bakerian Lecture, Young ceased to regard inflection as a separate phenomenon, and produced evidence that diffraction fringes inside the shadow of a narrow obstacle were due to interference: when the light from one side was blocked, the internal fringes disappeared. But Young was alone in such efforts until Fresnel entered the field. Huygens, in his investigation of double refraction, noticed something that he could not explain: when light passes through two similarly oriented calcite crystals at normal incidence, the ordinary ray emerging from the first crystal suffers only the ordinary refraction in the second, while the extraordinary ray emerging from the first suffers only the extraordinary refraction in the second; but when the second crystal is rotated 90° about the incident rays, the roles are interchanged, so that the ordinary ray emerging from the first crystal suffers only the extraordinary refraction in the second, and vice versa. This discovery gave Newton another reason to reject the wave theory: rays of light evidently had "sides". Corpuscles could have sides (or poles, as they would later be called); but waves of light could not, because (so it seemed) any such waves would need to be longitudinal (with vibrations in the direction of propagation). 
Newton offered an alternative "Rule" for the extraordinary refraction, which rode on his authority through the 18th century, although he made "no known attempt to deduce it from any principles of optics, corpuscular or otherwise." In 1808, the extraordinary refraction of calcite was investigated experimentally, with unprecedented accuracy, by Étienne-Louis Malus, and found to be consistent with Huygens's spheroid construction, not Newton's "Rule". Malus, encouraged by Pierre-Simon Laplace, then sought to explain this law in corpuscular terms: from the known relation between the incident and refracted ray directions, Malus derived the corpuscular velocity (as a function of direction) that would satisfy Maupertuis's "least action" principle. But, as Young pointed out, the existence of such a velocity law was guaranteed by Huygens's spheroid, because Huygens's construction leads to Fermat's principle, which becomes Maupertuis's principle if the ray speed is replaced by the reciprocal of the particle speed! The corpuscularists had not found a force law that would yield the alleged velocity law, except by a circular argument in which a force acting at the surface of the crystal inexplicably depended on the direction of the (possibly subsequent) velocity within the crystal. Worse, it was doubtful that any such force would satisfy the conditions of Maupertuis's principle. In contrast, Young proceeded to show that "a medium more easily compressible in one direction than in any direction perpendicular to it, as if it consisted of an infinite number of parallel plates connected by a substance somewhat less elastic" admits spheroidal longitudinal wavefronts, as Huygens supposed. But Malus, in the midst of his experiments on double refraction, noticed something else: when a ray of light is reflected off a non-metallic surface at the appropriate angle, it behaves like one of the two rays emerging from a calcite crystal. It was Malus who coined the term polarization to describe this behavior, although the polarizing angle became known as Brewster's angle after its dependence on the refractive index was determined experimentally by David Brewster in 1815. Malus also introduced the term plane of polarization. In the case of polarization by reflection, his "plane of polarization" was the plane of the incident and reflected rays; in modern terms, this is the plane normal to the electric vibration. In 1809, Malus further discovered that the intensity of light passing through two polarizers is proportional to the squared cosine of the angle between their planes of polarization (Malus's law), whether the polarizers work by reflection or double refraction, and that all birefringent crystals produce both extraordinary refraction and polarization. As the corpuscularists started trying to explain these things in terms of polar "molecules" of light, the wave-theorists had no working hypothesis on the nature of polarization, prompting Young to remark that Malus's observations "present greater difficulties to the advocates of the undulatory theory than any other facts with which we are acquainted." Malus died in February 1812, at the age of 36, shortly after receiving the Rumford Medal for his work on polarization. In August 1811, François Arago reported that if a thin plate of mica was viewed against a white polarized backlight through a calcite crystal, the two images of the mica were of complementary colors (the overlap having the same color as the background). 
The light emerging from the mica was "depolarized" in the sense that there was no orientation of the calcite that made one image disappear; yet it was not ordinary ("unpolarized") light, for which the two images would be of the same color. Rotating the calcite around the line of sight changed the colors, though they remained complementary. Rotating the mica changed the saturation (not the hue) of the colors. This phenomenon became known as chromatic polarization. Replacing the mica with a much thicker plate of quartz, with its faces perpendicular to the optic axis (the axis of Huygens's spheroid or Malus's velocity function), produced a similar effect, except that rotating the quartz made no difference. Arago tried to explain his observations in corpuscular terms. In 1812, as Arago pursued further qualitative experiments and other commitments, Jean-Baptiste Biot reworked the same ground using a gypsum lamina in place of the mica, and found empirical formulae for the intensities of the ordinary and extraordinary images. The formulae contained two coefficients, supposedly representing colors of rays "affected" and "unaffected" by the plate – the "affected" rays being of the same color mix as those reflected by amorphous thin plates of proportional, but lesser, thickness. Arago protested, declaring that he had made some of the same discoveries but had not had time to write them up. In fact the overlap between Arago's work and Biot's was minimal, Arago's being only qualitative and wider in scope (attempting to include polarization by reflection). But the dispute triggered a notorious falling-out between the two men. Later that year, Biot tried to explain the observations as an oscillation of the alignment of the "affected" corpuscles at a frequency proportional to that of Newton's "fits", due to forces depending on the alignment. This theory became known as mobile polarization. To reconcile his results with a sinusoidal oscillation, Biot had to suppose that the corpuscles emerged with one of two permitted orientations, namely the extremes of the oscillation, with probabilities depending on the phase of the oscillation. Corpuscular optics was becoming expensive on assumptions. But in 1813, Biot reported that the case of quartz was simpler: the observable phenomenon (now called optical rotation or optical activity or sometimes rotary polarization) was a gradual rotation of the polarization direction with distance, and could be explained by a corresponding rotation (not oscillation) of the corpuscles. Early in 1814, reviewing Biot's work on chromatic polarization, Young noted that the periodicity of the color as a function of the plate thickness – including the factor by which the period exceeded that for a reflective thin plate, and even the effect of obliquity of the plate (but not the role of polarization)—could be explained by the wave theory in terms of the different propagation times of the ordinary and extraordinary waves through the plate. But Young was then the only public defender of the wave theory. In summary, in the spring of 1814, as Fresnel tried in vain to guess what polarization was, the corpuscularists thought that they knew, while the wave-theorists (if we may use the plural) literally had no idea. Both theories claimed to explain rectilinear propagation, but the wave explanation was overwhelmingly regarded as unconvincing. The corpuscular theory could not rigorously link double refraction to surface forces; the wave theory could not yet link it to polarization. 
The corpuscular theory was weak on thin plates and silent on gratings; the wave theory was strong on both, but under-appreciated. Concerning diffraction, the corpuscular theory did not yield quantitative predictions, while the wave theory had begun to do so by considering diffraction as a manifestation of interference, but had only considered two rays at a time. Only the corpuscular theory gave even a vague insight into Brewster's angle, Malus's law, or optical rotation. Concerning chromatic polarization, the wave theory explained the periodicity far better than the corpuscular theory, but had nothing to say about the role of polarization; and its explanation of the periodicity was largely ignored. And Arago had founded the study of chromatic polarization, only to lose the lead, controversially, to Biot. Such were the circumstances in which Arago first heard of Fresnel's interest in optics. Rêveries Fresnel's letters from later in 1814 reveal his interest in the wave theory, including his awareness that it explained the constancy of the speed of light and was at least compatible with stellar aberration. Eventually he compiled what he called his rêveries (musings) into an essay and submitted it via Léonor Mérimée to André-Marie Ampère, who did not respond directly. But on 19 December, Mérimée dined with Ampère and Arago, with whom he was acquainted through the École Polytechnique; and Arago promised to look at Fresnel's essay. In mid 1815, on his way home to Mathieu to serve his suspension, Fresnel met Arago in Paris and spoke of the wave theory and stellar aberration. He was informed that he was trying to break down open doors ("il enfonçait des portes ouvertes"), and directed to classical works on optics. Diffraction First attempt (1815) On 12 July 1815, as Fresnel was about to leave Paris, Arago left him a note on a new topic: Fresnel would not have ready access to these works outside Paris, and could not read English. But, in Mathieu – with a point-source of light made by focusing sunlight with a drop of honey, a crude micrometer of his own construction, and supporting apparatus made by a local locksmith – he began his own experiments. His technique was novel: whereas earlier investigators had projected the fringes onto a screen, Fresnel soon abandoned the screen and observed the fringes in space, through a lens with the micrometer at its focus, allowing more accurate measurements while requiring less light. Later in July, after Napoleon's final defeat, Fresnel was reinstated with the advantage of having backed the winning side. He requested a two-month leave of absence, which was readily granted because roadworks were in abeyance. On 23 September he wrote to Arago, beginning "I think I have found the explanation and the law of colored fringes which one notices in the shadows of bodies illuminated by a luminous point." In the same paragraph, however, Fresnel implicitly acknowledged doubt about the novelty of his work: noting that he would need to incur some expense in order to improve his measurements, he wanted to know "whether this is not useless, and whether the law of diffraction has not already been established by sufficiently exact experiments." He explained that he had not yet had a chance to acquire the items on his reading lists, with the apparent exception of "Young's book", which he could not understand without his brother's help.  Not surprisingly, he had retraced many of Young's steps. 
In a memoir sent to the institute on 15 October 1815, Fresnel mapped the external and internal fringes in the shadow of a wire. He noticed, like Young before him, that the internal fringes disappeared when the light from one side was blocked, and concluded that "the vibrations of two rays that cross each other under a very small angle can contradict each other…" But, whereas Young took the disappearance of the internal fringes as confirmation of the principle of interference, Fresnel reported that it was the internal fringes that first drew his attention to the principle. To explain the diffraction pattern, Fresnel constructed the internal fringes by considering the intersections of circular wavefronts emitted from the two edges of the obstruction, and the external fringes by considering the intersections between direct waves and waves reflected off the nearer edge. For the external fringes, to obtain tolerable agreement with observation, he had to suppose that the reflected wave was inverted; and he noted that the predicted paths of the fringes were hyperbolic. In the part of the memoir that most clearly surpassed Young, Fresnel explained the ordinary laws of reflection and refraction in terms of interference, noting that if two parallel rays were reflected or refracted at other than the prescribed angle, they would no longer have the same phase in a common perpendicular plane, and every vibration would be cancelled by a nearby vibration. He noted that his explanation was valid provided that the surface irregularities were much smaller than the wavelength. On 10 November, Fresnel sent a supplementary note dealing with Newton's rings and with gratings, including, for the first time, transmission gratings – although in that case the interfering rays were still assumed to be "inflected", and the experimental verification was inadequate because it used only two threads. As Fresnel was not a member of the institute, the fate of his memoir depended heavily on the report of a single member. The reporter for Fresnel's memoir turned out to be Arago (with Poinsot as the other reviewer). On 8 November, Arago wrote to Fresnel: Fresnel was troubled, wanting to know more precisely where he had collided with Young. Concerning the curved paths of the "colored bands", Young had noted the hyperbolic paths of the fringes in the two-source interference pattern, corresponding roughly to Fresnel's internal fringes, and had described the hyperbolic fringes that appear on the screen within rectangular shadows. He had not mentioned the curved paths of the external fringes of a shadow; but, as he later explained, that was because Newton had already done so. Newton evidently thought the fringes were caustics. Thus Arago erred in his belief that the curved paths of the fringes were fundamentally incompatible with the corpuscular theory. Arago's letter went on to request more data on the external fringes. Fresnel complied, until he exhausted his leave and was assigned to Rennes in the département of Ille-et-Vilaine. At this point Arago interceded with Gaspard de Prony, head of the École des Ponts, who wrote to Louis-Mathieu Molé, head of the Corps des Ponts, suggesting that the progress of science and the prestige of the Corps would be enhanced if Fresnel could come to Paris for a time. He arrived in March 1816, and his leave was subsequently extended through the middle of the year. 
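Fresnel's interference argument for the ordinary law of reflection, described above, can be loosely illustrated in modern terms by summing the contributions of many secondary sources along the reflecting surface, each phased by an incident plane wave: the sum is appreciable only in the specular direction. The sketch below is an illustration of that reasoning rather than Fresnel's own calculation; the wavelength, mirror length, source spacing and angle are arbitrary choices.

```python
import numpy as np

# Loose modern illustration of Fresnel's interference argument for the law of
# reflection: secondary sources along a mirror, phased by an incident plane wave,
# add up appreciably only in the specular direction. All parameters are arbitrary.
wavelength = 1.0
k = 2 * np.pi / wavelength
x = np.linspace(0.0, 200.0, 4001)            # secondary-source positions on the mirror
theta_in = np.radians(30.0)                  # angle of incidence, measured from the normal

theta_out = np.radians(np.linspace(0.0, 89.0, 891))
# Net phase of each source's contribution in a given outgoing direction:
phase = k * np.outer(np.sin(theta_out) - np.sin(theta_in), x)
amplitude = np.abs(np.exp(1j * phase).sum(axis=1)) / len(x)

print(round(np.degrees(theta_out[np.argmax(amplitude)]), 1))  # 30.0: the specular direction
```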
Meanwhile, in an experiment reported on 26 February 1816, Arago verified Fresnel's prediction that the internal fringes were shifted if the rays on one side of the obstacle passed through a thin glass lamina. Fresnel correctly attributed this phenomenon to the lower wave velocity in the glass. Arago later used a similar argument to explain the colors in the scintillation of stars. Fresnel's updated memoir was eventually published in the March 1816 issue of Annales de Chimie et de Physique, of which Arago had recently become co-editor. That issue did not actually appear until May. In March, Fresnel already had competition: Biot read a memoir on diffraction by himself and his student Claude Pouillet, containing copious data and arguing that the regularity of diffraction fringes, like the regularity of Newton's rings, must be linked to Newton's "fits". But the new link was not rigorous, and Pouillet himself would become a distinguished early adopter of the wave theory. "Efficacious ray", double-mirror experiment (1816) On 24 May 1816, Fresnel wrote to Young (in French), acknowledging how little of his own memoir was new. But in a "supplement" signed on 14 July and read the next day, Fresnel noted that the internal fringes were more accurately predicted by supposing that the two interfering rays came from some distance outside the edges of the obstacle. To explain this, he divided the incident wavefront at the obstacle into what we now call Fresnel zones, such that the secondary waves from each zone were spread over half a cycle when they arrived at the observation point. The zones on one side of the obstacle largely canceled out in pairs, except the first zone, which was represented by an "efficacious ray". This approach worked for the internal fringes, but the superposition of the efficacious ray and the direct ray did not work for the external fringes. The contribution from the "efficacious ray" was thought to be only partly canceled, for reasons involving the dynamics of the medium: where the wavefront was continuous, symmetry forbade oblique vibrations; but near the obstacle that truncated the wavefront, the asymmetry allowed some sideways vibration towards the geometric shadow. This argument showed that Fresnel had not (yet) fully accepted Huygens's principle, which would have permitted oblique radiation from all portions of the front. In the same supplement, Fresnel described his well-known double mirror, comprising two flat mirrors joined at an angle of slightly less than 180°, with which he produced a two-slit interference pattern from two virtual images of the same slit. A conventional double-slit experiment required a preliminary single slit to ensure that the light falling on the double slit was coherent (synchronized). In Fresnel's version, the preliminary single slit was retained, and the double slit was replaced by the double mirror – which bore no physical resemblance to the double slit and yet performed the same function. This result (which had been announced by Arago in the March issue of the Annales) made it hard to believe that the two-slit pattern had anything to do with corpuscles being deflected as they passed near the edges of the slits. But 1816 was the "Year Without a Summer": crops failed; hungry farming families lined the streets of Rennes; the central government organized "charity workhouses" for the needy; and in October, Fresnel was sent back to Ille-et-Vilaine to supervise charity workers in addition to his regular road crew. 
According to Arago, Fresnel's letters from December 1816 reveal his consequent anxiety. To Arago he complained of being "tormented by the worries of surveillance, and the need to reprimand…" And to Mérimée he wrote: "I find nothing more tiresome than having to manage other men, and I admit that I have no idea what I'm doing." Prize memoir (1818) and sequel On 17 March 1817, the Académie des Sciences announced that diffraction would be the topic for the biennial physics Grand Prix to be awarded in 1819. The deadline for entries was set at 1 August 1818 to allow time for replication of experiments. Although the wording of the problem referred to rays and inflection and did not invite wave-based solutions, Arago and Ampère encouraged Fresnel to enter. In the fall of 1817, Fresnel, supported by de Prony, obtained a leave of absence from the new head of the Corps des Ponts, Louis Becquey, and returned to Paris. He resumed his engineering duties in the spring of 1818; but from then on he was based in Paris, first on the Canal de l'Ourcq, and then (from May 1819) with the cadastre of the pavements. On 15 January 1818, in a different context (revisited below), Fresnel showed that the addition of sinusoidal functions of the same frequency but different phases is analogous to the addition of forces with different directions. His method was similar to the phasor representation, except that the "forces" were plane vectors rather than complex numbers; they could be added, and multiplied by scalars, but not (yet) multiplied and divided by each other. The explanation was algebraic rather than geometric. Knowledge of this method was assumed in a preliminary note on diffraction, dated 19 April 1818 and deposited on 20 April, in which Fresnel outlined the elementary theory of diffraction as found in modern textbooks. He restated Huygens's principle in combination with the superposition principle, saying that the vibration at each point on a wavefront is the sum of the vibrations that would be sent to it at that moment by all the elements of the wavefront in any of its previous positions, all elements acting separately. For a wavefront partly obstructed in a previous position, the summation was to be carried out over the unobstructed portion. In directions other than the normal to the primary wavefront, the secondary waves were weakened due to obliquity, but weakened much more by destructive interference, so that the effect of obliquity alone could be ignored. For diffraction by a straight edge, the intensity as a function of distance from the geometric shadow could then be expressed with sufficient accuracy in terms of what are now called the normalized Fresnel integrals: $C(v) = \int_0^v \cos\bigl(\tfrac{\pi}{2}t^2\bigr)\,dt$ and $S(v) = \int_0^v \sin\bigl(\tfrac{\pi}{2}t^2\bigr)\,dt$. The same note included a table of the integrals, for an upper limit ranging from 0 to 5.1 in steps of 0.1, computed with a mean error of 0.0003, plus a smaller table of maxima and minima of the resulting intensity. In his final "Memoir on the diffraction of light", deposited on 29 July and bearing the Latin epigraph "Natura simplex et fecunda" ("Nature simple and fertile"), Fresnel slightly expanded the two tables without changing the existing figures, except for a correction to the first minimum of intensity. For completeness, he repeated his solution to "the problem of interference", whereby sinusoidal functions are added like vectors. 
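Fresnel's rule for adding sinusoids is, in modern language, phasor addition. The minimal sketch below (with arbitrary amplitudes and phases) confirms that the pointwise sum of two equal-frequency sinusoids coincides with the single sinusoid obtained by adding them as plane vectors, here represented by complex numbers.

```python
import numpy as np

# Fresnel's 1818 rule in modern phasor form: sinusoids of the same frequency but
# different phases add like plane vectors (here, complex numbers).
# Amplitudes and phases are arbitrary illustrative values.
A1, p1 = 1.0, 0.3
A2, p2 = 0.6, 2.1

t = np.linspace(0.0, 4 * np.pi, 1000)
direct_sum = A1 * np.cos(t + p1) + A2 * np.cos(t + p2)

phasor = A1 * np.exp(1j * p1) + A2 * np.exp(1j * p2)      # "vector" addition
R, P = abs(phasor), np.angle(phasor)
phasor_sum = R * np.cos(t + P)

print(np.max(np.abs(direct_sum - phasor_sum)))            # ~1e-16: identical sinusoids
```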
He acknowledged the directionality of the secondary sources and the variation in their distances from the observation point, chiefly to explain why these things make negligible difference in the context, provided of course that the secondary sources do not radiate in the retrograde direction. Then, applying his theory of interference to the secondary waves, he expressed the intensity of light diffracted by a single straight edge (half-plane) in terms of integrals which involved the dimensions of the problem, but which could be converted to the normalized forms above. With reference to the integrals, he explained the calculation of the maxima and minima of the intensity (external fringes), and noted that the calculated intensity falls very rapidly as one moves into the geometric shadow. The last result, as Olivier Darrigol says, "amounts to a proof of the rectilinear propagation of light in the wave theory, indeed the first proof that a modern physicist would still accept." For the experimental testing of his calculations, Fresnel used red light with a wavelength of 638 nm, which he deduced from the diffraction pattern in the simple case in which light incident on a single slit was focused by a cylindrical lens. For a variety of distances from the source to the obstacle and from the obstacle to the field point, he compared the calculated and observed positions of the fringes for diffraction by a half-plane, a slit, and a narrow strip – concentrating on the minima, which were visually sharper than the maxima. For the slit and the strip, he could not use the previously computed table of maxima and minima; for each combination of dimensions, the intensity had to be expressed in terms of sums or differences of Fresnel integrals and calculated from the table of integrals, and the extrema had to be calculated anew. The agreement between calculation and measurement was better than 1.5% in almost every case. Near the end of the memoir, Fresnel summed up the difference between Huygens's use of secondary waves and his own: whereas Huygens says there is light only where the secondary waves exactly agree, Fresnel says there is complete darkness only where the secondary waves exactly cancel out. The judging committee comprised Laplace, Biot, and Poisson (all corpuscularists), Gay-Lussac (uncommitted), and Arago, who eventually wrote the committee's report. Although entries in the competition were supposed to be anonymous to the judges, Fresnel's must have been recognizable by the content. There was only one other entry, of which neither the manuscript nor any record of the author has survived. That entry (identified as "no. 1") was mentioned only in the last paragraph of the judges' report, noting that the author had shown ignorance of the relevant earlier works of Young and Fresnel, used insufficiently precise methods of observation, overlooked known phenomena, and made obvious errors. In the words of John Worrall, "The competition facing Fresnel could hardly have been less stiff." We may infer that the committee had only two options: award the prize to Fresnel ("no. 2"), or withhold it. The committee deliberated into the new year. Then Poisson, exploiting a case in which Fresnel's theory gave easy integrals, predicted that if a circular obstacle were illuminated by a point-source, there should be (according to the theory) a bright spot in the center of the shadow, illuminated as brightly as the exterior. This seems to have been intended as a reductio ad absurdum. 
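In modern notation, the straight-edge result described above can be evaluated directly from the normalized Fresnel integrals. The sketch below is a textbook-style illustration rather than Fresnel's own tabulation: it assumes SciPy's fresnel routine and the standard half-plane formula for the relative intensity, and recovers two familiar features, namely one quarter of the unobstructed intensity at the edge of the geometric shadow and a brightest external fringe about 1.37 times the unobstructed intensity.

```python
import numpy as np
from scipy.special import fresnel

# Straight-edge (half-plane) diffraction evaluated from the normalized Fresnel
# integrals, using the standard textbook relative intensity
#     I/I0 = 0.5 * ((C(v) + 0.5)**2 + (S(v) + 0.5)**2),
# where v is the normalized distance from the geometric shadow edge (v > 0 on
# the illuminated side). The grid below is an arbitrary choice.
v = np.linspace(-2.0, 5.0, 1401)
S, C = fresnel(v)                          # SciPy returns (S, C)
I = 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)

print(round(I[np.argmin(np.abs(v))], 2))   # 0.25: a quarter of the unobstructed intensity
print(round(I.max(), 2))                   # ~1.37: the brightest external fringe
```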
Arago, undeterred, assembled an experiment with an obstacle 2 mm in diameter – and there, in the center of the shadow, was Poisson's spot. The unanimous report of the committee, read at the meeting of the Académie on 15 March 1819, awarded the prize to "the memoir marked no. 2, and bearing as epigraph: Natura simplex et fecunda." At the same meeting, after the judgment was delivered, the president of the Académie opened a sealed note accompanying the memoir, revealing the author as Fresnel. The award was announced at the public meeting of the Académie a week later, on 22 March. Arago's verification of Poisson's counter-intuitive prediction passed into folklore as if it had decided the prize. That view, however, is not supported by the judges' report, which gave the matter only two sentences in the penultimate paragraph. Neither did Fresnel's triumph immediately convert Laplace, Biot, and Poisson to the wave theory, for at least four reasons. First, although the professionalization of science in France had established common standards, it was one thing to acknowledge a piece of research as meeting those standards, and another thing to regard it as conclusive. Second, it was possible to interpret Fresnel's integrals as rules for combining rays. Arago even encouraged that interpretation, presumably in order to minimize resistance to Fresnel's ideas. Even Biot began teaching the Huygens-Fresnel principle without committing himself to a wave basis. Third, Fresnel's theory did not adequately explain the mechanism of generation of secondary waves or why they had any significant angular spread; this issue particularly bothered Poisson. Fourth, the question that most exercised optical physicists at that time was not diffraction, but polarization – on which Fresnel had been working, but was yet to make his critical breakthrough. Polarization Background: Emissionism and selectionism An emission theory of light was one that regarded the propagation of light as the transport of some kind of matter. While the corpuscular theory was obviously an emission theory, the converse did not follow: in principle, one could be an emissionist without being a corpuscularist. This was convenient because, beyond the ordinary laws of reflection and refraction, emissionists never managed to make testable quantitative predictions from a theory of forces acting on corpuscles of light. But they did make quantitative predictions from the premises that rays were countable objects, which were conserved in their interactions with matter (except absorbent media), and which had particular orientations with respect to their directions of propagation. According to this framework, polarization and the related phenomena of double refraction and partial reflection involved altering the orientations of the rays and/or selecting them according to orientation, and the state of polarization of a beam (a bundle of rays) was a question of how many rays were in what orientations: in a fully polarized beam, the orientations were all the same. This approach, which Jed Buchwald has called selectionism, was pioneered by Malus and diligently pursued by Biot. Fresnel, in contrast, decided to introduce polarization into interference experiments. Interference of polarized light, chromatic polarization (1816–21) In July or August 1816, Fresnel discovered that when a birefringent crystal produced two images of a single slit, he could not obtain the usual two-slit interference pattern, even if he compensated for the different propagation times. 
A more general experiment, suggested by Arago, found that if the two beams of a double-slit device were separately polarized, the interference pattern appeared and disappeared as the polarization of one beam was rotated, giving full interference for parallel polarizations, but no interference for perpendicular polarizations. These experiments, among others, were eventually reported in a brief memoir published in 1819 and later translated into English. In a memoir drafted on 30 August 1816 and revised on 6 October, Fresnel reported an experiment in which he placed two matching thin laminae in a double-slit apparatus – one over each slit, with their optic axes perpendicular – and obtained two interference patterns offset in opposite directions, with perpendicular polarizations. This, in combination with the previous findings, meant that each lamina split the incident light into perpendicularly polarized components with different velocities – just like a normal (thick) birefringent crystal, and contrary to Biot's "mobile polarization" hypothesis. Accordingly, in the same memoir, Fresnel offered his first attempt at a wave theory of chromatic polarization. When polarized light passed through a crystal lamina, it was split into ordinary and extraordinary waves (with intensities described by Malus's law), and these were perpendicularly polarized and therefore did not interfere, so that no colors were produced (yet). But if they then passed through an analyzer (second polarizer), their polarizations were brought into alignment (with intensities again modified according to Malus's law), and they would interfere. This explanation, by itself, predicts that if the analyzer is rotated 90°, the ordinary and extraordinary waves simply switch roles, so that if the analyzer takes the form of a calcite crystal, the two images of the lamina should be of the same hue (this issue is revisited below). But in fact, as Arago and Biot had found, they are of complementary colors. To correct the prediction, Fresnel proposed a phase-inversion rule whereby one of the constituent waves of one of the two images suffered an additional 180° phase shift on its way through the lamina. This inversion was a weakness in the theory relative to Biot's, as Fresnel acknowledged, although the rule specified which of the two images had the inverted wave. Moreover, Fresnel could deal only with special cases, because he had not yet solved the problem of superposing sinusoidal functions with arbitrary phase differences due to propagation at different velocities through the lamina. He solved that problem in a "supplement" signed on 15 January 1818 (mentioned above). In the same document, he accommodated Malus's law by proposing an underlying law: that if polarized light is incident on a birefringent crystal with its optic axis at an angle θ to the "plane of polarization", the ordinary and extraordinary vibrations (as functions of time) are scaled by the factors cos θ and sin θ, respectively. Although modern readers easily interpret these factors in terms of perpendicular components of a transverse oscillation, Fresnel did not (yet) explain them that way. Hence he still needed the phase-inversion rule. He applied all these principles to a case of chromatic polarization not covered by Biot's formulae, involving two successive laminae with axes separated by 45°, and obtained predictions that disagreed with Biot's experiments (except in special cases) but agreed with his own. 
Fresnel applied the same principles to the standard case of chromatic polarization, in which one birefringent lamina was sliced parallel to its axis and placed between a polarizer and an analyzer. If the analyzer took the form of a thick calcite crystal with its axis in the plane of polarization, Fresnel predicted that the intensities of the ordinary and extraordinary images of the lamina were respectively proportional to
$$I_o \propto \cos^2 i\,\cos^2(i-s) + \sin^2 i\,\sin^2(i-s) + \tfrac{1}{2}\sin 2i\,\sin 2(i-s)\,\cos\phi$$
$$I_e \propto \cos^2 i\,\sin^2(i-s) + \sin^2 i\,\cos^2(i-s) - \tfrac{1}{2}\sin 2i\,\sin 2(i-s)\,\cos\phi\,,$$
where $i$ is the angle from the initial plane of polarization to the optic axis of the lamina, $s$ is the angle from the initial plane of polarization to the plane of polarization of the final ordinary image, and $\phi$ is the phase lag of the extraordinary wave relative to the ordinary wave due to the difference in propagation times through the lamina. The terms in $\phi$ are the frequency-dependent terms and explain why the lamina must be thin in order to produce discernible colors: if the lamina is too thick, $\phi$ will pass through too many cycles as the frequency varies through the visible range, and the eye (which divides the visible spectrum into only three bands) will not be able to resolve the cycles. From these equations it is easily verified that, for unit incident intensity, $I_o + I_e = 1$ for all $\phi$, so that the colors are complementary. Without the phase-inversion rule, there would be a plus sign in front of the last term in the second equation, so that the $\phi$-dependent term would be the same in both equations, implying (incorrectly) that the colors were of the same hue. These equations were included in an undated note that Fresnel gave to Biot, to which Biot added a few lines of his own. If we substitute $U = \cos^2\tfrac{\phi}{2}$ and $A = \sin^2\tfrac{\phi}{2}$, then Fresnel's formulae can be rewritten as
$$I_o \propto U\cos^2 s + A\cos^2(2i-s)$$
$$I_e \propto U\sin^2 s + A\sin^2(2i-s)\,,$$
which are none other than Biot's empirical formulae of 1812, except that Biot interpreted $U$ and $A$ as the "unaffected" and "affected" selections of the rays incident on the lamina. If Biot's substitutions were accurate, they would imply that his experimental results were more fully explained by Fresnel's theory than by his own. Arago delayed reporting on Fresnel's works on chromatic polarization until June 1821, when he used them in a broad attack on Biot's theory. In his written response, Biot protested that Arago's attack went beyond the proper scope of a report on the nominated works of Fresnel. But Biot also claimed that the substitutions for $U$ and $A$, and therefore Fresnel's expressions for $I_o$ and $I_e$, were empirically wrong because when Fresnel's intensities of spectral colors were mixed according to Newton's rules, the squared cosine and sine functions varied too smoothly to account for the observed sequence of colors. That claim drew a written reply from Fresnel, who disputed whether the colors changed as abruptly as Biot claimed, and whether the human eye could judge color with sufficient objectivity for the purpose. On the latter question, Fresnel pointed out that different observers may give different names to the same color. Furthermore, he said, a single observer can only compare colors side by side; and even if they are judged to be the same, the identity is of sensation, not necessarily of composition. Fresnel's oldest and strongest point – that thin crystals were subject to the same laws as thick ones and did not need or allow a separate theory – Biot left unanswered. Arago and Fresnel were seen to have won the debate. Moreover, by this time Fresnel had a new, simpler explanation of his equations on chromatic polarization. 
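The algebra above is easy to check numerically. The sketch below uses the expressions as reconstructed here (taking the incident intensity as unity) with randomly chosen test angles, and verifies both that the two intensities are complementary and that the substitution $U = \cos^2\tfrac{\phi}{2}$, $A = \sin^2\tfrac{\phi}{2}$ reproduces Biot's form.

```python
import numpy as np

# Numerical check of the chromatic-polarization expressions as reconstructed above
# (unit incident intensity): the two images are complementary for every phase lag,
# and the substitution U = cos^2(phi/2), A = sin^2(phi/2) reproduces Biot's form.
# The test angles are arbitrary random values.
rng = np.random.default_rng(0)
i, s, phi = rng.uniform(0.0, 2 * np.pi, size=(3, 1000))

Io = (np.cos(i)**2 * np.cos(i - s)**2 + np.sin(i)**2 * np.sin(i - s)**2
      + 0.5 * np.sin(2 * i) * np.sin(2 * (i - s)) * np.cos(phi))
Ie = (np.cos(i)**2 * np.sin(i - s)**2 + np.sin(i)**2 * np.cos(i - s)**2
      - 0.5 * np.sin(2 * i) * np.sin(2 * (i - s)) * np.cos(phi))

U, A = np.cos(phi / 2)**2, np.sin(phi / 2)**2
Io_biot = U * np.cos(s)**2 + A * np.cos(2 * i - s)**2
Ie_biot = U * np.sin(s)**2 + A * np.sin(2 * i - s)**2

print(np.allclose(Io + Ie, 1.0))                            # True: complementary colors
print(np.allclose(Io, Io_biot), np.allclose(Ie, Ie_biot))   # True True: Biot's 1812 form
```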
Breakthrough: Pure transverse waves (1821) In the draft memoir of 30 August 1816, Fresnel mentioned two hypotheses – one of which he attributed to Ampère – by which the non-interference of orthogonally-polarized beams could be explained if polarized light waves were partly transverse. But Fresnel could not develop either of these ideas into a comprehensive theory. As early as September 1816, according to his later account, he realized that the non-interference of orthogonally-polarized beams, together with the phase-inversion rule in chromatic polarization, would be most easily explained if the waves were purely transverse, and Ampère "had the same thought" on the phase-inversion rule. But that would raise a new difficulty: as natural light seemed to be unpolarized and its waves were therefore presumed to be longitudinal, one would need to explain how the longitudinal component of vibration disappeared on polarization, and why it did not reappear when polarized light was reflected or refracted obliquely by a glass plate. Independently, on 12 January 1817, Young wrote to Arago (in English) noting that a transverse vibration would constitute a polarization, and that if two longitudinal waves crossed at a significant angle, they could not cancel without leaving a residual transverse vibration. Young repeated this idea in an article published in a supplement to the Encyclopædia Britannica in February 1818, in which he added that Malus's law would be explained if polarization consisted in a transverse motion. Thus Fresnel, by his own testimony, may not have been the first person to suspect that light waves could have a transverse component, or that polarized waves were exclusively transverse. And it was Young, not Fresnel, who first published the idea that polarization depends on the orientation of a transverse vibration. But these incomplete theories had not reconciled the nature of polarization with the apparent existence of unpolarized light; that achievement was to be Fresnel's alone. In a note that Buchwald dates in the summer of 1818, Fresnel entertained the idea that unpolarized waves could have vibrations of the same energy and obliquity, with their orientations distributed uniformly about the wave-normal, and that the degree of polarization was the degree of non-uniformity in the distribution. Two pages later he noted, apparently for the first time in writing, that his phase-inversion rule and the non-interference of orthogonally-polarized beams would be easily explained if the vibrations of fully polarized waves were "perpendicular to the normal to the wave"—that is, purely transverse. But if he could account for lack of polarization by averaging out the transverse component, he did not also need to assume a longitudinal component. It was enough to suppose that light waves are purely transverse, hence always polarized in the sense of having a particular transverse orientation, and that the "unpolarized" state of natural or "direct" light is due to rapid and random variations in that orientation, in which case two coherent portions of "unpolarized" light will still interfere because their orientations will be synchronized. It is not known exactly when Fresnel made this last step, because there is no relevant documentation from 1820 or early 1821 (perhaps because he was too busy working on lighthouse-lens prototypes; see below). But he first published the idea in a paper on "Calcul des teintes…" ("calculation of the tints…"), serialized in Arago's Annales for May, June, and July 1821. 
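Fresnel's picture of unpolarized light lends itself to a simple numerical caricature: a purely transverse vibration whose orientation changes rapidly and at random. In the toy model below (all parameters are illustrative assumptions, not drawn from Fresnel's work), two coherent copies of such light, sharing the same random orientations, interfere fully, while two independent unpolarized beams show no fringes.

```python
import numpy as np

# Toy model of Fresnel's "direct" (unpolarized) light: a purely transverse vibration
# whose orientation varies rapidly and at random. Two coherent copies (same random
# orientations, fixed phase difference) interfere fully; two independent unpolarized
# beams do not. All parameters are illustrative.
rng = np.random.default_rng(1)
n = 100_000

def random_transverse(n):
    a = rng.uniform(0.0, 2 * np.pi, n)          # random orientation of each wave group
    return np.column_stack([np.cos(a), np.sin(a)])

beam = random_transverse(n)

def mean_intensity(delta_phase, synchronized):
    other = beam if synchronized else random_transverse(n)
    total = beam + other * np.exp(1j * delta_phase)      # superpose the two beams
    return np.mean(np.sum(np.abs(total) ** 2, axis=1))   # time-averaged intensity

for d in (0.0, np.pi):
    print(round(mean_intensity(d, True), 1), round(mean_intensity(d, False), 1))
# coherent copies: 4.0 then 0.0 (full fringes); independent beams: ~2.0 either way
```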
In the first installment, Fresnel described "direct" (unpolarized) light as "the rapid succession of systems of waves polarized in all directions", and gave what is essentially the modern explanation of chromatic polarization, albeit in terms of the analogy between polarization and the resolution of forces in a plane, mentioning transverse waves only in a footnote. The introduction of transverse waves into the main argument was delayed to the second installment, in which he revealed the suspicion that he and Ampère had harbored since 1816, and the difficulty it raised. He continued: According to this new view, he wrote, "the act of polarization consists not in creating these transverse movements, but in decomposing them into two fixed perpendicular directions and in separating the two components". While selectionists could insist on interpreting Fresnel's diffraction integrals in terms of discrete, countable rays, they could not do the same with his theory of polarization. For a selectionist, the state of polarization of a beam concerned the distribution of orientations over the population of rays, and that distribution was presumed to be static. For Fresnel, the state of polarization of a beam concerned the variation of a displacement over time. That displacement might be constrained but was not static, and rays were geometric constructions, not countable objects. The conceptual gap between the wave theory and selectionism had become unbridgeable. The other difficulty posed by pure transverse waves, of course, was the apparent implication that the aether was an elastic solid, except that, unlike other elastic solids, it was incapable of transmitting longitudinal waves. The wave theory was cheap on assumptions, but its latest assumption was expensive on credulity. If that assumption was to be widely entertained, its explanatory power would need to be impressive. Partial reflection (1821) In the second installment of "Calcul des teintes" (June 1821), Fresnel supposed, by analogy with sound waves, that the density of the aether in a refractive medium was inversely proportional to the square of the wave velocity, and therefore directly proportional to the square of the refractive index. For reflection and refraction at the surface between two isotropic media of different indices, Fresnel decomposed the transverse vibrations into two perpendicular components, now known as the s and p components, which are parallel to the surface and the plane of incidence, respectively; in other words, the s and p components are respectively perpendicular and parallel to the plane of incidence. For the s component, Fresnel supposed that the interaction between the two media was analogous to an elastic collision, and obtained a formula for what we now call the reflectivity: the ratio of the reflected intensity to the incident intensity. The predicted reflectivity was non-zero at all angles. The third installment (July 1821) was a short "postscript" in which Fresnel announced that he had found, by a "mechanical solution", a formula for the reflectivity of the p component, which predicted that the reflectivity was zero at the Brewster angle. So polarization by reflection had been accounted for – but with the proviso that the direction of vibration in Fresnel's model was perpendicular to the plane of polarization as defined by Malus. (On the ensuing controversy, see Plane of polarization.) 
The technology of the time did not allow the s and p reflectivities to be measured accurately enough to test Fresnel's formulae at arbitrary angles of incidence. But the formulae could be rewritten in terms of what we now call the reflection coefficient: the signed ratio of the reflected amplitude to the incident amplitude. Then, if the plane of polarization of the incident ray was at 45° to the plane of incidence, the tangent of the corresponding angle for the reflected ray was obtainable from the ratio of the two reflection coefficients, and this angle could be measured. Fresnel had measured it for a range of angles of incidence, for glass and water, and the agreement between the calculated and measured angles was better than 1.5° in all cases. Fresnel gave details of the "mechanical solution" in a memoir read to the Académie des Sciences on 7 January 1823. Conservation of energy was combined with continuity of the tangential vibration at the interface. The resulting formulae for the reflection coefficients and reflectivities became known as the Fresnel equations. The reflection coefficients for the s and p polarizations are most succinctly expressed as $r_s = -\sin(i-r)/\sin(i+r)$ and $r_p = \tan(i-r)/\tan(i+r)$, where $i$ and $r$ are the angles of incidence and refraction; these equations are known respectively as Fresnel's sine law and Fresnel's tangent law. By allowing the coefficients to be complex, Fresnel even accounted for the different phase shifts of the s and p components due to total internal reflection. This success inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index. The same technique is applicable to non-metallic opaque media. With these generalizations, the Fresnel equations can predict the appearance of a wide variety of objects under illumination – for example, in computer graphics. Circular and elliptical polarization, optical rotation (1822) In a memoir dated 9 December 1822, Fresnel coined the terms linear polarization (French: polarisation rectiligne) for the simple case in which the perpendicular components of vibration are in phase or 180° out of phase, circular polarization for the case in which they are of equal magnitude and a quarter-cycle (±90°) out of phase, and elliptical polarization for other cases in which the two components have a fixed amplitude ratio and a fixed phase difference. He then explained how optical rotation could be understood as a species of birefringence. Linearly-polarized light could be resolved into two circularly-polarized components rotating in opposite directions. If these components propagated at slightly different speeds, the phase difference between them – and therefore the direction of their linearly-polarized resultant – would vary continuously with distance. These concepts called for a redefinition of the distinction between polarized and unpolarized light. Before Fresnel, it was thought that polarization could vary in direction, and in degree (e.g., due to variation in the angle of reflection off a transparent body), and that it could be a function of color (chromatic polarization), but not that it could vary in kind. Hence it was thought that the degree of polarization was the degree to which the light could be suppressed by an analyzer with the appropriate orientation.
Light that had been converted from linear to elliptical or circular polarization (e.g., by passage through a crystal lamina, or by total internal reflection) was described as partly or fully "depolarized" because of its behavior in an analyzer. After Fresnel, the defining feature of polarized light was that the perpendicular components of vibration had a fixed ratio of amplitudes and a fixed difference in phase. By that definition, elliptically or circularly polarized light is fully polarized although it cannot be fully suppressed by an analyzer alone. The conceptual gap between the wave theory and selectionism had widened again. Total internal reflection (1817–23) By 1817 it had been discovered by Brewster, but not adequately reported, that plane-polarized light was partly depolarized by total internal reflection if initially polarized at an acute angle to the plane of incidence. Fresnel rediscovered this effect and investigated it by including total internal reflection in a chromatic-polarization experiment. With the aid of his first theory of chromatic polarization, he found that the apparently depolarized light was a mixture of components polarized parallel and perpendicular to the plane of incidence, and that the total reflection introduced a phase difference between them. Choosing an appropriate angle of incidence (not yet exactly specified) gave a phase difference of 1/8 of a cycle (45°). Two such reflections from the "parallel faces" of "two coupled prisms" gave a phase difference of 1/4 of a cycle (90°). These findings were contained in a memoir submitted to the Académie on 10 November 1817 and read a fortnight later. An undated marginal note indicates that the two coupled prisms were later replaced by a single "parallelepiped in glass"—now known as a Fresnel rhomb. This was the memoir whose "supplement", dated January 1818, contained the method of superposing sinusoidal functions and the restatement of Malus's law in terms of amplitudes. In the same supplement, Fresnel reported his discovery that optical rotation could be emulated by passing the polarized light through a Fresnel rhomb (still in the form of "coupled prisms"), followed by an ordinary birefringent lamina sliced parallel to its axis, with the axis at 45° to the plane of reflection of the Fresnel rhomb, followed by a second Fresnel rhomb at 90° to the first. In a further memoir read on 30 March, Fresnel reported that if polarized light was fully "depolarized" by a Fresnel rhomb – now described as a parallelepiped – its properties were not further modified by a subsequent passage through an optically rotating medium or device. The connection between optical rotation and birefringence was further explained in 1822, in the memoir on elliptical and circular polarization. This was followed by the memoir on reflection, read in January 1823, in which Fresnel quantified the phase shifts in total internal reflection, and thence calculated the precise angle at which a Fresnel rhomb should be cut in order to convert linear polarization to circular polarization. For a refractive index of 1.51, there were two solutions: about 48.6° and 54.6°. Double refraction Background: Uniaxial and biaxial crystals; Biot's laws When light passes through a slice of calcite cut perpendicular to its optic axis, the difference between the propagation times of the ordinary and extraordinary waves has a second-order dependence on the angle of incidence. 
If the slice is observed in a highly convergent cone of light, that dependence becomes significant, so that a chromatic-polarization experiment will show a pattern of concentric rings. But most minerals, when observed in this manner, show a more complicated pattern of rings involving two foci and a lemniscate curve, as if they had two optic axes. The two classes of minerals naturally became known as uniaxal and biaxal—or, in later literature, uniaxial and biaxial. In 1813, Brewster observed the simple concentric pattern in "beryl, emerald, ruby &c." The same pattern was later observed in calcite by Wollaston, Biot, and Seebeck. Biot, assuming that the concentric pattern was the general case, tried to calculate the colors with his theory of chromatic polarization, and succeeded better for some minerals than for others. In 1818, Brewster belatedly explained why: seven of the twelve minerals employed by Biot had the lemniscate pattern, which Brewster had observed as early as 1812; and the minerals with the more complicated rings also had a more complicated law of refraction. In a uniform crystal, according to Huygens's theory, the secondary wavefront that expands from the origin in unit time is the ray-velocity surface—that is, the surface whose "distance" from the origin in any direction is the ray velocity in that direction. In calcite, this surface is two-sheeted, consisting of a sphere (for the ordinary wave) and an oblate spheroid (for the extraordinary wave) touching each other at opposite points of a common axis—touching at the north and south poles, if we may use a geographic analogy. But according to Malus's corpuscular theory of double refraction, the ray velocity was proportional to the reciprocal of that given by Huygens's theory, in which case the velocity law was of the form $v^2 = v_o^2 + (v_e^2 - v_o^2)\sin^2\theta$, where $v_o$ and $v_e$ were the ordinary and extraordinary ray velocities according to the corpuscular theory, and $\theta$ was the angle between the ray and the optic axis. By Malus's definition, the plane of polarization of a ray was the plane of the ray and the optic axis if the ray was ordinary, or the perpendicular plane (containing the ray) if the ray was extraordinary. In Fresnel's model, the direction of vibration was normal to the plane of polarization. Hence, for the sphere (the ordinary wave), the vibration was along the lines of latitude (continuing the geographic analogy); and for the spheroid (the extraordinary wave), the vibration was along the lines of longitude. On 29 March 1819, Biot presented a memoir in which he proposed simple generalizations of Malus's rules for a crystal with two axes, and reported that both generalizations seemed to be confirmed by experiment. For the velocity law, the squared sine was replaced by the product of the sines of the angles from the ray to the two axes (Biot's sine law). And for the polarization of the ordinary ray, the plane of the ray and the axis was replaced by the plane bisecting the dihedral angle between the two planes each of which contained the ray and one axis (Biot's dihedral law). Biot's laws meant that a biaxial crystal with axes at a small angle, cleaved in the plane of those axes, behaved nearly like a uniaxial crystal at near-normal incidence; this was fortunate because gypsum, which had been used in chromatic-polarization experiments, is biaxial. First memoir and supplements (1821–22) Until Fresnel turned his attention to biaxial birefringence, it was assumed that one of the two refractions was ordinary, even in biaxial crystals.
But, in a memoir submitted on 19 November 1821, Fresnel reported two experiments on topaz showing that neither refraction was ordinary in the sense of satisfying Snell's law; that is, neither ray was the product of spherical secondary waves. The same memoir contained Fresnel's first attempt at the biaxial velocity law. For calcite, if we interchange the equatorial and polar radii of Huygens's oblate spheroid while preserving the polar direction, we obtain a prolate spheroid touching the sphere at the equator. A plane through the center/origin cuts this prolate spheroid in an ellipse whose major and minor semi-axes give the magnitudes of the extraordinary and ordinary ray velocities in the direction normal to the plane, and (said Fresnel) the directions of their respective vibrations. The direction of the optic axis is the normal to the plane for which the ellipse of intersection reduces to a circle. So, for the biaxial case, Fresnel simply replaced the prolate spheroid with a triaxial ellipsoid, which was to be sectioned by a plane in the same way. In general there would be two planes passing through the center of the ellipsoid and cutting it in a circle, and the normals to these planes would give two optic axes. From the geometry, Fresnel deduced Biot's sine law (with the ray velocities replaced by their reciprocals). The ellipsoid indeed gave the correct ray velocities (although the initial experimental verification was only approximate). But it did not give the correct directions of vibration, for the biaxial case or even for the uniaxial case, because the vibrations in Fresnel's model were tangential to the wavefront—which, for an extraordinary ray, is not generally normal to the ray. This error (which is small if, as in most cases, the birefringence is weak) was corrected in an "extract" that Fresnel read to the Académie a week later, on 26 November. Starting with Huygens's spheroid, Fresnel obtained a 4th-degree surface which, when sectioned by a plane as above, would yield the wave-normal velocities for a wavefront in that plane, together with their vibration directions. For the biaxial case, he generalized the equation to obtain a surface with three unequal principal dimensions; this he subsequently called the "surface of elasticity". But he retained the earlier ellipsoid as an approximation, from which he deduced Biot's dihedral law. Fresnel's initial derivation of the surface of elasticity had been purely geometric, and not deductively rigorous. His first attempt at a mechanical derivation, contained in a "supplement" dated 13 January 1822, assumed that (i) there were three mutually perpendicular directions in which a displacement produced a reaction in the same direction, (ii) the reaction was otherwise a linear function of the displacement, and (iii) the radius of the surface in any direction was the square root of the component, in that direction, of the reaction to a unit displacement in that direction. The last assumption recognized the requirement that if a wave was to maintain a fixed direction of propagation and a fixed direction of vibration, the reaction must not be outside the plane of those two directions. In the same supplement, Fresnel considered how he might find, for the biaxial case, the secondary wavefront that expands from the origin in unit time—that is, the surface that reduces to Huygens's sphere and spheroid in the uniaxial case. 
He noted that this "wave surface" (surface de l'onde) is tangential to all possible plane wavefronts that could have crossed the origin one unit of time ago, and he listed the mathematical conditions that it must satisfy. But he doubted the feasibility of deriving the surface from those conditions. In a "second supplement", Fresnel eventually exploited two related facts: (i) the "wave surface" was also the ray-velocity surface, which could be obtained by sectioning the ellipsoid that he had initially mistaken for the surface of elasticity, and (ii) the "wave surface" intersected each plane of symmetry of the ellipsoid in two curves: a circle and an ellipse. Thus he found that the "wave surface" is described by the 4th-degree equation $r^2(a^2x^2 + b^2y^2 + c^2z^2) - a^2(b^2 + c^2)x^2 - b^2(c^2 + a^2)y^2 - c^2(a^2 + b^2)z^2 + a^2b^2c^2 = 0$, where $r^2 = x^2 + y^2 + z^2$, and $a$, $b$, and $c$ are the propagation speeds in directions normal to the coordinate axes for vibrations along the axes (the ray and wave-normal speeds being the same in those special cases). Later commentators put the equation in the more compact and memorable form $a^2x^2/(r^2 - a^2) + b^2y^2/(r^2 - b^2) + c^2z^2/(r^2 - c^2) = 0$. Earlier in the "second supplement", Fresnel modeled the medium as an array of point-masses and found that the force-displacement relation was described by a symmetric matrix, confirming the existence of three mutually perpendicular axes on which the displacement produced a parallel force. Later in the document, he noted that in a biaxial crystal, unlike a uniaxial crystal, the directions in which there is only one wave-normal velocity are not the same as those in which there is only one ray velocity. Nowadays we refer to the former directions as the optic axes or binormal axes, and the latter as the ray axes or biradial axes. Fresnel's "second supplement" was signed on 31 March 1822 and submitted the next day – less than a year after the publication of his pure-transverse-wave hypothesis, and just less than a year after the demonstration of his prototype eight-panel lighthouse lens. Second memoir (1822–26) Having presented the pieces of his theory in roughly the order of discovery, Fresnel needed to rearrange the material so as to emphasize the mechanical foundations; and he still needed a rigorous treatment of Biot's dihedral law. He attended to these matters in his "second memoir" on double refraction, published in the Recueils of the Académie des Sciences for 1824; this was not actually printed until late 1827, a few months after his death. In this work, having established the three perpendicular axes on which a displacement produces a parallel reaction, and thence constructed the surface of elasticity, he showed that Biot's dihedral law is exact provided that the binormals are taken as the optic axes, and the wave-normal direction as the direction of propagation. As early as 1822, Fresnel discussed his perpendicular axes with Cauchy. Acknowledging Fresnel's influence, Cauchy went on to develop the first rigorous theory of elasticity of non-isotropic solids (1827), hence the first rigorous theory of transverse waves therein (1830) — which he promptly tried to apply to optics. The ensuing difficulties drove a long competitive effort to find an accurate mechanical model of the aether. Fresnel's own model was not dynamically rigorous; for example, it deduced the reaction to a shear strain by considering the displacement of one particle while all others were fixed, and it assumed that the stiffness determined the wave velocity as in a stretched string, whatever the direction of the wave-normal.
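In modern notation, the wave-normal speeds that Fresnel's constructions deliver are the roots of what is now usually called Fresnel's equation of wave normals, and they are easy to compute. The sketch below (Python) solves that quadratic in the squared speed for an assumed set of principal speeds; the numbers, the function name, and the use of a generic polynomial solver are illustrative only, not a reconstruction of Fresnel's own procedure.
```python
import numpy as np

def wave_normal_speeds(n_dir, a, b, c):
    """Two phase (wave-normal) speeds for a plane wave with normal n_dir in a
    biaxial medium whose principal speeds for vibrations along x, y, z are a, b, c.
    Fresnel's equation of wave normals, cleared of fractions, is quadratic in v^2."""
    nx2, ny2, nz2 = np.asarray(n_dir, dtype=float) ** 2 / np.dot(n_dir, n_dir)
    A, B, C = a * a, b * b, c * c
    p2 = nx2 + ny2 + nz2                                      # = 1 for a unit normal
    p1 = -(nx2 * (B + C) + ny2 * (A + C) + nz2 * (A + B))
    p0 = nx2 * B * C + ny2 * A * C + nz2 * A * B
    u = np.roots([p2, p1, p0])                                # the two values of v^2
    return np.sqrt(np.sort(u.real))

a, b, c = 1.00, 0.95, 0.90        # assumed principal speeds, a > b > c
# Along a coordinate axis the two speeds are simply the other two principal speeds:
print(wave_normal_speeds([1, 0, 0], a, b, c))                 # ~ [0.90, 0.95]
# Along an optic axis (binormal), lying in the x-z plane, the two speeds coincide:
beta = np.arcsin(np.sqrt((a**2 - b**2) / (a**2 - c**2)))      # angle from the z axis
print(wave_normal_speeds([np.sin(beta), 0.0, np.cos(beta)], a, b, c))   # ~ [0.95, 0.95]
```
The coincidence of the two roots along a binormal is the degeneracy that Hamilton later exploited in predicting conical refraction.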
Imperfect as its mechanical foundations were, the model was enough to enable the wave theory to do what selectionist theory could not: generate testable formulae covering a comprehensive range of optical phenomena, from mechanical assumptions. Photoelasticity, multiple-prism experiments (1822) In 1815, Brewster reported that colors appear when a slice of isotropic material, placed between crossed polarizers, is mechanically stressed. Brewster himself immediately and correctly attributed this phenomenon to stress-induced birefringence — now known as photoelasticity. In a memoir read in September 1822, Fresnel announced that he had verified Brewster's diagnosis more directly, by compressing a combination of glass prisms so severely that one could actually see a double image through it. In his experiment, Fresnel lined up seven 45°-90°-45° prisms, short side to short side, with their 90° angles pointing in alternating directions. Two half-prisms were added at the ends to make the whole assembly rectangular. The prisms were separated by thin films of turpentine (térébenthine) to suppress internal reflections, allowing a clear line of sight along the row. When the four prisms with similar orientations were compressed in a vise across the line of sight, an object viewed through the assembly produced two images with perpendicular polarizations, with an apparent spacing of 1.5 mm at one metre. At the end of that memoir, Fresnel predicted that if the compressed prisms were replaced by (unstressed) monocrystalline quartz prisms with matching directions of optical rotation, and with their optic axes aligned along the row, an object seen by looking along the common optic axis would give two images, which would seem unpolarized when viewed through an analyzer but, when viewed through a Fresnel rhomb, would be polarized at ±45° to the plane of reflection of the rhomb (indicating that they were initially circularly polarized in opposite directions). This would show directly that optical rotation is a form of birefringence. In the memoir of December 1822, in which he introduced the term circular polarization, he reported that he had confirmed this prediction using only one 14°-152°-14° prism and two glass half-prisms. But he obtained a wider separation of the images by replacing the glass half-prisms with quartz half-prisms whose rotation was opposite to that of the 14°-152°-14° prism. He added in passing that one could further increase the separation by increasing the number of prisms. Reception For the supplement to Riffault's translation of Thomson's System of Chemistry, Fresnel was chosen to contribute the article on light. The resulting 137-page essay, titled De la Lumière (On Light), was apparently finished in June 1821 and published by February 1822. With sections covering the nature of light, diffraction, thin-film interference, reflection and refraction, double refraction and polarization, chromatic polarization, and modification of polarization by reflection, it made a comprehensive case for the wave theory to a readership that was not restricted to physicists. To examine Fresnel's first memoir and supplements on double refraction, the Académie des Sciences appointed Ampère, Arago, Fourier, and Poisson. Their report, of which Arago was clearly the main author, was delivered at the meeting of 19 August 1822. According to Émile Verdet, as translated by Ivor Grattan-Guinness, Laplace then rose to proclaim the exceptional importance of the work that had just been read and to congratulate its author. Whether Laplace was announcing his conversion to the wave theory – at the age of 73 – is uncertain. Grattan-Guinness entertained the idea.
Buchwald, noting that Arago failed to explain that the "ellipsoid of elasticity" did not give the correct planes of polarization, suggests that Laplace may have merely regarded Fresnel's theory as a successful generalization of Malus's ray-velocity law, embracing Biot's laws. In the following year, Poisson, who did not sign Arago's report, disputed the possibility of transverse waves in the aether. Starting from assumed equations of motion of a fluid medium, he noted that they did not give the correct results for partial reflection and double refraction – as if that were Fresnel's problem rather than his own – and that the predicted waves, even if they were initially transverse, became more longitudinal as they propagated. In reply Fresnel noted, inter alia, that the equations in which Poisson put so much faith did not even predict viscosity. The implication was clear: given that the behavior of light had not been satisfactorily explained except by transverse waves, it was not the responsibility of the wave-theorists to abandon transverse waves in deference to pre-conceived notions about the aether; rather, it was the responsibility of the aether modelers to produce a model that accommodated transverse waves. According to Robert H. Silliman, Poisson eventually accepted the wave theory shortly before his death in 1840. Among the French, Poisson's reluctance was an exception. According to Eugene Frankel, "in Paris no debate on the issue seems to have taken place after 1825. Indeed, almost the entire generation of physicists and mathematicians who came to maturity in the 1820s – Pouillet, Savart, Lamé, Navier, Liouville, Cauchy – seem to have adopted the theory immediately." Fresnel's other prominent French opponent, Biot, appeared to take a neutral position in 1830, and eventually accepted the wave theory – possibly by 1846 and certainly by 1858. In 1826, the British astronomer John Herschel, who was working on a book-length article on light for the Encyclopædia Metropolitana, addressed three questions to Fresnel concerning double refraction, partial reflection, and their relation to polarization. The resulting article, titled simply "Light", was highly sympathetic to the wave theory, although not entirely free of selectionist language. It was circulating privately by 1828 and was published in 1830. Meanwhile, Young's translation of Fresnel's De la Lumière was published in installments from 1827 to 1829. George Biddell Airy, the former Lucasian Professor at Cambridge and future Astronomer Royal, unreservedly accepted the wave theory by 1831. In 1834, he famously calculated the diffraction pattern of a circular aperture from the wave theory, thereby explaining the limited angular resolution of a perfect telescope. By the end of the 1830s, the only prominent British physicist who held out against the wave theory was Brewster, whose objections included the difficulty of explaining photochemical effects and (in his opinion) dispersion. A German translation of De la Lumière was published in installments in 1825 and 1828. The wave theory was adopted by Fraunhofer in the early 1820s and by Franz Ernst Neumann in the 1830s, and then began to find favor in German textbooks. The economy of assumptions under the wave theory was emphasized by William Whewell in his History of the Inductive Sciences, first published in 1837.
In the corpuscular system, "every new class of facts requires a new supposition," whereas in the wave system, a hypothesis devised in order to explain one phenomenon is then found to explain or predict others. In the corpuscular system there is "no unexpected success, no happy coincidence, no convergence of principles from remote quarters"; but in the wave system, "all tends to unity and simplicity." Hence, in 1850, when Foucault and Fizeau found by experiment that light travels more slowly in water than in air, in accordance with the wave explanation of refraction and contrary to the corpuscular explanation, the result came as no surprise. Lighthouses and the Fresnel lens Fresnel was not the first person to focus a lighthouse beam using a lens. That distinction apparently belongs to the London glass-cutter Thomas Rogers, whose first lenses, 53cm in diameter and 14cm thick at the center, were installed at the Old Lower Lighthouse at Portland Bill in 1789. Further samples were installed in about half a dozen other locations by 1804. But much of the light was wasted by absorption in the glass. Nor was Fresnel the first to suggest replacing a convex lens with a series of concentric annular prisms, to reduce weight and absorption. In 1748, Count Buffon proposed grinding such prisms as steps in a single piece of glass. In 1790, the Marquis de Condorcet suggested that it would be easier to make the annular sections separately and assemble them on a frame; but even that was impractical at the time. These designs were intended not for lighthouses, but for burning glasses. Brewster, however, proposed a system similar to Condorcet's in 1811, and by 1820 was advocating its use in British lighthouses. Meanwhile, on 21 June 1819, Fresnel was "temporarily" seconded by the Commission des Phares (Commission of Lighthouses) on the recommendation of Arago (a member of the Commission since 1813), to review possible improvements in lighthouse illumination. The commission had been established by Napoleon in 1811 and placed under the Corps des Ponts – Fresnel's employer. By the end of August 1819, unaware of the Buffon-Condorcet-Brewster proposal, Fresnel made his first presentation to the commission, recommending what he called lentilles à échelons (lenses by steps) to replace the reflectors then in use, which reflected only about half of the incident light. One of the assembled commissioners, Jacques Charles, recalled Buffon's suggestion, leaving Fresnel embarrassed for having again "broken through an open door". But, whereas Buffon's version was biconvex and in one piece, Fresnel's was plano-convex and made of multiple prisms for easier construction. With an official budget of 500 francs, Fresnel approached three manufacturers. The third, François Soleil, produced the prototype. Finished in March 1820, it had a square lens panel 55cm on a side, containing 97 polygonal (not annular) prisms – and so impressed the Commission that Fresnel was asked for a full eight-panel version. This model, completed a year later in spite of insufficient funding, had panels 76cm square. In a public spectacle on the evening of 13 April 1821, it was demonstrated by comparison with the most recent reflectors, which it suddenly rendered obsolete. Fresnel's next lens was a rotating apparatus with eight "bull's-eye" panels, made in annular arcs by Saint-Gobain, giving eight rotating beams – to be seen by mariners as a periodic flash. 
Above and behind each main panel was a smaller, sloping bull's-eye panel of trapezoidal outline with trapezoidal elements. This refracted the light to a sloping plane mirror, which then reflected it horizontally, 7 degrees ahead of the main beam, increasing the duration of the flash. Below the main panels were 128 small mirrors arranged in four rings, stacked like the slats of a louver or Venetian blind. Each ring, shaped as a frustum of a cone, reflected the light to the horizon, giving a fainter steady light between the flashes. The official test, conducted on the unfinished Arc de Triomphe on 20 August 1822, was witnessed by the commission – and by Louis XVIII and his entourage – from 32 km away. The apparatus was stored at Bordeaux for the winter, and then reassembled at Cordouan Lighthouse under Fresnel's supervision. On 25 July 1823, the world's first lighthouse Fresnel lens was lit. Soon afterwards, Fresnel started coughing up blood. In May 1824, Fresnel was promoted to secretary of the Commission des Phares, becoming the first member of that body to draw a salary, albeit in the concurrent role of Engineer-in-Chief. He had also been an examiner (not a teacher) at the École Polytechnique since 1821; but poor health, long hours during the examination season, and anxiety about judging others induced him to resign that post in late 1824, to save his energy for his lighthouse work. In the same year he designed the first fixed lens – for spreading light evenly around the horizon while minimizing waste above or below. Ideally the curved refracting surfaces would be segments of toroids about a common vertical axis, so that the dioptric panel would look like a cylindrical drum. If this was supplemented by reflecting (catoptric) rings above and below the refracting (dioptric) parts, the entire apparatus would look like a beehive. The second Fresnel lens to enter service was indeed a fixed lens, of third order, installed at Dunkirk by 1 February 1825. However, due to the difficulty of fabricating large toroidal prisms, this apparatus had a 16-sided polygonal plan. In 1825, Fresnel extended his fixed-lens design by adding a rotating array outside the fixed array. Each panel of the rotating array was to refract part of the fixed light from a horizontal fan into a narrow beam. Also in 1825, Fresnel unveiled the Carte des Phares (Lighthouse Map), calling for a system of 51 lighthouses plus smaller harbor lights, in a hierarchy of lens sizes (called orders, the first order being the largest), with different characteristics to facilitate recognition: a constant light (from a fixed lens), one flash per minute (from a rotating lens with eight panels), and two per minute (sixteen panels). In late 1825, to reduce the loss of light in the reflecting elements, Fresnel proposed to replace each mirror with a catadioptric prism, through which the light would travel by refraction through the first surface, then total internal reflection off the second surface, then refraction through the third surface. The result was the lighthouse lens as we now know it. In 1826 he assembled a small model for use on the Canal Saint-Martin, but he did not live to see a full-sized version. The first fixed lens with toroidal prisms was a first-order apparatus designed by the Scottish engineer Alan Stevenson under the guidance of Léonor Fresnel, and fabricated by Isaac Cookson & Co. from French glass; it entered service at the Isle of May in 1836.
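The advantage of total internal reflection over a metallic mirror is easy to see in outline: for glass with an index of about 1.5 (an assumed round value), the critical angle is roughly 42°, so a ray meeting the internal reflecting face at 45° is returned with no transmitted loss. A minimal sketch (Python):
```python
import numpy as np

n_glass = 1.5                                    # assumed refractive index of the glass
theta_c = np.degrees(np.arcsin(1.0 / n_glass))   # critical angle at a glass-air surface
print(f"critical angle ~ {theta_c:.1f} deg")     # ~ 41.8 deg

# A ray meeting the reflecting face of a catadioptric prism at 45 deg exceeds the
# critical angle, so it is totally reflected, with none of the absorption loss of
# a metallic mirror.
print(45.0 > theta_c)                            # True
```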
The first large catadioptric lenses were fixed third-order lenses made in 1842 for the lighthouses at Gravelines and Île Vierge. The first fully catadioptric first-order lens, installed at Ailly in 1852, gave eight rotating beams assisted by eight catadioptric panels at the top (to lengthen the flashes), plus a fixed light from below. The first fully catadioptric lens with purely revolving beams – also of first order – was installed at Saint-Clément-des-Baleines in 1854, and marked the completion of Augustin Fresnel's original Carte des Phares. Production of one-piece stepped dioptric lenses—roughly as envisaged by Buffon—became practical in 1852, when John L. Gilliland of the Brooklyn Flint-Glass Company patented a method of making such lenses from press-molded glass. By the 1950s, the substitution of plastic for glass made it economic to use fine-stepped Fresnel lenses as condensers in overhead projectors. Still finer steps can be found in low-cost plastic "sheet" magnifiers. Honors Fresnel was elected to the Société Philomathique de Paris in April 1819, and in 1822 became one of the editors of the Société's Bulletin des Sciences. As early as May 1817, at Arago's suggestion, Fresnel applied for membership of the Académie des Sciences, but received only one vote. The successful candidate on that occasion was Joseph Fourier. In November 1822, Fourier's elevation to Permanent Secretary of the Académie created a vacancy in the physics section, which was filled in February 1823 by Pierre Louis Dulong, with 36 votes to Fresnel's 20. But in May 1823, after another vacancy was left by the death of Jacques Charles, Fresnel's election was unanimous. In 1824, Fresnel was made a chevalier de la Légion d'honneur (Knight of the Legion of Honour). Meanwhile, in Britain, the wave theory was yet to take hold; Fresnel wrote to Thomas Young in November 1824, saying in part that all the compliments he had received from Arago, Laplace, and Biot had never given him as much pleasure as the discovery of a theoretic truth or the confirmation of a calculation by experiment. But "the praise of English scholars" soon followed. On 9 June 1825, Fresnel was made a Foreign Member of the Royal Society of London. In 1827 he was awarded the society's Rumford Medal for the year 1824, "For his Development of the Undulatory Theory as applied to the Phenomena of Polarized Light, and for his various important discoveries in Physical Optics." A monument to Fresnel at his birthplace was dedicated on 14 September 1884 with a speech by the Permanent Secretary of the Académie des Sciences. "FRESNEL" is among the 72 names embossed on the Eiffel Tower (on the south-east side, fourth from the left). In the 19th century, as every lighthouse in France acquired a Fresnel lens, every one acquired a bust of Fresnel, seemingly watching over the coastline that he had made safer. The lunar features Promontorium Fresnel and Rimae Fresnel were later named after him. Decline and death Fresnel's health, which had always been poor, deteriorated in the winter of 1822–1823, increasing the urgency of his original research, and (in part) preventing him from contributing an article on polarization and double refraction for the Encyclopædia Britannica. The memoirs on circular and elliptical polarization and optical rotation, and on the detailed derivation of the Fresnel equations and their application to total internal reflection, date from this period. In the spring he recovered enough, in his own view, to supervise the lens installation at Cordouan. Soon afterwards, it became clear that his condition was tuberculosis. In 1824, he was advised that if he wanted to live longer, he needed to scale back his activities.
Perceiving his lighthouse work to be his most important duty, he resigned as an examiner at the École Polytechnique, and closed his scientific notebooks. His last note to the Académie, read on 13 June 1825, described the first radiometer and attributed the observed repulsive force to a temperature difference. Although his fundamental research ceased, his advocacy did not; as late as August or September 1826, he found the time to answer Herschel's queries on the wave theory. It was Herschel who recommended Fresnel for the Royal Society's Rumford Medal. Fresnel's cough worsened in the winter of 1826–1827, leaving him too ill to return to Mathieu in the spring. The Académie meeting of 30 April 1827 was the last that he attended. In early June he was carried to Ville-d'Avray, 12 km west of Paris. There his mother joined him. On 6 July, Arago arrived to deliver the Rumford Medal. Sensing Arago's distress, Fresnel whispered that "the most beautiful crown means little, when it is laid on the grave of a friend." Fresnel did not have the strength to reply to the Royal Society. He died eight days later, on Bastille Day. He is buried at Père Lachaise Cemetery, Paris. The inscription on his headstone is partly eroded away; the legible part says, when translated, "To the memory of Augustin Jean Fresnel, member of the Institute of France". Posthumous publications Fresnel's "second memoir" on double refraction was not printed until late 1827, a few months after his death. Until then, the best published source on his work on double refraction was an extract of that memoir, printed in 1822. His final treatment of partial reflection and total internal reflection, read to the Académie in January 1823, was thought to be lost until it was rediscovered among the papers of the deceased Joseph Fourier (1768–1830), and was printed in 1831. Until then, it was known chiefly through an extract printed in 1823 and 1825. The memoir introducing the parallelepiped form of the Fresnel rhomb, read in March 1818, was mislaid until 1846, and then attracted such interest that it was soon republished in English. Most of Fresnel's writings on polarized light before 1821 – including his first theory of chromatic polarization (submitted 7 October 1816) and the crucial "supplement" of January 1818 – were not published in full until his Oeuvres complètes ("complete works") began to appear in 1866. The "supplement" of July 1816, proposing the "efficacious ray" and reporting the famous double-mirror experiment, met the same fate, as did the "first memoir" on double refraction. Publication of Fresnel's collected works was itself delayed by the deaths of successive editors. The task was initially entrusted to Félix Savary, who died in 1841. It was restarted twenty years later by the Ministry of Public Instruction. Of the three editors eventually named in the Oeuvres, Sénarmont died in 1862, Verdet in 1866, and Léonor Fresnel in 1869, by which time only two of the three volumes had appeared. At the beginning of vol. 3 (1870), the completion of the project is described in a long footnote by "J. Lissajous." Not included in the Oeuvres are two short notes by Fresnel on magnetism, which were discovered among Ampère's manuscripts. In response to Ørsted's discovery of electromagnetism in 1820, Ampère initially supposed that the field of a permanent magnet was due to a macroscopic circulating current. Fresnel suggested instead that there was a microscopic current circulating around each particle of the magnet.
In his first note, he argued that microscopic currents, unlike macroscopic currents, would explain why a hollow cylindrical magnet does not lose its magnetism when cut longitudinally. In his second note, dated 5 July 1821, he further argued that a macroscopic current had the counterfactual implication that a permanent magnet should be hot, whereas microscopic currents circulating around the molecules might avoid the heating mechanism. He was not to know that the fundamental units of permanent magnetism are even smaller than molecules. The two notes, together with Ampère's acknowledgment, were eventually published in 1885. Lost works Fresnel's essay Rêveries of 1814 has not survived. While its content would have been interesting to historians, its quality may perhaps be gauged from the fact that Fresnel himself never referred to it in his maturity. More disturbing is the fate of the late article "Sur les Différents Systèmes relatifs à la Théorie de la Lumière" ("On the Different Systems relating to the Theory of Light"), which Fresnel wrote for the newly launched English journal European Review. This work seems to have been similar in scope to the essay De la Lumière of 1821/22, except that Fresnel's views on double refraction, circular and elliptical polarization, optical rotation, and total internal reflection had developed since then. The manuscript was received by the publisher's agent in Paris in early September 1824, and promptly forwarded to London. But the journal failed before Fresnel's contribution could be published. Fresnel tried unsuccessfully to recover the manuscript. The editors of his collected works were also unable to find it, and admitted that it was probably lost. Unfinished business Aether drag and aether density In 1810, Arago found experimentally that the degree of refraction of starlight does not depend on the direction of the earth's motion relative to the line of sight. In 1818, Fresnel showed that this result could be explained by the wave theory, on the hypothesis that if an object with refractive index $n$ moved at velocity $v$ relative to the external aether (taken as stationary), then the velocity of light inside the object gained the additional component $v(1 - 1/n^2)$. He supported that hypothesis by supposing that if the density of the external aether was taken as unity, the density of the internal aether was $n^2$, of which the excess, namely $n^2 - 1$, was dragged along at velocity $v$, whence the average velocity of the internal aether was $v(1 - 1/n^2)$. The factor in parentheses, which Fresnel originally expressed in terms of wavelengths, became known as the Fresnel drag coefficient. In his analysis of double refraction, Fresnel supposed that the different refractive indices in different directions within the same medium were due to a directional variation in elasticity, not density (because the concept of mass per unit volume is not directional). But in his treatment of partial reflection, he supposed that the different refractive indices of different media were due to different aether densities, not different elasticities. The latter decision, although puzzling in the context of double refraction, was consistent with the earlier treatment of aether drag. In 1846, George Gabriel Stokes pointed out that there was no need to divide the aether inside a moving object into two portions; all of it could be considered as moving at a common velocity.
Then, if the aether was conserved while its density changed in proportion to $n^2$, the resulting velocity of the aether inside the object was equal to Fresnel's additional velocity component. Dispersion The analogy between light waves and transverse waves in elastic solids does not predict dispersion — that is, the frequency-dependence of the speed of propagation, which enables prisms to produce spectra and causes lenses to suffer from chromatic aberration. Fresnel, in De la Lumière and in the second supplement to his first memoir on double refraction, suggested that dispersion could be accounted for if the particles of the medium exerted forces on each other over distances that were significant fractions of a wavelength. Later, more than once, Fresnel referred to the demonstration of this result as being contained in a note appended to his "second memoir" on double refraction. But no such note appeared in print, and the relevant manuscripts found after his death showed only that, around 1824, he was comparing refractive indices (measured by Fraunhofer) with a theoretical formula, the meaning of which was not fully explained. In the 1830s, Fresnel's suggestion was taken up by Cauchy, Powell, and Kelland, and it was indeed found to be tolerably consistent with the variation of refractive indices with wavelength over the visible spectrum, for a variety of transparent media. These investigations were enough to show that the wave theory was at least compatible with dispersion. However, if the model of dispersion was to be accurate over a wider range of frequencies, it needed to be modified so as to take account of resonances within the medium. Conical refraction The analytical complexity of Fresnel's derivation of the ray-velocity surface was an implicit challenge to find a shorter path to the result. This was answered by MacCullagh in 1830, and by William Rowan Hamilton in 1832. Hamilton went further, establishing two properties of the surface that Fresnel, in the short time given to him, had overlooked: (i) at each of the four points where the inner and outer sheets of the surface make contact, the surface has a tangent cone (tangential to both sheets), hence a cone of normals, indicating that a cone of wave-normal directions corresponds to a single ray-velocity vector; and (ii) around each of these points, the outer sheet has a circle of contact with a tangent plane, indicating that a cone of ray directions corresponds to a single wave-normal velocity vector. As Hamilton noted, these properties respectively imply that (i) a narrow beam propagating inside the crystal in the direction of the single ray velocity will, on exiting the crystal through a flat surface, break into a hollow cone (external conical refraction), and (ii) a narrow beam striking a flat surface of the crystal in the appropriate direction (corresponding to that of the single internal wave-normal velocity) will, on entering the crystal, break into a hollow cone (internal conical refraction). Thus a new pair of phenomena, qualitatively different from anything previously observed or suspected, had been predicted by mathematics as consequences of Fresnel's theory. The prompt experimental confirmation of those predictions by Humphrey Lloyd brought Hamilton a prize that had never come to Fresnel: immediate fame. Legacy Within a century of Fresnel's initial stepped-lens proposal, more than 10,000 lights with Fresnel lenses were protecting lives and property around the world.
Concerning the other benefits, the science historian Theresa H. Levitt has remarked: In the history of physical optics, Fresnel's successful revival of the wave theory nominates him as the pivotal figure between Newton, who held that light consisted of corpuscles, and James Clerk Maxwell, who established that light waves are electromagnetic. Whereas Albert Einstein described Maxwell's work as "the most profound and the most fruitful that physics has experienced since the time of Newton," commentators of the era between Fresnel and Maxwell made similarly strong statements about Fresnel: MacCullagh, as early as 1830, wrote that Fresnel's mechanical theory of double refraction "would do honour to the sagacity of Newton". Lloyd, in his Report on the progress and present state of physical optics (1834) for the British Association for the Advancement of Science, surveyed previous knowledge of double refraction and declared:The theory of Fresnel to which I now proceed,— and which not only embraces all the known phenomena, but has even outstripped observation, and predicted consequences which were afterwards fully verified,— will, I am persuaded, be regarded as the finest generalization in physical science which has been made since the discovery of universal gravitation.In 1841, Lloyd published his Lectures on the Wave-theory of Light, in which he described Fresnel's transverse-wave theory as "the noblest fabric which has ever adorned the domain of physical science, Newton's system of the universe alone excepted." William Whewell, in all three editions of his History of the Inductive Sciences (1837, 1847, and 1857), at the end of Book , compared the histories of physical astronomy and physical optics and concluded:It would, perhaps, be too fanciful to attempt to establish a parallelism between the prominent persons who figure in these two histories. If we were to do this, we must consider Huyghens and Hooke as standing in the place of Copernicus, since, like him, they announced the true theory, but left it to a future age to give it development and mechanical confirmation; Malus and Brewster, grouping them together, correspond to Tycho Brahe and Kepler, laborious in accumulating observations, inventive and happy in discovering laws of phenomena; and Young and Fresnel combined, make up the Newton of optical science. What Whewell called the "true theory" has since undergone two major revisions. The first, by Maxwell, specified the physical fields whose variations constitute the waves of light. Without the benefit of this knowledge, Fresnel managed to construct the world's first coherent theory of light, showing in retrospect that his methods are applicable to multiple types of waves. The second revision, initiated by Einstein's explanation of the photoelectric effect, supposed that the energy of light waves was divided into quanta, which were eventually identified with particles called photons. But photons did not exactly correspond to Newton's corpuscles; for example, Newton's explanation of ordinary refraction required the corpuscles to travel faster in media of higher refractive index, which photons do not. Neither did photons displace waves; rather, they led to the paradox of wave–particle duality. Moreover, the phenomena studied by Fresnel, which included nearly all the optical phenomena known at his time, are still most easily explained in terms of the wave nature of light. 
So it was that, as late as 1927, the astronomer Eugène Michel Antoniadi declared Fresnel to be "the dominant figure in optics." See also Explanatory notes References Citations Bibliography D.F.J. Arago (tr. B. Powell), 1857, "Fresnel" (elegy read at the Public Meeting of the Academy of Sciences, 26 July 1830), in D.F.J. Arago (tr.  W.H. Smyth, B. Powell, and R. Grant), Biographies of Distinguished Scientific Men (single-volume edition), London: Longman, Brown, Green, Longmans, & Roberts, 1857, pp. 399–471. (On the translator's identity, see pp. 425n,452n.)  Erratum: In the translator's note on p. 413, a plane tangent to the outer sphere at point t should intersect the refractive surface (assumed flat); then, through that intersection, tangent planes should be drawn to the inner sphere and spheroid (cf. Mach, 1926, p.263). D.F.J. Arago and A. Fresnel, 1819, "Mémoire sur l'action que les rayons de lumière polarisée exercent les uns sur les autres", Annales de Chimie et de Physique, Ser.2, vol. 10, pp. 288–305, March 1819; reprinted in Fresnel, 1866–70, vol. 1, pp. 509–22; translated as "On the action of rays of polarized light upon each other", in Crew, 1900, pp. 145–55. G.-A. Boutry, 1948, "Augustin Fresnel: His time, life and work, 1788–1827", Science Progress, vol. 36, no. 144 (October 1948), pp. 587–604; jstor.org/stable/43413515. J.Z. Buchwald, 1989, The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century, University of Chicago Press, . J.Z. Buchwald, 2013, "Optics in the Nineteenth Century", in J.Z. Buchwald and R. Fox (eds.), The Oxford Handbook of the History of Physics, Oxford, , pp. 445–72. H. Crew (ed.), 1900, The Wave Theory of Light: Memoirs by Huygens, Young and Fresnel, American Book Company. O. Darrigol, 2012, A History of Optics: From Greek Antiquity to the Nineteenth Century, Oxford, . J. Elton, 2009, "A Light to Lighten our Darkness: Lighthouse Optics and the Later Development of Fresnel's Revolutionary Refracting Lens 1780–1900", International Journal for the History of Engineering & Technology, vol. 79, no. 2 (July 2009), pp. 183–244; . E. Frankel, 1974, "The search for a corpuscular theory of double refraction: Malus, Laplace and the competition of 1808", Centaurus, vol. 18, no. 3 (September 1974), pp. 223–245. E. Frankel, 1976, "Corpuscular optics and the wave theory of light: The science and politics of a revolution in physics", Social Studies of Science, vol. 6, no. 2 (May 1976), pp. 141–84; jstor.org/stable/284930. A. Fresnel, 1815a, Letter to Jean François "Léonor" Mérimée, 10 February 1815 (Smithsonian Dibner Library, MSS 546A), printed in G. Magalhães, "Remarks on a new autograph letter from Augustin Fresnel: Light aberration and wave theory", Science in Context, vol. 19, no.2 (June 2006), pp. 295–307, , at p.306 (original French) and p.307 (English translation). A. Fresnel, 1816, "Mémoire sur la diffraction de la lumière" ("Memoir on the diffraction of light"), Annales de Chimie et de Physique, Ser.2, vol. 1, pp. 239–81 (March 1816); reprinted as "Deuxième Mémoire…" ("Second Memoir…") in Fresnel, 1866–70, vol. 1, pp. 89–122.  Not to be confused with the later "prize memoir" (Fresnel, 1818b). A. Fresnel, 1818a, "Mémoire sur les couleurs développées dans les fluides homogènes par la lumière polarisée", read 30 March 1818 (according to Kipnis, 1991, p. 217), published 1846; reprinted in Fresnel, 1866–70, vol. 1, pp. 655–83; translated by E. Ronalds & H. 
Lloyd as "Memoir upon the colours produced in homogeneous fluids by polarized light", in Taylor, 1852, pp. 44–65. (Cited page numbers refer to the translation.) A. Fresnel, 1818b, "Mémoire sur la diffraction de la lumière" ("Memoir on the diffraction of light"), deposited 29 July 1818, "crowned" 15 March 1819, published (with appended notes) in Mémoires de l'Académie Royale des Sciences de l'Institut de France, vol.  (for 1821 & 1822, printed 1826), pp. 339–475; reprinted (with notes) in Fresnel, 1866–70, vol. 1, pp. 247–383; partly translated as "Fresnel's prize memoir on the diffraction of light", in Crew, 1900, pp. 81–144.  Not to be confused with the earlier memoir with the same French title (Fresnel, 1816). A. Fresnel, 1818c, "Lettre de M. Fresnel à M. Arago sur l'influence du mouvement terrestre dans quelques phénomènes d'optique", Annales de Chimie et de Physique, Ser.2, vol. 9, pp. 57–66 & plate after p.111 (Sep. 1818), & p.286–7 (Nov. 1818); reprinted in Fresnel, 1866–70, vol. 2, pp. 627–36; translated as "Letter from Augustin Fresnel to François Arago, on the influence of the movement of the earth on some phenomena of optics" in K.F. Schaffner, Nineteenth-Century Aether Theories, Pergamon, 1972 (), pp. 125–35; also translated (with several errors) by R.R. Traill as "Letter from Augustin Fresnel to François Arago concerning the influence of terrestrial movement on several optical phenomena", General Science Journal, 23 January 2006 (PDF, 8pp.). A. Fresnel, 1821a, "Note sur le calcul des teintes que la polarisation développe dans les lames cristallisées" et seq., Annales de Chimie et de Physique, Ser.2, vol. 17, pp. 102–11 (May 1821), 167–96 (June 1821), 312–15 ("Postscript", July 1821); reprinted (with added section nos.) in Fresnel, 1866–70, vol. 1, pp. 609–48; translated as "On the calculation of the tints that polarization develops in crystalline plates, & postscript", / , 2021. A. Fresnel, 1821b, "Note sur les remarques de M. Biot...", Annales de Chimie et de Physique, Ser.2, vol. 17, pp. 393–403 (August 1821); reprinted (with added section nos.) in Fresnel, 1866–70, vol. 1, pp. 601–608; translated as "Note on the remarks of Mr. Biot relating to colors of thin plates", / , 2021. A. Fresnel, 1821c, Letter to D.F.J.Arago, 21 September 1821, in Fresnel, 1866–70, vol. 2, pp. 257–9; translated as "Letter to Arago on biaxial birefringence", Wikisource, April 2021. A. Fresnel, 1822a, De la Lumière (On Light), in J. Riffault (ed.), Supplément à la traduction française de la cinquième édition du "Système de Chimie" par Th.Thomson, Paris: Chez Méquignon-Marvis, 1822, pp. 1–137,535–9; reprinted in Fresnel, 1866–70, vol. 2, pp. 3–146; translated by T. Young as "Elementary view of the undulatory theory of light", Quarterly Journal of Science, Literature, and Art, vol. 22 (Jan.–Jun.1827), pp. 127–41, 441–54; vol. 23 (Jul.–Dec.1827), pp. 113–35, 431–48; vol. 24 (Jan.–Jun.1828), pp. 198–215; vol. 25 (Jul.–Dec.1828), pp. 168–91, 389–407; vol. 26 (Jan.–Jun.1829), pp. 159–65. A. Fresnel, 1822b, "Mémoire sur un nouveau système d'éclairage des phares", read 29 July 1822; reprinted in Fresnel, 1866–70, vol. 3, pp. 97–126; translated by T. Tag as "Memoir upon a new system of lighthouse illumination", U.S. Lighthouse Society, accessed 26 August 2017; archived 19 August 2016. (Cited page numbers refer to the translation.) A. Fresnel, 1827, "Mémoire sur la double réfraction", Mémoires de l'Académie Royale des Sciences de l'Institut de France, vol.  (for 1824, printed 1827), pp. 
45–176; reprinted as "Second mémoire…" in Fresnel, 1866–70, vol. 2, pp. 479–596; translated by A.W. Hobson as "Memoir on double refraction", in Taylor, 1852, pp. 238–333. (Cited page numbers refer to the translation. For notable errata in the original edition, and consequently in the translation, see Fresnel, 1866–70, vol. 2, p. 596n.) A. Fresnel (ed. H. de Sénarmont, E. Verdet, and L. Fresnel), 1866–70, Oeuvres complètes d'Augustin Fresnel (3 volumes), Paris: Imprimerie Impériale; vol. 1 (1866), vol. 2 (1868), vol. 3 (1870). I. Grattan-Guinness, 1990, Convolutions in French Mathematics, 1800–1840, Basel: Birkhäuser, vol. 2, , chapter 13 (pp. 852–915, "The entry of Fresnel: Physical optics, 1815–1824") and chapter 15 (pp. 968–1045, "The entry of Navier and the triumph of Cauchy: Elasticity theory, 1819–1830"). C. Huygens, 1690, Traité de la Lumière (Leiden: Van der Aa), translated by S.P. Thompson as Treatise on Light, University of Chicago Press, 1912; Project Gutenberg, 2005. (Cited page numbers match the 1912 edition and the Gutenberg HTML edition.) F.A. Jenkins and H.E. White, 1976, Fundamentals of Optics, 4th Ed., New York: McGraw-Hill, . N. Kipnis, 1991, History of the Principle of Interference of Light, Basel: Birkhäuser, , chapters . K.A. Kneller (tr. T.M. Kettle), 1911, Christianity and the Leaders of Modern Science: A contribution to the history of culture in the nineteenth century, Freiburg im Breisgau: B. Herder, pp. 146–9. T.H. Levitt, 2009, The Shadow of Enlightenment: Optical and Political Transparency in France, 1789–1848, Oxford, . T.H. Levitt, 2013, A Short Bright Flash: Augustin Fresnel and the Birth of the Modern Lighthouse, New York: W.W. Norton, . H. Lloyd, 1834, "Report on the progress and present state of physical optics", Report of the Fourth Meeting of the British Association for the Advancement of Science (held at Edinburgh in 1834), London: J. Murray, 1835, pp. 295–413. E. Mach (tr. J.S. Anderson & A.F.A. Young), The Principles of Physical Optics: An Historical and Philosophical Treatment, London: Methuen & Co., 1926. I. Newton, 1730, Opticks: or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light, 4th Ed. (London: William Innys, 1730; Project Gutenberg, 2010); republished with Foreword by A. Einstein and Introduction by E.T. Whittaker (London: George Bell & Sons, 1931); reprinted with additional Preface by I.B. Cohen and Analytical Table of Contents by D.H.D. Roller,  Mineola, NY: Dover, 1952, 1979 (with revised preface), 2012. (Cited page numbers match the Gutenberg HTML edition and the Dover editions.) R.H. Silliman, 1967, Augustin Fresnel (1788–1827) and the Establishment of the Wave Theory of Light (PhD dissertation, ), Princeton University, submitted 1967, accepted 1968; available from ProQuest (missing the first page of the preface). R.H. Silliman, 2008, "Fresnel, Augustin Jean", Complete Dictionary of Scientific Biography, Detroit: Charles Scribner's Sons, vol. 5, pp. 165–71. (The version at encyclopedia.com lacks the diagram and equations.) R. Taylor (ed.), 1852, Scientific Memoirs, selected from the Transactions of Foreign Academies of Science and Learned Societies, and from Foreign Journals (in English), vol. , London: Taylor & Francis. W. Whewell, 1857, History of the Inductive Sciences: From the Earliest to the Present Time, 3rd Ed., London: J.W. Parker & Son, vol. 2, book , chapters . E. T. 
Whittaker, 1910, A History of the Theories of Aether and Electricity: From the age of Descartes to the close of the nineteenth century, London: Longmans, Green, & Co., chapters . J. Worrall, 1989, "Fresnel, Poisson and the white spot: The role of successful predictions in the acceptance of scientific theories", in D. Gooding, T. Pinch, and S. Schaffer (eds.), The Uses of Experiment: Studies in the Natural Sciences, Cambridge University Press, , pp. 135–57. T. Young, 1807, A Course of Lectures on Natural Philosophy and the Mechanical Arts (2 volumes), London: J.Johnson; vol. 1, vol. 2. T. Young (ed. G. Peacock), 1855, Miscellaneous Works of the late Thomas Young, London: J. Murray, vol. 1. Further reading Some English translations of works by Fresnel are included in the above Bibliography. For a more comprehensive list, see "External links" below. The most detailed secondary source on Fresnel in English is apparently Buchwald 1989 —in which Fresnel, although not named in the title, is clearly the central character. On lighthouse lenses, this article heavily cites Levitt 2013, Elton 2009, and Thomas Tag at the U.S. Lighthouse Society (see "External links" below). All three authors deal not only with Fresnel's contributions but also with later innovations that are not mentioned here (see Fresnel lens: History). By comparison with the volume and impact of his scientific and technical writings, biographical information on Fresnel is remarkably scarce. There is no book-length critical biography of him, and anyone who proposes to write one must confront the fact that the letters published in his Oeuvres complètes—contrary to the title—are heavily redacted. In the words of Robert H. Silliman (1967, p. 6n): "By an unhappy judgment of the editors, dictated in part, one suspects, by political expediency, the letters appear in fragmentary form, preserving almost nothing beyond the technical discussions of Fresnel and his correspondents." It is not clear from the secondary sources whether the manuscripts of those letters are still extant (cf. Grattan-Guinness, 1990, p.854n). External links List of English translations of works by Augustin Fresnel at Zenodo. United States Lighthouse Society, especially "Fresnel Lenses".
Augustin-Jean Fresnel
In mathematics, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object. Definition In the context of abstract algebra, a mathematical object is an algebraic structure such as a group, ring, or vector space. An automorphism is simply a bijective homomorphism of an object with itself. (The definition of a homomorphism depends on the type of algebraic structure; see, for example, group homomorphism, ring homomorphism, and linear operator). The identity morphism (identity mapping) is called the trivial automorphism in some contexts. Respectively, other (non-identity) automorphisms are called nontrivial automorphisms. The exact definition of an automorphism depends on the type of "mathematical object" in question and what, precisely, constitutes an "isomorphism" of that object. The most general setting in which these words have meaning is an abstract branch of mathematics called category theory. Category theory deals with abstract objects and morphisms between those objects. In category theory, an automorphism is an endomorphism (i.e., a morphism from an object to itself) which is also an isomorphism (in the categorical sense of the word, meaning there exists a right and left inverse endomorphism). This is a very abstract definition since, in category theory, morphisms are not necessarily functions and objects are not necessarily sets. In most concrete settings, however, the objects will be sets with some additional structure and the morphisms will be functions preserving that structure. Automorphism group If the automorphisms of an object form a set (instead of a proper class), then they form a group under composition of morphisms. This group is called the automorphism group of . Closure Composition of two automorphisms is another automorphism. Associativity It is part of the definition of a category that composition of morphisms is associative. Identity The identity is the identity morphism from an object to itself, which is an automorphism. Inverses By definition every isomorphism has an inverse which is also an isomorphism, and since the inverse is also an endomorphism of the same object it is an automorphism. The automorphism group of an object X in a category C is denoted AutC(X), or simply Aut(X) if the category is clear from context. Examples In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X. In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field. A group automorphism is a group isomorphism from a group to itself. Informally, it is a permutation of the group elements such that the structure remains unchanged. For every group G there is a natural group homomorphism G → Aut(G) whose image is the group Inn(G) of inner automorphisms and whose kernel is the center of G. Thus, if G has trivial center it can be embedded into its own automorphism group. In linear algebra, an endomorphism of a vector space V is a linear operator V → V. 
An automorphism is an invertible linear operator on V. When the vector space is finite-dimensional, the automorphism group of V is the same as the general linear group, GL(V). (The algebraic structure of all endomorphisms of V is itself an algebra over the same base field as V, whose invertible elements are precisely those of GL(V).) A field automorphism is a bijective ring homomorphism from a field to itself. In the cases of the rational numbers (Q) and the real numbers (R) there are no nontrivial field automorphisms. Some subfields of R have nontrivial field automorphisms, which however do not extend to all of R (because they cannot preserve the property of a number having a square root in R). In the case of the complex numbers, C, there is a unique nontrivial automorphism that sends R into R: complex conjugation, but there are infinitely (uncountably) many "wild" automorphisms (assuming the axiom of choice). Field automorphisms are important to the theory of field extensions, in particular Galois extensions. In the case of a Galois extension L/K the subgroup of all automorphisms of L fixing K pointwise is called the Galois group of the extension. The automorphism group of the quaternions (H) as a ring consists of the inner automorphisms, by the Skolem–Noether theorem: the maps of the form x ↦ qxq−1 for a nonzero quaternion q. This group is isomorphic to SO(3), the group of rotations in 3-dimensional space. The automorphism group of the octonions (O) is the exceptional Lie group G2. In graph theory an automorphism of a graph is a permutation of the nodes that preserves edges and non-edges. In particular, if two nodes are joined by an edge, so are their images under the permutation. In geometry, an automorphism may be called a motion of the space. Specialized terminology is also used: In metric geometry an automorphism is a self-isometry. The automorphism group is also called the isometry group. In the category of Riemann surfaces, an automorphism is a biholomorphic map (also called a conformal map), from a surface to itself. For example, the automorphisms of the Riemann sphere are Möbius transformations. An automorphism of a differentiable manifold M is a diffeomorphism from M to itself. The automorphism group is sometimes denoted Diff(M). In topology, morphisms between topological spaces are called continuous maps, and an automorphism of a topological space is a homeomorphism of the space to itself, or self-homeomorphism (see homeomorphism group). In this example it is not sufficient for a morphism to be bijective to be an isomorphism. History One of the earliest group automorphisms (automorphism of a group, not simply a group of automorphisms of points) was given by the Irish mathematician William Rowan Hamilton in 1856, in his icosian calculus, where he discovered an order two automorphism, writing: so that is a new fifth root of unity, connected with the former fifth root by relations of perfect reciprocity. Inner and outer automorphisms In some categories—notably groups, rings, and Lie algebras—it is possible to separate automorphisms into two types, called "inner" and "outer" automorphisms. In the case of groups, the inner automorphisms are the conjugations by the elements of the group itself. For each element a of a group G, conjugation by a is the operation given by g ↦ aga−1 (or a−1ga; usage varies). One can easily check that conjugation by a is a group automorphism. The inner automorphisms form a normal subgroup of Aut(G), denoted by Inn(G). The other automorphisms are called outer automorphisms.
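The fact that conjugation preserves the group structure can be checked directly on a small example. The sketch below is illustrative only: the choice of the symmetric group S3 and all helper names are assumptions made for this example, not notation from the article.

```python
from itertools import permutations

# The symmetric group S3, with each element stored as a tuple p
# meaning the permutation i -> p[i] on {0, 1, 2}.
S3 = list(permutations(range(3)))

def compose(p, q):
    """Return the permutation p∘q (apply q first, then p)."""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    """Return the inverse permutation of p."""
    inv = [0] * 3
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conjugation_by(a):
    """Return the inner automorphism g -> a g a^(-1)."""
    a_inv = inverse(a)
    return lambda g: compose(compose(a, g), a_inv)

# Check that conjugation by every element is a bijective homomorphism,
# i.e. an automorphism of S3.
for a in S3:
    phi = conjugation_by(a)
    assert sorted(phi(g) for g in S3) == sorted(S3)           # bijective
    assert all(phi(compose(g, h)) == compose(phi(g), phi(h))  # structure-preserving
               for g in S3 for h in S3)
print("conjugation by each element of S3 is an automorphism")
```

Collecting these conjugation maps, with composition as the operation, gives Inn(S3); since S3 has trivial center and no outer automorphisms, this is in fact all of Aut(S3).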
The quotient group is usually denoted by Out(G); the non-trivial elements are the cosets that contain the outer automorphisms. The same definition holds in any unital ring or algebra where a is any invertible element. For Lie algebras the definition is slightly different. See also Antiautomorphism Automorphism (in Sudoku puzzles) Characteristic subgroup Endomorphism ring Frobenius automorphism Morphism Order automorphism (in order theory). Relation-preserving automorphism Fractional Fourier transform References External links Automorphism at Encyclopaedia of Mathematics
Automorphism
Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to natural intelligence displayed by animals including humans. Leading AI textbooks define the field as the study of "intelligent agents": any system that perceives its environment and takes actions that maximize its chance of achieving its goals. Some popular accounts use the term "artificial intelligence" to describe machines that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving", however, this definition is rejected by major AI researchers. AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Tesla), automated decision-making and competing at the highest level in strategic game systems (such as chess and Go). As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology. Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success and renewed funding. AI research has tried and discarded many different approaches since its founding, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge and imitating animal behavior. In the first decades of the 21st century, highly mathematical statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia. The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals. To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques—including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other fields. The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it". This raises philosophical arguments about the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction, and philosophy since antiquity. Science fiction and futurology have also suggested that, with its enormous potential and power, AI may become an existential risk to humanity. History Artificial beings with intelligence appeared as storytelling devices in antiquity, and have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence. 
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight that digital computers can simulate any process of formal reasoning is known as the Church–Turing thesis. The Church-Turing thesis, along with concurrent discoveries in neurobiology, information theory and cybernetics, led researchers to consider the possibility of building an electronic brain. The first work that is now generally recognized as AI was McCullouch and Pitts' 1943 formal design for Turing-complete "artificial neurons". When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to step-by-step symbol manipulation, known as Symbolic AI or GOFAI. Approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background. The field of AI research was born at a workshop at Dartmouth College in 1956. The attendees became the founders and leaders of AI research. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field. Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an "AI winter", a period when obtaining funding for AI projects was difficult. In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began. Many researchers began to doubt that the symbolic approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems. Robotics researchers, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move, survive, and learn their environment. 
Interest in neural networks and "connectionism" was revived by Geoffrey Hinton, David Rumelhart and others in the middle of the 1980s. Soft computing tools were developed in the 80s, such as neural networks, fuzzy systems, Grey system theory, evolutionary computation and many tools drawn from statistics or mathematical optimization. AI gradually restored its reputation in the late 1990s and early 21st century by finding specific solutions to specific problems. The narrow focus allowed researchers to produce verifiable results, exploit more mathematical methods, and collaborate with other fields (such as statistics, economics and mathematics). By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence". Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012. According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increased from a "sporadic usage" in 2012 to more than 2,700 projects. He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. In a 2017 survey, one in five companies reported they had "incorporated AI in some offerings or processes". The amount of research into AI (measured by total publications) increased by 50% in the years 2015–2019. Numerous academic researchers became concerned that AI was no longer pursuing the original goal of creating versatile, fully intelligent machines. Much of current research involves statistical AI, which is overwhelmingly used to solve specific problems, even highly successful techniques such as deep learning. This concern has led to the subfield artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s. Goals The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention. Reasoning, problem solving Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics. Many of these algorithms proved to be insufficient for solving large reasoning problems because they experienced a "combinatorial explosion": they became exponentially slower as the problems grew larger. Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments. Knowledge representation Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real world facts. A representation of "what exists" is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. 
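A minimal sketch of this idea is a knowledge base of subject–relation–object triples with a simple inheritance rule; the facts, names and rule below are invented for illustration and are not drawn from any standard ontology.

```python
# A minimal knowledge base of (subject, relation, object) triples.
# The facts and names here are illustrative, not from any standard ontology.
facts = {
    ("Tweety", "is_a", "canary"),
    ("canary", "is_a", "bird"),
    ("bird", "can", "fly"),
    ("bird", "has", "feathers"),
}

def categories_of(kb, entity):
    """All categories reachable from entity via 'is_a' links."""
    found = set()
    frontier = [entity]
    while frontier:
        current = frontier.pop()
        for subj, rel, obj in kb:
            if subj == current and rel == "is_a" and obj not in found:
                found.add(obj)
                frontier.append(obj)
    return found

def ask(kb, entity, relation, value):
    """True if the fact holds for the entity or is inherited from a category."""
    subjects = {entity} | categories_of(kb, entity)
    return any((s, relation, value) in kb for s in subjects)

print(ask(facts, "Tweety", "can", "fly"))       # True, inherited via canary -> bird
print(ask(facts, "Tweety", "has", "feathers"))  # True
print(ask(facts, "Tweety", "can", "swim"))      # False
```

Real knowledge-representation systems use much richer formalisms, such as the description logics mentioned below, but the pattern of stored assertions plus an inference procedure is the same.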
The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge and act as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). A truly intelligent program would also need access to commonsense knowledge; the set of facts that an average person knows. The semantics of an ontology is typically represented in a description logic, such as the Web Ontology Language. AI research has developed tools to represent specific domains, such as: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know);. default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing); as well as other domains. Among the most difficult problems in AI are: the breadth of commonsense knowledge (the number of atomic facts that the average person knows is enormous); and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally). Formal knowledge representations are used in content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery (mining "interesting" and actionable inferences from large databases), and other areas. Planning An intelligent agent that can plan makes a representation of the state of the world, makes predictions about how their actions will change it and makes choices that maximize the utility (or "value") of the available choices. In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions. However, if the agent is not the only actor, then it requires that the agent reason under uncertainty, and continuously re-assess its environment and adapt. Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence. Learning Machine learning (ML), a fundamental concept of AI research since the field's inception, is the study of computer algorithms that improve automatically through experience. Unsupervised learning finds patterns in a stream of input. Supervised learning requires a human to label the input data first, and comes in two main varieties: classification and numerical regression. Classification is used to determine what category something belongs in—the program sees a number of examples of things from several categories and will learn to classify new inputs. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. Both classifiers and regression learners can be viewed as "function approximators" trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, "spam" or "not spam". In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent classifies its responses to form a strategy for operating in its problem space. Transfer learning is when knowledge gained from one problem is applied to a new problem. 
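The idea of classifiers and regression learners as function approximators can be made concrete with a small sketch. The data and the particular choices here (a 1-nearest-neighbor classifier and an ordinary least-squares line fit) are assumptions for illustration, not methods singled out by the article.

```python
# Toy supervised learning: both a classifier and a regression learner are
# "function approximators" fitted to labeled examples. The data are made up.

# Classification: 1-nearest-neighbor on (feature_1, feature_2) -> label.
train_points = [((1.0, 1.0), "spam"), ((1.2, 0.9), "spam"),
                ((4.0, 4.2), "not spam"), ((3.8, 4.4), "not spam")]

def classify(x):
    """Predict the label of the closest training example."""
    def dist2(p):
        return (p[0] - x[0]) ** 2 + (p[1] - x[1]) ** 2
    return min(train_points, key=lambda ex: dist2(ex[0]))[1]

print(classify((1.1, 1.0)))   # "spam"
print(classify((4.1, 4.0)))   # "not spam"

# Regression: fit y ≈ a*x + b by ordinary least squares.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 7.1]     # roughly y = 2x + 1 with noise

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x
print(f"fitted line: y = {a:.2f}x + {b:.2f}")   # close to y = 2x + 1
```

Both learners infer a function from labeled examples; they differ only in whether the output is a category or a number.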
Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization. Natural language processing Natural language processing (NLP) allows machines to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of NLP include information retrieval, question answering and machine translation. Symbolic AI used formal syntax to translate the deep structure of sentences into logic. This failed to produce useful applications, due to the intractability of logic and the breadth of commonsense knowledge. Modern statistical techniques include co-occurrence frequencies (how often one word appears near another), "Keyword spotting" (searching for a particular word to retrieve information), transformer-based deep learning (which finds patterns in text), and others. They have achieved acceptable accuracy at the page or paragraph level, and, by 2019, could generate coherent text. Perception Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition, facial recognition, and object recognition. Computer vision is the ability to analyze visual input. Motion and manipulation AI is heavily used in robotics. Localization is how a robot knows its location and maps its environment. When given a small, static, and visible environment, this is easy; however, dynamic environments, such as (in endoscopy) the interior of a patient's breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into "primitives" such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object. Robots can learn from experience how to move efficiently despite the presence of friction and gear slippage. Social intelligence Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human feeling, emotion and mood. For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; it makes them appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction. However, this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject. General intelligence A machine with general intelligence can solve a wide variety of problems with a breadth and versatility similar to human intelligence. There are several competing ideas about how to develop artificial general intelligence. Hans Moravec and Marvin Minsky argue that work in different individual domains can be incorporated into an advanced multi-agent system or cognitive architecture with general intelligence. Pedro Domingos hopes that there is a conceptually straightforward, but mathematically difficult, "master algorithm" that could lead to AGI. 
Others believe that anthropomorphic features like an artificial brain or simulated child development will someday reach a critical point where general intelligence emerges. Tools Search and optimization Many problems in AI can be solved theoretically by intelligently searching through many possible solutions: Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule. Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Robotics algorithms for moving limbs and grasping objects use local searches in configuration space. Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that prioritize choices in favor of those more likely to reach a goal and to do so in a shorter number of steps. In some search methodologies, heuristics can also serve to eliminate some choices unlikely to lead to a goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies. Heuristics limit the search for solutions into a smaller sample size. A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization. Evolutionary computation uses a form of optimization search. For example, they may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming. Alternatively, distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails). Logic Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning and inductive logic programming is a method for learning. Several different forms of logic are used in AI research. Propositional logic involves truth functions such as "or" and "not". First-order logic adds quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic assigns a "degree of truth" (between 0 and 1) to vague statements such as "Alice is old" (or rich, or tall, or hungry), that are too linguistically imprecise to be completely true or false. Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem. 
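The fuzzy-logic idea mentioned above can be sketched in a few lines. The membership values are invented, and the min, max and complement operators shown are one common choice (often called the Zadeh operators), not the only one.

```python
# Fuzzy logic sketch: statements have a degree of truth in [0, 1] rather
# than being simply true or false. Membership values here are invented.

def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

alice = {"old": 0.7, "rich": 0.2, "tall": 0.5}

# "Alice is old and not rich"
print(fuzzy_and(alice["old"], fuzzy_not(alice["rich"])))   # 0.7, i.e. min(0.7, 0.8)

# "Alice is tall or rich"
print(fuzzy_or(alice["tall"], alice["rich"]))              # 0.5
```

The result of such a query is a degree of truth rather than a simple yes or no.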
Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics; situation calculus, event calculus and fluent calculus (for representing events and time); causal calculus; belief calculus (belief revision); and modal logics. Logics to model contradictory or inconsistent statements arising in multi-agent systems have also been designed, such as paraconsistent logics. Probabilistic methods for uncertain reasoning Many problems in AI (in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics. Bayesian networks are a very general tool that can be used for various problems: reasoning (using the Bayesian inference algorithm), learning (using the expectation-maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks). Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters). A key concept from the science of economics is "utility": a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design. Classifiers and statistical learning methods The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if diamond then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class is a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree is the simplest and most widely used symbolic machine learning algorithm. K-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s. Kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s. The naive Bayes classifier is reportedly the "most widely used learner" at Google, due in part to its scalability. Neural networks are also used for classification. Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, distribution of samples across classes, the dimensionality, and the level of noise. Model-based classifiers perform well if the assumed model is an extremely good fit for the actual data. 
Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as "naive Bayes" on most practical data sets. Artificial neural networks Neural networks were inspired by the architecture of neurons in the human brain. A simple "neuron" N accepts input from other neurons, each of which, when activated (or "fired"), casts a weighted "vote" for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed "fire together, wire together") is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural networks model complex relationships between inputs and outputs and find patterns in data. They can learn continuous functions and even digital logical operations. Neural networks can be viewed as a type of mathematical optimization: training performs gradient descent on a multi-dimensional error surface determined by the network and its training data. The most common training technique is the backpropagation algorithm. Other learning techniques for neural networks are Hebbian learning ("fire together, wire together"), GMDH or competitive learning. The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks. Deep learning Deep learning uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits or letters or faces. Deep learning has drastically improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, image classification and others. Deep learning often uses convolutional neural networks for many or all of its layers. In a convolutional layer, each neuron receives input from only a restricted area of the previous layer called the neuron's receptive field. This can substantially reduce the number of weighted connections between neurons, and creates a hierarchy similar to the organization of the animal visual cortex. In a recurrent neural network the signal will propagate through a layer more than once; thus, an RNN is an example of deep learning. RNNs can be trained by gradient descent; however, long-term gradients which are back-propagated can "vanish" (that is, they can tend to zero) or "explode" (that is, they can tend to infinity), a problem known as the vanishing gradient problem. The long short-term memory (LSTM) technique can prevent this in most cases. Specialized languages and hardware Specialized languages for artificial intelligence have been developed, such as Lisp, Prolog, TensorFlow and many others. Hardware developed for AI includes AI accelerators and neuromorphic computing.
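A minimal sketch of the weighted-vote picture and the gradient-descent training described above is a single logistic "neuron" learning the OR function. The data, learning rate and iteration count are arbitrary choices for this example, and no particular library's API is assumed.

```python
import math, random

# One artificial "neuron": a weighted sum of inputs passed through a
# logistic squashing function, trained by gradient descent to learn OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]   # one weight per input
b = 0.0                                              # bias term
rate = 0.5                                           # learning rate, chosen ad hoc

def neuron(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-s))                # activation in (0, 1)

for _ in range(5000):                                # training loop
    for x, target in data:
        out = neuron(x)
        err = out - target                           # derivative of squared error (up to a factor)
        # gradient step on each weight and on the bias
        w[0] -= rate * err * out * (1 - out) * x[0]
        w[1] -= rate * err * out * (1 - out) * x[1]
        b    -= rate * err * out * (1 - out)

for x, target in data:
    print(x, target, round(neuron(x), 2))            # outputs close to the targets
```

Deep networks stack many such units in layers and adjust all of their weights by the same principle, organized by the backpropagation algorithm.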
Applications AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect. In the 2010s, AI applications were at the heart of the most commercially successful areas of computing, and have become a ubiquitous feature of daily life. AI is used in search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace), image labeling (used by Facebook, Apple's iPhoto and TikTok) and spam filtering. There are also thousands of successful AI applications used to solve problems for specific industries or institutions. A few examples are: energy storage, deepfakes, medical diagnosis, military logistics, or supply chain management. Game playing has been a test of AI's strength since the 1950s. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Other programs handle imperfect-information games; such as for poker at a superhuman level, Pluribus and Cepheus. DeepMind in the 2010s developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own. By 2020, Natural Language Processing systems such as the enormous GPT-3 (then by far the largest artificial neural network) were matching human performance on pre-existing benchmarks, albeit without the system attaining commonsense understanding of the contents of the benchmarks. DeepMind's AlphaFold 2 (2020) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein. Other applications predict the result of judicial decisions, create art (such as poetry or painting) and prove mathematical theorems. Philosophy Defining artificial intelligence Thinking vs. acting: the Turing test Alan Turing wrote in 1950 "I propose to consider the question 'can machines think'?" He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour". The only thing visible is the behavior of the machine, so it does not matter if the machine is conscious, or has a mind, or whether the intelligence is merely a "simulation" and not "the real thing". He noted that we also don't know these things about other people, but that we extend a "polite convention" that they are actually "thinking". This idea forms the basis of the Turing test. Acting humanly vs. acting intelligently: intelligent agents AI founder John McCarthy said: "Artificial intelligence is not, by definition, simulation of human intelligence". Russell and Norvig agree and criticize the Turing test. 
They wrote: "Aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons. Other researchers and analysts disagree and have argued that AI should simulate natural intelligence by studying psychology or neurobiology. The intelligent agent paradigm defines intelligent behavior in general, without reference to human beings. An intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. Any system that has goal-directed behavior can be analyzed as an intelligent agent: something as simple as a thermostat, as complex as a human being, as well as large systems such as firms, biomes or nations. The intelligent agent paradigm became widely accepted during the 1990s, and currently serves as the definition of the field. The paradigm has other advantages for AI. It provides a reliable and scientific way to test programs; researchers can directly compare or even combine different approaches to isolated problems, by asking which agent is best at maximizing a given "goal function". It also gives them a common language to communicate with other fields — such as mathematical optimization (which is defined in terms of "goals") or economics (which uses the same definition of a "rational agent"). Evaluating approaches to AI No established unifying theory or paradigm has guided AI research for most of its history. The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, neat, soft and narrow (see below). Critics argue that these questions may have to be revisited by future generations of AI researchers. Symbolic AI and its limits Symbolic AI (or "GOFAI") simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action." However, the symbolic approach failed dismally on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult. Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge. Although his arguments had been ridiculed and ignored when they were first presented, eventually AI research came to agree. The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. Neat vs. 
scruffy "Neats" hope that intelligent behavior be described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. This issue was actively discussed in the 70s and 80s, but in the 1990s mathematical methods and solid scientific standards became the norm, a transition that Russell and Norvig termed "the victory of the neats". Soft vs. hard computing Finding a provably correct or optimal solution is intractable for many important problems. Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 80s and most successful AI programs in the 21st century are examples of soft computing with neural networks. Narrow vs. general AI AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence (general AI) directly, or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focussing on specific problems with specific solutions. The experimental sub-field of artificial general intelligence studies this area exclusively. Machine consciousness, sentience and mind The philosophy of mind does not know whether a machine can have a mind, consciousness and mental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant, because it does not effect the goals of the field. Stuart Russell and Peter Norvig observe that most AI researchers "don't care about the [philosophy of AI] — as long as the program works, they don't care whether you call it a simulation of intelligence or real intelligence." However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction. Consciousness David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness. The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all. Human information processing is easy to explain, however human subjective experience is difficult to explain. For example, it is easy to imagine a color blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like. Computationalism and functionalism Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind-body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam. 
Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." Searle counters this assertion with his Chinese room argument, which attempts to show that, even if a machine perfectly simulates human behavior, there is still no reason to suppose it also has a mind. Robot rights If a machine has a mind and subjective experience, then it may also have sentience (the ability to feel), and if so, then it could also suffer, and thus it would be entitled to certain rights. Any hypothetical robot rights would lie on a spectrum with animal rights and human rights. This issue has been considered in fiction for centuries, and is now being considered by, for example, California's Institute for the Future, however critics argue that the discussion is premature. Future Superintelligence A superintelligence, hyperintelligence, or superhuman intelligence, is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. Superintelligence may also refer to the form or degree of intelligence possessed by such an agent. If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to recursive self-improvement. Its intelligence would increase exponentially in an intelligence explosion and could dramatically surpass humans. Science fiction writer Vernor Vinge named this scenario the "singularity". Because it is difficult or impossible to know the limits of intelligence or the capabilities of superintelligent machines, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable. Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger. Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998. Risks Technological unemployment In the past technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit, if productivity gains are redistributed. Subjective estimates of the risk vary widely; for example, Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classifies only 9% of U.S. jobs as "high risk". Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist states that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously". 
Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. Bad actors and weaponized AI AI provides a number of tools that are particularly useful for authoritarian governments: smart spyware, face recognition and voice recognition allow widespread surveillance; such surveillance allows machine learning to classify potential enemies of the state and can prevent them from hiding; recommendation systems can precisely target propaganda and misinformation for maximum effect; deepfakes aid in producing misinformation; advanced AI can make centralized decision making more competitive with liberal and decentralized systems such as markets. Terrorists, criminals and rogue states may use other forms of weaponized AI such as advanced digital warfare and lethal autonomous weapons. By 2015, over fifty countries were reported to be researching battlefield robots. Algorithmic bias AI programs can become biased after learning from real world data. It is not typically introduced by the system designers, but is learned by the program, and thus the programmers are often unaware that the bias exists. Bias can be inadvertently introduced by the way training data is selected. It can also emerge from correlations: AI is used to classify individuals into groups and then make predictions assuming that the individual will resemble other members of the group. In some cases, this assumption may be unfair. An example of this is COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. ProPublica claims that the COMPAS-assigned recidivism risk level of black defendants is far more likely to be an overestimate than that of white defendants, despite the fact that the program was not told the races of the defendants. Other examples where algorithmic bias can lead to unfair outcomes are when AI is used for credit rating or hiring. Existential risk Superintelligent AI may be able to improve itself to the point that humans could not control it. This could, as physicist Stephen Hawking puts it, "spell the end of the human race". Philosopher Nick Bostrom argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down. If this AI's goals do not fully reflect humanity's, it might need to harm humanity to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal. He concludes that AI poses a risk to mankind, however humble or "friendly" its stated goals might be. Political scientist Charles T. Rubin argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence." Humans should not assume machines or robots would treat us favorably because there is no a priori reason to believe that they would share our system of morality. The opinion of experts and industry insiders is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly-capable AI. Stephen Hawking, Microsoft founder Bill Gates, history professor Yuval Noah Harari, and SpaceX founder Elon Musk have all expressed serious misgivings about the future of AI. 
Prominent tech figures and firms, including Peter Thiel, Amazon Web Services and Musk, have committed more than $1 billion to nonprofit companies that champion responsible AI development, such as OpenAI and the Future of Life Institute. Mark Zuckerberg (CEO, Facebook) has said that artificial intelligence is helpful in its current form and will continue to assist humans. Other experts argue that the risks are far enough in the future to not be worth researching, or that humans will be valuable from the perspective of a superintelligent machine. Rodney Brooks, in particular, has said that "malevolent" AI is still centuries away. Ethical machines Friendly AI refers to machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk. Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. Machine ethics is also called machine morality, computational ethics or computational morality, and was founded at an AAAI symposium in 2005. Other approaches include Wendell Wallach's "artificial moral agents" and Stuart J. Russell's three principles for developing provably beneficial machines. Human-Centered AI Human-Centered Artificial Intelligence (HCAI) is a set of processes for designing applications that are reliable, safe, and trustworthy. These extend the processes of user experience design such as user observation and interviews. Further processes include discussions with stakeholders, usability testing, iterative refinement and continuing evaluation in use of systems that employ AI and machine learning algorithms. Human-Centered AI manifests in products that are designed to amplify, augment, empower and enhance human performance. These products aim to ensure both high levels of human control and high levels of automation. HCAI research includes governance structures that include safety cultures within organizations and independent oversight by experienced groups that review plans for new projects, continuous evaluation of usage, and retrospective analysis of failures. The rise of HCAI is visible in topics such as explainable AI, transparency, audit trails, fairness, trustworthiness, and controllable systems. Regulation The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, the United Arab Emirates, the USA and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.
Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. In fiction Thought-capable artificial beings have appeared as storytelling devices since antiquity, and have been a persistent theme in science fiction. A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture. Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most notably the "Robot" series. Asimov's laws are often brought up during lay discussions of machine ethics; while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity. Transhumanism (the merging of humans and machines) is explored in the manga Ghost in the Shell and the science-fiction series Dune. Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence. See also A.I. Rising AI control problem Artificial intelligence arms race Artificial general intelligence Behavior selection algorithm Business process automation Case-based reasoning Citizen Science Emergent algorithm Female gendering of AI technologies Glossary of artificial intelligence Robotic process automation Synthetic intelligence Universal basic income Weak AI Explanatory notes Citations References AI textbooks These were the four most widely used AI textbooks in 2008, with later editions. The two most widely used textbooks in 2021. History of AI Other sources The neocognitron was introduced by Kunihiko Fukushima in 1980. Presidential Address to the Association for the Advancement of Artificial Intelligence. Further reading DH Autor, "Why Are There Still So Many Jobs? The History and Future of Workplace Automation" (2015) 29(3) Journal of Economic Perspectives 3. Boden, Margaret, Mind As Machine, Oxford University Press, 2006. Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–98. George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.) 
Domingos, Pedro, "Our Digital Doubles: AI will serve our species, not control it", Scientific American, vol. 319, no. 3 (September 2018), pp. 88–93. Gopnik, Alison, "Making AI More Human: Artificial intelligence has staged a revival by starting to incorporate what we know about how children learn", Scientific American, vol. 316, no. 6 (June 2017), pp. 60–65. Halpern, Sue, "The Human Costs of AI" (review of Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021, 327 pp.; Simon Chesterman, We, the Robots?: Regulating Artificial Intelligence and the Limits of the Law, Cambridge University Press, 2021, 289 pp.; Keven Roose, Futureproof: 9 Rules for Humans in the Age of Automation, Random House, 217 pp.; Erik J. Larson, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, Belknap Press / Harvard University Press, 312 pp.), The New York Review of Books, vol. LXVIII, no. 16 (21 October 2021), pp. 29–31. "AI training models can replicate entrenched social and cultural biases. [...] Machines only know what they know from the data they have been given. [p. 30.] [A]rtificial general intelligence–machine-based intelligence that matches our own–is beyond the capacity of algorithmic machine learning... 'Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole.' [E]ven machines that master the tasks they are trained to perform can't jump domains. AIVA, for example, can't drive a car even though it can write music (and wouldn't even be able to do that without Bach and Beethoven [and other composers on which AIVA is trained])." (p. 31.) Johnston, John (2008) The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, MIT Press. Koch, Christof, "Proust among the Machines", Scientific American, vol. 321, no. 6 (December 2019), pp. 46–49. Christof Koch doubts the possibility of "intelligent" machines attaining consciousness, because "[e]ven the most sophisticated brain simulations are unlikely to produce conscious feelings." (p. 48.) According to Koch, "Whether machines can become sentient [is important] for ethical reasons. If computers experience life through their own senses, they cease to be purely a means to an end determined by their usefulness to... humans. Per GNW [the Global Neuronal Workspace theory], they turn from mere objects into subjects... with a point of view.... Once computers' cognitive abilities rival those of humanity, their impulse to push for legal and political rights will become irresistible—the right not to be deleted, not to have their memories wiped clean, not to suffer pain and degradation. The alternative, embodied by IIT [Integrated Information Theory], is that computers will remain only supersophisticated machinery, ghostlike empty shells, devoid of what we value most: the feeling of life itself." (p. 49.) Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 58–63. A stumbling block to AI has been an incapacity for reliable disambiguation. An example is the "pronoun disambiguation problem": a machine has no way of determining to whom or what a pronoun in a sentence refers. (p. 61.) E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) SSRN, part 2(3) . 
George Musser, "Artificial Imagination: How machines could learn creativity and common sense, among other human qualities", Scientific American, vol. 320, no. 5 (May 2019), pp. 58–63. Myers, Courtney Boyd ed. (2009). "The AI Report" . Forbes June 2009 Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.) Sun, R. & Bookman, L. (eds.), Computational Architectures: Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, Needham, MA. 1994. Taylor, Paul, "Insanely Complicated, Hopelessly Inadequate" (review of Brian Cantwell Smith, The Promise of Artificial Intelligence: Reckoning and Judgment, MIT, 2019, , 157 pp.; Gary Marcus and Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust, Ballantine, 2019, , 304 pp.; Judea Pearl and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect, Penguin, 2019, , 418 pp.), London Review of Books, vol. 43, no. 2 (21 January 2021), pp. 37–39. Paul Taylor writes (p. 39): "Perhaps there is a limit to what a computer can do without knowing that it is manipulating imperfect representations of an external reality." Tooze, Adam, "Democracy and Its Discontents", The New York Review of Books, vol. LXVI, no. 10 (6 June 2019), pp. 52–53, 56–57. "Democracy has no clear answer for the mindless operation of bureaucratic and technological power. We may indeed be witnessing its extension in the form of artificial intelligence and robotics. Likewise, after decades of dire warning, the environmental problem remains fundamentally unaddressed.... Bureaucratic overreach and environmental catastrophe are precisely the kinds of slow-moving existential challenges that democracies deal with very badly.... Finally, there is the threat du jour: corporations and the technologies they promote." (pp. 56–57.) External links Artificial Intelligence. BBC Radio 4 discussion with John Agar, Alison Adam & Igor Aleksander (In Our Time, Dec. 8, 2005). Sources Cybernetics Formal sciences Computational neuroscience Emerging technologies Unsolved problems in computer science Computational fields of study
Artificial intelligence
An alloy is a mixture of chemical elements of which at least one is a metal. Unlike chemical compounds with metallic bases, an alloy will retain all the properties of a metal in the resulting material, such as electrical conductivity, ductility, opacity, and luster, but may have properties that differ from those of the pure metals, such as increased strength or hardness. In some cases, an alloy may reduce the overall cost of the material while preserving important properties. In other cases, the mixture imparts synergistic properties to the constituent metal elements such as corrosion resistance or mechanical strength. Alloys are defined by a metallic bonding character. The alloy constituents are usually measured by mass percentage for practical applications, and in atomic fraction for basic science studies. Alloys are usually classified as substitutional or interstitial alloys, depending on the atomic arrangement that forms the alloy. They can be further classified as homogeneous (consisting of a single phase), or heterogeneous (consisting of two or more phases) or intermetallic. An alloy may be a solid solution of metal elements (a single phase, where all metallic grains (crystals) are of the same composition) or a mixture of metallic phases (two or more solutions, forming a microstructure of different crystals within the metal). Examples of alloys include red gold (gold and copper) white gold (gold and silver), sterling silver (silver and copper), steel or silicon steel (iron with non-metallic carbon or silicon respectively), solder, brass, pewter, duralumin, bronze, and amalgams. Alloys are used in a wide variety of applications, from the steel alloys, used in everything from buildings to automobiles to surgical tools, to exotic titanium alloys used in the aerospace industry, to beryllium-copper alloys for non-sparking tools. Characteristics An alloy is a mixture of chemical elements, which forms an impure substance (admixture) that retains the characteristics of a metal. An alloy is distinct from an impure metal in that, with an alloy, the added elements are well controlled to produce desirable properties, while impure metals such as wrought iron are less controlled, but are often considered useful. Alloys are made by mixing two or more elements, at least one of which is a metal. This is usually called the primary metal or the base metal, and the name of this metal may also be the name of the alloy. The other constituents may or may not be metals but, when mixed with the molten base, they will be soluble and dissolve into the mixture. The mechanical properties of alloys will often be quite different from those of its individual constituents. A metal that is normally very soft (malleable), such as aluminium, can be altered by alloying it with another soft metal, such as copper. Although both metals are very soft and ductile, the resulting aluminium alloy will have much greater strength. Adding a small amount of non-metallic carbon to iron trades its great ductility for the greater strength of an alloy called steel. Due to its very-high strength, but still substantial toughness, and its ability to be greatly altered by heat treatment, steel is one of the most useful and common alloys in modern use. By adding chromium to steel, its resistance to corrosion can be enhanced, creating stainless steel, while adding silicon will alter its electrical characteristics, producing silicon steel. Like oil and water, a molten metal may not always mix with another element. 
For example, pure iron is almost completely insoluble with copper. Even when the constituents are soluble, each will usually have a saturation point, beyond which no more of the constituent can be added. Iron, for example, can hold a maximum of 6.67% carbon. Although the elements of an alloy usually must be soluble in the liquid state, they may not always be soluble in the solid state. If the metals remain soluble when solid, the alloy forms a solid solution, becoming a homogeneous structure consisting of identical crystals, called a phase. If as the mixture cools the constituents become insoluble, they may separate to form two or more different types of crystals, creating a heterogeneous microstructure of different phases, some with more of one constituent than the other. However, in other alloys, the insoluble elements may not separate until after crystallization occurs. If cooled very quickly, they first crystallize as a homogeneous phase, but they are supersaturated with the secondary constituents. As time passes, the atoms of these supersaturated alloys can separate from the crystal lattice, becoming more stable, and forming a second phase that serves to reinforce the crystals internally. Some alloys, such as electrum—an alloy of silver and gold—occur naturally. Meteorites are sometimes made of naturally occurring alloys of iron and nickel, but are not native to the Earth. One of the first alloys made by humans was bronze, which is a mixture of the metals tin and copper. Bronze was an extremely useful alloy to the ancients, because it is much stronger and harder than either of its components. Steel was another common alloy. However, in ancient times, it could only be created as an accidental byproduct from the heating of iron ore in fires (smelting) during the manufacture of iron. Other ancient alloys include pewter, brass and pig iron. In the modern age, steel can be created in many forms. Carbon steel can be made by varying only the carbon content, producing soft alloys like mild steel or hard alloys like spring steel. Alloy steels can be made by adding other elements, such as chromium, molybdenum, vanadium or nickel, resulting in alloys such as high-speed steel or tool steel. Small amounts of manganese are usually alloyed with most modern steels because of its ability to remove unwanted impurities, like phosphorus, sulfur and oxygen, which can have detrimental effects on the alloy. However, most alloys were not created until the 1900s, such as various aluminium, titanium, nickel, and magnesium alloys. Some modern superalloys, such as incoloy, inconel, and hastelloy, may consist of a multitude of different elements. An alloy is technically an impure metal, but when referring to alloys, the term impurities usually denotes undesirable elements. Such impurities are introduced from the base metals and alloying elements, but are removed during processing. For instance, sulfur is a common impurity in steel. Sulfur combines readily with iron to form iron sulfide, which is very brittle, creating weak spots in the steel. Lithium, sodium and calcium are common impurities in aluminium alloys, which can have adverse effects on the structural integrity of castings. Conversely, otherwise pure-metals that simply contain unwanted impurities are often called "impure metals" and are not usually referred to as alloys. Oxygen, present in the air, readily combines with most metals to form metal oxides; especially at higher temperatures encountered during alloying. 
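The composition conventions noted earlier (mass percentage for practical work, atomic fraction for basic science) are linked by the molar masses of the constituents, and the conversion also shows why the 6.67% carbon saturation figure quoted above corresponds to the cementite stoichiometry Fe3C. The following Python sketch is illustrative only; the function name is not from any library, and the molar masses are standard rounded values.

# Convert the weight percent of solute B in a binary A-B alloy to atomic percent.
def atomic_percent(wt_pct_b, molar_a, molar_b):
    """Atomic percent of component B, given its weight percent and both molar masses (g/mol)."""
    moles_b = wt_pct_b / molar_b
    moles_a = (100.0 - wt_pct_b) / molar_a
    return 100.0 * moles_b / (moles_a + moles_b)

# Iron-carbon: 6.67 wt% carbon works out to roughly 25 at% carbon,
# i.e. one carbon atom per three iron atoms, the Fe3C (cementite) ratio.
print(round(atomic_percent(6.67, molar_a=55.845, molar_b=12.011), 1))  # 24.9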
Great care is often taken during the alloying process to remove excess impurities, using fluxes, chemical additives, or other methods of extractive metallurgy. Theory Alloying a metal is done by combining it with one or more other elements. The most common and oldest alloying process is performed by heating the base metal beyond its melting point and then dissolving the solutes into the molten liquid, which may be possible even if the melting point of the solute is far greater than that of the base. For example, in its liquid state, titanium is a very strong solvent capable of dissolving most metals and elements. In addition, it readily absorbs gases like oxygen and burns in the presence of nitrogen. This increases the chance of contamination from any contacting surface, and so must be melted in vacuum induction-heating and special, water-cooled, copper crucibles. However, some metals and solutes, such as iron and carbon, have very high melting-points and were impossible for ancient people to melt. Thus, alloying (in particular, interstitial alloying) may also be performed with one or more constituents in a gaseous state, such as found in a blast furnace to make pig iron (liquid-gas), nitriding, carbonitriding or other forms of case hardening (solid-gas), or the cementation process used to make blister steel (solid-gas). It may also be done with one, more, or all of the constituents in the solid state, such as found in ancient methods of pattern welding (solid-solid), shear steel (solid-solid), or crucible steel production (solid-liquid), mixing the elements via solid-state diffusion. By adding another element to a metal, differences in the size of the atoms create internal stresses in the lattice of the metallic crystals; stresses that often enhance its properties. For example, the combination of carbon with iron produces steel, which is stronger than iron, its primary element. The electrical and thermal conductivity of alloys is usually lower than that of the pure metals. The physical properties, such as density, reactivity, Young's modulus of an alloy may not differ greatly from those of its base element, but engineering properties such as tensile strength, ductility, and shear strength may be substantially different from those of the constituent materials. This is sometimes a result of the sizes of the atoms in the alloy, because larger atoms exert a compressive force on neighboring atoms, and smaller atoms exert a tensile force on their neighbors, helping the alloy resist deformation. Sometimes alloys may exhibit marked differences in behavior even when small amounts of one element are present. For example, impurities in semiconducting ferromagnetic alloys lead to different properties, as first predicted by White, Hogan, Suhl, Tian Abrie and Nakamura. Unlike pure metals, most alloys do not have a single melting point, but a melting range during which the material is a mixture of solid and liquid phases (a slush). The temperature at which melting begins is called the solidus, and the temperature when melting is just complete is called the liquidus. For many alloys there is a particular alloy proportion (in some cases more than one), called either a eutectic mixture or a peritectic composition, which gives the alloy a unique and low melting point, and no liquid/solid slush transition. Heat treatment Alloying elements are added to a base metal, to induce hardness, toughness, ductility, or other desired properties. 
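Within the melting range just described, the proportions of solid and liquid at a given temperature can be estimated from the phase diagram by the lever rule. The sketch below only illustrates the arithmetic; the compositions are hypothetical, and real values must be read from the measured solidus and liquidus of the alloy in question.

# Lever rule: weight fraction of liquid for an overall composition c0 lying on a tie line
# between the solidus composition cs and the liquidus composition cl (all in wt% solute).
def liquid_fraction(c0, cs, cl):
    """Fraction of the alloy that is liquid at the temperature of the tie line."""
    if not (min(cs, cl) <= c0 <= max(cs, cl)):
        raise ValueError("c0 must lie between the solidus and liquidus compositions")
    return (c0 - cs) / (cl - cs)

# Hypothetical tie line: solidus at 5 wt% solute, liquidus at 20 wt% solute.
print(liquid_fraction(c0=10.0, cs=5.0, cl=20.0))  # 0.333..., about one third liquid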
Most metals and alloys can be work hardened by creating defects in their crystal structure. These defects are created during plastic deformation by hammering, bending, extruding, et cetera, and are permanent unless the metal is recrystallized. Otherwise, some alloys can also have their properties altered by heat treatment. Nearly all metals can be softened by annealing, which recrystallizes the alloy and repairs the defects, but not as many can be hardened by controlled heating and cooling. Many alloys of aluminium, copper, magnesium, titanium, and nickel can be strengthened to some degree by some method of heat treatment, but few respond to this to the same degree as does steel. The base metal iron of the iron-carbon alloy known as steel, undergoes a change in the arrangement (allotropy) of the atoms of its crystal matrix at a certain temperature (usually between and , depending on carbon content). This allows the smaller carbon atoms to enter the interstices of the iron crystal. When this diffusion happens, the carbon atoms are said to be in solution in the iron, forming a particular single, homogeneous, crystalline phase called austenite. If the steel is cooled slowly, the carbon can diffuse out of the iron and it will gradually revert to its low temperature allotrope. During slow cooling, the carbon atoms will no longer be as soluble with the iron, and will be forced to precipitate out of solution, nucleating into a more concentrated form of iron carbide (Fe3C) in the spaces between the pure iron crystals. The steel then becomes heterogeneous, as it is formed of two phases, the iron-carbon phase called cementite (or carbide), and pure iron ferrite. Such a heat treatment produces a steel that is rather soft. If the steel is cooled quickly, however, the carbon atoms will not have time to diffuse and precipitate out as carbide, but will be trapped within the iron crystals. When rapidly cooled, a diffusionless (martensite) transformation occurs, in which the carbon atoms become trapped in solution. This causes the iron crystals to deform as the crystal structure tries to change to its low temperature state, leaving those crystals very hard but much less ductile (more brittle). While the high strength of steel results when diffusion and precipitation is prevented (forming martensite), most heat-treatable alloys are precipitation hardening alloys, that depend on the diffusion of alloying elements to achieve their strength. When heated to form a solution and then cooled quickly, these alloys become much softer than normal, during the diffusionless transformation, but then harden as they age. The solutes in these alloys will precipitate over time, forming intermetallic phases, which are difficult to discern from the base metal. Unlike steel, in which the solid solution separates into different crystal phases (carbide and ferrite), precipitation hardening alloys form different phases within the same crystal. These intermetallic alloys appear homogeneous in crystal structure, but tend to behave heterogeneously, becoming hard and somewhat brittle. In 1906, precipitation hardening alloys were discovered by Alfred Wilm. Precipitation hardening alloys, such as certain alloys of aluminium, titanium, and copper, are heat-treatable alloys that soften when quenched (cooled quickly), and then harden over time. Wilm had been searching for a way to harden aluminium alloys for use in machine-gun cartridge cases. 
Knowing that aluminium-copper alloys were heat-treatable to some degree, Wilm tried quenching a ternary alloy of aluminium, copper, and the addition of magnesium, but was initially disappointed with the results. However, when Wilm retested it the next day he discovered that the alloy increased in hardness when left to age at room temperature, and far exceeded his expectations. Although an explanation for the phenomenon was not provided until 1919, duralumin was one of the first "age hardening" alloys used, becoming the primary building material for the first Zeppelins, and was soon followed by many others. Because they often exhibit a combination of high strength and low weight, these alloys became widely used in many forms of industry, including the construction of modern aircraft. Mechanisms When a molten metal is mixed with another substance, there are two mechanisms that can cause an alloy to form, called atom exchange and the interstitial mechanism. The relative size of each element in the mix plays a primary role in determining which mechanism will occur. When the atoms are relatively similar in size, the atom exchange method usually happens, where some of the atoms composing the metallic crystals are substituted with atoms of the other constituent. This is called a substitutional alloy. Examples of substitutional alloys include bronze and brass, in which some of the copper atoms are substituted with either tin or zinc atoms respectively. In the case of the interstitial mechanism, one atom is usually much smaller than the other and can not successfully substitute for the other type of atom in the crystals of the base metal. Instead, the smaller atoms become trapped in the spaces between the atoms of the crystal matrix, called the interstices. This is referred to as an interstitial alloy. Steel is an example of an interstitial alloy, because the very small carbon atoms fit into interstices of the iron matrix. Stainless steel is an example of a combination of interstitial and substitutional alloys, because the carbon atoms fit into the interstices, but some of the iron atoms are substituted by nickel and chromium atoms. History and examples Meteoric iron The use of alloys by humans started with the use of meteoric iron, a naturally occurring alloy of nickel and iron. It is the main constituent of iron meteorites. As no metallurgic processes were used to separate iron from nickel, the alloy was used as it was. Meteoric iron could be forged from a red heat to make objects such as tools, weapons, and nails. In many cultures it was shaped by cold hammering into knives and arrowheads. They were often used as anvils. Meteoric iron was very rare and valuable, and difficult for ancient people to work. Bronze and brass Iron is usually found as iron ore on Earth, except for one deposit of native iron in Greenland, which was used by the Inuit people. Native copper, however, was found worldwide, along with silver, gold, and platinum, which were also used to make tools, jewelry, and other objects since Neolithic times. Copper was the hardest of these metals, and the most widely distributed. It became one of the most important metals to the ancients. Around 10,000 years ago in the highlands of Anatolia (Turkey), humans learned to smelt metals such as copper and tin from ore. Around 2500 BC, people began alloying the two metals to form bronze, which was much harder than its ingredients. Tin was rare, however, being found mostly in Great Britain. 
In the Middle East, people began alloying copper with zinc to form brass. Ancient civilizations took into account the mixture and the various properties it produced, such as hardness, toughness and melting point, under various conditions of temperature and work hardening, developing much of the information contained in modern alloy phase diagrams. For example, arrowheads from the Chinese Qin dynasty (around 200 BC) were often constructed with a hard bronze-head, but a softer bronze-tang, combining the alloys to prevent both dulling and breaking during use. Amalgams Mercury has been smelted from cinnabar for thousands of years. Mercury dissolves many metals, such as gold, silver, and tin, to form amalgams (an alloy in a soft paste or liquid form at ambient temperature). Amalgams have been used since 200 BC in China for gilding objects such as armor and mirrors with precious metals. The ancient Romans often used mercury-tin amalgams for gilding their armor. The amalgam was applied as a paste and then heated until the mercury vaporized, leaving the gold, silver, or tin behind. Mercury was often used in mining, to extract precious metals like gold and silver from their ores. Precious metals Many ancient civilizations alloyed metals for purely aesthetic purposes. In ancient Egypt and Mycenae, gold was often alloyed with copper to produce red-gold, or iron to produce a bright burgundy-gold. Gold was often found alloyed with silver or other metals to produce various types of colored gold. These metals were also used to strengthen each other, for more practical purposes. Copper was often added to silver to make sterling silver, increasing its strength for use in dishes, silverware, and other practical items. Quite often, precious metals were alloyed with less valuable substances as a means to deceive buyers. Around 250 BC, Archimedes was commissioned by the King of Syracuse to find a way to check the purity of the gold in a crown, leading to the famous bath-house shouting of "Eureka!" upon the discovery of Archimedes' principle. Pewter The term pewter covers a variety of alloys consisting primarily of tin. As a pure metal, tin is much too soft to use for most practical purposes. However, during the Bronze Age, tin was a rare metal in many parts of Europe and the Mediterranean, so it was often valued higher than gold. To make jewellery, cutlery, or other objects from tin, workers usually alloyed it with other metals to increase strength and hardness. These metals were typically lead, antimony, bismuth or copper. These solutes were sometimes added individually in varying amounts, or added together, making a wide variety of objects, ranging from practical items such as dishes, surgical tools, candlesticks or funnels, to decorative items like ear rings and hair clips. The earliest examples of pewter come from ancient Egypt, around 1450 BC. The use of pewter was widespread across Europe, from France to Norway and Britain (where most of the ancient tin was mined) to the Near East. The alloy was also used in China and the Far East, arriving in Japan around 800 AD, where it was used for making objects like ceremonial vessels, tea canisters, or chalices used in shinto shrines. Iron The first known smelting of iron began in Anatolia, around 1800 BC. Called the bloomery process, it produced very soft but ductile wrought iron. By 800 BC, iron-making technology had spread to Europe, arriving in Japan around 700 AD. 
Pig iron, a very hard but brittle alloy of iron and carbon, was being produced in China as early as 1200 BC, but did not arrive in Europe until the Middle Ages. Pig iron has a lower melting point than iron, and was used for making cast-iron. However, these metals found little practical use until the introduction of crucible steel around 300 BC. These steels were of poor quality, and the introduction of pattern welding, around the 1st century AD, sought to balance the extreme properties of the alloys by laminating them, to create a tougher metal. Around 700 AD, the Japanese began folding bloomery-steel and cast-iron in alternating layers to increase the strength of their swords, using clay fluxes to remove slag and impurities. This method of Japanese swordsmithing produced one of the purest steel-alloys of the ancient world. While the use of iron started to become more widespread around 1200 BC, mainly because of interruptions in the trade routes for tin, the metal was much softer than bronze. However, very small amounts of steel, (an alloy of iron and around 1% carbon), was always a byproduct of the bloomery process. The ability to modify the hardness of steel by heat treatment had been known since 1100 BC, and the rare material was valued for the manufacture of tools and weapons. Because the ancients could not produce temperatures high enough to melt iron fully, the production of steel in decent quantities did not occur until the introduction of blister steel during the Middle Ages. This method introduced carbon by heating wrought iron in charcoal for long periods of time, but the absorption of carbon in this manner is extremely slow thus the penetration was not very deep, so the alloy was not homogeneous. In 1740, Benjamin Huntsman began melting blister steel in a crucible to even out the carbon content, creating the first process for the mass production of tool steel. Huntsman's process was used for manufacturing tool steel until the early 1900s. The introduction of the blast furnace to Europe in the Middle Ages meant that people could produce pig iron in much higher volumes than wrought iron. Because pig iron could be melted, people began to develop processes to reduce carbon in liquid pig iron to create steel. Puddling had been used in China since the first century, and was introduced in Europe during the 1700s, where molten pig iron was stirred while exposed to the air, to remove the carbon by oxidation. In 1858, Henry Bessemer developed a process of steel-making by blowing hot air through liquid pig iron to reduce the carbon content. The Bessemer process led to the first large scale manufacture of steel. Steel is an alloy of iron and carbon, but the term alloy steel usually only refers to steels that contain other elements— like vanadium, molybdenum, or cobalt—in amounts sufficient to alter the properties of the base steel. Since ancient times, when steel was used primarily for tools and weapons, the methods of producing and working the metal were often closely guarded secrets. Even long after the Age of reason, the steel industry was very competitive and manufacturers went through great lengths to keep their processes confidential, resisting any attempts to scientifically analyze the material for fear it would reveal their methods. For example, the people of Sheffield, a center of steel production in England, were known to routinely bar visitors and tourists from entering town to deter industrial espionage. Thus, almost no metallurgical information existed about steel until 1860. 
Because of this lack of understanding, steel was not generally considered an alloy until the decades between 1930 and 1970 (primarily due to the work of scientists like William Chandler Roberts-Austen, Adolf Martens, and Edgar Bain), so "alloy steel" became the popular term for ternary and quaternary steel-alloys. After Benjamin Huntsman developed his crucible steel in 1740, he began experimenting with the addition of elements like manganese (in the form of a high-manganese pig-iron called spiegeleisen), which helped remove impurities such as phosphorus and oxygen; a process adopted by Bessemer and still used in modern steels (albeit in concentrations low enough to still be considered carbon steel). Afterward, many people began experimenting with various alloys of steel without much success. However, in 1882, Robert Hadfield, being a pioneer in steel metallurgy, took an interest and produced a steel alloy containing around 12% manganese. Called mangalloy, it exhibited extreme hardness and toughness, becoming the first commercially viable alloy-steel. Afterward, he created silicon steel, launching the search for other possible alloys of steel. Robert Forester Mushet found that by adding tungsten to steel it could produce a very hard edge that would resist losing its hardness at high temperatures. "R. Mushet's special steel" (RMS) became the first high-speed steel. Mushet's steel was quickly replaced by tungsten carbide steel, developed by Taylor and White in 1900, in which they doubled the tungsten content and added small amounts of chromium and vanadium, producing a superior steel for use in lathes and machining tools. In 1903, the Wright brothers used a chromium-nickel steel to make the crankshaft for their airplane engine, while in 1908 Henry Ford began using vanadium steels for parts like crankshafts and valves in his Model T Ford, due to their higher strength and resistance to high temperatures. In 1912, the Krupp Ironworks in Germany developed a rust-resistant steel by adding 21% chromium and 7% nickel, producing the first stainless steel. Others Due to their high reactivity, most metals were not discovered until the 19th century. A method for extracting aluminium from bauxite was proposed by Humphry Davy in 1807, using an electric arc. Although his attempts were unsuccessful, by 1855 the first sales of pure aluminium reached the market. However, as extractive metallurgy was still in its infancy, most aluminium extraction-processes produced unintended alloys contaminated with other elements found in the ore; the most abundant of which was copper. These aluminium-copper alloys (at the time termed "aluminum bronze") preceded pure aluminium, offering greater strength and hardness over the soft, pure metal, and to a slight degree were found to be heat treatable. However, due to their softness and limited hardenability these alloys found little practical use, and were more of a novelty, until the Wright brothers used an aluminium alloy to construct the first airplane engine in 1903. During the time between 1865 and 1910, processes for extracting many other metals were discovered, such as chromium, vanadium, tungsten, iridium, cobalt, and molybdenum, and various alloys were developed. Prior to 1910, research mainly consisted of private individuals tinkering in their own laboratories. 
However, as the aircraft and automotive industries began growing, research into alloys became an industrial effort in the years following 1910, as new magnesium alloys were developed for pistons and wheels in cars, and pot metal for levers and knobs, and aluminium alloys developed for airframes and aircraft skins were put into use. See also Alloy broadening CALPHAD Ideal mixture List of alloys References Bibliography External links Metallurgy Chemistry
Alloy
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and the processes by which these arrangements change. This comprises ions, neutral atoms and, unless otherwise stated, it can be assumed that the term atom includes ions. The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei. As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Isolated atoms Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles. While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with very large numbers of atoms. Electronic configuration Electrons form notional shells around the nucleus. These are normally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically ions or other electrons). Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization. If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state. After a certain time, the electron in an excited state will "jump" (undergo a transition) to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved. If an inner electron has absorbed more than the binding energy (so that the atom ionizes), then a more outer electron may undergo a transition to fill the inner orbital. In this case, a visible photon or a characteristic x-ray is emitted, or a phenomenon known as the Auger effect may take place, where the released energy is transferred to another bound electron, causing it to go into the continuum. The Auger effect allows one to multiply ionize an atom with a single photon. 
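The energy bookkeeping described above (ionization when the absorbed energy exceeds the binding energy, with the excess appearing as kinetic energy) is a direct application of conservation of energy. A minimal Python sketch follows, using the well-known 13.6 eV ground-state binding energy of hydrogen; the 21.2 eV photon corresponds to the common He I lamp line, and the function name is illustrative only.

# Kinetic energy of a photoelectron: KE = photon energy - binding energy (if positive).
def photoelectron_ke(photon_ev, binding_ev):
    """Kinetic energy in eV of the ejected electron, or None if the photon cannot ionize."""
    excess = photon_ev - binding_ev
    return excess if excess > 0 else None

print(photoelectron_ke(photon_ev=21.2, binding_ev=13.6))  # about 7.6 eV of kinetic energy
print(photoelectron_ke(photon_ev=10.2, binding_ev=13.6))  # None: excitation is possible, ionization is not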
There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light; however, there are no such rules for excitation by collision processes. History and developments One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms. It forms a part of the texts written in the 6th century BC to 2nd century BC, such as those of Democritus or the Vaisheshika Sutra written by Kanad. This theory was later developed in the modern sense of the basic unit of a chemical element by the British chemist and physicist John Dalton in the early 19th century. At this stage, it was not clear what atoms were, although they could be described and classified by their properties (in bulk). The invention of the periodic system of elements by Mendeleev was another great step forward. The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics. In seeking to explain atomic spectra, an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry (quantum chemistry) and spectroscopy. Since the Second World War, both theoretical and experimental fields have advanced at a rapid pace. This can be attributed to progress in computing technology, which has allowed larger and more sophisticated models of atomic structure and associated collision processes. Similar technological advances in accelerators, detectors, magnetic field generation and lasers have greatly assisted experimental work. Significant atomic physicists See also Particle physics Isomeric shift Atomic engineering Bibliography References External links MIT-Harvard Center for Ultracold Atoms Joint Quantum Institute at University of Maryland and NIST Atomic Physics on the Internet JILA (Atomic Physics) ORNL Physics Division Atomic, molecular, and optical physics
Atomic physics
In atomic theory and quantum mechanics, an atomic orbital is a mathematical function describing the location and wave-like behavior of an electron in an atom. This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus. The term atomic orbital may also refer to the physical region or space where the electron can be calculated to be present, as predicted by the particular mathematical form of the orbital. Each orbital in an atom is characterized by a set of values of the three quantum numbers n, ℓ, and mℓ, which respectively correspond to the electron's energy, angular momentum, and an angular momentum vector component (the magnetic quantum number). Alternative to the magnetic quantum number, the orbitals are often labeled by the associated harmonic polynomials (e.g. xy, x2−y2). Each such orbital can be occupied by a maximum of two electrons, each with its own projection of spin ms. The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2, and 3 respectively. These names, together with the value of n, are used to describe the electron configurations of atoms. They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically (g, h, i, k, ...), omitting j because some languages do not distinguish between the letters "i" and "j". Atomic orbitals are the basic building blocks of the atomic orbital model (alternatively known as the electron cloud or wave mechanics model), a modern framework for visualizing the submicroscopic behavior of electrons in matter. In this model the electron cloud of a multi-electron atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of the blocks of 2, 6, 10, and 14 elements within sections of the periodic table arises naturally from the total number of electrons that occupy a complete set of s, p, d, and f atomic orbitals, respectively, although for higher values of the principal quantum number n, particularly when the atom in question bears a positive charge, the energies of certain sub-shells become very similar and so the order in which they are said to be populated by electrons (e.g. Cr = [Ar]4s13d5 and Cr2+ = [Ar]3d4) can only be rationalized somewhat arbitrarily. Electron properties With the development of quantum mechanics and experimental findings (such as the two-slit diffraction of electrons), it was found that the orbiting electrons around a nucleus could not be fully described as particles, but needed to be explained by the wave-particle duality. In this sense, the electrons have the following properties: Wave-like properties: The electrons do not orbit the nucleus in the manner of a planet orbiting the sun, but instead exist as standing waves. Thus the lowest possible energy an electron can take is similar to the fundamental frequency of a wave on a string. Higher energy states are similar to harmonics of that fundamental frequency. The electrons are never in a single point location, although the probability of interacting with the electron at a single point can be found from the wave function of the electron. The charge on the electron acts like it is smeared out in space in a continuous distribution, proportional at any point to the squared magnitude of the electron's wave function. 
Particle-like properties: The number of electrons orbiting the nucleus can only be an integer. Electrons jump between orbitals like particles. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon. The electrons retain particle-like properties such as: each wave state has the same electrical charge as its electron particle. Each wave state has a single discrete spin (spin up or spin down) depending on its superposition. Thus, electrons cannot be described simply as solid particles. An analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the atomic nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when a single electron is present in an atom. When more electrons are added to a single atom, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection (sometimes termed the atom's "electron cloud") tends toward a generally spherical zone of probability describing the electron's location, because of the uncertainty principle. Formal quantum mechanical definition Atomic orbitals may be defined more precisely in formal quantum mechanical language. They are approximate solutions to the Schrodinger equation for the electrons bound to the atom by the electric field of the atom's nucleus. Specifically, in quantum mechanics, the state of an atom, i.e., an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single electron wave functions. (The London dispersion force, for example, depends on the correlations of the motion of the electrons.) In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. These states are labeled by a set of quantum numbers summarized in the term symbol and usually associated with particular electron configurations, i.e., by occupation schemes of atomic orbitals (for example, 1s2 2s2 2p6 for the ground state of neon-term symbol: 1S0). This notation means that the corresponding Slater determinants have a clear higher weight in the configuration interaction expansion. The atomic orbital concept is therefore a key concept for visualizing the excitation process associated with a given transition. For example, one can say for a given transition that it corresponds to the excitation of an electron from an occupied orbital to a given unoccupied orbital. Nevertheless, one has to keep in mind that electrons are fermions ruled by the Pauli exclusion principle and must be distinguished from each other. Moreover, it sometimes happens that the configuration interaction expansion converges very slowly and that one cannot speak about simple one-determinant wave function at all. This is the case when electron correlation is large. 
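As a concrete instance of the one-electron wave functions discussed above, the hydrogen 1s orbital has a simple closed form, and the probability of finding the electron at a given radius peaks at the Bohr radius. The following sketch works in units of the Bohr radius a0 and is the textbook hydrogen ground state, not a multi-electron calculation; the coarse numerical scan is purely illustrative.

import math

# Hydrogen 1s radial function in units of a0: R_1s(r) = 2 exp(-r).
# The radial probability density is P(r) = r^2 |R_1s(r)|^2, which peaks at r = 1 (the Bohr radius).
def radial_probability_1s(r):
    """Radial probability density r^2 |R_1s|^2 for hydrogen, with r in units of a0."""
    return (r ** 2) * (2.0 * math.exp(-r)) ** 2

rs = [i / 100.0 for i in range(1, 501)]          # scan r from 0.01 to 5.00 a0
print(max(rs, key=radial_probability_1s))        # 1.0, i.e. the maximum sits at the Bohr radius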
Fundamentally, an atomic orbital is a one-electron wave function, even though most electrons do not exist in one-electron atoms, and so the one-electron view is an approximation. When thinking about orbitals, we are often given an orbital visualization heavily influenced by the Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory. Types of orbitals Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., an atom with one electron). Alternatively, atomic orbitals refer to functions that depend on the coordinates of one electron (i.e., orbitals) but are used as starting points for approximating wave functions that depend on the simultaneous coordinates of all the electrons in an atom or molecule. The coordinate systems chosen for atomic orbitals are usually spherical coordinates in atoms and Cartesian in polyatomic molecules. The advantage of spherical coordinates (for atoms) is that an orbital wave function is a product of three factors each dependent on a single coordinate: . The angular factors of atomic orbitals generate s, p, d, etc. functions as real combinations of spherical harmonics (where and are quantum numbers). There are typically three mathematical forms for the radial functions  which can be chosen as a starting point for the calculation of the properties of atoms and molecules with many electrons: The hydrogen-like atomic orbitals are derived from the exact solutions of the Schrödinger Equation for one electron and a nucleus, for a hydrogen-like atom. The part of the function that depends on the distance r from the nucleus has nodes (radial nodes) and decays as . The Slater-type orbital (STO) is a form without radial nodes but decays from the nucleus as does the hydrogen-like orbital. The form of the Gaussian type orbital (Gaussians) has no radial nodes and decays as . Although hydrogen-like orbitals are still used as pedagogical tools, the advent of computers has made STOs preferable for atoms and diatomic molecules since combinations of STOs can replace the nodes in hydrogen-like atomic orbital. Gaussians are typically used in molecules with three or more atoms. Although not as accurate by themselves as STOs, combinations of many Gaussians can attain the accuracy of hydrogen-like orbitals. History The term "orbital" was coined by Robert Mulliken in 1932 as an abbreviation for one-electron orbital wave function. However, the idea that electrons might revolve around a compact nucleus with definite angular momentum was convincingly argued at least 19 years earlier by Niels Bohr, and the Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electronic behavior as early as 1904. Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics. Early models With J. J. Thomson's discovery of the electron in 1897, it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. Thomson theorized that multiple electrons revolved in orbit-like rings within a positively charged jelly-like substance, and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure. 
Shortly after Thomson's discovery, Hantaro Nagaoka predicted a different model for electronic structure. Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings. Few people took notice of Nagaoka's work at the time, and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation. Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries. Bohr atom In 1909, Ernest Rutherford discovered that the bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. It became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. In 1913, Rutherford's post-doctoral student, Niels Bohr, proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were only permitted to have discrete values of angular momentum, quantized in units h/2π. This constraint automatically permitted only certain values of electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines. After Bohr's use of Einstein's explanation of the photoelectric effect to relate energy levels in atoms with the wavelength of emitted light, the connection between the structure of electrons in atoms and the emission and absorption spectra of atoms became an increasingly useful tool in the understanding of electrons in atoms. The most prominent feature of emission and absorption spectra (known experimentally since the middle of the 19th century), was that these atomic spectra contained discrete lines. The significance of the Bohr model was that it related the lines in emission and absorption spectra to the energy differences between the orbits that electrons could take around an atom. This was, however, not achieved by Bohr through giving the electrons some kind of wave-like properties, since the idea that electrons could behave as matter waves was not suggested until eleven years later. Still, the Bohr model's use of quantized angular momenta and therefore quantized energy levels was a significant step towards the understanding of electrons in atoms, and also a significant step towards the development of quantum mechanics in suggesting that quantized restraints must account for all discontinuous energy levels and spectra in atoms. With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of hydrogen-like atoms, a Bohr electron "wavelength" could be seen to be a function of its momentum, and thus a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength. The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926. 
In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed. The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the n = 1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. Modern quantum mechanics explains this in terms of electron shells and subshells which can each hold a number of electrons determined by the Pauli exclusion principle. Thus the n = 1 state can hold one or two electrons, while the n = 2 state can hold up to eight electrons in 2s and 2p subshells. In helium, all n = 1 states are fully occupied; the same is true for n = 1 and n = 2 in neon. In argon, the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation in the hydrogen atom) and remains empty. Modern conceptions and connections to the Heisenberg uncertainty principle Immediately after Heisenberg discovered his uncertainty principle, Bohr noted that the existence of any sort of wave packet implies uncertainty in the wave frequency and wavelength, since a spread of frequencies is needed to create the packet itself. In quantum mechanics, where all particle momenta are associated with waves, it is the formation of such a wave packet which localizes the wave, and thus the particle, in space. In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. In quantum mechanics, as a particle is localized to a smaller region in space, the associated compressed wave packet requires a larger and larger range of momenta, and thus larger kinetic energy. Thus the binding energy to contain or trap a particle in a smaller region of space increases without bound as the region of space grows smaller. Particles cannot be restricted to a geometric point in space, since this would require an infinite particle momentum. In chemistry, Schrödinger, Pauling, Mulliken and others noted that the consequence of Heisenberg's relation was that the electron, as a wave packet, could not be considered to have an exact location in its orbital. Max Born suggested that the electron's position needed to be described by a probability distribution which was connected with finding the electron at some point in the wave-function which described its associated wave packet. The new quantum mechanics did not give exact results, but only the probabilities for the occurrence of a variety of possible such results. Heisenberg held that the path of a moving particle has no meaning if we cannot observe it, as we cannot with electrons in an atom. In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number n for each orbital became known as an n-sphere in a three-dimensional atom and was pictured as the most probable energy of the probability cloud of the electron's wave packet which surrounded the atom. 
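The localization argument above can be illustrated with an order-of-magnitude estimate. The sketch below (illustrative only; it uses the rough bound ΔE ≈ ħ²/(2m·Δx²) for the minimum kinetic energy of an electron confined to a region of width Δx) shows the confinement energy growing without bound as the region shrinks.

```python
# Rough minimum kinetic energy of an electron localized to a region of width dx:
# the uncertainty relation gives p ~ hbar/dx, hence an energy of order hbar^2 / (2 m dx^2).
hbar = 1.055e-34   # J s
m_e = 9.109e-31    # kg
eV = 1.602e-19     # joules per electronvolt

for dx_nm in (1.0, 0.1, 0.01, 0.001):            # 1 nm down to 1 pm
    dx = dx_nm * 1e-9
    E = hbar**2 / (2 * m_e * dx**2) / eV
    print(f"confined to {dx_nm:6.3f} nm  ->  roughly {E:12.1f} eV")
# Squeezing the electron into an atom-sized region (~0.1 nm) already costs a few eV;
# squeezing it into a far smaller, nucleus-sized region would cost vastly more, which is
# why electrons cannot be pinned to a point and why atoms have a finite size.
```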
Orbital names Orbital notation and subshells Orbitals have been given names, which are usually written in the form X typeʸ, where X is the energy level corresponding to the principal quantum number n; type is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular momentum quantum number ℓ. For example, the orbital 1s (pronounced as the individual numbers and letters: "'one' 'ess'") is the lowest energy level (n = 1) and has an angular quantum number of ℓ = 0, denoted as s. Orbitals with ℓ = 1, 2 and 3 are denoted as p, d and f respectively. The set of orbitals for a given n and ℓ is called a subshell, and is denoted by the combination of n and the type letter. The exponent y shows the number of electrons in the subshell. For example, the notation 2p⁴ indicates that the 2p subshell of an atom contains 4 electrons. This subshell has 3 orbitals, each with n = 2 and ℓ = 1. X-ray notation There is also another, less common, system still used in X-ray science known as X-ray notation, which is a continuation of the notations used before orbital theory was well understood. In this system, the principal quantum number is given a letter associated with it. For n = 1, 2, 3, 4, 5, ..., the letters associated with those numbers are K, L, M, N, O, ... respectively. Hydrogen-like orbitals The simplest atomic orbitals are those that are calculated for systems with a single electron, such as the hydrogen atom. An atom of any other element ionized down to a single electron is very similar to hydrogen, and the orbitals take the same form. In the Schrödinger equation for this system of one negative and one positive particle, the atomic orbitals are the eigenstates of the Hamiltonian operator for the energy. They can be obtained analytically, meaning that the resulting orbitals are products of a polynomial series and of exponential and trigonometric functions (see hydrogen atom). For atoms with two or more electrons, the governing equations can only be solved with the use of methods of iterative approximation. Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen, and in the simplest models, they are taken to have the same form. For more rigorous and precise analysis, numerical approximations must be used. A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: n, ℓ, and m. The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table. The stationary states (quantum states) of the hydrogen-like atoms are its atomic orbitals. However, in general, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-dependent "mixtures" (linear combinations) of multiple orbitals. See Linear combination of atomic orbitals molecular orbital method. The quantum number n first appeared in the Bohr model, where it determines the radius of each circular electron orbit. In modern quantum mechanics, however, n determines the mean distance of the electron from the nucleus; all electrons with the same value of n lie at approximately the same average distance. For this reason, orbitals with the same value of n are said to comprise a "shell". Orbitals with the same value of n and also the same value of ℓ are even more closely related, and are said to comprise a "subshell". Quantum numbers Because of the quantum mechanical nature of the electrons around a nucleus, atomic orbitals can be uniquely defined by a set of integers known as quantum numbers.
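The notation just described is easy to generate mechanically from the two quantum numbers n and ℓ. A minimal sketch (illustrative Python; the helper names are our own, not a standard library):

```python
# Spectroscopic and X-ray labels for subshells, with their orbital and electron capacities.
L_LETTERS = "spdfghik"           # l = 0, 1, 2, ... -> s, p, d, f, g, h, i, k
XRAY_LETTERS = "KLMNOPQ"         # n = 1, 2, 3, ... -> K, L, M, N, ...

def subshell_label(n: int, l: int) -> str:
    if not 0 <= l < n:
        raise ValueError("l must satisfy 0 <= l <= n - 1")
    return f"{n}{L_LETTERS[l]}"

def subshell_capacity(l: int) -> int:
    # 2l + 1 orbitals (one per value of m), two electrons per orbital (Pauli principle)
    return 2 * (2 * l + 1)

for n, l in [(1, 0), (2, 1), (3, 2), (4, 3)]:
    print(f"{subshell_label(n, l)}: {2 * l + 1} orbitals, up to {subshell_capacity(l)} electrons, "
          f"X-ray shell {XRAY_LETTERS[n - 1]}")
# 1s holds 2 electrons (shell K), 2p holds 6 (shell L), 3d holds 10 (shell M), 4f holds 14
# (shell N) -- so "2p4" denotes a 2p subshell filled with 4 of its possible 6 electrons.
```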
These quantum numbers only occur in certain combinations of values, and their physical interpretation changes depending on whether real or complex versions of the atomic orbitals are employed. Complex orbitals In physics, the most common orbital descriptions are based on the solutions to the hydrogen atom, where orbitals are given by the product between a radial function and a pure spherical harmonic. The quantum numbers, together with the rules governing their possible values, are as follows: The principal quantum number describes the energy of the electron and is always a positive integer. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Each atom has, in general, many orbitals associated with each value of n; these orbitals together are sometimes called electron shells. The azimuthal quantum number describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where is some integer , ranges across all (integer) values satisfying the relation . For instance, the  shell has only orbitals with , and the  shell has only orbitals with , and . The set of orbitals associated with a particular value of  are sometimes collectively called a subshell. The magnetic quantum number, , describes the magnetic moment of an electron in an arbitrary direction, and is also always an integer. Within a subshell where is some integer , ranges thus: . The above results may be summarized in the following table. Each cell represents a subshell, and lists the values of available in that subshell. Empty cells represent subshells that do not exist. Subshells are usually identified by their - and -values. is represented by its numerical value, but is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with and as a '2s subshell'. Each electron also has a spin quantum number, s, which describes the spin of each electron (spin up or spin down). The number s can be + or −. The Pauli exclusion principle states that no two electrons in an atom can have the same values of all four quantum numbers. If there are two electrons in an orbital with given values for three quantum numbers, (, , ), these two electrons must differ in their spin. The above conventions imply a preferred axis (for example, the z direction in Cartesian coordinates), and they also imply a preferred direction along this preferred axis. Otherwise there would be no sense in distinguishing from . As such, the model is most useful when applied to physical systems that share these symmetries. The Stern–Gerlach experiment — where an atom is exposed to a magnetic field — provides one such example. Real orbitals In addition to the complex orbitals described above, it is common, especially in the chemistry literature, to utilize real atomic orbitals. These real orbitals arise from simple linear combinations of the complex orbitals. Using the Condon-Shortley phase convention, the real atomic orbitals are related to the complex atomic orbitals in the same way that the real spherical harmonics are related to the complex spherical harmonics. Letting denote a complex atomic orbital with quantum numbers , , and , we define the real atomic orbitals by If , with the radial part of the orbital, this definition is equivalent to where is the real spherical harmonic related to either the real or imaginary part of the complex spherical harmonic . 
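The real-orbital construction just defined can be checked numerically. The sketch below assumes SciPy's scipy.special.sph_harm (which uses the Condon–Shortley convention and takes the azimuthal angle before the polar one); it builds the real p_x, p_y and p_z combinations from the complex harmonics with m = −1, 0, +1 and verifies that their imaginary parts vanish.

```python
# Build real p orbitals as linear combinations of complex spherical harmonics.
# sph_harm(m, l, theta, phi): theta = azimuthal angle in [0, 2*pi], phi = polar angle in [0, pi].
import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 5)        # a handful of random directions to test
phi = rng.uniform(0, np.pi, 5)

Ym1 = sph_harm(-1, 1, theta, phi)
Y0  = sph_harm( 0, 1, theta, phi)
Yp1 = sph_harm(+1, 1, theta, phi)

p_z = Y0                                    # the m = 0 harmonic is already real
p_x = (Ym1 - Yp1) / np.sqrt(2)              # real combination, proportional to x/r
p_y = 1j * (Ym1 + Yp1) / np.sqrt(2)         # real combination, proportional to y/r

for name, vals in [("p_z", p_z), ("p_x", p_x), ("p_y", p_y)]:
    print(name, "max |imaginary part| =", float(np.max(np.abs(vals.imag))))
# All three imaginary parts vanish to rounding error, and the angular patterns point along
# the Cartesian axes -- which is why chemists label these orbitals p_x, p_y and p_z.
```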
Real spherical harmonics are physically relevant when an atom is embedded in a crystalline solid, in which case there are multiple preferred symmetry axes but no single preferred direction. Real atomic orbitals are also more frequently encountered in introductory chemistry textbooks and shown in common orbital visualizations. In the real hydrogen-like orbitals, the quantum numbers n and ℓ have the same interpretation and significance as their complex counterparts, but m is no longer a good quantum number (though its absolute value is). Some real atomic orbitals are given specific names beyond their simple quantum-number designation. Orbitals with the quantum number ℓ equal to 0, 1, 2, or 3 are referred to as s, p, d, or f orbitals. With this it is already possible to assign names to complex orbitals, in which the first symbol is the n quantum number, the second is the letter for the ℓ quantum number, and the subscript is the m quantum number. As an example of how the full orbital names are generated for real orbitals, we may calculate p_x. From the table of spherical harmonics, Y₁¹ is proportional to −(x + iy)/r and Y₁⁻¹ to (x − iy)/r, so the real combination (Y₁⁻¹ − Y₁¹)/√2 is proportional to x/r, and the corresponding orbital is labelled p_x. Likewise, the combination i(Y₁⁻¹ + Y₁¹)/√2 is proportional to y/r and gives p_y. As a more complicated example, the analogous real combinations of Y₂⁻² and Y₂² are proportional to (x² − y²)/r² and xy/r², giving the d_(x²−y²) and d_(xy) orbitals. In all of these cases we generate a Cartesian label for the orbital by examining, and abbreviating, the polynomial in x, y, and z appearing in the numerator. We ignore any terms in the polynomial except for the term with the highest exponent in z. We then use the abbreviated polynomial as a subscript label for the atomic state, using the same nomenclature as above to indicate the n and ℓ quantum numbers. Note that the expressions above all use the Condon–Shortley phase convention, which is favored by quantum physicists. Other conventions for the phase of the spherical harmonics exist. Under these different conventions the p_x and p_y orbitals may appear, for example, as the sum and the difference of Y₁⁻¹ and Y₁¹, contrary to what is shown above. Below is a tabulation of these Cartesian polynomial names for the atomic orbitals. Note that there does not seem to be a reference in the literature on how to abbreviate the lengthy Cartesian spherical-harmonic polynomials for higher ℓ, so there does not seem to be a consensus on the naming of g orbitals or higher according to this nomenclature. Shapes of orbitals Simple pictures showing orbital shapes are intended to describe the angular forms of regions in space where the electrons occupying the orbital are likely to be found. The diagrams cannot show the entire region where an electron can be found, since according to quantum mechanics there is a non-zero probability of finding the electron (almost) anywhere in space. Instead the diagrams are approximate representations of boundary or contour surfaces where the probability density has a constant value, chosen so that there is a certain probability (for example 90%) of finding the electron within the contour. Although |ψ|², as the square of an absolute value, is everywhere non-negative, the sign of the wave function ψ is often indicated in each subregion of the orbital picture. Sometimes the ψ function will be graphed to show its phases, rather than |ψ|², which shows probability density but has no phases (these are lost in the process of taking the absolute value, since ψ is a complex number). |ψ|² orbital graphs tend to have less spherical, thinner lobes than ψ graphs, but have the same number of lobes in the same places, and are otherwise recognizable. This article, in order to show wave function phases, shows mostly ψ graphs.
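The "90% contour" mentioned above can be computed explicitly for the simplest case. The following sketch (a numerical estimate for the hydrogen 1s orbital, working in units of the Bohr radius a₀) integrates the radial probability density outward until 90% of the total probability is enclosed.

```python
# Radius of the sphere containing 90% of the hydrogen 1s probability density.
# In units of the Bohr radius, psi_1s is proportional to exp(-r), so the normalized
# radial probability density is P(r) = 4 * r**2 * exp(-2 * r).
import numpy as np

r = np.linspace(0.0, 20.0, 200001)       # radius in Bohr radii
P = 4.0 * r**2 * np.exp(-2.0 * r)        # radial probability density for 1s
cdf = np.cumsum(P) * (r[1] - r[0])       # running integral (simple rectangle rule)

r90 = r[np.searchsorted(cdf, 0.90)]
print(f"90% of the 1s density lies within r = {r90:.2f} Bohr radii")
# Prints roughly 2.66: the familiar "1s sphere" of textbook pictures is a boundary
# surface of about this size, even though the density itself extends to infinity.
```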
The lobes can be viewed as standing wave interference patterns between the two counter rotating, ring resonant travelling wave "" and "" modes, with the projection of the orbital onto the xy plane having a resonant "" wavelengths around the circumference. Though rarely depicted, the travelling wave solutions can be viewed as rotating banded tori, with the bands representing phase information. For each there are two standing wave solutions and . For the case where the orbital is vertical, counter rotating information is unknown, and the orbital is z-axis symmetric. For the case where there are no counter rotating modes. There are only radial modes and the shape is spherically symmetric. For any given , the smaller is, the more radial nodes there are. For any given , the smaller is, the fewer radial nodes there are (zero for whichever first has that orbital). Loosely speaking is energy, is analogous to eccentricity, and is orientation. In the classical case, a ring resonant travelling wave, for example in a circular transmission line, unless actively forced, will spontaneously decay into a ring resonant standing wave because reflections will build up over time at even the smallest imperfection or discontinuity. Generally speaking, the number determines the size and energy of the orbital for a given nucleus: as increases, the size of the orbital increases. When comparing different elements, the higher nuclear charge of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the overall size of the whole atom remains very roughly constant, even as the number of electrons in heavier elements (higher ) increases. Also in general terms, determines an orbital's shape, and its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on also. Together, the whole set of orbitals for a given and fill space as symmetrically as possible, though with increasingly complex sets of lobes and nodes. The single s-orbitals () are shaped like spheres. For it is roughly a solid ball (it is most dense at the center and fades exponentially outwardly), but for or more, each single s-orbital is composed of spherically symmetric surfaces which are nested shells (i.e., the "wave-structure" is radial, following a sinusoidal radial component as well). See illustration of a cross-section of these nested shells, at right. The s-orbitals for all numbers are the only orbitals with an anti-node (a region of high wave function density) at the center of the nucleus. All other orbitals (p, d, f, etc.) have angular momentum, and thus avoid the nucleus (having a wave node at the nucleus). Recently, there has been an effort to experimentally image the 1s and 2p orbitals in a SrTiO3 crystal using scanning transmission electron microscopy with energy dispersive x-ray spectroscopy. Because the imaging was conducted using an electron beam, Coulombic beam-orbital interaction that is often termed as the impact parameter effect is included in the final outcome (see the figure at right). The shapes of p, d and f-orbitals are described verbally here and shown graphically in the Orbitals table below. The three p-orbitals for have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell"—there are two lobes pointing in opposite directions from each other). 
The three p-orbitals in each shell are oriented at right angles to each other, as determined by their respective linear combination of values of . The overall result is a lobe pointing along each direction of the primary axes. Four of the five d-orbitals for look similar, each with four pear-shaped lobes, each lobe tangent at right angles to two others, and the centers of all four lying in one plane. Three of these planes are the xy-, xz-, and yz-planes—the lobes are between the pairs of primary axes—and the fourth has the centre along the x and y axes themselves. The fifth and final d-orbital consists of three regions of high probability density: a torus in between two pear-shaped regions placed symmetrically on its z axis. The overall total of 18 directional lobes point in every primary axis direction and between every pair. There are seven f-orbitals, each with shapes more complex than those of the d-orbitals. Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with values higher than the lowest possible value, exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. As with s orbitals, this phenomenon provides p, d, f, and g orbitals at the next higher possible value of (for example, 3p orbitals vs. the fundamental 2p), an additional node in each lobe. Still higher values of further increase the number of radial nodes, for each type of orbital. The shapes of atomic orbitals in one-electron atom are related to 3-dimensional spherical harmonics. These shapes are not unique, and any linear combination is valid, like a transformation to cubic harmonics, in fact it is possible to generate sets where all the d's are the same shape, just like the and are the same shape. Although individual orbitals are most often shown independent of each other, the orbitals coexist around the nucleus at the same time. Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number of the same shell (e.g. all three 2p orbitals, or all five 3d orbitals) where each orbital is occupied by an electron or each is occupied by an electron pair, then all angular dependence disappears; that is, the resulting total density of all the atomic orbitals in that subshell (those with the same ) is spherical. This is known as Unsöld's theorem. Orbitals table This table shows all orbital configurations for the real hydrogen-like wave functions up to 7s, and therefore covers the simple electronic configuration for all elements in the periodic table up to radium. "ψ" graphs are shown with − and + wave function phases shown in two different colors (arbitrarily red and blue). The orbital is the same as the orbital, but the and are formed by taking linear combinations of the and orbitals (which is why they are listed under the label). Also, the and are not the same shape as the , since they are pure spherical harmonics. * No elements with this magnetic quantum number have been discovered yet. † The elements with this magnetic quantum number have been discovered, but their electronic configuration is only a prediction. ‡ The electronic configuration of the elements with this magnetic quantum number has only been confirmed for a spin quantum number of +1/2. These are the real-valued orbitals commonly used in chemistry. Only the orbitals where are eigenstates of the orbital angular momentum operator, . 
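Unsöld's theorem, mentioned just above, is easy to verify numerically: summing |Yℓm|² over all m for a given ℓ gives the constant (2ℓ + 1)/4π in every direction. A short check (again assuming SciPy's scipy.special.sph_harm, with its azimuthal-then-polar argument order):

```python
# Numerical check of Unsold's theorem: the sum over m of |Y_lm|^2 is direction-independent.
import numpy as np
from scipy.special import sph_harm

theta = np.linspace(0.1, 6.0, 7)          # azimuthal angles
phi = np.linspace(0.1, 3.0, 7)            # polar angles

for l in (1, 2, 3):                       # p, d and f subshells
    total = sum(np.abs(sph_harm(m, l, theta, phi))**2 for m in range(-l, l + 1))
    expected = (2 * l + 1) / (4 * np.pi)
    print(f"l={l}: value = {total[0]:.4f}, (2l+1)/(4*pi) = {expected:.4f}, "
          f"spread over directions = {np.ptp(total):.1e}")
# The spread is at the level of round-off error: a filled p, d or f subshell has a
# spherically symmetric total density, exactly as Unsold's theorem states.
```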
The columns with m ≠ 0 contain combinations of two such eigenstates. See comparison in the following picture: Qualitative understanding of shapes The shapes of atomic orbitals can be qualitatively understood by considering the analogous case of standing waves on a circular drum. To see the analogy, the mean vibrational displacement of each bit of drum membrane from the equilibrium point over many cycles (a measure of average drum membrane velocity and momentum at that point) must be considered relative to that point's distance from the center of the drum head. If this displacement is taken as being analogous to the probability of finding an electron at a given distance from the nucleus, then it will be seen that the many modes of the vibrating disk form patterns that trace the various shapes of atomic orbitals. The basic reason for this correspondence lies in the fact that the distribution of kinetic energy and momentum in a matter-wave is predictive of where the particle associated with the wave will be. That is, the probability of finding an electron at a given place is also a function of the electron's average momentum at that point, since high electron momentum at a given position tends to "localize" the electron in that position, via the properties of electron wave-packets (see the Heisenberg uncertainty principle for details of the mechanism). This relationship means that certain key features can be observed in both drum membrane modes and atomic orbitals. For example, in all of the modes analogous to s orbitals (the top row in the animated illustration below), it can be seen that the very center of the drum membrane vibrates most strongly, corresponding to the antinode in all s orbitals in an atom. This antinode means the electron is most likely to be at the physical position of the nucleus (which it passes straight through without scattering or striking it), since it is moving (on average) most rapidly at that point, giving it maximal momentum. A mental "planetary orbit" picture closest to the behavior of electrons in s orbitals, all of which have no angular momentum, might perhaps be that of a Keplerian orbit with an orbital eccentricity of 1 but a finite major axis; this is not physically possible (because the particles would collide), but it can be imagined as a limit of orbits with equal major axes but increasing eccentricity. Below, a number of drum membrane vibration modes and the respective wave functions of the hydrogen atom are shown. A correspondence can be considered where the wave functions of a vibrating drum head are for the two coordinates ψ(r, θ) and the wave functions for a vibrating sphere are for the three coordinates ψ(r, θ, φ). None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. These correspond to a node at the nucleus for all non-s orbitals in an atom. These orbitals all have some angular momentum, and in the planetary model, they correspond to particles in orbit with eccentricity less than 1.0, so that they do not pass straight through the center of the primary body, but keep somewhat away from it. In addition, the drum modes analogous to p and d modes in an atom show spatial irregularity along the different radial directions from the center of the drum, whereas all of the modes analogous to s modes are perfectly symmetrical in the radial direction.
The non radial-symmetry properties of non-s orbitals are necessary to localize a particle with angular momentum and a wave nature in an orbital where it must tend to stay away from the central attraction force, since any particle localized at the point of central attraction could have no angular momentum. For these modes, waves in the drum head tend to avoid the central point. Such features again emphasize that the shapes of atomic orbitals are a direct consequence of the wave nature of electrons. Orbital energy In atoms with a single electron (hydrogen-like atoms), the energy of an orbital (and, consequently, of any electrons in the orbital) is determined mainly by . The orbital has the lowest possible energy in the atom. Each successively higher value of has a higher level of energy, but the difference decreases as increases. For high , the level of energy becomes so high that the electron can easily escape from the atom. In single electron atoms, all levels with different within a given are degenerate in the Schrödinger approximation, and have the same energy. This approximation is broken to a slight extent in the solution to the Dirac equation (where the energy depends on and another quantum number ), and by the effect of the magnetic field of the nucleus and quantum electrodynamics effects. The latter induce tiny binding energy differences especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift. In atoms with multiple electrons, the energy of an electron depends not only on the intrinsic properties of its orbital, but also on its interactions with the other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on but also on . Higher values of are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When , the increase in energy of the orbital becomes so large as to push the energy of orbital above the energy of the s-orbital in the next higher shell; when the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled. The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low angular momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. Thus, in atoms of higher atomic number, the of electrons becomes more and more of a determining factor in their energy, and the principal quantum numbers of electrons becomes less and less important in their energy placement. The energy sequence of the first 35 subshells (e.g., 1s, 2p, 3d, etc.) is given in the following table. Each cell represents a subshell with and given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. For a linear listing of the subshells in terms of increasing energies in multielectron atoms, see the section below. Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could (potentially) exist, but which do not hold electrons in any element currently known. Electron placement and the periodic table Several rules govern the placement of electrons in orbitals (electron configuration). 
The first dictates that no two electrons in an atom may have the same set of values of quantum numbers (this is the Pauli exclusion principle). These quantum numbers include the three that define orbitals, as well as , or spin quantum number. Thus, two electrons may occupy a single orbital, so long as they have different values of . However, only two electrons, because of their spin, can be associated with each orbital. Additionally, an electron always tends to fall to the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above. This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number i, it consists of elements whose outermost electrons fall in the ith shell. Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the periodic filling of the electron energy levels, resulting in the electronic structure of the atom. The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same -state (but the associated with that -state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell. The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p The "periodic" nature of the filling of orbitals, as well as emergence of the s, p, d, and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Then, each subshell (composed of the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain. The result is a compressed periodic table, with each entry representing two successive elements: Although this is the general order of orbital filling according to the Madelung rule, there are exceptions, and the actual electronic energies of each element are also dependent upon additional details of the atoms (see ). The number of electrons in an electrically neutral atom increases with the atomic number. The electrons in the outermost shell, or valence electrons, tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties. Relativistic effects For elements with high atomic number , the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high- atoms. 
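The speeds involved can be estimated with the Bohr-model relation quoted below, v ≈ Zαc for an innermost (n = 1) electron. A quick illustrative sketch (an order-of-magnitude estimate only; accurate treatments of many-electron atoms require relativistic Dirac–Fock calculations):

```python
# Rough innermost-electron speeds from the Bohr-model estimate v ~ Z * alpha * c,
# plus the corresponding relativistic factor gamma = 1 / sqrt(1 - (v/c)^2).
import math

alpha = 1 / 137.035999        # fine-structure constant

for name, Z in [("hydrogen", 1), ("silver", 47), ("gold", 79), ("fermium", 100)]:
    beta = Z * alpha                            # v / c for an n = 1 electron
    gamma = 1 / math.sqrt(1 - beta**2)
    print(f"{name:9s} (Z = {Z:3d}): v/c ~ {beta:.2f}, gamma ~ {gamma:.2f}")
# Gold's innermost electrons move at roughly 0.58 c (gamma ~ 1.2); this is the kind of
# relativistic contraction responsible for the effects on mercury and gold described next.
```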
This relativistic increase in momentum for high-speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison to corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy. Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium. In the Bohr model, an n = 1 electron has a velocity given by v = Zαc, where Z is the atomic number, α is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with Z > 137 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of Z due to the non-point-charge nature of the nucleus and the very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than Z. The critical Z value, which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron–positron pairs, does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron–positron production from these effects has been claimed to be observed. There are no nodes in relativistic orbital densities, although individual components of the wave function will have nodes. pp hybridisation (conjectured) In late period-8 elements a hybrid of 8p3/2 and 9p1/2 is expected to exist, where "3/2" and "1/2" refer to the total angular momentum quantum number. This "pp" hybrid may be responsible for the p-block of the period due to properties similar to p subshells in ordinary valence shells. Energy levels of 8p3/2 and 9p1/2 come close due to relativistic spin–orbit effects; the 9s subshell should also participate, as these elements are expected to be analogous to the respective 5p elements indium through xenon. Transitions between orbitals Bound quantum states have discrete energy levels. When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e., an electron absorbing or emitting a photon) can thus only happen if the photon has an energy corresponding with the exact energy difference between said states. Consider two states of the hydrogen atom: state 1, with n = 1, and state 2, with n = 2. By quantum theory, state 1 has a fixed energy E1, and state 2 has a fixed energy E2. Now, what would happen if an electron in state 1 were to move to state 2? For this to happen, the electron would need to gain an energy of exactly E2 − E1. If the electron receives energy that is less than or greater than this value, it cannot jump from state 1 to state 2. Now, suppose we irradiate the atom with a broad spectrum of light.
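As a concrete illustration of this energy matching, the following sketch uses the textbook hydrogen level energies E_n ≈ −13.6 eV/n² (consistent with the Bohr values quoted earlier) to compute E2 − E1 and the wavelength of the only photon that can drive the jump from state 1 to state 2:

```python
# Energy and wavelength of the photon needed for the hydrogen n = 1 -> n = 2 transition.
h = 6.626e-34      # Planck constant (J s)
c = 2.998e8        # speed of light (m/s)
eV = 1.602e-19     # joules per electronvolt

def E(n):
    return -13.6 / n**2        # hydrogen level energy in eV

delta_E = E(2) - E(1)                        # about 10.2 eV
wavelength = h * c / (delta_E * eV)          # photon wavelength matching that energy
print(f"E2 - E1 = {delta_E:.2f} eV, photon wavelength = {wavelength * 1e9:.1f} nm")
# About 10.2 eV and 121.6 nm -- the Lyman-alpha line; a photon of any other energy
# cannot move the electron between these two particular states.
```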
Photons that reach the atom with an energy of exactly E2 − E1 will be absorbed by the electron in state 1, and that electron will jump to state 2. However, photons that are higher or lower in energy cannot be absorbed by the electron, because the electron can only jump to one of the orbitals; it cannot jump to a state between orbitals. The result is that only photons of a specific frequency will be absorbed by the atom. This creates a line in the spectrum, known as an absorption line, which corresponds to the energy difference between states 1 and 2. The atomic orbital model thus predicts line spectra, which are observed experimentally. This is one of the main validations of the atomic orbital model. The atomic orbital model is nevertheless an approximation to the full quantum theory, which only recognizes many-electron states. The predictions of line spectra are qualitatively useful but are not quantitatively accurate for atoms and ions other than those containing only one electron. See also Atomic electron configuration table Wiswesser's rule Condensed matter physics Electron configuration Energy level Hund's rules Molecular orbital Quantum chemistry Quantum chemistry computer programs Solid state physics Wave function collapse Notes References External links 3D hydrogen orbitals on Wikimedia Commons Guide to atomic orbitals Covalent Bonds and Molecular Structure Animation of the time evolution of an hydrogenic orbital The Orbitron, a visualization of all common and uncommon atomic orbitals, from 1s to 7g Grand table Still images of many orbitals
Atomic orbital
Alan Mathison Turing (; 23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence. Born in Maida Vale, London, Turing was raised in southern England. He graduated at King's College, Cambridge, with a degree in mathematics. Whilst he was a fellow at Cambridge, he published a proof demonstrating that some purely mathematical yes–no questions can never be answered by computation and defined a Turing machine, and went on to prove the halting problem for Turing machines is undecidable. In 1938, he obtained his PhD from the Department of Mathematics at Princeton University. During the Second World War, Turing worked for the Government Code and Cypher School (GC&CS) at Bletchley Park, Britain's codebreaking centre that produced Ultra intelligence. For a time he led Hut 8, the section that was responsible for German naval cryptanalysis. Here, he devised a number of techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bombe method, an electromechanical machine that could find settings for the Enigma machine. Turing played a crucial role in cracking intercepted coded messages that enabled the Allies to defeat the Axis powers in many crucial engagements, including the Battle of the Atlantic. After the war, Turing worked at the National Physical Laboratory, where he designed the Automatic Computing Engine (ACE), one of the first designs for a stored-program computer. In 1948, Turing joined Max Newman's Computing Machine Laboratory, at the Victoria University of Manchester, where he helped develop the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s. Despite these accomplishments, he was never fully recognised in his home country during his lifetime because much of his work was covered by the Official Secrets Act. Turing was prosecuted in 1952 for homosexual acts. He accepted hormone treatment with DES, so-called chemical castration, as an alternative to prison. In 2009, following an Internet campaign, British Prime Minister Gordon Brown made an official public apology on behalf of the British government for "the appalling way he was treated". Queen Elizabeth II granted Turing a posthumous pardon in 2013. The "Alan Turing law" is now an informal term for a 2017 law in the United Kingdom that retroactively pardoned men cautioned or convicted under historical legislation that outlawed homosexual acts. Turing died in 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death as a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning. Turing has an extensive legacy with statues of him and many things named after him, including an annual award for computer science innovations. He appears on the current Bank of England £50 note, which was released to coincide with his birthday. 
A 2019 BBC series, as voted by the audience, named him the greatest person of the 20th century. Early life and education Family Turing was born in Maida Vale, London, while his father, Julius Mathison Turing (1873–1947), was on leave from his position with the Indian Civil Service (ICS) at Chatrapur, then in the Madras Presidency and presently in Odisha state, in India. Turing's father was the son of a clergyman, the Rev. John Robert Turing, from a Scottish family of merchants that had been based in the Netherlands and included a baronet. Turing's mother, Julius's wife, was Ethel Sara Turing (; 1881–1976), daughter of Edward Waller Stoney, chief engineer of the Madras Railways. The Stoneys were a Protestant Anglo-Irish gentry family from both County Tipperary and County Longford, while Ethel herself had spent much of her childhood in County Clare. Julius's work with the ICS brought the family to British India, where his grandfather had been a general in the Bengal Army. However, both Julius and Ethel wanted their children to be brought up in Britain, so they moved to Maida Vale, London, where Alan Turing was born on 23 June 1912, as recorded by a blue plaque on the outside of the house of his birth, later the Colonnade Hotel. Turing had an elder brother, John (the father of Sir John Dermot Turing, 12th Baronet of the Turing baronets). Turing's father's civil service commission was still active and during Turing's childhood years, his parents travelled between Hastings in the United Kingdom and India, leaving their two sons to stay with a retired Army couple. At Hastings, Turing stayed at Baston Lodge, Upper Maze Hill, St Leonards-on-Sea, now marked with a blue plaque. The plaque was unveiled on 23 June 2012, the centenary of Turing's birth. Very early in life, Turing showed signs of the genius that he was later to display prominently. His parents purchased a house in Guildford in 1927, and Turing lived there during school holidays. The location is also marked with a blue plaque. School Turing's parents enrolled him at St Michael's, a primary school at 20 Charles Road, St Leonards-on-Sea, from the age of six to nine. The headmistress recognised his talent, noting that she has "...had clever boys and hardworking boys, but Alan is a genius." Between January 1922 and 1926, Turing was educated at Hazelhurst Preparatory School, an independent school in the village of Frant in Sussex (now East Sussex). In 1926, at the age of 13, he went on to Sherborne School, a boarding independent school in the market town of Sherborne in Dorset, where he boarded at Westcott House. The first day of term coincided with the 1926 General Strike, in Britain, but Turing was so determined to attend, that he rode his bicycle unaccompanied from Southampton to Sherborne, stopping overnight at an inn. Turing's natural inclination towards mathematics and science did not earn him respect from some of the teachers at Sherborne, whose definition of education placed more emphasis on the classics. His headmaster wrote to his parents: "I hope he will not fall between two stools. If he is to stay at public school, he must aim at becoming educated. If he is to be solely a Scientific Specialist, he is wasting his time at a public school". Despite this, Turing continued to show remarkable ability in the studies he loved, solving advanced problems in 1927 without having studied even elementary calculus. 
In 1928, aged 16, Turing encountered Albert Einstein's work; not only did he grasp it, but it is possible that he managed to deduce Einstein's questioning of Newton's laws of motion from a text in which this was never made explicit. Christopher Morcom At Sherborne, Turing formed a significant friendship with fellow pupil Christopher Collan Morcom (13 July 1911 – 13 February 1930), who has been described as Turing's "first love". Their relationship provided inspiration in Turing's future endeavours, but it was cut short by Morcom's death, in February 1930, from complications of bovine tuberculosis, contracted after drinking infected cow's milk some years previously. The event caused Turing great sorrow. He coped with his grief by working that much harder on the topics of science and mathematics that he had shared with Morcom. In a letter to Morcom's mother, Frances Isobel Morcom (née Swan), Turing wrote: Turing's relationship with Morcom's mother continued long after Morcom's death, with her sending gifts to Turing, and him sending letters, typically on Morcom's birthday. A day before the third anniversary of Morcom's death (13 February 1933), he wrote to Mrs. Morcom: Some have speculated that Morcom's death was the cause of Turing's atheism and materialism. Apparently, at this point in his life he still believed in such concepts as a spirit, independent of the body and surviving death. In a later letter, also written to Morcom's mother, Turing wrote: University and work on computability After Sherborne, Turing studied as an undergraduate from 1931 to 1934 at King's College, Cambridge, where he was awarded first-class honours in mathematics. In 1935, at the age of 22, he was elected a Fellow of King's College on the strength of a dissertation in which he proved the central limit theorem. Unknown to the committee, the theorem had already been proven, in 1922, by Jarl Waldemar Lindeberg. In 1936, Turing published his paper "On Computable Numbers, with an Application to the Entscheidungsproblem". It was published in the Proceedings of the London Mathematical Society journal in two parts, the first on 30 November and the second on 23 December. In this paper, Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. The Entscheidungsproblem (decision problem) was originally posed by German mathematician David Hilbert in 1928. Turing proved that his "universal computing machine" would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the decision problem by first showing that the halting problem for Turing machines is undecidable: it is not possible to decide algorithmically whether a Turing machine will ever halt. This paper has been called "easily the most influential math paper in history". Although Turing's proof was published shortly after Alonzo Church's equivalent proof using his lambda calculus, Turing's approach is considerably more accessible and intuitive than Church's. It also included a notion of a 'Universal Machine' (now known as a universal Turing machine), with the idea that such a machine could perform the tasks of any other computation machine (as indeed could Church's lambda calculus). 
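To make the abstract machine described here concrete, the following is a minimal simulator sketch (our own illustrative Python, not Turing's original notation): a machine is just a table mapping (state, symbol) to (new symbol, head movement, new state), run against an unbounded tape. The example program scans over a block of 1s, appends one more 1, and halts.

```python
# A minimal Turing machine simulator: a finite state, a transition table, and an unbounded tape.
from collections import defaultdict

def run(program, tape_input, start="q0", halt="HALT", max_steps=1000):
    tape = defaultdict(lambda: "_", enumerate(tape_input))   # "_" is the blank symbol
    state, head = start, 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape[head]
        new_symbol, move, state = program[(state, symbol)]   # look up the transition
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_"), state

# Example machine: move right over the 1s; at the first blank, write a 1 and halt.
append_one = {
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "_"): ("1", "R", "HALT"),
}
print(run(append_one, "111"))   # -> ('1111', 'HALT')
```

A universal machine, in Turing's sense, is then a single transition table that can read another machine's table from the tape and simulate it.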
According to the Church–Turing thesis, Turing machines and the lambda calculus are capable of computing anything that is computable. John von Neumann acknowledged that the central concept of the modern computer was due to Turing's paper. To this day, Turing machines are a central object of study in theory of computation. From September 1936 to July 1938, Turing spent most of his time studying under Church at Princeton University, in the second year as a Jane Eliza Procter Visiting Fellow. In addition to his purely mathematical work, he studied cryptology and also built three of four stages of an electro-mechanical binary multiplier. In June 1938, he obtained his PhD from the Department of Mathematics at Princeton; his dissertation, Systems of Logic Based on Ordinals, introduced the concept of ordinal logic and the notion of relative computing, in which Turing machines are augmented with so-called oracles, allowing the study of problems that cannot be solved by Turing machines. John von Neumann wanted to hire him as his postdoctoral assistant, but he went back to the United Kingdom. Career and research When Turing returned to Cambridge, he attended lectures given in 1939 by Ludwig Wittgenstein about the foundations of mathematics. The lectures have been reconstructed verbatim, including interjections from Turing and other students, from students' notes. Turing and Wittgenstein argued and disagreed, with Turing defending formalism and Wittgenstein propounding his view that mathematics does not discover any absolute truths, but rather invents them. Cryptanalysis During the Second World War, Turing was a leading participant in the breaking of German ciphers at Bletchley Park. The historian and wartime codebreaker Asa Briggs has said, "You needed exceptional talent, you needed genius at Bletchley and Turing's was that genius." From September 1938, Turing worked part-time with the Government Code and Cypher School (GC&CS), the British codebreaking organisation. He concentrated on cryptanalysis of the Enigma cipher machine used by Nazi Germany, together with Dilly Knox, a senior GC&CS codebreaker. Soon after the July 1939 meeting near Warsaw at which the Polish Cipher Bureau gave the British and French details of the wiring of Enigma machine's rotors and their method of decrypting Enigma machine's messages, Turing and Knox developed a broader solution. The Polish method relied on an insecure indicator procedure that the Germans were likely to change, which they in fact did in May 1940. Turing's approach was more general, using crib-based decryption for which he produced the functional specification of the bombe (an improvement on the Polish Bomba). On 4 September 1939, the day after the UK declared war on Germany, Turing reported to Bletchley Park, the wartime station of GC&CS. Like all others who came to Bletchley, he was required to sign the Official Secrets Act, in which he agreed not to disclose anything about his work at Bletchley, with severe legal penalties for violating the Act. Specifying the bombe was the first of five major cryptanalytical advances that Turing made during the war. 
The others were: deducing the indicator procedure used by the German navy; developing a statistical procedure dubbed Banburismus for making much more efficient use of the bombes; developing a procedure dubbed Turingery for working out the cam settings of the wheels of the Lorenz SZ 40/42 (Tunny) cipher machine and, towards the end of the war, the development of a portable secure voice scrambler at Hanslope Park that was codenamed Delilah. By using statistical techniques to optimise the trial of different possibilities in the code breaking process, Turing made an innovative contribution to the subject. He wrote two papers discussing mathematical approaches, titled The Applications of Probability to Cryptography and Paper on Statistics of Repetitions, which were of such value to GC&CS and its successor GCHQ that they were not released to the UK National Archives until April 2012, shortly before the centenary of his birth. A GCHQ mathematician, "who identified himself only as Richard," said at the time that the fact that the contents had been restricted under the Official Secrets Act for some 70 years demonstrated their importance, and their relevance to post-war cryptanalysis: Turing had a reputation for eccentricity at Bletchley Park. He was known to his colleagues as "Prof" and his treatise on Enigma was known as the "Prof's Book". According to historian Ronald Lewin, Jack Good, a cryptanalyst who worked with Turing, said of his colleague: Peter Hilton recounted his experience working with Turing in Hut 8 in his "Reminiscences of Bletchley Park" from A Century of Mathematics in America: Hilton echoed similar thoughts in the Nova PBS documentary Decoding Nazi Secrets. While working at Bletchley, Turing, who was a talented long-distance runner, occasionally ran the to London when he was needed for meetings, and he was capable of world-class marathon standards. Turing tried out for the 1948 British Olympic team, but he was hampered by an injury. His tryout time for the marathon was only 11 minutes slower than British silver medallist Thomas Richards' Olympic race time of 2 hours 35 minutes. He was Walton Athletic Club's best runner, a fact discovered when he passed the group while running alone. When asked why he ran so hard in training he replied: Due to the problems of counterfactual history, it is hard to estimate the precise effect Ultra intelligence had on the war. However, official war historian Harry Hinsley estimated that this work shortened the war in Europe by more than two years and saved over 14 million lives. At the end of the war, a memo was sent to all those who had worked at Bletchley Park, reminding them that the code of silence dictated by the Official Secrets Act did not end with the war but would continue indefinitely. Thus, even though Turing was appointed an Officer of the Order of the British Empire (OBE) in 1946 by King George VI for his wartime services, his work remained secret for many years. Bombe Within weeks of arriving at Bletchley Park, Turing had specified an electromechanical machine called the bombe, which could break Enigma more effectively than the Polish bomba kryptologiczna, from which its name was derived. The bombe, with an enhancement suggested by mathematician Gordon Welchman, became one of the primary tools, and the major automated one, used to attack Enigma-enciphered messages. 
The bombe searched for possible correct settings used for an Enigma message (i.e., rotor order, rotor settings and plugboard settings) using a suitable crib: a fragment of probable plaintext. For each possible setting of the rotors (which had on the order of 10¹⁹ states, or 10²² states for the four-rotor U-boat variant), the bombe performed a chain of logical deductions based on the crib, implemented electromechanically. The bombe detected when a contradiction had occurred and ruled out that setting, moving on to the next. Most of the possible settings would cause contradictions and be discarded, leaving only a few to be investigated in detail. A contradiction would occur when an enciphered letter would be turned back into the same plaintext letter, which was impossible with the Enigma. The first bombe was installed on 18 March 1940. By late 1941, Turing and his fellow cryptanalysts Gordon Welchman, Hugh Alexander and Stuart Milner-Barry were frustrated. Building on the work of the Poles, they had set up a good working system for decrypting Enigma signals, but their limited staff and bombes meant they could not translate all the signals. In the summer, they had considerable success, and shipping losses had fallen to under 100,000 tons a month; however, they badly needed more resources to keep abreast of German adjustments. They had tried to get more people and fund more bombes through the proper channels, but had failed. On 28 October they wrote directly to Winston Churchill explaining their difficulties, with Turing as the first named. They emphasised how small their need was compared with the vast expenditure of men and money by the forces and compared with the level of assistance they could offer to the forces. As Andrew Hodges, biographer of Turing, later wrote, "This letter had an electric effect." Churchill wrote a memo to General Ismay, which read: "ACTION THIS DAY. Make sure they have all they want on extreme priority and report to me that this has been done." On 18 November, the chief of the secret service reported that every possible measure was being taken. The cryptographers at Bletchley Park did not know of the Prime Minister's response, but as Milner-Barry recalled, "All that we did notice was that almost from that day the rough ways began miraculously to be made smooth." More than two hundred bombes were in operation by the end of the war. Hut 8 and the naval Enigma Turing decided to tackle the particularly difficult problem of German naval Enigma "because no one else was doing anything about it and I could have it to myself". In December 1939, Turing solved the essential part of the naval indicator system, which was more complex than the indicator systems used by the other services. That same night, he also conceived of the idea of Banburismus, a sequential statistical technique (what Abraham Wald later called sequential analysis) to assist in breaking the naval Enigma, "though I was not sure that it would work in practice, and was not, in fact, sure until some days had actually broken." For this, he invented a measure of weight of evidence that he called the ban. Banburismus could rule out certain sequences of the Enigma rotors, substantially reducing the time needed to test settings on the bombes. Later, this sequential process of accumulating sufficient weight of evidence using decibans (one tenth of a ban) was used in Cryptanalysis of the Lorenz cipher.
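Turing's ban is essentially a base-10 logarithmic unit for what is now called the Bayes factor: an observation contributes log10 of the ratio of its probabilities under the two competing hypotheses, and a deciban is a tenth of that, so independent observations can be tallied by simple addition. A small sketch of this bookkeeping (the probabilities below are invented for illustration and are not actual Banburismus values):

```python
# Weight of evidence in decibans: 10 * log10( P(observation | H1) / P(observation | H2) ).
# Independent observations add their decibans, which made the unit convenient for hand tallies.
import math

def decibans(p_given_h1, p_given_h2):
    return 10.0 * math.log10(p_given_h1 / p_given_h2)

# Hypothetical observations, each given as (probability under H1, probability under H2):
# an event twice as likely under H1 as under H2 is worth about +3 decibans.
observations = [(0.10, 0.05), (0.08, 0.04), (0.02, 0.04)]
total = sum(decibans(a, b) for a, b in observations)
print(f"total weight of evidence = {total:+.1f} decibans")

# Starting from even prior odds, the posterior odds in favour of H1 follow directly:
posterior_odds = 10 ** (total / 10.0)
print(f"posterior odds for H1 = {posterior_odds:.1f} : 1")
```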
Turing travelled to the United States in November 1942 and worked with US Navy cryptanalysts on the naval Enigma and bombe construction in Washington; he also visited their Computing Machine Laboratory in Dayton, Ohio. Turing's reaction to the American bombe design was far from enthusiastic: During this trip, he also assisted at Bell Labs with the development of secure speech devices. He returned to Bletchley Park in March 1943. During his absence, Hugh Alexander had officially assumed the position of head of Hut 8, although Alexander had been de facto head for some time (Turing having little interest in the day-to-day running of the section). Turing became a general consultant for cryptanalysis at Bletchley Park. Alexander wrote of Turing's contribution: Turingery In July 1942, Turing devised a technique termed Turingery (or jokingly Turingismus) for use against the Lorenz cipher messages produced by the Germans' new Geheimschreiber (secret writer) machine. This was a teleprinter rotor cipher attachment codenamed Tunny at Bletchley Park. Turingery was a method of wheel-breaking, i.e., a procedure for working out the cam settings of Tunny's wheels. He also introduced the Tunny team to Tommy Flowers who, under the guidance of Max Newman, went on to build the Colossus computer, the world's first programmable digital electronic computer, which replaced a simpler prior machine (the Heath Robinson), and whose superior speed allowed the statistical decryption techniques to be applied usefully to the messages. Some have mistakenly said that Turing was a key figure in the design of the Colossus computer. Turingery and the statistical approach of Banburismus undoubtedly fed into the thinking about cryptanalysis of the Lorenz cipher, but he was not directly involved in the Colossus development. Delilah Following his work at Bell Labs in the US, Turing pursued the idea of electronic enciphering of speech in the telephone system. In the latter part of the war, he moved to work for the Secret Service's Radio Security Service (later HMGCC) at Hanslope Park. At the park, he further developed his knowledge of electronics with the assistance of engineer Donald Bayley. Together they undertook the design and construction of a portable secure voice communications machine codenamed Delilah. The machine was intended for different applications, but it lacked the capability for use with long-distance radio transmissions. In any case, Delilah was completed too late to be used during the war. Though the system worked fully, with Turing demonstrating it to officials by encrypting and decrypting a recording of a Winston Churchill speech, Delilah was not adopted for use. Turing also consulted with Bell Labs on the development of SIGSALY, a secure voice system that was used in the later years of the war. Early computers and the Turing test Between 1945 and 1947, Turing lived in Hampton, London, while he worked on the design of the ACE (Automatic Computing Engine) at the National Physical Laboratory (NPL). He presented a paper on 19 February 1946, which was the first detailed design of a stored-program computer. Von Neumann's incomplete First Draft of a Report on the EDVAC had predated Turing's paper, but it was much less detailed and, according to John R. Womersley, Superintendent of the NPL Mathematics Division, it "contains a number of ideas which are Dr. Turing's own". 
Although ACE was a feasible design, the effect of the Official Secrets Act surrounding the wartime work at Bletchley Park made it impossible for Turing to explain the basis of his analysis of how a computer installation involving human operators would work. This led to delays in starting the project and he became disillusioned. In late 1947 he returned to Cambridge for a sabbatical year during which he produced a seminal work on Intelligent Machinery that was not published in his lifetime. While he was at Cambridge, the Pilot ACE was being built in his absence. It executed its first program on 10 May 1950, and a number of later computers around the world owe much to it, including the English Electric DEUCE and the American Bendix G-15. The full version of Turing's ACE was not built until after his death. According to the memoirs of the German computer pioneer Heinz Billing from the Max Planck Institute for Physics, published by Genscher, Düsseldorf, there was a meeting between Turing and Konrad Zuse. It took place in Göttingen in 1947. The interrogation had the form of a colloquium. Participants were Womersley, Turing, Porter from England and a few German researchers like Zuse, Walther, and Billing (for more details see Herbert Bruderer, Konrad Zuse und die Schweiz). In 1948, Turing was appointed reader in the Mathematics Department at the Victoria University of Manchester. A year later, he became deputy director of the Computing Machine Laboratory, where he worked on software for one of the earliest stored-program computers—the Manchester Mark 1. Turing wrote the first version of the Programmer's Manual for this machine, and was recruited by Ferranti as a consultant in the development of their commercialised machine, the Ferranti Mark 1. He continued to be paid consultancy fees by Ferranti until his death. During this time, he continued to do more abstract work in mathematics, and in "Computing Machinery and Intelligence" (Mind, October 1950), Turing addressed the problem of artificial intelligence, and proposed an experiment that became known as the Turing test, an attempt to define a standard for a machine to be called "intelligent". The idea was that a computer could be said to "think" if a human interrogator could not tell it apart, through conversation, from a human being. In the paper, Turing suggested that rather than building a program to simulate the adult mind, it would be better to produce a simpler one to simulate a child's mind and then to subject it to a course of education. A reversed form of the Turing test is widely used on the Internet; the CAPTCHA test is intended to determine whether the user is a human or a computer. In 1948, Turing, working with his former undergraduate colleague, D.G. Champernowne, began writing a chess program for a computer that did not yet exist. By 1950, the program was completed and dubbed the Turochamp. In 1952, he tried to implement it on a Ferranti Mark 1, but lacking enough power, the computer was unable to execute the program. Instead, Turing "ran" the program by flipping through the pages of the algorithm and carrying out its instructions on a chessboard, taking about half an hour per move. The game was recorded. According to Garry Kasparov, Turing's program "played a recognizable game of chess." The program lost to Turing's colleague Alick Glennie, although it is said that it won a game against Champernowne's wife, Isabel. 
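Turochamp's hand simulation rested on explicit, hand-computable heuristics, with material count the most important. The Ada fragment below is only a loose sketch of that idea, not a reconstruction of Turing and Champernowne's actual rules: it merely sums conventional piece values for each side and reports the balance, and every name and count in it is invented for illustration.

with Ada.Text_IO; use Ada.Text_IO;

procedure Material_Sketch is
   type Piece is (Pawn, Knight, Bishop, Rook, Queen);
   type Piece_Counts is array (Piece) of Natural;

   --  Conventional point values; the real Turochamp also weighed mobility,
   --  king safety and other heuristics.
   Value : constant array (Piece) of Integer :=
     (Pawn => 1, Knight => 3, Bishop => 3, Rook => 5, Queen => 9);

   function Material (Counts : Piece_Counts) return Integer is
      Total : Integer := 0;
   begin
      for P in Piece loop
         Total := Total + Value (P) * Counts (P);
      end loop;
      return Total;
   end Material;

   --  An arbitrary material situation, invented for the example.
   White : constant Piece_Counts := (Pawn => 8, Knight => 2, Bishop => 2, Rook => 2, Queen => 1);
   Black : constant Piece_Counts := (Pawn => 7, Knight => 2, Bishop => 1, Rook => 2, Queen => 1);
begin
   Put_Line ("Material balance (White minus Black):"
             & Integer'Image (Material (White) - Material (Black)));
end Material_Sketch;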
His Turing test was a significant, characteristically provocative, and lasting contribution to the debate regarding artificial intelligence, which continues after more than half a century. Pattern formation and mathematical biology When Turing was 39 years old in 1951, he turned to mathematical biology, finally publishing his masterpiece "The Chemical Basis of Morphogenesis" in January 1952. He was interested in morphogenesis, the development of patterns and shapes in biological organisms. He suggested that a system of chemicals reacting with each other and diffusing across space, termed a reaction–diffusion system, could account for "the main phenomena of morphogenesis". He used systems of partial differential equations to model catalytic chemical reactions. For example, if a catalyst A is required for a certain chemical reaction to take place, and if the reaction produced more of the catalyst A, then we say that the reaction is autocatalytic, and there is positive feedback that can be modelled by nonlinear differential equations. Turing discovered that patterns could be created if the chemical reaction not only produced catalyst A, but also produced an inhibitor B that slowed down the production of A. If A and B then diffused through the container at different rates, then you could have some regions where A dominated and some where B did. To calculate the extent of this, Turing would have needed a powerful computer, but these were not so freely available in 1951, so he had to use linear approximations to solve the equations by hand. These calculations gave the right qualitative results, and produced, for example, a uniform mixture that oddly enough had regularly spaced fixed red spots. The Russian biochemist Boris Belousov had performed experiments with similar results, but could not get his papers published because of the contemporary prejudice that any such thing violated the second law of thermodynamics. Belousov was not aware of Turing's paper in the Philosophical Transactions of the Royal Society. Although published before the structure and role of DNA was understood, Turing's work on morphogenesis remains relevant today and is considered a seminal piece of work in mathematical biology. One of the early applications of Turing's paper was the work by James Murray explaining spots and stripes on the fur of cats, large and small. Further research in the area suggests that Turing's work can partially explain the growth of "feathers, hair follicles, the branching pattern of lungs, and even the left-right asymmetry that puts the heart on the left side of the chest." In 2012, Sheth, et al. found that in mice, removal of Hox genes causes an increase in the number of digits without an increase in the overall size of the limb, suggesting that Hox genes control digit formation by tuning the wavelength of a Turing-type mechanism. Later papers were not available until Collected Works of A. M. Turing was published in 1992. Personal life Engagement In 1941, Turing proposed marriage to Hut 8 colleague Joan Clarke, a fellow mathematician and cryptanalyst, but their engagement was short-lived. After admitting his homosexuality to his fiancée, who was reportedly "unfazed" by the revelation, Turing decided that he could not go through with the marriage. Conviction for indecency In January 1952, Turing was 39 when he started a relationship with Arnold Murray, a 19-year-old unemployed man. 
Just before Christmas, Turing was walking along Manchester's Oxford Road when he met Murray just outside the Regal Cinema and invited him to lunch. On 23 January, Turing's house was burgled. Murray told Turing that he and the burglar were acquainted, and Turing reported the crime to the police. During the investigation, he acknowledged a sexual relationship with Murray. Homosexual acts were criminal offences in the United Kingdom at that time, and both men were charged with "gross indecency" under Section 11 of the Criminal Law Amendment Act 1885. Initial committal proceedings for the trial were held on 27 February during which Turing's solicitor "reserved his defence", i.e., did not argue or provide evidence against the allegations. Turing was later convinced by the advice of his brother and his own solicitor, and he entered a plea of guilty. The case, Regina v. Turing and Murray, was brought to trial on 31 March 1952. Turing was convicted and given a choice between imprisonment and probation. His probation would be conditional on his agreement to undergo hormonal physical changes designed to reduce libido. He accepted the option of injections of what was then called stilboestrol (now known as diethylstilbestrol or DES), a synthetic oestrogen; this feminization of his body was continued for the course of one year. The treatment rendered Turing impotent and caused breast tissue to form, fulfilling in the literal sense Turing's prediction that "no doubt I shall emerge from it all a different man, but quite who I've not found out". Murray was given a conditional discharge. Turing's conviction led to the removal of his security clearance and barred him from continuing with his cryptographic consultancy for the Government Communications Headquarters (GCHQ), the British signals intelligence agency that had evolved from GC&CS in 1946, though he kept his academic job. He was denied entry into the United States after his conviction in 1952, but was free to visit other European countries. Death On 8 June 1954, at his house at 43 Adlington Road, Wilmslow, Turing's housekeeper found him dead. He had died the previous day at the age of 41. Cyanide poisoning was established as the cause of death. When his body was discovered, an apple lay half-eaten beside his bed, and although the apple was not tested for cyanide, it was speculated that this was the means by which Turing had consumed a fatal dose. An inquest determined that he had committed suicide. Andrew Hodges and another biographer, David Leavitt, have both speculated that Turing was re-enacting a scene from the Walt Disney film Snow White and the Seven Dwarfs (1937), his favourite fairy tale. Both men noted that (in Leavitt's words) he took "an especially keen pleasure in the scene where the Wicked Queen immerses her apple in the poisonous brew". Turing's remains were cremated at Woking Crematorium on 12 June 1954, and his ashes were scattered in the gardens of the crematorium, just as his father's had been. Philosopher Jack Copeland has questioned various aspects of the coroner's historical verdict. He suggested an alternative explanation for the cause of Turing's death: the accidental inhalation of cyanide fumes from an apparatus used to electroplate gold onto spoons. The potassium cyanide was used to dissolve the gold. Turing had such an apparatus set up in his tiny spare room. Copeland noted that the autopsy findings were more consistent with inhalation than with ingestion of the poison. 
Turing also habitually ate an apple before going to bed, and it was not unusual for the apple to be discarded half-eaten. Furthermore, Turing had reportedly borne his legal setbacks and hormone treatment (which had been discontinued a year previously) "with good humour" and had shown no sign of despondency prior to his death. He even set down a list of tasks that he intended to complete upon returning to his office after the holiday weekend. Turing's mother believed that the ingestion was accidental, resulting from her son's careless storage of laboratory chemicals. Biographer Andrew Hodges theorised that Turing arranged the delivery of the equipment to deliberately allow his mother plausible deniability with regard to any suicide claims. It has been suggested that Turing's belief in fortune-telling may have caused his depressed mood. As a youth, Turing had been told by a fortune-teller that he would be a genius. In mid-May 1954, shortly before his death, Turing again decided to consult a fortune-teller during a day-trip to St Annes-on-Sea with the Greenbaum family. According to the Greenbaums' daughter, Barbara: But it was a lovely sunny day and Alan was in a cheerful mood and off we went... Then he thought it would be a good idea to go to the Pleasure Beach at Blackpool. We found a fortune-teller's tent[,] and Alan said he'd like to go in[,] so we waited around for him to come back... And this sunny, cheerful visage had shrunk into a pale, shaking, horror-stricken face. Something had happened. We don't know what the fortune-teller said[,] but he obviously was deeply unhappy. I think that was probably the last time we saw him before we heard of his suicide. Government apology and pardon In August 2009, British programmer John Graham-Cumming started a petition urging the British government to apologise for Turing's prosecution as a homosexual. The petition received more than 30,000 signatures. The Prime Minister, Gordon Brown, acknowledged the petition, releasing a statement on 10 September 2009 apologising and describing the treatment of Turing as "appalling": In December 2011, William Jones and his Member of Parliament, John Leech, created an e-petition requesting that the British government pardon Turing for his conviction of "gross indecency": The petition gathered over 37,000 signatures, and was submitted to Parliament by the Manchester MP John Leech but the request was discouraged by Justice Minister Lord McNally, who said: John Leech, the MP for Manchester Withington (2005–15), submitted several bills to Parliament and led a high-profile campaign to secure the pardon. Leech made the case in the House of Commons that Turing's contribution to the war made him a national hero and that it was "ultimately just embarrassing" that the conviction still stood. Leech continued to take the bill through Parliament and campaigned for several years, gaining the public support of numerous leading scientists, including Stephen Hawking. At the British premiere of a film based on Turing's life, The Imitation Game, the producers thanked Leech for bringing the topic to public attention and securing Turing's pardon. Leech is now regularly described as the "architect" of Turing's pardon and subsequently the Alan Turing Law which went on to secure pardons for 75,000 other men and women convicted of similar crimes. 
On 26 July 2012, a bill was introduced in the House of Lords to grant a statutory pardon to Turing for offences under section 11 of the Criminal Law Amendment Act 1885, of which he was convicted on 31 March 1952. Late in the year in a letter to The Daily Telegraph, the physicist Stephen Hawking and 10 other signatories including the Astronomer Royal Lord Rees, President of the Royal Society Sir Paul Nurse, Lady Trumpington (who worked for Turing during the war) and Lord Sharkey (the bill's sponsor) called on Prime Minister David Cameron to act on the pardon request. The government indicated it would support the bill, and it passed its third reading in the House of Lords in October. At the bill's second reading in the House of Commons on 29 November 2013, Conservative MP Christopher Chope objected to the bill, delaying its passage. The bill was due to return to the House of Commons on 28 February 2014, but before the bill could be debated in the House of Commons, the government elected to proceed under the royal prerogative of mercy. On 24 December 2013, Queen Elizabeth II signed a pardon for Turing's conviction for "gross indecency", with immediate effect. Announcing the pardon, Lord Chancellor Chris Grayling said Turing deserved to be "remembered and recognised for his fantastic contribution to the war effort" and not for his later criminal conviction. The Queen officially pronounced Turing pardoned in August 2014. The Queen's action is only the fourth royal pardon granted since the conclusion of the Second World War. Pardons are normally granted only when the person is technically innocent, and a request has been made by the family or other interested party; neither condition was met in regard to Turing's conviction. In September 2016, the government announced its intention to expand this retroactive exoneration to other men convicted of similar historical indecency offences, in what was described as an "Alan Turing law". The Alan Turing law is now an informal term for the law in the United Kingdom, contained in the Policing and Crime Act 2017, which serves as an amnesty law to retroactively pardon men who were cautioned or convicted under historical legislation that outlawed homosexual acts. The law applies in England and Wales. Legacy Awards, honours, and tributes Turing was appointed an officer of the Order of the British Empire in 1946. He was also elected a Fellow of the Royal Society (FRS) in 1951. Turing has been honoured in various ways in Manchester, the city where he worked towards the end of his life. In 1994, a stretch of the A6010 road (the Manchester city intermediate ring road) was named "Alan Turing Way". A bridge carrying this road was widened, and carries the name Alan Turing Bridge. A statue of Turing was unveiled in Manchester on 23 June 2001 in Sackville Park, between the University of Manchester building on Whitworth Street and Canal Street. The memorial statue depicts the "father of computer science" sitting on a bench at a central position in the park. Turing is shown holding an apple. The cast bronze bench carries in relief the text 'Alan Mathison Turing 1912–1954', and the motto 'Founder of Computer Science' as it could appear if encoded by an Enigma machine: 'IEKYF ROMSI ADXUO KVKZC GUBJ'. However, the meaning of the coded message is disputed, as the 'u' in 'computer' matches up with the 'u' in 'ADXUO'. As a letter encoded by an enigma machine cannot appear as itself, the actual message behind the code is uncertain. 
A plaque at the statue's feet reads 'Father of computer science, mathematician, logician, wartime codebreaker, victim of prejudice'. There is also a Bertrand Russell quotation: "Mathematics, rightly viewed, possesses not only truth, but supreme beauty—a beauty cold and austere, like that of sculpture." The sculptor buried his own old Amstrad computer under the plinth as a tribute to "the godfather of all modern computers". In 1999, Time magazine named Turing as one of the 100 Most Important People of the 20th century and stated, "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine." A blue plaque was unveiled at King's College on the centenary of his birth on 23 June 2012 and is now installed at the college's Keynes Building on King's Parade. On 25 March 2021, the Bank of England publicly unveiled the design for a new £50 note, featuring Turing's portrait, before its official issue on 23 June, Turing's birthday. Turing was selected as the new face of the note in 2019 following a public nomination process. Centenary celebrations To mark the 100th anniversary of Turing's birth, the Turing Centenary Advisory Committee (TCAC) co-ordinated the Alan Turing Year in 2012, a year-long programme of events around the world honouring Turing's life and achievements. The TCAC, chaired by S. Barry Cooper with Turing's nephew Sir John Dermot Turing acting as Honorary President, worked with the University of Manchester faculty members and a broad spectrum of people from Cambridge University and Bletchley Park. Steel sculpture controversy In May 2020 it was reported by Gay Star News that a high steel sculpture, to honour Turing, designed by Sir Antony Gormley, was planned to be installed at King's College, Cambridge. Historic England, however, was quoted as saying that the abstract work of 19 steel slabs "... would be at odds with the existing character of the College. This would result in harm, of a less than substantial nature, to the significance of the listed buildings and landscape, and by extension the conservation area." References Sources Bruderer, Herbert: Konrad Zuse und die Schweiz. Wer hat den Computer erfunden? Charles Babbage, Alan Turing und John von Neumann Oldenbourg Verlag, München 2012, XXVI, 224 Seiten, in Petzold, Charles (2008). "The Annotated Turing: A Guided Tour through Alan Turing's Historic Paper on Computability and the Turing Machine". Indianapolis: Wiley Publishing. Smith, Roger (1997). Fontana History of the Human Sciences. London: Fontana. Weizenbaum, Joseph (1976). Computer Power and Human Reason. London: W.H. Freeman. and Turing's mother, who survived him by many years, wrote this 157-page biography of her son, glorifying his life. It was published in 1959, and so could not cover his war work. Scarcely 300 copies were sold (Sara Turing to Lyn Newman, 1967, Library of St John's College, Cambridge). The six-page foreword by Lyn Irvine includes reminiscences and is more frequently quoted. It was re-published by Cambridge University Press in 2012, to honour the centenary of his birth, and included a new foreword by Martin Davis, as well as a never-before-published memoir by Turing's older brother John F. Turing. This 1986 Hugh Whitemore play tells the story of Turing's life and death. In the original West End and Broadway runs, Derek Jacobi played Turing and he recreated the role in a 1997 television film based on the play made jointly by the BBC and WGBH, Boston. 
The play is published by Amber Lane Press, Oxford, ASIN: B000B7TM0Q Williams, Michael R. (1985) A History of Computing Technology, Englewood Cliffs, New Jersey: Prentice-Hall, Further reading Articles Books (originally published in 1983); basis of the film The Imitation Game (originally published in 1959 by W. Heffer & Sons, Ltd) External links Oral history interview with Nicholas C. Metropolis, Charles Babbage Institute, University of Minnesota. Metropolis was the first director of computing services at Los Alamos National Laboratory; topics include the relationship between Turing and John von Neumann How Alan Turing Cracked The Enigma Code Imperial War Museums Alan Turing RKBExplorer Alan Turing Year CiE 2012: Turing Centenary Conference Science in the Making Alan Turing's papers in the Royal Society's archives Alan Turing site maintained by Andrew Hodges including a short biography AlanTuring.net – Turing Archive for the History of Computing by Jack Copeland The Turing Archive – contains scans of some unpublished documents and material from the King's College, Cambridge archive Alan Turing Papers – University of Manchester Library, Manchester Sherborne School Archives – holds papers relating to Turing's time at Sherborne School Alan Turing plaques recorded on openplaques.org Alan Turing archive on New Scientist 1912 births 1954 deaths 1954 suicides 20th-century mathematicians 20th-century atheists 20th-century British scientists 20th-century English philosophers Academics of the University of Manchester Academics of the University of Manchester Institute of Science and Technology Alumni of King's College, Cambridge Artificial intelligence researchers Bayesian statisticians Bletchley Park people British anti-fascists British cryptographers British people of World War II Computability theorists Computer designers English atheists English computer scientists English inventors English logicians English male long-distance runners English mathematicians English people of Irish descent English people of Scottish descent Fellows of King's College, Cambridge Fellows of the Royal Society Former Protestants Foreign Office personnel of World War II Gay academics Gay scientists Gay sportsmen GCHQ people History of artificial intelligence History of computing in the United Kingdom LGBT-related suicides LGBT mathematicians LGBT philosophers LGBT scientists from the United Kingdom LGBT sportspeople from England LGBT track and field athletes Officers of the Order of the British Empire People educated at Sherborne School People from Maida Vale People from Wilmslow People convicted for homosexuality in the United Kingdom People who have received posthumous pardons Princeton University alumni Recipients of British royal pardons Suicides by cyanide poisoning Suicides in England Theoretical computer scientists Deaths by poisoning
Alan Turing
Ada is a structured, statically typed, imperative, and object-oriented high-level programming language, extended from Pascal and other languages. It has built-in language support for design by contract (DbC), extremely strong typing, explicit concurrency, tasks, synchronous message passing, protected objects, and non-determinism. Ada improves code safety and maintainability by using the compiler to find errors at compile time rather than leaving them to surface as runtime errors. Ada is an international technical standard, jointly defined by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The current standard, called Ada 2012 informally, is ISO/IEC 8652:2012. Ada was originally designed by a team led by French computer scientist Jean Ichbiah of CII Honeywell Bull under contract to the United States Department of Defense (DoD) from 1977 to 1983 to supersede over 450 programming languages used by the DoD at that time. Ada was named after Ada Lovelace (1815–1852), who has been credited as the first computer programmer. Features Ada was originally designed for embedded and real-time systems. The Ada 95 revision, designed by S. Tucker Taft of Intermetrics between 1992 and 1995, improved support for systems, numerical, financial, and object-oriented programming (OOP). Features of Ada include: strong typing, modular programming mechanisms (packages), run-time checking, parallel processing (tasks, synchronous message passing, protected objects, and nondeterministic select statements), exception handling, and generics. Ada 95 added support for object-oriented programming, including dynamic dispatch. The syntax of Ada minimizes choices of ways to perform basic operations, and prefers English keywords (such as "or else" and "and then") to symbols (such as "||" and "&&"). Ada uses the basic arithmetical operators "+", "-", "*", and "/", but avoids using other symbols. Code blocks are delimited by words such as "declare", "begin", and "end", where the "end" (in most cases) is followed by the identifier of the block it closes (e.g., if ... end if, loop ... end loop). In the case of conditional blocks, this avoids a dangling else that could pair with the wrong nested if-expression in other languages like C or Java. Ada is designed for developing very large software systems. Ada packages can be compiled separately. Ada package specifications (the package interface) can also be compiled separately without the implementation to check for consistency. This makes it possible to detect problems early during the design phase, before implementation starts. A large number of compile-time checks are supported to help avoid bugs that would not be detectable until run-time in some other languages or would require explicit checks to be added to the source code. For example, the syntax requires explicitly named closing of blocks to prevent errors due to mismatched end tokens. The adherence to strong typing allows detecting many common software errors (wrong parameters, range violations, invalid references, mismatched types, etc.) either during compile-time, or otherwise during run-time. As concurrency is part of the language specification, the compiler can in some cases detect potential deadlocks. Compilers also commonly check for misspelled identifiers, visibility of packages, redundant declarations, etc., and can provide warnings and useful suggestions on how to fix the error.
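A short sketch of how these checks play out in practice (the types and values here are invented for illustration): two structurally similar but distinct integer types cannot be mixed without an explicit conversion, so the commented-out assignment is rejected at compile time, while an assignment whose value falls outside the declared range raises Constraint_Error at run time.

with Ada.Text_IO; use Ada.Text_IO;

procedure Typing_Sketch is
   type Apples  is range 0 .. 100;  --  two distinct types with identical ranges
   type Oranges is range 0 .. 100;

   A : Apples  := 10;
   O : Oranges := 20;
begin
   --  A := O;          --  illegal: Apples and Oranges are different types (compile-time error)
   A := Apples (O);     --  fine: the conversion is explicit
   A := A * 11;         --  220 is outside 0 .. 100, so Constraint_Error is raised at run time
   Put_Line ("not reached");
exception
   when Constraint_Error =>
      Put_Line ("range violation caught at run time");
end Typing_Sketch;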
Ada also supports run-time checks to protect against access to unallocated memory, buffer overflow errors, range violations, off-by-one errors, array access errors, and other detectable bugs. These checks can be disabled in the interest of runtime efficiency, but can often be compiled efficiently. It also includes facilities to help program verification. For these reasons, Ada is widely used in critical systems, where any anomaly might lead to very serious consequences, e.g., accidental death, injury or severe financial loss. Examples of systems where Ada is used include avionics, air traffic control, railways, banking, military and space technology. Ada's dynamic memory management is high-level and type-safe. Ada has no generic or untyped pointers; nor does it implicitly declare any pointer type. Instead, all dynamic memory allocation and deallocation must occur via explicitly declared access types. Each access type has an associated storage pool that handles the low-level details of memory management; the programmer can either use the default storage pool or define new ones (this is particularly relevant for Non-Uniform Memory Access). It is even possible to declare several different access types that all designate the same type but use different storage pools. Also, the language provides for accessibility checks, both at compile time and at run time, that ensures that an access value cannot outlive the type of the object it points to. Though the semantics of the language allow automatic garbage collection of inaccessible objects, most implementations do not support it by default, as it would cause unpredictable behaviour in real-time systems. Ada does support a limited form of region-based memory management; also, creative use of storage pools can provide for a limited form of automatic garbage collection, since destroying a storage pool also destroys all the objects in the pool. A double-dash ("--"), resembling an em dash, denotes comment text. Comments stop at end of line, to prevent unclosed comments from accidentally voiding whole sections of source code. Disabling a whole block of code now requires the prefixing of each line (or column) individually with "--". While clearly denoting disabled code with a column of repeated "--" down the page this renders the experimental dis/re-enablement of large blocks a more drawn out process. The semicolon (";") is a statement terminator, and the null or no-operation statement is null;. A single ; without a statement to terminate is not allowed. Unlike most ISO standards, the Ada language definition (known as the Ada Reference Manual or ARM, or sometimes the Language Reference Manual or LRM) is free content. Thus, it is a common reference for Ada programmers, not only programmers implementing Ada compilers. Apart from the reference manual, there is also an extensive rationale document which explains the language design and the use of various language constructs. This document is also widely used by programmers. When the language was revised, a new rationale document was written. One notable free software tool that is used by many Ada programmers to aid them in writing Ada source code is the GNAT Programming Studio, part of the GNU Compiler Collection. History In the 1970s the US Department of Defense (DoD) became concerned by the number of different programming languages being used for its embedded computer system projects, many of which were obsolete or hardware-dependent, and none of which supported safe modular programming. 
In 1975, a working group, the High Order Language Working Group (HOLWG), was formed with the intent to reduce this number by finding or creating a programming language generally suitable for the department's and the UK Ministry of Defence's requirements. After many iterations beginning with an original Straw man proposal the eventual programming language was named Ada. The total number of high-level programming languages in use for such projects fell from over 450 in 1983 to 37 by 1996. The HOLWG working group crafted the Steelman language requirements, a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications. Requests for proposals for a new programming language were issued and four contractors were hired to develop their proposals under the names of Red (Intermetrics led by Benjamin Brosgol), Green (CII Honeywell Bull, led by Jean Ichbiah), Blue (SofTech, led by John Goodenough) and Yellow (SRI International, led by Jay Spitzen). In April 1978, after public scrutiny, the Red and Green proposals passed to the next phase. In May 1979, the Green proposal, designed by Jean Ichbiah at CII Honeywell Bull, was chosen and given the name Ada—after Augusta Ada, Countess of Lovelace. This proposal was influenced by the language LIS that Ichbiah and his group had developed in the 1970s. The preliminary Ada reference manual was published in ACM SIGPLAN Notices in June 1979. The Military Standard reference manual was approved on December 10, 1980 (Ada Lovelace's birthday), and given the number MIL-STD-1815 in honor of Ada Lovelace's birth year. In 1981, C. A. R. Hoare took advantage of his Turing Award speech to criticize Ada for being overly complex and hence unreliable, but subsequently seemed to recant in the foreword he wrote for an Ada textbook. Ada attracted much attention from the programming community as a whole during its early days. Its backers and others predicted that it might become a dominant language for general purpose programming and not only defense-related work. Ichbiah publicly stated that within ten years, only two programming languages would remain: Ada and Lisp. Early Ada compilers struggled to implement the large, complex language, and both compile-time and run-time performance tended to be slow and tools primitive. Compiler vendors expended most of their efforts in passing the massive, language-conformance-testing, government-required "ACVC" validation suite that was required in another novel feature of the Ada language effort. The Jargon File, a dictionary of computer hacker slang originating in 1975–1983, notes in an entry on Ada that "it is precisely what one might expect given that kind of endorsement by fiat; designed by committee...difficult to use, and overall a disastrous, multi-billion-dollar boondoggle...Ada Lovelace...would almost certainly blanch at the use her name has been latterly put to; the kindest thing that has been said about it is that there is probably a good small language screaming to get out from inside its vast, elephantine bulk." The first validated Ada implementation was the NYU Ada/Ed translator, certified on April 11, 1983. NYU Ada/Ed is implemented in the high-level set language SETL. 
Several commercial companies began offering Ada compilers and associated development tools, including Alsys, TeleSoft, DDC-I, Advanced Computer Techniques, Tartan Laboratories, Irvine Compiler, TLD Systems, and Verdix. In 1991, the US Department of Defense began to require the use of Ada (the Ada mandate) for all software, though exceptions to this rule were often granted. The Department of Defense Ada mandate was effectively removed in 1997, as the DoD began to embrace commercial off-the-shelf (COTS) technology. Similar requirements existed in other NATO countries: Ada was required for NATO systems involving command and control and other functions, and Ada was the mandated or preferred language for defense-related applications in countries such as Sweden, Germany, and Canada. By the late 1980s and early 1990s, Ada compilers had improved in performance, but there were still barriers to fully exploiting Ada's abilities, including a tasking model that was different from what most real-time programmers were used to. Because of Ada's safety-critical support features, it is now used not only for military applications, but also in commercial projects where a software bug can have severe consequences, e.g., avionics and air traffic control, commercial rockets such as the Ariane 4 and 5, satellites and other space systems, railway transport and banking. For example, the Airplane Information Management System, the fly-by-wire system software in the Boeing 777, was written in Ada. Developed by Honeywell Air Transport Systems in collaboration with consultants from DDC-I, it became arguably the best-known of any Ada project, civilian or military. The Canadian Automated Air Traffic System was written in 1 million lines of Ada (SLOC count). It featured advanced distributed processing, a distributed Ada database, and object-oriented design. Ada is also used in other air traffic systems, e.g., the UK's next-generation Interim Future Area Control Tools Support (iFACTS) air traffic control system is designed and implemented using SPARK Ada. It is also used in the French TVM in-cab signalling system on the TGV high-speed rail system, and the metro suburban trains in Paris, London, Hong Kong and New York City. Standardization The language became an ANSI standard in 1983 (ANSI/MIL-STD 1815A), and after translation in French and without any further changes in English became an ISO standard in 1987 (ISO-8652:1987). This version of the language is commonly known as Ada 83, from the date of its adoption by ANSI, but is sometimes referred to also as Ada 87, from the date of its adoption by ISO. Ada 95, the joint ISO/ANSI standard (ISO-8652:1995) was published in February 1995, making Ada 95 the first ISO standard object-oriented programming language. To help with the standard revision and future acceptance, the US Air Force funded the development of the GNAT Compiler. Presently, the GNAT Compiler is part of the GNU Compiler Collection. Work has continued on improving and updating the technical content of the Ada language. A Technical Corrigendum to Ada 95 was published in October 2001, and a major Amendment, ISO/IEC 8652:1995/Amd 1:2007 was published on March 9, 2007. At the Ada-Europe 2012 conference in Stockholm, the Ada Resource Association (ARA) and Ada-Europe announced the completion of the design of the latest version of the Ada language and the submission of the reference manual to the International Organization for Standardization (ISO) for approval. ISO/IEC 8652:2012 was published in December 2012. 
Other related standards include ISO 8651-3:1988 Information processing systems—Computer graphics—Graphical Kernel System (GKS) language bindings—Part 3: Ada. Language constructs Ada is an ALGOL-like programming language featuring control structures with reserved words such as if, then, else, while, for, and so on. However, Ada also has many data structuring facilities and other abstractions which were not included in the original ALGOL 60, such as type definitions, records, pointers, enumerations. Such constructs were in part inherited from or inspired by Pascal. "Hello, world!" in Ada A common example of a language's syntax is the Hello world program: (hello.adb) with Ada.Text_IO; use Ada.Text_IO; procedure Hello is begin Put_Line ("Hello, world!"); end Hello; This program can be compiled by using the freely available open source compiler GNAT, by executing gnatmake hello.adb Data types Ada's type system is not based on a set of predefined primitive types but allows users to declare their own types. This declaration in turn is not based on the internal representation of the type but on describing the goal which should be achieved. This allows the compiler to determine a suitable memory size for the type, and to check for violations of the type definition at compile time and run time (i.e., range violations, buffer overruns, type consistency, etc.). Ada supports numerical types defined by a range, modulo types, aggregate types (records and arrays), and enumeration types. Access types define a reference to an instance of a specified type; untyped pointers are not permitted. Special types provided by the language are task types and protected types. For example, a date might be represented as: type Day_type is range 1 .. 31; type Month_type is range 1 .. 12; type Year_type is range 1800 .. 2100; type Hours is mod 24; type Weekday is (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday); type Date is record Day : Day_type; Month : Month_type; Year : Year_type; end record; Types can be refined by declaring subtypes: subtype Working_Hours is Hours range 0 .. 12; -- at most 12 Hours to work a day subtype Working_Day is Weekday range Monday .. Friday; -- Days to work Work_Load: constant array(Working_Day) of Working_Hours -- implicit type declaration := (Friday => 6, Monday => 4, others => 10); -- lookup table for working hours with initialization Types can have modifiers such as limited, abstract, private etc. Private types can only be accessed and limited types can only be modified or copied within the scope of the package that defines them. Ada 95 adds further features for object-oriented extension of types. Control structures Ada is a structured programming language, meaning that the flow of control is structured into standard statements. All standard constructs and deep-level early exit are supported, so the use of the also supported "go to" commands is seldom needed. -- while a is not equal to b, loop. while a /= b loop Ada.Text_IO.Put_Line ("Waiting"); end loop; if a > b then Ada.Text_IO.Put_Line ("Condition met"); else Ada.Text_IO.Put_Line ("Condition not met"); end if; for i in 1 .. 
10 loop Ada.Text_IO.Put ("Iteration: "); Ada.Text_IO.Put (i); Ada.Text_IO.Put_Line; end loop; loop a := a + 1; exit when a = 10; end loop; case i is when 0 => Ada.Text_IO.Put ("zero"); when 1 => Ada.Text_IO.Put ("one"); when 2 => Ada.Text_IO.Put ("two"); -- case statements have to cover all possible cases: when others => Ada.Text_IO.Put ("none of the above"); end case; for aWeekday in Weekday'Range loop -- loop over an enumeration Put_Line ( Weekday'Image(aWeekday) ); -- output string representation of an enumeration if aWeekday in Working_Day then -- check of a subtype of an enumeration Put_Line ( " to work for " & Working_Hours'Image (Work_Load(aWeekday)) ); -- access into a lookup table end if; end loop; Packages, procedures and functions Among the parts of an Ada program are packages, procedures and functions. Example: Package specification (example.ads) package Example is type Number is range 1 .. 11; procedure Print_and_Increment (j: in out Number); end Example; Package body (example.adb) with Ada.Text_IO; package body Example is i : Number := Number'First; procedure Print_and_Increment (j: in out Number) is function Next (k: in Number) return Number is begin return k + 1; end Next; begin Ada.Text_IO.Put_Line ( "The total is: " & Number'Image(j) ); j := Next (j); end Print_and_Increment; -- package initialization executed when the package is elaborated begin while i < Number'Last loop Print_and_Increment (i); end loop; end Example; This program can be compiled, e.g., by using the freely available open-source compiler GNAT, by executing gnatmake -z example.adb Packages, procedures and functions can nest to any depth, and each can also be the logical outermost block. Each package, procedure or function can have its own declarations of constants, types, variables, and other procedures, functions and packages, which can be declared in any order. Concurrency Ada has language support for task-based concurrency. The fundamental concurrent unit in Ada is a task, which is a built-in limited type. Tasks are specified in two parts – the task declaration defines the task interface (similar to a type declaration), the task body specifies the implementation of the task. Depending on the implementation, Ada tasks are either mapped to operating system threads or processes, or are scheduled internally by the Ada runtime. Tasks can have entries for synchronisation (a form of synchronous message passing). Task entries are declared in the task specification. Each task entry can have one or more accept statements within the task body. If the control flow of the task reaches an accept statement, the task is blocked until the corresponding entry is called by another task (similarly, a calling task is blocked until the called task reaches the corresponding accept statement). Task entries can have parameters similar to procedures, allowing tasks to synchronously exchange data. In conjunction with select statements it is possible to define guards on accept statements (similar to Dijkstra's guarded commands). Ada also offers protected objects for mutual exclusion. Protected objects are a monitor-like construct, but use guards instead of conditional variables for signaling (similar to conditional critical regions). Protected objects combine the data encapsulation and safe mutual exclusion from monitors, and entry guards from conditional critical regions. 
The main advantage over classical monitors is that conditional variables are not required for signaling, avoiding potential deadlocks due to incorrect locking semantics. Like tasks, the protected object is a built-in limited type, and it also has a declaration part and a body. A protected object consists of encapsulated private data (which can only be accessed from within the protected object), and procedures, functions and entries which are guaranteed to be mutually exclusive (with the only exception of functions, which are required to be side effect free and can therefore run concurrently with other functions). A task calling a protected object is blocked if another task is currently executing inside the same protected object, and released when this other task leaves the protected object. Blocked tasks are queued on the protected object ordered by time of arrival. Protected object entries are similar to procedures, but additionally have guards. If a guard evaluates to false, a calling task is blocked and added to the queue of that entry; now another task can be admitted to the protected object, as no task is currently executing inside the protected object. Guards are re-evaluated whenever a task leaves the protected object, as this is the only time when the evaluation of guards can have changed. Calls to entries can be requeued to other entries with the same signature. A task that is requeued is blocked and added to the queue of the target entry; this means that the protected object is released and allows admission of another task. The select statement in Ada can be used to implement non-blocking entry calls and accepts, non-deterministic selection of entries (also with guards), time-outs and aborts. The following example illustrates some concepts of concurrent programming in Ada. with Ada.Text_IO; use Ada.Text_IO; procedure Traffic is type Airplane_ID is range 1..10; -- 10 airplanes task type Airplane (ID: Airplane_ID); -- task representing airplanes, with ID as initialisation parameter type Airplane_Access is access Airplane; -- reference type to Airplane protected type Runway is -- the shared runway (protected to allow concurrent access) entry Assign_Aircraft (ID: Airplane_ID); -- all entries are guaranteed mutually exclusive entry Cleared_Runway (ID: Airplane_ID); entry Wait_For_Clear; private Clear: Boolean := True; -- protected private data - generally more than only a flag... 
end Runway; type Runway_Access is access all Runway; -- the air traffic controller task takes requests for takeoff and landing task type Controller (My_Runway: Runway_Access) is -- task entries for synchronous message passing entry Request_Takeoff (ID: in Airplane_ID; Takeoff: out Runway_Access); entry Request_Approach(ID: in Airplane_ID; Approach: out Runway_Access); end Controller; -- allocation of instances Runway1 : aliased Runway; -- instantiate a runway Controller1: Controller (Runway1'Access); -- and a controller to manage it ------ the implementations of the above types ------ protected body Runway is entry Assign_Aircraft (ID: Airplane_ID) when Clear is -- the entry guard - calling tasks are blocked until the condition is true begin Clear := False; Put_Line (Airplane_ID'Image (ID) & " on runway "); end; entry Cleared_Runway (ID: Airplane_ID) when not Clear is begin Clear := True; Put_Line (Airplane_ID'Image (ID) & " cleared runway "); end; entry Wait_For_Clear when Clear is begin null; -- no need to do anything here - a task can only enter if "Clear" is true end; end Runway; task body Controller is begin loop My_Runway.Wait_For_Clear; -- wait until runway is available (blocking call) select -- wait for two types of requests (whichever is runnable first) when Request_Approach'count = 0 => -- guard statement - only accept if there are no tasks queuing on Request_Approach accept Request_Takeoff (ID: in Airplane_ID; Takeoff: out Runway_Access) do -- start of synchronized part My_Runway.Assign_Aircraft (ID); -- reserve runway (potentially blocking call if protected object busy or entry guard false) Takeoff := My_Runway; -- assign "out" parameter value to tell airplane which runway end Request_Takeoff; -- end of the synchronised part or accept Request_Approach (ID: in Airplane_ID; Approach: out Runway_Access) do My_Runway.Assign_Aircraft (ID); Approach := My_Runway; end Request_Approach; or -- terminate if no tasks left who could call terminate; end select; end loop; end; task body Airplane is Rwy : Runway_Access; begin Controller1.Request_Takeoff (ID, Rwy); -- This call blocks until Controller task accepts and completes the accept block Put_Line (Airplane_ID'Image (ID) & " taking off..."); delay 2.0; Rwy.Cleared_Runway (ID); -- call will not block as "Clear" in Rwy is now false and no other tasks should be inside protected object delay 5.0; -- fly around a bit... loop select -- try to request a runway Controller1.Request_Approach (ID, Rwy); -- this is a blocking call - will run on controller reaching accept block and return on completion exit; -- if call returned we're clear for landing - leave select block and proceed... or delay 3.0; -- timeout - if no answer in 3 seconds, do something else (everything in following block) Put_Line (Airplane_ID'Image (ID) & " in holding pattern"); -- simply print a message end select; end loop; delay 4.0; -- do landing approach... Put_Line (Airplane_ID'Image (ID) & " touched down!"); Rwy.Cleared_Runway (ID); -- notify runway that we're done here. end; New_Airplane: Airplane_Access; begin for I in Airplane_ID'Range loop -- create a few airplane tasks New_Airplane := new Airplane (I); -- will start running directly after creation delay 4.0; end loop; end Traffic; Pragmas A pragma is a compiler directive that conveys information to the compiler to allow specific manipulating of compiled output. Certain pragmas are built into the language, while others are implementation-specific. 
Examples of common usage of compiler pragmas would be to disable certain features, such as run-time type checking or array subscript boundary checking, or to instruct the compiler to insert object code instead of a function call (as C/C++ does with inline functions). Generics See also APSE – a specification for a programming environment to support software development in Ada Ravenscar profile – a subset of the Ada tasking features designed for safety-critical hard real-time computing SPARK (programming language) – a programming language consisting of a highly restricted subset of Ada, annotated with meta information describing desired component behavior and individual runtime requirements References International standards ISO/IEC 8652: Information technology—Programming languages—Ada ISO/IEC 15291: Information technology—Programming languages—Ada Semantic Interface Specification (ASIS) ISO/IEC 18009: Information technology—Programming languages—Ada: Conformity assessment of a language processor (ACATS) IEEE Standard 1003.5b-1996, the POSIX Ada binding Ada Language Mapping Specification, the CORBA interface description language (IDL) to Ada mapping Rationale These documents have been published in various forms, including print. Also available apps.dtic.mil, pdf Books 795 pages. Archives Ada Programming Language Materials, 1981–1990. Charles Babbage Institute, University of Minnesota. Includes literature on software products designed for the Ada language; U.S. government publications, including Ada 9X project reports, technical reports, working papers, newsletters; and user group information. External links Ada - C/C++ changer - MapuSoft DOD Ada programming language (ANSI/MIL STD 1815A-1983) specification JTC1/SC22/WG9 ISO home of Ada Standards Programming languages .NET programming languages Avionics programming languages High Integrity Programming Language Multi-paradigm programming languages Programming language standards Programming languages created in 1980 Programming languages with an ISO standard Statically typed programming languages Systems programming languages 1980 software High-level programming languages
Ada (programming language)
Alpha decay or α-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle (helium nucleus) and thereby transforms or 'decays' into a different atomic nucleus, with a mass number that is reduced by four and an atomic number that is reduced by two. An alpha particle is identical to the nucleus of a helium-4 atom, which consists of two protons and two neutrons. It has a charge of +2e and a mass of about 4 u (roughly 6.64 × 10^-27 kg). For example, uranium-238 decays to form thorium-234. Alpha particles have a charge of +2e, but as a nuclear equation describes a nuclear reaction without considering the electrons – a convention that does not imply that the nuclei necessarily occur in neutral atoms – the charge is not usually shown. Alpha decay typically occurs in the heaviest nuclides. Theoretically, it can occur only in nuclei somewhat heavier than nickel (element 28), where the overall binding energy per nucleon is no longer a maximum and the nuclides are therefore unstable toward spontaneous fission-type processes. In practice, this mode of decay has only been observed in nuclides considerably heavier than nickel, with the lightest known alpha emitters being the lightest isotopes (mass numbers 104–109) of tellurium (element 52). Exceptionally, however, beryllium-8 decays to two alpha particles. Alpha decay is by far the most common form of cluster decay, where the parent atom ejects a defined daughter collection of nucleons, leaving another defined product behind. It is the most common form because of the combined extremely high nuclear binding energy and a relatively small mass of the alpha particle. Like other cluster decays, alpha decay is fundamentally a quantum tunneling process. Unlike beta decay, it is governed by the interplay between both the strong nuclear force and the electromagnetic force. Alpha particles have a typical kinetic energy of 5 MeV (or ≈ 0.13% of their total energy, 110 TJ/kg) and have a speed of about 15,000,000 m/s, or 5% of the speed of light. There is surprisingly small variation around this energy, due to the heavy dependence of the half-life of this process on the energy produced. Because of their relatively large mass, the electric charge of +2e and relatively low velocity, alpha particles are very likely to interact with other atoms and lose their energy, and their forward motion can be stopped by a few centimeters of air. Approximately 99% of the helium produced on Earth is the result of the alpha decay of underground deposits of minerals containing uranium or thorium. The helium is brought to the surface as a by-product of natural gas production. History Alpha particles were first described in the investigations of radioactivity by Ernest Rutherford in 1899, and by 1907 they were identified as He2+ ions. By 1928, George Gamow had solved the theory of alpha decay via tunneling. The alpha particle is trapped inside the nucleus by an attractive nuclear potential well and a repulsive electromagnetic potential barrier. Classically, it is forbidden to escape, but according to the (then) newly discovered principles of quantum mechanics, it has a tiny (but non-zero) probability of "tunneling" through the barrier and appearing on the other side to escape the nucleus. Gamow solved a model potential for the nucleus and derived, from first principles, a relationship between the half-life of the decay and the energy of the emission, which had been previously discovered empirically, and was known as the Geiger–Nuttall law.
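In its modern form the relation is often quoted as shown below; this is only a sketch of the general shape, since the constants a and b must be fitted separately to each isotopic series and conventions differ over exactly which charge and energy enter:

\log_{10} t_{1/2} = \frac{a\,Z}{\sqrt{Q_\alpha}} + b

where t_{1/2} is the half-life, Q_\alpha the total energy released in the decay, Z the atomic number, and a and b empirical constants. The extremely strong dependence of half-life on Q_\alpha is what produces the enormous spread of alpha half-lives from the comparatively narrow range of alpha energies mentioned above.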
Mechanism The nuclear force holding an atomic nucleus together is very strong, in general much stronger than the repulsive electromagnetic forces between the protons. However, the nuclear force is also short-range, dropping quickly in strength beyond about 1 femtometer, while the electromagnetic force has an unlimited range. The strength of the attractive nuclear force keeping a nucleus together is thus proportional to the number of nucleons, but the total disruptive electromagnetic force trying to break the nucleus apart is roughly proportional to the square of its atomic number. A nucleus with 210 or more nucleons is so large that the strong nuclear force holding it together can just barely counterbalance the electromagnetic repulsion between the protons it contains. Alpha decay occurs in such nuclei as a means of increasing stability by reducing size. One curiosity is why alpha particles, helium nuclei, should be preferentially emitted as opposed to other particles like a single proton or neutron or other atomic nuclei. Part of the reason is the high binding energy of the alpha particle, which means that its mass is less than the sum of the masses of two protons and two neutrons. This increases the disintegration energy. Computing the total disintegration energy given by the equation E = (m_i − m_f − m_p)c^2, where m_i is the initial mass of the nucleus, m_f is the mass of the nucleus after particle emission, and m_p is the mass of the emitted particle, one finds that in certain cases it is positive and so alpha particle emission is possible, whereas other decay modes would require energy to be added. For example, performing the calculation for uranium-232 shows that alpha particle emission gives 5.4 MeV of energy, while a single proton emission would require 6.1 MeV. Most of the disintegration energy becomes the kinetic energy of the alpha particle itself, although to maintain conservation of momentum part of the energy goes to the recoil of the nucleus itself (see Atomic recoil). However, since the mass numbers of most alpha-emitting radioisotopes exceed 210, far greater than the mass number of the alpha particle (4), the fraction of the energy going to the recoil of the nucleus is generally quite small, less than 2%. Even so, the recoil energy (on the scale of keV) is still much larger than the strength of chemical bonds (on the scale of eV), so the daughter nuclide will break away from the chemical environment the parent was in. The energies and ratios of the alpha particles can be used to identify the radioactive parent via alpha spectrometry. These disintegration energies, however, are substantially smaller than the repulsive potential barrier created by the electromagnetic force, which prevents the alpha particle from escaping. The energy needed to bring an alpha particle from infinity to a point near the nucleus just outside the range of the nuclear force's influence is generally in the range of about 25 MeV. An alpha particle can be thought of as being inside a potential barrier whose walls are 25 MeV above the potential at infinity. However, decay alpha particles only have energies of around 4 to 9 MeV above the potential at infinity, far less than the energy needed to escape. Quantum mechanics, however, allows the alpha particle to escape via quantum tunneling. The quantum tunneling theory of alpha decay, independently developed by George Gamow and Ronald Wilfred Gurney and Edward Condon in 1928, was hailed as a very striking confirmation of quantum theory.
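As a worked illustration of the disintegration-energy formula above, for the uranium-238 decay mentioned earlier (the atomic masses are commonly tabulated values, rounded here, so the result should be read as approximate):

E = \bigl(m_{^{238}\mathrm{U}} - m_{^{234}\mathrm{Th}} - m_{^{4}\mathrm{He}}\bigr)c^2 \approx (238.0508 - 234.0436 - 4.0026)\,\mathrm{u} \times 931.5\ \mathrm{MeV/u} \approx 4.3\ \mathrm{MeV}

which is positive, so the decay can proceed spontaneously; all but roughly 2% of this energy (the share carried by the recoiling thorium-234 nucleus) appears as the kinetic energy of the alpha particle.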
Essentially, the alpha particle escapes from the nucleus not by acquiring enough energy to pass over the wall confining it, but by tunneling through the wall. Gurney and Condon made the following observation in their paper on it: It has hitherto been necessary to postulate some special arbitrary 'instability' of the nucleus, but in the following note, it is pointed out that disintegration is a natural consequence of the laws of quantum mechanics without any special hypothesis... Much has been written of the explosive violence with which the α-particle is hurled from its place in the nucleus. But from the process pictured above, one would rather say that the α-particle almost slips away unnoticed. The theory supposes that the alpha particle can be considered an independent particle within a nucleus, that is in constant motion but held within the nucleus by strong interaction. At each collision with the repulsive potential barrier of the electromagnetic force, there is a small non-zero probability that it will tunnel its way out. An alpha particle with a speed of 1.5×107 m/s within a nuclear diameter of approximately 10−14 m will collide with the barrier more than 1021 times per second. However, if the probability of escape at each collision is very small, the half-life of the radioisotope will be very long, since it is the time required for the total probability of escape to reach 50%. As an extreme example, the half-life of the isotope bismuth-209 is . The isotopes in beta-decay stable isobars that are also stable with regards to double beta decay with mass number A = 5, A = 8, 143 ≤ A ≤ 155, 160 ≤ A ≤ 162, and A ≥ 165 are theorized to undergo alpha decay. All other mass numbers (isobars) have exactly one theoretically stable nuclide). Those with mass 5 decay to helium-4 and a proton or a neutron, and those with mass 8 decay to two helium-4 nuclei; their half-lives (helium-5, lithium-5, and beryllium-8) are very short, unlike the half-lives for all other such nuclides with A ≤ 209, which are very long. (Such nuclides with A ≤ 209 are primordial nuclides except 146Sm.) Working out the details of the theory leads to an equation relating the half-life of a radioisotope to the decay energy of its alpha particles, a theoretical derivation of the empirical Geiger–Nuttall law. Uses Americium-241, an alpha emitter, is used in smoke detectors. The alpha particles ionize air in an open ion chamber and a small current flows through the ionized air. Smoke particles from the fire that enter the chamber reduce the current, triggering the smoke detector's alarm. Radium-223 is also an alpha emitter. It is used in the treatment of skeletal metastases (cancers in the bones). Alpha decay can provide a safe power source for radioisotope thermoelectric generators used for space probes and were used for artificial heart pacemakers. Alpha decay is much more easily shielded against than other forms of radioactive decay. Static eliminators typically use polonium-210, an alpha emitter, to ionize the air, allowing the 'static cling' to dissipate more rapidly. Toxicity Highly charged and heavy, alpha particles lose their several MeV of energy within a small volume of material, along with a very short mean free path. This increases the chance of double-strand breaks to the DNA in cases of internal contamination, when ingested, inhaled, injected or introduced through the skin. 
Otherwise, touching an alpha source is typically not harmful, as alpha particles are effectively shielded by a few centimeters of air, a piece of paper, or the thin layer of dead skin cells that make up the epidermis; however, many alpha sources are also accompanied by beta-emitting radioactive daughters, and both are often accompanied by gamma photon emission. Relative biological effectiveness (RBE) quantifies the ability of radiation to cause certain biological effects, notably either cancer or cell-death, for equivalent radiation exposure. Alpha radiation has a high linear energy transfer (LET) coefficient, which is about one ionization of a molecule/atom for every angstrom of travel by the alpha particle. The RBE has been set at the value of 20 for alpha radiation by various government regulations. The RBE is set at 10 for neutron irradiation, and at 1 for beta radiation and ionizing photons. However, the recoil of the parent nucleus (alpha recoil) gives it a significant amount of energy, which also causes ionization damage (see ionizing radiation). This energy is roughly the mass of the alpha (4 u) divided by the mass of the parent (typically about 200 u) times the total energy of the alpha. By some estimates, this might account for most of the internal radiation damage, as the recoil nucleus is part of an atom that is much larger than an alpha particle, and causes a very dense trail of ionization; the atom is typically a heavy metal, which preferentially collects on the chromosomes. In some studies, this has resulted in an RBE approaching 1,000 instead of the value used in governmental regulations. The largest natural contributor to public radiation dose is radon, a naturally occurring, radioactive gas found in soil and rock. If the gas is inhaled, some of the radon particles may attach to the inner lining of the lung. These particles continue to decay, emitting alpha particles, which can damage cells in the lung tissue. The death of Marie Curie at age 66 from aplastic anemia was probably caused by prolonged exposure to high doses of ionizing radiation, but it is not clear if this was due to alpha radiation or X-rays. Curie worked extensively with radium, which decays into radon, along with other radioactive materials that emit beta and gamma rays. However, Curie also worked with unshielded X-ray tubes during World War I, and analysis of her skeleton during a reburial showed a relatively low level of radioisotope burden. The Russian dissident Alexander Litvinenko's 2006 murder by radiation poisoning is thought to have been carried out with polonium-210, an alpha emitter.
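As a quick numerical illustration of the recoil rule of thumb given earlier in this section (round values assumed for the sketch):

```python
# Alpha-recoil energy estimate: (mass of alpha / mass of parent) * alpha energy
E_alpha_MeV = 5.0     # typical alpha kinetic energy, as quoted earlier in the article
m_alpha_u   = 4.0     # alpha particle mass, u
m_parent_u  = 200.0   # "typically about 200 u"

E_recoil_keV = (m_alpha_u / m_parent_u) * E_alpha_MeV * 1000
print(f"Recoil energy ≈ {E_recoil_keV:.0f} keV")   # ~100 keV, vastly above chemical-bond energies (~eV)
```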
Alpha decay
In mathematics, the term "almost all" means "all but a negligible amount". More precisely, if is a set, "almost all elements of " means "all elements of but those in a negligible subset of ". The meaning of "negligible" depends on the mathematical context; for instance, it can mean finite, countable, or null. In contrast, "almost no" means "a negligible amount"; that is, "almost no elements of " means "a negligible amount of elements of ". Meanings in different areas of mathematics Prevalent meaning Throughout mathematics, "almost all" is sometimes used to mean "all (elements of an infinite set) but finitely many". This use occurs in philosophy as well. Similarly, "almost all" can mean "all (elements of an uncountable set) but countably many". Examples: Almost all positive integers are greater than 1,000,000,000,000. Almost all prime numbers are odd (as 2 is the only exception). Almost all polyhedra are irregular (as there are only nine exceptions: the five platonic solids and the four Kepler–Poinsot polyhedra). If P is a nonzero polynomial, then P(x) ≠ 0 for almost all x (if not all x). Meaning in measure theory When speaking about the reals, sometimes "almost all" can mean "all reals but a null set". Similarly, if S is some set of reals, "almost all numbers in S" can mean "all numbers in S but those in a null set". The real line can be thought of as a one-dimensional Euclidean space. In the more general case of an n-dimensional space (where n is a positive integer), these definitions can be generalised to "all points but those in a null set" or "all points in S but those in a null set" (this time, S is a set of points in the space). Even more generally, "almost all" is sometimes used in the sense of "almost everywhere" in measure theory, or in the closely related sense of "almost surely" in probability theory. Examples: In a measure space, such as the real line, countable sets are null. The set of rational numbers is countable, and thus almost all real numbers are irrational. As Georg Cantor proved in his first set theory article, the set of algebraic numbers is countable as well, so almost all reals are transcendental. Almost all reals are normal. The Cantor set is null as well. Thus, almost all reals are not members of it even though it is uncountable. The derivative of the Cantor function is 0 for almost all numbers in the unit interval. It follows from the previous example because the Cantor function is locally constant, and thus has derivative 0 outside the Cantor set. Meaning in number theory In number theory, "almost all positive integers" can mean "the positive integers in a set whose natural density is 1". That is, if A is a set of positive integers, and if the proportion of positive integers in A below n (out of all positive integers below n) tends to 1 as n tends to infinity, then almost all positive integers are in A. More generally, let S be an infinite set of positive integers, such as the set of even positive numbers or the set of primes, if A is a subset of S, and if the proportion of elements of S below n that are in A (out of all elements of S below n) tends to 1 as n tends to infinity, then it can be said that almost all elements of S are in A. Examples: The natural density of cofinite sets of positive integers is 1, so each of them contains almost all positive integers. Almost all positive integers are composite. Almost all even positive numbers can be expressed as the sum of two primes. Almost all primes are isolated. 
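A small numerical sketch of the natural-density idea behind the second example above (that almost all positive integers are composite): the fraction of composites up to n creeps toward 1, since the primes thin out roughly like 1/ln n. The code and cut-off values are illustrative assumptions, not part of the article.

```python
def prime_sieve(n):
    """Sieve of Eratosthenes: boolean list where sieve[k] is True iff k is prime."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sieve

for n in (10**2, 10**4, 10**6):
    primes = sum(prime_sieve(n))
    composites = n - 1 - primes          # exclude 1, which is neither prime nor composite
    print(f"n = {n:>7}: fraction composite ≈ {composites / n:.4f}")
```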
Moreover, for every positive integer , almost all primes have prime gaps of more than both to their left and to their right; that is, there are no other primes between and . Meaning in graph theory In graph theory, if A is a set of (finite labelled) graphs, it can be said to contain almost all graphs if the proportion of graphs with n vertices that are in A tends to 1 as n tends to infinity. However, it is sometimes easier to work with probabilities, so the definition is reformulated as follows. The proportion of graphs with n vertices that are in A equals the probability that a random graph with n vertices (chosen with the uniform distribution) is in A, and choosing a graph in this way has the same outcome as generating a graph by flipping a coin for each pair of vertices to decide whether to connect them. Therefore, equivalently to the preceding definition, the set A contains almost all graphs if the probability that a coin-flip-generated graph with n vertices is in A tends to 1 as n tends to infinity. Sometimes, the latter definition is modified so that the graph is chosen randomly in some other way, where not all graphs with n vertices have the same probability, and those modified definitions are not always equivalent to the main one. The use of the term "almost all" in graph theory is not standard; the term "asymptotically almost surely" is more commonly used for this concept. Examples: Almost all graphs are asymmetric. Almost all graphs have diameter 2. Meaning in topology In topology and especially dynamical systems theory (including applications in economics), "almost all" of a topological space's points can mean "all of the space's points but those in a meagre set". Some use a more limited definition, where a subset only contains almost all of the space's points if it contains some open dense set. Example: Given an irreducible algebraic variety, the properties that hold for almost all points in the variety are exactly the generic properties. This is due to the fact that in an irreducible algebraic variety equipped with the Zariski topology, all nonempty open sets are dense. Meaning in algebra In abstract algebra and mathematical logic, if U is an ultrafilter on a set X, "almost all elements of X" sometimes means "the elements of some element of U". For any partition of X into two disjoint sets, one of them will necessarily contain almost all elements of X. It is possible to think of the elements of a filter on X as containing almost all elements of X, even if it isn't an ultrafilter. See also Almost Almost everywhere Almost surely
Almost all
In particle physics, every type of particle is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the antielectron (which is often referred to as positron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron. Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in matter usually interacted with in daily life). The other (usually given the prefix "anti-") is designated the antiparticle. Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography. The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of charge parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate. Because charge is conserved, it is not possible to create an antiparticle without either destroying another particle of the same charge (as is for instance the case when antiparticles are produced naturally via beta decay or the collision of cosmic rays with Earth's atmosphere), or by the simultaneous creation of both a particle and its antiparticle, which can occur in particle accelerators such as the Large Hadron Collider at CERN. Although particles and their antiparticles have opposite charges, electrically neutral particles need not be identical to their antiparticles. The neutron, for example, is made out of quarks, the antineutron from antiquarks, and they are distinguishable from one another because neutrons and antineutrons annihilate each other upon contact. However, other neutral particles are their own antiparticles, such as photons, Z0 bosons,  mesons, and hypothetical gravitons and some hypothetical WIMPs. History Experiment In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber— a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing the radius of curling of its cloud-chamber track in a magnetic field. Positrons, because of the direction that their paths curled, were at first mistaken for electrons travelling in the opposite direction. 
Positron paths in a cloud-chamber trace the same helical path as an electron but rotate in the opposite direction with respect to the magnetic field direction due to their having the same magnitude of charge-to-mass ratio but with opposite charge and, therefore, opposite signed charge-to-mass ratios. The antiproton and antineutron were found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley. Since then, the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps. Dirac hole theory Solutions of the Dirac equation contain negative energy quantum states. As a result, an electron could always radiate energy and fall into a negative energy state. Even worse, it could keep radiating infinite amounts of energy because there were infinitely many negative energy states available. To prevent this unphysical situation from happening, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a hole in the sea that would act exactly like a positive-energy electron with a reversed charge. These holes were interpreted as "negative-energy electrons" by Paul Dirac and mistakenly identified with protons in his 1930 paper A Theory of Electrons and Protons However, these "negative-energy electrons" turned out to be positrons, and not protons. This picture implied an infinite negative charge for the universe—a problem of which Dirac was aware. Dirac tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to the electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction  +  →  + , where an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm, however, proved that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory. Within Dirac's theory, the problem of infinite charge of the universe remains. Some bosons also have antiparticles, but since bosons do not obey the Pauli exclusion principle (only fermions do), hole theory does not work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems by describing antimatter as negative energy states of the same underlying matter field i.e. particles moving backwards in time. Particle–antiparticle annihilation If a particle and antiparticle are in the appropriate quantum states, then they can annihilate each other and produce other particles. Reactions such as  +  →  (the two-photon annihilation of an electron-positron pair) are an example. 
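As a minimal energy-bookkeeping sketch for the two-photon annihilation example just mentioned (the electron rest energy of about 511 keV is a standard value assumed here, and the pair is taken to annihilate essentially at rest):

```python
m_e_c2_keV = 511.0                 # electron (and positron) rest energy, keV (approximate)

total_keV = 2 * m_e_c2_keV         # both rest energies are converted to radiation
per_photon_keV = total_keV / 2     # momentum conservation: two equal, back-to-back photons

print(f"Released: {total_keV:.0f} keV in total, {per_photon_keV:.0f} keV per photon")
```

These roughly 511 keV photons are what positron emission tomography, mentioned earlier, detects.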
The single-photon annihilation of an electron-positron pair,  +  → , cannot occur in free space because it is impossible to conserve energy and momentum together in this process. However, in the Coulomb field of a nucleus the translational invariance is broken and single-photon annihilation may occur. The reverse reaction (in free space, without an atomic nucleus) is also impossible for this reason. In quantum field theory, this process is allowed only as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation in which a one particle quantum state may fluctuate into a two particle state and back. These processes are important in the vacuum state and renormalization of a quantum field theory. It also opens the way for neutral particle mixing through processes such as the one pictured here, which is a complicated example of mass renormalization. Properties Quantum states of a particle and an antiparticle are interchanged by the combined application of charge conjugation , parity and time reversal . and are linear, unitary operators, is antilinear and antiunitary, . If denotes the quantum state of a particle with momentum and spin whose component in the z-direction is , then one has where denotes the charge conjugate state, that is, the antiparticle. In particular a massive particle and its antiparticle transform under the same irreducible representation of the Poincaré group which means the antiparticle has the same mass and the same spin. If , and can be defined separately on the particles and antiparticles, then where the proportionality sign indicates that there might be a phase on the right hand side. As anticommutes with the charges, , particle and antiparticle have opposite electric charges q and -q. Quantum field theory This section draws upon the ideas, language and notation of canonical quantization of a quantum field theory. One may try to quantize an electron field without mixing the annihilation and creation operators by writing where we use the symbol k to denote the quantum numbers p and σ of the previous section and the sign of the energy, E(k), and ak denotes the corresponding annihilation operators. Of course, since we are dealing with fermions, we have to have the operators satisfy canonical anti-commutation relations. However, if one now writes down the Hamiltonian then one sees immediately that the expectation value of H need not be positive. This is because E(k) can have any sign whatsoever, and the combination of creation and annihilation operators has expectation value 1 or 0. So one has to introduce the charge conjugate antiparticle field, with its own creation and annihilation operators satisfying the relations where k has the same p, and opposite σ and sign of the energy. Then one can rewrite the field in the form where the first sum is over positive energy states and the second over those of negative energy. The energy becomes where E0 is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, i.e., and . Then the energy of the vacuum is exactly E0. Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of ak and bk shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion. This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. 
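The display equations of this section did not survive extraction, so the expansion described in words above is hard to follow. As a hedged reconstruction in standard textbook notation (an assumption, not the article's own formulas), the separation into particle and antiparticle operators and the resulting Hamiltonian take roughly the form:

```latex
% Schematic textbook form (assumed notation, not copied from this article)
\psi(x) \;=\; \sum_{k \in +} a_k\, u_k(x)\, e^{-i E_k t}
       \;+\; \sum_{k \in -} b_k^{\dagger}\, v_k(x)\, e^{+i |E_k| t},
\qquad
H \;=\; \sum_{k \in +} E_k\, a_k^{\dagger} a_k
  \;+\; \sum_{k \in -} |E_k|\, b_k^{\dagger} b_k \;+\; E_0 ,
```

where the first sum runs over positive-energy modes, the second over what were negative-energy modes, and reordering the b operators with their anticommutation relations produces the (infinite) constant E0; measuring energies relative to the vacuum then leaves a positive-definite H, as stated above.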
If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore, real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons. Feynman–Stueckelberg interpretation By considering the propagation of the negative energy modes of the electron field backward in time, Ernst Stueckelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. In Feynman diagrams, anti-particles are shown traveling backwards in time. This technique is the most widespread method of computing amplitudes in quantum field theory today. Since this picture was first developed by Stueckelberg, and acquired its modern form in Feynman's work, it is called the Feynman–Stueckelberg interpretation of antiparticles to honor both scientists. See also List of particles Gravitational interaction of antimatter Parity, charge conjugation and time reversal symmetry CP violations Quantum field theory Baryogenesis, baryon asymmetry and Leptogenesis One-electron universe Paul Dirac
Antiparticle
The Andes, Andes Mountains or Andean Mountains () are the longest continental mountain range in the world, forming a continuous highland along the western edge of South America. The range is long, wide (widest between 18°S - 20°S latitude), and has an average height of about . The Andes extend from north to south through seven South American countries: Venezuela, Colombia, Ecuador, Peru, Bolivia, Chile, and Argentina. Along their length, the Andes are split into several ranges, separated by intermediate depressions. The Andes are the location of several high plateaus—some of which host major cities such as Quito, Bogotá, Cali, Arequipa, Medellín, Bucaramanga, Sucre, Mérida, El Alto and La Paz. The Altiplano plateau is the world's second-highest after the Tibetan plateau. These ranges are in turn grouped into three major divisions based on climate: the Tropical Andes, the Dry Andes, and the Wet Andes. The Andes Mountains are the highest mountain range outside Asia. The highest mountain outside Asia, Argentina's Mount Aconcagua, rises to an elevation of about above sea level. The peak of Chimborazo in the Ecuadorian Andes is farther from the Earth's center than any other location on the Earth's surface, due to the equatorial bulge resulting from the Earth's rotation. The world's highest volcanoes are in the Andes, including Ojos del Salado on the Chile-Argentina border, which rises to . The Andes are also part of the American Cordillera, a chain of mountain ranges (cordillera) that consists of an almost continuous sequence of mountain ranges that form the western "backbone" of North America, Central America, South America and Antarctica. Etymology The etymology of the word Andes has been debated. The majority consensus is that it derives from the Quechua word 'east' as in Antisuyu (Quechua for 'east region'), one of the four regions of the Inca Empire. The term cordillera comes from the Spanish word cordel 'rope' and is used as a descriptive name for several contiguous sections of the Andes, as well as the entire Andean range, and the combined mountain chain along the western part of the North and South American continents. Geography The Andes can be divided into three sections: The Southern Andes in Argentina and Chile, south of Llullaillaco. The Central Andes in Peru and Bolivia. The Northern Andes in Venezuela, Colombia, and Ecuador. In the northern part of the Andes, the separate Sierra Nevada de Santa Marta range is often treated as part of the Northern Andes. The Leeward Antilles islands Aruba, Bonaire, and Curaçao, which lie in the Caribbean Sea off the coast of Venezuela, were formerly thought to represent the submerged peaks of the extreme northern edge of the Andes range, but ongoing geological studies indicate that such a simplification does not do justice to the complex tectonic boundary between the South American and Caribbean plates. Geology The Andes are a Mesozoic–Tertiary orogenic belt of mountains along the Pacific Ring of Fire, a zone of volcanic activity that encompasses the Pacific rim of the Americas as well as the Asia-Pacific region. The Andes are the result of tectonic plate processes, caused by the subduction of oceanic crust beneath the South American Plate. It is the result of a convergent plate boundary between the Nazca Plate and the South American Plate. The main cause of the rise of the Andes is the compression of the western rim of the South American Plate due to the subduction of the Nazca Plate and the Antarctic Plate. 
To the east, the Andes range is bounded by several sedimentary basins, such as Orinoco, Amazon Basin, Madre de Dios and Gran Chaco, that separate the Andes from the ancient cratons in eastern South America. In the south, the Andes share a long boundary with the former Patagonia Terrane. To the west, the Andes end at the Pacific Ocean, although the Peru-Chile trench can be considered their ultimate western limit. From a geographical approach, the Andes are considered to have their western boundaries marked by the appearance of coastal lowlands and a less rugged topography. The Andes Mountains also contain large quantities of iron ore located in many mountains within the range. The Andean orogen has a series of bends or oroclines. The Bolivian Orocline is a seaward concave bending in the coast of South America and the Andes Mountains at about 18° S. At this point, the orientation of the Andes turns from Northwest in Peru to South in Chile and Argentina. The Andean segment north and south of the Orocline have been rotated 15° to 20° counter clockwise and clockwise respectively. The Bolivian Orocline area overlaps with the area of maximum width of the Altiplano Plateau and according to Isacks (1988) the Orocline is related to crustal shortening. The specific point at 18° S where the coastline bends is known as the "Arica Elbow". Further south lies the Maipo Orocline a more subtle Orocline between 30° S and 38°S with a seaward-concave break in trend at 33° S. Near the southern tip of the Andes lies the Patagonian Orocline. Orogeny The western rim of the South American Plate has been the place of several pre-Andean orogenies since at least the late Proterozoic and early Paleozoic, when several terranes and microcontinents collided and amalgamated with the ancient cratons of eastern South America, by then the South American part of Gondwana. The formation of the modern Andes began with the events of the Triassic when Pangaea began the break up that resulted in developing several rifts. The development continued through the Jurassic Period. It was during the Cretaceous Period that the Andes began to take their present form, by the uplifting, faulting and folding of sedimentary and metamorphic rocks of the ancient cratons to the east. The rise of the Andes has not been constant, as different regions have had different degrees of tectonic stress, uplift, and erosion. Tectonic forces above the subduction zone along the entire west coast of South America where the Nazca Plate and a part of the Antarctic Plate are sliding beneath the South American Plate continue to produce an ongoing orogenic event resulting in minor to major earthquakes and volcanic eruptions to this day. In the extreme south, a major transform fault separates Tierra del Fuego from the small Scotia Plate. Across the wide Drake Passage lie the mountains of the Antarctic Peninsula south of the Scotia Plate which appear to be a continuation of the Andes chain. The regions immediately east of the Andes experience a series of changes resulting from the Andean orogeny. Parts of the Sunsás Orogen in Amazonian craton disappeared from the surface of earth being overridden by the Andes. The Sierras de Córdoba, where the effects of the ancient Pampean orogeny can be observed, owe their modern uplift and relief to the Andean orogeny in the Tertiary. 
Further south in southern Patagonia the onset of the Andean orogeny caused the Magallanes Basin to evolve from being an extensional back-arc basin in the Mesozoic to being a compressional foreland basin in the Cenozoic. Volcanism The Andes range has many active volcanoes distributed in four volcanic zones separated by areas of inactivity. The Andean volcanism is a result of subduction of the Nazca Plate and Antarctic Plate underneath the South American Plate. The belt is subdivided into four main volcanic zones that are separated from each other by volcanic gaps. The volcanoes of the belt are diverse in terms of activity style, products and morphology. While some differences can be explained by which volcanic zone a volcano belongs to, there are significant differences inside volcanic zones and even between neighbouring volcanoes. Despite being a type location for calc-alkalic and subduction volcanism, the Andean Volcanic Belt has a large range of volcano-tectonic settings, such as rift systems and extensional zones, transpressional faults, subduction of mid-ocean ridges and seamount chains apart from a large range of crustal thicknesses and magma ascent paths, and different amount of crustal assimilations. Ore deposits and evaporates The Andes Mountains host large ore and salt deposits and some of their eastern fold and thrust belt acts as traps for commercially exploitable amounts of hydrocarbons. In the forelands of the Atacama Desert some of the largest porphyry copper mineralizations occur making Chile and Peru the first- and second-largest exporters of copper in the world. Porphyry copper in the western slopes of the Andes has been generated by hydrothermal fluids (mostly water) during the cooling of plutons or volcanic systems. The porphyry mineralization further benefited from the dry climate that let them largely out of the disturbing actions of meteoric water. The dry climate in the central western Andes has also led to the creation of extensive saltpeter deposits which were extensively mined until the invention of synthetic nitrates. Yet another result of the dry climate are the salars of Atacama and Uyuni, the first one being the largest source of lithium today and the second the world's largest reserve of the element. Early Mesozoic and Neogene plutonism in Bolivia's Cordillera Central created the Bolivian tin belt as well as the famous, now depleted, deposits of Cerro Rico de Potosí. History Climate and hydrology The climate in the Andes varies greatly depending on latitude, altitude, and proximity to the sea. Temperature, atmospheric pressure and humidity decrease in higher elevations. The southern section is rainy and cool, the central section is dry. The northern Andes are typically rainy and warm, with an average temperature of in Colombia. The climate is known to change drastically in rather short distances. Rainforests exist just kilometres away from the snow-covered peak Cotopaxi. The mountains have a large effect on the temperatures of nearby areas. The snow line depends on the location. It is at between in the tropical Ecuadorian, Colombian, Venezuelan, and northern Peruvian Andes, rising to in the drier mountains of southern Peru south to northern Chile south to about 30°S before descending to on Aconcagua at 32°S, at 40°S, at 50°S, and only in Tierra del Fuego at 55°S; from 50°S, several of the larger glaciers descend to sea level. The Andes of Chile and Argentina can be divided into two climatic and glaciological zones: the Dry Andes and the Wet Andes. 
Since the Dry Andes extend from the latitudes of Atacama Desert to the area of Maule River, precipitation is more sporadic and there are strong temperature oscillations. The line of equilibrium may shift drastically over short periods of time, leaving a whole glacier in the ablation area or in the accumulation area. In the high Andes of Central Chile and Mendoza Province, rock glaciers are larger and more common than glaciers; this is due to the high exposure to solar radiation. In these regions glaciers occur typically at higher altitudes than rock glaciers. The lowest active rock glacier occur at 900 m a.s.l. in Aconcagua. Though precipitation increases with the height, there are semiarid conditions in the nearly highest mountains of the Andes. This dry steppe climate is considered to be typical of the subtropical position at 32–34° S. The valley bottoms have no woods, just dwarf scrub. The largest glaciers, for example the Plomo glacier and the Horcones glaciers, do not even reach in length and have an only insignificant ice thickness. At glacial times, however, c. 20,000 years ago, the glaciers were over ten times longer. On the east side of this section of the Mendozina Andes, they flowed down to and on the west side to about above sea level. The massifs of Cerro Aconcagua (), Cerro Tupungato () and Nevado Juncal () are tens of kilometres away from each other and were connected by a joint ice stream network. The Andes' dendritic glacier arms, i.e. components of valley glaciers, were up to long, over thick and overspanned a vertical distance of . The climatic glacier snowline (ELA) was lowered from to at glacial times. Flora The Andean region cuts across several natural and floristic regions, due to its extension, from Caribbean Venezuela to cold, windy and wet Cape Horn passing through the hyperarid Atacama Desert. Rainforests and tropical dry forests used to encircle much of the northern Andes but are now greatly diminished, especially in the Chocó and inter-Andean valleys of Colombia. Opposite of the humid Andean slopes are the relatively dry Andean slopes in most of western Peru, Chile and Argentina. Along with several Interandean Valles, they are typically dominated by deciduous woodland, shrub and xeric vegetation, reaching the extreme in the slopes near the virtually lifeless Atacama Desert. About 30,000 species of vascular plants live in the Andes, with roughly half being endemic to the region, surpassing the diversity of any other hotspot. The small tree Cinchona pubescens, a source of quinine which is used to treat malaria, is found widely in the Andes as far south as Bolivia. Other important crops that originated from the Andes are tobacco and potatoes. The high-altitude Polylepis forests and woodlands are found in the Andean areas of Colombia, Ecuador, Peru, Bolivia and Chile. These trees, by locals referred to as Queñua, Yagual and other names, can be found at altitudes of above sea level. It remains unclear if the patchy distribution of these forests and woodlands is natural, or the result of clearing which began during the Incan period. Regardless, in modern times the clearance has accelerated, and the trees are now considered to be highly endangered, with some believing that as little as 10% of the original woodland remains. Fauna The Andes are rich in fauna: With almost 1,000 species, of which roughly 2/3 are endemic to the region, the Andes are the most important region in the world for amphibians. 
The diversity of animals in the Andes is high, with almost 600 species of mammals (13% endemic), more than 1,700 species of birds (about 1/3 endemic), more than 600 species of reptile (about 45% endemic), and almost 400 species of fish (about 1/3 endemic). The vicuña and guanaco can be found living in the Altiplano, while the closely related domesticated llama and alpaca are widely kept by locals as pack animals and for their meat and wool. The crepuscular (active during dawn and dusk) chinchillas, two threatened members of the rodent order, inhabit the Andes' alpine regions. The Andean condor, the largest bird of its kind in the Western Hemisphere, occurs throughout much of the Andes but generally in very low densities. Other animals found in the relatively open habitats of the high Andes include the huemul, cougar, foxes in the genus Pseudalopex, and, for birds, certain species of tinamous (notably members of the genus Nothoprocta), Andean goose, giant coot, flamingos (mainly associated with hypersaline lakes), lesser rhea, Andean flicker, diademed sandpiper-plover, miners, sierra-finches and diuca-finches. Lake Titicaca hosts several endemics, among them the highly endangered Titicaca flightless grebe and Titicaca water frog. A few species of hummingbirds, notably some hillstars, can be seen at altitudes above , but far higher diversities can be found at lower altitudes, especially in the humid Andean forests ("cloud forests") growing on slopes in Colombia, Ecuador, Peru, Bolivia and far northwestern Argentina. These forest-types, which includes the Yungas and parts of the Chocó, are very rich in flora and fauna, although few large mammals exist, exceptions being the threatened mountain tapir, spectacled bear and yellow-tailed woolly monkey. Birds of humid Andean forests include mountain-toucans, quetzals and the Andean cock-of-the-rock, while mixed species flocks dominated by tanagers and furnariids commonly are seen – in contrast to several vocal but typically cryptic species of wrens, tapaculos and antpittas. A number of species such as the royal cinclodes and white-browed tit-spinetail are associated with Polylepis, and consequently also threatened. Human activity The Andes Mountains form a north–south axis of cultural influences. A long series of cultural development culminated in the expansion of the Inca civilization and Inca Empire in the central Andes during the 15th century. The Incas formed this civilization through imperialistic militarism as well as careful and meticulous governmental management. The government sponsored the construction of aqueducts and roads in addition to preexisting installations. Some of these constructions are still in existence today. Devastated by European diseases and by civil war, the Incas were defeated in 1532 by an alliance composed of tens of thousands of allies from nations they had subjugated (e.g. Huancas, Chachapoyas, Cañaris) and a small army of 180 Spaniards led by Francisco Pizarro. One of the few Inca sites the Spanish never found in their conquest was Machu Picchu, which lay hidden on a peak on the eastern edge of the Andes where they descend to the Amazon. The main surviving languages of the Andean peoples are those of the Quechua and Aymara language families. Woodbine Parish and Joseph Barclay Pentland surveyed a large part of the Bolivian Andes from 1826 to 1827. Cities In modern times, the largest cities in the Andes are Bogotá, with a population of about eight million, Santiago, Medellín, Cali, and Quito. 
Lima is a coastal city adjacent to the Andes and is the largest city of all Andean countries. It is the seat of the Andean Community of Nations. La Paz, Bolivia's seat of government, is the highest capital city in the world, at an elevation of approximately . Parts of the La Paz conurbation, including the city of El Alto, extend up to . Other cities in or near the Andes include Bariloche, Catamarca, Jujuy, Mendoza, Salta, San Juan, and Tucumán in Argentina; Calama and Rancagua in Chile; Cochabamba, Oruro, Potosí, Sucre, Sacaba, Tarija, and Yacuiba in Bolivia; Arequipa, Cajamarca, Cusco, Huancayo, Huánuco, Huaraz, Juliaca, and Puno in Peru; Ambato, Cuenca, Ibarra, Latacunga, Loja, Riobamba and Tulcán in Ecuador; Armenia, Cúcuta, Bucaramanga, Duitama, Ibagué, Ipiales, Manizales, Palmira, Pasto, Pereira, Popayán, Sogamoso, Tunja, and Villavicencio in Colombia; and Barquisimeto, La Grita, Mérida, San Cristóbal, Tovar, Trujillo, and Valera in Venezuela. The cities of Caracas, Valencia, and Maracay are in the Venezuelan Coastal Range, which is a debatable extension of the Andes at the northern extremity of South America. Transportation Cities and large towns are connected with asphalt-paved roads, while smaller towns are often connected by dirt roads, which may require a four-wheel-drive vehicle. The rough terrain has historically put the costs of building highways and railroads that cross the Andes out of reach of most neighboring countries, even with modern civil engineering practices. For example, the main crossover of the Andes between Argentina and Chile is still accomplished through the Paso Internacional Los Libertadores. Only recently the ends of some highways that came rather close to one another from the east and the west have been connected. Much of the transportation of passengers is done via aircraft. However, there is one railroad that connects Chile with Peru via the Andes, and there are others that make the same connection via southern Bolivia. See railroad maps of that region. There are multiple highways in Bolivia that cross the Andes. Some of these were built during a period of war between Bolivia and Paraguay, in order to transport Bolivian troops and their supplies to the war front in the lowlands of southeastern Bolivia and western Paraguay. For decades, Chile claimed ownership of land on the eastern side of the Andes. However, these claims were given up in about 1870 during the War of the Pacific between Chile, the allied Bolivia and Peru, in a diplomatic deal to keep Peru out of the war. The Chilean Army and Chilean Navy defeated the combined forces of Bolivia and Peru, and Chile took over Bolivia's only province on the Pacific Coast, some land from Peru that was returned to Peru decades later. Bolivia has been a completely landlocked country ever since. It mostly uses seaports in eastern Argentina and Uruguay for international trade because its diplomatic relations with Chile have been suspended since 1978. Because of the tortuous terrain in places, villages and towns in the mountains—to which travel via motorized vehicles is of little use—are still located in the high Andes of Chile, Bolivia, Peru, and Ecuador. Locally, the relatives of the camel, the llama, and the alpaca continue to carry out important uses as pack animals, but this use has generally diminished in modern times. Donkeys, mules, and horses are also useful. Agriculture The ancient peoples of the Andes such as the Incas have practiced irrigation techniques for over 6,000 years. 
Because of the mountain slopes, terracing has been a common practice. Terracing, however, was only extensively employed after Incan imperial expansions to fuel their expanding realm. The potato holds a very important role as an internally consumed staple crop. Maize was also an important crop for these people, and was used for the production of chicha, important to Andean native people. Currently, tobacco, cotton and coffee are the main export crops. Coca, despite eradication programmes in some countries, remains an important crop for legal local use in a mildly stimulating herbal tea, and, both controversially and illegally, for the production of cocaine. Irrigation In unirrigated land, pasture is the most common type of land use. In the rainy season (summer), part of the rangeland is used for cropping (mainly potatoes, barley, broad beans and wheat). Irrigation is helpful in advancing the sowing data of the summer crops which guarantees an early yield in the period of food shortage. Also, by early sowing, maize can be cultivated higher up in the mountains (up to ). In addition, it makes cropping in the dry season (winter) possible and allows the cultivation of frost-resistant vegetable crops like onion and carrot. Mining The Andes rose to fame for their mineral wealth during the Spanish conquest of South America. Although Andean Amerindian peoples crafted ceremonial jewelry of gold and other metals, the mineralizations of the Andes were first mined on a large scale after the Spanish arrival. Potosí in present-day Bolivia and Cerro de Pasco in Peru was one of the principal mines of the Spanish Empire in the New World. Río de la Plata and Argentina derive their names from the silver of Potosí. Currently, mining in the Andes of Chile and Peru places these countries as the first and second major producers of copper in the world. Peru also contains the 4th largest goldmine in the world: the Yanacocha. The Bolivian Andes produce principally tin although historically silver mining had a huge impact on the economy of 17th century Europe. There is a long history of mining in the Andes, from the Spanish silver mines in Potosí in the 16th century to the vast current porphyry copper deposits of Chuquicamata and Escondida in Chile and Toquepala in Peru. Other metals including iron, gold, and tin in addition to non-metallic resources are important. Peaks This list contains some of the major peaks in the Andes mountain range. The highest peak is Aconcagua of Argentina (see below). Argentina Aconcagua, Cerro Bonete, Galán, Mercedario, Pissis, Border between Argentina and Chile Cerro Bayo, Cerro Fitz Roy, or 3,405 m, Patagonia, also known as Cerro Chaltén Cerro Escorial, Cordón del Azufre, Falso Azufre, Incahuasi, Lastarria, Llullaillaco, Maipo, Marmolejo, Ojos del Salado, Olca, Sierra Nevada de Lagunas Bravas, Socompa, Nevado Tres Cruces, (south summit) (III Region) Tronador, Tupungato, Nacimiento, Bolivia Janq'u Uma, Cabaraya, Chacaltaya, Wayna Potosí, Illampu, Illimani, Laram Q'awa, Macizo de Pacuni, Nevado Anallajsi, Nevado Sajama, Patilla Pata, Tata Sabaya, Border between Bolivia and Chile Acotango, Michincha, Iru Phutunqu, Licancabur, Olca, Parinacota, Paruma, Pomerape, Chile Monte San Valentin, Cerro Paine Grande, Cerro Macá, c. Monte Darwin, c. Volcan Hudson, c. Cerro Castillo Dynevor, c. Mount Tarn, c. Polleras, c. Acamarachi, c. 
Colombia Nevado del Huila, Nevado del Ruiz, Nevado del Tolima, Pico Pan de Azúcar, Ritacuba Negro, Nevado del Cumbal, Cerro Negro de Mayasquer, Ritacuba Blanco, Nevado del Quindío, Puracé, Santa Isabel, Doña Juana, Galeras, Azufral, Ecuador Antisana, Cayambe, Chiles, Chimborazo, Corazón, Cotopaxi, El Altar, Illiniza, Pichincha, Quilotoa, Reventador, Sangay, Tungurahua, Peru Alpamayo, Artesonraju, Carnicero, Chumpe, Coropuna, El Misti, El Toro, Huandoy, Huascarán, Jirishanca, Pumasillo, Rasac, Rondoy, Sarapo, Salcantay, Seria Norte, Siula Grande, Huaytapallana, Yerupaja, Yerupaja Chico, Venezuela Pico Bolívar, Pico Humboldt, Pico Bonpland, Pico La Concha, Pico Piedras Blancas, Pico El Águila, Pico El Toro Pico El León Pico Mucuñuque See also Andean Geology—a scientific journal Andesite line Apu (god) Mountain passes of the Andes Notes References Biggar, J. (2005). The Andes: A Guide For Climbers. 3rd. edition. Andes: Kirkcudbrightshire. de Roy, T. (2005). The Andes: As the Condor Flies. Firefly books: Richmond Hill. Fjeldså, J. & N. Krabbe (1990). The Birds of the High Andes. Zoological Museum, University of Copenhagen: Fjeldså, J. & M. Kessler (1996). Conserving the biological diversity of Polylepis woodlands of the highlands on Peru and Bolivia, a contribution to sustainable natural resource management in the Andes. NORDECO: Copenhagen. Bibliography External links University of Arizona: Andes geology Blueplanetbiomes.org: Climate and animal life of the Andes Discover-peru.org: Regions and Microclimates in the Andes Peaklist.org: Complete list of mountains in South America with an elevation at/above Mountain ranges of South America Regions of South America Physiographic divisions
Andes
Ammonia is a compound of nitrogen and hydrogen with the formula NH3. A stable binary hydride, and the simplest pnictogen hydride, ammonia is a colourless gas with a distinct pungent smell. It is a common nitrogenous waste, particularly among aquatic organisms, and it contributes significantly to the nutritional needs of terrestrial organisms by serving as a precursor to 45 percent of the world's food and fertilizers. Ammonia, either directly or indirectly, is also a building block for the synthesis of many pharmaceutical products and is used in many commercial cleaning products. It is mainly collected by downward displacement of both air and water. Although common in nature (both terrestrially and in the outer planets of the Solar System) and in wide use, ammonia is both caustic and hazardous in its concentrated form. In many countries it is classified as an extremely hazardous substance, and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. The global industrial production of ammonia in 2018 was 175 million tonnes, with no significant change relative to the 2013 global industrial production of 175 million tonnes. Industrial ammonia is sold either as ammonia liquor (usually 28% ammonia in water) or as pressurized or refrigerated anhydrous liquid ammonia transported in tank cars or cylinders. NH3 boils at at a pressure of one atmosphere, so the liquid must be stored under pressure or at low temperature. Household ammonia or ammonium hydroxide is a solution of NH3 in water. The concentration of such solutions is measured in units of the Baumé scale (density), with 26 degrees Baumé (about 30% (by weight) ammonia at ) being the typical high-concentration commercial product. Etymology Pliny, in Book XXXI of his Natural History, refers to a salt produced in the Roman province of Cyrenaica named hammoniacum, so called because of its proximity to the nearby Temple of Jupiter Amun (Greek Ἄμμων Ammon). However, the description Pliny gives of the salt does not conform to the properties of ammonium chloride. According to Herbert Hoover's commentary in his English translation of Georgius Agricola's De re metallica, it is likely to have been common sea salt. In any case, that salt ultimately gave ammonia and ammonium compounds their name. Natural occurrence Ammonia is a chemical found in trace quantities in nature, being produced from nitrogenous animal and vegetable matter. Ammonia and ammonium salts are also found in small quantities in rainwater, whereas ammonium chloride (sal ammoniac) and ammonium sulfate are found in volcanic districts; crystals of ammonium bicarbonate have been found in Patagonia guano. The kidneys secrete ammonia to neutralize excess acid. Ammonium salts are found distributed through fertile soil and in seawater. Ammonia is also found throughout the Solar System on Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto, among other places: on smaller, icy bodies such as Pluto, ammonia can act as a geologically important antifreeze, as a mixture of water and ammonia can have a melting point as low as if the ammonia concentration is high enough and thus allow such bodies to retain internal oceans and active geology at a far lower temperature than would be possible with water alone. Substances containing ammonia, or those that are similar to it, are called ammoniacal. Properties Ammonia is a colourless gas with a characteristically pungent smell. It is lighter than air, its density being 0.589 times that of air.
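The quoted relative density follows from the molar masses: at the same temperature and pressure, ideal-gas densities scale with molar mass. A quick check (approximate molar masses assumed, not taken from the article):

```python
M_NH3 = 17.031   # g/mol, ammonia (approximate)
M_air = 28.97    # g/mol, mean molar mass of dry air (approximate)

print(f"density(NH3) / density(air) ≈ {M_NH3 / M_air:.3f}")   # ≈ 0.588, consistent with the 0.589 above
```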
It is easily liquefied due to the strong hydrogen bonding between molecules; the liquid boils at , and freezes to white crystals at . Solid The crystal symmetry is cubic, Pearson symbol cP16, space group P213 No.198, lattice constant 0.5125 nm. Liquid Liquid ammonia possesses strong ionising powers reflecting its high ε of 22. Liquid ammonia has a very high standard enthalpy change of vaporization (23.35 kJ/mol, cf. water 40.65 kJ/mol, methane 8.19 kJ/mol, phosphine 14.6 kJ/mol) and can therefore be used in laboratories in uninsulated vessels without additional refrigeration. See liquid ammonia as a solvent. Solvent properties Ammonia readily dissolves in water. In an aqueous solution, it can be expelled by boiling. The aqueous solution of ammonia is basic. The maximum concentration of ammonia in water (a saturated solution) has a density of 0.880 g/cm3 and is often known as '.880 ammonia'. Combustion Ammonia does not burn readily or sustain combustion, except under narrow fuel-to-air mixtures of 15–25% air. When mixed with oxygen, it burns with a pale yellowish-green flame. Ignition occurs when chlorine is passed into ammonia, forming nitrogen and hydrogen chloride; if chlorine is present in excess, then the highly explosive nitrogen trichloride (NCl3) is also formed. Decomposition At high temperature and in the presence of a suitable catalyst, ammonia is decomposed into its constituent elements. Decomposition of ammonia is a slightly endothermic process requiring 23 kJ/mol (5.5 kcal/mol) of ammonia, and yields hydrogen and nitrogen gas. Ammonia can also be used as a source of hydrogen for acid fuel cells if the unreacted ammonia can be removed. Ruthenium and platinum catalysts were found to be the most active, whereas supported Ni catalysts were the less active. Structure The ammonia molecule has a trigonal pyramidal shape as predicted by the valence shell electron pair repulsion theory (VSEPR theory) with an experimentally determined bond angle of 106.7°. The central nitrogen atom has five outer electrons with an additional electron from each hydrogen atom. This gives a total of eight electrons, or four electron pairs that are arranged tetrahedrally. Three of these electron pairs are used as bond pairs, which leaves one lone pair of electrons. The lone pair repels more strongly than bond pairs, therefore the bond angle is not 109.5°, as expected for a regular tetrahedral arrangement, but 106.8°. This shape gives the molecule a dipole moment and makes it polar. The molecule's polarity, and especially, its ability to form hydrogen bonds, makes ammonia highly miscible with water. The lone pair makes ammonia a base, a proton acceptor. Ammonia is moderately basic; a 1.0 M aqueous solution has a pH of 11.6, and if a strong acid is added to such a solution until the solution is neutral (pH = 7), 99.4% of the ammonia molecules are protonated. Temperature and salinity also affect the proportion of NH4+. The latter has the shape of a regular tetrahedron and is isoelectronic with methane. The ammonia molecule readily undergoes nitrogen inversion at room temperature; a useful analogy is an umbrella turning itself inside out in a strong wind. The energy barrier to this inversion is 24.7 kJ/mol, and the resonance frequency is 23.79 GHz, corresponding to microwave radiation of a wavelength of 1.260 cm. The absorption at this frequency was the first microwave spectrum to be observed and was used in the first maser. Amphotericity One of the most characteristic properties of ammonia is its basicity. 
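The pH of 11.6 quoted above for a 1.0 M solution can be reproduced with a standard weak-base equilibrium calculation. The base-dissociation constant used below (Kb ≈ 1.8e-5 at 25 °C) and Kw = 1e-14 are common tabulated values assumed for this sketch rather than figures from the article.

```python
import math

Kb = 1.8e-5      # assumed base-dissociation constant of NH3 at 25 °C
C  = 1.0         # mol/L

# NH3 + H2O <=> NH4+ + OH-   with   x^2 / (C - x) = Kb
x = (-Kb + math.sqrt(Kb * Kb + 4 * Kb * C)) / 2    # [OH-], from the quadratic formula
pOH = -math.log10(x)
pH  = 14.0 - pOH                                   # assumes Kw = 1e-14

print(f"[OH-] ≈ {x:.2e} M, pH ≈ {pH:.1f}")         # pH ≈ 11.6
```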
Ammonia is considered to be a weak base. It combines with acids to form salts; thus with hydrochloric acid it forms ammonium chloride (sal ammoniac); with nitric acid, ammonium nitrate, etc. Perfectly dry ammonia gas will not combine with perfectly dry hydrogen chloride gas; moisture is necessary to bring about the reaction. In a demonstration experiment in air with ambient moisture, opened bottles of concentrated ammonia and hydrochloric acid solutions produce a cloud of ammonium chloride, which seems to appear "out of nothing" as the salt aerosol forms where the two diffusing clouds of reagents meet between the two bottles. NH3 + HCl → NH4Cl The salts produced by the action of ammonia on acids are known as the ammonium salts and all contain the ammonium ion (NH4+). Although ammonia is well known as a weak base, it can also act as an extremely weak acid. It is a protic substance and is capable of formation of amides (which contain the NH2− ion). For example, lithium dissolves in liquid ammonia to give a blue solution (solvated electron) of lithium amide: 2 Li + 2 NH3 → 2 LiNH2 + H2 Self-dissociation Like water, liquid ammonia undergoes molecular autoionisation to form its acid and base conjugates: 2 NH3 ⇌ NH4+ + NH2− Ammonia often functions as a weak base, so it has some buffering ability. Shifts in pH will cause more or fewer ammonium cations (NH4+) and amide anions (NH2−) to be present in solution. At standard pressure and temperature, K = [NH4+] × [NH2−] = 10−30. Combustion The combustion of ammonia to form nitrogen and water is exothermic: 4 NH3 + 3 O2 → 2 N2 + 6 H2O (g) ΔH°r = −1267.20 kJ (or −316.8 kJ/mol if expressed per mol of NH3) The standard enthalpy change of combustion, ΔH°c, expressed per mole of ammonia and with condensation of the water formed, is −382.81 kJ/mol. Dinitrogen is the thermodynamic product of combustion: all nitrogen oxides are unstable with respect to N2 and O2, which is the principle behind the catalytic converter. Nitrogen oxides can be formed as kinetic products in the presence of appropriate catalysts, a reaction of great industrial importance in the production of nitric acid: 4 NH3 + 5 O2 → 4 NO + 6 H2O A subsequent reaction leads to NO2: 2 NO + O2 → 2 NO2 The combustion of ammonia in air is very difficult in the absence of a catalyst (such as platinum gauze or warm chromium(III) oxide), due to the relatively low heat of combustion, a lower laminar burning velocity, high auto-ignition temperature, high heat of vaporization, and a narrow flammability range. However, recent studies have shown that efficient and stable combustion of ammonia can be achieved using swirl combustors, thereby rekindling research interest in ammonia as a fuel for thermal power production. The flammable range of ammonia in dry air is 15.15–27.35% and in 100% relative humidity air is 15.95–26.55%. For studying the kinetics of ammonia combustion, knowledge of a detailed reliable reaction mechanism is required, but this has been challenging to obtain. Formation of other compounds In organic chemistry, ammonia can act as a nucleophile in substitution reactions. Amines can be formed by the reaction of ammonia with alkyl halides, although the resulting −NH2 group is also nucleophilic and secondary and tertiary amines are often formed as byproducts. An excess of ammonia helps minimise multiple substitution and neutralises the hydrogen halide formed. 
Methylamine is prepared commercially by the reaction of ammonia with chloromethane, and the reaction of ammonia with 2-bromopropanoic acid has been used to prepare racemic alanine in 70% yield. Ethanolamine is prepared by a ring-opening reaction with ethylene oxide: the reaction is sometimes allowed to go further to produce diethanolamine and triethanolamine. Amides can be prepared by the reaction of ammonia with carboxylic acid derivatives. Acyl chlorides are the most reactive, but the ammonia must be present in at least a twofold excess to neutralise the hydrogen chloride formed. Esters and anhydrides also react with ammonia to form amides. Ammonium salts of carboxylic acids can be dehydrated to amides so long as there are no thermally sensitive groups present: temperatures of 150–200 °C are required. The hydrogen in ammonia is susceptible to replacement by a myriad of substituents. When dry ammonia gas is heated with metallic sodium it converts to sodamide, NaNH2. With chlorine, monochloramine is formed. Pentavalent ammonia is known as λ5-amine or, more commonly, ammonium hydride. This crystalline solid is only stable under high pressure and decomposes back into trivalent ammonia and hydrogen gas at normal conditions. This substance was investigated as a possible solid rocket fuel in 1966. Ammonia as a ligand Ammonia can act as a ligand in transition metal complexes. It is a pure σ-donor, in the middle of the spectrochemical series, and shows intermediate hard–soft behaviour (see also ECW model). For historical reasons, ammonia is named ammine in the nomenclature of coordination compounds. Some notable ammine complexes include tetraamminediaquacopper(II) ([Cu(NH3)4(H2O)2]2+), a dark blue complex formed by adding ammonia to a solution of copper(II) salts. Tetraamminediaquacopper(II) hydroxide is known as Schweizer's reagent, and has the remarkable ability to dissolve cellulose. Diamminesilver(I) ([Ag(NH3)2]+) is the active species in Tollens' reagent. Formation of this complex can also help to distinguish between precipitates of the different silver halides: silver chloride (AgCl) is soluble in dilute (2 M) ammonia solution, silver bromide (AgBr) is only soluble in concentrated ammonia solution, whereas silver iodide (AgI) is insoluble in aqueous ammonia. Ammine complexes of chromium(III) were known in the late 19th century, and formed the basis of Alfred Werner's revolutionary theory on the structure of coordination compounds. Werner noted that only two isomers (fac- and mer-) of the complex [CrCl3(NH3)3] could be formed, and concluded the ligands must be arranged around the metal ion at the vertices of an octahedron. This proposal has since been confirmed by X-ray crystallography. An ammine ligand bound to a metal ion is markedly more acidic than a free ammonia molecule, although deprotonation in aqueous solution is still rare. One example is the calomel reaction, where the resulting amidomercury(II) compound is highly insoluble. HgCl2 + 2 NH3 → HgCl(NH2) + NH4Cl Ammonia forms 1:1 adducts with a variety of Lewis acids such as I2, phenol, and Al(CH3)3. Ammonia is a hard base (HSAB theory) and its E & C parameters are EB = 2.31 and CB = 2.04. Its relative donor strength toward a series of acids, versus other Lewis bases, can be illustrated by C-B plots. 
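In the ECW model mentioned above, the enthalpy of adduct formation with a Lewis acid is estimated as −ΔH = EA·EB + CA·CB (plus a constant W for certain acids). The minimal sketch below uses ammonia's EB and CB from the text; the acid parameters in the example call are hypothetical placeholders, not values from this article.

# ECW-style estimate of -dH (kcal/mol) for adduct formation with ammonia.
EB, CB = 2.31, 2.04          # ammonia base parameters, from the text

def minus_delta_h(EA, CA, W=0.0):
    # -dH = EA*EB + CA*CB + W  (W is nonzero only for certain acids)
    return EA * EB + CA * CB + W

# Hypothetical acid with EA = 0.50 and CA = 2.00, chosen only for illustration:
print(minus_delta_h(0.50, 2.00))   # ~5.2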
Detection and determination Ammonia in solution Ammonia and ammonium salts can be readily detected, in very minute traces, by the addition of Nessler's solution, which gives a distinct yellow colouration in the presence of the slightest trace of ammonia or ammonium salts. The amount of ammonia in ammonium salts can be estimated quantitatively by distillation of the salts with sodium or potassium hydroxide, the ammonia evolved being absorbed in a known volume of standard sulfuric acid and the excess of acid then determined volumetrically; or the ammonia may be absorbed in hydrochloric acid and the ammonium chloride so formed precipitated as ammonium hexachloroplatinate, (NH4)2PtCl6. Gaseous ammonia Sulfur sticks are burnt to detect small leaks in industrial ammonia refrigeration systems. Larger quantities can be detected by warming the salts with a caustic alkali or with quicklime, when the characteristic smell of ammonia will be at once apparent. Ammonia is an irritant, and irritation increases with concentration; the permissible exposure limit is 25 ppm, and exposure is lethal above 500 ppm. Higher concentrations are hardly detected by conventional detectors; the type of detector is chosen according to the sensitivity required (e.g. semiconductor, catalytic, electrochemical). Holographic sensors have been proposed for detecting concentrations up to 12.5% in volume. Ammoniacal nitrogen (NH3-N) Ammoniacal nitrogen (NH3-N) is a measure commonly used for testing the quantity of ammonium ions (derived naturally from ammonia, and returned to ammonia via organic processes) in water or waste liquids. It is a measure used mainly for quantifying values in waste treatment and water purification systems, as well as a measure of the health of natural and man-made water reserves. It is measured in units of mg/L (milligram per litre). History The ancient Greek historian Herodotus mentioned that there were outcrops of salt in an area of Libya that was inhabited by a people called the "Ammonians" (now: the Siwa oasis in northwestern Egypt, where salt lakes still exist). The Greek geographer Strabo also mentioned the salt from this region. However, the ancient authors Dioscorides, Apicius, Arrian, Synesius, and Aëtius of Amida described this salt as forming clear crystals that could be used for cooking and that were essentially rock salt. Hammoniacus sal appears in the writings of Pliny, although it is not known whether the term is identical with the more modern sal ammoniac (ammonium chloride). The fermentation of urine by bacteria produces a solution of ammonia; hence fermented urine was used in Classical Antiquity to wash cloth and clothing, to remove hair from hides in preparation for tanning, to serve as a mordant in dyeing cloth, and to remove rust from iron. In the form of sal ammoniac (نشادر, nushadir), ammonia was important to the Muslim alchemists as early as the 8th century, first mentioned by the Persian-Arab chemist Jābir ibn Hayyān, and to the European alchemists since the 13th century, being mentioned by Albertus Magnus. It was also used by dyers in the Middle Ages in the form of fermented urine to alter the colour of vegetable dyes. In the 15th century, Basilius Valentinus showed that ammonia could be obtained by the action of alkalis on sal ammoniac. At a later period, when sal ammoniac was obtained by distilling the hooves and horns of oxen and neutralizing the resulting carbonate with hydrochloric acid, the name "spirit of hartshorn" was applied to ammonia. 
Gaseous ammonia was first isolated by Joseph Black in 1756 by reacting sal ammoniac (ammonium chloride) with calcined magnesia (magnesium oxide). It was isolated again by Peter Woulfe in 1767, by Carl Wilhelm Scheele in 1770 and by Joseph Priestley in 1773, who termed it "alkaline air". In 1785, Claude Louis Berthollet ascertained its composition. The Haber–Bosch process to produce ammonia from the nitrogen in the air was developed by Fritz Haber and Carl Bosch in 1909 and patented in 1910. It was first used on an industrial scale in Germany during World War I, following the Allied blockade that cut off the supply of nitrates from Chile. The ammonia was used to produce explosives to sustain war efforts. Before the availability of natural gas, hydrogen as a precursor to ammonia production was produced via the electrolysis of water or using the chloralkali process. With the advent of the steel industry in the 20th century, ammonia became a byproduct of coke production from coal. Applications Solvent Liquid ammonia is the best-known and most widely studied nonaqueous ionising solvent. Its most conspicuous property is its ability to dissolve alkali metals to form highly coloured, electrically conductive solutions containing solvated electrons. Apart from these remarkable solutions, much of the chemistry in liquid ammonia can be classified by analogy with related reactions in aqueous solutions. Comparison of the physical properties of NH3 with those of water shows NH3 has the lower melting point, boiling point, density, viscosity, dielectric constant and electrical conductivity; this is due at least in part to the weaker hydrogen bonding in NH3 and because such bonding cannot form cross-linked networks, since each NH3 molecule has only one lone pair of electrons compared with two for each H2O molecule. The ionic self-dissociation constant of liquid NH3 at −50 °C is about 10−33. Solubility of salts Liquid ammonia is an ionising solvent, although less so than water, and dissolves a range of ionic compounds, including many nitrates, nitrites, cyanides, thiocyanates, metal cyclopentadienyl complexes and metal bis(trimethylsilyl)amides. Most ammonium salts are soluble and act as acids in liquid ammonia solutions. The solubility of halide salts increases from fluoride to iodide. A saturated solution of ammonium nitrate (Divers' solution, named after Edward Divers) contains 0.83 mol solute per mole of ammonia and has a vapour pressure of less than 1 bar even at room temperature. Solutions of metals Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca, Sr, Ba, Eu, and Yb (also Mg using an electrolytic process). At low concentrations (<0.06 mol/L), deep blue solutions are formed: these contain metal cations and solvated electrons, free electrons that are surrounded by a cage of ammonia molecules. These solutions are very useful as strong reducing agents. At higher concentrations, the solutions are metallic in appearance and in electrical conductivity. At low temperatures, the two types of solution can coexist as immiscible phases. Redox properties of liquid ammonia The range of thermodynamic stability of liquid ammonia solutions is very narrow, as the potential for oxidation to dinitrogen, E° (N2 + 6NH4+ + 6e− ⇌ 8NH3), is only +0.04 V. In practice, both oxidation to dinitrogen and reduction to dihydrogen are slow. 
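The very small +0.04 V window quoted above corresponds to an equally small free-energy change, which can be checked with ΔG° = −nFE°. A minimal sketch; the Faraday constant is a standard value assumed here, not taken from this article.

# Free-energy change for N2 + 6 NH4+ + 6 e- <-> 8 NH3 from the quoted potential.
F = 96485        # C/mol, Faraday constant (assumed standard value)
n = 6            # electrons transferred in the half-reaction above
E = 0.04         # V, standard potential quoted in the text

dG_kJ = -n * F * E / 1000.0
print(round(dG_kJ, 1))    # ~ -23.2 kJ per mole of reaction: a very narrow stability window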
This is particularly true of reducing solutions: the solutions of the alkali metals mentioned above are stable for several days, slowly decomposing to the metal amide and dihydrogen. Most studies involving liquid ammonia solutions are done in reducing conditions; although oxidation of liquid ammonia is usually slow, there is still a risk of explosion, particularly if transition metal ions are present as possible catalysts. Fertilizer In the US as of 2019, approximately 88% of ammonia was used as fertilizer, either as its salts, as solutions, or in anhydrous form. When applied to soil, it helps provide increased yields of crops such as maize and wheat. 30% of agricultural nitrogen applied in the US is in the form of anhydrous ammonia, and worldwide 110 million tonnes are applied each year. Precursor to nitrogenous compounds Ammonia is directly or indirectly the precursor to most nitrogen-containing compounds. Virtually all synthetic nitrogen compounds are derived from ammonia. An important derivative is nitric acid. This key material is generated via the Ostwald process by oxidation of ammonia with air over a platinum catalyst at elevated temperature and ≈9 atm. Nitric oxide is an intermediate in this conversion: NH3 + 2 O2 → HNO3 + H2O Nitric acid is used for the production of fertilizers, explosives, and many organonitrogen compounds. Ammonia is also used to make the following compounds: Hydrazine, in the Olin Raschig process and the peroxide process Hydrogen cyanide, in the BMA process and the Andrussow process Hydroxylamine and ammonium carbonate, in the Raschig process Phenol, in the Raschig–Hooker process Urea, in the Bosch–Meiser urea process and in Wöhler synthesis Amino acids, using Strecker amino-acid synthesis Acrylonitrile, in the Sohio process Ammonia can also be used to make compounds in reactions which are not specifically named. Examples of such compounds include: ammonium perchlorate, ammonium nitrate, formamide, dinitrogen tetroxide, alprazolam, ethanolamine, ethyl carbamate, hexamethylenetetramine, and ammonium bicarbonate. Cleansing agent Household "ammonia" (also incorrectly called ammonium hydroxide) is a solution of NH3 in water, and is used as a general purpose cleaner for many surfaces. Because ammonia results in a relatively streak-free shine, one of its most common uses is to clean glass, porcelain and stainless steel. It is also frequently used for cleaning ovens and soaking items to loosen baked-on grime. Household ammonia ranges in concentration by weight from 5 to 10% ammonia. United States manufacturers of cleaning products are required to provide the product's material safety data sheet which lists the concentration used. Solutions of ammonia (5–10% by weight) are used as household cleaners, particularly for glass. These solutions are irritating to the eyes and mucous membranes (respiratory and digestive tracts), and to a lesser extent the skin. Experts advise that caution be used to ensure the substance is not mixed into any liquid containing bleach, due to the danger of toxic gas. Mixing with chlorine-containing products or strong oxidants, such as household bleach, can generate chloramines. Experts also warn not to use ammonia-based cleaners (such as glass or window cleaners) on car touchscreens, due to the risk of damage to the screen's anti-glare and anti-fingerprint coatings. Fermentation Solutions of ammonia ranging from 16% to 25% are used in the fermentation industry as a source of nitrogen for microorganisms and to adjust pH during fermentation. 
Antimicrobial agent for food products As early as in 1895, it was known that ammonia was "strongly antiseptic ... it requires 1.4 grams per litre to preserve beef tea (broth)." In one study, anhydrous ammonia destroyed 99.999% of zoonotic bacteria in 3 types of animal feed, but not silage. Anhydrous ammonia is currently used commercially to reduce or eliminate microbial contamination of beef. Lean finely textured beef (popularly known as "pink slime") in the beef industry is made from fatty beef trimmings (c. 50–70% fat) by removing the fat using heat and centrifugation, then treating it with ammonia to kill E. coli. The process was deemed effective and safe by the US Department of Agriculture based on a study that found that the treatment reduces E. coli to undetectable levels. There have been safety concerns about the process as well as consumer complaints about the taste and smell of ammonia-treated beef. Other Fuel The raw energy density of liquid ammonia is 11.5 MJ/L, which is about a third that of diesel. There is the opportunity to convert ammonia back to hydrogen, where it can be used to power hydrogen fuel cells, or it may be used directly within high-temperature solid oxide direct ammonia fuel cells to provide efficient power sources that do not emit greenhouse gases. The conversion of ammonia to hydrogen via the sodium amide process, either for combustion or as fuel for a proton exchange membrane fuel cell, is possible. Another method is the catalytic decomposition of ammonia using solid catalysts. Conversion to hydrogen would allow the storage of hydrogen at nearly 18 wt% compared to ≈5% for gaseous hydrogen under pressure. Ammonia engines or ammonia motors, using ammonia as a working fluid, have been proposed and occasionally used. The principle is similar to that used in a fireless locomotive, but with ammonia as the working fluid, instead of steam or compressed air. Ammonia engines were used experimentally in the 19th century by Goldsworthy Gurney in the UK and the St. Charles Avenue Streetcar line in New Orleans in the 1870s and 1880s, and during World War II ammonia was used to power buses in Belgium. Ammonia is sometimes proposed as a practical alternative to fossil fuel for internal combustion engines. Its high octane rating of 120 and low flame temperature allows the use of high compression ratios without a penalty of high NOx production. Since ammonia contains no carbon, its combustion cannot produce carbon dioxide, carbon monoxide, hydrocarbons, or soot. Ammonia production currently creates 1.8% of global emissions. "Green ammonia" is ammonia produced by using green hydrogen (hydrogen produced by electrolysis), whereas "blue ammonia" is ammonia produced using blue hydrogen (hydrogen produced by steam methane reforming where the carbon dioxide has been captured and stored). However, ammonia cannot be easily used in existing Otto cycle engines because of its very narrow flammability range, and there are also other barriers to widespread automobile usage. In terms of raw ammonia supplies, plants would have to be built to increase production levels, requiring significant capital and energy sources. Although it is the second most produced chemical (after sulfuric acid), the scale of ammonia production is a small fraction of world petroleum usage. It could be manufactured from renewable energy sources, as well as coal or nuclear power. The 60 MW Rjukan dam in Telemark, Norway, produced ammonia for many years from 1913, providing fertilizer for much of Europe. 
Despite this, several tests have been run. In 1981, a Canadian company converted a 1981 Chevrolet Impala to operate using ammonia as fuel. In 2007, a University of Michigan pickup powered by ammonia drove from Detroit to San Francisco as part of a demonstration, requiring only one fill-up in Wyoming. Compared to hydrogen as a fuel, ammonia is much more energy efficient, and could be produced, stored, and delivered at a much lower cost than hydrogen, which must be kept compressed or as a cryogenic liquid. Rocket engines have also been fueled by ammonia. The Reaction Motors XLR99 rocket engine that powered the X-15 hypersonic research aircraft used liquid ammonia. Although not as powerful as other fuels, it left no soot in the reusable rocket engine, and its density approximately matches the density of the oxidizer, liquid oxygen, which simplified the aircraft's design. In early August 2018, scientists from Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) announced success in developing a process, using a special membrane, to release hydrogen from ammonia and harvest it at ultra-high purity as a fuel for cars. Two demonstration fuel cell vehicles, a Hyundai Nexo and a Toyota Mirai, have used the technology. In 2020, Saudi Arabia shipped forty metric tons of liquid "blue ammonia" to Japan for use as a fuel. It was produced as a by-product by petrochemical industries, and can be burned without giving off greenhouse gases. Its energy density by volume is nearly double that of liquid hydrogen. If the process of creating it can be scaled up via purely renewable resources, producing green ammonia, it could make a major difference in avoiding climate change. In 2020, the company ACWA Power and the city of Neom announced the construction of a green hydrogen and ammonia plant. Green ammonia is considered a potential fuel for future container ships. In 2020, the companies DSME and MAN Energy Solutions announced the construction of an ammonia-based ship; DSME plans to commercialize it by 2025. Japan aims to bring forward a plan to develop ammonia co-firing technology that can increase the use of ammonia in power generation, as part of efforts to help domestic and other Asian utilities accelerate their transition to carbon neutrality. In October 2021, the first International Conference on Fuel Ammonia (ICFA2021) was held. Remediation of gaseous emissions Ammonia is used to scrub SO2 from the burning of fossil fuels, and the resulting product is converted to ammonium sulfate for use as fertilizer. Ammonia neutralises the nitrogen oxide (NOx) pollutants emitted by diesel engines. This technology, called SCR (selective catalytic reduction), relies on a vanadia-based catalyst. Ammonia may be used to mitigate gaseous spills of phosgene. As a hydrogen carrier Because it is liquid at ambient temperature under its own vapour pressure and has a high volumetric and gravimetric energy density, ammonia is considered a suitable carrier for hydrogen, and may be cheaper than direct transport of liquid hydrogen. Refrigeration – R717 Because of ammonia's vaporization properties, it is a useful refrigerant. It was commonly used before the popularisation of chlorofluorocarbons (Freons). Anhydrous ammonia is widely used in industrial refrigeration applications and hockey rinks because of its high energy efficiency and low cost. 
It suffers from the disadvantages of toxicity and of requiring corrosion-resistant components, which restrict its domestic and small-scale use. Along with its use in modern vapor-compression refrigeration, it is used in a mixture with hydrogen and water in absorption refrigerators. The Kalina cycle, which is of growing importance to geothermal power plants, depends on the wide boiling range of the ammonia–water mixture. Ammonia coolant is also used in the S1 radiator aboard the International Space Station in two loops which are used to regulate the internal temperature and enable temperature-dependent experiments. The potential importance of ammonia as a refrigerant has increased with the discovery that vented CFCs and HFCs are extremely potent and stable greenhouse gases. Stimulant Ammonia, as the vapor released by smelling salts, has found significant use as a respiratory stimulant. Ammonia is commonly used in the illegal manufacture of methamphetamine through a Birch reduction. The Birch method of making methamphetamine is dangerous because the alkali metal and liquid ammonia are both extremely reactive, and the temperature of liquid ammonia makes it susceptible to explosive boiling when reactants are added. Textile Liquid ammonia is used for the treatment of cotton materials, giving properties similar to those obtained by mercerisation with alkalis. In particular, it is used for prewashing of wool. Lifting gas At standard temperature and pressure, ammonia is less dense than air and has approximately 45–48% of the lifting power of hydrogen or helium. Ammonia has sometimes been used to fill balloons as a lifting gas. Because of its relatively high boiling point (compared to helium and hydrogen), ammonia could potentially be refrigerated and liquefied aboard an airship to reduce lift and add ballast (and returned to a gas to add lift and reduce ballast). Fuming Ammonia has been used to darken quartersawn white oak in Arts & Crafts and Mission-style furniture. Ammonia fumes react with the natural tannins in the wood and cause it to change colour. Safety The U.S. Occupational Safety and Health Administration (OSHA) has set a 15-minute exposure limit for gaseous ammonia of 35 ppm by volume in the environmental air and an 8-hour exposure limit of 25 ppm by volume. The National Institute for Occupational Safety and Health (NIOSH) recently reduced the IDLH (Immediately Dangerous to Life and Health, the level to which a healthy worker can be exposed for 30 minutes without suffering irreversible health effects) from 500 to 300 ppm, based on more conservative interpretations of the original 1943 research. Other organizations have varying exposure levels. U.S. Navy Standards [U.S. Bureau of Ships 1962] maximum allowable concentrations (MACs): continuous exposure (60 days): 25 ppm / 1 hour: 400 ppm. Ammonia vapour has a sharp, irritating, pungent odour that acts as a warning of potentially dangerous exposure. The average odour threshold is 5 ppm, well below any danger or damage. Exposure to very high concentrations of gaseous ammonia can result in lung damage and death. Ammonia is regulated in the United States as a non-flammable gas, but it meets the definition of a material that is toxic by inhalation and requires a hazardous safety permit when transported in quantities greater than 13,248 L (3,500 gallons). Liquid ammonia is dangerous because it is hygroscopic and because it can cause caustic burns. 
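The exposure limits above are given in ppm by volume; they can be converted to mass concentrations with the usual molar-volume relation. The sketch below assumes a molar volume of 24.45 L/mol (25 °C, 1 atm), a reference condition not stated in this article.

# Convert gas-phase ammonia limits from ppm (by volume) to mg/m3.
M_NH3 = 17.031   # g/mol
V_M = 24.45      # L/mol at 25 C and 1 atm (assumed reference conditions)

def ppm_to_mg_per_m3(ppm):
    return ppm * M_NH3 / V_M

for limit_ppm in (25, 35, 300):   # OSHA 8-hour, OSHA 15-minute, NIOSH IDLH
    print(limit_ppm, "ppm ->", round(ppm_to_mg_per_m3(limit_ppm), 1), "mg/m3")
# 25 ppm -> 17.4 mg/m3, 35 ppm -> 24.4 mg/m3, 300 ppm -> 209.0 mg/m3

This conversion is consistent with the 50 ppm ≈ 35 mg/m3 equivalence quoted later for the US permissible exposure limit.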
Toxicity The toxicity of ammonia solutions does not usually cause problems for humans and other mammals, as a specific mechanism exists to prevent its build-up in the bloodstream. Ammonia is converted to carbamoyl phosphate by the enzyme carbamoyl phosphate synthetase, and then enters the urea cycle to be either incorporated into amino acids or excreted in the urine. Fish and amphibians lack this mechanism, as they can usually eliminate ammonia from their bodies by direct excretion. Ammonia, even at dilute concentrations, is highly toxic to aquatic animals, and for this reason it is classified as dangerous for the environment. Atmospheric ammonia plays a key role in the formation of fine particulate matter. Ammonia is a constituent of tobacco smoke. Coking wastewater Ammonia is present in coking wastewater streams, as a liquid by-product of the production of coke from coal. In some cases, the ammonia is discharged to the marine environment where it acts as a pollutant. The Whyalla steelworks in South Australia is one example of a coke-producing facility which discharges ammonia into marine waters. Aquaculture Ammonia toxicity is believed to be a cause of otherwise unexplained losses in fish hatcheries. Excess ammonia may accumulate and cause alteration of metabolism or increases in the body pH of the exposed organism. Tolerance varies among fish species. At lower concentrations, around 0.05 mg/L, un-ionised ammonia is harmful to fish species and can result in poor growth and feed conversion rates, reduced fecundity and fertility, and increased stress and susceptibility to bacterial infections and diseases. Exposed to excess ammonia, fish may suffer loss of equilibrium, hyper-excitability, increased respiratory activity and oxygen uptake, and increased heart rate. At concentrations exceeding 2.0 mg/L, ammonia causes gill and tissue damage, extreme lethargy, convulsions, coma, and death. Experiments have shown that the lethal concentration for a variety of fish species ranges from 0.2 to 2.0 mg/L. During winter, when reduced feeds are administered to aquaculture stock, ammonia levels can be higher. Lower ambient temperatures reduce the rate of algal photosynthesis, so less ammonia is removed by any algae present. Within an aquaculture environment, especially at large scale, there is no fast-acting remedy to elevated ammonia levels. Prevention rather than correction is recommended to reduce harm to farmed fish and, in open water systems, to the surrounding environment. Storage information Similar to propane, anhydrous ammonia boils below room temperature when at atmospheric pressure. A storage vessel rated for the resulting vapour pressure is suitable to contain the liquid. Ammonia is used in numerous industrial applications requiring carbon or stainless steel storage vessels. Ammonia with at least 0.2 percent by weight water content is not corrosive to carbon steel. Carbon steel storage tanks holding ammonia with 0.2 percent by weight or more of water could last more than 50 years in service. Experts warn that ammonium compounds should not be allowed to come into contact with bases (unless in an intended and contained reaction), as dangerous quantities of ammonia gas could be released. Laboratory The hazards of ammonia solutions depend on the concentration: "dilute" ammonia solutions are usually 5–10% by weight (<5.62 mol/L); "concentrated" solutions are usually prepared at >25% by weight. A 25% (by weight) solution has a density of 0.907 g/cm3, and a solution that has a lower density will be more concentrated. 
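The mol/L figures quoted for "dilute" and "concentrated" solutions follow from the weight fraction and the solution density. In the sketch below, the 0.957 g/cm3 density assumed for a 10% solution is an approximate literature value not given in this article; the 25% density is the one quoted above.

# Molarity of aqueous ammonia from weight fraction and solution density.
M_NH3 = 17.031   # g/mol

def molarity(weight_fraction, density_g_per_cm3):
    return 1000.0 * density_g_per_cm3 * weight_fraction / M_NH3

print(round(molarity(0.10, 0.957), 2))   # ~5.62 mol/L for a 10% solution (assumed density)
print(round(molarity(0.25, 0.907), 1))   # ~13.3 mol/L for the 25% solution quoted above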
The European Union classification of ammonia solutions is given in the table. The ammonia vapour from concentrated ammonia solutions is severely irritating to the eyes and the respiratory tract, and experts warn that these solutions should be handled only in a fume hood. Saturated ("0.880", see Properties above) solutions can develop a significant pressure inside a closed bottle in warm weather, and experts also advise that the bottle be opened with care; this is not usually a problem for 25% ("0.900") solutions. Experts warn that ammonia solutions not be mixed with halogens, as toxic and/or explosive products are formed. Experts also warn that prolonged contact of ammonia solutions with silver, mercury or iodide salts can lead to explosive products; such mixtures are often formed in qualitative inorganic analysis, and the solution should be lightly acidified but not concentrated (<6% w/v) before disposal once the test is completed. Laboratory use of anhydrous ammonia (gas or liquid) Anhydrous ammonia is classified as toxic (T) and dangerous for the environment (N). The gas is flammable (autoignition temperature: 651 °C) and can form explosive mixtures with air (16–25%). The permissible exposure limit (PEL) in the United States is 50 ppm (35 mg/m3), while the IDLH concentration is estimated at 300 ppm. Repeated exposure to ammonia lowers the sensitivity to the smell of the gas: normally the odour is detectable at concentrations of less than 50 ppm, but desensitised individuals may not detect it even at concentrations of 100 ppm. Anhydrous ammonia corrodes copper- and zinc-containing alloys, which makes brass fittings unsuitable for handling the gas. Liquid ammonia can also attack rubber and certain plastics. Ammonia reacts violently with the halogens. Nitrogen triiodide, a primary high explosive, is formed when ammonia comes in contact with iodine. Ammonia causes the explosive polymerisation of ethylene oxide. It also forms explosive fulminating compounds with compounds of gold, silver, mercury, germanium or tellurium, and with stibine. Violent reactions have also been reported with acetaldehyde, hypochlorite solutions, potassium ferricyanide and peroxides. Ammonia adsorption followed by FTIR, as well as temperature-programmed desorption of ammonia (NH3-TPD), are valuable methods for characterizing the acid-base properties of heterogeneous catalysts. Production Ammonia is one of the most produced inorganic chemicals, with global production reported at 175 million tonnes in 2018. China accounted for 28.5% of that, followed by Russia at 10.3%, the United States at 9.1%, and India at 6.7%. Before the start of World War I, most ammonia was obtained by the dry distillation of nitrogenous vegetable and animal waste products, including camel dung, where it was distilled by the reduction of nitrous acid and nitrites with hydrogen; in addition, it was produced by the distillation of coal, and also by the decomposition of ammonium salts by alkaline hydroxides such as quicklime: 2 NH4Cl + 2 CaO → CaCl2 + Ca(OH)2 + 2 NH3(g) For small scale laboratory synthesis, one can heat urea and calcium hydroxide: (NH2)2CO + Ca(OH)2 → CaCO3 + 2 NH3 Haber–Bosch Mass production uses the Haber–Bosch process, a gas-phase reaction between hydrogen (H2) and nitrogen (N2) at a moderately elevated temperature (450 °C) and high pressure: N2 + 3 H2 ⇌ 2 NH3 This reaction is exothermic and results in decreased entropy, meaning that the reaction is favoured at lower temperatures and higher pressures. 
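The statement that the equilibrium is favoured at lower temperatures can be made quantitative with the van't Hoff equation. The sketch below assumes a standard reaction enthalpy of about −92 kJ per mole of N2 converted, a literature value not given in this article, and estimates how much the equilibrium constant falls between room temperature and the 450 °C used industrially.

import math

# van't Hoff estimate: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)
R = 8.314          # J/(mol K)
dH = -92000.0      # J/mol for N2 + 3 H2 <-> 2 NH3 (assumed literature value)

def k_ratio(T1_kelvin, T2_kelvin):
    # Factor by which the equilibrium constant changes going from T1 to T2.
    return math.exp(-(dH / R) * (1.0 / T2_kelvin - 1.0 / T1_kelvin))

print("%.1e" % k_ratio(298.0, 723.0))   # ~3e-10: K falls by nearly ten orders of magnitude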
Such conditions are difficult and expensive to achieve, as lower temperatures result in slower reaction kinetics (hence a slower reaction rate) and high pressure requires high-strength pressure vessels that are not weakened by hydrogen embrittlement. Diatomic nitrogen is bound together by a triple bond, which makes it rather inert. Yield and efficiency are low, meaning that the output must be continuously separated and extracted for the reaction to proceed at an acceptable pace. Combined with the energy needed to produce hydrogen and purified atmospheric nitrogen, ammonia production is energy-intensive, accounting for 1 to 2% of global energy consumption, 3% of global carbon emissions, and 3 to 5% of natural gas consumption. Electrochemical Ammonia can be synthesized electrochemically. The only required inputs are sources of nitrogen (potentially atmospheric) and hydrogen (water), allowing generation at the point of use. The availability of renewable energy creates the possibility of zero emission production. In 2012, Hideo Hosono's group found that a Ru-loaded electride works well as a catalyst and pursued more efficient formation. This method is implemented in a small plant for ammonia synthesis in Japan. In 2019, Hosono's group found another catalyst, a novel barium–cerium perovskite oxynitride-hydride, that works at lower temperature and without costly ruthenium. Another electrochemical synthesis mode involves the reductive formation of lithium nitride, which can be protonated to ammonia, given a proton source. Ethanol has been used as such a source, although it may degrade. One study used lithium electrodeposition in tetrahydrofuran. In 2021, Suryanto et al. replaced ethanol with a tetraalkyl phosphonium salt. This cation can stably undergo deprotonation–reprotonation cycles, while it enhances the medium's ionic conductivity. The study observed production rates of about 53 nanomoles/s/cm2 at 69 ± 1% faradaic efficiency in experiments under 0.5-bar hydrogen and 19.5-bar nitrogen partial pressure at ambient temperature. Role in biological systems and human disease Ammonia is both a metabolic waste and a metabolic input throughout the biosphere. It is an important source of nitrogen for living systems. Although atmospheric nitrogen abounds (more than 75%), few living creatures are capable of using this atmospheric nitrogen in its diatomic form, N2 gas. Therefore, nitrogen fixation is required for the synthesis of amino acids, which are the building blocks of protein. Some plants rely on ammonia and other nitrogenous wastes incorporated into the soil by decaying matter. Others, such as nitrogen-fixing legumes, benefit from symbiotic relationships with rhizobia that create ammonia from atmospheric nitrogen. Biosynthesis In certain organisms, ammonia is produced from atmospheric nitrogen by enzymes called nitrogenases. The overall process is called nitrogen fixation. Intense effort has been directed toward understanding the mechanism of biological nitrogen fixation; the scientific interest in this problem is motivated by the unusual structure of the active site of the enzyme, which consists of an Fe7MoS9 ensemble. Ammonia is also a metabolic product of amino acid deamination catalyzed by enzymes such as glutamate dehydrogenase 1. Ammonia excretion is common in aquatic animals. In humans, it is quickly converted to urea, which is much less toxic, particularly less basic. This urea is a major component of the dry weight of urine. Most reptiles, birds, insects, and snails excrete uric acid as their sole nitrogenous waste. 
Physiology Ammonia also plays a role in both normal and abnormal animal physiology. It is biosynthesised through normal amino acid metabolism and is toxic in high concentrations. The liver converts ammonia to urea through a series of reactions known as the urea cycle. Liver dysfunction, such as that seen in cirrhosis, may lead to elevated amounts of ammonia in the blood (hyperammonemia). Likewise, defects in the enzymes responsible for the urea cycle, such as ornithine transcarbamylase, lead to hyperammonemia. Hyperammonemia contributes to the confusion and coma of hepatic encephalopathy, as well as the neurologic disease common in people with urea cycle defects and organic acidurias. Ammonia is important for normal animal acid/base balance. After formation of ammonium from glutamine, α-ketoglutarate may be degraded to produce two bicarbonate ions, which are then available as buffers for dietary acids. Ammonium is excreted in the urine, resulting in net acid loss. Ammonia may itself diffuse across the renal tubules, combine with a hydrogen ion, and thus allow for further acid excretion. Excretion Ammonium ions are a toxic waste product of metabolism in animals. In fish and aquatic invertebrates, it is excreted directly into the water. In mammals, sharks, and amphibians, it is converted in the urea cycle to urea, which is less toxic and can be stored more efficiently. In birds, reptiles, and terrestrial snails, metabolic ammonium is converted into uric acid, which is solid and can therefore be excreted with minimal water loss. Beyond Earth Ammonia has been detected in the atmospheres of the giant planets, including Jupiter, along with other gases such as methane, hydrogen, and helium. The interior of Saturn may include frozen ammonia crystals. It is found on Deimos and Phobos – the two moons of Mars. Interstellar space Ammonia was first detected in interstellar space in 1968, based on microwave emissions from the direction of the galactic core. This was the first polyatomic molecule to be so detected. The sensitivity of the molecule to a broad range of excitations and the ease with which it can be observed in a number of regions has made ammonia one of the most important molecules for studies of molecular clouds. The relative intensity of the ammonia lines can be used to measure the temperature of the emitting medium. The following isotopic species of ammonia have been detected: NH3, 15NH3, NH2D, NHD2, and ND3 The detection of triply deuterated ammonia was considered a surprise as deuterium is relatively scarce. It is thought that the low-temperature conditions allow this molecule to survive and accumulate. Since its interstellar discovery, NH3 has proved to be an invaluable spectroscopic tool in the study of the interstellar medium. With a large number of transitions sensitive to a wide range of excitation conditions, NH3 has been widely astronomically detected – its detection has been reported in hundreds of journal articles. Listed below is a sample of journal articles that highlights the range of detectors that have been used to identify ammonia. The study of interstellar ammonia has been important to a number of areas of research in the last few decades. Some of these are delineated below and primarily involve using ammonia as an interstellar thermometer. Interstellar formation mechanisms The interstellar abundance for ammonia has been measured for a variety of environments. 
The [NH3]/[H2] ratio has been estimated to range from 10−7 in small dark clouds up to 10−5 in the dense core of the Orion Molecular Cloud Complex. Although a total of 18 production routes have been proposed, the principal formation mechanism for interstellar NH3 is the reaction: NH4+ + e− → NH3 + H· The rate constant, k, of this reaction depends on the temperature of the environment, with a value of 5.2×10−6 at 10 K. The rate constant was calculated from the formula k = a(T/300)^B, with the parameters a and B fitted for each reaction. Assuming an NH4+ abundance of 3×10−7 and an electron abundance of 10−7 typical of molecular clouds, the formation will proceed at a rate of 1.6×10−9 cm−3 s−1 in a molecular cloud of total density 10^5 cm−3. All other proposed formation reactions have rate constants of between 2 and 13 orders of magnitude smaller, making their contribution to the abundance of ammonia relatively insignificant. As an example of the minor contribution other formation reactions play, the reaction: H2 + NH2 → NH3 + H has a rate constant of 2.2. Assuming H2 densities of 10^5 and an [NH2]/[H2] ratio of 10−7, this reaction proceeds at a rate of 2.2, more than 3 orders of magnitude slower than the primary reaction above. Some of the other possible formation reactions are: H− + NH4+ → NH3 + H2 PNH3+ + e− → P + NH3 Interstellar destruction mechanisms There are 113 total proposed reactions leading to the destruction of NH3. Of these, 39 were tabulated in extensive tables of the chemistry among C, N, and O compounds. A review of interstellar ammonia cites the reactions of NH3 with H3+ and with HCO+ (each yielding NH4+) as the principal dissociation mechanisms, with rate constants of 4.39×10−9 and 2.2×10−9, respectively. These two reactions run at rates of 8.8×10−9 and 4.4×10−13, respectively. These calculations assumed the given rate constants and abundances of [NH3]/[H2] = 10−5, [H3+]/[H2] = 2×10−5, [HCO+]/[H2] = 2×10−9, and total densities of n = 10^5, typical of cold, dense, molecular clouds. Clearly, between these two primary reactions, the reaction with H3+ is the dominant destruction reaction, with a rate ≈10,000 times faster than the reaction with HCO+. This is due to the relatively high abundance of H3+. Single antenna detections Radio observations of NH3 from the Effelsberg 100-m Radio Telescope reveal that the ammonia line is separated into two components – a background ridge and an unresolved core. The background corresponds well with the locations of previously detected CO. The 25 m Chilbolton telescope in England detected radio signatures of ammonia in H II regions, H2O masers, Herbig–Haro (H-H) objects, and other objects associated with star formation. A comparison of emission line widths indicates that turbulent or systematic velocities do not increase in the central cores of molecular clouds. Microwave radiation from ammonia was observed in several galactic objects including W3(OH), Orion A, W43, W51, and five sources in the galactic centre. The high detection rate indicates that this is a common molecule in the interstellar medium and that high-density regions are common in the galaxy. Interferometric studies VLA observations of NH3 in seven regions with high-velocity gaseous outflows revealed condensations of less than 0.1 pc in L1551, S140, and Cepheus A. Three individual condensations were detected in Cepheus A, one of them with a highly elongated shape. They may play an important role in creating the bipolar outflow in the region. Extragalactic ammonia was imaged using the VLA in IC 342. 
The hot gas has temperatures above 70 K, as inferred from ammonia line ratios, and appears to be closely associated with the innermost portions of the nuclear bar seen in CO. NH3 was also monitored by the VLA toward a sample of four galactic ultracompact HII regions: G9.62+0.19, G10.47+0.03, G29.96-0.02, and G31.41+0.31. Based upon temperature and density diagnostics, it is concluded that in general such clumps are probably the sites of massive star formation in an early evolutionary phase prior to the development of an ultracompact HII region. Infrared detections Absorption at 2.97 micrometres due to solid ammonia was recorded from interstellar grains in the Becklin-Neugebauer Object and probably in NGC 2264-IR as well. This detection helped explain the physical shape of previously poorly understood and related ice absorption lines. A spectrum of the disk of Jupiter was obtained from the Kuiper Airborne Observatory, covering the 100 to 300 cm−1 spectral range. Analysis of the spectrum provides information on global mean properties of ammonia gas and an ammonia ice haze. A total of 149 dark cloud positions were surveyed for evidence of 'dense cores' by using the (J,K) = (1,1) inversion line of NH3. In general, the cores are not spherically shaped, with aspect ratios ranging from 1.1 to 4.4. It is also found that cores with stars have broader lines than cores without stars. Ammonia has been detected in the Draco Nebula and in one or possibly two molecular clouds, which are associated with the high-latitude galactic infrared cirrus. The finding is significant because these clouds may represent the birthplaces of the Population I metallicity B-type stars in the galactic halo that could have been born in the galactic disk. Observations of nearby dark clouds By balancing absorption and stimulated emission with spontaneous emission, it is possible to construct a relation between excitation temperature and density. Moreover, since the transitional levels of ammonia can be approximated by a 2-level system at low temperatures, this calculation is fairly simple. This premise can be applied to dark clouds, regions suspected of having extremely low temperatures and possible sites for future star formation. Detections of ammonia in dark clouds show very narrow lines, indicative not only of low temperatures but also of a low level of inner-cloud turbulence. Line ratio calculations provide a measurement of cloud temperature that is independent of previous CO observations. The ammonia observations were consistent with CO measurements of rotation temperatures of ≈10 K. With this, densities can be determined, and have been calculated to range between 10^4 and 10^5 cm−3 in dark clouds. Mapping of NH3 gives typical cloud sizes of 0.1 pc and masses near 1 solar mass. These cold, dense cores are the sites of future star formation. UC HII regions Ultra-compact HII regions are among the best tracers of high-mass star formation. The dense material surrounding UCHII regions is likely primarily molecular. Since a complete study of massive star formation necessarily involves the cloud from which the star formed, ammonia is an invaluable tool in understanding this surrounding molecular material. Since this molecular material can be spatially resolved, it is possible to constrain the heating/ionising sources, temperatures, masses, and sizes of the regions. Doppler-shifted velocity components allow for the separation of distinct regions of molecular gas that can trace outflows and hot cores originating from forming stars. 
Extragalactic detection Ammonia has been detected in external galaxies, and by simultaneously measuring several lines, it is possible to directly measure the gas temperature in these galaxies. Line ratios imply that gas temperatures are warm (≈50 K), originating from dense clouds with sizes of tens of pc. This is consistent with the picture within our own Milky Way galaxy: hot dense molecular cores form around newly forming stars embedded in larger clouds of molecular material on the scale of several hundred pc (giant molecular clouds; GMCs). External links International Chemical Safety Card 0414 (anhydrous ammonia), ilo.org. International Chemical Safety Card 0215 (aqueous solutions), ilo.org. Emergency Response to Ammonia Fertilizer Releases (Spills) for the Minnesota Department of Agriculture, ammoniaspills.org National Institute for Occupational Safety and Health – Ammonia Page, cdc.gov NIOSH Pocket Guide to Chemical Hazards – Ammonia, cdc.gov
Ammonia
In computer programming, assembly language (or assembler language), sometimes abbreviated asm, is any low-level programming language in which there is a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time. Assembly language may also be called symbolic machine code. Because assembly depends on the machine code instructions, each assembly language is specific to a particular computer architecture. Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system, as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, a much more complicated task than assembling. Assembly language syntax Assembly language uses a mnemonic to represent, e.g., each low-level machine instruction or opcode, each directive, typically also each architectural register, flag, etc. Some of the mnemonics may be built in and some user defined. Many operations require one or more operands in order to form a complete instruction. Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions for operands. Thus, programmers are freed from tedious repetitive calculations and assembler programs are much more readable than machine code. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate program development, to control the assembly process, and to aid debugging. Some are column oriented, with specific fields in specific columns; this was very common for machines using punched cards in the 1950s and early 1960s. Some assemblers have free-form syntax, with fields separated by delimiters, e.g., punctuation, white space. Some assemblers are hybrid, with, e.g., labels, in a specific column and other fields separated by delimiters; this became more common than column oriented syntax in the 1960s. IBM System/360 All of the IBM assemblers for System/360, by default, have a label in column 1, fields separated by delimiters in columns 2-71, a continuation indicator in column 72 and a sequence number in columns 73-80. 
The delimiter for label, opcode, operands and comments is spaces, while individual operands are separated by commas and parentheses. Terminology A macro assembler is an assembler that includes a macroinstruction facility so that (parameterized) assembly language text can be represented by a name, and that name can be used to insert the expanded text into other code. Open code refers to any assembler input outside of a macro definition. A cross assembler (see also cross compiler) is an assembler that is run on a computer or operating system (the host system) of a different type from the system on which the resulting code is to run (the target system). Cross-assembling facilitates the development of programs for systems that do not have the resources to support software development, such as an embedded system or a microcontroller. In such a case, the resulting object code must be transferred to the target system, via read-only memory (ROM, EPROM, etc.), a programmer (when the read-only memory is integrated in the device, as in microcontrollers), or a data link using either an exact bit-by-bit copy of the object code or a text-based representation of that code (such as Intel hex or Motorola S-record). A high-level assembler is a program that provides language abstractions more often associated with high-level languages, such as advanced control structures (IF/THEN/ELSE, DO CASE, etc.) and high-level abstract data types, including structures/records, unions, classes, and sets. A microassembler is a program that helps prepare a microprogram, called firmware, to control the low level operation of a computer. A meta-assembler is "a program that accepts the syntactic and semantic description of an assembly language, and generates an assembler for that language", or that accepts an assembler source file along with such a description and assembles the source file in accordance with that description. "Meta-Symbol" assemblers for the SDS 9 Series and SDS Sigma series of computers are meta-assemblers. Sperry Univac also provided a Meta-Assembler for the UNIVAC 1100/2200 series. inline assembler (or embedded assembler) is assembler code contained within a high-level language program. This is most often used in systems programs which need direct access to the hardware. Key concepts Assembler An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an operation code ("opcode") as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline, instead of called subroutines. Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. Called jump-sizing, most of them are able to perform jump-instruction replacements (long jumps replaced by short or relative jumps) in any number of passes, on request. 
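As a sketch of the jump-sizing idea just described: x86 has a 2-byte short jump (opcode EB with a signed 8-bit displacement) and a 5-byte near jump (opcode E9 with a 32-bit displacement), and an assembler can pick the shorter form whenever the displacement fits. The selection logic below is a simplified illustration in Python, not the algorithm of any particular assembler.

# Simplified jump-sizing: use the 2-byte short form when the displacement
# fits in a signed byte, otherwise fall back to the 5-byte near form.
# (x86 encodings: EB rel8 for short jumps, E9 rel32 for near jumps.)

def encode_jump(from_addr, target):
    disp_short = target - (from_addr + 2)         # displacement after a 2-byte jump
    if -128 <= disp_short <= 127:
        return bytes([0xEB, disp_short & 0xFF])   # short jump
    disp_near = target - (from_addr + 5)          # displacement after a 5-byte jump
    return bytes([0xE9]) + disp_near.to_bytes(4, "little", signed=True)

print(encode_jump(0x100, 0x110).hex())    # 'eb0e'       (short form suffices)
print(encode_jump(0x100, 0x1000).hex())   # 'e9fb0e0000' (needs the near form)

In a real assembler this choice interacts with the pass structure, since shrinking one jump can change the displacement of another.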
Others may even do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as possible. Assemblers have been available since the 1950s, as the first step above machine language and before high-level programming languages such as Fortran, Algol, COBOL and Lisp. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better-known examples. There may be several assemblers with different syntax for a particular CPU or instruction set architecture. For instance, an instruction to add memory data to a register in a x86-family processor might be add eax,[ebx], in original Intel syntax, whereas this would be written addl (%ebx),%eax in the AT&T syntax used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations (such as FASM-syntax, TASM-syntax, ideal mode, etc., in the special case of x86 assembly programming). Number of passes There are two types of assemblers based on how many passes through the source are needed (how many times the assembler reads the source) to produce the object file. One-pass assemblers go through the source code once. Any symbol used before it is defined will require "errata" at the end of the object code (or, at least, no earlier than the point where the symbol is defined) telling the linker or the loader to "go back" and overwrite a placeholder which had been left where the as yet undefined symbol was used. Multi-pass assemblers create a table with all symbols and their values in the first passes, then use the table in later passes to generate code. In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more "no-operation" instructions in a later pass or the errata. In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target. The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory (to handle forward references), rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers with much larger memories (especially disc storage), had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process (or the program load if the assembler directly produces executable code) faster. 
Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2.

S1      B       FWD          branch to a forward reference
        ...
FWD     EQU     *
        ...
BKWD    EQU     *
        ...
S2      B       BKWD         branch to a backward reference

High-level assemblers More sophisticated high-level assemblers provide language abstractions such as: High-level procedure/function declarations and invocations Advanced control structures (IF/THEN/ELSE, SWITCH) High-level abstract data types, including structures/records, unions, classes, and sets Sophisticated macro processing (although available on ordinary assemblers since the late 1950s for, e.g., the IBM 700 series and IBM 7000 series, and since the 1960s for IBM System/360 (S/360), amongst other machines) Object-oriented programming features such as classes, objects, abstraction, polymorphism, and inheritance See Language design below for more details. Assembly language A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements (known variously as declarative operations, directives, pseudo-instructions, pseudo-operations and pseudo-ops), comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. Some instructions may be "implied," which means the data upon which the instruction operates is implicitly defined by the instruction itself—such an instruction does not take an operand. The resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed. For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001.

10110000 01100001

This binary computer code can be made more human-readable by expressing it in hexadecimal as follows.

B0 61

Here, B0 means 'Move a copy of the following value into AL', and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Assembly language for the 8086 family provides the mnemonic MOV (an abbreviation of move) for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember.

MOV AL, 61h       ; Load AL with 97 decimal (61 hex)

In some assembly languages (including this one) the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate (a.k.a. direct) addresses. Other assemblers may use separate opcode mnemonics such as L for "move memory to register", ST for "move register to memory", LR for "move register to register", MVI for "move immediate operand to memory", etc. If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data (e.g.
the 61h in this example), depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The hexadecimal form of this instruction is:

88 E0

The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded (with three bit-fields) to specify that both operands are registers, the source is AH, and the destination is AL. In a case like this where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant (hexadecimal, decimal, octal, or binary), so only the 88 instruction can be applicable. Assembly languages are always designed so that this sort of unambiguousness is universally enforced by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' (equal to decimal ten) would be written as 0Ah or 0AH, not AH, specifically so that it cannot appear to be the name of register AH. (The same rule also prevents ambiguity with the names of registers BH, CH, and DH, as well as with any user-defined symbol that ends with the letter H and otherwise contains only characters that are hexadecimal digits, such as the word "BEACH".) Returning to the original example, while the x86 opcode 10110000 (B0) copies an 8-bit value into the AL register, 10110001 (B1) moves it into CL and 10110010 (B2) does so into DL. Assembly language examples for these follow.

MOV AL, 1h        ; Load AL with immediate value 1
MOV CL, 2h        ; Load CL with immediate value 2
MOV DL, 3h        ; Load DL with immediate value 3

The syntax of MOV can also be more complex as the following examples show.

MOV EAX, [EBX]    ; Move the 4 bytes in memory at the address contained in EBX into EAX
MOV [ESI+EAX], CL ; Move the contents of CL into the byte at address ESI+EAX
MOV DS, DX        ; Move the contents of DX into segment register DS

In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which. Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions (essentially macros) which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences.
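As an illustrative sketch (not taken from any particular assembler), such a pseudoinstruction can be modelled as a macro in NASM-style x86 syntax; the name ldconst and the choice of expansion are hypothetical, and the value is assumed to be a plain numeric constant:

%macro ldconst 2                ; ldconst register, value
%if %2 == 0
        xor     %1, %1          ; a constant of zero gets the shorter xor encoding
%else
        mov     %1, %2          ; any other constant is loaded with a plain mov
%endif
%endmacro

        ldconst eax, 0          ; expands to:  xor eax, eax
        ldconst ecx, 1234       ; expands to:  mov ecx, 1234

Only the expanded xor and mov instructions reach the object code; the pseudoinstruction name itself does not.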
Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments. Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences. Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation. Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics (on each page of their documentation published in the 1970s and early 1980s, at least), some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. (It is questionable whether such copyrights can be valid, and later CPU companies such as AMD and Cyrix republished Intel's x86/IA-32 instruction mnemonics exactly with neither permission nor legal penalty.) It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic (somewhat like English and Pig Latin), there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products. Language design Basic elements There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation (pseudo-op). A typical assembly language consists of 3 types of instruction statements that are used to define program operations: Opcode mnemonics Data definitions Assembly directives Opcode mnemonics and extended mnemonics Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. 
Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode), and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be immediate (value coded in the instruction itself), registers specified in the instruction or implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP ("NO OPeration" – do nothing for one step) for BC with a mask of 0. Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPUs do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. In 8086 CPUs the instruction xchg ax,ax is used for nop, with nop being a pseudo-opcode to encode the instruction xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks. For the SPARC architecture, these are known as synthetic instructions. Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b. These are sometimes known as pseudo-opcodes. Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn. Data directives There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops. Assembly directives Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler "directing it to perform operations other than assembling instructions". Directives affect how the assembler operates and "may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data. The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values. Symbolic assemblers let programmers associate arbitrary names (labels or symbols) with memory locations and various constants.
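A short sketch of these ideas in NASM-style x86 syntax (the directive spellings are NASM's; other assemblers use DC/DS, .byte/.word, and similar forms):

BUFSIZE equ     64                  ; assembly-time named constant, occupies no storage
section .data
msg:    db      "ready", 13, 10     ; initialized bytes
msglen  equ     $ - msg             ; length computed by the assembler
count:  dd      0                   ; one initialized 32-bit word
section .bss
buffer: resb    BUFSIZE             ; reserve 64 uninitialized bytes

section .text
        mov     ecx, msglen         ; instructions refer to symbols, not numeric addresses
        mov     edi, buffer
        inc     dword [count]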
Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination). Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses. Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The "raw" (uncommented) assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made. Macros Many assemblers support predefined macros, and others support programmer-defined (and repeatedly re-definable) macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file (including, in some assemblers, expansion of any macros existing in the replacement text). Macros in this sense date to IBM autocoders of the 1950s. Macro assemblers typically have directives to, e.g., define macros, define variables, set variables to the result of an arithmetic, logical or string expression, iterate, conditionally generate code. Some of those directives may be restricted to use within a macro definition, e.g., MEXIT in HLASM, while others may be permitted within open code (outside macro definitions), e.g., AIF and COPY in HLASM. In assembly language, the term "macro" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive typically is used to create short single line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy "programs" by themselves, executed by interpretation by the assembler during assembly. Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher level languages. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters and other similar features. Macro assemblers often allow macros to take parameters. 
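For instance, a parameterized macro in NASM-style syntax might look like the following sketch; the name memfill is hypothetical, buffer is assumed to be a data label defined elsewhere, and %%fill is a macro-local label so that each expansion gets its own copy:

%macro memfill 3                ; memfill destination, byte_value, count
        mov     edi, %1
        mov     al, %2
        mov     ecx, %3
%%fill: mov     [edi], al       ; store one byte
        inc     edi
        dec     ecx
        jnz     %%fill          ; %%fill is unique to each expansion
%endmacro

        memfill buffer, 0, 64   ; expands to the seven instructions above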
Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a "sort" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4 (1967), which was written in the SNOBOL Implementation Language (SIL), an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time. Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's "real time transaction processing" add-ons, Customer Information Control System CICS, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems (CRS) and credit card systems today. It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly time operators instructing the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements. This is because, as was realized in the 1960s, the concept of "macro processing" is independent of the concept of "assembly", the former being in modern terms more word processing, text processing, than generating object code. The concept of macro processing appeared, and appears, in the C programming language, which supports "preprocessor instructions" to set variables, and make conditional tests on their values. Unlike certain previous macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or "go to", the latter allowing programs to loop. Despite the power of macro processing, it fell into disuse in many high level languages (major exceptions being C, C++ and PL/I) while remaining a perennial for assemblers. 
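A sketch of the macro-generated "unrolled" loops mentioned above, using NASM's %rep and %assign preprocessor directives (src and dst are assumed to be data labels defined elsewhere):

%assign i 0
%rep 8                              ; repeat the body 8 times at assembly time
        mov     eax, [src + i*4]    ; i is a preprocessor variable, so each copy
        mov     [dst + i*4], eax    ; gets a fixed, already-computed offset
%assign i i+1
%endrep

The assembled output is sixteen mov instructions with constant offsets and no loop counter or branch at run time.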
Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of bugs resulting was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro:

foo:    macro   a
        load    a*b

the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the macro expansion of load a-c*b occurs. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters. Support for structured programming Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills (March 1970), and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use). IBM's High Level Assembler Toolkit includes such a macro package. A curious design was A-natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans. There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages. Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package (this code is a complete program):

include \masm32\include\masm32rt.inc    ; use the Masm32 library

.code
demomain:
  REPEAT 20
    switch rv(nrandom, 9)               ; generate a number between 0 and 8
    mov ecx, 7
    case 0
        print "case 0"
    case ecx                            ; in contrast to most other programming languages,
        print "case 7"                  ; the Masm32 switch allows "variable cases"
    case 1 .. 3
        .if eax==1
            print "case 1"
        .elseif eax==2
            print "case 2"
        .else
            print "cases 1 to 3: other"
        .endif
    case 4, 6, 8
        print "cases 4, 6 or 8"
    default
        mov ebx, 19                     ; print 20 stars
        .Repeat
            print "*"
            dec ebx
        .Until Sign?                    ; loop until the sign flag is set
    endsw
    print chr$(13, 10)
  ENDM
  exit
end demomain

Use of assembly language Historical perspective Assembly languages were not available at the time when the stored-program computer was introduced.
Kathleen Booth "is credited with inventing assembly language" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London following consultation by Andrew Booth (later her husband) with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study. In late 1948, the Electronic Delay Storage Automatic Calculator (EDSAC) had an assembler (named "initial orders") integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first "assembler". Reports on the EDSAC introduced the term "assembly" for the process of combining fields into an instruction word. SOAP (Symbolic Optimal Assembly Program) was an assembly language for the IBM 650 computer written by Stan Poley in 1955. Assembly languages eliminate much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses. Assembly languages were once widely used for all sorts of programming. However, by the 1980s (1990s on microcomputers), their use had largely been supplanted by higher-level languages, in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems. Historically, numerous programs have been written entirely in assembly language. The Burroughs MCP (1961) was the first computer for which an operating system was not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language (ESPOL), an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software written by large corporations. COBOL, FORTRAN and some PL/I eventually displaced much of this work, although a number of large organizations retained assembly-language application infrastructures well into the 1990s. Most early microcomputers relied on hand-coded assembly language, including most operating systems and large applications. This was because these systems had severe resource constraints, imposed idiosyncratic memory and display architectures, and provided limited, buggy system services. Perhaps more important was the lack of first-class high-level language compilers suitable for microcomputer use. A psychological factor may have also played a role: the first generation of microcomputer programmers retained a hobbyist, "wires and pliers" attitude. In a more commercial context, the biggest reasons for using assembly language were minimal bloat (size), minimal overhead, greater speed, and reliability. Typical examples of large assembly language programs from this time are IBM PC DOS operating systems, the Turbo Pascal compiler and early applications such as the spreadsheet program Lotus 1-2-3. Assembly language was used to get the best performance out of the Sega Saturn, a console that was notoriously challenging to develop and program games for. The 1993 arcade game NBA Jam is another example. Assembly language has long been the primary development language for many popular home computers of the 1980s and 1990s (such as the MSX, Sinclair ZX Spectrum, Commodore 64, Commodore Amiga, and Atari ST). 
This was in large part because interpreted BASIC dialects on these systems offered insufficient execution speed, as well as insufficient facilities to take full advantage of the available hardware on these systems. Some systems even have an integrated development environment (IDE) with highly advanced debugging and macro facilities. Some compilers available for the Radio Shack TRS-80 and its successors had the capability to combine inline assembly source with high-level program statements. Upon compilation, a built-in assembler produced inline machine code. Current usage There have always been debates over the usefulness and performance of assembly language relative to high-level languages. Although assembly language has specific niche uses where it is important (see below), there are other tools for optimization. , the TIOBE index of programming language popularity ranks assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite the counter-examples that can be found. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers, as well as for assembly programmers. Moreover, increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging. This has made raw code execution speed a non-issue for many programmers. There are some situations in which developers might choose to use assembly language: Writing code for systems with that have limited high-level language options such as the Atari 2600, Commodore 64, and graphing calculators. Programs for these computers of 1970s and 1980s are often written in the context of demoscene or retrogaming subcultures. Code that must interact directly with the hardware, for example in device drivers and interrupt handlers. In an embedded processor or DSP, high-repetition interrupts require the shortest number of cycles per interrupt, such as an interrupt that occurs 1000 or 10000 times a second. Programs that need to use processor-specific instructions not implemented in a compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms, as well as querying the parity of a byte or the 4-bit carry of an addition. A stand-alone executable of compact size is required that must execute without recourse to the run-time components or libraries associated with a high-level language. Examples have included firmware for telephones, automobile fuel and ignition systems, air-conditioning control systems, security systems, and sensors. Programs with performance-sensitive inner loops, where assembly language provides optimization opportunities that are difficult to achieve in a high-level language. For example, linear algebra with BLAS or discrete cosine transformation (e.g. SIMD assembly version from x264). Programs that create vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions which map directly to SIMD mnemonics, but nevertheless result in a one-to-one assembly conversion specific for the given vector processor. Real-time programs such as simulations, flight navigation systems, and medical equipment. 
For example, in a fly-by-wire system, telemetry must be interpreted and acted upon within strict time constraints. Such systems must eliminate sources of unpredictable delays, which may be created by (some) interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking. However, some higher-level languages incorporate run-time components and operating system interfaces that can introduce such delays. Choosing assembly or lower level languages for such systems gives programmers greater visibility and control over processing details. Cryptographic algorithms that must always take strictly the same time to execute, preventing timing attacks. Modify and extend legacy code written for IBM mainframe computers. Situations where complete control over the environment is required, in extremely high-security situations where nothing can be taken for granted. Computer viruses, bootloaders, certain device drivers, or other items very close to the hardware or low-level operating system. Instruction set simulators for monitoring, tracing and debugging where additional overhead is kept to a minimum. Situations where no high-level language exists, on a new or specialized processor for which no cross compiler is available. Reverse-engineering and modifying program files such as: existing binaries that may or may not have originally been written in a high-level language, for example when trying to recreate programs for which source code is not available or has been lost, or cracking copy protection of proprietary software. Video games (also termed ROM hacking), which is possible via several methods. The most widely employed method is altering program code at the assembly language level. Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behavior is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. Therefore, studying a single assembly language is sufficient to learn: I) the basic concepts; II) to recognize situations where the use of assembly language might be appropriate; and III) to see how efficient executable code can be created from high-level languages. Typical applications Assembly language is typically used in a system's boot code, the low-level code that initializes and tests the system hardware prior to booting the operating system and is often stored in ROM. (BIOS on IBM-compatible PC systems and CP/M is an example.) Assembly language is often used for low-level code, for instance for operating system kernels, which cannot rely on the availability of pre-existing system calls and must indeed implement them for the particular processor architecture on which the system will be running. Some compilers translate high-level languages into assembly first before fully compiling, allowing the assembly code to be viewed for debugging and optimization purposes. 
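One of the niche uses listed earlier, processor-specific instructions such as bitwise rotation, gives a concrete sense of what such hand-written fragments look like. The following sketch is x86 syntax; state and key are hypothetical data labels, and the surrounding routine is omitted:

        mov     eax, [state]        ; load a 32-bit word
        rol     eax, 7              ; rotate left by 7 bits, a single x86 instruction
        xor     eax, [key]          ; combine with a key word
        mov     [state], eax        ; store the result back

Portable C has no rotation operator, so a compiler must recognize a shift-and-or idiom (or provide an intrinsic) before it will emit the rol instruction.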
Some compilers for relatively low-level languages, such as Pascal or C, allow the programmer to embed assembly language directly in the source code (so called inline assembly). Programs using such facilities can then construct abstractions using different assembly language on each hardware platform. The system's portable code can then use these processor-specific components through a uniform interface. Assembly language is useful in reverse engineering. Many programs are distributed only in machine code form which is straightforward to translate into assembly language by a disassembler, but more difficult to translate into a higher-level language through a decompiler. Tools such as the Interactive Disassembler make extensive use of disassembly for such a purpose. This technique is used by hackers to crack commercial software, and competitors to produce software with similar results from competing companies. Assembly language is used to enhance speed of execution, especially in early personal computers with limited processing power and RAM. Assemblers can be used to generate blocks of data, with no high-level language overhead, from formatted and commented source code, to be used by other code. See also Compiler Comparison of assemblers Disassembler Hexadecimal Instruction set architecture Little man computer – an educational computer model with a base-10 assembly language Nibble Typed assembly language Notes References Further reading (2+xiv+270+6 pages) Kann, Charles W. (2021). "Introduction to Assembly Language Programming: From Soup to Nuts: ARM Edition" ("An online book full of helpful ASM info, tutorials and code examples" by the ASM Community, archived at the internet archive.) External links Unix Assembly Language Programming Linux Assembly PPR: Learning Assembly Language NASM – The Netwide Assembler (a popular assembly language) Assembly Language Programming Examples Authoring Windows Applications In Assembly Language Assembly Optimization Tips by Mark Larson The table for assembly language to machine code Assembly language Computer-related introductions in 1949 Embedded systems Low-level programming languages Programming language implementation Programming languages created in 1949
Assembly language
In the ancient Greek myths, ambrosia (, ) is the food or drink of the Greek gods, often depicted as conferring longevity or immortality upon whoever consumed it. It was brought to the gods in Olympus by doves and served by either Hebe or Ganymede at the heavenly feast. Ambrosia is sometimes depicted in ancient art as distributed by a nymph labeled with that name and a nurse of Dionysus. Definition Ambrosia is very closely related to the gods' other form of sustenance, nectar. The two terms may not have originally been distinguished; though in Homer's poems nectar is usually the drink and ambrosia the food of the gods; it was with ambrosia Hera "cleansed all defilement from her lovely flesh", and with ambrosia Athena prepared Penelope in her sleep, so that when she appeared for the final time before her suitors, the effects of years had been stripped away, and they were inflamed with passion at the sight of her. On the other hand, in Alcman, nectar is the food, and in Sappho and Anaxandrides, ambrosia is the drink. A character in Aristophanes' Knights says, "I dreamed the goddess poured ambrosia over your head—out of a ladle." Both descriptions could be correct, as ambrosia could be a liquid considered a food (such as honey). The consumption of ambrosia was typically reserved for divine beings. Upon his assumption into immortality on Olympus, Heracles is given ambrosia by Athena, while the hero Tydeus is denied the same thing when the goddess discovers him eating human brains. In one version of the myth of Tantalus, part of Tantalus' crime is that after tasting ambrosia himself, he attempts to steal some to give to other mortals. Those who consume ambrosia typically have ichor, not blood, in their veins. Both nectar and ambrosia are fragrant, and may be used as perfume: in the Odyssey Menelaus and his men are disguised as seals in untanned seal skins, "...and the deadly smell of the seal skins vexed us sore; but the goddess saved us; she brought ambrosia and put it under our nostrils." Homer speaks of ambrosial raiment, ambrosial locks of hair, even the gods' ambrosial sandals. Among later writers, ambrosia has been so often used with generic meanings of "delightful liquid" that such late writers as Athenaeus, Paulus and Dioscurides employ it as a technical terms in contexts of cookery, medicine, and botany. Pliny used the term in connection with different plants, as did early herbalists. Additionally, some modern ethnomycologists, such as Danny Staples, identify ambrosia with the hallucinogenic mushroom Amanita muscaria: "...it was the food of the gods, their ambrosia, and nectar was the pressed sap of its juices", Staples asserts. W. H. Roscher thinks that both nectar and ambrosia were kinds of honey, in which case their power of conferring immortality would be due to the supposed healing and cleansing powers of honey, and because fermented honey (mead) preceded wine as an entheogen in the Aegean world; on some Minoan seals, goddesses were represented with bee faces (compare Merope and Melissa). Etymology The concept of an immortality drink is attested in at least two ancient Indo-European languages: Greek and Sanskrit. The Greek ἀμβροσία (ambrosia) is semantically linked to the Sanskrit (amṛta) as both words denote a drink or food that gods use to achieve immortality. 
The two words appear to be derived from the same Indo-European form *ṇ-mṛ-tós, "un-dying" (n-: negative prefix from which the prefix a- in both Greek and Sanskrit are derived; mṛ: zero grade of *mer-, "to die"; and -to-: adjectival suffix). A semantically similar etymology exists for nectar, the beverage of the gods (Greek: νέκταρ néktar) presumed to be a compound of the PIE roots *nek-, "death", and -*tar, "overcoming". Other examples in mythology In one version of the story of the birth of Achilles, Thetis anoints the infant with ambrosia and passes the child through the fire to make him immortal but Peleus, appalled, stops her, leaving only his heel unimmortalised (Argonautica 4.869–879). In the Iliad xvi, Apollo washes the black blood from the corpse of Sarpedon and anoints it with ambrosia, readying it for its dreamlike return to Sarpedon's native Lycia. Similarly, Thetis anoints the corpse of Patroclus in order to preserve it. Ambrosia and nectar are depicted as unguents (xiv. 170; xix. 38). In the Odyssey, Calypso is described as having "spread a table with ambrosia and set it by Hermes, and mixed the rosy-red nectar." It is ambiguous whether he means the ambrosia itself is rosy-red, or if he is describing a rosy-red nectar Hermes drinks along with the ambrosia. Later, Circe mentions to Odysseus that a flock of doves are the bringers of ambrosia to Olympus. In the Odyssey (ix.345–359), Polyphemus likens the wine given to him by Odysseus to ambrosia and nectar. One of the impieties of Tantalus, according to Pindar, was that he offered to his guests the ambrosia of the Deathless Ones, a theft akin to that of Prometheus, Karl Kerenyi noted (in Heroes of the Greeks). In the Homeric hymn to Aphrodite, the goddess uses "ambrosial bridal oil that she had ready perfumed." In the story of Eros and Psyche as told by Apuleius, Psyche is given ambrosia upon her completion of the quests set by Aphrodite and her acceptance on Olympus. After she partakes, she and Eros are wed as gods. In the Aeneid, Aeneas encounters his mother in an alternate, or illusory form. When she became her godly form "Her hair's ambrosia breathed a holy fragrance." Lycurgus of Thrace and Ambrosia Lycurgus, king of Thrace, forbade the cult of Dionysus, whom he drove from Thrace, and attacked the gods' entourage when they celebrated the god. Among them was Ambrosia, who turned herself into a grapevine to hide from his wrath. Dionysus, enraged by the king's actions, drove him mad. In his fit of insanity he killed his son, whom he mistook for a stock of ivy, and then himself. See also Elixir of life, a potion sought by alchemy to produce immortality Ichor, blood of the Greek gods, related to ambrosia Iðunn's apples in Norse mythology Manna, food given by God to the Israelites Peaches of Immortality in Chinese mythology Pill of Immortality Silphium Soma (drink), a ritual drink of importance among the early Vedic peoples and Indo-Iranians. References Sources Clay, Jenny Strauss, "Immortal and ageless forever", The Classical Journal 77.2 (December 1981:pp. 112–117). Ruck, Carl A.P. and Danny Staples, The World of Classical Myth 1994, p. 26 et seq. Wright, F. A., "The Food of the Gods", The Classical Review 31.1, (February 1917:4–6). External links Ancient Greek cuisine Fictional food and drink Mount Olympus Mythological medicines and drugs Mythological food and drink
Ambrosia
Amber is fossilized tree resin that has been appreciated for its color and natural beauty since Neolithic times. Much valued from antiquity to the present as a gemstone, amber is made into a variety of decorative objects. Amber is used in jewelry. It has also been used as a healing agent in folk medicine. There are five classes of amber, defined on the basis of their chemical constituents. Because it originates as a soft, sticky tree resin, amber sometimes contains animal and plant material as inclusions. Amber occurring in coal seams is also called resinite, and the term ambrite is applied to that found specifically within New Zealand coal seams. Etymology The English word amber derives from Arabic (ultimately from Middle Persian ambar) via Middle Latin ambar and Middle French ambre. The word was adopted in Middle English in the 14th century as referring to what is now known as ambergris (ambre gris or "grey amber"), a solid waxy substance derived from the sperm whale. In the Romance languages, the sense of the word had come to be extended to Baltic amber (fossil resin) from as early as the late 13th century. At first called white or yellow amber (ambre jaune), this meaning was adopted in English by the early 15th century. As the use of ambergris waned, this became the main sense of the word. The two substances ("yellow amber" and "grey amber") conceivably became associated or confused because they both were found washed up on beaches. Ambergris is less dense than water and floats, whereas amber is too dense to float, though less dense than stone. The classical names for amber, Latin electrum and Ancient Greek (ēlektron), are connected to a term ἠλέκτωρ (ēlektōr) meaning "beaming Sun". According to myth, when Phaëton son of Helios (the Sun) was killed, his mourning sisters became poplar trees, and their tears became elektron, amber. The word elektron gave rise to the words electric, electricity, and their relatives because of amber's ability to bear a charge of static electricity. History Theophrastus discussed amber in the 4th century BCE, as did Pytheas (c. 330 BCE), whose work "On the Ocean" is lost, but was referenced by Pliny the Elder (23 to 79 CE), according to whose The Natural History (in what is also the earliest known mention of the name Germania): Earlier Pliny says that Pytheas refers to a large island—three days' sail from the Scythian coast and called Balcia by Xenophon of Lampsacus (author of a fanciful travel book in Greek)—as Basilia—a name generally equated with Abalus. Given the presence of amber, the island could have been Heligoland, Zealand, the shores of Bay of Gdańsk, the Sambia Peninsula or the Curonian Lagoon, which were historically the richest sources of amber in northern Europe. It is assumed that there were well-established trade routes for amber connecting the Baltic with the Mediterranean (known as the "Amber Road"). Pliny states explicitly that the Germans exported amber to Pannonia, from where the Veneti distributed it onwards. The ancient Italic peoples of southern Italy used to work amber; the National Archaeological Museum of Siritide (Museo Archeologico Nazionale della Siritide) at Policoro in the province of Matera (Basilicata) displays important surviving examples. Amber used in antiquity, as at Mycenae and in the prehistory of the Mediterranean, comes from deposits in Sicily. 
Pliny also cites the opinion of Nicias ( 470–413 BCE), according to whom amber Besides the fanciful explanations according to which amber is "produced by the Sun", Pliny cites opinions that are well aware of its origin in tree resin, citing the native Latin name of succinum (sūcinum, from sucus "juice"). In Book 37, section XI of Natural History, Pliny wrote: He also states that amber is also found in Egypt and in India, and he even refers to the electrostatic properties of amber, by saying that "in Syria the women make the whorls of their spindles of this substance, and give it the name of harpax [from ἁρπάζω, "to drag"] from the circumstance that it attracts leaves towards it, chaff, and the light fringe of tissues". Pliny says that the German name of amber was glæsum, "for which reason the Romans, when Germanicus Caesar commanded the fleet in those parts, gave to one of these islands the name of Glæsaria, which by the barbarians was known as Austeravia". This is confirmed by the recorded Old High German word glas and by the Old English word glær for "amber" (compare glass). In Middle Low German, amber was known as berne-, barn-, börnstēn (with etymological roots related to "burn" and to "stone"). The Low German term became dominant also in High German by the 18th century, thus modern German Bernstein besides Dutch barnsteen. In the Baltic languages, the Lithuanian term for amber is gintaras and the Latvian dzintars. These words, and the Slavic jantar and Hungarian gyanta ('resin'), are thought to originate from Phoenician jainitar ("sea-resin"). Amber has a long history of use in China, with the first written record from 200 BCE. Early in the 19th century, the first reports of amber found in North America came from discoveries in New Jersey along Crosswicks Creek near Trenton, at Camden, and near Woodbury. Composition and formation Amber is heterogeneous in composition, but consists of several resinous bodies more or less soluble in alcohol, ether and chloroform, associated with an insoluble bituminous substance. Amber is a macromolecule by free radical polymerization of several precursors in the labdane family, e.g. communic acid, cummunol, and biformene. These labdanes are diterpenes (C20H32) and trienes, equipping the organic skeleton with three alkene groups for polymerization. As amber matures over the years, more polymerization takes place as well as isomerization reactions, crosslinking and cyclization. Heated above , amber decomposes, yielding an oil of amber, and leaves a black residue which is known as "amber colophony", or "amber pitch"; when dissolved in oil of turpentine or in linseed oil this forms "amber varnish" or "amber lac". Formation Molecular polymerization, resulting from high pressures and temperatures produced by overlying sediment, transforms the resin first into copal. Sustained heat and pressure drives off terpenes and results in the formation of amber. For this to happen, the resin must be resistant to decay. Many trees produce resin, but in the majority of cases this deposit is broken down by physical and biological processes. Exposure to sunlight, rain, microorganisms (such as bacteria and fungi), and extreme temperatures tends to disintegrate the resin. For the resin to survive long enough to become amber, it must be resistant to such forces or be produced under conditions that exclude them. Botanical origin Fossil resins from Europe fall into two categories, the famous Baltic ambers and another that resembles the Agathis group. 
Fossil resins from the Americas and Africa are closely related to the modern genus Hymenaea, while Baltic ambers are thought to be fossil resins from plants of the family Sciadopityaceae that once lived in north Europe. Physical attributes Most amber has a hardness between 2.0 and 2.5 on the Mohs scale, a refractive index of 1.5–1.6, a specific gravity between 1.06 and 1.10, and a melting point of 250–300 °C. Inclusions The abnormal development of resin in living trees (succinosis) can result in the formation of amber. Impurities are quite often present, especially when the resin dropped onto the ground, so the material may be useless except for varnish-making. Such impure amber is called firniss. Such inclusion of other substances can cause the amber to have an unexpected color. Pyrites may give a bluish color. Bony amber owes its cloudy opacity to numerous tiny bubbles inside the resin. However, so-called black amber is really only a kind of jet. In darkly clouded and even opaque amber, inclusions can be imaged using high-energy, high-contrast, high-resolution X-rays. Extraction and processing Distribution and mining Amber is globally distributed, mainly in rocks of Cretaceous age or younger. Historically, the coast west of Königsberg in Prussia was the world's leading source of amber. The first mentions of amber deposits here date back to the 12th century. About 90% of the world's extractable amber is still located in that area, which became the Kaliningrad Oblast of Russia in 1946. Pieces of amber torn from the seafloor are cast up by the waves and collected by hand, dredging, or diving. Elsewhere, amber is mined, both in open works and underground galleries. Then nodules of blue earth have to be removed and an opaque crust must be cleaned off, which can be done in revolving barrels containing sand and water. Erosion removes this crust from sea-worn amber. Dominican amber is mined through bell pitting, which is dangerous due to the risk of tunnel collapse. Another important source of amber is Kachin State in northern Myanmar, which has been a major source of amber in China for at least 1800 years. Contemporary mining of this deposit has attracted attention for unsafe working conditions and its role in funding internal conflict in the country. Amber from the Rivne Oblast of Ukraine, referred to as Rovno amber, is mined illegally by organised crime groups, who deforest the surrounding areas and pump water into the sediments to extract the amber, causing severe environmental deterioration. Treatment The Vienna amber factories, which use pale amber to manufacture pipes and other smoking tools, turn it on a lathe and polish it with whitening and water or with rotten stone and oil. The final luster is given by friction with flannel. When gradually heated in an oil-bath, amber becomes soft and flexible. Two pieces of amber may be united by smearing the surfaces with linseed oil, heating them, and then pressing them together while hot. Cloudy amber may be clarified in an oil-bath, as the oil fills the numerous pores to which the turbidity is due. Small fragments, formerly thrown away or used only for varnish, are now used on a large scale in the formation of "ambroid" or "pressed amber". The pieces are carefully heated with exclusion of air and then compressed into a uniform mass by intense hydraulic pressure, the softened amber being forced through holes in a metal plate. The product is extensively used for the production of cheap jewelry and articles for smoking. 
This pressed amber yields brilliant interference colors in polarized light. Amber has often been imitated by other resins like copal and kauri gum, as well as by celluloid and even glass. Baltic amber is sometimes colored artificially, but also called "true amber". Appearance Amber occurs in a range of different colors. As well as the usual yellow-orange-brown that is associated with the color "amber", amber itself can range from a whitish color through a pale lemon yellow, to brown and almost black. Other uncommon colors include red amber (sometimes known as "cherry amber"), green amber, and even blue amber, which is rare and highly sought after. Yellow amber is a hard fossil resin from evergreen trees, and despite the name it can be translucent, yellow, orange, or brown colored. Known to the Iranians by the Pahlavi compound word kah-ruba (from kah "straw" plus rubay "attract, snatch", referring to its electrical properties), which entered Arabic as kahraba' or kahraba (which later became the Arabic word for electricity, كهرباء kahrabā'), it too was called amber in Europe (Old French and Middle English ambre). Found along the southern shore of the Baltic Sea, yellow amber reached the Middle East and western Europe via trade. Its coastal acquisition may have been one reason yellow amber came to be designated by the same term as ambergris. Moreover, like ambergris, the resin could be burned as an incense. The resin's most popular use was, however, for ornamentation—easily cut and polished, it could be transformed into beautiful jewelry. Much of the most highly prized amber is transparent, in contrast to the very common cloudy amber and opaque amber. Opaque amber contains numerous minute bubbles. This kind of amber is known as "bony amber". Although all Dominican amber is fluorescent, the rarest Dominican amber is blue amber. It turns blue in natural sunlight and any other partially or wholly ultraviolet light source. In long-wave UV light it has a very strong reflection, almost white. Only about is found per year, which makes it valuable and expensive. Sometimes amber retains the form of drops and stalactites, just as it exuded from the ducts and receptacles of the injured trees. It is thought that, in addition to exuding onto the surface of the tree, amber resin also originally flowed into hollow cavities or cracks within trees, thereby leading to the development of large lumps of amber of irregular form. Classification Amber can be classified into several forms. Most fundamentally, there are two types of plant resin with the potential for fossilization. Terpenoids, produced by conifers and angiosperms, consist of ring structures formed of isoprene (C5H8) units. Phenolic resins are today only produced by angiosperms, and tend to serve functional uses. The extinct medullosans produced a third type of resin, which is often found as amber within their veins. The composition of resins is highly variable; each species produces a unique blend of chemicals which can be identified by the use of pyrolysis–gas chromatography–mass spectrometry. The overall chemical and structural composition is used to divide ambers into five classes. There is also a separate classification of amber gemstones, according to the way of production. Class I This class is by far the most abundant. It comprises labdatriene carboxylic acids such as communic or ozic acids. It is further split into three sub-classes. Classes Ia and Ib utilize regular labdanoid diterpenes (e.g. 
communic acid, communol, biformenes), while Ic uses enantio labdanoids (ozic acid, ozol, enantio biformenes). Ia Class Ia includes Succinite (= 'normal' Baltic amber) and Glessite. They have a communic acid base, and they also include much succinic acid. Baltic amber yields on dry distillation succinic acid, the proportion varying from about 3% to 8%, and being greatest in the pale opaque or bony varieties. The aromatic and irritating fumes emitted by burning amber are mainly due to this acid. Baltic amber is distinguished by its yield of succinic acid, hence the name succinite. Succinite has a hardness between 2 and 3, which is rather greater than that of many other fossil resins. Its specific gravity varies from 1.05 to 1.10. It can be distinguished from other ambers via IR spectroscopy due to a specific carbonyl absorption peak. IR spectroscopy can also detect the relative age of an amber sample. Succinic acid may not be an original component of amber, but rather a degradation product of abietic acid. Ib Like class Ia ambers, these are based on communic acid; however, they lack succinic acid. Ic This class is mainly based on enantio-labdatrienonic acids, such as ozic and zanzibaric acids. Its most familiar representative is Dominican amber, which differs from Baltic amber in being mostly transparent and often containing a higher number of fossil inclusions. This has enabled the detailed reconstruction of the ecosystem of a long-vanished tropical forest. Resin from the extinct species Hymenaea protera is the source of Dominican amber and probably of most amber found in the tropics. It is not "succinite" but "retinite". Class II These ambers are formed from resins with a sesquiterpenoid base, such as cadinene. Class III These ambers are polystyrenes. Class IV Class IV is something of a catch-all: its ambers are not polymerized, but mainly consist of cedrene-based sesquiterpenoids. Class V Class V resins are considered to be produced by a pine or pine relative. They comprise a mixture of diterpenoid resins and n-alkyl compounds. Their main variety is Highgate copalite. Geological record The oldest amber recovered dates to the Upper Carboniferous period. Its chemical composition makes it difficult to match the amber to its producers – it is most similar to the resins produced by flowering plants; however, there are no flowering plant fossils known from before the Cretaceous, and they were not common until the Late Cretaceous. Amber becomes abundant long after the Carboniferous, in the Early Cretaceous, when it is found in association with insects. The oldest amber with arthropod inclusions comes from the Late Triassic (late Carnian, about 230 Ma) of Italy, where four microscopic (0.1–0.2 mm) mites, Triasacarus, Ampezzoa, Minyacarus and Cheirolepidoptus, and a poorly preserved nematoceran fly were found in millimetre-sized droplets of amber. The oldest amber with significant numbers of arthropod inclusions comes from Lebanon. This amber, referred to as Lebanese amber, is roughly 125–135 million years old and is considered of high scientific value, providing evidence of some of the oldest sampled ecosystems. In Lebanon, more than 450 outcrops of Lower Cretaceous amber were discovered by Dany Azar, a Lebanese paleontologist and entomologist. Among these outcrops, 20 have yielded biological inclusions comprising the oldest representatives of several recent families of terrestrial arthropods. Even older, Jurassic amber has recently been found in Lebanon as well.
Many remarkable insects and spiders were recently discovered in the amber of Jordan, including the oldest zorapterans, clerid beetles, umenocoleid roaches, and achiliid planthoppers. The most important amber from the Cretaceous is the Burmese amber from the Hukawng Valley in northern Myanmar, which is the only commercially exploited Cretaceous amber. Uranium–lead dating of zircon crystals associated with the deposit has given an estimated depositional age of approximately 99 million years ago. Over 1300 species have been described from the amber, with over 300 in 2019 alone. Baltic amber or succinite (historically documented as Prussian amber) is found as irregular nodules in marine glauconitic sand, known as blue earth, occurring in Upper Eocene strata of Sambia in Prussia (in historical sources also referred to as Glaesaria). After 1945, this territory around Königsberg was turned into Kaliningrad Oblast, Russia, where amber is now systematically mined. It appears, however, to have been partly derived from older Eocene deposits and it occurs also as a derivative phase in later formations, such as glacial drift. Relics of an abundant flora occur as inclusions trapped within the amber while the resin was yet fresh, suggesting relations with the flora of Eastern Asia and the southern part of North America. Heinrich Göppert named the common amber-yielding pine of the Baltic forests Pinites succinifer, but as the wood does not seem to differ from that of the existing genus it has also been called Pinus succinifera. It is improbable, however, that the production of amber was limited to a single species; and indeed a large number of conifers belonging to different genera are represented in the amber-flora. Paleontological significance Amber is a unique preservational mode, preserving otherwise unfossilizable parts of organisms; as such it is helpful in the reconstruction of ecosystems as well as organisms; the chemical composition of the resin, however, is of limited utility in reconstructing the phylogenetic affinity of the resin producer. Amber sometimes contains animals or plant matter that became caught in the resin as it was secreted. Insects, spiders and even their webs, annelids, frogs, crustaceans, bacteria and amoebae, marine microfossils, wood, flowers and fruit, hair, feathers and other small organisms have been recovered in Cretaceous ambers. The preservation of prehistoric organisms in amber forms a key plot point in Michael Crichton's 1990 novel Jurassic Park and the 1993 movie adaptation by Steven Spielberg. In the story, scientists are able to extract the preserved blood of dinosaurs from prehistoric mosquitoes trapped in amber, from which they genetically clone living dinosaurs. Scientifically this is as yet impossible, since no amber with fossilized mosquitoes has ever yielded preserved blood. Amber is, however, conducive to preserving DNA, since it dehydrates and thus stabilizes organisms trapped inside. One projection in 1999 estimated that DNA trapped in amber could last up to 100 million years, far beyond most estimates of around 1 million years in the most ideal conditions, although a later 2013 study was unable to extract DNA from insects trapped in much more recent Holocene copal.
In 1938, 12-year-old David Attenborough (brother of Richard Attenborough, who played John Hammond in Jurassic Park) was given by his adoptive sister a piece of amber containing prehistoric creatures; some sixty years later, it became the focus of his 2004 BBC documentary The Amber Time Machine. Use Amber has been used since prehistory (Solutrean) in the manufacture of jewelry and ornaments, and also in folk medicine. Jewelry Amber has been used as jewelry since the Stone Age, from 13,000 years ago. Amber ornaments have been found in Mycenaean tombs and elsewhere across Europe (Curt W. Beck, Anthony Harding and Helen Hughes-Brock, "Amber in the Mycenaean World", The Annual of the British School at Athens, vol. 69, November 1974, pp. 145–172, DOI:10.1017/S0068245400005505). To this day it is used in the manufacture of smoking and glassblowing mouthpieces. Amber's place in culture and tradition lends it a tourism value; Palanga Amber Museum is dedicated to the fossilized resin. Historical medicinal uses Amber has long been used in folk medicine for its purported healing properties. Amber and extracts were used from the time of Hippocrates in ancient Greece for a wide variety of treatments through the Middle Ages and up until the early twentieth century. Traditional Chinese medicine uses amber to "tranquilize the mind". With children Amber necklaces are a traditional European remedy for colic or teething pain due to the purported analgesic properties of succinic acid, although there is no evidence that this is an effective remedy or delivery method. The American Academy of Pediatrics and the FDA have warned strongly against their use, as they present both a choking and a strangulation hazard. Scent of amber and amber perfumery In ancient China, it was customary to burn amber during large festivities. If amber is heated under the right conditions, oil of amber is produced, and in past times this was combined carefully with nitric acid to create "artificial musk" – a resin with a peculiar musky odor. Although burning amber does give off a characteristic "pinewood" fragrance, modern products such as perfume do not normally use actual amber, because fossilized amber produces very little scent. In perfumery, scents referred to as "amber" are often created and patented to emulate the opulent golden warmth of the fossil (for example, a patent published in 1975 by John B. Hall and James Milton Sanders for perfume compositions containing an isomer of an octahydrotetramethyl acetonaphthone). The modern name for amber is thought to come from the Arabic word, ambar, meaning ambergris. Ambergris is the waxy aromatic substance created in the intestines of sperm whales and was used in making perfumes both in ancient and in modern times. The scent of amber was originally derived from emulating the scent of ambergris and/or the plant resin labdanum, but due to the endangered species status of the sperm whale the scent of amber is now largely derived from labdanum. The term "amber" is loosely used to describe a scent that is warm, musky, rich and honey-like, and also somewhat earthy. It can be synthetically created or derived from natural resins. When derived from natural resins it is most often created out of labdanum. Benzoin is usually part of the recipe. Vanilla and cloves are sometimes used to enhance the aroma.
"Amber" perfumes may be created using combinations of labdanum, benzoin resin, copal (itself a type of tree resin used in incense manufacture), vanilla, Dammara resin and/or synthetic materials. Imitation Imitations made from natural resins Young resins used as imitations include kauri resin from Agathis australis trees in New Zealand and the copals (subfossil resins): the African and American (Colombian) copals from trees of the family Leguminosae (genus Hymenaea), which correspond botanically to amber of the Dominican or Mexican type (Class I fossil resins), and copals from Manila (Indonesia) and from New Zealand, from trees of the genus Agathis (family Araucariaceae). Other fossil resins are also used, such as burmite from Burma (Myanmar), rumenite from Romania and simetite from Sicily, as are other natural materials such as cellulose or chitin. Imitations made of plastics Plastics used as imitations include: stained glass (an inorganic material) and other ceramic materials; celluloid; cellulose nitrate (first obtained in 1833), a product of treating cellulose with a nitration mixture; acetylcellulose (no longer in use); galalith or "artificial horn" (a condensation product of casein and formaldehyde), also sold under the trade names Alladinite, Erinoid and Lactoid; casein, a conjugated protein formed from its precursor caseinogen; resolane (phenolic resins or phenoplasts, no longer in use); Bakelite resin (resol, a phenolic resin), products from Africa being known under the misleading name "African amber"; carbamide resins, i.e. melamine, formaldehyde and urea-formaldehyde resins; epoxy novolac (a phenolic resin), unofficially called "antique amber" and no longer in use; polyesters with styrene (a Polish amber imitation), for example the unsaturated polyester resins (polymals) produced by the Chemical Industrial Works "Organika" in Sarzyna, Poland, and the estomal produced by the Laminopol firm; Polybern or "sticked amber", in which curled chips, or in the case of amber small scraps, are bonded with artificial resin; "African amber" (a polyester, of which synacryl is probably another name) produced by the Reichhold firm, and the Styresol trade mark, an alkyd resin used in Russia (Reichhold, Inc. patent, 1948); polyethylene; epoxy resins; and polystyrene and polystyrene-like (vinyl) polymers, including resins of the acrylic type, especially polymethyl methacrylate (PMMA, trade marks Plexiglass and metaplex). See also Ammolite List of types of amber Fossilized tree Pearl Poly(methyl methacrylate) Precious coral
Amber
The terms (AD) and before Christ (BC) are used to label or number years in the Julian and Gregorian calendars. The term is Medieval Latin and means "in the year of the Lord", but is often presented using "our Lord" instead of "the Lord", taken from the full original phrase "anno Domini nostri Jesu Christi", which translates to "in the year of our Lord Jesus Christ". This calendar era is based on the traditionally reckoned year of the conception or birth of Jesus, with AD counting years from the start of this epoch and BC denoting years before the start of the era. There is no year zero in this scheme; thus the year AD 1 immediately follows the year 1 BC. This dating system was devised in 525 by Dionysius Exiguus of Scythia Minor, but was not widely used until the 9th century. Traditionally, English follows Latin usage by placing the "AD" abbreviation before the year number, though it is also found after the year. In contrast, BC is always placed after the year number (for example: AD , but 68 BC), which preserves syntactic order. The abbreviation AD is also widely used after the number of a century or millennium, as in "fourth century AD" or "second millennium AD" (although conservative usage formerly rejected such expressions). Because BC is the English abbreviation for Before Christ, it is sometimes incorrectly concluded that AD means After Death, i.e., after the death of Jesus. However, this would mean that the approximate 33 years commonly associated with the life of Jesus would be included in neither the BC nor the AD time scales. Terminology that is viewed by some as being more neutral and inclusive of non-Christian people is to call this the Current or Common Era (abbreviated as CE), with the preceding years referred to as Before the Common or Current Era (BCE). Astronomical year numbering and ISO 8601 avoid words or abbreviations related to Christianity, but use the same numbers for AD years (but not for BC years in the case of astronomical years; e.g., 1 BC is year 0, 45 BC is year −44). History The Anno Domini dating system was devised in 525 by Dionysius Exiguus to enumerate the years in his Easter table. His system was to replace the Diocletian era that had been used in an old Easter table, as he did not wish to continue the memory of a tyrant who persecuted Christians. The last year of the old table, Diocletian Anno Martyrium 247, was immediately followed by the first year of his table, Anno Domini 532. When Dionysius devised his table, Julian calendar years were identified by naming the consuls who held office that year— Dionysius himself stated that the "present year" was "the consulship of Probus Junior", which was 525 years "since the incarnation of our Lord Jesus Christ". Thus, Dionysius implied that Jesus' incarnation occurred 525 years earlier, without stating the specific year during which His birth or conception occurred. "However, nowhere in his exposition of his table does Dionysius relate his epoch to any other dating system, whether consulate, Olympiad, year of the world, or regnal year of Augustus; much less does he explain or justify the underlying date." Bonnie J. Blackburn and Leofranc Holford-Strevens briefly present arguments for 2 BC, 1 BC, or AD 1 as the year Dionysius intended for the Nativity or incarnation. Among the sources of confusion are: In modern times, incarnation is synonymous with the conception, but some ancient writers, such as Bede, considered incarnation to be synonymous with the Nativity. 
The civil or consular year began on 1 January, but the Diocletian year began on 29 August (30 August in the year before a Julian leap year). There were inaccuracies in the lists of consuls. There were confused summations of emperors' regnal years. It is not known how Dionysius established the year of Jesus's birth. Two major theories are that Dionysius based his calculation on the Gospel of Luke, which states that Jesus was "about thirty years old" shortly after "the fifteenth year of the reign of Tiberius Caesar", and hence subtracted thirty years from that date, or that Dionysius counted back 532 years from the first year of his new table. It has also been speculated by Georges Declercq that Dionysius' desire to replace Diocletian years with a calendar based on the incarnation of Christ was intended to prevent people from believing the imminent end of the world. At the time, it was believed by some that the resurrection of the dead and end of the world would occur 500 years after the birth of Jesus. The old Anno Mundi calendar theoretically commenced with the creation of the world based on information in the Old Testament. It was believed that, based on the Anno Mundi calendar, Jesus was born in the year 5500 (5500 years after the world was created) with the year 6000 of the Anno Mundi calendar marking the end of the world. Anno Mundi 6000 (approximately AD 500) was thus equated with the end of the world but this date had already passed in the time of Dionysius. The "Historia Brittonum" attributed to Nennius written in the 9th century makes extensive use of the Anno Passionis (AP) dating system which was in common use as well as the newer AD dating system. The AP dating system took its start from 'The Year of The Passion'. It is generally accepted by experts there is a 27-year difference between AP and AD reference. Popularization The Anglo-Saxon historian Saint (Venerable) Bede, who was familiar with the work of Dionysius Exiguus, used Anno Domini dating in his Ecclesiastical History of the English People, which he completed in AD 731. In the History he also used the Latin phrase ante [...] incarnationis dominicae tempus anno sexagesimo ("in the sixtieth year before the time of the Lord's incarnation"), which is equivalent to the English "before Christ", to identify years before the first year of this era. Both Dionysius and Bede regarded Anno Domini as beginning at the incarnation of Jesus Christ, but "the distinction between Incarnation and Nativity was not drawn until the late 9th century, when in some places the Incarnation epoch was identified with Christ's conception, i. e., the Annunciation on March 25" ("Annunciation style" dating). On the continent of Europe, Anno Domini was introduced as the era of choice of the Carolingian Renaissance by the English cleric and scholar Alcuin in the late eighth century. Its endorsement by Emperor Charlemagne and his successors popularizing the use of the epoch and spreading it throughout the Carolingian Empire ultimately lies at the core of the system's prevalence. According to the Catholic Encyclopedia, popes continued to date documents according to regnal years for some time, but usage of AD gradually became more common in Catholic countries from the 11th to the 14th centuries. In 1422, Portugal became the last Western European country to switch to the system begun by Dionysius. 
Eastern Orthodox countries only began to adopt AD instead of the Byzantine calendar in 1700 when Russia did so, with others adopting it in the 19th and 20th centuries. Although Anno Domini was in widespread use by the 9th century, the term "Before Christ" (or its equivalent) did not become common until much later. Bede used the expression "anno [...] ante incarnationem Dominicam" (in the year before the incarnation of the Lord) twice. "Anno ante Christi nativitatem" (in the year before the birth of Christ) is found in 1474 in a work by a German monk. In 1627, the French Jesuit theologian Denis Pétau (Dionysius Petavius in Latin), with his work De doctrina temporum, popularized the usage ante Christum (Latin for "Before Christ") to mark years prior to AD. New year When the reckoning from Jesus' incarnation began replacing the previous dating systems in western Europe, various people chose different Christian feast days to begin the year: Christmas, Annunciation, or Easter. Thus, depending on the time and place, the year number changed on different days in the year, which created slightly different styles in chronology: From 25 March 753 AUC (today in 1 BC), i.e., notionally from the incarnation of Jesus. That first "Annunciation style" appeared in Arles at the end of the 9th century then spread to Burgundy and northern Italy. It was not commonly used and was called calculus pisanus since it was adopted in Pisa and survived there till 1750. From 25 December 753 AUC (today in 1 BC), i.e., notionally from the birth of Jesus. It was called "Nativity style" and had been spread by Bede together with the Anno Domini in the early Middle Ages. That reckoning of the Year of Grace from Christmas was used in France, England and most of western Europe (except Spain) until the 12th century (when it was replaced by Annunciation style) and in Germany until the second quarter of the 13th century. From 25 March 754 AUC (today in AD 1). That second "Annunciation style" may have originated in Fleury Abbey in the early 11th century, but it was spread by the Cistercians. Florence adopted that style in opposition to that of Pisa, so it got the name of calculus florentinus. It soon spread in France and also in England where it became common in the late 12th century and lasted until 1752. From Easter, starting in 754 AUC (AD 1). That mos gallicanus (French custom) bound to a moveable feast was introduced in France by king Philip Augustus (r. 1180–1223), maybe to establish a new style in the provinces reconquered from England. However, it never spread beyond the ruling élite. With these various styles, the same day could, in some cases, be dated in 1099, 1100 or 1101. Birth date of Jesus The date of birth of Jesus of Nazareth is not stated in the gospels or in any secular text, but most scholars assume a date of birth between 6 BC and 4 BC. The historical evidence is too fragmentary to allow a definitive dating, but the date is estimated through two different approaches—one by analyzing references to known historical events mentioned in the Nativity accounts in the Gospels of Luke and Matthew and the second by working backwards from the estimation of the start of the ministry of Jesus. Other Christian and European eras During the first six centuries of what would come to be known as the Christian era, European countries used various systems to count years. Systems in use included consular dating, imperial regnal year dating, and Creation dating. 
Although the last non-imperial consul, Basilius, was appointed in 541 by Emperor Justinian I, later emperors through to Constans II (641–668) were appointed consuls on the first of January after their accession. All of these emperors, except Justinian, used imperial post-consular years for the years of their reign, along with their regnal years. Long unused, this practice was not formally abolished until Novell XCIV of the law code of Leo VI did so in 888. Another calculation had been developed by the Alexandrian monk Annianus around the year AD 400, placing the Annunciation on 25 March AD 9 (Julian)—eight to ten years after the date that Dionysius was to imply. Although this incarnation was popular during the early centuries of the Byzantine Empire, years numbered from it, an Era of Incarnation, were exclusively used and are still used in Ethiopia. This accounts for the seven- or eight-year discrepancy between the Gregorian and Ethiopian calendars. Byzantine chroniclers like Maximus the Confessor, George Syncellus, and Theophanes dated their years from Annianus' creation of the world. This era, called Anno Mundi, "year of the world" (abbreviated AM), by modern scholars, began its first year on 25 March 5492 BC. Later Byzantine chroniclers used Anno Mundi years from 1 September 5509 BC, the Byzantine Era. No single Anno Mundi epoch was dominant throughout the Christian world. Eusebius of Caesarea in his Chronicle used an era beginning with the birth of Abraham, dated in 2016 BC (AD 1 = 2017 Anno Abrahami). Spain and Portugal continued to date by the Spanish Era (also called Era of the Caesars), which began counting from 38 BC, well into the Middle Ages. In 1422, Portugal became the last Catholic country to adopt the Anno Domini system. The Era of Martyrs, which numbered years from the accession of Diocletian in 284, who launched the most severe persecution of Christians, was used by the Church of Alexandria and is still used, officially, by the Coptic Orthodox and Coptic Catholic churches. It was also used by the Ethiopian church. Another system was to date from the crucifixion of Jesus, which as early as Hippolytus and Tertullian was believed to have occurred in the consulate of the Gemini (AD 29), which appears in some medieval manuscripts. CE and BCE Alternative names for the Anno Domini era include vulgaris aerae (found 1615 in Latin), "Vulgar Era" (in English, as early as 1635), "Christian Era" (in English, in 1652), "Common Era" (in English, 1708), and "Current Era". Since 1856, the alternative abbreviations CE and BCE (sometimes written C.E. and B.C.E.) are sometimes used in place of AD and BC. The "Common/Current Era" ("CE") terminology is often preferred by those who desire a term that does not explicitly make religious references but still uses the same estimated date of Christ's birth as the dividing point. For example, Cunningham and Starr (1998) write that "B.C.E./C.E. […] do not presuppose faith in Christ and hence are more appropriate for interfaith dialog than the conventional B.C./A.D." Upon its foundation, the Republic of China adopted the Minguo Era but used the Western calendar for international purposes. The translated term was (). Later, in 1949, the People's Republic of China adopted () for all purposes domestic and foreign. No year zero: start and end of a century In the AD year numbering system, whether applied to the Julian or Gregorian calendars, AD 1 is immediately preceded by 1 BC, with nothing in between them (there was no year zero). 
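In the astronomical year numbering mentioned above, this offset reduces to a simple worked rule: a year n BC corresponds to astronomical year 1 − n (for example, 45 BC is year −44), while every AD year keeps its own number; conversely, astronomical year −y corresponds to the year (y + 1) BC.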
There are debates as to whether a new decade, century, or millennium begins on a year ending in zero or one. For computational reasons, astronomical year numbering and the ISO 8601 standard designate years so that AD 1 = year 1, 1 BC = year 0, 2 BC = year −1, etc. In common usage, ancient dates are expressed in the Julian calendar, but ISO 8601 uses the Gregorian calendar and astronomers may use a variety of time scales depending on the application. Thus dates using the year 0 or negative years may require further investigation before being converted to BC or AD. See also Before Present Holocene calendar Sources Bede (731). Historia ecclesiastica gentis Anglorum. Declercq, G. "Dionysius Exiguus and the Introduction of the Christian Era". Sacris Erudiri 41 (2002): 165–246 (an annotated version of part of Declercq's Anno Domini). Doggett (1992). "Calendars" (ch. 12), in P. Kenneth Seidelmann (ed.), Explanatory Supplement to the Astronomical Almanac. Sausalito, CA: University Science Books. Patrick, J. (1908). "General Chronology". In The Catholic Encyclopedia. New York: Robert Appleton Company. Retrieved 2008-07-16 from New Advent.
Anno Domini
APL (named after the book A Programming Language) is a programming language developed in the 1960s by Kenneth E. Iverson. Its central datatype is the multidimensional array. It uses a large range of special graphic symbols to represent most functions and operators, leading to very concise code. It has been an important influence on the development of concept modeling, spreadsheets, functional programming, and computer math packages. It has also inspired several other programming languages. History Mathematical notation A mathematical notation for manipulating arrays was developed by Kenneth E. Iverson, starting in 1957 at Harvard University. In 1960, he began work for IBM where he developed this notation with Adin Falkoff and published it in his book A Programming Language in 1962. The preface states its premise: This notation was used inside IBM for short research reports on computer systems, such as the Burroughs B5000 and its stack mechanism when stack machines versus register machines were being evaluated by IBM for upcoming computers. Iverson also used his notation in a draft of the chapter A Programming Language, written for a book he was writing with Fred Brooks, Automatic Data Processing, which would be published in 1963. In 1979, Iverson received the Turing Award for his work on APL. Development into a computer programming language As early as 1962, the first attempt to use the notation to describe a complete computer system happened after Falkoff discussed with William C. Carter his work to standardize the instruction set for the machines that later became the IBM System/360 family. In 1963, Herbert Hellerman, working at the IBM Systems Research Institute, implemented a part of the notation on an IBM 1620 computer, and it was used by students in a special high school course on calculating transcendental functions by series summation. Students tested their code in Hellerman's lab. This implementation of a part of the notation was called Personalized Array Translator (PAT). In 1963, Falkoff, Iverson, and Edward H. Sussenguth Jr., all working at IBM, used the notation for a formal description of the IBM System/360 series machine architecture and functionality, which resulted in a paper published in IBM Systems Journal in 1964. After this was published, the team turned their attention to an implementation of the notation on a computer system. One of the motivations for this focus of implementation was the interest of John L. Lawrence who had new duties with Science Research Associates, an educational company bought by IBM in 1964. Lawrence asked Iverson and his group to help use the language as a tool to develop and use computers in education. After Lawrence M. Breed and Philip S. Abrams of Stanford University joined the team at IBM Research, they continued their prior work on an implementation programmed in FORTRAN IV for a part of the notation which had been done for the IBM 7090 computer running on the IBSYS operating system. This work was finished in late 1965 and later named IVSYS (for Iverson system). The basis of this implementation was described in detail by Abrams in a Stanford University Technical Report, "An Interpreter for Iverson Notation" in 1966, the academic aspect of this was formally supervised by Niklaus Wirth. Like Hellerman's PAT system earlier, this implementation did not include the APL character set but used special English reserved words for functions and operators. 
The system was later adapted for a time-sharing system and, by November 1966, it had been reprogrammed for the IBM System/360 Model 50 computer running in a time-sharing mode and was used internally at IBM. Hardware A key development in the ability to use APL effectively, before the wide use of cathode ray tube (CRT) terminals, was the development of a special IBM Selectric typewriter interchangeable typing element with all the special APL characters on it. This was used on paper printing terminal workstations using the Selectric typewriter and typing element mechanism, such as the IBM 1050 and IBM 2741 terminal. Keycaps could be placed over the normal keys to show which APL characters would be entered and typed when that key was struck. For the first time, a programmer could type in and see proper APL characters as used in Iverson's notation and not be forced to use awkward English keyword representations of them. Falkoff and Iverson had the special APL Selectric typing elements, 987 and 988, designed in late 1964, although no APL computer system was available to use them. Iverson cited Falkoff as the inspiration for the idea of using an IBM Selectric typing element for the APL character set. Many APL symbols, even with the APL characters on the Selectric typing element, still had to be typed in by over-striking two extant element characters. An example is the grade up character, which had to be made from a delta (shift-H) and a Sheffer stroke (shift-M). This was necessary because the APL character set was much larger than the 88 characters allowed on the typing element, even when letters were restricted to upper-case (capitals). Commercial availability The first APL interactive login and creation of an APL workspace was in 1966 by Larry Breed using an IBM 1050 terminal at the IBM Mohansic Labs near Thomas J. Watson Research Center, the home of APL, in Yorktown Heights, New York. IBM was chiefly responsible for introducing APL to the marketplace. The first publicly available version of APL was released in 1968 for the IBM 1130. IBM provided APL\1130 for free but without liability or support. It would run in as little as 8k 16-bit words of memory, and used a dedicated 1 megabyte hard disk. APL gained its foothold on mainframe timesharing systems from the late 1960s through the early 1980s, in part because it would support multiple users on lower-specification systems that had no dynamic address translation hardware. Additional improvements in performance for selected IBM System/370 mainframe systems included the APL Assist Microcode in which some support for APL execution was included in the processor's firmware, as distinct from being implemented entirely by higher-level software. Somewhat later, as suitably performing hardware was finally growing available in the mid- to late-1980s, many users migrated their applications to the personal computer environment. Early IBM APL interpreters for IBM 360 and IBM 370 hardware implemented their own multi-user management instead of relying on the host services, thus they were their own timesharing systems. First introduced for use at IBM in 1966, the APL\360 system was a multi-user interpreter. The ability to programmatically communicate with the operating system for information and setting interpreter system variables was done through special privileged "I-beam" functions, using both monadic and dyadic operations. 
In 1973, IBM released APL.SV, which was a continuation of the same product, but which offered shared variables as a means to access facilities outside of the APL system, such as operating system files. In the mid-1970s, the IBM mainframe interpreter was even adapted for use on the IBM 5100 desktop computer, which had a small CRT and an APL keyboard, when most other small computers of the time only offered BASIC. In the 1980s, the VSAPL program product enjoyed wide use with Conversational Monitor System (CMS), Time Sharing Option (TSO), VSPC, MUSIC/SP, and CICS users. In 1973–1974, Patrick E. Hagerty directed the implementation of the University of Maryland APL interpreter for the 1100 line of the Sperry UNIVAC 1100/2200 series mainframe computers. At the time, Sperry had nothing. In 1974, student Alan Stebbens was assigned the task of implementing an internal function. Xerox APL was available from June 1975 for Xerox 560 and Sigma 6, 7, and 9 mainframes running CP-V and for Honeywell CP-6. In the 1960s and 1970s, several timesharing firms arose that sold APL services using modified versions of the IBM APL\360 interpreter. In North America, the better-known ones were IP Sharp Associates, Scientific Time Sharing Corporation (STSC), Time Sharing Resources (TSR), and The Computer Company (TCC). CompuServe also entered the market in 1978 with an APL Interpreter based on a modified version of Digital Equipment Corp and Carnegie Mellon's, which ran on DEC's KI and KL 36-bit machines. CompuServe's APL was available both to its commercial market and the consumer information service. With the advent first of less expensive mainframes such as the IBM 4300, and later the personal computer, by the mid-1980s, the timesharing industry was all but gone. Sharp APL was available from IP Sharp Associates, first as a timesharing service in the 1960s, and later as a program product starting around 1979. Sharp APL was an advanced APL implementation with many language extensions, such as packages (the ability to put one or more objects into a single variable), file system, nested arrays, and shared variables. APL interpreters were available from other mainframe and mini-computer manufacturers also, notably Burroughs, Control Data Corporation (CDC), Data General, Digital Equipment Corporation (DEC), Harris, Hewlett-Packard (HP), Siemens, Xerox and others. Garth Foster of Syracuse University sponsored regular meetings of the APL implementers' community at Syracuse's Minnowbrook Conference Center in Blue Mountain Lake, New York. In later years, Eugene McDonnell organized similar meetings at the Asilomar Conference Grounds near Monterey, California, and at Pajaro Dunes near Watsonville, California. The SIGAPL special interest group of the Association for Computing Machinery continues to support the APL community. Microcomputers On microcomputers, which became available from the mid 1970s onwards, BASIC became the dominant programming language. Nevertheless, some microcomputers provided APL instead - the first being the Intel 8008-based MCM/70 which was released in 1974 and which was primarily used in education. Another machine of this time was the VideoBrain Family Computer, released in 1977, which was supplied with its dialect of APL called APL/S. The Commodore SuperPET, introduced in 1981, included an APL interpreter developed by the University of Waterloo. 
In 1976, Bill Gates claimed in his Open Letter to Hobbyists that Microsoft Corporation was implementing APL for the Intel 8080 and Motorola 6800 but had "very little incentive to make [it] available to hobbyists" because of software piracy. It was never released. APL2 Starting in the early 1980s, IBM APL development, under the leadership of Jim Brown, implemented a new version of the APL language that contained as its primary enhancement the concept of nested arrays, where an array can contain other arrays, and new language features which facilitated integrating nested arrays into program workflow. Ken Iverson, no longer in control of the development of the APL language, left IBM and joined I. P. Sharp Associates, where one of his major contributions was directing the evolution of Sharp APL to be more in accord with his vision. APL2 was first released for CMS and TSO in 1984. The APL2 Workstation edition (Windows, OS/2, AIX, Linux, and Solaris) followed later. As other vendors were busy developing APL interpreters for new hardware, notably Unix-based microcomputers, APL2 was almost always the standard chosen for new APL interpreter developments. Even today, most APL vendors or their users cite APL2 compatibility, as a selling point for those products. IBM cites its use for problem solving, system design, prototyping, engineering and scientific computations, expert systems, for teaching mathematics and other subjects, visualization and database access. Modern implementations Various implementations of APL by APLX, Dyalog, et al., include extensions for object-oriented programming, support for .NET Framework, XML-array conversion primitives, graphing, operating system interfaces, and lambda calculus expressions. Derivative languages APL has formed the basis of, or influenced, the following languages: A and A+, an alternative APL, the latter with graphical extensions. FP, a functional programming language. Ivy, an interpreter for an APL-like language developed by Rob Pike, and which uses ASCII as input. J, which was also designed by Iverson, and which uses ASCII with digraphs instead of special symbols. K, a proprietary variant of APL developed by Arthur Whitney. LYaPAS, a Soviet extension to APL. MATLAB, a numerical computation tool. Nial, a high-level array programming language with a functional programming notation. Polymorphic Programming Language, an interactive, extensible language with a similar base language. S, a statistical programming language (usually now seen in the open-source version known as R). Speakeasy, a numerical computing interactive environment. Wolfram Language, the programming language of Mathematica. Language characteristics Character set APL has been criticized and praised for its choice of a unique, non-standard character set. Some who learn it become ardent adherents. In the 1960s and 1970s, few terminal devices or even displays could reproduce the APL character set. The most popular ones employed the IBM Selectric print mechanism used with a special APL type element. One of the early APL line terminals (line-mode operation only, not full screen) was the Texas Instruments TI Model 745 (circa 1977) with the full APL character set which featured half and full duplex telecommunications modes, for interacting with an APL time-sharing service or remote mainframe to run a remote computer job, called an RJE. Over time, with the universal use of high-quality graphic displays, printing devices and Unicode support, the APL character font problem has largely been eliminated. 
However, entering APL characters requires the use of input method editors, keyboard mappings, virtual/on-screen APL symbol sets, or easy-reference printed keyboard cards which can frustrate beginners accustomed to other programming languages. With beginners who have no prior experience with other programming languages, a study involving high school students found that typing and using APL characters did not hinder the students in any measurable way. In defense of APL, it requires fewer characters to type, and keyboard mappings become memorized over time. Special APL keyboards are also made and in use today, as are freely downloadable fonts for operating systems such as Microsoft Windows. The reported productivity gains assume that one spends enough time working in the language to make it worthwhile to memorize the symbols, their semantics, and keyboard mappings, not to mention a substantial number of idioms for common tasks. Design Unlike traditionally structured programming languages, APL code is typically structured as chains of monadic or dyadic functions, and operators acting on arrays. APL has many nonstandard primitives (functions and operators) that are indicated by a single symbol or a combination of a few symbols. All primitives are defined to have the same precedence, and always associate to the right. Thus, APL is read or best understood from right-to-left. Early APL implementations (circa 1970 or so) had no programming loop-flow control structures, such as do or while loops, and if-then-else constructs. Instead, they used array operations, and use of structured programming constructs was often not necessary, since an operation could be performed on a full array in one statement. For example, the iota function (ι) can replace for-loop iteration: ιN when applied to a scalar positive integer yields a one-dimensional array (vector), 1 2 3 ... N. More recent implementations of APL generally include comprehensive control structures, so that data structure and program control flow can be clearly and cleanly separated. The APL environment is called a workspace. In a workspace the user can define programs and data, i.e., the data values exist also outside the programs, and the user can also manipulate the data without having to define a program. In the examples below, the APL interpreter first types six spaces before awaiting the user's input. Its own output starts in column one. The user can save the workspace with all values, programs, and execution status. APL uses a set of non-ASCII symbols, which are an extension of traditional arithmetic and algebraic notation. Having single character names for single instruction, multiple data (SIMD) vector functions is one way that APL enables compact formulation of algorithms for data transformation such as computing Conway's Game of Life in one line of code. In nearly all versions of APL, it is theoretically possible to express any computable function in one expression, that is, in one line of code. Because of the unusual character set, many programmers use special keyboards with APL keytops to write APL code. Although there are various ways to write APL code using only ASCII characters, in practice it is almost never done. (This may be thought to support Iverson's thesis about notation as a tool of thought.) Most if not all modern implementations use standard keyboard layouts, with special mappings or input method editors to access non-ASCII characters. 
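As a minimal sketch of this compact, loop-free style (using the session convention described above, where indented lines are user input and results start in column one; the particular expressions here are illustrative, not taken from any vendor's documentation):

      +/⍳100        ⍝ sum of the integers 1 through 100, with no explicit loop
5050
      +/(⍳100)*2    ⍝ sum of the squares of 1 through 100
338350

Here ⍳100 builds the vector 1 2 3 ... 100, dyadic * raises each element to a power, and +/ reduces the vector with addition, so a computation that would need a loop in many languages is a single right-to-left expression.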
Historically, the APL font has been distinctive, with uppercase italic alphabetic characters and upright numerals and symbols. Most vendors continue to display the APL character set in a custom font. Advocates of APL claim that the examples of so-called write-only code (badly written and almost incomprehensible code) are almost invariably examples of poor programming practice or novice mistakes, which can occur in any language. Advocates also claim that they are far more productive with APL than with more conventional computer languages, and that working software can be implemented in far less time and with far fewer programmers than using other technology. They may also claim that because it is compact and terse, APL lends itself well to larger-scale software development and complexity, since the number of lines of code can be reduced greatly. Many APL advocates and practitioners also view standard programming languages such as COBOL and Java as being comparatively tedious. APL is often found where time-to-market is important, such as with trading systems. Terminology APL makes a clear distinction between functions and operators. Functions take arrays (variables or constants or expressions) as arguments, and return arrays as results. Operators (similar to higher-order functions) take functions or arrays as arguments, and derive related functions. For example, the sum function is derived by applying the reduction operator to the addition function. Applying the same reduction operator to the maximum function (which returns the larger of two numbers) derives a function which returns the largest of a group (vector) of numbers. In the J language, Iverson substituted the terms verb for function and adverb or conjunction for operator. APL also identifies those features built into the language, and represented by a symbol or a fixed combination of symbols, as primitives. Most primitives are either functions or operators. Coding APL is largely a process of writing non-primitive functions and (in some versions of APL) operators. However, a few primitives are considered to be neither functions nor operators, most notably assignment. Some words used in APL literature have meanings that differ from those in both mathematics and the generality of computer science. Syntax APL has explicit representations of functions, operators, and syntax, thus providing a basis for the clear and explicit statement of extended facilities in the language, and tools to experiment on them. Examples Hello, world This displays "Hello, world": 'Hello, world' A design theme in APL is to define default actions in some cases that would produce syntax errors in most other programming languages. The 'Hello, world' string constant above displays, because display is the default action on any expression for which no action is specified explicitly (e.g. assignment, function parameter). Exponentiation Another example of this theme is that exponentiation in APL is written as 2*3, which indicates raising 2 to the power 3 (this would be written as 2^3 in some other languages and 2**3 in FORTRAN and Python). Many languages use * to signify multiplication, as in 2*3, but APL chooses to use × (so multiplication is written 2×3). However, if no base is specified (as with the statement *3 in APL, or ^3 in other languages), in most programming languages this would be a syntax error. APL, however, assumes the missing base to be the natural logarithm constant e, and interprets *3 as e raised to the power 3. Simple statistics Suppose that X is an array of numbers. Then (+/X)÷⍴X gives its average.
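For instance, a brief session sketch with an assumed six-element vector X (the indented lines are user input, as noted earlier; the values are chosen only for illustration):

      X←4 8 15 16 23 42
      (+/X)÷⍴X    ⍝ the average of the six values
18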
Reading right-to-left, ⍴X gives the number of elements in X, and since ÷ is dyadic, the term to its left is required as well. It is surrounded by parentheses since otherwise X would be taken (so that the summation would be of X÷⍴X — each element of X divided by the number of elements in X), and +/X gives the sum of the elements of X. Building on this, the following expression computes the standard deviation: ((+/((X-(+/X)÷⍴X)*2))÷⍴X)*0.5 Naturally, one would define this expression as a function for repeated use rather than rewriting it each time. Further, since assignment is an operator, it can appear within an expression, so the following would place suitable values into T (the total), AV (the average) and SD (the standard deviation): SD←((+/((X-(AV←(T←+/X)÷⍴X))*2))÷⍴X)*0.5 Pick 6 lottery numbers The following immediate-mode expression generates a typical set of Pick 6 lottery numbers: six pseudo-random integers ranging from 1 to 40, guaranteed non-repeating, and displays them sorted in ascending order: x[⍋x←6?40] The above does a lot, concisely, although it may seem complex to a new APLer. It combines the following APL functions (also called primitives and glyphs): The first to be executed (APL executes from rightmost to leftmost) is the dyadic function ? (named deal when dyadic), which returns a vector consisting of a selected number (left argument: 6 in this case) of random integers ranging from 1 to a specified maximum (right argument: 40 in this case), which, if said maximum ≥ vector length, is guaranteed to be non-repeating; thus, it generates 6 random integers ranging from 1 to 40. This vector is then assigned (←) to the variable x, because it is needed later. This vector is then effectively sorted into ascending order using the monadic ⍋ (grade up) function, which has as its right argument everything to the right of it up to the next unbalanced close-bracket or close-parenthesis. The result of ⍋ is the indices that will put its argument into ascending order. Then the output of ⍋ is used to index the variable x, which we saved earlier for this purpose, thereby selecting its items in ascending sequence. Since there is no function to the left of the left-most x to tell APL what to do with the result, it simply outputs it to the display (on a single line, separated by spaces) without needing any explicit instruction to do that. ? also has a monadic equivalent called roll, which simply returns one random integer between 1 and its sole operand [to the right of it], inclusive. Thus, a role-playing game program might use the expression ?20 to roll a twenty-sided die. Prime numbers The following expression finds all prime numbers from 1 to R. In both time and space, the calculation complexity is O(R²) (in Big O notation). (~R∊R∘.×R)/R←1↓⍳R Executed from right to left, this means: Iota ⍳ creates a vector containing integers from 1 to R (if R = 6 at the start of the program, ⍳R is 1 2 3 4 5 6) Drop first element of this vector (↓ function), i.e., 1.
So 1↓⍳R is 2 3 4 5 6 Set R to the new vector (←, assignment primitive), i.e., 2 3 4 5 6 The / replicate operator is dyadic (binary) and the interpreter first evaluates its left argument (fully in parentheses): Generate outer product of R multiplied by R, i.e., a matrix that is the multiplication table of R by R (°.× operator), i.e., Build a vector the same length as R with 1 in each place where the corresponding number in R is in the outer product matrix (∈, set inclusion or element of or Epsilon operator), i.e., 0 0 1 0 1 Logically negate (not) values in the vector (change zeros to ones and ones to zeros) (∼, logical not or Tilde operator), i.e., 1 1 0 1 0 Select the items in R for which the corresponding element is 1 (/ replicate operator), i.e., 2 3 5 (Note, this assumes the APL origin is 1, i.e., indices start with 1. APL can be set to use 0 as the origin, so that ι6 is 0 1 2 3 4 5, which is convenient for some calculations.) Sorting The following expression sorts a word list stored in matrix X according to word length: X[⍋X+.≠' ';] Game of Life The following function "life", written in Dyalog APL, takes a boolean matrix and calculates the new generation according to Conway's Game of Life. It demonstrates the power of APL to implement a complex algorithm in very little code, but it is also very hard to follow unless one has advanced knowledge of APL. life ← {⊃1 ⍵ ∨.∧ 3 4 = +/ +⌿ ¯1 0 1 ∘.⊖ ¯1 0 1 ⌽¨ ⊂⍵} HTML tags removal In the following example, also Dyalog, the first line assigns some HTML code to a variable txt and then uses an APL expression to remove all the HTML tags (explanation): txt←'<html><body><p>This is <em>emphasized</em> text.</p></body></html>' {⍵ /⍨ ~{⍵∨≠\⍵}⍵∊'<>'} txt This is emphasized text. Naming APL derives its name from the initials of Iverson's book A Programming Language, even though the book describes Iverson's mathematical notation, rather than the implemented programming language described in this article. The name is used only for actual implementations, starting with APL\360. Adin Falkoff coined the name in 1966 during the implementation of APL\360 at IBM: APL is occasionally re-interpreted as Array Programming Language or Array Processing Language, thereby making APL into a backronym. Logo There has always been cooperation between APL vendors, and joint conferences were held on a regular basis from 1969 until 2010. At such conferences, APL merchandise was often handed out, featuring APL motifs or collection of vendor logos. Common were apples (as a pun on the similarity in pronunciation of apple and APL) and the code snippet which are the symbols produced by the classic APL keyboard layout when holding the APL modifier key and typing "APL". Despite all these community efforts, no universal vendor-agnostic logo for the programming language emerged. As popular programming languages increasingly have established recognisable logos, Fortran getting one in 2020, British APL Association launched a campaign in the second half of 2021, to establish such a logo for APL. Use APL is used for many purposes including financial and insurance applications, artificial intelligence, neural networks and robotics. It has been argued that APL is a calculation tool and not a programming language; its symbolic nature and array capabilities have made it popular with domain experts and data scientists who do not have or require the skills of a computer programmer. 
APL is well suited to image manipulation and computer animation, where graphic transformations can be encoded as matrix multiplications. One of the first commercial computer graphics houses, Digital Effects, produced an APL graphics product named Visions, which was used to create television commercials and animation for the 1982 film Tron. Latterly, the Stormwind boating simulator uses APL to implement its core logic, its interfacing to the rendering pipeline middleware and a major part of its physics engine. Today, APL remains in use in a wide range of commercial and scientific applications, for example investment management, asset management, health care, and DNA profiling, and by hobbyists. Notable implementations APL\360 The first implementation of APL using recognizable APL symbols was APL\360 which ran on the IBM System/360, and was completed in November 1966 though at that time remained in use only within IBM. In 1973 its implementors, Larry Breed, Dick Lathwell and Roger Moore, were awarded the Grace Murray Hopper Award from the Association for Computing Machinery (ACM). It was given "for their work in the design and implementation of APL\360, setting new standards in simplicity, efficiency, reliability and response time for interactive systems." In 1975, the IBM 5100 microcomputer offered APL\360 as one of two built-in ROM-based interpreted languages for the computer, complete with a keyboard and display that supported all the special symbols used in the language. Significant developments to APL\360 included CMS/APL, which made use of the virtual storage capabilities of CMS and APLSV, which introduced shared variables, system variables and system functions. It was subsequently ported to the IBM System/370 and VSPC platforms until its final release in 1983, after which it was replaced by APL2. APL\1130 In 1968, APL\1130 became the first publicly available APL system, created by IBM for the IBM 1130. It became the most popular IBM Type-III Library software that IBM released. APL*Plus and Sharp APL APL*Plus and Sharp APL are versions of APL\360 with added business-oriented extensions such as data formatting and facilities to store APL arrays in external files. They were jointly developed by two companies, employing various members of the original IBM APL\360 development team. The two companies were I. P. Sharp Associates (IPSA), an APL\360 services company formed in 1964 by Ian Sharp, Roger Moore and others, and STSC, a time-sharing and consulting service company formed in 1969 by Lawrence Breed and others. Together the two developed APL*Plus and thereafter continued to work together but develop APL separately as APL*Plus and Sharp APL. STSC ported APL*Plus to many platforms with versions being made for the VAX 11, PC and UNIX, whereas IPSA took a different approach to the arrival of the personal computer and made Sharp APL available on this platform using additional PC-XT/360 hardware. In 1993, Soliton Incorporated was formed to support Sharp APL and it developed Sharp APL into SAX (Sharp APL for Unix). , APL*Plus continues as APL2000 APL+Win. In 1985, Ian Sharp, and Dan Dyer of STSC, jointly received the Kenneth E. Iverson Award for Outstanding Contribution to APL. APL2 APL2 was a significant re-implementation of APL by IBM which was developed from 1971 and first released in 1984. It provides many additions to the language, of which the most notable is nested (non-rectangular) array support. The entire APL2 Products and Services Team was awarded the Iverson Award in 2007. 
In 2021, IBM sold APL2 to Log-On Software, who develop and sell the product as Log-On APL2. APLGOL In 1972, APLGOL was released as an experimental version of APL that added structured programming language constructs to the language framework. New statements were added for interstatement control, conditional statement execution, and statement structuring, as well as statements to clarify the intent of the algorithm. It was implemented for Hewlett-Packard in 1977. Dyalog APL Dyalog APL was first released by British company Dyalog Ltd. in 1983 and, , is available for AIX, Linux (including on the Raspberry Pi), macOS and Microsoft Windows platforms. It is based on APL2, with extensions to support object-oriented programming and functional programming. Licences are free for personal/non-commercial use. In 1995, two of the development team - John Scholes and Peter Donnelly - were awarded the Iverson Award for their work on the interpreter. Gitte Christensen and Morten Kromberg were joint recipients of the Iverson Award in 2016. NARS2000 NARS2000 is an open-source APL interpreter written by Bob Smith, a prominent APL developer and implementor from STSC in the 1970s and 1980s. NARS2000 contains advanced features and new datatypes and runs natively on Microsoft Windows, and other platforms under Wine. It is named after a development tool from the 1980s, NARS (Nested Arrays Research System). APLX APLX is a cross-platform dialect of APL, based on APL2 and with several extensions, which was first released by British company MicroAPL in 2002. Although no longer in development or on commercial sale it is now available free of charge from Dyalog. GNU APL GNU APL is a free implementation of Extended APL as specified in ISO/IEC 13751:2001 and is thus an implementation of APL2. It runs on Linux (including on the Raspberry Pi), macOS, several BSD dialects, and on Windows (either using Cygwin for full support of all its system functions or as a native 64-bit Windows binary with some of its system functions missing). GNU APL uses Unicode internally and can be scripted. It was written by Jürgen Sauermann. Richard Stallman, founder of the GNU Project, was an early adopter of APL, using it to write a text editor as a high school student in the summer of 1969. Interpretation and compilation of APL APL is traditionally an interpreted language, having language characteristics such as weak variable typing not well suited to compilation. However, with arrays as its core data structure it provides opportunities for performance gains through parallelism, parallel computing, massively parallel applications, and very-large-scale integration (VLSI), and from the outset APL has been regarded as a high-performance language - for example, it was noted for the speed with which it could perform complicated matrix operations "because it operates on arrays and performs operations like matrix inversion internally". Nevertheless, APL is rarely purely interpreted and compilation or partial compilation techniques that are, or have been, used include the following: Idiom recognition Most APL interpreters support idiom recognition and evaluate common idioms as single operations. For example, by evaluating the idiom BV/⍳⍴A as a single operation (where BV is a Boolean vector and A is an array), the creation of two intermediate arrays is avoided. Optimised bytecode Weak typing in APL means that a name may reference an array (of any datatype), a function or an operator. 
In general, the interpreter cannot know in advance which form it will be and must therefore perform analysis, syntax checking etc. at run-time. However, in certain circumstances, it is possible to deduce in advance what type a name is expected to reference and then generate bytecode which can be executed with reduced run-time overhead. This bytecode can also be optimised using compilation techniques such as constant folding or common subexpression elimination. The interpreter will execute the bytecode when present and when any assumptions which have been made are met. Dyalog APL includes support for optimised bytecode. Compilation Compilation of APL has been the subject of research and experiment since the language first became available; the first compiler is considered to be the Burroughs APL-700 which was released around 1971. In order to be able to compile APL, language limitations have to be imposed. APEX is a research APL compiler which was written by Robert Bernecky and is available under the GNU Public License. The STSC APL Compiler is a hybrid of a bytecode optimiser and a compiler - it enables compilation of functions to machine code provided that its sub-functions and globals are declared, but the interpreter is still used as a runtime library and to execute functions which do not meet the compilation requirements. Standards APL has been standardized by the American National Standards Institute (ANSI) working group X3J10 and International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), ISO/IEC Joint Technical Committee 1 Subcommittee 22 Working Group 3. The Core APL language is specified in ISO 8485:1989, and the Extended APL language is specified in ISO/IEC 13751:2001. References Further reading An APL Machine (1970 Stanford doctoral dissertation by Philip Abrams) A Personal History Of APL (1982 article by Michael S. Montalbano) A Programming Language by Kenneth E. Iverson APL in Exposition by Kenneth E. Iverson Brooks, Frederick P.; Kenneth Iverson (1965). Automatic Data Processing, System/360 Edition. . History of Programming Languages, chapter 14 Video The Origins of APL - a 1974 talk show style interview with the original developers of APL. APL demonstration - a 1975 live demonstration of APL by Professor Bob Spence, Imperial College London. Conway's Game Of Life in APL - a 2009 tutorial by John Scholes of Dyalog Ltd. which implements Conway's Game of Life in a single line of APL. 50 Years of APL - a 2009 introduction to APL by Graeme Robertson. External links Online resources TryAPL.org, an online APL primer APL Wiki APL2C, a source of links to APL compilers Providers Log-On APL2 Dyalog APL APLX APL2000 NARS2000 GNU APL OpenAPL User groups and societies Finland: Finnish APL Association (FinnAPL) France: APL et J Germany: APL-Germany e.V. Japan: Japan APL Association (JAPLA) Sweden: Swedish APL User Group (SwedAPL) Switzerland: Swiss APL User Group (SAUG) United Kingdom: The British APL Association United States: ACM SIGPLAN chapter on Array Programming Languages (SIGAPL) .NET programming languages APL programming language family Array programming languages Command shells Dynamic programming languages Dynamically typed programming languages Functional languages IBM software Programming languages created in 1964 Programming languages with an ISO standard Programming languages
APL (programming language)
The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which succeeded in preparing and landing the first humans on the Moon from 1968 to 1972. It was first conceived during Dwight D. Eisenhower's administration as a three-person spacecraft to follow the one-person Project Mercury, which put the first Americans in space. Apollo was later dedicated to President John F. Kennedy's national goal for the 1960s of "landing a man on the Moon and returning him safely to the Earth" in an address to Congress on May 25, 1961. It was the third US human spaceflight program to fly, preceded by the two-person Project Gemini conceived in 1961 to extend spaceflight capability in support of Apollo. Kennedy's goal was accomplished on the Apollo 11 mission when astronauts Neil Armstrong and Buzz Aldrin landed their Apollo Lunar Module (LM) on July 20, 1969, and walked on the lunar surface, while Michael Collins remained in lunar orbit in the command and service module (CSM), and all three landed safely on Earth on July 24. Five subsequent Apollo missions also landed astronauts on the Moon, the last, Apollo 17, in December 1972. In these six spaceflights, twelve people walked on the Moon. Apollo ran from 1961 to 1972, with the first crewed flight in 1968. It encountered a major setback in 1967 when an Apollo 1 cabin fire killed the entire crew during a prelaunch test. After the first successful landing, sufficient flight hardware remained for nine follow-on landings with a plan for extended lunar geological and astrophysical exploration. Budget cuts forced the cancellation of three of these. Five of the remaining six missions achieved successful landings, but the Apollo 13 landing was prevented by an oxygen tank explosion in transit to the Moon, which destroyed the service module's capability to provide electrical power, crippling the CSM's propulsion and life support systems. The crew returned to Earth safely by using the lunar module as a "lifeboat" for these functions. Apollo used the Saturn family of rockets as launch vehicles, which were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three crewed missions in 1973–1974, and the Apollo–Soyuz Test Project, a joint United States-Soviet Union low Earth orbit mission in 1975. Apollo set several major human spaceflight milestones. It stands alone in sending crewed missions beyond low Earth orbit. Apollo 8 was the first crewed spacecraft to orbit another celestial body, and Apollo 11 was the first crewed spacecraft to land humans on one. Overall the Apollo program returned of lunar rocks and soil to Earth, greatly contributing to the understanding of the Moon's composition and geological history. The program laid the foundation for NASA's subsequent human spaceflight capability, and funded construction of its Johnson Space Center and Kennedy Space Center. Apollo also spurred advances in many areas of technology incidental to rocketry and human spaceflight, including avionics, telecommunications, and computers. Background Origin and spacecraft feasibility studies The Apollo program was conceived during the Eisenhower administration in early 1960, as a follow-up to Project Mercury. While the Mercury capsule could support only one astronaut on a limited Earth orbital mission, Apollo would carry three. 
Possible missions included ferrying crews to a space station, circumlunar flights, and eventual crewed lunar landings. The program was named after Apollo, the Greek god of light, music, and the Sun, by NASA manager Abe Silverstein, who later said, "I was naming the spacecraft like I'd name my baby." Silverstein chose the name at home one evening, early in 1960, because he felt "Apollo riding his chariot across the Sun was appropriate to the grand scale of the proposed program." In July 1960, NASA Deputy Administrator Hugh L. Dryden announced the Apollo program to industry representatives at a series of Space Task Group conferences. Preliminary specifications were laid out for a spacecraft with a mission module cabin separate from the command module (piloting and reentry cabin), and a propulsion and equipment module. On August 30, a feasibility study competition was announced, and on October 25, three study contracts were awarded to General Dynamics/Convair, General Electric, and the Glenn L. Martin Company. Meanwhile, NASA performed its own in-house spacecraft design studies led by Maxime Faget, to serve as a gauge to judge and monitor the three industry designs. Political pressure builds In November 1960, John F. Kennedy was elected president after a campaign that promised American superiority over the Soviet Union in the fields of space exploration and missile defense. Up to the election of 1960, Kennedy had been speaking out against the "missile gap" that he and many other senators felt had developed between the Soviet Union and the United States due to the inaction of President Eisenhower. Beyond military power, Kennedy used aerospace technology as a symbol of national prestige, pledging to make the US not "first but, first and, first if, but first period". Despite Kennedy's rhetoric, he did not immediately come to a decision on the status of the Apollo program once he became president. He knew little about the technical details of the space program, and was put off by the massive financial commitment required by a crewed Moon landing. When Kennedy's newly appointed NASA Administrator James E. Webb requested a 30 percent budget increase for his agency, Kennedy supported an acceleration of NASA's large booster program but deferred a decision on the broader issue. On April 12, 1961, Soviet cosmonaut Yuri Gagarin became the first person to fly in space, reinforcing American fears about being left behind in a technological competition with the Soviet Union. At a meeting of the US House Committee on Science and Astronautics one day after Gagarin's flight, many congressmen pledged their support for a crash program aimed at ensuring that America would catch up. Kennedy was circumspect in his response to the news, refusing to make a commitment on America's response to the Soviets. On April 20, Kennedy sent a memo to Vice President Lyndon B. Johnson, asking Johnson to look into the status of America's space program, and into programs that could offer NASA the opportunity to catch up. Johnson responded approximately one week later, concluding that "we are neither making maximum effort nor achieving results necessary if this country is to reach a position of leadership." His memo concluded that a crewed Moon landing was far enough in the future that it was likely the United States would achieve it first. 
On May 25, 1961, twenty days after the first US crewed spaceflight Freedom 7, Kennedy proposed the crewed Moon landing in a Special Message to the Congress on Urgent National Needs: NASA expansion At the time of Kennedy's proposal, only one American had flown in space—less than a month earlier—and NASA had not yet sent an astronaut into orbit. Even some NASA employees doubted whether Kennedy's ambitious goal could be met. By 1963, Kennedy even came close to agreeing to a joint US-USSR Moon mission, to eliminate duplication of effort. With the clear goal of a crewed landing replacing the more nebulous goals of space stations and circumlunar flights, NASA decided that, in order to make progress quickly, it would discard the feasibility study designs of Convair, GE, and Martin, and proceed with Faget's command and service module design. The mission module was determined to be useful only as an extra room, and therefore unnecessary. They used Faget's design as the specification for another competition for spacecraft procurement bids in October 1961. On November 28, 1961, it was announced that North American Aviation had won the contract, although its bid was not rated as good as Martin's. Webb, Dryden and Robert Seamans chose it in preference due to North American's longer association with NASA and its predecessor. Landing humans on the Moon by the end of 1969 required the most sudden burst of technological creativity, and the largest commitment of resources ($25 billion; $ in US dollars) ever made by any nation in peacetime. At its peak, the Apollo program employed 400,000 people and required the support of over 20,000 industrial firms and universities. On July 1, 1960, NASA established the Marshall Space Flight Center (MSFC) in Huntsville, Alabama. MSFC designed the heavy lift-class Saturn launch vehicles, which would be required for Apollo. Manned Spacecraft Center It became clear that managing the Apollo program would exceed the capabilities of Robert R. Gilruth's Space Task Group, which had been directing the nation's crewed space program from NASA's Langley Research Center. So Gilruth was given authority to grow his organization into a new NASA center, the Manned Spacecraft Center (MSC). A site was chosen in Houston, Texas, on land donated by Rice University, and Administrator Webb announced the conversion on September 19, 1961. It was also clear NASA would soon outgrow its practice of controlling missions from its Cape Canaveral Air Force Station launch facilities in Florida, so a new Mission Control Center would be included in the MSC. In September 1962, by which time two Project Mercury astronauts had orbited the Earth, Gilruth had moved his organization to rented space in Houston, and construction of the MSC facility was under way, Kennedy visited Rice to reiterate his challenge in a famous speech: The MSC was completed in September 1963. It was renamed by the US Congress in honor of Lyndon Johnson soon after his death in 1973. Launch Operations Center It also became clear that Apollo would outgrow the Canaveral launch facilities in Florida. The two newest launch complexes were already being built for the Saturn I and IB rockets at the northernmost end: LC-34 and LC-37. But an even bigger facility would be needed for the mammoth rocket required for the crewed lunar mission, so land acquisition was started in July 1961 for a Launch Operations Center (LOC) immediately north of Canaveral at Merritt Island. The design, development and construction of the center was conducted by Kurt H. 
Debus, a member of Dr. Wernher von Braun's original V-2 rocket engineering team. Debus was named the LOC's first Director. Construction began in November 1962. Following Kennedy's death, President Johnson issued an executive order on November 29, 1963, to rename the LOC and Cape Canaveral in honor of Kennedy. The LOC included Launch Complex 39, a Launch Control Center, and a Vertical Assembly Building (VAB). in which the space vehicle (launch vehicle and spacecraft) would be assembled on a mobile launcher platform and then moved by a crawler-transporter to one of several launch pads. Although at least three pads were planned, only two, designated AandB, were completed in October 1965. The LOC also included an Operations and Checkout Building (OCB) to which Gemini and Apollo spacecraft were initially received prior to being mated to their launch vehicles. The Apollo spacecraft could be tested in two vacuum chambers capable of simulating atmospheric pressure at altitudes up to , which is nearly a vacuum. Organization Administrator Webb realized that in order to keep Apollo costs under control, he had to develop greater project management skills in his organization, so he recruited Dr. George E. Mueller for a high management job. Mueller accepted, on the condition that he have a say in NASA reorganization necessary to effectively administer Apollo. Webb then worked with Associate Administrator (later Deputy Administrator) Seamans to reorganize the Office of Manned Space Flight (OMSF). On July 23, 1963, Webb announced Mueller's appointment as Deputy Associate Administrator for Manned Space Flight, to replace then Associate Administrator D. Brainerd Holmes on his retirement effective September 1. Under Webb's reorganization, the directors of the Manned Spacecraft Center (Gilruth), Marshall Space Flight Center (von Braun), and the Launch Operations Center (Debus) reported to Mueller. Based on his industry experience on Air Force missile projects, Mueller realized some skilled managers could be found among high-ranking officers in the U.S. Air Force, so he got Webb's permission to recruit General Samuel C. Phillips, who gained a reputation for his effective management of the Minuteman program, as OMSF program controller. Phillips's superior officer Bernard A. Schriever agreed to loan Phillips to NASA, along with a staff of officers under him, on the condition that Phillips be made Apollo Program Director. Mueller agreed, and Phillips managed Apollo from January 1964, until it achieved the first human landing in July 1969, after which he returned to Air Force duty. Choosing a mission mode Once Kennedy had defined a goal, the Apollo mission planners were faced with the challenge of designing a spacecraft that could meet it while minimizing risk to human life, cost, and demands on technology and astronaut skill. Four possible mission modes were considered: Direct Ascent: The spacecraft would be launched as a unit and travel directly to the lunar surface, without first going into lunar orbit. A Earth return ship would land all three astronauts atop a descent propulsion stage, which would be left on the Moon. This design would have required development of the extremely powerful Saturn C-8 or Nova launch vehicle to carry a payload to the Moon. Earth Orbit Rendezvous (EOR): Multiple rocket launches (up to 15 in some plans) would carry parts of the Direct Ascent spacecraft and propulsion units for translunar injection (TLI). These would be assembled into a single spacecraft in Earth orbit. 
Lunar Surface Rendezvous: Two spacecraft would be launched in succession. The first, an automated vehicle carrying propellant for the return to Earth, would land on the Moon, to be followed some time later by the crewed vehicle. Propellant would have to be transferred from the automated vehicle to the crewed vehicle. Lunar Orbit Rendezvous (LOR): This turned out to be the winning configuration, which achieved the goal with Apollo 11 on July 24, 1969: a single Saturn V launched a spacecraft that was composed of a Apollo command and service module which remained in orbit around the Moon and a two-stage Apollo Lunar Module spacecraft which was flown by two astronauts to the surface, flown back to dock with the command module and was then discarded. Landing the smaller spacecraft on the Moon, and returning an even smaller part () to lunar orbit, minimized the total mass to be launched from Earth, but this was the last method initially considered because of the perceived risk of rendezvous and docking. In early 1961, direct ascent was generally the mission mode in favor at NASA. Many engineers feared that rendezvous and docking, maneuvers that had not been attempted in Earth orbit, would be nearly impossible in lunar orbit. LOR advocates including John Houbolt at Langley Research Center emphasized the important weight reductions that were offered by the LOR approach. Throughout 1960 and 1961, Houbolt campaigned for the recognition of LOR as a viable and practical option. Bypassing the NASA hierarchy, he sent a series of memos and reports on the issue to Associate Administrator Robert Seamans; while acknowledging that he spoke "somewhat as a voice in the wilderness", Houbolt pleaded that LOR should not be discounted in studies of the question. Seamans's establishment of an ad hoc committee headed by his special technical assistant Nicholas E. Golovin in July 1961, to recommend a launch vehicle to be used in the Apollo program, represented a turning point in NASA's mission mode decision. This committee recognized that the chosen mode was an important part of the launch vehicle choice, and recommended in favor of a hybrid EOR-LOR mode. Its consideration of LOR—as well as Houbolt's ceaseless work—played an important role in publicizing the workability of the approach. In late 1961 and early 1962, members of the Manned Spacecraft Center began to come around to support LOR, including the newly hired deputy director of the Office of Manned Space Flight, Joseph Shea, who became a champion of LOR. The engineers at Marshall Space Flight Center (MSFC), which had much to lose from the decision, took longer to become convinced of its merits, but their conversion was announced by Wernher von Braun at a briefing on June 7, 1962. But even after NASA reached internal agreement, it was far from smooth sailing. Kennedy's science advisor Jerome Wiesner, who had expressed his opposition to human spaceflight to Kennedy before the President took office, and had opposed the decision to land people on the Moon, hired Golovin, who had left NASA, to chair his own "Space Vehicle Panel", ostensibly to monitor, but actually to second-guess NASA's decisions on the Saturn V launch vehicle and LOR by forcing Shea, Seamans, and even Webb to defend themselves, delaying its formal announcement to the press on July 11, 1962, and forcing Webb to still hedge the decision as "tentative". Wiesner kept up the pressure, even making the disagreement public during a two-day September visit by the President to Marshall Space Flight Center. 
Wiesner blurted out "No, that's no good" in front of the press, during a presentation by von Braun. Webb jumped in and defended von Braun, until Kennedy ended the squabble by stating that the matter was "still subject to final review". Webb held firm and issued a request for proposal to candidate Lunar Excursion Module (LEM) contractors. Wiesner finally relented, unwilling to settle the dispute once and for all in Kennedy's office, because of the President's involvement with the October Cuban Missile Crisis, and fear of Kennedy's support for Webb. NASA announced the selection of Grumman as the LEM contractor in November 1962. Space historian James Hansen concludes that: The LOR method had the advantage of allowing the lander spacecraft to be used as a "lifeboat" in the event of a failure of the command ship. Some documents prove this theory was discussed before and after the method was chosen. In 1964 an MSC study concluded, "The LM [as lifeboat]... was finally dropped, because no single reasonable CSM failure could be identified that would prohibit use of the SPS." Ironically, just such a failure happened on Apollo 13 when an oxygen tank explosion left the CSM without electrical power. The lunar module provided propulsion, electrical power and life support to get the crew home safely. Spacecraft Faget's preliminary Apollo design employed a cone-shaped command module, supported by one of several service modules providing propulsion and electrical power, sized appropriately for the space station, cislunar, and lunar landing missions. Once Kennedy's Moon landing goal became official, detailed design began of a command and service module (CSM) in which the crew would spend the entire direct-ascent mission and lift off from the lunar surface for the return trip, after being soft-landed by a larger landing propulsion module. The final choice of lunar orbit rendezvous changed the CSM's role to the translunar ferry used to transport the crew, along with a new spacecraft, the Lunar Excursion Module (LEM, later shortened to LM (Lunar Module) but still pronounced ) which would take two individuals to the lunar surface and return them to the CSM. Command and service module The command module (CM) was the conical crew cabin, designed to carry three astronauts from launch to lunar orbit and back to an Earth ocean landing. It was the only component of the Apollo spacecraft to survive without major configuration changes as the program evolved from the early Apollo study designs. Its exterior was covered with an ablative heat shield, and had its own reaction control system (RCS) engines to control its attitude and steer its atmospheric entry path. Parachutes were carried to slow its descent to splashdown. The module was tall, in diameter, and weighed approximately . A cylindrical service module (SM) supported the command module, with a service propulsion engine and an RCS with propellants, and a fuel cell power generation system with liquid hydrogen and liquid oxygen reactants. A high-gain S-band antenna was used for long-distance communications on the lunar flights. On the extended lunar missions, an orbital scientific instrument package was carried. The service module was discarded just before reentry. The module was long and in diameter. The initial lunar flight version weighed approximately fully fueled, while a later version designed to carry a lunar orbit scientific instrument package weighed just over . 
North American Aviation won the contract to build the CSM, and also the second stage of the Saturn V launch vehicle for NASA. Because the CSM design was started early before the selection of lunar orbit rendezvous, the service propulsion engine was sized to lift the CSM off the Moon, and thus was oversized to about twice the thrust required for translunar flight. Also, there was no provision for docking with the lunar module. A 1964 program definition study concluded that the initial design should be continued as Block I which would be used for early testing, while Block II, the actual lunar spacecraft, would incorporate the docking equipment and take advantage of the lessons learned in Block I development. Apollo Lunar Module The Apollo Lunar Module (LM) was designed to descend from lunar orbit to land two astronauts on the Moon and take them back to orbit to rendezvous with the command module. Not designed to fly through the Earth's atmosphere or return to Earth, its fuselage was designed totally without aerodynamic considerations and was of an extremely lightweight construction. It consisted of separate descent and ascent stages, each with its own engine. The descent stage contained storage for the descent propellant, surface stay consumables, and surface exploration equipment. The ascent stage contained the crew cabin, ascent propellant, and a reaction control system. The initial LM model weighed approximately , and allowed surface stays up to around 34 hours. An extended lunar module weighed over , and allowed surface stays of more than three days. The contract for design and construction of the lunar module was awarded to Grumman Aircraft Engineering Corporation, and the project was overseen by Thomas J. Kelly. Launch vehicles Before the Apollo program began, Wernher von Braun and his team of rocket engineers had started work on plans for very large launch vehicles, the Saturn series, and the even larger Nova series. In the midst of these plans, von Braun was transferred from the Army to NASA and was made Director of the Marshall Space Flight Center. The initial direct ascent plan to send the three-person Apollo command and service module directly to the lunar surface, on top of a large descent rocket stage, would require a Nova-class launcher, with a lunar payload capability of over . The June 11, 1962, decision to use lunar orbit rendezvous enabled the Saturn V to replace the Nova, and the MSFC proceeded to develop the Saturn rocket family for Apollo. Since Apollo, like Mercury, used more than one launch vehicle for space missions, NASA used spacecraft-launch vehicle combination series numbers: AS-10x for Saturn I, AS-20x for Saturn IB, and AS-50x for Saturn V (compare Mercury-Redstone 3, Mercury-Atlas 6) to designate and plan all missions, rather than numbering them sequentially as in Project Gemini. This was changed by the time human flights began. Little Joe II Since Apollo, like Mercury, would require a launch escape system (LES) in case of a launch failure, a relatively small rocket was required for qualification flight testing of this system. A rocket bigger than the Little Joe used by Mercury would be required, so the Little Joe II was built by General Dynamics/Convair. After an August 1963 qualification test flight, four LES test flights (A-001 through 004) were made at the White Sands Missile Range between May 1964 and January 1966. Saturn I Saturn I, the first US heavy lift launch vehicle, was initially planned to launch partially equipped CSMs in low Earth orbit tests. 
The S-I first stage burned RP-1 with liquid oxygen (LOX) oxidizer in eight clustered Rocketdyne H-1 engines, to produce of thrust. The S-IV second stage used six liquid hydrogen-fueled Pratt & Whitney RL-10 engines with of thrust. The S-V third stage flew inactively on Saturn I four times. The first four Saturn I test flights were launched from LC-34, with only the first stage live, carrying dummy upper stages filled with water. The first flight with a live S-IV was launched from LC-37. This was followed by five launches of boilerplate CSMs (designated AS-101 through AS-105) into orbit in 1964 and 1965. The last three of these further supported the Apollo program by also carrying Pegasus satellites, which verified the safety of the translunar environment by measuring the frequency and severity of micrometeorite impacts. In September 1962, NASA planned to launch four crewed CSM flights on the Saturn I from late 1965 through 1966, concurrent with Project Gemini. The payload capacity would have severely limited the systems which could be included, so the decision was made in October 1963 to use the uprated Saturn IB for all crewed Earth orbital flights. Saturn IB The Saturn IB was an upgraded version of the Saturn I. The S-IB first stage increased the thrust to by uprating the H-1 engine. The second stage replaced the S-IV with the S-IVB-200, powered by a single J-2 engine burning liquid hydrogen fuel with LOX, to produce of thrust. A restartable version of the S-IVB was used as the third stage of the Saturn V. The Saturn IB could send over into low Earth orbit, sufficient for a partially fueled CSM or the LM. Saturn IB launch vehicles and flights were designated with an AS-200 series number, "AS" indicating "Apollo Saturn" and the "2" indicating the second member of the Saturn rocket family. Saturn V Saturn V launch vehicles and flights were designated with an AS-500 series number, "AS" indicating "Apollo Saturn" and the "5" indicating Saturn V. The three-stage Saturn V was designed to send a fully fueled CSM and LM to the Moon. It was in diameter and stood tall with its lunar payload. Its capability grew to for the later advanced lunar landings. The S-IC first stage burned RP-1/LOX for a rated thrust of , which was upgraded to . The second and third stages burned liquid hydrogen; the third stage was a modified version of the S-IVB, with thrust increased to and capability to restart the engine for translunar injection after reaching a parking orbit. Astronauts NASA's director of flight crew operations during the Apollo program was Donald K. "Deke" Slayton, one of the original Mercury Seven astronauts who was medically grounded in September 1962 due to a heart murmur. Slayton was responsible for making all Gemini and Apollo crew assignments. Thirty-two astronauts were assigned to fly missions in the Apollo program. Twenty-four of these left Earth's orbit and flew around the Moon between December 1968 and December 1972 (three of them twice). Half of the 24 walked on the Moon's surface, though none of them returned to it after landing once. One of the moonwalkers was a trained geologist. Of the 32, Gus Grissom, Ed White, and Roger Chaffee were killed during a ground test in preparation for the Apollo 1 mission. The Apollo astronauts were chosen from the Project Mercury and Gemini veterans, plus from two later astronaut groups. All missions were commanded by Gemini or Mercury veterans. 
Crews on all development flights (except the Earth orbit CSM development flights) through the first two landings on Apollo 11 and Apollo 12, included at least two (sometimes three) Gemini veterans. Dr. Harrison Schmitt, a geologist, was the first NASA scientist astronaut to fly in space, and landed on the Moon on the last mission, Apollo 17. Schmitt participated in the lunar geology training of all of the Apollo landing crews. NASA awarded all 32 of these astronauts its highest honor, the Distinguished Service Medal, given for "distinguished service, ability, or courage", and personal "contribution representing substantial progress to the NASA mission". The medals were awarded posthumously to Grissom, White, and Chaffee in 1969, then to the crews of all missions from Apollo 8 onward. The crew that flew the first Earth orbital test mission Apollo 7, Walter M. Schirra, Donn Eisele, and Walter Cunningham, were awarded the lesser NASA Exceptional Service Medal, because of discipline problems with the flight director's orders during their flight. In October 2008, the NASA Administrator decided to award them the Distinguished Service Medals. For Schirra and Eisele, this was posthumously. Lunar mission profile The first lunar landing mission was planned to proceed as follows: Profile variations The first three lunar missions (Apollo 8, Apollo 10, and Apollo 11) used a free return trajectory, keeping a flight path coplanar with the lunar orbit, which would allow a return to Earth in case the SM engine failed to make lunar orbit insertion. Landing site lighting conditions on later missions dictated a lunar orbital plane change, which required a course change maneuver soon after TLI, and eliminated the free-return option. After Apollo 12 placed the second of several seismometers on the Moon, the jettisoned LM ascent stages on Apollo 12 and later missions were deliberately crashed on the Moon at known locations to induce vibrations in the Moon's structure. The only exceptions to this were the Apollo 13 LM which burned up in the Earth's atmosphere, and Apollo 16, where a loss of attitude control after jettison prevented making a targeted impact. As another active seismic experiment, the S-IVBs on Apollo 13 and subsequent missions were deliberately crashed on the Moon instead of being sent to solar orbit. Starting with Apollo 13, descent orbit insertion was to be performed using the service module engine instead of the LM engine, in order to allow a greater fuel reserve for landing. This was actually done for the first time on Apollo 14, since the Apollo 13 mission was aborted before landing. Development history Uncrewed flight tests Two Block I CSMs were launched from LC-34 on suborbital flights in 1966 with the Saturn IB. The first, AS-201 launched on February 26, reached an altitude of and splashed down downrange in the Atlantic Ocean. The second, AS-202 on August 25, reached altitude and was recovered downrange in the Pacific Ocean. These flights validated the service module engine and the command module heat shield. A third Saturn IB test, AS-203 launched from pad 37, went into orbit to support design of the S-IVB upper stage restart capability needed for the Saturn V. It carried a nose cone instead of the Apollo spacecraft, and its payload was the unburned liquid hydrogen fuel, the behavior of which engineers measured with temperature and pressure sensors, and a TV camera. 
This flight occurred on July 5, before AS-202, which was delayed because of problems getting the Apollo spacecraft ready for flight. Preparation for crewed flight Two crewed orbital Block I CSM missions were planned: AS-204 and AS-205. The Block I crew positions were titled Command Pilot, Senior Pilot, and Pilot. The Senior Pilot would assume navigation duties, while the Pilot would function as a systems engineer. The astronauts would wear a modified version of the Gemini spacesuit. After an uncrewed LM test flight AS-206, a crew would fly the first Block II CSM and LM in a dual mission known as AS-207/208, or AS-278 (each spacecraft would be launched on a separate Saturn IB). The Block II crew positions were titled Commander, Command Module Pilot, and Lunar Module Pilot. The astronauts would begin wearing a new Apollo A6L spacesuit, designed to accommodate lunar extravehicular activity (EVA). The traditional visor helmet was replaced with a clear "fishbowl" type for greater visibility, and the lunar surface EVA suit would include a water-cooled undergarment. Deke Slayton, the grounded Mercury astronaut who became director of flight crew operations for the Gemini and Apollo programs, selected the first Apollo crew in January 1966, with Grissom as Command Pilot, White as Senior Pilot, and rookie Donn F. Eisele as Pilot. But Eisele dislocated his shoulder twice aboard the KC135 weightlessness training aircraft, and had to undergo surgery on January 27. Slayton replaced him with Chaffee. NASA announced the final crew selection for AS-204 on March 21, 1966, with the backup crew consisting of Gemini veterans James McDivitt and David Scott, with rookie Russell L. "Rusty" Schweickart. Mercury/Gemini veteran Wally Schirra, Eisele, and rookie Walter Cunningham were announced on September 29 as the prime crew for AS-205. In December 1966, the AS-205 mission was canceled, since the validation of the CSM would be accomplished on the 14-day first flight, and AS-205 would have been devoted to space experiments and contribute no new engineering knowledge about the spacecraft. Its Saturn IB was allocated to the dual mission, now redesignated AS-205/208 or AS-258, planned for August 1967. McDivitt, Scott and Schweickart were promoted to the prime AS-258 crew, and Schirra, Eisele and Cunningham were reassigned as the Apollo1 backup crew. Program delays The spacecraft for the AS-202 and AS-204 missions were delivered by North American Aviation to the Kennedy Space Center with long lists of equipment problems which had to be corrected before flight; these delays caused the launch of AS-202 to slip behind AS-203, and eliminated hopes the first crewed mission might be ready to launch as soon as November 1966, concurrently with the last Gemini mission. Eventually, the planned AS-204 flight date was pushed to February 21, 1967. North American Aviation was prime contractor not only for the Apollo CSM, but for the SaturnV S-II second stage as well, and delays in this stage pushed the first uncrewed SaturnV flight AS-501 from late 1966 to November 1967. (The initial assembly of AS-501 had to use a dummy spacer spool in place of the stage.) The problems with North American were severe enough in late 1965 to cause Manned Space Flight Administrator George Mueller to appoint program director Samuel Phillips to head a "tiger team" to investigate North American's problems and identify corrections. 
Phillips documented his findings in a December 19 letter to NAA president Lee Atwood, with a strongly worded letter by Mueller, and also gave a presentation of the results to Mueller and Deputy Administrator Robert Seamans. Meanwhile, Grumman was also encountering problems with the Lunar Module, eliminating hopes it would be ready for crewed flight in 1967, not long after the first crewed CSM flights. Apollo 1 fire Grissom, White, and Chaffee decided to name their flight Apollo1 as a motivational focus on the first crewed flight. They trained and conducted tests of their spacecraft at North American, and in the altitude chamber at the Kennedy Space Center. A "plugs-out" test was planned for January, which would simulate a launch countdown on LC-34 with the spacecraft transferring from pad-supplied to internal power. If successful, this would be followed by a more rigorous countdown simulation test closer to the February 21 launch, with both spacecraft and launch vehicle fueled. The plugs-out test began on the morning of January 27, 1967, and immediately was plagued with problems. First, the crew noticed a strange odor in their spacesuits which delayed the sealing of the hatch. Then, communications problems frustrated the astronauts and forced a hold in the simulated countdown. During this hold, an electrical fire began in the cabin and spread quickly in the high pressure, 100% oxygen atmosphere. Pressure rose high enough from the fire that the cabin inner wall burst, allowing the fire to erupt onto the pad area and frustrating attempts to rescue the crew. The astronauts were asphyxiated before the hatch could be opened. NASA immediately convened an accident review board, overseen by both houses of Congress. While the determination of responsibility for the accident was complex, the review board concluded that "deficiencies existed in command module design, workmanship and quality control". At the insistence of NASA Administrator Webb, North American removed Harrison Storms as command module program manager. Webb also reassigned Apollo Spacecraft Program Office (ASPO) Manager Joseph Francis Shea, replacing him with George Low. To remedy the causes of the fire, changes were made in the Block II spacecraft and operational procedures, the most important of which were use of a nitrogen/oxygen mixture instead of pure oxygen before and during launch, and removal of flammable cabin and space suit materials. The Block II design already called for replacement of the Block I plug-type hatch cover with a quick-release, outward opening door. NASA discontinued the crewed Block I program, using the BlockI spacecraft only for uncrewed SaturnV flights. Crew members would also exclusively wear modified, fire-resistant A7L Block II space suits, and would be designated by the Block II titles, regardless of whether a LM was present on the flight or not. Uncrewed Saturn V and LM tests On April 24, 1967, Mueller published an official Apollo mission numbering scheme, using sequential numbers for all flights, crewed or uncrewed. The sequence would start with Apollo 4 to cover the first three uncrewed flights while retiring the Apollo1 designation to honor the crew, per their widows' wishes. In September 1967, Mueller approved a sequence of mission types which had to be successfully accomplished in order to achieve the crewed lunar landing. 
Each step had to be successfully accomplished before the next ones could be performed, and it was unknown how many tries of each mission would be necessary; therefore letters were used instead of numbers. The A missions were uncrewed Saturn V validation; B was uncrewed LM validation using the Saturn IB; C was crewed CSM Earth orbit validation using the Saturn IB; D was the first crewed CSM/LM flight (this replaced AS-258, using a single Saturn V launch); E would be a higher Earth orbit CSM/LM flight; F would be the first lunar mission, testing the LM in lunar orbit but without landing (a "dress rehearsal"); and G would be the first crewed landing. The list of types covered follow-on lunar exploration to include H lunar landings, I for lunar orbital survey missions, and J for extended-stay lunar landings. The delay in the CSM caused by the fire enabled NASA to catch up on human-rating the LM and SaturnV. Apollo4 (AS-501) was the first uncrewed flight of the SaturnV, carrying a BlockI CSM on November 9, 1967. The capability of the command module's heat shield to survive a trans-lunar reentry was demonstrated by using the service module engine to ram it into the atmosphere at higher than the usual Earth-orbital reentry speed. Apollo 5 (AS-204) was the first uncrewed test flight of the LM in Earth orbit, launched from pad 37 on January 22, 1968, by the Saturn IB that would have been used for Apollo 1. The LM engines were successfully test-fired and restarted, despite a computer programming error which cut short the first descent stage firing. The ascent engine was fired in abort mode, known as a "fire-in-the-hole" test, where it was lit simultaneously with jettison of the descent stage. Although Grumman wanted a second uncrewed test, George Low decided the next LM flight would be crewed. This was followed on April 4, 1968, by Apollo 6 (AS-502) which carried a CSM and a LM Test Article as ballast. The intent of this mission was to achieve trans-lunar injection, followed closely by a simulated direct-return abort, using the service module engine to achieve another high-speed reentry. The Saturn V experienced pogo oscillation, a problem caused by non-steady engine combustion, which damaged fuel lines in the second and third stages. Two S-II engines shut down prematurely, but the remaining engines were able to compensate. The damage to the third stage engine was more severe, preventing it from restarting for trans-lunar injection. Mission controllers were able to use the service module engine to essentially repeat the flight profile of Apollo 4. Based on the good performance of Apollo6 and identification of satisfactory fixes to the Apollo6 problems, NASA declared the SaturnV ready to fly crew, canceling a third uncrewed test. Crewed development missions Apollo 7, launched from LC-34 on October 11, 1968, was the Cmission, crewed by Schirra, Eisele, and Cunningham. It was an 11-day Earth-orbital flight which tested the CSM systems. Apollo 8 was planned to be the D mission in December 1968, crewed by McDivitt, Scott and Schweickart, launched on a SaturnV instead of two Saturn IBs. In the summer it had become clear that the LM would not be ready in time. Rather than waste the Saturn V on another simple Earth-orbiting mission, ASPO Manager George Low suggested the bold step of sending Apollo8 to orbit the Moon instead, deferring the Dmission to the next mission in March 1969, and eliminating the E mission. This would keep the program on track. 
The Soviet Union had sent two tortoises, mealworms, wine flies, and other lifeforms around the Moon on September 15, 1968, aboard Zond 5, and it was believed they might soon repeat the feat with human cosmonauts. The decision was not announced publicly until successful completion of Apollo 7. Gemini veterans Frank Borman and Jim Lovell, and rookie William Anders captured the world's attention by making ten lunar orbits in 20 hours, transmitting television pictures of the lunar surface on Christmas Eve, and returning safely to Earth. The following March, LM flight, rendezvous and docking were successfully demonstrated in Earth orbit on Apollo 9, and Schweickart tested the full lunar EVA suit with its portable life support system (PLSS) outside the LM. The F mission was successfully carried out on Apollo 10 in May 1969 by Gemini veterans Thomas P. Stafford, John Young and Eugene Cernan. Stafford and Cernan took the LM to within of the lunar surface. The G mission was achieved on Apollo 11 in July 1969 by an all-Gemini veteran crew consisting of Neil Armstrong, Michael Collins and Buzz Aldrin. Armstrong and Aldrin performed the first landing at the Sea of Tranquility at 20:17:40 UTC on July 20, 1969. They spent a total of 21 hours, 36 minutes on the surface, and spent 2hours, 31 minutes outside the spacecraft, walking on the surface, taking photographs, collecting material samples, and deploying automated scientific instruments, while continuously sending black-and-white television back to Earth. The astronauts returned safely on July 24. Production lunar landings In November 1969, Charles “Pete” Conrad became the third person to step onto the Moon, which he did while speaking more informally than had Armstrong: Conrad and rookie Alan L. Bean made a precision landing of Apollo 12 within walking distance of the Surveyor 3 uncrewed lunar probe, which had landed in April 1967 on the Ocean of Storms. The command module pilot was Gemini veteran Richard F. Gordon Jr. Conrad and Bean carried the first lunar surface color television camera, but it was damaged when accidentally pointed into the Sun. They made two EVAs totaling 7hours and 45 minutes. On one, they walked to the Surveyor, photographed it, and removed some parts which they returned to Earth. The contracted batch of 15 Saturn Vs was enough for lunar landing missions through Apollo 20. Shortly after Apollo 11, NASA publicized a preliminary list of eight more planned landing sites after Apollo 12, with plans to increase the mass of the CSM and LM for the last five missions, along with the payload capacity of the Saturn V. These final missions would combine the I and J types in the 1967 list, allowing the CMP to operate a package of lunar orbital sensors and cameras while his companions were on the surface, and allowing them to stay on the Moon for over three days. These missions would also carry the Lunar Roving Vehicle (LRV) increasing the exploration area and allowing televised liftoff of the LM. Also, the Block II spacesuit was revised for the extended missions to allow greater flexibility and visibility for driving the LRV. The success of the first two landings allowed the remaining missions to be crewed with a single veteran as commander, with two rookies. Apollo 13 launched Lovell, Jack Swigert, and Fred Haise in April 1970, headed for the Fra Mauro formation. But two days out, a liquid oxygen tank exploded, disabling the service module and forcing the crew to use the LM as a "lifeboat" to return to Earth. 
Another NASA review board was convened to determine the cause, which turned out to be a combination of damage of the tank in the factory, and a subcontractor not making a tank component according to updated design specifications. Apollo was grounded again, for the remainder of 1970 while the oxygen tank was redesigned and an extra one was added. Mission cutbacks About the time of the first landing in 1969, it was decided to use an existing Saturn V to launch the Skylab orbital laboratory pre-built on the ground, replacing the original plan to construct it in orbit from several Saturn IB launches; this eliminated Apollo 20. NASA's yearly budget also began to shrink in light of the successful landing, and NASA also had to make funds available for the development of the upcoming Space Shuttle. By 1971, the decision was made to also cancel missions 18 and 19. The two unused Saturn Vs became museum exhibits at the John F. Kennedy Space Center on Merritt Island, Florida, George C. Marshall Space Center in Huntsville, Alabama, Michoud Assembly Facility in New Orleans, Louisiana, and Lyndon B. Johnson Space Center in Houston, Texas. The cutbacks forced mission planners to reassess the original planned landing sites in order to achieve the most effective geological sample and data collection from the remaining four missions. Apollo 15 had been planned to be the last of the H series missions, but since there would be only two subsequent missions left, it was changed to the first of three J missions. Apollo 13's Fra Mauro mission was reassigned to Apollo 14, commanded in February 1971 by Mercury veteran Alan Shepard, with Stuart Roosa and Edgar Mitchell. This time the mission was successful. Shepard and Mitchell spent 33 hours and 31 minutes on the surface, and completed two EVAs totalling 9hours 24 minutes, which was a record for the longest EVA by a lunar crew at the time. In August 1971, just after conclusion of the Apollo 15 mission, President Richard Nixon proposed canceling the two remaining lunar landing missions, Apollo 16 and 17. Office of Management and Budget Deputy Director Caspar Weinberger was opposed to this, and persuaded Nixon to keep the remaining missions. Extended missions Apollo 15 was launched on July 26, 1971, with David Scott, Alfred Worden and James Irwin. Scott and Irwin landed on July 30 near Hadley Rille, and spent just under two days, 19 hours on the surface. In over 18 hours of EVA, they collected about of lunar material. Apollo 16 landed in the Descartes Highlands on April 20, 1972. The crew was commanded by John Young, with Ken Mattingly and Charles Duke. Young and Duke spent just under three days on the surface, with a total of over 20 hours EVA. Apollo 17 was the last of the Apollo program, landing in the Taurus–Littrow region in December 1972. Eugene Cernan commanded Ronald E. Evans and NASA's first scientist-astronaut, geologist Dr. Harrison H. Schmitt. Schmitt was originally scheduled for Apollo 18, but the lunar geological community lobbied for his inclusion on the final lunar landing. Cernan and Schmitt stayed on the surface for just over three days and spent just over 23 hours of total EVA. Canceled missions Several missions were planned for but were canceled before details were finalized. Mission summary Source: Apollo by the Numbers: A Statistical Reference (Orloff 2004) Samples returned The Apollo program returned over of lunar rocks and soil to the Lunar Receiving Laboratory in Houston. 
Today, 75% of the samples are stored at the Lunar Sample Laboratory Facility built in 1979. The rocks collected from the Moon are extremely old compared to rocks found on Earth, as measured by radiometric dating techniques. They range in age from about 3.2 billion years for the basaltic samples derived from the lunar maria, to about 4.6 billion years for samples derived from the highlands crust. As such, they represent samples from a very early period in the development of the Solar System, that are largely absent on Earth. One important rock found during the Apollo Program is dubbed the Genesis Rock, retrieved by astronauts David Scott and James Irwin during the Apollo 15 mission. This anorthosite rock is composed almost exclusively of the calcium-rich feldspar mineral anorthite, and is believed to be representative of the highland crust. A geochemical component called KREEP was discovered by Apollo 12, which has no known terrestrial counterpart. KREEP and the anorthositic samples have been used to infer that the outer portion of the Moon was once completely molten (see lunar magma ocean). Almost all the rocks show evidence of impact process effects. Many samples appear to be pitted with micrometeoroid impact craters, which is never seen on Earth rocks, due to the thick atmosphere. Many show signs of being subjected to high-pressure shock waves that are generated during impact events. Some of the returned samples are of impact melt (materials melted near an impact crater.) All samples returned from the Moon are highly brecciated as a result of being subjected to multiple impact events. Analysis of the composition of the lunar samples supports the giant impact hypothesis, that the Moon was created through impact of a large astronomical body with the Earth. Costs Apollo cost $25.4 billion (or approximately $ in dollars when adjusted for inflation via the GDP deflator index). Of this amount, $20.2 billion ($ adjusted) was spent on the design, development, and production of the Saturn family of launch vehicles, the Apollo spacecraft, spacesuits, scientific experiments, and mission operations. The cost of constructing and operating Apollo-related ground facilities, such as the NASA human spaceflight centers and the global tracking and data acquisition network, added an additional $5.2 billion ($ adjusted). The amount grows to $28 billion ($ adjusted) if the costs for related projects such as Project Gemini and the robotic Ranger, Surveyor, and Lunar Orbiter programs are included. NASA's official cost breakdown, as reported to Congress in the Spring of 1973, is as follows: Accurate estimates of human spaceflight costs were difficult in the early 1960s, as the capability was new and management experience was lacking. Preliminary cost analysis by NASA estimated $7 billion – $12 billion for a crewed lunar landing effort. NASA Administrator James Webb increased this estimate to $20 billion before reporting it to Vice President Johnson in April 1961. Project Apollo was a massive undertaking, representing the largest research and development project in peacetime. At its peak, it employed over 400,000 employees and contractors around the country and accounted for more than half of NASA's total spending in the 1960s. After the first Moon landing, public and political interest waned, including that of President Nixon, who wanted to rein in federal spending. 
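As a rough illustration of the GDP-deflator adjustment mentioned above, the calculation scales a nominal amount by the ratio of the deflator in the target year to the deflator in the year the money was spent. The sketch below is illustrative only: the deflator values are placeholder assumptions, not official statistics, and a real adjustment would sum Apollo's year-by-year outlays rather than treating the $25.4 billion as a single-year figure.

    # Minimal sketch of inflation adjustment via a GDP deflator ratio (illustrative values only).
    def adjust_for_inflation(nominal_dollars, deflator_then, deflator_now):
        # Scale a nominal amount into target-year dollars.
        return nominal_dollars * (deflator_now / deflator_then)

    apollo_nominal = 25.4e9                      # total program cost in then-year dollars
    # Assumed deflators: 20 in the (single, simplified) spending year, 120 in the target year.
    print(adjust_for_inflation(apollo_nominal, deflator_then=20.0, deflator_now=120.0))
    # -> 152.4e9: the nominal total multiplied by the 6x deflator ratio assumed here.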
NASA's budget could not sustain Apollo missions which cost, on average, $445 million ($ adjusted) each while simultaneously developing the Space Shuttle. The final fiscal year of Apollo funding was 1973. Apollo Applications Program Looking beyond the crewed lunar landings, NASA investigated several post-lunar applications for Apollo hardware. The Apollo Extension Series (Apollo X) proposed up to 30 flights to Earth orbit, using the space in the Spacecraft Lunar Module Adapter (SLA) to house a small orbital laboratory (workshop). Astronauts would continue to use the CSM as a ferry to the station. This study was followed by design of a larger orbital workshop to be built in orbit from an empty S-IVB Saturn upper stage and grew into the Apollo Applications Program (AAP). The workshop was to be supplemented by the Apollo Telescope Mount, which could be attached to the ascent stage of the lunar module via a rack. The most ambitious plan called for using an empty S-IVB as an interplanetary spacecraft for a Venus fly-by mission. The S-IVB orbital workshop was the only one of these plans to make it off the drawing board. Dubbed Skylab, it was assembled on the ground rather than in space, and launched in 1973 using the two lower stages of a Saturn V. It was equipped with an Apollo Telescope Mount. Skylab's last crew departed the station on February 8, 1974, and the station itself re-entered the atmosphere in 1979. The Apollo–Soyuz program also used Apollo hardware for the first joint nation spaceflight, paving the way for future cooperation with other nations in the Space Shuttle and International Space Station programs. Recent observations In 2008, Japan Aerospace Exploration Agency's SELENE probe observed evidence of the halo surrounding the Apollo 15 Lunar Module blast crater while orbiting above the lunar surface. Beginning in 2009, NASA's robotic Lunar Reconnaissance Orbiter, while orbiting above the Moon, photographed the remnants of the Apollo program left on the lunar surface, and each site where crewed Apollo flights landed. All of the U.S. flags left on the Moon during the Apollo missions were found to still be standing, with the exception of the one left during the Apollo 11 mission, which was blown over during that mission's lift-off from the lunar surface and return to the mission Command Module in lunar orbit; the degree to which these flags retain their original colors remains unknown. In a November 16, 2009, editorial, The New York Times opined: Legacy Science and engineering The Apollo program has been called the greatest technological achievement in human history. Apollo stimulated many areas of technology, leading to over 1,800 spinoff products as of 2015. The flight computer design used in both the lunar and command modules was, along with the Polaris and Minuteman missile systems, the driving force behind early research into integrated circuits (ICs). By 1963, Apollo was using 60 percent of the United States' production of ICs. The crucial difference between the requirements of Apollo and the missile programs was Apollo's much greater need for reliability. While the Navy and Air Force could work around reliability problems by deploying more missiles, the political and financial cost of failure of an Apollo mission was unacceptably high. Technologies and techniques required for Apollo were developed by Project Gemini. 
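The reliability point above can be made concrete with the standard series-system calculation: if a vehicle contains many parts that must all work, overall mission reliability is the product of the individual part reliabilities. The figures below are purely illustrative and are not Apollo specifications; they only show why component failure rates had to be driven extremely low.

    # Series-system reliability: all n parts must work, each with the same reliability r.
    # The part counts and reliabilities below are illustrative, not Apollo figures.
    def system_reliability(part_reliability, n_parts):
        return part_reliability ** n_parts

    for r in (0.99999, 0.999999):                     # "five nines" vs "six nines" per part
        print(r, system_reliability(r, 1_000_000))
    # 0.99999  -> ~0.000045 (a hypothetical million-part vehicle would almost never work)
    # 0.999999 -> ~0.37     (still only roughly a one-in-three chance of overall success)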
The Apollo project was enabled by NASA's adoption of new advances in semiconductor electronic technology, including metal-oxide-semiconductor field-effect transistors (MOSFETs) in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC). Cultural impact The crew of Apollo 8 sent the first live televised pictures of the Earth and the Moon back to Earth, and read from the creation story in the Book of Genesis, on Christmas Eve 1968. An estimated one-quarter of the population of the world saw—either live or delayed—the Christmas Eve transmission during the ninth orbit of the Moon, and an estimated one-fifth of the population of the world watched the live transmission of the Apollo 11 moonwalk. The Apollo program also affected environmental activism in the 1970s due to photos taken by the astronauts. The most well known include Earthrise, taken by William Anders on Apollo 8, and The Blue Marble, taken by the Apollo 17 astronauts. The Blue Marble was released during a surge in environmentalism, and became a symbol of the environmental movement as a depiction of Earth's frailty, vulnerability, and isolation amid the vast expanse of space. According to The Economist, Apollo succeeded in accomplishing President Kennedy's goal of taking on the Soviet Union in the Space Race by accomplishing a singular and significant achievement, to demonstrate the superiority of the free-market system. The publication noted the irony that in order to achieve the goal, the program required the organization of tremendous public resources within a vast, centralized government bureaucracy. Apollo 11 broadcast data restoration project Prior to Apollo 11's 40th anniversary in 2009, NASA searched for the original videotapes of the mission's live televised moonwalk. After an exhaustive three-year search, it was concluded that the tapes had probably been erased and reused. A new digitally remastered version of the best available broadcast television footage was released instead. NASA spinoffs from Apollo NASA spinoffs are dual-purpose technologies created by NASA that have come to help day-to-day life on Earth. Many of these discoveries were made to deal with problems in space. Spinoffs have come out of every NASA mission as well as other discoveries outside of space missions. The following are NASA spinoffs that have come from discoveries from and for the Apollo mission. Cordless power tools NASA started using cordless tools to build the International Space Station in orbit. Today these innovations have led to cordless battery-powered tools used on Earth. Cordless tools have been able to help surgeons in operating rooms greatly because they allow for a greater range of freedom. Fireproof material Following the 1967 Apollo fire, NASA learned that they needed fireproof material to protect astronauts inside the spaceship. NASA developed fireproof material for use on parts of the capsule and on spacesuits. This is important because there is a high percentage of oxygen under great pressure, presenting a fire hazard. The fireproof fabric, called Durette, was created by Monsanto and is now used in firefighting gear. Heart monitors Technology discovered and employed in the Apollo missions led to technology that Medrad used to create an AID implantable automatic pulse generator. This technology is able to monitor heart attacks and can help correct heart malfunctions using small electrical shocks. 
With heart disease being so common in the United States, heart monitoring is a very important technological advance. Solar panels Solar panels are able to absorb light to create electricity. This technology used discoveries from NASA's Apollo Lunar Module program. Light collected from the panels is transformed into electricity through a semiconductor. Solar panels are now employed in many common applications including outdoor lighting, houses, street lights and portable chargers. In addition to being used on Earth, this technology is still being used in space on the International Space Station. Digital imaging NASA has been able to contribute to creating technology for CAT scans, radiography and MRIs. This technology came from discoveries using digital imaging for NASA's lunar research. CAT scans, radiography and MRIs have made a huge impact in the world of medicine, allowing doctors to see in more detail what is happening inside patients’ bodies. Liquid methane Liquid methane is a fuel which the Apollo program created as a less expensive alternative to traditional oil. It is still used today in rocket launches. Methane must be stored at extremely low temperature to remain liquid, requiring a temperature of . Liquid methane was created by Beech Aircraft Corporation's Boulder Division, and since then the company has been able to convert some cars to run on liquid methane. Depictions on film Documentaries Numerous documentary films cover the Apollo program and the Space Race, including: Footprints on the Moon (1969) Moonwalk One (1970) For All Mankind (1989) Moon Shot (1994 miniseries) "Moon" from the BBC miniseries The Planets (1999) Magnificent Desolation: Walking on the Moon 3D (2005) The Wonder of It All (2007) In the Shadow of the Moon (2007) When We Left Earth: The NASA Missions (2008 miniseries) Moon Machines (2008 miniseries) James May on the Moon (2009) NASA's Story (2009 miniseries) Apollo 11 (2019) Chasing the Moon (2019 miniseries) Docudramas The Apollo program, or certain missions, have been dramatized in Apollo 13 (1995), Apollo 11 (1996), From the Earth to the Moon (1998), The Dish (2000), Space Race (2005), Moonshot (2009), and First Man (2018). Fictional The Apollo program has been the focus of several works of fiction, including: Apollo 18, a 2011 horror movie which was released to negative reviews. For All Mankind, a 2019 TV series depicting an alternate reality in which the Soviet Union was the first country to successfully land a man on the Moon. The rest of the series follows an alternate history of the late 1960s and early 1970s with NASA continuing Apollo missions to the Moon. See also Apollo 11 in popular culture Apollo Lunar Surface Experiments Package Exploration of the Moon List of artificial objects on the Moon List of crewed spacecraft Moon landing conspiracy theories Soviet crewed lunar programs Stolen and missing Moon rocks References Citations Sources Chaikin interviewed all the surviving astronauts and others who worked with the program. Further reading   NASA Report JSC-09423, April 1975 Astronaut Mike Collins autobiography of his experiences as an astronaut, including his flight aboard Apollo 11. Although this book focuses on Apollo 13, it provides a wealth of background information on Apollo technology and procedures. History of the Apollo program from Apollos 1–11, including many interviews with the Apollo astronauts. 
Gleick, James, "Moon Fever" [review of Oliver Morton, The Moon: A History of the Future; Apollo's Muse: The Moon in the Age of Photography, an exhibition at the Metropolitan Museum of Art, New York City, July 3 – September 22, 2019; Douglas Brinkley, American Moonshot: John F. Kennedy and the Great Space Race; Brandon R. Brown, The Apollo Chronicles: Engineering America's First Moon Missions; Roger D. Launius, Reaching for the Moon: A Short History of the Space Race; Apollo 11, a documentary film directed by Todd Douglas Miller; and Michael Collins, Carrying the Fire: An Astronaut's Journeys (50th Anniversary Edition)], The New York Review of Books, vol. LXVI, no. 13 (15 August 2019), pp. 54–58. Factual, from the standpoint of a flight controller during the Mercury, Gemini, and Apollo space programs. Details the flight of Apollo 13. Tells Grumman's story of building the lunar modules. History of the crewed space program from 1September 1960, to 5January 1968. Account of Deke Slayton's life as an astronaut and of his work as chief of the astronaut office, including selection of Apollo crews.   From origin to November 7, 1962   November 8, 1962 – September 30, 1964   October 1, 1964 – January 20, 1966   January 21, 1966 – July 13, 1974 The history of lunar exploration from a geologist's point of view. External links Apollo program history at NASA's Human Space Flight (HSF) website The Apollo Program at the NASA History Program Office The Apollo Program at the National Air and Space Museum Apollo 35th Anniversary Interactive Feature at NASA (in Flash) Lunar Mission Timeline at the Lunar and Planetary Institute Apollo Collection, The University of Alabama in Huntsville Archives and Special Collections NASA reports Apollo Program Summary Report (PDF), NASA, JSC-09423, April 1975 NASA History Series Publications Project Apollo Drawings and Technical Diagrams at the NASA History Program Office The Apollo Lunar Surface Journal edited by Eric M. Jones and Ken Glover The Apollo Flight Journal by W. David Woods, et al. Multimedia NASA Apollo Program images and videos Apollo Image Archive at Arizona State University Audio recording and transcript of President John F. Kennedy, NASA administrator James Webb, et al., discussing the Apollo agenda (White House Cabinet Room, November 21, 1962) The Project Apollo Archive by Kipp Teague is a large repository of Apollo images, videos, and audio recordings The Project Apollo Archive on Flickr Apollo Image Atlas—almost 25,000 lunar images, Lunar and Planetary Institute 1960s in the United States 1970s in the United States Articles containing video clips Engineering projects Exploration of the Moon Human spaceflight programs NASA programs Space program of the United States
Apollo program
Aspirin, also known as acetylsalicylic acid (ASA), is a medication used to reduce pain, fever, or inflammation. Specific inflammatory conditions which aspirin is used to treat include Kawasaki disease, pericarditis, and rheumatic fever. Aspirin given shortly after a heart attack decreases the risk of death. Aspirin is also used long-term to help prevent further heart attacks, ischaemic strokes, and blood clots in people at high risk. For pain or fever, effects typically begin within 30 minutes. Aspirin is a nonsteroidal anti-inflammatory drug (NSAID) and works similarly to other NSAIDs but also suppresses the normal functioning of platelets. Aspirin, often used as an analgesic, anti-pyretic and non-steroidal anti-inflammatory drug (NSAID), is able to have an anti-platelet effect by inhibiting the COX activity in the platelet to prevent the production of thromboxane A2 which acts to bind platelets together during coagulation as well as cause vasoconstriction and bronchoconstriction. One common adverse effect is an upset stomach. More significant side effects include stomach ulcers, stomach bleeding, and worsening asthma. Bleeding risk is greater among those who are older, drink alcohol, take other NSAIDs, or are on other blood thinners. Aspirin is not recommended in the last part of pregnancy. It is not generally recommended in children with infections because of the risk of Reye syndrome. High doses may result in ringing in the ears. A precursor to aspirin found in leaves from the willow tree (genus Salix) has been used for its health effects for at least 2,400 years. In 1853, chemist Charles Frédéric Gerhardt treated the medicine sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time. For the next 50 years, other chemists established the chemical structure and devised more efficient production methods. Aspirin is one of the most widely used medications globally, with an estimated (50 to 120 billion pills) consumed each year. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2019, it was the 38th most commonly prescribed medication in the United States, with more than 18million prescriptions. Brand vs. generic name In 1897, scientists at the Bayer company began studying acetylsalicylic acid as a less-irritating replacement medication for common salicylate medicines. By 1899, Bayer had named it "Aspirin" and sold it around the world. Aspirin's popularity grew over the first half of the 20th century, leading to competition between many brands and formulations. The word Aspirin was Bayer's brand name; however, their rights to the trademark were lost or sold in many countries. The name is ultimately a blend of the prefix a(cetyl) + spir Spiraea, the meadowsweet plant genus from which the acetylsalicylic acid was originally derived at Bayer + -in, the common chemical suffix. Chemical properties Aspirin decomposes rapidly in solutions of ammonium acetate or the acetates, carbonates, citrates, or hydroxides of the alkali metals. It is stable in dry air, but gradually hydrolyses in contact with moisture to acetic and salicylic acids. In solution with alkalis, the hydrolysis proceeds rapidly and the clear solutions formed may consist entirely of acetate and salicylate. Like flour mills, factories producing aspirin tablets must control the amount of the powder that becomes airborne inside the building, because the powder-air mixture can be explosive. 
The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit in the United States of 5mg/m3 (time-weighted average). In 1989, the Occupational Safety and Health Administration (OSHA) set a legal permissible exposure limit for aspirin of 5mg/m3, but this was vacated by the AFL-CIO v. OSHA decision in 1993. Synthesis The synthesis of aspirin is classified as an esterification reaction. Salicylic acid is treated with acetic anhydride, an acid derivative, causing a chemical reaction that turns salicylic acid's hydroxyl group into an ester group (R-OH → R-OCOCH3). This process yields aspirin and acetic acid, which is considered a byproduct of this reaction. Small amounts of sulfuric acid (and occasionally phosphoric acid) are almost always used as a catalyst. This method is commonly demonstrated in undergraduate teaching labs. Reaction mechanism Formulations containing high concentrations of aspirin often smell like vinegar because aspirin can decompose through hydrolysis in moist conditions, yielding salicylic and acetic acids. Physical properties Aspirin, an acetyl derivative of salicylic acid, is a white, crystalline, weakly acidic substance, with a melting point of , and a boiling point of . Its acid dissociation constant (pKa) is 3.5 at . Polymorphism Polymorphism, or the ability of a substance to form more than one crystal structure, is important in the development of pharmaceutical ingredients. Many drugs receive regulatory approval for only a single crystal form or polymorph. For a long time, only one crystal structure for aspirin was known. That aspirin might have a second crystalline form was suspected since the 1960s. The elusive second polymorph was first discovered by Vishweshwar and coworkers in 2005, and fine structural details were given by Bond et al. A new crystal type was found during experiments after co-crystallization of aspirin and levetiracetam from hot acetonitrile. The form II is only stable at 100K and reverts to form I at ambient temperature. In the (unambiguous) form I, two salicylic molecules form centrosymmetric dimers through the acetyl groups with the (acidic) methyl proton to carbonyl hydrogen bonds, and in the newly claimed form II, each salicylic molecule forms the same hydrogen bonds with two neighboring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures. Mechanism of action Discovery of the mechanism In 1971, British pharmacologist John Robert Vane, then employed by the Royal College of Surgeons in London, showed aspirin suppressed the production of prostaglandins and thromboxanes. For this discovery he was awarded the 1982 Nobel Prize in Physiology or Medicine, jointly with Sune Bergström and Bengt Ingemar Samuelsson. Prostaglandins and thromboxanes Aspirin's ability to suppress the production of prostaglandins and thromboxanes is due to its irreversible inactivation of the cyclooxygenase (COX; officially known as prostaglandin-endoperoxide synthase, PTGS) enzyme required for prostaglandin and thromboxane synthesis. Aspirin acts as an acetylating agent where an acetyl group is covalently attached to a serine residue in the active site of the PTGS enzyme (Suicide inhibition). This makes aspirin different from other NSAIDs (such as diclofenac and ibuprofen), which are reversible inhibitors. 
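The esterification described in the synthesis section above lends itself to the theoretical-yield calculation commonly done alongside the undergraduate preparation. The sketch below uses the standard molar masses of salicylic acid (about 138.12 g/mol) and aspirin (about 180.16 g/mol); the 2.0 g starting mass and the 85% isolated yield are hypothetical numbers chosen only to show the arithmetic, with acetic anhydride assumed to be in excess.

    # Theoretical yield for salicylic acid + acetic anhydride -> aspirin + acetic acid
    # (1:1 mole ratio, salicylic acid limiting). Starting mass and % yield are hypothetical.
    M_SALICYLIC = 138.12   # g/mol, C7H6O3
    M_ASPIRIN = 180.16     # g/mol, C9H8O4

    def theoretical_yield_g(mass_salicylic_g):
        moles = mass_salicylic_g / M_SALICYLIC
        return moles * M_ASPIRIN

    max_g = theoretical_yield_g(2.0)       # ~2.61 g of aspirin at 100% conversion
    print(round(max_g, 2))
    print(round(0.85 * max_g, 2))          # ~2.22 g for an assumed 85% isolated yield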
Low-dose aspirin use irreversibly blocks the formation of thromboxane A2 in platelets, producing an inhibitory effect on platelet aggregation during the lifetime of the affected platelet (8–9 days). This antithrombotic property makes aspirin useful for reducing the incidence of heart attacks in people who have had a heart attack, unstable angina, ischemic stroke or transient ischemic attack. 40mg of aspirin a day is able to inhibit a large proportion of maximum thromboxane A2 release provoked acutely, with the prostaglandin I2 synthesis being little affected; however, higher doses of aspirin are required to attain further inhibition. Prostaglandins, local hormones produced in the body, have diverse effects, including the transmission of pain information to the brain, modulation of the hypothalamic thermostat, and inflammation. Thromboxanes are responsible for the aggregation of platelets that form blood clots. Heart attacks are caused primarily by blood clots, and low doses of aspirin are seen as an effective medical intervention to prevent a second acute myocardial infarction. COX-1 and COX-2 inhibition At least two different types of cyclooxygenases, COX-1 and COX-2, are acted on by aspirin. Aspirin irreversibly inhibits COX-1 and modifies the enzymatic activity of COX-2. COX-2 normally produces prostanoids, most of which are proinflammatory. Aspirin-modified PTGS2 (prostaglandin-endoperoxide synthase 2) produces lipoxins, most of which are anti-inflammatory. Newer NSAID drugs, COX-2 inhibitors (coxibs), have been developed to inhibit only PTGS2, with the intent to reduce the incidence of gastrointestinal side effects. Several COX-2 inhibitors, such as rofecoxib (Vioxx), have been withdrawn from the market, after evidence emerged that PTGS2 inhibitors increase the risk of heart attack and stroke. Endothelial cells lining the microvasculature in the body are proposed to express PTGS2, and, by selectively inhibiting PTGS2, prostaglandin production (specifically, PGI2; prostacyclin) is downregulated with respect to thromboxane levels, as PTGS1 in platelets is unaffected. Thus, the protective anticoagulative effect of PGI2 is removed, increasing the risk of thrombus and associated heart attacks and other circulatory problems. Since platelets have no DNA, they are unable to synthesize new PTGS once aspirin has irreversibly inhibited the enzyme, an important difference with reversible inhibitors. Furthermore, aspirin, while inhibiting the ability of COX-2 to form pro-inflammatory products such as the prostaglandins, converts this enzyme's activity from a prostaglandin-forming cyclooxygenase to a lipoxygenase-like enzyme: aspirin-treated COX-2 metabolizes a variety of polyunsaturated fatty acids to hydroperoxy products which are then further metabolized to specialized proresolving mediators such as the aspirin-triggered lipoxins, aspirin-triggered resolvins, and aspirin-triggered maresins. These mediators possess potent anti-inflammatory activity. It is proposed that this aspirin-triggered transition of COX-2 from cyclooxygenase to lipoxygenase activity and the consequential formation of specialized proresolving mediators contributes to the anti-inflammatory effects of aspirin. Additional mechanisms Aspirin has been shown to have at least three additional modes of action. 
It uncouples oxidative phosphorylation in cartilaginous (and hepatic) mitochondria, by diffusing from the inner membrane space as a proton carrier back into the mitochondrial matrix, where it ionizes once again to release protons. Aspirin buffers and transports the protons. When high doses are given, it may actually cause fever, owing to the heat released from the electron transport chain, as opposed to the antipyretic action of aspirin seen with lower doses. In addition, aspirin induces the formation of NO-radicals in the body, which have been shown in mice to have an independent mechanism of reducing inflammation. This reduced leukocyte adhesion is an important step in the immune response to infection; however, evidence is insufficient to show aspirin helps to fight infection. More recent data also suggest salicylic acid and its derivatives modulate signalling through NF-κB. NF-κB, a transcription factor complex, plays a central role in many biological processes, including inflammation. Aspirin is readily broken down in the body to salicylic acid, which itself has anti-inflammatory, antipyretic, and analgesic effects. In 2012, salicylic acid was found to activate AMP-activated protein kinase, which has been suggested as a possible explanation for some of the effects of both salicylic acid and aspirin. The acetyl portion of the aspirin molecule has its own targets. Acetylation of cellular proteins is a well-established phenomenon in the regulation of protein function at the post-translational level. Aspirin is able to acetylate several other targets in addition to COX isoenzymes. These acetylation reactions may explain many hitherto unexplained effects of aspirin. Pharmacokinetics Acetylsalicylic acid is a weak acid, and very little of it is ionized in the stomach after oral administration. Acetylsalicylic acid is quickly absorbed through the cell membrane in the acidic conditions of the stomach. The increased pH and larger surface area of the small intestine causes aspirin to be absorbed more slowly there, as more of it is ionized. Owing to the formation of concretions, aspirin is absorbed much more slowly during overdose, and plasma concentrations can continue to rise for up to 24 hours after ingestion. About 50–80% of salicylate in the blood is bound to human serum albumin, while the rest remains in the active, ionized state; protein binding is concentration-dependent. Saturation of binding sites leads to more free salicylate and increased toxicity. The volume of distribution is 0.1–0.2 L/kg. Acidosis increases the volume of distribution because of enhancement of tissue penetration of salicylates. As much as 80% of therapeutic doses of salicylic acid is metabolized in the liver. Conjugation with glycine forms salicyluric acid, and with glucuronic acid to form two different glucuronide esters. The conjugate with the acetyl group intact is referred to as the acyl glucuronide; the deacetylated conjugate is the phenolic glucuronide. These metabolic pathways have only a limited capacity. Small amounts of salicylic acid are also hydroxylated to gentisic acid. With large salicylate doses, the kinetics switch from first-order to zero-order, as metabolic pathways become saturated and renal excretion becomes increasingly important. Salicylates are excreted mainly by the kidneys as salicyluric acid (75%), free salicylic acid (10%), salicylic phenol (10%), and acyl glucuronides (5%), gentisic acid (< 1%), and 2,3-dihydroxybenzoic acid. 
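The absorption pattern described above (largely un-ionized in the acidic stomach, largely ionized at the higher pH of the small intestine) follows from the Henderson–Hasselbalch relation for a weak acid with a pKa of about 3.5. The pH values in the sketch below (about 2 for gastric fluid, about 6.5 for the small intestine) are typical textbook assumptions rather than figures given in this article.

    # Fraction of a weak acid left un-ionized (membrane-permeable) at a given pH,
    # from Henderson-Hasselbalch: ionized/un-ionized = 10 ** (pH - pKa).
    PKA = 3.5                                   # acetylsalicylic acid

    def fraction_unionized(ph, pka=PKA):
        return 1.0 / (1.0 + 10 ** (ph - pka))

    print(fraction_unionized(2.0))   # stomach, assumed pH ~2: ~0.97 un-ionized
    print(fraction_unionized(6.5))   # small intestine, assumed pH ~6.5: ~0.001 un-ionized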
When small doses (less than 250mg in an adult) are ingested, all pathways proceed by first-order kinetics, with an elimination half-life of about 2.0 h to 4.5 h. When higher doses of salicylate are ingested (more than 4 g), the half-life becomes much longer (15 h to 30 h), because the biotransformation pathways concerned with the formation of salicyluric acid and salicyl phenolic glucuronide become saturated. Renal excretion of salicylic acid becomes increasingly important as the metabolic pathways become saturated, because it is extremely sensitive to changes in urinary pH. A 10- to 20-fold increase in renal clearance occurs when urine pH is increased from 5 to 8. The use of urinary alkalinization exploits this particular aspect of salicylate elimination. It was found that short-term aspirin use in therapeutic doses might precipitate reversible acute kidney injury when the patient was ill with glomerulonephritis or cirrhosis. Aspirin for some patients with chronic kidney disease and some children with congestive heart failure was contraindicated. History Medicines made from willow and other salicylate-rich plants appear in clay tablets from ancient Sumer as well as the Ebers Papyrus from ancient Egypt. Hippocrates referred to the use of salicylic tea to reduce fevers around 400 BC, and willow bark preparations were part of the pharmacopoeia of Western medicine in classical antiquity and the Middle Ages. Willow bark extract became recognized for its specific effects on fever, pain, and inflammation in the mid-eighteenth century. By the nineteenth century, pharmacists were experimenting with and prescribing a variety of chemicals related to salicylic acid, the active component of willow extract. In 1853, chemist Charles Frédéric Gerhardt treated sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time; in the second half of the 19th century, other academic chemists established the compound's chemical structure and devised more efficient methods of synthesis. In 1897, scientists at the drug and dye firm Bayer began investigating acetylsalicylic acid as a less-irritating replacement for standard common salicylate medicines, and identified a new way to synthesize it. By 1899, Bayer had dubbed this drug Aspirin and was selling it globally. The word Aspirin was Bayer's brand name, rather than the generic name of the drug; however, Bayer's rights to the trademark were lost or sold in many countries. Aspirin's popularity grew over the first half of the 20th century leading to fierce competition with the proliferation of aspirin brands and products. Aspirin's popularity declined after the development of acetaminophen/paracetamol in 1956 and ibuprofen in 1962. In the 1960s and 1970s, John Vane and others discovered the basic mechanism of aspirin's effects, while clinical trials and other studies from the 1960s to the 1980s established aspirin's efficacy as an anti-clotting agent that reduces the risk of clotting diseases. The initial large studies on the use of low-dose aspirin to prevent heart attacks that were published in the 1970s and 1980s helped spur reform in clinical research ethics and guidelines for human subject research and US federal law, and are often cited as examples of clinical trials that included only men, but from which people drew general conclusions that did not hold true for women. 
Aspirin sales revived considerably in the last decades of the 20th century, and remain strong in the 21st century with widespread use as a preventive treatment for heart attacks and strokes. Trademark Bayer lost its trademark for Aspirin in the United States in actions taken between 1918 and 1921 because it had failed to use the name for its own product correctly and had for years allowed the use of "Aspirin" by other manufacturers without defending the intellectual property rights. Today, aspirin is a generic trademark in many countries. Aspirin, with a capital "A", remains a registered trademark of Bayer in Germany, Canada, Mexico, and in over 80 other countries, for acetylsalicylic acid in all markets, but using different packaging and physical aspects for each. Compendial status United States Pharmacopeia British Pharmacopoeia Medical use Aspirin is used in the treatment of a number of conditions, including fever, pain, rheumatic fever, and inflammatory conditions, such as rheumatoid arthritis, pericarditis, and Kawasaki disease. Lower doses of aspirin have also been shown to reduce the risk of death from a heart attack, or the risk of stroke in people who are at high risk or who have cardiovascular disease, but not in elderly people who are otherwise healthy. There is some evidence that aspirin is effective at preventing colorectal cancer, though the mechanisms of this effect are unclear. In the United States, low-dose aspirin is deemed reasonable in those between 50 and 70 years old who have a risk of cardiovascular disease over 10%, are not at an increased risk of bleeding, and are otherwise healthy. Pain Aspirin is an effective analgesic for acute pain, although it is generally considered inferior to ibuprofen because aspirin is more likely to cause gastrointestinal bleeding. Aspirin is generally ineffective for those pains caused by muscle cramps, bloating, gastric distension, or acute skin irritation. As with other NSAIDs, combinations of aspirin and caffeine provide slightly greater pain relief than aspirin alone. Effervescent formulations of aspirin relieve pain faster than aspirin in tablets, which makes them useful for the treatment of migraines. Topical aspirin may be effective for treating some types of neuropathic pain. Aspirin, either by itself or in a combined formulation, effectively treats certain types of a headache, but its efficacy may be questionable for others. Secondary headaches, meaning those caused by another disorder or trauma, should be promptly treated by a medical provider. Among primary headaches, the International Classification of Headache Disorders distinguishes between tension headache (the most common), migraine, and cluster headache. Aspirin or other over-the-counter analgesics are widely recognized as effective for the treatment of tension headache. Aspirin, especially as a component of an aspirin/paracetamol/caffeine combination, is considered a first-line therapy in the treatment of migraine, and comparable to lower doses of sumatriptan. It is most effective at stopping migraines when they are first beginning. Fever Like its ability to control pain, aspirin's ability to control fever is due to its action on the prostaglandin system through its irreversible inhibition of COX. 
Although aspirin's use as an antipyretic in adults is well established, many medical societies and regulatory agencies, including the American Academy of Family Physicians, the American Academy of Pediatrics, and the Food and Drug Administration, strongly advise against using aspirin for treatment of fever in children because of the risk of Reye's syndrome, a rare but often fatal illness associated with the use of aspirin or other salicylates in children during episodes of viral or bacterial infection. Because of the risk of Reye's syndrome in children, in 1986, the US Food and Drug Administration (FDA) required labeling on all aspirin-containing medications advising against its use in children and teenagers. Inflammation Aspirin is used as an anti-inflammatory agent for both acute and long-term inflammation, as well as for treatment of inflammatory diseases, such as rheumatoid arthritis. Heart attacks and strokes Aspirin is an important part of the treatment of those who have had a heart attack. It is generally not recommended for routine use by people with no other health problems, including those over the age of 70. For people who have already had a heart attack or stroke, taking aspirin daily for two years prevented 1 in 50 from having a cardiovascular problem (heart attack, stroke, or death), but also caused non-fatal bleeding problems to occur in 1 of 400 people. Data from early trials of aspirin in primary prevention suggested low dose aspirin is more beneficial for people <70 kg and high dose aspirin is more beneficial for those ≥70 kg. However, more recent trials have suggested lower dose aspirin is not more efficacious in people with a low body weight and more evidence is required to determine the effect of higher dose aspirin in people with a high body weight. The United States Preventive Services Task Force (USPSTF), , recommended initiating low-dose aspirin use for the primary prevention of cardiovascular disease and colon cancer in adults aged 50 to 59 years who have a 10% or greater 10-year cardiovascular disease (CVD) risk, are not at increased risk for bleeding, have a life expectancy of at least 10 years, and are willing to take low-dose aspirin daily for at least 10 years. However, in 2021, the USPSTF recommended against the routine use of daily aspirin for primary prevention in adults in their 40s and 50s, citing the fact that the risk of side effects outweighs the potential benefits. Individuals under 60 should first consult a healthcare provider before initiating daily aspirin. In those with no previous history of heart disease, aspirin decreases the risk of a non-fatal myocardial infarction but increases the risk of bleeding and does not change the overall risk of death. Specifically over 5 years it decreased the risk of a cardiovascular event by 1 in 265 and increased the risk of bleeding by 1 in 210. Aspirin appears to offer little benefit to those at lower risk of heart attack or stroke—for instance, those without a history of these events or with pre-existing disease. Some studies recommend aspirin on a case-by-case basis, while others have suggested the risks of other events, such as gastrointestinal bleeding, were enough to outweigh any potential benefit, and recommended against using aspirin for primary prevention entirely. Aspirin has also been suggested as a component of a polypill for prevention of cardiovascular disease. Complicating the use of aspirin for prevention is the phenomenon of aspirin resistance. 
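The benefit and harm figures quoted above are easier to compare when the "1 in N" statements are converted to events per 1,000 people treated; this is simple arithmetic on the numbers already given, and it deliberately ignores differences in severity between the events being counted.

    # Convert the "1 in N" figures quoted above into events per 1,000 people treated.
    def events_per_1000(one_in_n):
        return 1000.0 / one_in_n

    print(events_per_1000(50))    # 20 cardiovascular events prevented per 1,000 (secondary prevention, 2 years)
    print(events_per_1000(400))   # 2.5 extra non-fatal bleeds per 1,000 (secondary prevention, 2 years)
    print(events_per_1000(265))   # ~3.8 events prevented per 1,000 (primary prevention, 5 years)
    print(events_per_1000(210))   # ~4.8 extra bleeds per 1,000 (primary prevention, 5 years)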
For people who are resistant, aspirin's efficacy is reduced. Some authors have suggested testing regimens to identify people who are resistant to aspirin. After percutaneous coronary interventions (PCIs), such as the placement of a coronary artery stent, a U.S. Agency for Healthcare Research and Quality guideline recommends that aspirin be taken indefinitely. Frequently, aspirin is combined with an ADP receptor inhibitor, such as clopidogrel, prasugrel, or ticagrelor to prevent blood clots. This is called dual antiplatelet therapy (DAPT). Duration of DAPT was advised in the United States and European Union guidelines after the CURE and PRODIGY studies . In 2020, the systematic review and network meta-analysis from Khan et al. showed promising benefits of short-term (< 6 months) DAPT followed by P2Y12 inhibitors in selected patients, as well as the benefits of extended-term (> 12 months) DAPT in high risk patients. In conclusion, the optimal duration of DAPT after PCIs should be personalized after outweighing each patient's risks of ischemic events and risks of bleeding events with consideration of multiple patient-related and procedure-related factors. Moreover, aspirin should be continued indefinitely after DAPT is complete. Cancer prevention Aspirin may reduce the overall risk of both getting cancer and dying from cancer. There is substantial evidence for lowering the risk of colorectal cancer (CRC), but must be taken for at least 10–20 years to see this benefit. It may also slightly reduce the risk of endometrial cancer, breast cancer, and prostate cancer. Some conclude the benefits are greater than the risks due to bleeding in those at average risk. Others are unclear if the benefits are greater than the risk. Given this uncertainty, the 2007 United States Preventive Services Task Force (USPSTF) guidelines on this topic recommended against the use of aspirin for prevention of CRC in people with average risk. Nine years later however, the USPSTF issued a grade B recommendation for the use of low-dose aspirin (75 to 100mg/day) "for the primary prevention of CVD [cardiovascular disease] and CRC in adults 50 to 59 years of age who have a 10% or greater 10-year CVD risk, are not at increased risk for bleeding, have a life expectancy of at least 10 years, and are willing to take low-dose aspirin daily for at least 10 years". A meta-analysis through 2019 said that there was an association between taking aspirin and lower risk of cancer of the colorectum, esophagus, and stomach. In 2021, the U.S. Preventive services Task Force raised questions about the use of aspirin in cancer prevention. It notes the results of the 2018 ASPREE (Aspirin in Reducing Events in the Elderly) Trial, in which the risk of cancer-related death was higher in the aspirin-treated group than in the placebo group. Psychiatry Bipolar disorder Aspirin, along with several other agents with anti-inflammatory properties, has been repurposed as an add-on treatment for depressive episodes in subjects with bipolar disorder in light of the possible role of inflammation in the pathogenesis of severe mental disorders. However, meta-analytic evidence is based on very few studies and does not suggest any efficacy of aspirin in the treatment of bipolar depression. Thus, notwithstanding the biological rationale, the clinical perspectives of aspirin and anti-inflammatory agents in the treatment of bipolar depression remain uncertain. Dementia There is no evidence that aspirin prevents dementia. 
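The 2016 USPSTF grade B recommendation quoted above amounts to a short checklist, which can be written out explicitly. The function below is only an illustrative encoding of that published wording and is not a clinical decision tool; as noted earlier, the USPSTF stepped back from routine primary-prevention use in 2021.

    # Illustrative encoding of the 2016 USPSTF grade B criteria for low-dose aspirin
    # (primary prevention of CVD and colorectal cancer). Not medical advice.
    def meets_uspstf_2016_criteria(age, ten_year_cvd_risk, increased_bleeding_risk,
                                   life_expectancy_years, willing_to_take_10_years):
        return (50 <= age <= 59
                and ten_year_cvd_risk >= 0.10          # 10% or greater 10-year CVD risk
                and not increased_bleeding_risk
                and life_expectancy_years >= 10
                and willing_to_take_10_years)

    print(meets_uspstf_2016_criteria(55, 0.12, False, 20, True))   # True
    print(meets_uspstf_2016_criteria(45, 0.12, False, 20, True))   # False: outside the 50-59 age band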
Other uses Aspirin is a first-line treatment for the fever and joint-pain symptoms of acute rheumatic fever. The therapy often lasts for one to two weeks, and is rarely indicated for longer periods. After fever and pain have subsided, the aspirin is no longer necessary, since it does not decrease the incidence of heart complications and residual rheumatic heart disease. Naproxen has been shown to be as effective as aspirin and less toxic, but due to the limited clinical experience, naproxen is recommended only as a second-line treatment. Along with rheumatic fever, Kawasaki disease remains one of the few indications for aspirin use in children in spite of a lack of high quality evidence for its effectiveness. Low-dose aspirin supplementation has moderate benefits when used for prevention of pre-eclampsia. This benefit is greater when started in early pregnancy. Resistance For some people, aspirin does not have as strong an effect on platelets as for others, an effect known as aspirin-resistance or insensitivity. One study has suggested women are more likely to be resistant than men, and a different, aggregate study of 2,930 people found 28% were resistant. A study in 100 Italian people found, of the apparent 31% aspirin-resistant subjects, only 5% were truly resistant, and the others were noncompliant. Another study of 400 healthy volunteers found no subjects who were truly resistant, but some had "pseudoresistance, reflecting delayed and reduced drug absorption". Meta-analysis and systematic reviews have concluded that laboratory confirmed aspirin resistance confers increased rates of poorer outcomes in cardiovascular and neurovascular diseases. Although the majority of research conducted has surrounded cardiovascular and neurovascular, there is emerging research into the risk of aspirin resistance after orthopaedic surgery where aspirin is used for venous thromboembolism prophylaxis. Aspirin resistance in orthopaedic surgery, specifically after total hip and knee arthroplasties, is of interest as risk factors for aspirin resistance are also risk factors for venous thromboembolisms and osteoarthritis; the sequalae of requiring a total hip or knee arthroplasty. Some of these risk factors include obesity, advancing age, diabetes mellitus, dyslipidaemia and inflammatory diseases. However, unlike cardiovascular and neurovascular diseases, there is no confirmation on the incidence rates of aspirin resistance in orthopaedic surgery, nor is there confirmation on the clinical implications. Veterinary medicine Aspirin is sometimes used in veterinary medicine as an anticoagulant or to relieve pain associated with musculoskeletal inflammation or osteoarthritis. Aspirin should only be given to animals under the direct supervision of a veterinarian, as adverse effects—including gastrointestinal issues—are common. An aspirin overdose in any species may result in salicylate poisoning, characterized by hemorrhaging, seizures, coma, and even death. Dogs are better able to tolerate aspirin than cats are. Cats metabolize aspirin slowly because they lack the glucuronide conjugates that aid in the excretion of aspirin, making it potentially toxic if dosing is not spaced out properly. No clinical signs of toxicosis occurred when cats were given 25mg/kg of aspirin every 48 hours for 4 weeks, but the recommended dose for relief of pain and fever and for treating blood clotting diseases in cats is 10mg/kg every 48 hours to allow for metabolization. 
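Weight-based regimens such as the feline dosing above reduce to simple arithmetic. In the sketch below the 4 kg body weight is a hypothetical example; the per-kilogram doses and the 48-hour interval are the figures given in the passage above, the long interval reflecting the slow feline metabolism described there.

    # Weight-based dose calculation for the feline regimen described above (illustrative only).
    def dose_mg(body_weight_kg, mg_per_kg):
        return body_weight_kg * mg_per_kg

    cat_kg = 4.0                        # hypothetical cat
    print(dose_mg(cat_kg, 10.0))        # 40 mg every 48 hours (recommended feline dose cited above)
    print(dose_mg(cat_kg, 25.0))        # 100 mg every 48 hours (dose tolerated for 4 weeks in the cited study)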
Dosages Adult aspirin tablets are produced in standardised sizes, which vary slightly from country to country, for example 300 mg in Britain and 325 mg (or 5 grains) in the United States. Smaller doses are based on these standards, e.g., 75 mg and 81 mg tablets. The tablets are commonly called "baby aspirin" or "baby-strength", because they were originally (but are no longer) intended to be administered to infants and children. The slight difference in dosage between the 75 mg and the 81 mg tablets has no medical significance. The dose required for benefit appears to depend on a person's weight. For those weighing less than , low dose is effective for preventing cardiovascular disease; for patients above this weight, higher doses are required. In general, for adults, doses are taken four times a day for fever or arthritis, with doses near the maximal daily dose used historically for the treatment of rheumatic fever. For the prevention of myocardial infarction (MI) in someone with documented or suspected coronary artery disease, much lower doses are taken once daily. March 2009 recommendations from the USPSTF on the use of aspirin for the primary prevention of coronary heart disease encourage men aged 45–79 and women aged 55–79 to use aspirin when the potential benefit of a reduction in MI for men or stroke for women outweighs the potential harm of an increase in gastrointestinal hemorrhage. The WHI study said regular low-dose (75 or 81 mg) aspirin female users had a 25% lower risk of death from cardiovascular disease and a 14% lower risk of death from any cause. Low-dose aspirin use was also associated with a trend toward lower risk of cardiovascular events, and lower aspirin doses (75 or 81 mg/day) may optimize efficacy and safety for people requiring aspirin for long-term prevention. In children with Kawasaki disease, aspirin is taken at dosages based on body weight, initially four times a day for up to two weeks and then at a lower dose once daily for a further six to eight weeks. Adverse effects In October 2020, the U.S. Food and Drug Administration (FDA) required the drug label to be updated for all nonsteroidal anti-inflammatory medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. They recommend avoiding NSAIDs in pregnant women at 20 weeks or later in pregnancy. One exception to the recommendation is the use of low-dose 81 mg aspirin at any point in pregnancy under the direction of a health care professional. 
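The "5 grains" marked on the US tablet size mentioned at the start of this section is an apothecaries' unit; using the standard conversion of roughly 64.8 mg per grain (a general conversion factor, not a figure from this article), the arithmetic below shows how the 325 mg and 81 mg sizes relate.

    # Apothecaries' grain-to-milligram conversion (1 grain is approximately 64.8 mg).
    GRAIN_MG = 64.79891
    print(round(5 * GRAIN_MG))        # 324 -> rounded up to the 325 mg US standard tablet
    print(round(5 * GRAIN_MG / 4))    # 81  -> a quarter of that tablet, the common "low-dose" size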
People with kidney disease, hyperuricemia, or gout should not take aspirin because it inhibits the kidneys' ability to excrete uric acid, thus may exacerbate these conditions. Aspirin should not be given to children or adolescents to control cold or influenza symptoms, as this has been linked with Reye's syndrome. Gastrointestinal Aspirin use has been shown to increase the risk of gastrointestinal bleeding. Although some enteric-coated formulations of aspirin are advertised as being "gentle to the stomach", in one study, enteric coating did not seem to reduce this risk. Combining aspirin with other NSAIDs has also been shown to further increase this risk. Using aspirin in combination with clopidogrel or warfarin also increases the risk of upper gastrointestinal bleeding. Blockade of COX-1 by aspirin apparently results in the upregulation of COX-2 as part of a gastric defense. Several trials suggest that the simultaneous use of a COX-2 inhibitor with aspirin may increase the risk of gastrointestinal injury. However, currently available evidence has been unable to prove that this effect is consistently repeatable in everyday clinical practice. More dedicated research is required to provide greater clarity on the subject. Therefore, caution should be exercised if combining aspirin with any "natural" supplements with COX-2-inhibiting properties, such as garlic extracts, curcumin, bilberry, pine bark, ginkgo, fish oil, resveratrol, genistein, quercetin, resorcinol, and others. In addition to enteric coating, "buffering" is the other main method companies have used to try to mitigate the problem of gastrointestinal bleeding. Buffering agents are intended to work by preventing the aspirin from concentrating in the walls of the stomach, although the benefits of buffered aspirin are disputed. Almost any buffering agent used in antacids can be used; Bufferin, for example, uses magnesium oxide. Other preparations use calcium carbonate. Gas-forming agents in effervescent tablet and powder formulations can also double as a buffering agent, one example being sodium bicarbonate found in Alka-Seltzer. Taking it with vitamin C has been investigated as a method of protecting the stomach lining. Taking equal doses of vitamin C and aspirin may decrease the amount of stomach damage that occurs compared to taking aspirin alone. Retinal vein occlusion It is a widespread habit among eye specialists (ophthalmologists) to prescribe aspirin as an add-on medication for patients with retinal vein occlusion (RVO), such as central retinal vein occlusion (CRVO) and branch retinal vein occlusion (BRVO). The reason of this widespread use is the evidence of its proven effectiveness in major systemic venous thrombotic disorders, and it has been assumed that may be similarly beneficial in various types of retinal vein occlusion. However, a large-scale investigation based on data of nearly 700 patients showed "that aspirin or other antiplatelet aggregating agents or anticoagulants adversely influence the visual outcome in patients with CRVO and hemi-CRVO, without any evidence of protective or beneficial effect". Several expert groups, including the Royal College of Ophthalmologists, recommended against the use of antithrombotic drugs (incl. aspirin) for patients with RVO. Central effects Large doses of salicylate, a metabolite of aspirin, cause temporary tinnitus (ringing in the ears) based on experiments in rats, via the action on arachidonic acid and NMDA receptors cascade. 
Reye's syndrome Reye's syndrome, a rare but severe illness characterized by acute encephalopathy and fatty liver, can occur when children or adolescents are given aspirin for a fever or other illness or infection. From 1981 to 1997, 1207 cases of Reye's syndrome in people younger than 18 were reported to the U.S. Centers for Disease Control and Prevention. Of these, 93% reported being ill in the three weeks preceding the onset of Reye's syndrome, most commonly with a respiratory infection, chickenpox, or diarrhea. Salicylates were detectable in 81.9% of children for whom test results were reported. After the association between Reye's syndrome and aspirin was reported, and safety measures to prevent it (including a Surgeon General's warning, and changes to the labeling of aspirin-containing drugs) were implemented, aspirin taken by children declined considerably in the United States, as did the number of reported cases of Reye's syndrome; a similar decline was found in the United Kingdom after warnings against pediatric aspirin use were issued. The U.S. Food and Drug Administration now recommends aspirin (or aspirin-containing products) should not be given to anyone under the age of 12 who has a fever, and the UK National Health Service recommends children who are under 16 years of age should not take aspirin, unless it is on the advice of a doctor. Skin For a small number of people, taking aspirin can result in symptoms including hives, swelling, and headache. Aspirin can exacerbate symptoms among those with chronic hives, or create acute symptoms of hives. These responses can be due to allergic reactions to aspirin, or more often due to its effect of inhibiting the COX-1 enzyme. Skin reactions may also tie to systemic contraindications, seen with NSAID-precipitated bronchospasm, or those with atopy. Aspirin and other NSAIDs, such as ibuprofen, may delay the healing of skin wounds. Earlier findings from two small, low-quality trials suggested a benefit with aspirin (alongside compression therapy) on venous leg ulcer healing time and leg ulcer size, however larger, more recent studies of higher quality have been unable to corroborate these outcomes. As such, further research is required to clarify the role of aspirin in this context. Other adverse effects Aspirin can induce swelling of skin tissues in some people. In one study, angioedema appeared one to six hours after ingesting aspirin in some of the people. However, when the aspirin was taken alone, it did not cause angioedema in these people; the aspirin had been taken in combination with another NSAID-induced drug when angioedema appeared. Aspirin causes an increased risk of cerebral microbleeds having the appearance on MRI scans of 5 to 10mm or smaller, hypointense (dark holes) patches. Such cerebral microbleeds are important, since they often occur prior to ischemic stroke or intracerebral hemorrhage, Binswanger disease, and Alzheimer's disease. A study of a group with a mean dosage of aspirin of 270mg per day estimated an average absolute risk increase in intracerebral hemorrhage (ICH) of 12 events per 10,000 persons. In comparison, the estimated absolute risk reduction in myocardial infarction was 137 events per 10,000 persons, and a reduction of 39 events per 10,000 persons in ischemic stroke. In cases where ICH already has occurred, aspirin use results in higher mortality, with a dose of about 250mg per day resulting in a relative risk of death within three months after the ICH around 2.5 (95% confidence interval 1.3 to 4.6). 
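Taken together, the per-10,000 estimates in the preceding paragraph give a rough net tally, shown below using only the numbers quoted above; the arithmetic ignores the differing severity of the event types (an intracerebral hemorrhage is generally far more serious than many of the events prevented), so it is an illustration rather than a clinical conclusion.

    # Rough net-event tally per 10,000 people from the estimates quoted above
    # (no weighting for how severe each type of event is).
    mi_prevented = 137       # fewer myocardial infarctions per 10,000
    strokes_prevented = 39   # fewer ischemic strokes per 10,000
    ich_caused = 12          # additional intracerebral hemorrhages per 10,000

    print(mi_prevented + strokes_prevented - ich_caused)   # 164 fewer events per 10,000, before any severity weighting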
Aspirin and other NSAIDs can cause abnormally high blood levels of potassium by inducing a hyporeninemic hypoaldosteronic state via inhibition of prostaglandin synthesis; however, these agents do not typically cause hyperkalemia by themselves in the setting of normal renal function and a euvolemic state. Aspirin can cause prolonged bleeding after operations for up to 10 days. In one study, 30 of 6499 people having elective surgery required reoperations to control bleeding. Twenty had diffuse bleeding and 10 had bleeding from a discrete site. Diffuse, but not discrete, bleeding was associated with the preoperative use of aspirin, alone or in combination with other NSAIDs, in 19 of the 20 people with diffuse bleeding. On 9 July 2015, the FDA toughened warnings of increased heart attack and stroke risk associated with nonsteroidal anti-inflammatory drugs (NSAIDs). Aspirin is an NSAID but is not affected by the new warnings. Overdose Aspirin overdose can be acute or chronic. In acute poisoning, a single large dose is taken; in chronic poisoning, higher than normal doses are taken over a period of time. Acute overdose has a mortality rate of 2%. Chronic overdose is more commonly lethal, with a mortality rate of 25%; chronic overdose may be especially severe in children. Toxicity is managed with a number of potential treatments, including activated charcoal, intravenous dextrose and normal saline, sodium bicarbonate, and dialysis. The diagnosis of poisoning usually involves measurement of plasma salicylate, the active metabolite of aspirin, by automated spectrophotometric methods. Plasma salicylate levels generally range from 30 to 100 mg/l after usual therapeutic doses, 50–300 mg/l in people taking high doses, and 700–1400 mg/l following acute overdose. Salicylate is also produced as a result of exposure to bismuth subsalicylate, methyl salicylate, and sodium salicylate. Interactions Aspirin is known to interact with other drugs. For example, acetazolamide and ammonium chloride are known to enhance the intoxicating effect of salicylates, and alcohol also increases the gastrointestinal bleeding associated with these types of drugs. Aspirin is known to displace a number of drugs from protein-binding sites in the blood, including the antidiabetic drugs tolbutamide and chlorpropamide, warfarin, methotrexate, phenytoin, probenecid, valproic acid (as well as interfering with beta oxidation, an important part of valproate metabolism), and other NSAIDs. Corticosteroids may also reduce the concentration of aspirin. Other NSAIDs, such as ibuprofen and naproxen, may reduce the antiplatelet effect of aspirin, although limited evidence suggests this may not result in a reduced cardioprotective effect. The pharmacological activity of spironolactone may be reduced by taking aspirin, and aspirin is known to compete with penicillin G for renal tubular secretion. Aspirin may also inhibit the absorption of vitamin C. Research Psychiatry In psychiatric research, aspirin has been investigated as an add-on treatment for different disorders in the context of drug repurposing strategies, considering the role of inflammation in the pathogenesis of severe mental illnesses. Bipolar disorder Aspirin has been repurposed as an add-on treatment for depressive episodes in subjects with bipolar disorder. However, meta-analytic evidence is based on very few studies and does not suggest any efficacy of aspirin in the treatment of bipolar depression. 
Thus, notwithstanding the biological rationale, the clinical prospects of aspirin and anti-inflammatory agents in the treatment of bipolar depression remain uncertain. Infectious diseases Several studies have investigated the anti-infective properties of aspirin for bacterial, viral, and parasitic infections. Aspirin has been demonstrated to limit platelet activation induced by Staphylococcus aureus and Enterococcus faecalis and to reduce streptococcal adhesion to heart valves. In patients with tuberculous meningitis, the addition of aspirin reduced the risk of new cerebral infarction [RR = 0.52 (0.29–0.92)]. A role for aspirin against bacterial and fungal biofilms is also supported by growing evidence. Cancer prevention According to a 2020 meta-analysis, aspirin might weakly reduce the risk of breast cancer.
Aspirin
Acupuncture is a form of alternative medicine and a component of traditional Chinese medicine (TCM) in which thin needles are inserted into the body. Acupuncture is a pseudoscience; the theories and practices of TCM are not based on scientific knowledge, and it has been characterized as quackery. There is a range of acupuncture variants which originated in different philosophies, and techniques vary depending on the country in which it is performed, but can be divided into two main foundational philosophical applications and approaches, the first being the modern standardized form called eight principles TCM and the second an older system that is based on the ancient Taoist Wuxing or better known as the five elements or phases in the West. Acupuncture is most often used to attempt pain relief, though acupuncturists say that it can also be used for a wide range of other conditions. Acupuncture is generally used only in combination with other forms of treatment. The global acupuncture market was worth US$24.55 billion in 2017. The market was led by Europe with a 32.7% share, followed by Asia-Pacific with a 29.4% share and the Americas with a 25.3% share. It is estimated that the industry will reach a market size of $55bn by 2023. The conclusions of trials and systematic reviews of acupuncture are inconsistent, which suggests that it is not effective. An overview of Cochrane reviews found that acupuncture is not effective for a wide range of conditions. A systematic review conducted by medical scientists at the universities of Exeter and Plymouth found little evidence of acupuncture's effectiveness in treating pain. Overall, the evidence suggests that short-term treatment with acupuncture does not produce long-term benefits. Some research results suggest that acupuncture can alleviate some forms of pain, though the majority of research suggests that acupuncture's apparent effects are not caused by the treatment itself. A systematic review concluded that the analgesic effect of acupuncture seemed to lack clinical relevance and could not be clearly distinguished from bias. One meta-analysis found that acupuncture for chronic low back pain was cost-effective as an adjunct to standard care, while a separate systematic review found insufficient evidence for the cost-effectiveness of acupuncture in the treatment of chronic low back pain. Acupuncture is generally safe when done by appropriately trained practitioners using clean needle technique and single-use needles. When properly delivered, it has a low rate of mostly minor adverse effects. Accidents and infections do occur, though, and are associated with neglect on the part of the practitioner, particularly in the application of sterile techniques. A review conducted in 2013 stated that reports of infection transmission increased significantly in the preceding decade. The most frequently reported adverse events were pneumothorax and infections. Since serious adverse events continue to be reported, it is recommended that acupuncturists be trained sufficiently to reduce the risk. Scientific investigation has not found any histological or physiological evidence for traditional Chinese concepts such as qi, meridians, and acupuncture points, and many modern practitioners no longer support the existence of life force energy (qi) or meridians, which was a major part of early belief systems. 
Acupuncture is believed to have originated around 100 BC in China, around the time The Inner Classic of Huang Di (Huangdi Neijing) was published, though some experts suggest it could have been practiced earlier. Over time, conflicting claims and belief systems emerged about the effect of lunar, celestial and earthly cycles, yin and yang energies, and a body's "rhythm" on the effectiveness of treatment. Acupuncture fluctuated in popularity in China due to changes in the country's political leadership and the preferential use of rationalism or Western medicine. Acupuncture spread first to Korea in the 6th century AD, then to Japan through medical missionaries, and then to Europe, beginning with France. In the 20th century, as it spread to the United States and Western countries, spiritual elements of acupuncture that conflicted with Western beliefs were sometimes abandoned in favor of simply tapping needles into acupuncture points. Clinical practice Acupuncture is a form of alternative medicine. It is used most commonly for pain relief, though it is also used to treat a wide range of conditions. Acupuncture is generally only used in combination with other forms of treatment. For example, the American Society of Anesthesiologists states it may be considered in the treatment for nonspecific, noninflammatory low back pain only in conjunction with conventional therapy. Acupuncture is the insertion of thin needles into the skin. According to the Mayo Foundation for Medical Education and Research (Mayo Clinic), a typical session entails lying still while approximately five to twenty needles are inserted; for the majority of cases, the needles will be left in place for ten to twenty minutes. It can be associated with the application of heat, pressure, or laser light. Classically, acupuncture is individualized and based on philosophy and intuition, and not on scientific research. There is also a non-invasive therapy developed in early 20th century Japan using an elaborate set of instruments other than needles for the treatment of children (shōnishin or shōnihari). Clinical practice varies depending on the country. A comparison of the average number of patients treated per hour found significant differences between China (10) and the United States (1.2). Chinese herbs are often used. There is a diverse range of acupuncture approaches, involving different philosophies. Although various different techniques of acupuncture practice have emerged, the method used in traditional Chinese medicine (TCM) seems to be the most widely adopted in the US. Traditional acupuncture involves needle insertion, moxibustion, and cupping therapy, and may be accompanied by other procedures such as feeling the pulse and other parts of the body and examining the tongue. Traditional acupuncture involves the belief that a "life force" (qi) circulates within the body in lines called meridians. The main methods practiced in the UK are TCM and Western medical acupuncture. The term Western medical acupuncture is used to indicate an adaptation of TCM-based acupuncture which focuses less on TCM. The Western medical acupuncture approach involves using acupuncture after a medical diagnosis. Limited research has compared the contrasting acupuncture systems used in various countries for determining different acupuncture points and thus there is no defined standard for acupuncture points. 
In traditional acupuncture, the acupuncturist decides which points to treat by observing and questioning the patient to make a diagnosis according to the tradition used. In TCM, the four diagnostic methods are: inspection, auscultation and olfaction, inquiring, and palpation. Inspection focuses on the face and particularly on the tongue, including analysis of the tongue size, shape, tension, color and coating, and the absence or presence of teeth marks around the edge. Auscultation and olfaction involve listening for particular sounds such as wheezing, and observing body odor. Inquiring involves focusing on the "seven inquiries": chills and fever; perspiration; appetite, thirst and taste; defecation and urination; pain; sleep; and menses and leukorrhea. Palpation focuses on feeling the body for tender "A-shi" points and on feeling the pulse. Needles The most common mechanism of stimulating acupuncture points employs penetration of the skin by thin metal needles, which are manipulated manually or further stimulated electrically (electroacupuncture). Acupuncture needles are typically made of stainless steel, making them flexible and preventing them from rusting or breaking. Needles are usually disposed of after each use to prevent contamination. Reusable needles, when used, should be sterilized between applications. In many areas, including the State of California, USA, only sterile, single-use acupuncture needles are allowed. Needles vary in length, with shorter needles used near the face and eyes and longer needles in areas with thicker tissues; needle diameters also vary, with thicker needles used on more robust patients. Thinner needles may be flexible and require tubes for insertion. The tip of the needle should not be made too sharp, to prevent breakage, although blunt needles cause more pain. Apart from the usual filiform needle, other needle types include three-edged needles and the Nine Ancient Needles. Japanese acupuncturists use extremely thin needles that are used superficially, sometimes without penetrating the skin, and surrounded by a guide tube (a 17th-century invention adopted in China and the West). Korean acupuncture uses copper needles and has a greater focus on the hand. Needling technique Insertion The skin is sterilized and needles are inserted, frequently with a plastic guide tube. Needles may be manipulated in various ways, including spinning, flicking, or moving up and down relative to the skin. Since most pain is felt in the superficial layers of the skin, a quick insertion of the needle is recommended. Often the needles are stimulated by hand in order to cause a dull, localized, aching sensation that is called de qi, as well as "needle grasp," a tugging feeling felt by the acupuncturist and generated by a mechanical interaction between the needle and skin. Acupuncture can be painful. The skill level of the acupuncturist may influence how painful the needle insertion is, and a sufficiently skilled practitioner may be able to insert the needles without causing any pain. De-qi sensation De-qi ("arrival of qi") refers to a claimed sensation of numbness, distension, or electrical tingling at the needling site. If these sensations are not observed, then inaccurate location of the acupoint, improper depth of needle insertion, or inadequate manual manipulation is blamed. 
If de-qi is not immediately observed upon needle insertion, various manual manipulation techniques are often applied to promote it (such as "plucking", "shaking" or "trembling"). Once de-qi is observed, techniques might be used which attempt to "influence" the de-qi; for example, by certain manipulation the de-qi can allegedly be conducted from the needling site towards more distant sites of the body. Other techniques aim at "tonifying" () or "sedating" () qi. The former techniques are used in deficiency patterns, the latter in excess patterns. De qi is more important in Chinese acupuncture, while Western and Japanese patients may not consider it a necessary part of the treatment. Related practices Acupressure, a non-invasive form of bodywork, uses physical pressure applied to acupressure points by the hand or elbow, or with various devices. Acupuncture is often accompanied by moxibustion, the burning of cone-shaped preparations of moxa (made from dried mugwort) on or near the skin, often but not always near or on an acupuncture point. Traditionally, acupuncture was used to treat acute conditions while moxibustion was used for chronic diseases. Moxibustion could be direct (the cone was placed directly on the skin and allowed to burn the skin, producing a blister and eventually a scar), or indirect (either a cone of moxa was placed on a slice of garlic, ginger or other vegetable, or a cylinder of moxa was held above the skin, close enough to either warm or burn it). Cupping therapy is an ancient Chinese form of alternative medicine in which a local suction is created on the skin; practitioners believe this mobilizes blood flow in order to promote healing. Tui na is a TCM method of attempting to stimulate the flow of qi by various bare-handed techniques that do not involve needles. Electroacupuncture is a form of acupuncture in which acupuncture needles are attached to a device that generates continuous electric pulses (this has been described as "essentially transdermal electrical nerve stimulation [TENS] masquerading as acupuncture"). Fire needle acupuncture also known as fire needling is a technique which involves quickly inserting a flame-heated needle into areas on the body. Sonopuncture is a stimulation of the body similar to acupuncture using sound instead of needles. This may be done using purpose-built transducers to direct a narrow ultrasound beam to a depth of 6–8 centimetres at acupuncture meridian points on the body. Alternatively, tuning forks or other sound emitting devices are used. Acupuncture point injection is the injection of various substances (such as drugs, vitamins or herbal extracts) into acupoints. This technique combines traditional acupuncture with injection of what is often an effective dose of an approved pharmaceutical drug, and proponents claim that it may be more effective than either treatment alone, especially for the treatment of some kinds of chronic pain. However, a 2016 review found that most published trials of the technique were of poor value due to methodology issues and larger trials would be needed to draw useful conclusions. Auriculotherapy, commonly known as ear acupuncture, auricular acupuncture, or auriculoacupuncture, is considered to date back to ancient China. It involves inserting needles to stimulate points on the outer ear. The modern approach was developed in France during the early 1950s. There is no scientific evidence that it can cure disease; the evidence of effectiveness is negligible. 
Scalp acupuncture, developed in Japan, is based on reflexological considerations regarding the scalp. Hand acupuncture, developed in Korea, centers around assumed reflex zones of the hand. Medical acupuncture attempts to integrate reflexological concepts, the trigger point model, and anatomical insights (such as dermatome distribution) into acupuncture practice, and emphasizes a more formulaic approach to acupuncture point location. Cosmetic acupuncture is the use of acupuncture in an attempt to reduce wrinkles on the face. Bee venom acupuncture is a treatment approach of injecting purified, diluted bee venom into acupoints. Veterinary acupuncture is the use of acupuncture on domesticated animals. Efficacy Acupuncture has been researched extensively; as of 2013, there were almost 1,500 randomized controlled trials on PubMed with "acupuncture" in the title. The results of reviews of acupuncture's efficacy, however, have been inconclusive. In January 2020, David Gorski analyzed a 2020 review of systematic reviews ("Acupuncture for the Relief of Chronic Pain: A Synthesis of Systematic Reviews") concerning the use of acupuncture to treat chronic pain. Writing in Science-Based Medicine, Gorski said that its findings highlight the conclusion that acupuncture is "a theatrical placebo whose real history has been retconned beyond recognition." He also said this review "reveals the many weaknesses in the design of acupuncture clinical trials". Sham acupuncture and research It is difficult but not impossible to design rigorous research trials for acupuncture. Due to acupuncture's invasive nature, one of the major challenges in efficacy research is in the design of an appropriate placebo control group. For efficacy studies to determine whether acupuncture has specific effects, "sham" forms of acupuncture where the patient, practitioner, and analyst are blinded seem the most acceptable approach. Sham acupuncture uses non-penetrating needles or needling at non-acupuncture points, e.g. inserting needles on meridians not related to the specific condition being studied, or in places not associated with meridians. The under-performance of acupuncture in such trials may indicate that therapeutic effects are due entirely to non-specific effects, or that the sham treatments are not inert, or that systematic protocols yield less than optimal treatment. A 2014 review in Nature Reviews Cancer found that "contrary to the claimed mechanism of redirecting the flow of qi through meridians, researchers usually find that it generally does not matter where the needles are inserted, how often (that is, no dose-response effect is observed), or even if needles are actually inserted. In other words, 'sham' or 'placebo' acupuncture generally produces the same effects as 'real' acupuncture and, in some cases, does better." A 2013 meta-analysis found little evidence that the effectiveness of acupuncture on pain (compared to sham) was modified by the location of the needles, the number of needles used, the experience or technique of the practitioner, or by the circumstances of the sessions. The same analysis also suggested that the number of needles and sessions is important, as greater numbers improved the outcomes of acupuncture compared to non-acupuncture controls. There has been little systematic investigation of which components of an acupuncture session may be important for any therapeutic effect, including needle placement and depth, type and intensity of stimulation, and number of needles used. 
The research seems to suggest that needles do not need to stimulate the traditionally specified acupuncture points or penetrate the skin to attain an anticipated effect (e.g. psychosocial factors). A response to "sham" acupuncture in osteoarthritis may be used in the elderly, but placebos have usually been regarded as deception and thus unethical. However, some physicians and ethicists have suggested circumstances for applicable uses for placebos such as it might present a theoretical advantage of an inexpensive treatment without adverse reactions or interactions with drugs or other medications. As the evidence for most types of alternative medicine such as acupuncture is far from strong, the use of alternative medicine in regular healthcare can present an ethical question. Using the principles of evidence-based medicine to research acupuncture is controversial, and has produced different results. Some research suggests acupuncture can alleviate pain but the majority of research suggests that acupuncture's effects are mainly due to placebo. Evidence suggests that any benefits of acupuncture are short-lasting. There is insufficient evidence to support use of acupuncture compared to mainstream medical treatments. Acupuncture is not better than mainstream treatment in the long term. The use of acupuncture has been criticized owing to there being little scientific evidence for explicit effects, or the mechanisms for its supposed effectiveness, for any condition that is discernible from placebo. Acupuncture has been called 'theatrical placebo', and David Gorski argues that when acupuncture proponents advocate 'harnessing of placebo effects' or work on developing 'meaningful placebos', they essentially concede it is little more than that. Publication bias Publication bias is cited as a concern in the reviews of randomized controlled trials of acupuncture. A 1998 review of studies on acupuncture found that trials originating in China, Japan, Hong Kong, and Taiwan were uniformly favourable to acupuncture, as were ten out of eleven studies conducted in Russia. A 2011 assessment of the quality of randomized controlled trials on traditional Chinese medicine, including acupuncture, concluded that the methodological quality of most such trials (including randomization, experimental control, and blinding) was generally poor, particularly for trials published in Chinese journals (though the quality of acupuncture trials was better than the trials testing traditional Chinese medicine remedies). The study also found that trials published in non-Chinese journals tended to be of higher quality. Chinese authors use more Chinese studies, which have been demonstrated to be uniformly positive. A 2012 review of 88 systematic reviews of acupuncture published in Chinese journals found that less than half of these reviews reported testing for publication bias, and that the majority of these reviews were published in journals with impact factors of zero. A 2015 study comparing pre-registered records of acupuncture trials with their published results found that it was uncommon for such trials to be registered before the trial began. This study also found that selective reporting of results and changing outcome measures to obtain statistically significant results was common in this literature. Scientist and journalist Steven Salzberg identifies acupuncture and Chinese medicine generally as a focus for "fake medical journals" such as the Journal of Acupuncture and Meridian Studies and Acupuncture in Medicine. 
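To make the sham-control reasoning above concrete, here is a toy simulation, not drawn from any real trial data: when both arms share the same placebo response and natural fluctuation and the specific treatment effect is set to zero, each arm shows a sizeable before-after "improvement" while the between-arm difference is roughly zero, which is why blinded sham comparisons rather than uncontrolled before-after changes are needed.

```python
# Toy simulation (not based on any actual trial): why blinded, sham-controlled
# comparisons matter. Both arms get the same placebo response plus random
# fluctuation; the specific effect of treatment is set to zero.

import random

random.seed(0)

def mean_pain_improvement(n: int, placebo_effect: float = 2.0,
                          specific_effect: float = 0.0) -> float:
    """Mean improvement on a 0-10 pain score across n simulated patients."""
    total = 0.0
    for _ in range(n):
        noise = random.gauss(0, 1)  # natural fluctuation / regression to the mean
        total += placebo_effect + specific_effect + noise
    return total / n

real_arm = mean_pain_improvement(500)   # "real" acupuncture, no specific effect assumed
sham_arm = mean_pain_improvement(500)   # sham acupuncture

print(f"Mean improvement, real arm: {real_arm:.2f}")
print(f"Mean improvement, sham arm: {sham_arm:.2f}")
print(f"Between-arm difference:     {real_arm - sham_arm:.2f}")
```

Both simulated arms improve by about two points, so an uncontrolled before-after comparison would look favorable even though the assumed specific effect is zero; only the between-arm comparison exposes this.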
Specific conditions Pain The conclusions of many trials and numerous systematic reviews of acupuncture are largely inconsistent with each other. A 2011 systematic review of systematic reviews found that for reducing pain, real acupuncture was no better than sham acupuncture, and concluded that numerous reviews have shown little convincing evidence that acupuncture is an effective treatment for reducing pain. The same review found that neck pain was one of only four types of pain for which a positive effect was suggested, but cautioned that the primary studies used carried a considerable risk of bias. A 2009 overview of Cochrane reviews found acupuncture is not effective for a wide range of conditions. A 2014 systematic review suggests that the nocebo effect of acupuncture is clinically relevant and that the rate of adverse events may be a gauge of the nocebo effect. A 2012 meta-analysis conducted by the Acupuncture Trialists' Collaboration found "relatively modest" efficacy of acupuncture (in comparison to sham) for the treatment of four different types of chronic pain (back and neck pain, knee osteoarthritis, chronic headache, and shoulder pain) and on that basis concluded that it "is more than a placebo" and a reasonable referral option. Commenting on this meta-analysis, both Edzard Ernst and David Colquhoun said the results were of negligible clinical significance. Ernst later stated that "I fear that, once we manage to eliminate this bias [that operators are not blind] … we might find that the effects of acupuncture exclusively are a placebo response." In 2017, the same research group updated their previous meta-analysis and again found acupuncture to be superior to sham acupuncture for non-specific musculoskeletal pain, osteoarthritis, chronic headache, and shoulder pain. They also found that the effects of acupuncture decreased by about 15% after one year. A 2010 systematic review suggested that acupuncture is more than a placebo for commonly occurring chronic pain conditions, but the authors acknowledged that it is still unknown if the overall benefit is clinically meaningful or cost-effective. A 2010 review found real acupuncture and sham acupuncture produce similar improvements, which can only be accepted as evidence against the efficacy of acupuncture. The same review found limited evidence that real acupuncture and sham acupuncture appear to produce biological differences despite similar effects. A 2009 systematic review and meta-analysis found that acupuncture had a small analgesic effect, which appeared to lack any clinical importance and could not be discerned from bias. The same review found that it remains unclear whether acupuncture reduces pain independent of a psychological impact of the needling ritual. A 2017 systematic review and meta-analysis found that ear acupuncture may be effective at reducing pain within 48 hours of its use, but the mean difference between the acupuncture and control groups was small. Lower back pain A 2013 systematic review found that acupuncture may be effective for nonspecific lower back pain, but the authors noted there were limitations in the studies examined, such as heterogeneity in study characteristics and low methodological quality in many studies. A 2012 systematic review found some supporting evidence that acupuncture was more effective than no treatment for chronic non-specific low back pain; the evidence was conflicting comparing the effectiveness over other treatment approaches. 
A 2011 systematic review of systematic reviews found that "for chronic low back pain, individualized acupuncture is not better in reducing symptoms than formula acupuncture or sham acupuncture with a toothpick that does not penetrate the skin." A 2010 review found that sham acupuncture was as effective as real acupuncture for chronic low back pain. The specific therapeutic effects of acupuncture were small, whereas its clinically relevant benefits were mostly due to contextual and psychosocial circumstances. Brain imaging studies have shown that traditional acupuncture and sham acupuncture differ in their effect on limbic structures, while at the same time showing equivalent analgesic effects. A 2005 Cochrane review found insufficient evidence to recommend for or against either acupuncture or dry needling for acute low back pain. The same review found low-quality evidence for pain relief and improvement compared to no treatment or sham therapy for chronic low back pain only in the short term immediately after treatment. The same review also found that acupuncture is not more effective than conventional therapy and other alternative medicine treatments. A 2017 systematic review and meta-analysis concluded that, for neck pain, acupuncture was comparable in effectiveness to conventional treatment, while electroacupuncture was even more effective in reducing pain than was conventional acupuncture. The same review noted that "It is difficult to draw conclusion [sic] because the included studies have a high risk of bias and imprecision." A 2015 overview of systematic reviews of variable quality showed that acupuncture can provide short-term improvements to people with chronic low back pain. The overview said this was true when acupuncture was used either in isolation or in addition to conventional therapy. A 2017 systematic review for an American College of Physicians clinical practice guideline found low to moderate evidence that acupuncture was effective for chronic low back pain, and limited evidence that it was effective for acute low back pain. The same review found that the strength of the evidence for both conditions was low to moderate. Another 2017 clinical practice guideline, this one produced by the Danish Health Authority, recommended against acupuncture for both recent-onset low back pain and lumbar radiculopathy. Headaches and migraines Two separate 2016 Cochrane reviews found that acupuncture could be useful in the prevention of tension-type headaches and episodic migraines. The 2016 Cochrane review evaluating acupuncture for episodic migraine prevention concluded that true acupuncture had a small effect beyond sham acupuncture and found moderate-quality evidence to suggest that acupuncture is at least similarly effective to prophylactic medications for this purpose. A 2012 review found that acupuncture has demonstrated benefit for the treatment of headaches, but that safety needed to be more fully documented in order to make any strong recommendations in support of its use. Arthritis pain A 2014 review concluded that "current evidence supports the use of acupuncture as an alternative to traditional analgesics in osteoarthritis patients." A later meta-analysis showed that acupuncture may help osteoarthritis pain, but it was noted that the effects were insignificant in comparison to sham needles. A 2012 review found "the potential beneficial action of acupuncture on osteoarthritis pain does not appear to be clinically relevant." 
A 2010 Cochrane review found that acupuncture shows statistically significant benefit over sham acupuncture in the treatment of peripheral joint osteoarthritis; however, these benefits were found to be so small that their clinical significance was doubtful, and "probably due at least partially to placebo effects from incomplete blinding". A 2013 Cochrane review found low to moderate evidence that acupuncture improves pain and stiffness in treating people with fibromyalgia compared with no treatment and standard care. A 2012 review found "there is insufficient evidence to recommend acupuncture for the treatment of fibromyalgia." A 2010 systematic review found a small pain relief effect that was not apparently discernible from bias; acupuncture is not a recommendable treatment for the management of fibromyalgia on the basis of this review. A 2012 review found that the effectiveness of acupuncture to treat rheumatoid arthritis is "sparse and inconclusive." A 2005 Cochrane review concluded that acupuncture use to treat rheumatoid arthritis "has no effect on ESR, CRP, pain, patient's global assessment, number of swollen joints, number of tender joints, general health, disease activity and reduction of analgesics." A 2010 overview of systematic reviews found insufficient evidence to recommend acupuncture in the treatment of most rheumatic conditions, with the exceptions of osteoarthritis, low back pain, and lateral elbow pain. A 2018 systematic review found some evidence that acupuncture could be effective for the treatment of rheumatoid arthritis, but that the evidence was limited because of heterogeneity and methodological flaws in the included studies. Other joint pain A 2014 systematic review found that although manual acupuncture was effective at relieving short-term pain when used to treat tennis elbow, its long-term effect in relieving pain was "unremarkable". A 2007 review found that acupuncture was significantly better than sham acupuncture at treating chronic knee pain; the evidence was not conclusive due to the lack of large, high-quality trials. Post-operative pain and nausea A 2014 overview of systematic reviews found insufficient evidence to suggest that acupuncture is an effective treatment for postoperative nausea and vomiting (PONV) in a clinical setting. A 2013 systematic review concluded that acupuncture might be beneficial in prevention and treatment of PONV. A 2015 Cochrane review found moderate-quality evidence of no difference between stimulation of the P6 acupoint on the wrist and antiemetic drugs for preventing PONV. A new finding of the review was that further comparative trials are futile, based on the conclusions of a trial sequential analysis. Whether combining PC6 acupoint stimulation with antiemetics is effective was inconclusive. A 2014 overview of systematic reviews found insufficient evidence to suggest that acupuncture is effective for surgical or post-operative pain. For the use of acupuncture for post-operative pain, there was contradictory evidence. A 2014 systematic review found supportive but limited evidence for use of acupuncture for acute post-operative pain after back surgery. A 2014 systematic review found that while the evidence suggested acupuncture could be an effective treatment for postoperative gastroparesis, a firm conclusion could not be reached because the trials examined were of low quality. 
Pain and nausea associated with cancer and cancer treatment A 2015 Cochrane review found that there is insufficient evidence to determine whether acupuncture is an effective treatment for cancer pain in adults. A 2014 systematic review published in the Chinese Journal of Integrative Medicine found that acupuncture may be effective as an adjunctive treatment to palliative care for cancer patients. A 2013 overview of reviews published in the Journal of Multinational Association for Supportive Care in Cancer found evidence that acupuncture could be beneficial for people with cancer-related symptoms, but also identified few rigorous trials and high heterogeneity between trials. A 2012 systematic review of randomised clinical trials published in the same journal found that the number and quality of RCTs for using acupuncture in the treatment of cancer pain were too low to draw definite conclusions. A 2014 systematic review reached inconclusive results with regard to the effectiveness of acupuncture for treating cancer-related fatigue. A 2013 systematic review found that acupuncture is an acceptable adjunctive treatment for chemotherapy-induced nausea and vomiting, but that further research with a low risk of bias is needed. A 2013 systematic review found that the quantity and quality of available RCTs for analysis were too low to draw valid conclusions for the effectiveness of acupuncture for cancer-related fatigue. Sleep Several meta-analytic and systematic reviews suggest that acupuncture alleviates sleep disturbance, particularly insomnia. However, reviewers caution that this evidence should be considered preliminary due to publication bias, problems with research methodology, small sample sizes, and heterogeneity. Other conditions For a number of other conditions, the Cochrane Collaboration or other reviews have concluded there is no strong evidence of benefit. Moxibustion and cupping A 2010 overview of systematic reviews found that moxibustion was effective for several conditions, but the primary studies were of poor quality, so ample uncertainty persists, which limits the conclusiveness of their findings. Safety Adverse events Acupuncture is generally safe when administered by an experienced, appropriately trained practitioner using clean-needle technique and sterile single-use needles. When improperly delivered it can cause adverse effects. Accidents and infections are associated with infractions of sterile technique or neglect on the part of the practitioner. To reduce the risk of serious adverse events after acupuncture, acupuncturists should be trained sufficiently. People with serious spinal disease, such as cancer or infection, are not good candidates for acupuncture. Contraindications to acupuncture (conditions that should not be treated with acupuncture) include coagulopathy disorders (e.g. hemophilia and advanced liver disease), warfarin use, severe psychiatric disorders (e.g. psychosis), and skin infections or skin trauma (e.g. burns). Further, electroacupuncture should be avoided at the site of implanted electrical devices (such as pacemakers). A 2011 systematic review of systematic reviews (internationally and without language restrictions) found that serious complications following acupuncture continue to be reported. Between 2000 and 2009, ninety-five cases of serious adverse events, including five deaths, were reported. Many such events are not inherent to acupuncture but are due to malpractice of acupuncturists. 
This might be why such complications have not been reported in surveys of adequately trained acupuncturists. Most such reports originate from Asia, which may reflect the large number of treatments performed there or a relatively higher number of poorly trained Asian acupuncturists. Many serious adverse events were reported from developed countries. These included Australia, Austria, Canada, Croatia, France, Germany, Ireland, the Netherlands, New Zealand, Spain, Sweden, Switzerland, the UK, and the US. The number of adverse effects reported from the UK appears particularly unusual, which may indicate less under-reporting in the UK than other countries. Reports included 38 cases of infections and 42 cases of organ trauma. The most frequent adverse events included pneumothorax, and bacterial and viral infections. A 2013 review found (without restrictions regarding publication date, study type or language) 295 cases of infections; mycobacterium was the pathogen in at least 96%. Likely sources of infection include towels, hot packs or boiling tank water, and reusing reprocessed needles. Possible sources of infection include contaminated needles, reusing personal needles, a person's skin containing mycobacterium, and reusing needles at various sites in the same person. Although acupuncture is generally considered a safe procedure, a 2013 review stated that the reports of infection transmission increased significantly in the prior decade, including those of mycobacterium. Although it is recommended that practitioners of acupuncture use disposable needles, the reuse of sterilized needles is still permitted. It is also recommended that thorough control practices for preventing infection be implemented and adapted. English-language A 2013 systematic review of the English-language case reports found that serious adverse events associated with acupuncture are rare, but that acupuncture is not without risk. Between 2000 and 2011 the English-language literature from 25 countries and regions reported 294 adverse events. The majority of the reported adverse events were relatively minor, and the incidences were low. For example, a prospective survey of 34,000 acupuncture treatments found no serious adverse events and 43 minor ones, a rate of 1.3 per 1000 interventions. Another survey found there were 7.1% minor adverse events, of which 5 were serious, amid 97,733 acupuncture patients. The most common adverse effect observed was infection (e.g. mycobacterium), and the majority of infections were bacterial in nature, caused by skin contact at the needling site. Infection has also resulted from skin contact with unsterilized equipment or with dirty towels in an unhygienic clinical setting. Other adverse complications included five reported cases of spinal cord injuries (e.g. migrating broken needles or needling too deeply), four brain injuries, four peripheral nerve injuries, five heart injuries, seven other organ and tissue injuries, bilateral hand edema, epithelioid granuloma, pseudolymphoma, argyria, pustules, pancytopenia, and scarring due to hot-needle technique. Adverse reactions from acupuncture, which are unusual and uncommon in typical acupuncture practice, included syncope, galactorrhoea, bilateral nystagmus, pyoderma gangrenosum, hepatotoxicity, eruptive lichen planus, and spontaneous needle migration. A 2013 systematic review found 31 cases of vascular injuries caused by acupuncture, three resulting in death. Two died from pericardial tamponade and one was from an aortoduodenal fistula. 
The same review found that vascular injuries were rare; bleeding and pseudoaneurysm were the most prevalent. A 2011 systematic review (without restriction in time or language), aiming to summarize all reported cases of cardiac tamponade after acupuncture, found 26 cases resulting in 14 deaths, with little doubt about causality in most fatal instances. The same review concluded cardiac tamponade was a serious, usually fatal, though theoretically avoidable complication following acupuncture, and urged training to minimize risk. A 2012 review found a number of adverse events were reported after acupuncture in the UK's National Health Service (NHS) but most (95%) were not severe, though miscategorization and under-reporting may alter the total figures. From January 2009 to December 2011, 468 safety incidents were recognized within the NHS organizations. The adverse events recorded included retained needles (31%), dizziness (30%), loss of consciousness/unresponsiveness (19%), falls (4%), bruising or soreness at the needle site (2%), pneumothorax (1%) and other adverse side effects (12%). Acupuncture practitioners should know, and be prepared to be responsible for, any substantial harm from treatments. Some acupuncture proponents argue that the long history of acupuncture suggests it is safe. However, there is an increasing literature on adverse events (e.g. spinal-cord injury). Acupuncture seems to be safe in people taking anticoagulants, assuming needles are used at the correct location and depth, but studies are required to verify these findings. The evidence suggests that acupuncture might be a safe option for people with allergic rhinitis. Chinese, Korean, and Japanese-language A 2010 systematic review of the Chinese-language literature found numerous acupuncture-related adverse events, including pneumothorax, fainting, subarachnoid hemorrhage, and infection as the most frequent, and cardiovascular injuries, subarachnoid hemorrhage, pneumothorax, and recurrent cerebral hemorrhage as the most serious, most of which were due to improper technique. Between 1980 and 2009, the Chinese-language literature reported 479 adverse events. Prospective surveys show that rates of mild, transient acupuncture-associated adverse events ranged from 6.71% to 15%. In a study with 190,924 patients, the prevalence of serious adverse events was roughly 0.024%. Another study showed a rate of adverse events requiring specific treatment of 2.2%, 4,963 incidences among 229,230 patients. Infections, mainly hepatitis, after acupuncture are reported often in English-language research, though they are rarely reported in Chinese-language research, making it plausible that acupuncture-associated infections have been underreported in China. Infections were mostly caused by poor sterilization of acupuncture needles. Other adverse events included spinal epidural hematoma (in the cervical, thoracic and lumbar spine), chylothorax, injuries of abdominal organs and tissues, injuries in the neck region, injuries to the eyes, including orbital hemorrhage, traumatic cataract, injury of the oculomotor nerve and retinal puncture, hemorrhage to the cheeks and the hypoglottis, peripheral motor-nerve injuries and subsequent motor dysfunction, local allergic reactions to metal needles, stroke, and cerebral hemorrhage after acupuncture. A causal link between acupuncture and the adverse events cardiac arrest, pyknolepsy, shock, fever, cough, thirst, aphonia, leg numbness, and sexual dysfunction remains uncertain. 
The same review concluded that acupuncture can be considered inherently safe when practiced by properly trained practitioners, but the review also stated there is a need to find effective strategies to minimize the health risks. Between 1999 and 2010, the Korean-language literature contained reports of 1104 adverse events. Between the 1980s and 2002, the Japanese-language literature contained reports of 150 adverse events. Children and pregnancy Although acupuncture has been practiced for thousands of years in China, its use in pediatrics in the United States did not become common until the early 2000s. In 2007, the National Health Interview Survey (NHIS) conducted by the National Center For Health Statistics (NCHS) estimated that approximately 150,000 children had received acupuncture treatment for a variety of conditions. In 2008 a study determined that the use of acupuncture-needle treatment on children was "questionable" due to the possibility of adverse side-effects and the pain manifestation differences in children versus adults. The study also includes warnings against practicing acupuncture on infants, as well as on children who are over-fatigued, very weak, or have over-eaten. When used on children, acupuncture is considered safe when administered by well-trained, licensed practitioners using sterile needles; however, a 2011 review found there was limited research to draw definite conclusions about the overall safety of pediatric acupuncture. The same review found 279 adverse events, 25 of them serious. The adverse events were mostly mild in nature (e.g. bruising or bleeding). The prevalence of mild adverse events ranged from 10.1% to 13.5%, an estimated 168 incidences among 1,422 patients. On rare occasions adverse events were serious (e.g. cardiac rupture or hemoptysis); many might have been a result of substandard practice. The incidence of serious adverse events was 5 per one million, which included children and adults. When used during pregnancy, the majority of adverse events caused by acupuncture were mild and transient, with few serious adverse events. The most frequent mild adverse event was needling or unspecified pain, followed by bleeding. Although two deaths (one stillbirth and one neonatal death) were reported, there was a lack of acupuncture-associated maternal mortality. Limiting the evidence as certain, probable or possible in the causality evaluation, the estimated incidence of adverse events following acupuncture in pregnant women was 131 per 10,000. Although acupuncture is not contraindicated in pregnant women, some specific acupuncture points are particularly sensitive to needle insertion; these spots, as well as the abdominal region, should be avoided during pregnancy. Moxibustion and cupping Four adverse events associated with moxibustion were bruising, burns and cellulitis, spinal epidural abscess, and large superficial basal cell carcinoma. Ten adverse events were associated with cupping. The minor ones were keloid scarring, burns, and bullae; the serious ones were acquired hemophilia A, stroke following cupping on the back and neck, factitious panniculitis, reversible cardiac hypertrophy, and iron deficiency anemia. Cost-effectiveness A 2013 meta-analysis found that acupuncture for chronic low back pain was cost-effective as a complement to standard care, but not as a substitute for standard care except in cases where comorbid depression presented. The same meta-analysis found there was no difference between sham and non-sham acupuncture. 
A 2011 systematic review found insufficient evidence for the cost-effectiveness of acupuncture in the treatment of chronic low back pain. A 2010 systematic review found that the cost-effectiveness of acupuncture could not be determined. A 2012 review found that acupuncture seems to be cost-effective for some pain conditions. Risk of forgoing conventional medical care As with other alternative medicines, unethical or naïve practitioners may induce patients to exhaust financial resources by pursuing ineffective treatment. Professional ethics codes set by accrediting organizations such as the National Certification Commission for Acupuncture and Oriental Medicine require practitioners to make "timely referrals to other health care professionals as may be appropriate." Stephen Barrett states that there is a "risk that an acupuncturist whose approach to diagnosis is not based on scientific concepts will fail to diagnose a dangerous condition". Conceptual basis Traditional Acupuncture is a substantial part of traditional Chinese medicine (TCM). Early acupuncture beliefs relied on concepts that are common in TCM, such as a life force energy called qi. Qi was believed to flow from the body's primary organs (zang-fu organs) to the "superficial" body tissues of the skin, muscles, tendons, bones, and joints, through channels called meridians. Acupuncture points where needles are inserted are mainly (but not always) found at locations along the meridians. Acupuncture points not found along a meridian are called extraordinary points and those with no designated site are called "A-shi" points. In TCM, disease is generally perceived as a disharmony or imbalance in energies such as yin, yang, qi, xuĕ, zàng-fǔ, and the meridians, and in the interaction between the body and the environment. Therapy is based on which "pattern of disharmony" can be identified. For example, some diseases are believed to be caused by meridians being invaded with an excess of wind, cold, and damp. In order to determine which pattern is at hand, practitioners examine things like the color and shape of the tongue, the relative strength of pulse-points, the smell of the breath, the quality of breathing, or the sound of the voice. TCM and its concept of disease do not strongly differentiate between the cause and effect of symptoms. Purported scientific basis Scientific research has not supported the existence of qi, meridians, or yin and yang. A Nature editorial described TCM as "fraught with pseudoscience", with the majority of its treatments having no logical mechanism of action. Quackwatch states that "TCM theory and practice are not based upon the body of knowledge related to health, disease, and health care that has been widely accepted by the scientific community. TCM practitioners disagree among themselves about how to diagnose patients and which treatments should go with which diagnoses. Even if they could agree, the TCM theories are so nebulous that no amount of scientific study will enable TCM to offer rational care." Some modern practitioners support the use of acupuncture to treat pain, but have abandoned the use of qi, meridians, yin, yang and other mystical energies as explanatory frameworks. The use of qi as an explanatory framework has been decreasing in China, even as it becomes more prominent during discussions of acupuncture in the US. Academic discussions of acupuncture still make reference to pseudoscientific concepts such as qi and meridians despite the lack of scientific evidence. 
Many within the scientific community consider attempts to rationalize acupuncture in science to be quackery and pseudoscience. Academics Massimo Pigliucci and Maarten Boudry describe it as a "borderlands science" lying between science and pseudoscience. Many acupuncturists attribute pain relief to the release of endorphins when needles penetrate, but no longer support the idea that acupuncture can affect a disease. It is a generally held belief within the acupuncture community that acupuncture points and meridian structures are special conduits for electrical signals, but no research has established any consistent anatomical structure or function for either acupuncture points or meridians. Human tests to determine whether electrical continuity was significantly different near meridians than other places in the body have been inconclusive. Some studies suggest acupuncture causes a series of events within the central nervous system, and that it is possible to inhibit acupuncture's analgesic effects with the opioid antagonist naloxone. Mechanical deformation of the skin by acupuncture needles appears to result in the release of adenosine. The anti-nociceptive effect of acupuncture may be mediated by the adenosine A1 receptor. A 2014 review in Nature Reviews Cancer analyzed mouse studies that suggested acupuncture relieves pain via the local release of adenosine, which then triggered nearby A1 receptors. The review found that, because acupuncture in those studies "caused more tissue damage and inflammation relative to the size of the animal in mice than in humans, such studies unnecessarily muddled a finding that local inflammation can result in the local release of adenosine with analgesic effect." History Origins Acupuncture, along with moxibustion, is one of the oldest practices of traditional Chinese medicine. Most historians believe the practice began in China, though there are some conflicting narratives on when it originated. Academics David Ramey and Paul Buell said the exact date acupuncture was founded depends on the extent to which dating of ancient texts can be trusted and the interpretation of what constitutes acupuncture. Scholars note that acupressure therapy was prevalent in India. Once Buddhism spread to China, acupressure therapy was also integrated into common medical practice there and came to be known as acupuncture. Scholars note these parallels because the major points of Indian acupressure and Chinese acupuncture are similar to each other. According to an article in Rheumatology, the first documentation of an "organized system of diagnosis and treatment" for acupuncture was in The Inner Classic of Huang Di (Huangdi Neijing) from about 100 BC. Gold and silver needles found in the tomb of Liu Sheng from around 100 BC are believed to be the earliest archaeological evidence of acupuncture, though it is unclear if that was their purpose. According to Plinio Prioreschi, the earliest known historical record of acupuncture is the Shiji ("Records of the Grand Historian"), written by a historian around 100 BC. It is believed that this text was documenting what was established practice at that time. Alternate theories The 5,000-year-old mummified body of Ötzi the Iceman was found with 15 groups of tattoos, many of which were located at points on the body where acupuncture needles are used for abdominal or lower back problems. Evidence from the body suggests Ötzi suffered from these conditions. 
This has been cited as evidence that practices similar to acupuncture may have been practised elsewhere in Eurasia during the early Bronze Age; however, The Oxford Handbook of the History of Medicine calls this theory "speculative". It is considered unlikely that acupuncture was practised before 2000 BC. Acupuncture may have been practised during the Neolithic era, near the end of the Stone Age, using sharpened stones called Bian shi. Many Chinese texts from later eras refer to sharp stones called "plen", which means "stone probe", that may have been used for acupuncture purposes. The ancient Chinese medical text, Huangdi Neijing, indicates that sharp stones were believed at the time to cure illnesses at or near the body's surface, perhaps because of the short depth a stone could penetrate. However, it is more likely that stones were used for other medical purposes, such as puncturing a growth to drain its pus. The Mawangdui texts, which are believed to be from the 2nd century BC, mention the use of pointed stones to open abscesses, and moxibustion, but not acupuncture. It is also speculated that these stones may have been used for bloodletting, due to the ancient Chinese belief that illnesses were caused by demons within the body that could be killed or released. It is likely bloodletting was an antecedent to acupuncture. According to historians Lu Gwei-djen and Joseph Needham, there is substantial evidence that acupuncture may have begun around 600 BC. Some hieroglyphs and pictographs from that era suggest acupuncture and moxibustion were practised. However, historians Lu and Needham said it was unlikely a needle could be made out of the materials available in China during this time period. It is possible that bronze was used for early acupuncture needles. Tin, copper, gold and silver are also possibilities, though they are considered less likely, or to have been used in fewer cases. If acupuncture was practised during the Shang dynasty (1766 to 1122 BC), organic materials like thorns, sharpened bones, or bamboo may have been used. Once methods for producing steel were discovered, it would come to replace all other materials, since it could be used to create very fine but sturdy needles. Lu and Needham noted that all the ancient materials that could have been used for acupuncture, and which often produce archaeological evidence, such as sharpened bones, bamboo or stones, were also used for other purposes. An article in Rheumatology said that the absence of any mention of acupuncture in documents found in the tomb of Mawangdui from 198 BC suggests that acupuncture was not practised by that time. Belief systems Several different and sometimes conflicting belief systems emerged regarding acupuncture. This may have been the result of competing schools of thought. Some ancient texts referred to using acupuncture to cause bleeding, while others mixed the ideas of blood-letting and spiritual ch'i energy. Over time, the focus shifted from blood to the concept of puncturing specific points on the body, and eventually to balancing Yin and Yang energies as well. According to David Ramey, no single "method or theory" was ever predominantly adopted as the standard. At the time, scientific knowledge of medicine was not yet developed, especially because in China dissection of the deceased was forbidden, preventing the development of basic anatomical knowledge. 
It is not certain when specific acupuncture points were introduced, but the autobiography of Bian Que from around 400–500 BC references inserting needles at designated areas. Bian Que believed there was a single acupuncture point at the top of one's skull that he called the point "of the hundred meetings." Texts dated to be from 156–186 BC document early beliefs in channels of life force energy called meridians that would later be an element in early acupuncture beliefs. Ramey and Buell said the "practice and theoretical underpinnings" of modern acupuncture were introduced in The Yellow Emperor's Classic (Huangdi Neijing) around 100 BC. It introduced the concept of using acupuncture to manipulate the flow of life energy (qi) in a network of meridian (channels) in the body. The network concept was made up of acu-tracts, such as a line down the arms, where it said acupoints were located. Some of the sites acupuncturists use needles at today still have the same names as those given to them by the Yellow Emperor's Classic. Numerous additional documents were published over the centuries introducing new acupoints. By the 4th century AD, most of the acupuncture sites in use today had been named and identified. Early development in China Establishment and growth In the first half of the 1st century AD, acupuncturists began promoting the belief that acupuncture's effectiveness was influenced by the time of day or night, the lunar cycle, and the season. The Science of the Yin-Yang Cycles (Yün Chhi Hsüeh) was a set of beliefs that curing diseases relied on the alignment of both heavenly (tian) and earthly (di) forces that were attuned to cycles like that of the sun and moon. There were several different belief systems that relied on a number of celestial and earthly bodies or elements that rotated and only became aligned at certain times. According to Needham and Lu, these "arbitrary predictions" were depicted by acupuncturists in complex charts and through a set of special terminology. Acupuncture needles during this period were much thicker than most modern ones and often resulted in infection. Infection is caused by a lack of sterilization, but at that time it was believed to be caused by use of the wrong needle, or needling in the wrong place, or at the wrong time. Later, many needles were heated in boiling water, or in a flame. Sometimes needles were used while they were still hot, creating a cauterizing effect at the injection site. Nine needles were recommended in the Chen Chiu Ta Chheng from 1601, which may have been because of an ancient Chinese belief that nine was a magic number. Other belief systems were based on the idea that the human body operated on a rhythm and acupuncture had to be applied at the right point in the rhythm to be effective. In some cases a lack of balance between Yin and Yang were believed to be the cause of disease. In the 1st century AD, many of the first books about acupuncture were published and recognized acupuncturist experts began to emerge. The Zhen Jiu Jia Yi Jing, which was published in the mid-3rd century, became the oldest acupuncture book that is still in existence in the modern era. Other books like the Yu Kuei Chen Ching, written by the Director of Medical Services for China, were also influential during this period, but were not preserved. 
In the mid 7th century, Sun Simiao published acupuncture-related diagrams and charts that established standardized methods for finding acupuncture sites on people of different sizes and categorized acupuncture sites in a set of modules. Acupuncture became more established in China as improvements in paper led to the publication of more acupuncture books. The Imperial Medical Service and the Imperial Medical College, which both supported acupuncture, became more established and created medical colleges in every province. The public was also exposed to stories about royal figures being cured of their diseases by prominent acupuncturists. By the time The Great Compendium of Acupuncture and Moxibustion was published during the Ming dynasty (1368–1644 AD), most of the acupuncture practices used in the modern era had been established. Decline By the end of the Song dynasty (1279 AD), acupuncture had lost much of its status in China. It became rarer in the following centuries, and was associated with less prestigious professions like alchemy, shamanism, midwifery and moxibustion. Additionally, by the 18th century, scientific rationality was becoming more popular than traditional superstitious beliefs. By 1757 a book documenting the history of Chinese medicine called acupuncture a "lost art". Its decline was attributed in part to the popularity of prescriptions and medications, as well as its association with the lower classes. In 1822, the Chinese Emperor signed a decree excluding the practice of acupuncture from the Imperial Medical Institute. He said it was unfit for practice by gentlemen-scholars. In China acupuncture was increasingly associated with lower-class, illiterate practitioners. It was restored for a time, but banned again in 1929 in favor of science-based Western medicine. Although acupuncture declined in China during this time period, it was also growing in popularity in other countries. International expansion Korea is believed to be the first country in Asia to which acupuncture spread from China. Within Korea there is a legend that acupuncture was developed by emperor Dangun, though it is more likely to have been brought into Korea from a Chinese colonial prefecture in 514 AD. Acupuncture use was commonplace in Korea by the 6th century. It spread to Vietnam in the 8th and 9th centuries. As Vietnam began trading with Japan and China around the 9th century, it was influenced by their acupuncture practices as well. China and Korea sent "medical missionaries" that spread traditional Chinese medicine to Japan, starting around 219 AD. In 553, several Korean and Chinese citizens were appointed to re-organize medical education in Japan and they incorporated acupuncture as part of that system. Japan later sent students back to China and established acupuncture as one of five divisions of the Chinese State Medical Administration System. Acupuncture began to spread to Europe in the second half of the 17th century. Around this time the surgeon-general of the Dutch East India Company met Japanese and Chinese acupuncture practitioners and later encouraged Europeans to further investigate it. He published the first in-depth description of acupuncture for the European audience and coined the term "acupuncture" in his 1683 work De Acupunctura. France was an early adopter in the West, owing to the influence of Jesuit missionaries, who brought the practice to French clinics in the 16th century. 
The French doctor Louis Berlioz (the father of the composer Hector Berlioz) is usually credited with being the first to experiment with the procedure in Europe in 1810, before publishing his findings in 1816. By the 19th century, acupuncture had become commonplace in many areas of the world. Americans and Britons began showing interest in acupuncture in the early 19th century, although interest waned by mid-century. Western practitioners abandoned acupuncture's traditional beliefs in spiritual energy, pulse diagnosis, and the cycles of the moon, sun or the body's rhythm. Diagrams of the flow of spiritual energy, for example, conflicted with the West's own anatomical diagrams. It adopted a new set of ideas for acupuncture based on tapping needles into nerves. In Europe it was speculated that acupuncture may allow or prevent the flow of electricity in the body, as electrical pulses were found to make a frog's leg twitch after death. The West eventually created a belief system based on Travell trigger points that were believed to inhibit pain. They were in the same locations as China's spiritually identified acupuncture points, but under a different nomenclature. The first elaborate Western treatise on acupuncture was published in 1683 by Willem ten Rhijne. Modern era In China, the popularity of acupuncture rebounded in 1949 when Mao Zedong took power and sought to unite China behind traditional cultural values. It was also during this time that many Eastern medical practices were consolidated under the name traditional Chinese medicine (TCM). New practices were adopted in the 20th century, such as using a cluster of needles, electrified needles, or leaving needles inserted for up to a week. A lot of emphasis developed on using acupuncture on the ear. Acupuncture research organizations such as the International Society of Acupuncture were founded in the 1940s and 1950s and acupuncture services became available in modern hospitals. China, where acupuncture was believed to have originated, was increasingly influenced by Western medicine. Meanwhile, acupuncture grew in popularity in the US. The US Congress created the Office of Alternative Medicine in 1992 and the National Institutes of Health (NIH) declared support for acupuncture for some conditions in November 1997. In 1999, the National Center for Complementary and Alternative Medicine was created within the NIH. Acupuncture became the most popular alternative medicine in the US. Politicians from the Chinese Communist Party said acupuncture was superstitious and conflicted with the party's commitment to science. Communist Party Chairman Mao Zedong later reversed this position, arguing that the practice was based on scientific principles. In 1971, New York Times reporter James Reston published an article on his acupuncture experiences in China, which led to more investigation of and support for acupuncture. The US President Richard Nixon visited China in 1972. During one part of the visit, the delegation was shown a patient undergoing major surgery while fully awake, ostensibly receiving acupuncture rather than anesthesia. Later it was found that the patients selected for the surgery had both a high pain tolerance and received heavy indoctrination before the operation; these demonstration cases were also frequently receiving morphine surreptitiously through an intravenous drip that observers were told contained only fluids and nutrients. 
One patient receiving open heart surgery while awake was ultimately found to have received a combination of three powerful sedatives as well as large injections of a local anesthetic into the wound. After the National Institutes of Health expressed support for acupuncture for a limited number of conditions, adoption in the US grew further. In 1972 the first legal acupuncture center in the US was established in Washington DC and in 1973 the US Internal Revenue Service allowed acupuncture to be deducted as a medical expense. In 2006, a BBC documentary Alternative Medicine filmed a patient undergoing open heart surgery allegedly under acupuncture-induced anesthesia. It was later revealed that the patient had been given a cocktail of anesthetics. In 2010, UNESCO inscribed "acupuncture and moxibustion of traditional Chinese medicine" on the UNESCO Intangible Cultural Heritage List following China's nomination. Adoption Acupuncture is most heavily practiced in China and is popular in the US, Australia, and Europe. In Switzerland, acupuncture has become the most frequently used alternative medicine since 2004. In the United Kingdom, a total of 4 million acupuncture treatments were administered in 2009. Acupuncture is used in most pain clinics and hospices in the UK. An estimated 1 in 10 adults in Australia used acupuncture in 2004. In Japan, it is estimated that 25 percent of the population will try acupuncture at some point, though in most cases it is not covered by public health insurance. Users of acupuncture in Japan are more likely to be elderly and to have a limited education. Approximately half of users surveyed indicated a likelihood to seek such remedies in the future, while 37% did not. Less than one percent of the US population reported having used acupuncture in the early 1990s. By the early 2010s, more than 14 million Americans reported having used acupuncture as part of their health care. In the US, acupuncture is increasingly used at academic medical centers, and is usually offered through CAM centers or anesthesia and pain management services. Examples include those at Harvard University, Stanford University, Johns Hopkins University, and UCLA. The use of acupuncture in Germany increased by 20% in 2007, after the German acupuncture trials supported its efficacy for certain uses. In 2011, there were more than one million users, and insurance companies have estimated that two-thirds of German users are women. As a result of the trials, German public health insurers began to cover acupuncture for chronic low back pain and osteoarthritis of the knee, but not tension headache or migraine. This decision was based in part on socio-political reasons. Some insurers in Germany chose to stop reimbursement of acupuncture because of the trials. For other conditions, insurers in Germany were not convinced that acupuncture had adequate benefits over usual care or sham treatments. Highlighting the results of the placebo group, researchers refused to accept a therapy that performed no better than placebo as effective. Regulation There are various government and trade association regulatory bodies for acupuncture in the United Kingdom, the United States, Saudi Arabia, Australia, New Zealand, Japan, Canada, and in European countries and elsewhere. The World Health Organization recommends that before being licensed or certified, an acupuncturist receive 200 hours of specialized training if they are a physician and 2,500 hours for non-physicians; many governments have adopted similar standards. 
In Hong Kong, the practice of acupuncture is regulated by the Chinese Medicine Council that was formed in 1999 by the Legislative Council. It includes a licensing exam and registration, as well as degree courses approved by the board. Canada has acupuncture licensing programs in the provinces of British Columbia, Ontario, Alberta and Quebec; standards set by the Chinese Medicine and Acupuncture Association of Canada are used in provinces without government regulation. Regulation in the US began in the 1970s in California, which was eventually followed by every state but Wyoming and Idaho. Licensing requirements vary greatly from state to state. The needles used in acupuncture are regulated in the US by the Food and Drug Administration. In some states acupuncture is regulated by a board of medical examiners, while in others by the board of licensing, health or education. In Japan, acupuncturists are licensed by the Minister of Health, Labour and Welfare after passing an examination and graduating from a technical school or university. In Australia, the Chinese Medicine Board of Australia regulates acupuncture, among other Chinese medical traditions, and restricts the use of titles like 'acupuncturist' to registered practitioners only. In New Zealand, acupuncture was included in the governmental Accident Compensation Corporation (ACC) Act in 1990. This inclusion allowed qualified and professionally registered acupuncturists to provide subsidised care and treatment to citizens, residents, and temporary visitors for work- or sports-related injuries that occurred within New Zealand. The two bodies for the regulation of acupuncture and the attainment of ACC treatment-provider status in New Zealand are Acupuncture NZ and the New Zealand Acupuncture Standards Authority. At least 28 countries in Europe have professional associations for acupuncturists. In France, the Académie Nationale de Médecine (National Academy of Medicine) has regulated acupuncture since 1955. See also Auriculotherapy Baunscheidtism Colorpuncture Dry needling List of acupuncture points List of ineffective cancer treatments – Includes moxibustion Moxibustion Oriental Medicine Pharmacopuncture Pressure point Regulation of acupuncture
Acupuncture
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963. The notion of Kolmogorov complexity can be used to state and prove impossibility results akin to Cantor's diagonal argument, Gödel's incompleteness theorem, and Turing's halting problem. In particular, no program P computing a lower bound for each text's Kolmogorov complexity can return a value essentially larger than P's own length (see section ); hence no single program can compute the exact Kolmogorov complexity for infinitely many texts. Definition Consider the following two strings of 32 lowercase letters and digits: abababababababababababababababab , and 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7 The first string has a short English-language description, namely "write ab 16 times", which consists of 17 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, i.e., "write 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7" which has 38 characters. Hence the operation of writing the first string can be said to have "less complexity" than writing the second. More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings like the abab example above, whose Kolmogorov complexity is small relative to the string's size, are not considered to be complex. The Kolmogorov complexity can be defined for any mathematical object, but for simplicity the scope of this article is restricted to strings. We must first specify a description language for strings. Such a description language can be based on any computer programming language, such as Lisp, Pascal, or Java. If P is a program which outputs a string x, then P is a description of x. The length of the description is just the length of P as a character string, multiplied by the number of bits in a character (e.g., 7 for ASCII). We could, alternatively, choose an encoding for Turing machines, where an encoding is a function which associates to each Turing Machine M a bitstring <M>. If M is a Turing Machine which, on input w, outputs string x, then the concatenated string <M> w is a description of x. For theoretical analysis, this approach is more suited for constructing detailed formal proofs and is generally preferred in the research literature. In this article, an informal approach is discussed. Any string s has at least one description. 
For example, the second string above is output by the program: function GenerateString2() return "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7" whereas the first string is output by the (much shorter) pseudo-code: function GenerateString1() return "ab" × 16 If a description d(s) of a string s is of minimal length (i.e., using the fewest bits), it is called a minimal description of s, and the length of d(s) (i.e. the number of bits in the minimal description) is the Kolmogorov complexity of s, written K(s). Symbolically, K(s) = |d(s)|. The length of the shortest description will depend on the choice of description language; but the effect of changing languages is bounded (a result called the invariance theorem). Invariance theorem Informal treatment There are some description languages which are optimal, in the following sense: given any description of an object in a description language, said description may be used in the optimal description language with a constant overhead. The constant depends only on the languages involved, not on the description of the object, nor the object being described. Here is an example of an optimal description language. A description will have two parts: The first part describes another description language. The second part is a description of the object in that language. In more technical terms, the first part of a description is a computer program (specifically: a compiler for the object's language, written in the description language), with the second part being the input to that computer program which produces the object as output. The invariance theorem follows: Given any description language L, the optimal description language is at least as efficient as L, with some constant overhead. Proof: Any description D in L can be converted into a description in the optimal language by first describing L as a computer program P (part 1), and then using the original description D as input to that program (part 2). The total length of this new description D′ is (approximately): |D′ | = |P| + |D| The length of P is a constant that doesn't depend on D. So, there is at most a constant overhead, regardless of the object described. Therefore, the optimal language is universal up to this additive constant. A more formal treatment Theorem: If K1 and K2 are the complexity functions relative to Turing complete description languages L1 and L2, then there is a constant c – which depends only on the languages L1 and L2 chosen – such that ∀s. −c ≤ K1(s) − K2(s) ≤ c. Proof: By symmetry, it suffices to prove that there is some constant c such that for all strings s K1(s) ≤ K2(s) + c. Now, suppose there is a program in the language L1 which acts as an interpreter for L2: function InterpretLanguage(string p) where p is a program in L2. The interpreter is characterized by the following property: Running InterpretLanguage on input p returns the result of running p. Thus, if P is a program in L2 which is a minimal description of s, then InterpretLanguage(P) returns the string s. The length of this description of s is the sum of The length of the program InterpretLanguage, which we can take to be the constant c. The length of P which by definition is K2(s). This proves the desired upper bound. History and context Algorithmic information theory is the area of computer science that studies Kolmogorov complexity and other complexity measures on strings (or other data structures). 
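As a concrete illustration of these definitions, the following Python sketch defines a toy description language with just two kinds of descriptions, a literal form and a repeat form, and compares the description lengths of the two example strings from above. The toy language, the function names and the crude length measure are hypothetical choices made only for this example; as the invariance theorem shows, any reasonable choice changes the lengths by at most a constant.

def interpret(description):
    # Decode a description in the toy language back into a string.
    kind = description[0]
    if kind == "literal":
        return description[1]
    if kind == "repeat":
        _, block, count = description
        return block * count
    raise ValueError("unknown description kind")

def description_length(description):
    # Crude length measure: the number of characters in the printed description.
    return len(repr(description))

regular = "ab" * 16
random_looking = "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"

short_desc = ("repeat", "ab", 16)           # exploits the obvious regularity
literal_desc = ("literal", random_looking)  # no shorter description is apparent

assert interpret(short_desc) == regular
assert interpret(literal_desc) == random_looking
print(description_length(short_desc), description_length(literal_desc))

The repetitive string admits a far shorter description than the random-looking one, mirroring the informal comparison made at the start of this article.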
The concept and theory of Kolmogorov Complexity is based on a crucial theorem first discovered by Ray Solomonoff, who published it in 1960, describing it in "A Preliminary Report on a General Theory of Inductive Inference" as part of his invention of algorithmic probability. He gave a more complete description in his 1964 publications, "A Formal Theory of Inductive Inference," Part 1 and Part 2 in Information and Control. Andrey Kolmogorov later independently published this theorem in Problems Inform. Transmission in 1965. Gregory Chaitin also presents this theorem in J. ACM – Chaitin's paper was submitted October 1966 and revised in December 1968, and cites both Solomonoff's and Kolmogorov's papers. The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one. This algorithm, for all strings, allows codes as short as allowed by any other algorithm up to an additive constant that depends on the algorithms, but not on the strings themselves. Solomonoff used this algorithm and the code lengths it allows to define a "universal probability" of a string on which inductive inference of the subsequent digits of the string can be based. Kolmogorov used this theorem to define several functions of strings, including complexity, randomness, and information. When Kolmogorov became aware of Solomonoff's work, he acknowledged Solomonoff's priority. For several years, Solomonoff's work was better known in the Soviet Union than in the Western World. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was concerned with randomness of a sequence, while Algorithmic Probability became associated with Solomonoff, who focused on prediction using his invention of the universal prior probability distribution. The broader area encompassing descriptional complexity and probability is often called Kolmogorov complexity. The computer scientist Ming Li considers this an example of the Matthew effect: "...to everyone who has, more will be given..." There are several other variants of Kolmogorov complexity or algorithmic information. The most widely used one is based on self-delimiting programs, and is mainly due to Leonid Levin (1974). An axiomatic approach to Kolmogorov complexity based on Blum axioms (Blum 1967) was introduced by Mark Burgin in the paper presented for publication by Andrey Kolmogorov. Basic results In the following discussion, let K(s) be the complexity of the string s. It is not hard to see that the minimal description of a string cannot be too much larger than the string itself — the program GenerateString2 above that outputs s is a fixed amount larger than s. Theorem: There is a constant c such that ∀s. K(s) ≤ |s| + c. Uncomputability of Kolmogorov complexity A naive attempt at a program to compute K At first glance it might seem trivial to write a program which can compute K(s) for any s, such as the following: function KolmogorovComplexity(string s) for i = 1 to infinity: for each string p of length exactly i if isValidProgram(p) and evaluate(p) == s return i This program iterates through all possible programs (by iterating through all possible strings and only considering those which are valid programs), starting with the shortest. Each program is executed to find the result produced by that program, comparing it to the input s. If the result matches the length of the program is returned. 
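For readers who prefer working code over pseudocode, here is a Python transcription of that naive search. It is only a sketch: the helper callables is_valid_program and evaluate are hypothetical stand-ins for a fixed programming language's parser and evaluator, and, as the next paragraph explains, the search is not actually usable because evaluate(p) need not terminate.

from itertools import product
import string

ALPHABET = string.printable  # candidate program characters in the chosen language

def naive_kolmogorov_complexity(s, is_valid_program, evaluate):
    # Enumerate all candidate programs in order of increasing length and return
    # the length of the first one that outputs s.  WARNING: evaluate(p) may loop
    # forever on some programs, so this search can hang.
    length = 1
    while True:
        for chars in product(ALPHABET, repeat=length):
            p = "".join(chars)
            if is_valid_program(p) and evaluate(p) == s:
                return length
        length += 1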
However this will not work because some of the programs p tested will not terminate, e.g. if they contain infinite loops. There is no way to avoid all of these programs by testing them in some way before executing them due to the non-computability of the halting problem. What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following. Formal proof of uncomputability of K Theorem: There exist strings of arbitrarily large Kolmogorov complexity. Formally: for each natural number n, there is a string s with K(s) ≥ n. Proof: Otherwise all of the infinitely many possible finite strings could be generated by the finitely many programs with a complexity below n bits. Theorem: K is not a computable function. In other words, there is no program which takes any string s as input and produces the integer K(s) as output. The following indirect proof uses a simple Pascal-like language to denote programs; for sake of proof simplicity assume its description (i.e. an interpreter) to have a length of bits. Assume for contradiction there is a program function KolmogorovComplexity(string s) which takes as input a string s and returns K(s). All programs are of finite length so, for sake of proof simplicity, assume it to be bits. Now, consider the following program of length bits: function GenerateComplexString() for i = 1 to infinity: for each string s of length exactly i if KolmogorovComplexity(s) ≥ 8000000000 return s Using KolmogorovComplexity as a subroutine, the program tries every string, starting with the shortest, until it returns a string with Kolmogorov complexity at least bits, i.e. a string that cannot be produced by any program shorter than bits. However, the overall length of the above program that produced s is only bits, which is a contradiction. (If the code of KolmogorovComplexity is shorter, the contradiction remains. If it is longer, the constant used in GenerateComplexString can always be changed appropriately.) The above proof uses a contradiction similar to that of the Berry paradox: "The smallest positive integer that cannot be defined in fewer than twenty English words". It is also possible to show the non-computability of K by reduction from the non-computability of the halting problem H, since K and H are Turing-equivalent. There is a corollary, humorously called the "full employment theorem" in the programming language community, stating that there is no perfect size-optimizing compiler. Chain rule for Kolmogorov complexity The chain rule for Kolmogorov complexity states that K(X,Y) ≤ K(X) + K(Y|X) + O(log(K(X,Y))). It states that the shortest program that reproduces X and Y is no more than a logarithmic term larger than a program to reproduce X and a program to reproduce Y given X. Using this statement, one can define an analogue of mutual information for Kolmogorov complexity. Compression It is straightforward to compute upper bounds for K(s) – simply compress the string s with some method, implement the corresponding decompressor in the chosen language, concatenate the decompressor to the compressed string, and measure the length of the resulting string – concretely, the size of a self-extracting archive in the given language. A string s is compressible by a number c if it has a description whose length does not exceed |s| − c bits. This is equivalent to saying that K(s) ≤ |s| − c. Otherwise, s is incompressible by c. 
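A minimal sketch of this upper-bound idea in Python, using zlib purely as an example compressor: the length of the compressed data bounds K(s) from above, up to the constant cost of a self-contained decompressor, which a true description would have to include.

import zlib

def compression_upper_bound(s: bytes) -> int:
    # Length of the zlib-compressed form of s.  A genuine description would also
    # include a decompressor, which only adds a constant independent of s.
    return len(zlib.compress(s, 9))

regular = b"ab" * 16
random_looking = b"4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"
print(compression_upper_bound(regular), compression_upper_bound(random_looking))
# The repetitive string compresses well; the random-looking one does not (zlib's
# fixed overhead can even make the output longer than the input), so the method
# yields no non-trivial bound for it.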
A string incompressible by 1 is said to be simply incompressible – by the pigeonhole principle, which applies because every compressed string maps to only one uncompressed string, incompressible strings must exist, since there are 2^n bit strings of length n, but only 2^n − 1 shorter strings, that is, strings of length less than n (i.e. with length 0, 1, ..., n − 1). For the same reason, most strings are complex in the sense that they cannot be significantly compressed – their K(s) is not much smaller than |s|, the length of s in bits. To make this precise, fix a value of n. There are 2^n bitstrings of length n. The uniform probability distribution on the space of these bitstrings assigns exactly equal weight 2^(−n) to each string of length n. Theorem: With the uniform probability distribution on the space of bitstrings of length n, the probability that a string is incompressible by c is at least 1 − 2^(−c+1) + 2^(−n). To prove the theorem, note that the number of descriptions of length not exceeding n − c is given by the geometric series: 1 + 2 + 2^2 + ... + 2^(n−c) = 2^(n−c+1) − 1. There remain at least 2^n − 2^(n−c+1) + 1 bitstrings of length n that are incompressible by c. To determine the probability, divide by 2^n. Chaitin's incompleteness theorem By the above theorem, most strings are complex in the sense that they cannot be described in any significantly "compressed" way. However, it turns out that the fact that a specific string is complex cannot be formally proven, if the complexity of the string is above a certain threshold. The precise formalization is as follows. First, fix a particular axiomatic system S for the natural numbers. The axiomatic system has to be powerful enough so that, to certain assertions A about complexity of strings, one can associate a formula FA in S. This association must have the following property: If FA is provable from the axioms of S, then the corresponding assertion A must be true. This "formalization" can be achieved based on a Gödel numbering. Theorem: There exists a constant L (which only depends on S and on the choice of description language) such that there does not exist a string s for which the statement K(s) ≥ L (as formalized in S) can be proven within S. Proof Idea: The proof of this result is modeled on a self-referential construction used in Berry's paradox. We first obtain a program which enumerates the proofs within S, and we specify a procedure P which takes as input an integer L and prints the strings x which occur within proofs in S of the statement K(x) ≥ L. By setting L greater than the length of this procedure P, we obtain a contradiction: any such x is asserted to require a program of length at least L to print it, yet x was printed by the procedure P, whose length is less than L. So it is not possible for the proof system S to prove K(x) ≥ L for L arbitrarily large, in particular for L larger than the length of the procedure P (which is finite). Proof: We can find an effective enumeration of all the formal proofs in S by some procedure function NthProof(int n) which takes as input n and outputs some proof. This function enumerates all proofs. Some of these are proofs for formulas we do not care about here, since every possible proof in the language of S is produced for some n. Some of these are complexity formulas of the form K(s) ≥ n where s and n are constants in the language of S. 
There is a procedure function NthProofProvesComplexityFormula(int n) which determines whether the nth proof actually proves a complexity formula K(s) ≥ L. The strings s, and the integer L in turn, are computable by procedure: function StringNthProof(int n) function ComplexityLowerBoundNthProof(int n) Consider the following procedure: function GenerateProvablyComplexString(int n) for i = 1 to infinity: if NthProofProvesComplexityFormula(i) and ComplexityLowerBoundNthProof(i) ≥ n return StringNthProof(i) Given an n, this procedure tries every proof until it finds a string and a proof in the formal system S of the formula K(s) ≥ L for some L ≥ n; if no such proof exists, it loops forever. Finally, consider the program consisting of all these procedure definitions, and a main call: GenerateProvablyComplexString(n0) where the constant n0 will be determined later on. The overall program length can be expressed as U + log2(n0), where U is some constant and log2(n0) represents the length of the integer value n0, under the reasonable assumption that it is encoded in binary digits. We will choose n0 to be greater than the program length, that is, such that n0 > U + log2(n0). This is clearly true for n0 sufficiently large, because the left hand side grows linearly in n0 whilst the right hand side grows logarithmically in n0 up to the fixed constant U. Then no proof of the form "K(s) ≥ L" with L ≥ n0 can be obtained in S, as can be seen by an indirect argument: If ComplexityLowerBoundNthProof(i) could return a value ≥ n0, then the loop inside GenerateProvablyComplexString would eventually terminate, and that procedure would return a string s such that K(s) ≥ n0; but s is produced by the above program, whose length is U + log2(n0) < n0, so in fact K(s) ≤ U + log2(n0) < n0. This is a contradiction, Q.E.D. As a consequence, the above program, with the chosen value of n0, must loop forever. Similar ideas are used to prove the properties of Chaitin's constant. Minimum message length The minimum message length principle of statistical and inductive inference and machine learning was developed by C.S. Wallace and D.M. Boulton in 1968. MML is Bayesian (i.e. it incorporates prior beliefs) and information-theoretic. It has the desirable properties of statistical invariance (i.e. the inference transforms with a re-parametrisation, such as from polar coordinates to Cartesian coordinates), statistical consistency (i.e. even for very hard problems, MML will converge to any underlying model) and efficiency (i.e. the MML model will converge to any true underlying model about as quickly as is possible). C.S. Wallace and D.L. Dowe (1999) showed a formal connection between MML and algorithmic information theory (or Kolmogorov complexity). Kolmogorov randomness Kolmogorov randomness defines a string (usually of bits) as being random if and only if every computer program that can produce that string is at least as long as the string itself. To make this precise, a universal computer (or universal Turing machine) must be specified, so that "program" means a program for this universal machine. A random string in this sense is "incompressible" in that it is impossible to "compress" the string into a program that is shorter than the string itself. For every universal computer, there is at least one algorithmically random string of each length. Whether a particular string is random, however, depends on the specific universal computer that is chosen. 
This is because a universal computer can have a particular string hard-coded in itself, and a program running on this universal computer can then simply refer to this hard-coded string using a short sequence of bits (i.e. much shorter than the string itself). This definition can be extended to define a notion of randomness for infinite sequences from a finite alphabet. These algorithmically random sequences can be defined in three equivalent ways. One way uses an effective analogue of measure theory; another uses effective martingales. The third way defines an infinite sequence to be random if the prefix-free Kolmogorov complexity of its initial segments grows quickly enough — there must be a constant c such that the complexity of an initial segment of length n is always at least n−c. This definition, unlike the definition of randomness for a finite string, is not affected by which universal machine is used to define prefix-free Kolmogorov complexity. Relation to entropy For dynamical systems, entropy rate and algorithmic complexity of the trajectories are related by a theorem of Brudno; the equality holds for almost all trajectories. It can be shown that for the output of Markov information sources, Kolmogorov complexity is related to the entropy of the information source. More precisely, the Kolmogorov complexity of the output of a Markov information source, normalized by the length of the output, converges almost surely (as the length of the output goes to infinity) to the entropy of the source. Conditional versions The conditional Kolmogorov complexity of two strings x and y is, roughly speaking, defined as the Kolmogorov complexity of x given y as an auxiliary input to the procedure. There is also a length-conditional complexity, which is the complexity of x given the length of x as known/input. See also Important publications in algorithmic information theory Berry paradox Code golf Data compression Descriptive complexity theory Grammar induction Inductive inference Kolmogorov structure function Levenshtein distance Solomonoff's theory of inductive inference Sample entropy
Kolmogorov complexity
Ḥasan Ibn al-Haytham, Latinized as Alhazen (; full name ; ), was a Muslim Arab mathematician, astronomer, and physicist of the Islamic Golden Age. Referred to as "the father of modern optics", he made significant contributions to the principles of optics and visual perception in particular. His most influential work is titled Kitāb al-Manāẓir (Arabic: , "Book of Optics"), written during 1011–1021, which survived in a Latin edition. A polymath, he also wrote on philosophy, theology and medicine. Ibn al-Haytham was the first to explain that vision occurs when light reflects from an object and then passes to one's eyes. He was also the first to demonstrate that vision occurs in the brain, rather than in the eyes. Ibn al-Haytham was an early proponent of the concept that a hypothesis must be supported by experiments based on confirmable procedures or mathematical evidence—an early pioneer in the scientific method five centuries before Renaissance scientists. On account of this, he is sometimes described as the world's "first true scientist". Born in Basra, he spent most of his productive period in the Fatimid capital of Cairo and earned his living authoring various treatises and tutoring members of the nobility. Ibn al-Haytham is sometimes given the byname al-Baṣrī after his birthplace, or al-Miṣrī ("of Egypt"). Al-Haytham was dubbed the "Second Ptolemy" by Abu'l-Hasan Bayhaqi and "The Physicist" by John Peckham. Ibn al-Haytham paved the way for the modern science of physical optics. Biography Ibn al-Haytham (Alhazen) was born c. 965 to an Arab family in Basra, Iraq, which was at the time part of the Buyid emirate. His initial influences were in the study of religion and service to the community. At the time, society held a number of conflicting views on religion, and he ultimately sought to step aside from religious disputes. This led him to delve into the study of mathematics and science. He held a position with the title vizier in his native Basra, and made a name for himself for his knowledge of applied mathematics. As he claimed to be able to regulate the flooding of the Nile, he was invited by the Fatimid caliph al-Hakim to realise a hydraulic project at Aswan. However, Ibn al-Haytham was forced to concede the impracticability of his project. Upon his return to Cairo, he was given an administrative post. After he proved unable to fulfill this task as well, he incurred the ire of the caliph Al-Hakim bi-Amr Allah, and is said to have been forced into hiding until the caliph's death in 1021, after which his confiscated possessions were returned to him. Legend has it that Alhazen feigned madness and was kept under house arrest during this period. During this time, he wrote his influential Book of Optics. Alhazen continued to live in Cairo, in the neighborhood of the famous University of al-Azhar, and lived from the proceeds of his literary production until his death in c. 1040. (A copy of Apollonius' Conics, written in Ibn al-Haytham's own handwriting, exists in Aya Sofya: (MS Aya Sofya 2762, 307 fol., dated Safar 415 a.h. [1024]).) Among his students were Sorkhab (Sohrab), a Persian from Semnan, and Abu al-Wafa Mubashir ibn Fatek, an Egyptian prince. Book of Optics Alhazen's most famous work is his seven-volume treatise on optics Kitab al-Manazir (Book of Optics), written from 1011 to 1021. Optics was translated into Latin by an unknown scholar at the end of the 12th century or the beginning of the 13th century. This work enjoyed a great reputation during the Middle Ages. 
The Latin version of De aspectibus was translated at the end of the 14th century into Italian vernacular, under the title De li aspecti. It was printed by Friedrich Risner in 1572, with the title Opticae thesaurus: Alhazeni Arabis libri septem, nuncprimum editi; Eiusdem liber De Crepusculis et nubium ascensionibus (English: Treasury of Optics: seven books by the Arab Alhazen, first edition; by the same, on twilight and the height of clouds). Risner is also the author of the name variant "Alhazen"; before Risner he was known in the west as Alhacen. Works by Alhazen on geometric subjects were discovered in the Bibliothèque nationale in Paris in 1834 by E. A. Sedillot. In all, A. Mark Smith has accounted for 18 full or near-complete manuscripts, and five fragments, which are preserved in 14 locations, including one in the Bodleian Library at Oxford, and one in the library of Bruges. Theory of optics Two major theories on vision prevailed in classical antiquity. The first theory, the emission theory, was supported by such thinkers as Euclid and Ptolemy, who believed that sight worked by the eye emitting rays of light. The second theory, the intromission theory supported by Aristotle and his followers, had physical forms entering the eye from an object. Previous Islamic writers (such as al-Kindi) had argued essentially on Euclidean, Galenist, or Aristotelian lines. The strongest influence on the Book of Optics was from Ptolemy's Optics, while the description of the anatomy and physiology of the eye was based on Galen's account. Alhazen's achievement was to come up with a theory that successfully combined parts of the mathematical ray arguments of Euclid, the medical tradition of Galen, and the intromission theories of Aristotle. Alhazen's intromission theory followed al-Kindi (and broke with Aristotle) in asserting that "from each point of every colored body, illuminated by any light, issue light and color along every straight line that can be drawn from that point". This left him with the problem of explaining how a coherent image was formed from many independent sources of radiation; in particular, every point of an object would send rays to every point on the eye. What Alhazen needed was for each point on an object to correspond to one point only on the eye. He attempted to resolve this by asserting that the eye would only perceive perpendicular rays from the object—for any one point on the eye, only the ray that reached it directly, without being refracted by any other part of the eye, would be perceived. He argued, using a physical analogy, that perpendicular rays were stronger than oblique rays: in the same way that a ball thrown directly at a board might break the board, whereas a ball thrown obliquely at the board would glance off, perpendicular rays were stronger than refracted rays, and it was only perpendicular rays which were perceived by the eye. As there was only one perpendicular ray that would enter the eye at any one point, and all these rays would converge on the centre of the eye in a cone, this allowed him to resolve the problem of each point on an object sending many rays to the eye; if only the perpendicular ray mattered, then he had a one-to-one correspondence and the confusion could be resolved. He later asserted (in book seven of the Optics) that other rays would be refracted through the eye and perceived as if perpendicular. 
His arguments regarding perpendicular rays do not clearly explain why only perpendicular rays were perceived; why would the weaker oblique rays not be perceived more weakly? His later argument that refracted rays would be perceived as if perpendicular does not seem persuasive. However, despite its weaknesses, no other theory of the time was so comprehensive, and it was enormously influential, particularly in Western Europe. Directly or indirectly, his De Aspectibus (Book of Optics) inspired much activity in optics between the 13th and 17th centuries. Kepler's later theory of the retinal image (which resolved the problem of the correspondence of points on an object and points in the eye) built directly on the conceptual framework of Alhazen. Although only one commentary on Alhazen's optics has survived the Islamic Middle Ages, Geoffrey Chaucer mentions the work in The Canterbury Tales: "They spoke of Alhazen and Vitello, And Aristotle, who wrote, in their lives, On strange mirrors and optical instruments." Ibn al-Haytham was known for his contributions to optics, specifically to the theory of vision and the theory of light. He held that rays of light are radiated from specific points on a surface. The possibility of such light propagation suggested that light was independent of vision; he also held that light moves at a very great speed. Alhazen showed through experiment that light travels in straight lines, and carried out various experiments with lenses, mirrors, refraction, and reflection. His analyses of reflection and refraction considered the vertical and horizontal components of light rays separately. Alhazen studied the process of sight, the structure of the eye, image formation in the eye, and the visual system. Ian P. Howard argued in a 1996 Perception article that Alhazen should be credited with many discoveries and theories previously attributed to Western Europeans writing centuries later. For example, he described what became in the 19th century Hering's law of equal innervation. He wrote a description of vertical horopters 600 years before Aguilonius that is actually closer to the modern definition than Aguilonius's—and his work on binocular disparity was repeated by Panum in 1858. Craig Aaen-Stockdale, while agreeing that Alhazen should be credited with many advances, has expressed some caution, especially when considering Alhazen in isolation from Ptolemy, with whom Alhazen was extremely familiar. Alhazen corrected a significant error of Ptolemy regarding binocular vision, but otherwise his account is very similar; Ptolemy also attempted to explain what is now called Hering's law. In general, Alhazen built on and expanded the optics of Ptolemy. In a more detailed account of Ibn al-Haytham's contribution to the study of binocular vision based on Lejeune and Sabra, Raynaud showed that the concepts of correspondence, homonymous and crossed diplopia were in place in Ibn al-Haytham's optics. But contrary to Howard, he explained why Ibn al-Haytham did not give the circular figure of the horopter and why, by reasoning experimentally, he was in fact closer to the discovery of Panum's fusional area than that of the Vieth-Müller circle. In this regard, Ibn al-Haytham's theory of binocular vision faced two main limits: the lack of recognition of the role of the retina, and obviously the lack of an experimental investigation of ocular tracts. 
Alhazen's most original contribution was that, after describing how he thought the eye was anatomically constructed, he went on to consider how this anatomy would behave functionally as an optical system. His understanding of pinhole projection from his experiments appears to have influenced his consideration of image inversion in the eye, which he sought to avoid. He maintained that the rays that fell perpendicularly on the lens (or glacial humor as he called it) were further refracted outward as they left the glacial humor and the resulting image thus passed upright into the optic nerve at the back of the eye. He followed Galen in believing that the lens was the receptive organ of sight, although some of his work hints that he thought the retina was also involved. Alhazen's synthesis of light and vision adhered to the Aristotelian scheme, exhaustively describing the process of vision in a logical, complete fashion. Scientific method An aspect associated with Alhazen's optical research is related to systemic and methodological reliance on experimentation (i'tibar)(Arabic: إعتبار) and controlled testing in his scientific inquiries. Moreover, his experimental directives rested on combining classical physics (ilm tabi'i) with mathematics (ta'alim; geometry in particular). This mathematical-physical approach to experimental science supported most of his propositions in Kitab al-Manazir (The Optics; De aspectibus or Perspectivae) and grounded his theories of vision, light and colour, as well as his research in catoptrics and dioptrics (the study of the reflection and refraction of light, respectively). According to Matthias Schramm, Alhazen "was the first to make a systematic use of the method of varying the experimental conditions in a constant and uniform manner, in an experiment showing that the intensity of the light-spot formed by the projection of the moonlight through two small apertures onto a screen diminishes constantly as one of the apertures is gradually blocked up." G. J. Toomer expressed some skepticism regarding Schramm's view, partly because at the time (1964) the Book of Optics had not yet been fully translated from Arabic, and Toomer was concerned that without context, specific passages might be read anachronistically. While acknowledging Alhazen's importance in developing experimental techniques, Toomer argued that Alhazen should not be considered in isolation from other Islamic and ancient thinkers. Toomer concluded his review by saying that it would not be possible to assess Schramm's claim that Ibn al-Haytham was the true founder of modern physics without translating more of Alhazen's work and fully investigating his influence on later medieval writers. Alhazen's problem His work on catoptrics in Book V of the Book of Optics contains a discussion of what is now known as Alhazen's problem, first formulated by Ptolemy in 150 AD. It comprises drawing lines from two points in the plane of a circle meeting at a point on the circumference and making equal angles with the normal at that point. This is equivalent to finding the point on the edge of a circular billiard table at which a player must aim a cue ball at a given point to make it bounce off the table edge and hit another ball at a second given point. Thus, its main application in optics is to solve the problem, "Given a light source and a spherical mirror, find the point on the mirror where the light will be reflected to the eye of an observer." This leads to an equation of the fourth degree. 
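One compact modern way to see where the fourth-degree equation comes from uses complex numbers; this is only a sketch of a standard reduction, not Alhazen's own conic-section method. Model the mirror as the unit circle, write the unknown reflection point as a complex number z with |z| = 1, and let a and b be the positions of the light source and the observer's eye. The law of reflection requires the two rays to make equal angles with the radius through z, which (allowing also the spurious "external bisector" solutions) amounts to requiring that (a − z)(b − z)/z^2 be a real number. Using the relation z̄ = 1/z on the unit circle and clearing denominators gives

\[
\bar a\,\bar b\,z^{4} - (\bar a + \bar b)\,z^{3} + (a + b)\,z - ab = 0,
\]

a quartic whose roots on the unit circle are the candidate reflection points; selecting the physically meaningful root solves Alhazen's problem.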
This eventually led Alhazen to derive a formula for the sum of fourth powers, where previously only the formulas for the sums of squares and cubes had been stated. His method can be readily generalized to find the formula for the sum of any integral powers, although he did not himself do this (perhaps because he only needed the fourth power to calculate the volume of the paraboloid he was interested in). He used his result on sums of integral powers to perform what would now be called an integration, where the formulas for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid. Alhazen eventually solved the problem using conic sections and a geometric proof. His solution was extremely long and complicated and may not have been understood by mathematicians reading him in Latin translation. Later mathematicians used Descartes' analytical methods to analyse the problem. An algebraic solution to the problem was finally found in 1965 by Jack M. Elkin, an actuarian. Other solutions were discovered in 1989 by Harald Riede and in 1997 by the Oxford mathematician Peter M. Neumann. Recently, Mitsubishi Electric Research Laboratories (MERL) researchers solved the extension of Alhazen's problem to general rotationally symmetric quadric mirrors including hyperbolic, parabolic and elliptical mirrors. Camera Obscura The camera obscura was known to the ancient Chinese, and was described by the Han Chinese polymath Shen Kuo in his scientific book Dream Pool Essays, published in the year 1088 C.E. Aristotle had discussed the basic principle behind it in his Problems, but Alhazen's work contained the first clear description of the camera obscura to appear outside of China (in the areas of the Middle East, Europe, Africa and India), as well as an early analysis of the device. Ibn al-Haytham used a camera obscura mainly to observe a partial solar eclipse. In his essay, Ibn al-Haytham writes that he observed the sickle-like shape of the sun at the time of an eclipse. The introduction reads as follows: "The image of the sun at the time of the eclipse, unless it is total, demonstrates that when its light passes through a narrow, round hole and is cast on a plane opposite to the hole it takes on the form of a moonsickle." His findings solidified the treatise's importance in the history of the camera obscura, but the treatise is important in many other respects. Ancient optics and medieval optics were divided into optics and burning mirrors. Optics proper mainly focused on the study of vision, while burning mirrors focused on the properties of light and luminous rays. On the shape of the eclipse is probably one of the first attempts made by Ibn al-Haytham to articulate these two sciences. Very often Ibn al-Haytham's discoveries benefited from the intersection of mathematical and experimental contributions. This is the case with On the shape of the eclipse. Besides the fact that this treatise allowed more people to study partial eclipses of the sun, it especially allowed a better understanding of how the camera obscura works. This treatise is a physico-mathematical study of image formation inside the camera obscura. Ibn al-Haytham takes an experimental approach, and determines the result by varying the size and the shape of the aperture, the focal length of the camera, and the shape and intensity of the light source. 
In his work he explains the inversion of the image in the camera obscura, the fact that the image is similar to the source when the hole is small, but also the fact that the image can differ from the source when the hole is large. All these results are produced by using a point analysis of the image. Other contributions The Kitab al-Manazir (Book of Optics) describes several experimental observations that Alhazen made and how he used his results to explain certain optical phenomena using mechanical analogies. He conducted experiments with projectiles and concluded that only the impact of perpendicular projectiles on surfaces was forceful enough to make them penetrate, whereas surfaces tended to deflect oblique projectile strikes. For example, to explain refraction from a rare to a dense medium, he used the mechanical analogy of an iron ball thrown at a thin slate covering a wide hole in a metal sheet. A perpendicular throw breaks the slate and passes through, whereas an oblique one with equal force and from an equal distance does not. He also used this result to explain how intense, direct light hurts the eye, using a mechanical analogy: Alhazen associated 'strong' lights with perpendicular rays and 'weak' lights with oblique ones. The obvious answer to the problem of multiple rays and the eye was in the choice of the perpendicular ray, since only one such ray from each point on the surface of the object could penetrate the eye. Sudanese psychologist Omar Khaleefa has argued that Alhazen should be considered the founder of experimental psychology, for his pioneering work on the psychology of visual perception and optical illusions. Khaleefa has also argued that Alhazen should also be considered the "founder of psychophysics", a sub-discipline and precursor to modern psychology. Although Alhazen made many subjective reports regarding vision, there is no evidence that he used quantitative psychophysical techniques and the claim has been rebuffed. Alhazen offered an explanation of the Moon illusion, an illusion that played an important role in the scientific tradition of medieval Europe. Many authors repeated explanations that attempted to solve the problem of the Moon appearing larger near the horizon than it does when higher up in the sky. Alhazen argued against Ptolemy's refraction theory, and defined the problem in terms of perceived, rather than real, enlargement. He said that judging the distance of an object depends on there being an uninterrupted sequence of intervening bodies between the object and the observer. When the Moon is high in the sky there are no intervening objects, so the Moon appears close. The perceived size of an object of constant angular size varies with its perceived distance. Therefore, the Moon appears closer and smaller high in the sky, and further and larger on the horizon. Through works by Roger Bacon, John Pecham and Witelo based on Alhazen's explanation, the Moon illusion gradually came to be accepted as a psychological phenomenon, with the refraction theory being rejected in the 17th century. Although Alhazen is often credited with the perceived distance explanation, he was not the first author to offer it. Cleomedes ( 2nd century) gave this account (in addition to refraction), and he credited it to Posidonius ( 135–50 BCE). Ptolemy may also have offered this explanation in his Optics, but the text is obscure. 
Alhazen's writings were more widely available in the Middle Ages than those of these earlier authors, and that probably explains why Alhazen received the credit. Other works on physics Optical treatises Besides the Book of Optics, Alhazen wrote several other treatises on the same subject, including his Risala fi l-Daw (Treatise on Light). He investigated the properties of luminance, the rainbow, eclipses, twilight, and moonlight. Experiments with mirrors and the refractive interfaces between air, water, and glass cubes, hemispheres, and quarter-spheres provided the foundation for his theories on catoptrics. Celestial physics Alhazen discussed the physics of the celestial region in his Epitome of Astronomy, arguing that Ptolemaic models must be understood in terms of physical objects rather than abstract hypotheses, in other words that it should be possible to create physical models in which (for example) none of the celestial bodies would collide with each other. The suggestion of mechanical models for the Earth-centred Ptolemaic model "greatly contributed to the eventual triumph of the Ptolemaic system among the Christians of the West". Alhazen's determination to root astronomy in the realm of physical objects was important, however, because it meant astronomical hypotheses "were accountable to the laws of physics", and could be criticised and improved upon in those terms. He also wrote Maqala fi daw al-qamar (On the Light of the Moon). Mechanics In his work, Alhazen discussed theories on the motion of a body. In his Treatise on Place, Alhazen disagreed with Aristotle's view that nature abhors a void, and he used geometry in an attempt to demonstrate that place (al-makan) is the imagined three-dimensional void between the inner surfaces of a containing body. Astronomical works On the Configuration of the World In his On the Configuration of the World Alhazen presented a detailed description of the physical structure of the earth. The book is a non-technical explanation of Ptolemy's Almagest, which was eventually translated into Hebrew and Latin in the 13th and 14th centuries and subsequently had an influence on astronomers such as Georg von Peuerbach during the European Middle Ages and Renaissance. Doubts Concerning Ptolemy In his Al-Shukūk ‛alā Batlamyūs, variously translated as Doubts Concerning Ptolemy or Aporias against Ptolemy, published at some time between 1025 and 1028, Alhazen criticized Ptolemy's Almagest, Planetary Hypotheses, and Optics, pointing out various contradictions he found in these works, particularly in astronomy. Ptolemy's Almagest concerned mathematical theories regarding the motion of the planets, whereas the Hypotheses concerned what Ptolemy thought was the actual configuration of the planets. Ptolemy himself acknowledged that his theories and configurations did not always agree with each other, arguing that this was not a problem provided it did not result in noticeable error, but Alhazen was particularly scathing in his criticism of the inherent contradictions in Ptolemy's works. He considered that some of the mathematical devices Ptolemy introduced into astronomy, especially the equant, failed to satisfy the physical requirement of uniform circular motion, and noted the absurdity of relating actual physical motions to imaginary mathematical points, lines and circles. Having pointed out these problems, Alhazen appears to have intended to resolve the contradictions he identified in Ptolemy in a later work.
Alhazen believed there was a "true configuration" of the planets that Ptolemy had failed to grasp. He intended to complete and repair Ptolemy's system, not to replace it completely. In the Doubts Concerning Ptolemy Alhazen set out his views on the difficulty of attaining scientific knowledge and the need to question existing authorities and theories. He held that the criticism of existing theories, which dominated this book, holds a special place in the growth of scientific knowledge. Model of the Motions of Each of the Seven Planets Alhazen's The Model of the Motions of Each of the Seven Planets was written around 1038. Only one damaged manuscript has been found, with only the introduction and the first section, on the theory of planetary motion, surviving. (There was also a second section on astronomical calculation, and a third section, on astronomical instruments.) Following on from his Doubts on Ptolemy, Alhazen described a new, geometry-based planetary model, describing the motions of the planets in terms of spherical geometry, infinitesimal geometry and trigonometry. He kept a geocentric universe and assumed that celestial motions are uniformly circular, which required the inclusion of epicycles to explain observed motion, but he managed to eliminate Ptolemy's equant. In general, his model did not try to provide a causal explanation of the motions, but concentrated on providing a complete, geometric description that could explain observed motions without the contradictions inherent in Ptolemy's model. Other astronomical works Alhazen wrote a total of twenty-five astronomical works, some concerning technical issues such as Exact Determination of the Meridian, a second group concerning accurate astronomical observation, and a third group concerning various astronomical problems and questions such as the location of the Milky Way; Alhazen made the first systematic effort to evaluate the Milky Way's parallax, combining Ptolemy's data and his own. He concluded that the parallax is (probably very much) smaller than the lunar parallax, and that the Milky Way should therefore be a celestial object. Though he was not the first to argue that the Milky Way does not belong to the atmosphere, he was the first to offer a quantitative analysis in support of the claim. The fourth group consists of ten works on astronomical theory, including the Doubts and Model of the Motions discussed above. Mathematical works In mathematics, Alhazen built on the mathematical works of Euclid and Thabit ibn Qurra and worked on "the beginnings of the link between algebra and geometry". He developed a formula for summing the first 100 natural numbers and justified it with a geometric proof. Geometry Alhazen explored what is now known as the Euclidean parallel postulate, the fifth postulate in Euclid's Elements, using a proof by contradiction, and in effect introducing the concept of motion into geometry. He formulated the Lambert quadrilateral, which Boris Abramovich Rozenfeld names the "Ibn al-Haytham–Lambert quadrilateral". In elementary geometry, Alhazen attempted to solve the problem of squaring the circle using the area of lunes (crescent shapes), but later gave up on the impossible task. The two lunes formed from a right triangle by erecting a semicircle on each of the triangle's sides, inward for the hypotenuse and outward for the other two sides, are known as the lunes of Alhazen; they have the same total area as the triangle itself. Number theory Alhazen's contributions to number theory include his work on perfect numbers. In his Analysis and Synthesis, he may have been the first to state that every even perfect number is of the form 2^(n−1)(2^n − 1), where 2^n − 1 is prime, but he was not able to prove this result; Euler later proved it in the 18th century, and it is now called the Euclid–Euler theorem.
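The correspondence between Mersenne primes and even perfect numbers in that statement is easy to verify computationally. The sketch below is a modern illustration of the result, not Alhazen's own method: it builds numbers of the form 2^(p−1)(2^p − 1) for exponents p that yield a prime 2^p − 1 and checks that each equals the sum of its proper divisors.

```python
def is_prime(n):
    """Trial division, adequate for the small Mersenne candidates used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def proper_divisor_sum(n):
    """Sum of the proper divisors of n (divisors smaller than n), pairing d with n // d."""
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def even_perfect_numbers(max_exponent):
    """Yield 2**(p - 1) * (2**p - 1) for every p <= max_exponent with 2**p - 1 prime."""
    for p in range(2, max_exponent + 1):
        mersenne = 2 ** p - 1
        if is_prime(mersenne):
            yield 2 ** (p - 1) * mersenne

for n in even_perfect_numbers(13):
    print(n, proper_divisor_sum(n) == n)   # 6, 28, 496, 8128, 33550336 -> all True
```

Running it lists 6, 28, 496, 8128 and 33550336, each confirmed as equal to the sum of its proper divisors.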
Alhazen solved problems involving congruences using what is now called Wilson's theorem. In his Opuscula, Alhazen considers the solution of a system of congruences, and gives two general methods of solution. His first method, the canonical method, involved Wilson's theorem, while his second method involved a version of the Chinese remainder theorem. Calculus Alhazen discovered the sum formula for the fourth power, using a method that could be generally used to determine the sum for any integral power. He used this to find the volume of a paraboloid. He could therefore find the integral formula for any polynomial, although he did not develop a general formula. Other works Influence of Melodies on the Souls of Animals Alhazen also wrote a Treatise on the Influence of Melodies on the Souls of Animals, although no copies have survived. It appears to have been concerned with the question of whether animals could react to music, for example whether a camel would increase or decrease its pace. Engineering In engineering, one account of his career as a civil engineer has him summoned to Egypt by the Fatimid Caliph, Al-Hakim bi-Amr Allah, to regulate the flooding of the Nile River. He carried out a detailed scientific study of the annual inundation of the Nile, and he drew plans for building a dam at the site of the modern-day Aswan Dam. His field work, however, later made him aware of the impracticality of this scheme, and he soon feigned madness so he could avoid punishment from the Caliph. Philosophy In his Treatise on Place, Alhazen disagreed with Aristotle's view that nature abhors a void, and he used geometry in an attempt to demonstrate that place (al-makan) is the imagined three-dimensional void between the inner surfaces of a containing body. Abd-el-latif, a supporter of Aristotle's philosophical view of place, later criticized the work in Fi al-Radd 'ala Ibn al-Haytham fi al-makan (A refutation of Ibn al-Haytham's place) for its geometrization of place. Alhazen also discussed space perception and its epistemological implications in his Book of Optics. In "tying the visual perception of space to prior bodily experience, Alhazen unequivocally rejected the intuitiveness of spatial perception and, therefore, the autonomy of vision. Without tangible notions of distance and size for correlation, sight can tell us next to nothing about such things." Alhazen put forward many ideas that unsettled what was then understood of reality. His work on optics and perspective bore not only on physical science but also on existential philosophy and on religious viewpoints, since it implied that what an observer takes to be reality depends on the observer and their perspective. Theology Alhazen was a Muslim and most sources report that he was a Sunni and a follower of the Ash'ari school. Ziauddin Sardar says that some of the greatest Muslim scientists, such as Ibn al-Haytham and Abū Rayhān al-Bīrūnī, who were pioneers of the scientific method, were themselves followers of the Ashʿari school of Islamic theology.
Like other Ashʿarites, Ibn al-Haytham believed that faith, or taqlid, should apply only to the prophets of Islam and not to ancient Hellenistic or any other authorities; this view formed the basis for much of his scientific skepticism and his criticism of Ptolemy and other ancient authorities in his Doubts Concerning Ptolemy and Book of Optics. Alhazen wrote a work on Islamic theology in which he discussed prophethood and developed a system of philosophical criteria to discern its false claimants in his time. He also wrote a treatise entitled Finding the Direction of Qibla by Calculation, in which he discussed how to determine mathematically the qibla, the direction towards which prayers (salat) are directed. There are occasional references to theology or religious sentiment in his technical works, for example in Doubts Concerning Ptolemy and in The Winding Motion, where he touches on the relation of objective truth and God. Legacy Alhazen made significant contributions to optics, number theory, geometry, astronomy and natural philosophy. Alhazen's work on optics is credited with contributing a new emphasis on experiment. His main work, Kitab al-Manazir (Book of Optics), was known in the Muslim world mainly, but not exclusively, through the thirteenth-century commentary by Kamāl al-Dīn al-Fārisī, the Tanqīḥ al-Manāẓir li-dhawī l-abṣār wa l-baṣā'ir. In al-Andalus, it was used by the eleventh-century prince of the Banu Hud dynasty of Zaragoza and author of an important mathematical text, al-Mu'taman ibn Hūd. A Latin translation of the Kitab al-Manazir was made probably in the late twelfth or early thirteenth century. This translation was read by and greatly influenced a number of scholars in Christian Europe, including Roger Bacon, Robert Grosseteste, Witelo, Giambattista della Porta, Leonardo da Vinci, Galileo Galilei, Christiaan Huygens, René Descartes, and Johannes Kepler. His research in catoptrics (the study of optical systems using mirrors) centred on spherical and parabolic mirrors and spherical aberration. He made the observation that the ratio between the angle of incidence and refraction does not remain constant, and investigated the magnifying power of a lens. His work on catoptrics also contains the problem known as "Alhazen's problem". Meanwhile, in the Islamic world, Alhazen's work influenced Averroes' writings on optics, and his legacy was further advanced through the 'reforming' of his Optics by the Persian scientist Kamal al-Din al-Farisi (died c. 1320) in the latter's Kitab Tanqih al-Manazir (The Revision of [Ibn al-Haytham's] Optics). Alhazen wrote as many as 200 books, although only 55 have survived. Some of his treatises on optics survived only through Latin translation. During the Middle Ages his books on cosmology were translated into Latin, Hebrew and other languages. The impact crater Alhazen on the Moon is named in his honour, as was the asteroid 59239 Alhazen. In honour of Alhazen, the Aga Khan University (Pakistan) named its endowed chair in Ophthalmology "The Ibn-e-Haitham Associate Professor and Chief of Ophthalmology". Alhazen, by the name Ibn al-Haytham, is featured on the obverse of the Iraqi 10,000-dinar banknote issued in 2003, and on 10-dinar notes from 1982. The 2015 International Year of Light celebrated the 1000th anniversary of the works on optics by Ibn Al-Haytham. Commemorations In 2014, the "Hiding in the Light" episode of Cosmos: A Spacetime Odyssey, presented by Neil deGrasse Tyson, focused on the accomplishments of Ibn al-Haytham.
He was voiced by Alfred Molina in the episode. Over forty years previously, Jacob Bronowski presented Alhazen's work in a similar television documentary (and the corresponding book), The Ascent of Man. In episode 5 (The Music of the Spheres), Bronowski remarked that in his view, Alhazen was "the one really original scientific mind that Arab culture produced", whose theory of optics was not improved on till the time of Newton and Leibniz. H. J. J. Winter, a British historian of science, summing up the importance of Ibn al-Haytham in the history of physics, wrote: "After the death of Archimedes no really great physicist appeared until Ibn al-Haytham. If, therefore, we confine our interest only to the history of physics, there is a long period of over twelve hundred years during which the Golden Age of Greece gave way to the era of Muslim Scholasticism, and the experimental spirit of the noblest physicist of Antiquity lived again in the Arab Scholar from Basra." UNESCO declared 2015 the International Year of Light, and its Director-General Irina Bokova dubbed Ibn al-Haytham "the father of optics". Among other things, this was to celebrate Ibn Al-Haytham's achievements in optics, mathematics and astronomy. An international campaign created by the 1001 Inventions organisation, titled 1001 Inventions and the World of Ibn Al-Haytham, featured a series of interactive exhibits, workshops and live shows about his work, in partnership with science centers, science festivals, museums and educational institutions, as well as digital and social media platforms. The campaign also produced and released the short educational film 1001 Inventions and the World of Ibn Al-Haytham. List of works According to medieval biographers, Alhazen wrote more than 200 works on a wide range of subjects, of which at least 96 scientific works are known. Most of his works are now lost, but more than 50 of them have survived to some extent. Nearly half of his surviving works are on mathematics, 23 of them are on astronomy, and 14 of them are on optics, with a few on other subjects. Not all his surviving works have yet been studied, but some of the ones that have are given below.
Book of Optics (كتاب المناظر)
Analysis and Synthesis (مقالة في التحليل والتركيب)
Balance of Wisdom (ميزان الحكمة)
Corrections to the Almagest (تصويبات على المجسطي)
Discourse on Place (مقالة في المكان)
Exact Determination of the Pole (التحديد الدقيق للقطب)
Exact Determination of the Meridian (رسالة في الشفق)
Finding the Direction of Qibla by Calculation (كيفية حساب اتجاه القبلة)
Horizontal Sundials (المزولة الأفقية)
Hour Lines (خطوط الساعة)
Doubts Concerning Ptolemy (شكوك على بطليموس)
Maqala fi'l-Qarastun (مقالة في قرسطون)
On Completion of the Conics (إكمال المخاريط)
On Seeing the Stars (رؤية الكواكب)
On Squaring the Circle (مقالة فی تربیع الدائرة)
On the Burning Sphere (المرايا المحرقة بالدوائر)
On the Configuration of the World (تكوين العالم)
On the Form of Eclipse (مقالة فی صورة الکسوف)
On the Light of Stars (مقالة في ضوء النجوم)
On the Light of the Moon (مقالة في ضوء القمر)
On the Milky Way (مقالة في درب التبانة)
On the Nature of Shadows (كيفيات الإظلال)
On the Rainbow and Halo (مقالة في قوس قزح)
Opuscula (Minor Works)
Resolution of Doubts Concerning the Almagest (تحليل شكوك حول الجست)
Resolution of Doubts Concerning the Winding Motion
The Correction of the Operations in Astronomy (تصحيح العمليات في الفلك)
The Different Heights of the Planets (اختلاف ارتفاع الكواكب)
The Direction of Mecca (اتجاه القبلة)
The Model of the Motions of Each of the Seven Planets (نماذج حركات الكواكب السبعة)
The Model of the Universe (نموذج الكون)
The Motion of the Moon (حركة القمر)
The Ratios of Hourly Arcs to their Heights
The Winding Motion (الحركة المتعرجة)
Treatise on Light (رسالة في الضوء)
Treatise on Place (رسالة في المكان)
Treatise on the Influence of Melodies on the Souls of Animals (تأثير اللحون الموسيقية في النفوس الحيوانية)
كتاب في تحليل المسائل الهندسية (A Book on the Analysis of Geometrical Problems)
الجامع في أصول الحساب (The Compendium on the Principles of Arithmetic)
قول فی مساحة الکرة (Discourse on the Measurement of the Sphere)
القول المعروف بالغریب فی حساب المعاملات (A Discourse on the Arithmetic of Transactions)
خواص المثلث من جهة العمود (Properties of the Triangle with Respect to its Altitude)
رسالة فی مساحة المسجم المکافی (Treatise on the Measurement of the Paraboloidal Solid)
شرح أصول إقليدس (Commentary on Euclid's Elements)
المرايا المحرقة بالقطوع (On Burning Mirrors Formed by Conic Sections)
Lost works
A Book in which I have Summarized the Science of Optics from the Two Books of Euclid and Ptolemy, to which I have added the Notions of the First Discourse which is Missing from Ptolemy's Book
Treatise on Burning Mirrors
Treatise on the Nature of [the Organ of] Sight and on How Vision is Achieved Through It
See also: "Hiding in the Light", History of mathematics, History of optics, History of physics, History of science, History of scientific method, Hockney–Falco thesis, Mathematics in medieval Islam, Physics in medieval Islam, Science in the medieval Islamic world, Fatima al-Fihri, Islamic Golden Age
Ibn al-Haytham
Ambergris, also called ambergrease or grey amber, is a solid, waxy, flammable substance of a dull grey or blackish colour produced in the digestive system of sperm whales. Freshly produced ambergris has a marine, fecal odor. It acquires a sweet, earthy scent as it ages, commonly likened to the fragrance of rubbing alcohol without the vaporous chemical astringency. Ambergris has been highly valued by perfume makers as a fixative that allows the scent to endure much longer, although it has been mostly replaced by synthetic ambroxide. Dogs are attracted to the smell of ambergris and are sometimes used by ambergris searchers. Etymology The word ambergris comes from the Old French "ambre gris" or "grey amber". The word "amber" comes from the same source, but it has been applied almost exclusively to fossilized tree resins from the Baltic region since the late 13th century in Europe. The word "amber" is in turn derived from the Middle Persian (Pahlavi) word ambar (variants: ’mbl, 'nbl). Formation Ambergris is formed from a secretion of the bile duct in the intestines of the sperm whale, and can be found floating on the sea or washed up on coastlines. It is sometimes found in the abdomens of dead sperm whales. Because the beaks of giant squids have been discovered within lumps of ambergris, scientists have theorized that the substance is produced by the whale's gastrointestinal tract to ease the passage of hard, sharp objects that it may have eaten. Ambergris is passed like fecal matter. It is speculated that an ambergris mass too large to be passed through the intestines is expelled via the mouth, but this remains under debate. Ambergris takes years to form. Christopher Kemp, the author of Floating Gold: A Natural (and Unnatural) History of Ambergris, says that it is only produced by sperm whales, and only by an estimated one percent of them. Ambergris is rare; once expelled by a whale, it often floats for years before making landfall. The slim chances of finding ambergris and the legal ambiguity involved led perfume makers away from ambergris, and led chemists on a quest to find viable alternatives. Ambergris is found primarily in the Atlantic Ocean and on the coasts of South Africa, Brazil, Madagascar, the East Indies, the Maldives, China, Japan, India, Australia, New Zealand, and the Molucca Islands. Most commercially collected ambergris comes from the Bahamas in the Atlantic, particularly New Providence. In 2021, fishermen found a 280-pound piece of ambergris off the coast of Yemen, valued at US$1.5 million. Fossilised ambergris from 1.75 million years ago has also been found. Physical properties Ambergris is found in lumps of various shapes, sizes and weights. When initially expelled by or removed from the whale, the fatty precursor of ambergris is pale white in color (sometimes streaked with black), soft, with a strong fecal smell. Following months to years of photodegradation and oxidation in the ocean, this precursor gradually hardens, developing a dark grey or black color, a crusty and waxy texture, and a peculiar odor that is at once sweet, earthy, marine, and animalic. Its scent has been generally described as a vastly richer and smoother version of isopropanol without its stinging harshness. In this developed condition, ambergris has a specific gravity ranging from 0.780 to 0.926, low enough for it to float in seawater. It melts at a relatively low temperature into a fatty, yellow resinous liquid and is volatilised into a white vapor at a somewhat higher temperature. It is soluble in ether, and in volatile and fixed oils.
Chemical properties Ambergris is relatively nonreactive to acid. White crystals of a terpene known as ambrein, discovered by Ružička and Fernand Lardon in 1946, can be separated from ambergris by heating raw ambergris in alcohol, then allowing the resulting solution to cool. Breakdown of the relatively scentless ambrein through oxidation produces ambroxan and ambrinol, the main odor components of ambergris. Ambroxan is now produced synthetically and used extensively in the perfume industry. Applications Ambergris has been mostly known for its use in creating perfume and fragrance much like musk. Perfumes can still be found with ambergris. Ambergris has historically been used in food and drink. A serving of eggs and ambergris was reportedly King Charles II of England's favorite dish. A recipe for Rum Shrub liqueur from the mid 19th century called for a thread of ambergris to be added to rum, almonds, cloves, cassia, and the peel of oranges in making a cocktail from The English and Australian Cookery Book. It has been used as a flavoring agent in Turkish coffee and in hot chocolate in 18th century Europe. The substance is considered an aphrodisiac in some cultures. Ancient Egyptians burned ambergris as incense, while in modern Egypt ambergris is used for scenting cigarettes. The ancient Chinese called the substance "dragon's spittle fragrance". During the Black Death in Europe, people believed that carrying a ball of ambergris could help prevent them from contracting plague. This was because the fragrance covered the smell of the air which was believed to be a cause of plague. During the Middle Ages, Europeans used ambergris as a medication for headaches, colds, epilepsy, and other ailments. Legality From the 18th to the mid-19th century, the whaling industry prospered. By some reports, nearly 50,000 whales, including sperm whales, were killed each year. Throughout the 1800s, "millions of whales were killed for their oil, whalebone, and ambergris" to fuel profits, and they soon became endangered as a species as a result. Due to studies showing that the whale populations were being threatened, the International Whaling Commission instituted a moratorium on commercial whaling in 1982. Although ambergris is not harvested from whales, many countries also ban the trade of ambergris as part of the more general ban on the hunting and exploitation of whales. Urine, faeces and ambergris (that has been naturally excreted by a sperm whale) are waste products not considered parts or derivatives of a CITES species and are therefore not covered by the provisions of the convention. Illegal Australia – Under federal law, the export and import of ambergris for commercial purposes is banned by the Environment Protection and Biodiversity Conservation Act 1999. The various states and territories have additional laws regarding ambergris. United States – The possession and trade of ambergris is prohibited by the Endangered Species Act of 1973. India - Sale or possession is illegal under the Wild Life (Protection) Act, 1972. Legal United Kingdom France Switzerland Maldives In popular culture Historical The knowledge of ambergris and how it is produced may have been kept secret. Ibn Battuta wrote about ambergris, "I sent along with them all the things that I valued and the gems and ambergris..." Glasgow apothecary John Spreul told the historian Robert Wodrow about the substance but said he had never told anyone else. 
In literature In chapter 91 of Herman Melville's Moby-Dick (1851), Stubb, one of the mates of the Pequod, fools the captain of a French whaler (Rose-bud) into abandoning the corpse of a sperm whale found floating in the sea. His plan is to recover the corpse himself in hopes that it contains ambergris. His hope proves well founded, and the Pequod's crew recovers a valuable quantity of the substance. Melville devotes the following chapter to a discussion of ambergris, with special attention to the irony that "fine ladies and gentlemen should regale themselves with an essence found in the inglorious bowels of a sick whale." In F. S. Clifford's A Romance of Perfume Lands or the Search for Capt. Jacob Cole (October 1881), the last chapter concerns one of the novel's characters discovering an area of a remote island which contains large amounts of ambergris. He hopes to use this knowledge to help make his fortune in the manufacture of perfumes. The 1949 Ghanada short story Chhori (The Stick) is centred on whaling for ambergris. In television Ambergris features prominently in the 2003 Futurama episode "Three Hundred Big Boys." The fact that "whale vomit" is considered so valuable serves as a gag throughout the episode. Ambergris features prominently in the 2014 Bob's Burgers episode "Ambergris." The Belcher children discover a lump of ambergris, which they decide to sell on the black market. Ambergris is also mentioned in the 2020 Blacklist season 7 episode "Twamie Ullulaq (No. 126)", where Reddington has lost a shipment including, among other things, ambergris and goes to a perfumer to obtain information about the shipment. Ambergris is featured in the television series H2O (series 2, episode 15, "Irresistible"), where it has mesmerizing effects on mermaids.
Ambergris
Alexander Selkirk (1676 – 13 December 1721) was a Scottish privateer and Royal Navy officer who spent four years and four months as a castaway (1704–1709) after being marooned by his captain, initially at his request, on an uninhabited island in the South Pacific Ocean. He survived that ordeal, but succumbed to tropical illness years later while serving aboard a Royal Navy vessel off West Africa. Selkirk was an unruly youth, and joined buccaneering voyages to the South Pacific during the War of the Spanish Succession. One such expedition was on Cinque Ports, captained by Thomas Stradling under the overall command of William Dampier. Stradling's ship stopped to resupply at the uninhabited Juan Fernández Islands, west of South America, and Selkirk judged correctly that the craft was unseaworthy and asked to be left there. Selkirk's suspicions were soon justified, as Cinque Ports foundered near Malpelo Island, 400 km (250 mi) from the coast of what is now Colombia. By the time he was eventually rescued by the English privateer Woodes Rogers, who was accompanied by Dampier, Selkirk had become adept at hunting and making use of the resources that he found on the island. His story of survival was widely publicised after his return to England, becoming one of the sources of inspiration for writer Daniel Defoe's fictional character Robinson Crusoe. Early life and privateering Alexander Selkirk was the son of a shoemaker and tanner in Lower Largo, Fife, Scotland, born in 1676. In his youth he displayed a quarrelsome and unruly disposition. He was summoned before the Kirk Session in August 1693 for his "indecent conduct in church", but he "did not appear, being gone to sea". He was back at Largo in 1701 when he again came to the attention of church authorities for assaulting his brothers. Early on, he was engaged in buccaneering. In 1703, he joined an expedition of English privateer and explorer William Dampier to the South Pacific Ocean, setting sail from Kinsale in Ireland on 11 September. They carried letters of marque from the Lord High Admiral authorising their armed merchant ships to attack foreign enemies, as the War of the Spanish Succession was then going on between England and Spain. Dampier was captain of St George and Selkirk served on Cinque Ports, St George's companion ship, as sailing master under Captain Thomas Stradling. By this time, Selkirk must have had considerable experience at sea. In February 1704, following a stormy passage around Cape Horn, the privateers fought a long battle with a well-armed French vessel, St Joseph, only to have it escape to warn its Spanish allies of their arrival in the Pacific. A raid on the Panamanian gold mining town of Santa María failed when their landing party was ambushed. The easy capture of Asunción, a heavily laden merchantman, revived the men's hopes of plunder, and Selkirk was put in charge of the prize ship. Dampier took off some much-needed provisions of wine, brandy, sugar and flour, then abruptly set the ship free, arguing that the gain was not worth the effort. In May 1704, Stradling decided to abandon Dampier and strike out on his own. Castaway In September 1704, after parting ways with Dampier, Captain Stradling brought Cinque Ports to an island known to the Spanish as Más a Tierra, located in the uninhabited Juan Fernández archipelago off the coast of Chile, for a mid-expedition restocking of fresh water and supplies. Selkirk had grave concerns about the seaworthiness of their vessel, and wanted to make the necessary repairs before going any farther.
He declared that he would rather stay on Juan Fernández than continue in a dangerously leaky ship. Stradling took him up on the offer and landed Selkirk on the island with a musket, a hatchet, a knife, a cooking pot, a Bible, bedding and some clothes. Selkirk immediately regretted his rashness, but Stradling refused to let him back on board. Cinque Ports did indeed later founder off the coast of what is now Colombia. Stradling and some of his crew survived the loss of their ship but were forced to surrender to the Spanish. The survivors were taken to Lima, Peru, where they endured a harsh imprisonment. Life on the island At first, Selkirk remained along the shoreline of Más a Tierra. During this time he ate spiny lobsters and scanned the ocean daily for rescue, suffering all the while from loneliness, misery and remorse. Hordes of raucous sea lions, gathered on the beach for the mating season, eventually drove him to the island's interior. Once inland, his way of life took a turn for the better. More foods were available there: feral goats—introduced by earlier sailors—provided him with meat and milk, while wild turnips, the leaves of the indigenous cabbage tree and dried Schinus fruits (pink peppercorns) offered him variety and spice. Rats would attack him at night, but he was able to sleep soundly and in safety by domesticating and living near feral cats. Selkirk proved resourceful in using materials that he found on the island: he forged a new knife out of barrel hoops left on the beach, he built two huts out of pepper trees, one of which he used for cooking and the other for sleeping, and he employed his musket to hunt goats and his knife to clean their carcasses. As his gunpowder dwindled, he had to chase prey on foot. During one such chase he was badly injured when he tumbled from a cliff, lying helpless and unable to move for about a day. His prey had cushioned his fall, probably sparing him a broken back. Childhood lessons learned from his father, a tanner, now served him well. For example, when his clothes wore out, he made new ones from hair-covered goatskins using a nail for sewing. As his shoes became unusable, he had no need to replace them, since his toughened, calloused feet made protection unnecessary. He sang psalms and read from the Bible, finding it a comfort in his situation and a prop for his English. During his sojourn on the island, two vessels came to anchor. Unfortunately for Selkirk, both were Spanish. As a Scotsman and a privateer, he would have faced a grim fate if captured and therefore did his best to hide himself. Once, he was spotted and chased by a group of Spanish sailors from one of the ships. His pursuers urinated beneath the tree in which he was hiding but failed to notice him. The would-be captors then gave up and sailed away. Rescue Selkirk's long-awaited deliverance came on 2 February 1709 by way of Duke, a privateering ship piloted by William Dampier, and its sailing companion Duchess. Thomas Dover led the landing party that met Selkirk. After four years and four months without human company, Selkirk was almost incoherent with joy. The Duke captain and leader of the expedition was Woodes Rogers, who wryly referred to Selkirk as the governor of the island. The agile castaway caught two or three goats a day and helped restore the health of Rogers' men, who were suffering from scurvy. 
Captain Rogers was impressed by Selkirk's physical vigour, but also by the peace of mind that he had attained while living on the island, observing: "One may see that solitude and retirement from the world is not such an insufferable state of life as most men imagine, especially when people are fairly called or thrown into it unavoidably, as this man was." He made Selkirk Duke's second mate, later giving him command of one of their prize ships, Increase, before it was ransomed by the Spanish. Selkirk returned to privateering with a vengeance. At Guayaquil in present-day Ecuador, he led a boat crew up the Guayas River, where a number of wealthy Spanish ladies had fled, and looted the gold and jewels they had hidden inside their clothing. His part in the hunt for treasure galleons along the coast of Mexico resulted in the capture of Nuestra Señora de la Encarnación y Desengaño, renamed Bachelor, on which he served as sailing master under Captain Dover to the Dutch East Indies. Selkirk completed the around-the-world voyage by the Cape of Good Hope as the sailing master of Duke, arriving at the Downs off the English coast on 1 October 1711. He had been away for eight years. Later life and influence Selkirk's experience as a castaway aroused a great deal of attention in England. Fellow crewmember Edward Cooke mentioned Selkirk's ordeal in a book chronicling their privateering expedition, A Voyage to the South Sea and Round the World (1712). A more detailed recounting was published by the expedition's leader, Rogers, within months. The following year, the prominent essayist Richard Steele wrote an article about him for The Englishman newspaper. Selkirk appeared set to enjoy a life of ease and celebrity, claiming his share of Duke's plundered wealth, about £800. However, legal disputes made the amount of any payment uncertain. After a few months in London, he began to seem more like his former self again. In September 1713, he was charged with assaulting a shipwright in Bristol and may have been kept in confinement for two years. He returned to Lower Largo, where he met Sophia Bruce, a young dairymaid. They eloped to London early in 1717 but apparently did not marry. He was soon off to sea again, having enlisted in the Royal Navy. While on a visit to Plymouth in 1720, he married a widowed innkeeper named Frances Candis. He was serving as master's mate on board a Royal Navy vessel, engaged in an anti-piracy patrol off the west coast of Africa, when he died on 13 December 1721, succumbing to the yellow fever that plagued the voyage. He was buried at sea. When Daniel Defoe published The Life and Surprising Adventures of Robinson Crusoe (1719), few readers could have missed the resemblance to Selkirk. An illustration on the first page of the novel shows "a rather melancholy-looking man standing on the shore of an island, gazing inland", in the words of modern explorer Tim Severin. He is dressed in the familiar hirsute goatskins, his feet and shins bare. Yet Crusoe's island is located not in the mid-latitudes of the South Pacific but far away in the Caribbean, where the furry attire would hardly be comfortable in the tropical heat. This incongruity supports the popular belief that Selkirk was a model for the fictional character, though most literary scholars now accept that his was "just one of many survival narratives that Defoe knew about". In film Selkirk, the Real Robinson Crusoe is a stop-motion film by Walter Tournier based on Selkirk's life.
It premièred simultaneously in Argentina, Chile, and Uruguay on 2 February 2012, distributed by The Walt Disney Company. It was the first full-length animated feature to be produced in Uruguay. Commemoration Selkirk has been memorialised in his Scottish birthplace. Lord Aberdeen delivered a speech on 11 December 1885, after which his wife, Lady Aberdeen, unveiled a bronze statue and plaque in memory of Selkirk outside a house on the site of his original home on the Main Street of Lower Largo. David Gillies of Cardy House, Lower Largo, a descendant of the Selkirks, donated the statue, which was created by Thomas Stuart Burnett. The Scotsman is also remembered in his former island home. In 1869 a visiting ship's crew placed a bronze tablet at a spot called Selkirk's Lookout on a mountain of Más a Tierra, Juan Fernández Islands, to mark his stay. On 1 January 1966 Chilean president Eduardo Frei Montalva renamed Más a Tierra Robinson Crusoe Island after Defoe's fictional character to attract tourists. The largest of the Juan Fernández Islands, known as Más Afuera, became Alejandro Selkirk Island, although Selkirk probably never saw that island since it is located farther to the west. Archaeological findings An archaeological expedition to the Juan Fernández Islands in February 2005 found part of a nautical instrument that likely belonged to Selkirk. It was "a fragment of copper alloy identified as being from a pair of navigational dividers" dating from the early 18th (or late 17th) century. Selkirk is the only person known to have been on the island at that time who is likely to have had dividers, and was even said by Rogers to have had such instruments in his possession. The artefact was discovered while excavating a site not far from Selkirk's Lookout, where the famous castaway is believed to have lived. See also: List of solved missing person cases: pre-2000
Alexander Selkirk
The acre is a unit of land area used in the imperial and US customary systems. It is traditionally defined as the area of one chain by one furlong (66 by 660 feet), which is exactly equal to 10 square chains, 1/640 of a square mile, 4,840 square yards, or 43,560 square feet, and approximately 4,047 m2, or about 40% of a hectare. Based upon the International yard and pound agreement of 1959, an acre may be declared as exactly 4,046.8564224 square metres. The acre was sometimes abbreviated ac, but was often spelled out as the word "acre". Traditionally, in the Middle Ages, an acre was conceived of as the area of land that could be ploughed by one man using a team of oxen in one day. It is still a statute measure in the United States. Both the international acre and the US survey acre are in use, but they differ by only a few parts per million (see below). The most common use of the acre is to measure tracts of land. The acre is commonly used in a number of current and former British Commonwealth countries by custom only. In a few it continues as a statute measure, although not in the UK itself since 2010, and not for several decades in Australia, New Zealand and South Africa. In many of those where it is not a statute measure, it is still lawful to "use for trade" if given as supplementary information and is not used for land registration. Description One acre equals 1/640 (0.0015625) square mile, 4,840 square yards, 43,560 square feet, or about 4,047 square metres (see below). While all modern variants of the acre contain 4,840 square yards, there are alternative definitions of a yard, so the exact size of an acre depends upon the particular yard on which it is based. Originally, an acre was understood as a selion of land sized at forty perches (660 ft, or 1 furlong) long and four perches (66 ft) wide; this may have also been understood as an approximation of the amount of land a yoke of oxen could plough in one day (a furlong being "a furrow long"). A square enclosing one acre is approximately 69.57 yards, or 208 feet 9 inches (about 63.6 metres), on a side. As a unit of measure, an acre has no prescribed shape; any area of 43,560 square feet is an acre. US survey acres In the international yard and pound agreement of 1959, the United States and five countries of the Commonwealth of Nations defined the international yard to be exactly 0.9144 metre. The US authorities decided that, while the refined definition would apply nationally in all other respects, the US survey foot (and thus the survey acre) would continue "until such a time as it becomes desirable and expedient to readjust [it]". By inference, an "international acre" may be calculated as exactly 4,046.8564224 square metres, but it does not have a basis in any international agreement. Both the international acre and the US survey acre contain 1/640 of a square mile or 4,840 square yards, but alternative definitions of a yard are used (see survey foot and survey yard), so the exact size of an acre depends upon which yard it is based on. The US survey acre is about 4,046.872 square metres; its exact value (62,726,400,000/15,499,969 m2) is based on an inch defined by 1 metre = 39.37 inches exactly, as established by the Mendenhall Order of 1893. Surveyors in the United States use both international and survey feet, and consequently, both varieties of acre. Since the difference between the US survey acre and international acre (0.016 square metres, 160 square centimetres or 24.8 square inches) is only about a quarter of the size of an A4 sheet or US letter, it is usually not important which one is being discussed.
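Both figures follow directly from the two underlying foot definitions (the international foot of exactly 0.3048 m from the 1959 agreement, and the US survey foot of exactly 1200/3937 m from the Mendenhall relation of 1 metre = 39.37 inches). The following is a minimal sketch of that arithmetic, not part of any standard or library:

```python
from fractions import Fraction

SQUARE_FEET_PER_ACRE = 43_560                 # 66 ft x 660 ft

international_foot = Fraction(3048, 10_000)   # exactly 0.3048 m (1959 agreement)
survey_foot = Fraction(1200, 3937)            # exactly 1200/3937 m (Mendenhall Order relation)

international_acre = SQUARE_FEET_PER_ACRE * international_foot ** 2   # in m^2
survey_acre = SQUARE_FEET_PER_ACRE * survey_foot ** 2                 # in m^2

print(float(international_acre))              # 4046.8564224, exact by definition
print(float(survey_acre))                     # approximately 4046.8726
print(float(survey_acre - international_acre))        # about 0.0162 m^2
print(float(survey_acre / international_acre - 1))    # about 4.0e-06, i.e. a few parts per million
```

Working with exact fractions avoids any rounding, and the last line shows why the distinction between the two acres rarely matters in practice.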
Areas are seldom measured with sufficient accuracy for the different definitions to be detectable. In October 2019, the US National Geodetic Survey and the National Institute of Standards and Technology announced their joint intent to end the "temporary" continuance of the US survey foot, mile and acre units (as permitted by their 1959 decision, above), with effect from the end of 2022. Spanish acre The Puerto Rican cuerda is sometimes called the "Spanish acre" in the continental United States. Use The acre is commonly used in a number of current and former Commonwealth countries by custom, and in a few it continues as a statute measure. These include Antigua and Barbuda, American Samoa, The Bahamas, Belize, the British Virgin Islands, the Cayman Islands, Dominica, the Falkland Islands, Grenada, Ghana, Guam, the Northern Mariana Islands, Jamaica, Montserrat, Samoa, Saint Lucia, St. Helena, St. Kitts and Nevis, St. Vincent and the Grenadines, Turks and Caicos, the United Kingdom, the United States and the US Virgin Islands. South Asia In India, residential plots are measured in square feet, while agricultural land is measured in acres. In Sri Lanka, the division of an acre into 160 perches or 4 roods is common. In Pakistan, residential plots are measured in smaller traditional units, while open and agricultural land is measured in acres and in larger traditional units based on the acre. United Kingdom The acre's use as a primary unit for trade in the United Kingdom ceased to be permitted from 1 October 1995, due to the 1994 amendment of the Weights and Measures Act, which replaced it with the hectare, though its use as a supplementary unit continues to be permitted indefinitely. Land registration, which records the sale and possession of land, was at first exempt, but HM Land Registry ended this exemption in 2010. The measure is still used to communicate with the public, and informally (non-contract) by the farming and property industries. Equivalence to other units of area
1 international acre is equal to the following metric units:
0.40468564224 hectare (a square with 100 m sides has an area of 1 hectare)
4,046.8564224 square metres (or a square with approximately 63.61 m sides)
1 United States survey acre is equal to:
0.404687261 hectare
4,046.87261 square metres (1 square kilometre is equal to 247.105 acres)
1 acre (both variants) is equal to the following customary units:
66 feet × 660 feet (43,560 square feet)
10 square chains (1 chain = 66 feet = 22 yards = 4 rods = 100 links)
approximately 208.71 feet × 208.71 feet (a square)
4,840 square yards
43,560 square feet
160 perches (a perch is equal to a square rod; 1 square rod is 0.00625 acre)
4 roods
a furlong by a chain (furlong: 220 yards, chain: 22 yards)
40 rods by 4 rods, or 160 square rods (historically fencing was often sold in 40-rod lengths)
1/640 (0.0015625) square mile (1 square mile is equal to 640 acres)
Perhaps the easiest way for US residents to envision an acre is as a rectangle measuring 88 yards by 55 yards (1/10 of 880 yards by 1/16 of 880 yards), about the size of a standard American football field. To be more exact, one acre is 90.75% of a 100-yd-long by 53.33-yd-wide American football field (without the end zone). The full field, including the end zones, covers about 1.32 acres. For residents of other countries, the acre might be envisioned as rather more than half of a football pitch. It may also be remembered as 1% short of 44,000 square feet.
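These visualisations are just restatements of the definition, as a quick check of the arithmetic shows:

```latex
88\ \text{yd} \times 55\ \text{yd} = 4{,}840\ \text{yd}^2 = 1\ \text{acre},
\qquad
\frac{4{,}840}{100 \times 53.33} \approx 0.9075,
\qquad
\frac{43{,}560}{44{,}000} = 0.99 .
```

The middle ratio is the 90.75% figure quoted above, and the last shows the acre falling exactly 1% short of 44,000 square feet.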
Historical origin The word "acre" is derived from an Old English word originally meaning "open field", cognate with Latin ager and Greek agros, as well as with related words in Norwegian, Icelandic, Swedish, German, Dutch and Sanskrit. In English, an obsolete variant spelling was aker. According to the Act on the Composition of Yards and Perches, dating from around 1300, an acre is "40 perches [rods] in length and four in breadth", meaning 220 yards by 22 yards. An acre was roughly the amount of land tillable by a yoke of oxen in one day. Before the enactment of the metric system, many countries in Europe used their own official acres. In France, the acre (spelled exactly the same as in English) was used only in Normandy (and neighbouring places outside its traditional borders), but its value varied greatly across Normandy, ranging from 3,632 to 9,725 square metres, with 8,172 square metres being the most frequent value. Even within the same pays of Normandy, for instance in the pays de Caux, farmers (still in the 20th century) distinguished between the grande acre (68 ares, 66 centiares) and the petite acre (56 to 65 ca). The Normandy acre was usually divided into 4 vergées (roods) and 160 square perches, like the English acre. The Normandy acre was equal to 1.6 arpents, the unit of area more commonly used in Northern France outside of Normandy. In Canada, the Paris arpent used in Quebec before the metric system was adopted is sometimes called the "French acre" in English, even though the Paris arpent and the Normandy acre were two very different units of area in ancient France (the Paris arpent became the unit of area of French Canada, whereas the Normandy acre was never used in French Canada). The German word for acre is Morgen; there were many variants of the Morgen, differing between the different German territories. Statutory values for the acre were enacted in England, and subsequently the United Kingdom, by acts of Edward I, Edward III, Henry VIII, George IV and Queen Victoria; the British Weights and Measures Act of 1878 defined it as containing 4,840 square yards. Historically, the size of farms and landed estates in the United Kingdom was usually expressed in acres (or acres, roods, and perches), even if the number of acres was so large that it might conveniently have been expressed in square miles. For example, a certain landowner might have been said to own 32,000 acres of land, not 50 square miles of land. The acre is related to the square mile, with 640 acres making up one square mile. One mile is 5,280 feet (1,760 yards). In western Canada and the western United States, divisions of land area were typically based on the square mile, and fractions thereof. If the square mile is divided into quarters, each quarter has a side length of half a mile (880 yards) and an area of a quarter of a square mile, or 160 acres. These subunits would typically then again be divided into quarters, with each side being a quarter of a mile long (440 yards), giving one sixteenth of a square mile in area, or 40 acres. In the United States, farmland was typically divided as such, and the phrase "the back 40" would refer to the 40-acre parcel to the back of the farm. Most of the Canadian Prairie Provinces and the US Midwest are on square-mile grids for surveying purposes.
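The quarter-section figures above follow directly from the definition of the acre; as a worked check (using 1 mile = 5,280 ft and 43,560 ft² per acre):

```latex
1\ \text{mi}^2 = 5{,}280^2\ \text{ft}^2 = 27{,}878{,}400\ \text{ft}^2
             = \tfrac{27{,}878{,}400}{43{,}560}\ \text{acres} = 640\ \text{acres},
\qquad
\left(\tfrac{1}{2}\ \text{mi}\right)^2 = \tfrac{1}{4}\ \text{mi}^2 = 160\ \text{acres},
\qquad
\left(\tfrac{1}{4}\ \text{mi}\right)^2 = \tfrac{1}{16}\ \text{mi}^2 = 40\ \text{acres}.
```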
These may have been multiples of the customary acre, rather than the statute acre. Builder's acre = an even, rounded figure used in US real-estate development to simplify the math and for marketing. It is nearly 10% smaller than a survey acre, and the discrepancy has led to lawsuits alleging misrepresentation. Scottish acre = 1.3 Imperial acres (5,080 m2, an obsolete Scottish measurement) Irish acre = Cheshire acre = Stremma or Greek acre ≈ 10,000 square Greek feet, but now set at exactly 1,000 square metres (a similar unit was the zeugarion) Dunam or Turkish acre ≈ 1,600 square Turkish paces, but now set at exactly 1,000 square metres (a similar unit was the çift) Actus quadratus or Roman acre ≈ 14,400 square Roman feet (about 1,260 square metres) God's Acre – a synonym for a churchyard. Long acre – the grass strip on either side of a road that may be used for illicit grazing. Town acre – a term used in the early 19th century in the planning of towns on a grid plan, such as Adelaide, South Australia, and Wellington, New Plymouth and Nelson in New Zealand. The land was divided into plots of an Imperial acre, and these became known as town acres. See also Acre-foot – used in US to measure a large water volume Anthropic units Conversion of units French arpent – used in Louisiana to measure length and area Jugerum a Morgen ("morning") of land is normally a fraction of a Tagwerk ("day work") of ploughing with an ox Public Land Survey System Quarter acre Section (United States land surveying) Spanish customary units Notes References External links The Units of Measurement Regulations 1995 (United Kingdom) Customary units of measurement in the United States Imperial units Surveying Units of area
Acre
An antibiotic is a type of antimicrobial substance active against bacteria. It is the most important type of antibacterial agent for fighting bacterial infections, and antibiotic medications are widely used in the treatment and prevention of such infections. They may either kill or inhibit the growth of bacteria. A limited number of antibiotics also possess antiprotozoal activity. Antibiotics are not effective against viruses such as the common cold or influenza; drugs which inhibit viruses are termed antiviral drugs or antivirals rather than antibiotics. Sometimes, the term antibiotic—literally "opposing life", from the Greek roots ἀντι anti, "against" and βίος bios, "life"—is broadly used to refer to any substance used against microbes, but in the usual medical usage, antibiotics (such as penicillin) are those produced naturally (by one microorganism fighting another), whereas nonantibiotic antibacterials (such as sulfonamides and antiseptics) are fully synthetic. However, both classes have the same goal of killing or preventing the growth of microorganisms, and both are included in antimicrobial chemotherapy. "Antibacterials" include antiseptic drugs, antibacterial soaps, and chemical disinfectants, whereas antibiotics are an important class of antibacterials used more specifically in medicine and sometimes in livestock feed. Antibiotics have been used since ancient times. Many civilizations used topical application of mouldy bread, with many references to its beneficial effects arising from ancient Egypt, Nubia, China, Serbia, Greece, and Rome. The first person to directly document the use of molds to treat infections was John Parkinson (1567–1650). Antibiotics revolutionized medicine in the 20th century. Alexander Fleming (1881–1955) discovered modern day penicillin in 1928, the widespread use of which proved significantly beneficial during wartime. However, the effectiveness and easy access to antibiotics have also led to their overuse and some bacteria have evolved resistance to them. The World Health Organization has classified antimicrobial resistance as a widespread "serious threat [that] is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country". Global deaths attributable to antimicrobial resistance numbered 1.27 million in 2019. Etymology The term 'antibiosis', meaning "against life", was introduced by the French bacteriologist Jean Paul Vuillemin as a descriptive name of the phenomenon exhibited by these early antibacterial drugs. Antibiosis was first described in 1877 in bacteria when Louis Pasteur and Robert Koch observed that an airborne bacillus could inhibit the growth of Bacillus anthracis. These drugs were later renamed antibiotics by Selman Waksman, an American microbiologist, in 1947. The term antibiotic was first used in 1942 by Selman Waksman and his collaborators in journal articles to describe any substance produced by a microorganism that is antagonistic to the growth of other microorganisms in high dilution. This definition excluded substances that kill bacteria but that are not produced by microorganisms (such as gastric juices and hydrogen peroxide). It also excluded synthetic antibacterial compounds such as the sulfonamides. In current usage, the term "antibiotic" is applied to any medication that kills bacteria or inhibits their growth, regardless of whether that medication is produced by a microorganism or not. 
The term "antibiotic" derives from anti + βιωτικός (biōtikos), "fit for life, lively", which comes from βίωσις (biōsis), "way of life", and that from βίος (bios), "life". The term "antibacterial" derives from Greek ἀντί (anti), "against" + βακτήριον (baktērion), diminutive of βακτηρία (baktēria), "staff, cane", because the first bacteria to be discovered were rod-shaped. Usage Medical uses Antibiotics are used to treat or prevent bacterial infections, and sometimes protozoan infections. (Metronidazole is effective against a number of parasitic diseases). When an infection is suspected of being responsible for an illness but the responsible pathogen has not been identified, an empiric therapy is adopted. This involves the administration of a broad-spectrum antibiotic based on the signs and symptoms presented and is initiated pending laboratory results that can take several days. When the responsible pathogenic microorganism is already known or has been identified, definitive therapy can be started. This will usually involve the use of a narrow-spectrum antibiotic. The choice of antibiotic given will also be based on its cost. Identification is critically important as it can reduce the cost and toxicity of the antibiotic therapy and also reduce the possibility of the emergence of antimicrobial resistance. To avoid surgery, antibiotics may be given for non-complicated acute appendicitis. Antibiotics may be given as a preventive measure and this is usually limited to at-risk populations such as those with a weakened immune system (particularly in HIV cases to prevent pneumonia), those taking immunosuppressive drugs, cancer patients, and those having surgery. Their use in surgical procedures is to help prevent infection of incisions. They have an important role in dental antibiotic prophylaxis where their use may prevent bacteremia and consequent infective endocarditis. Antibiotics are also used to prevent infection in cases of neutropenia, particularly when cancer-related. The use of antibiotics for secondary prevention of coronary heart disease is not supported by current scientific evidence, and may actually increase cardiovascular mortality, all-cause mortality and the occurrence of stroke. Routes of administration There are many different routes of administration for antibiotic treatment. Antibiotics are usually taken by mouth. In more severe cases, particularly deep-seated systemic infections, antibiotics can be given intravenously or by injection. Where the site of infection is easily accessed, antibiotics may be given topically in the form of eye drops onto the conjunctiva for conjunctivitis or ear drops for ear infections and acute cases of swimmer's ear. Topical use is also one of the treatment options for some skin conditions including acne and cellulitis. Advantages of topical application include achieving a high and sustained concentration of antibiotic at the site of infection, reducing the potential for systemic absorption and toxicity, and reducing the total volume of antibiotic required, thereby also reducing the risk of antibiotic misuse. Topical antibiotics applied over certain types of surgical wounds have been reported to reduce the risk of surgical site infections. However, there are certain general causes for concern with topical administration of antibiotics. Some systemic absorption of the antibiotic may occur; the quantity of antibiotic applied is difficult to accurately dose, and there is also the possibility of local hypersensitivity reactions or contact dermatitis occurring.
It is recommended to administer antibiotics as soon as possible, especially in life-threatening infections. Many emergency departments stock antibiotics for this purpose. Global consumption Antibiotic consumption varies widely between countries. The WHO report on surveillance of antibiotic consumption, published in 2018, analysed 2015 data from 65 countries. As measured in defined daily doses per 1,000 inhabitants per day, Mongolia had the highest consumption, with a rate of 64.4, and Burundi had the lowest, at 4.4. Amoxicillin and amoxicillin/clavulanic acid were the most frequently consumed. Side effects Antibiotics are screened for any negative effects before their approval for clinical use, and are usually considered safe and well tolerated. However, some antibiotics have been associated with a wide extent of adverse side effects ranging from mild to very severe depending on the type of antibiotic used, the microbes targeted, and the individual patient. Side effects may reflect the pharmacological or toxicological properties of the antibiotic or may involve hypersensitivity or allergic reactions. Adverse effects range from fever and nausea to major allergic reactions, including photodermatitis and anaphylaxis. Common side-effects of oral antibiotics include diarrhea, resulting from disruption of the species composition of the intestinal flora, leading, for example, to overgrowth of pathogenic bacteria such as Clostridium difficile. Taking probiotics during the course of antibiotic treatment can help prevent antibiotic-associated diarrhea. Antibacterials can also affect the vaginal flora, and may lead to overgrowth of yeast species of the genus Candida in the vulvo-vaginal area. Additional side effects can result from interaction with other drugs, such as the possibility of tendon damage from the administration of a quinolone antibiotic with a systemic corticosteroid. Some antibiotics may also damage the mitochondrion, a bacteria-derived organelle found in eukaryotic, including human, cells. Mitochondrial damage causes oxidative stress in cells and has been suggested as a mechanism for side effects from fluoroquinolones. Antibiotics are also known to affect chloroplasts. Interactions Birth control pills There are few well-controlled studies on whether antibiotic use increases the risk of oral contraceptive failure. The majority of studies indicate that antibiotics do not interfere with birth control pills; clinical studies suggest that the failure rate of contraceptive pills caused by antibiotics is very low (about 1%). Situations that may increase the risk of oral contraceptive failure include non-compliance (missing doses of the pill), vomiting, diarrhea, gastrointestinal disorders, and interpatient variability in oral contraceptive absorption affecting ethinylestradiol serum levels in the blood. Women with menstrual irregularities may be at higher risk of failure and should be advised to use backup contraception during antibiotic treatment and for one week after its completion. If patient-specific risk factors for reduced oral contraceptive efficacy are suspected, backup contraception is recommended. In cases where antibiotics have been suggested to affect the efficiency of birth control pills, such as the broad-spectrum antibiotic rifampicin, the effect may be due to increased activity of hepatic enzymes causing increased breakdown of the pill's active ingredients.
Effects on the intestinal flora, which might result in reduced absorption of estrogens in the colon, have also been suggested, but such suggestions have been inconclusive and controversial. Clinicians have recommended that extra contraceptive measures be applied during therapies using antibiotics that are suspected to interact with oral contraceptives. More studies on the possible interactions between antibiotics and birth control pills (oral contraceptives) are required, as well as careful assessment of patient-specific risk factors for potential oral contraceptive pill failure, prior to dismissing the need for backup contraception. Alcohol Interactions between alcohol and certain antibiotics may occur and may cause side effects and decreased effectiveness of antibiotic therapy. While moderate alcohol consumption is unlikely to interfere with many common antibiotics, there are specific types of antibiotics with which alcohol consumption may cause serious side effects. Therefore, potential risks of side effects and effectiveness depend on the type of antibiotic administered. Antibiotics such as metronidazole, tinidazole, cephamandole, latamoxef, cefoperazone, cefmenoxime, and furazolidone cause a disulfiram-like chemical reaction with alcohol by inhibiting its breakdown by acetaldehyde dehydrogenase, which may result in vomiting, nausea, and shortness of breath. In addition, the efficacy of doxycycline and erythromycin succinate may be reduced by alcohol consumption. Other effects of alcohol on antibiotic activity include altered activity of the liver enzymes that break down the antibiotic compound. Pharmacodynamics The successful outcome of antimicrobial therapy with antibacterial compounds depends on several factors. These include host defense mechanisms, the location of infection, and the pharmacokinetic and pharmacodynamic properties of the antibacterial. The bactericidal activity of antibacterials may depend on the bacterial growth phase, and it often requires ongoing metabolic activity and division of bacterial cells. These findings are based on laboratory studies, and antibacterials have also been shown to eliminate bacterial infection in clinical settings. Since the activity of antibacterials frequently depends on their concentration, in vitro characterization of antibacterial activity commonly includes the determination of the minimum inhibitory concentration and minimum bactericidal concentration of an antibacterial. To predict clinical outcome, the antimicrobial activity of an antibacterial is usually combined with its pharmacokinetic profile, and several pharmacological parameters are used as markers of drug efficacy. Combination therapy In important infectious diseases, including tuberculosis, combination therapy (i.e., the concurrent application of two or more antibiotics) has been used to delay or prevent the emergence of resistance. In acute bacterial infections, antibiotics as part of combination therapy are prescribed for their synergistic effects to improve treatment outcome, as the combined effect of both antibiotics is better than their individual effect. Methicillin-resistant Staphylococcus aureus infections may be treated with a combination therapy of fusidic acid and rifampicin. Antibiotics used in combination may also be antagonistic, and the combined effects of the two antibiotics may be less than if one of the antibiotics were given as a monotherapy. For example, chloramphenicol and tetracyclines are antagonists to penicillins.
However, this can vary depending on the species of bacteria. In general, combinations of a bacteriostatic antibiotic and bactericidal antibiotic are antagonistic. In addition to combining one antibiotic with another, antibiotics are sometimes co-administered with resistance-modifying agents. For example, β-lactam antibiotics may be used in combination with β-lactamase inhibitors, such as clavulanic acid or sulbactam, when a patient is infected with a β-lactamase-producing strain of bacteria. Classes Antibiotics are commonly classified based on their mechanism of action, chemical structure, or spectrum of activity. Most target bacterial functions or growth processes. Those that target the bacterial cell wall (penicillins and cephalosporins) or the cell membrane (polymyxins), or interfere with essential bacterial enzymes (rifamycins, lipiarmycins, quinolones, and sulfonamides) have bactericidal activities. Protein synthesis inhibitors (macrolides, lincosamides, and tetracyclines) are usually bacteriostatic (with the exception of bactericidal aminoglycosides). Further categorization is based on their target specificity. "Narrow-spectrum" antibiotics target specific types of bacteria, such as gram-negative or gram-positive, whereas broad-spectrum antibiotics affect a wide range of bacteria. Following a 40-year break in discovering classes of antibacterial compounds, four new classes of antibiotics were introduced to clinical use in the late 2000s and early 2010s: cyclic lipopeptides (such as daptomycin), glycylcyclines (such as tigecycline), oxazolidinones (such as linezolid), and lipiarmycins (such as fidaxomicin). Production With advances in medicinal chemistry, most modern antibacterials are semisynthetic modifications of various natural compounds. These include, for example, the beta-lactam antibiotics, which include the penicillins (produced by fungi in the genus Penicillium), the cephalosporins, and the carbapenems. Compounds that are still isolated from living organisms are the aminoglycosides, whereas other antibacterials—for example, the sulfonamides, the quinolones, and the oxazolidinones—are produced solely by chemical synthesis. Many antibacterial compounds are relatively small molecules with a molecular weight of less than 1000 daltons. Since the first pioneering efforts of Howard Florey and Chain in 1939, the importance of antibiotics, including antibacterials, to medicine has led to intense research into producing antibacterials at large scales. Following screening of antibacterials against a wide range of bacteria, production of the active compounds is carried out using fermentation, usually in strongly aerobic conditions. Resistance The emergence of resistance of bacteria to antibiotics is a common phenomenon. Emergence of resistance often reflects evolutionary processes that take place during antibiotic therapy. The antibiotic treatment may select for bacterial strains with physiologically or genetically enhanced capacity to survive high doses of antibiotics. Under certain conditions, it may result in preferential growth of resistant bacteria, while growth of susceptible bacteria is inhibited by the drug. For example, antibacterial selection for strains having previously acquired antibacterial-resistance genes was demonstrated in 1943 by the Luria–Delbrück experiment. 
Antibiotics such as penicillin and erythromycin, which used to have a high efficacy against many bacterial species and strains, have become less effective, due to the increased resistance of many bacterial strains. Resistance may take the form of biodegradation of pharmaceuticals, such as sulfamethazine-degrading soil bacteria introduced to sulfamethazine through medicated pig feces. The survival of bacteria often results from an inheritable resistance, but the growth of resistance to antibacterials also occurs through horizontal gene transfer. Horizontal transfer is more likely to happen in locations of frequent antibiotic use. Antibacterial resistance may impose a biological cost, thereby reducing fitness of resistant strains, which can limit the spread of antibacterial-resistant bacteria, for example, in the absence of antibacterial compounds. Additional mutations, however, may compensate for this fitness cost and can aid the survival of these bacteria. Paleontological data show that both antibiotics and antibiotic resistance are ancient compounds and mechanisms. Useful antibiotic targets are those for which mutations negatively impact bacterial reproduction or viability. Several molecular mechanisms of antibacterial resistance exist. Intrinsic antibacterial resistance may be part of the genetic makeup of bacterial strains. For example, an antibiotic target may be absent from the bacterial genome. Acquired resistance results from a mutation in the bacterial chromosome or the acquisition of extra-chromosomal DNA. Antibacterial-producing bacteria have evolved resistance mechanisms that have been shown to be similar to, and may have been transferred to, antibacterial-resistant strains. The spread of antibacterial resistance often occurs through vertical transmission of mutations during growth and by genetic recombination of DNA by horizontal genetic exchange. For instance, antibacterial resistance genes can be exchanged between different bacterial strains or species via plasmids that carry these resistance genes. Plasmids that carry several different resistance genes can confer resistance to multiple antibacterials. Cross-resistance to several antibacterials may also occur when a resistance mechanism encoded by a single gene conveys resistance to more than one antibacterial compound. Antibacterial-resistant strains and species, sometimes referred to as "superbugs", now contribute to the emergence of diseases that were for a while well controlled. For example, emergent bacterial strains causing tuberculosis that are resistant to previously effective antibacterial treatments pose many therapeutic challenges. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated to occur worldwide. For example, NDM-1 is a newly identified enzyme conveying bacterial resistance to a broad range of beta-lactam antibacterials. The United Kingdom's Health Protection Agency has stated that "most isolates with NDM-1 enzyme are resistant to all standard intravenous antibiotics for treatment of severe infections." On 26 May 2016, an E. coli "superbug" was identified in the United States resistant to colistin, "the last line of defence" antibiotic. Misuse Per The ICU Book "The first rule of antibiotics is to try not to use them, and the second rule is try not to use too many of them." Inappropriate antibiotic treatment and overuse of antibiotics have contributed to the emergence of antibiotic-resistant bacteria. Self-prescribing of antibiotics is an example of misuse. 
Many antibiotics are frequently prescribed to treat symptoms or diseases that do not respond to antibiotics or that are likely to resolve without treatment. Also, incorrect or suboptimal antibiotics are prescribed for certain bacterial infections. The overuse of antibiotics, like penicillin and erythromycin, has been associated with emerging antibiotic resistance since the 1950s. Widespread usage of antibiotics in hospitals has also been associated with increases in bacterial strains and species that no longer respond to treatment with the most common antibiotics. Common forms of antibiotic misuse include excessive use of prophylactic antibiotics in travelers and failure of medical professionals to prescribe the correct dosage of antibiotics on the basis of the patient's weight and history of prior use. Other forms of misuse include failure to take the entire prescribed course of the antibiotic, incorrect dosage and administration, or failure to rest for sufficient recovery. Inappropriate antibiotic treatment, for example, is their prescription to treat viral infections such as the common cold. One study on respiratory tract infections found "physicians were more likely to prescribe antibiotics to patients who appeared to expect them". Multifactorial interventions aimed at both physicians and patients can reduce inappropriate prescription of antibiotics. The lack of rapid point of care diagnostic tests, particularly in resource-limited settings is considered one of the drivers of antibiotic misuse. Several organizations concerned with antimicrobial resistance are lobbying to eliminate the unnecessary use of antibiotics. The issues of misuse and overuse of antibiotics have been addressed by the formation of the US Interagency Task Force on Antimicrobial Resistance. This task force aims to actively address antimicrobial resistance, and is coordinated by the US Centers for Disease Control and Prevention, the Food and Drug Administration (FDA), and the National Institutes of Health, as well as other US agencies. A non-governmental organization campaign group is Keep Antibiotics Working. In France, an "Antibiotics are not automatic" government campaign started in 2002 and led to a marked reduction of unnecessary antibiotic prescriptions, especially in children. The emergence of antibiotic resistance has prompted restrictions on their use in the UK in 1970 (Swann report 1969), and the European Union has banned the use of antibiotics as growth-promotional agents since 2003. Moreover, several organizations (including the World Health Organization, the National Academy of Sciences, and the U.S. Food and Drug Administration) have advocated restricting the amount of antibiotic use in food animal production. However, commonly there are delays in regulatory and legislative actions to limit the use of antibiotics, attributable partly to resistance against such regulation by industries using or selling antibiotics, and to the time required for research to test causal links between their use and resistance to them. Two federal bills (S.742 and H.R. 2562) aimed at phasing out nontherapeutic use of antibiotics in US food animals were proposed, but have not passed. These bills were endorsed by public health and medical organizations, including the American Holistic Nurses' Association, the American Medical Association, and the American Public Health Association. 
Despite pledges by food companies and restaurants to reduce or eliminate meat that comes from animals treated with antibiotics, the purchase of antibiotics for use on farm animals has been increasing every year. There has been extensive use of antibiotics in animal husbandry. In the United States, the question of emergence of antibiotic-resistant bacterial strains due to use of antibiotics in livestock was raised by the US Food and Drug Administration (FDA) in 1977. In March 2012, the United States District Court for the Southern District of New York, ruling in an action brought by the Natural Resources Defense Council and others, ordered the FDA to revoke approvals for the use of antibiotics in livestock, which violated FDA regulations. Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse. History Before the early 20th century, treatments for infections were based primarily on medicinal folklore. Mixtures with antimicrobial properties that were used in treatments of infections were described over 2,000 years ago. Many ancient cultures, including the ancient Egyptians and ancient Greeks, used specially selected mold and plant materials to treat infections. Nubian mummies studied in the 1990s were found to contain significant levels of tetracycline. The beer brewed at that time was conjectured to have been the source. The use of antibiotics in modern medicine began with the discovery of synthetic antibiotics derived from dyes. Synthetic antibiotics derived from dyes Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Ehrlich noted certain dyes would colour human, animal, or bacterial cells, whereas others did not. He then proposed the idea that it might be possible to create chemicals that would act as a selective drug that would bind to and kill bacteria without harming the human host. After screening hundreds of dyes against various organisms, in 1907, he discovered a medicinally useful drug, the first synthetic antibacterial organoarsenic compound salvarsan, now called arsphenamine. This heralded the era of antibacterial treatment that was begun with the discovery of a series of arsenic-derived synthetic antibiotics by both Alfred Bertheim and Ehrlich in 1907. Ehrlich and Bertheim had experimented with various chemicals derived from dyes to treat trypanosomiasis in mice and spirochaeta infection in rabbits. While their early compounds were too toxic, Ehrlich and Sahachiro Hata, a Japanese bacteriologist working with Erlich in the quest for a drug to treat syphilis, achieved success with the 606th compound in their series of experiments. In 1910, Ehrlich and Hata announced their discovery, which they called drug "606", at the Congress for Internal Medicine at Wiesbaden. The Hoechst company began to market the compound toward the end of 1910 under the name Salvarsan, now known as arsphenamine. The drug was used to treat syphilis in the first half of the 20th century. In 1908, Ehrlich received the Nobel Prize in Physiology or Medicine for his contributions to immunology. Hata was nominated for the Nobel Prize in Chemistry in 1911 and for the Nobel Prize in Physiology or Medicine in 1912 and 1913. 
The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany, for which Domagk received the 1939 Nobel Prize in Physiology or Medicine. Sulfanilamide, the active drug of Prontosil, was not patentable as it had already been in use in the dye industry for some years. Prontosil had a relatively broad effect against Gram-positive cocci, but not against enterobacteria. Research was stimulated apace by its success. The discovery and development of this sulfonamide drug opened the era of antibacterials. Penicillin and other natural antibiotics Observations about the growth of some microorganisms inhibiting the growth of other microorganisms have been reported since the late 19th century. These observations of antibiosis between microorganisms led to the discovery of natural antibacterials. Louis Pasteur observed, "if we could intervene in the antagonism observed between some bacteria, it would offer perhaps the greatest hopes for therapeutics". In 1874, physician Sir William Roberts noted that cultures of the mould Penicillium glaucum that is used in the making of some types of blue cheese did not display bacterial contamination. In 1876, physicist John Tyndall also contributed to this field. In 1895 Vincenzo Tiberio, Italian physician, published a paper on the antibacterial power of some extracts of mould. In 1897, doctoral student Ernest Duchesne submitted a dissertation, "" (Contribution to the study of vital competition in micro-organisms: antagonism between moulds and microbes), the first known scholarly work to consider the therapeutic capabilities of moulds resulting from their anti-microbial activity. In his thesis, Duchesne proposed that bacteria and moulds engage in a perpetual battle for survival. Duchesne observed that E. coli was eliminated by Penicillium glaucum when they were both grown in the same culture. He also observed that when he inoculated laboratory animals with lethal doses of typhoid bacilli together with Penicillium glaucum, the animals did not contract typhoid. Unfortunately Duchesne's army service after getting his degree prevented him from doing any further research. Duchesne died of tuberculosis, a disease now treated by antibiotics. In 1928, Sir Alexander Fleming postulated the existence of penicillin, a molecule produced by certain moulds that kills or stops the growth of certain kinds of bacteria. Fleming was working on a culture of disease-causing bacteria when he noticed the spores of a green mold, Penicillium chrysogenum, in one of his culture plates. He observed that the presence of the mould killed or prevented the growth of the bacteria. Fleming postulated that the mould must secrete an antibacterial substance, which he named penicillin in 1928. Fleming believed that its antibacterial properties could be exploited for chemotherapy. He initially characterised some of its biological properties, and attempted to use a crude preparation to treat some infections, but he was unable to pursue its further development without the aid of trained chemists. Ernst Chain, Howard Florey and Edward Abraham succeeded in purifying the first penicillin, penicillin G, in 1942, but it did not become widely available outside the Allied military before 1945. Later, Norman Heatley developed the back extraction technique for efficiently purifying penicillin in bulk. 
The chemical structure of penicillin was first proposed by Abraham in 1942 and then later confirmed by Dorothy Crowfoot Hodgkin in 1945. Purified penicillin displayed potent antibacterial activity against a wide range of bacteria and had low toxicity in humans. Furthermore, its activity was not inhibited by biological constituents such as pus, unlike the synthetic sulfonamides (see above). The development of penicillin led to renewed interest in the search for antibiotic compounds with similar efficacy and safety. For their successful development of penicillin as a therapeutic drug, which Fleming had accidentally discovered but could not develop himself, Chain and Florey shared the 1945 Nobel Prize in Medicine with Fleming. Florey credited René Dubos with pioneering the approach of deliberately and systematically searching for antibacterial compounds, which had led to the discovery of gramicidin and had revived Florey's research in penicillin. In 1939, coinciding with the start of World War II, Dubos had reported the discovery of the first naturally derived antibiotic, tyrothricin, a compound of 20% gramicidin and 80% tyrocidine, from Bacillus brevis. It was one of the first commercially manufactured antibiotics and was very effective in treating wounds and ulcers during World War II. Gramicidin, however, could not be used systemically because of toxicity. Tyrocidine also proved too toxic for systemic usage. Research results obtained during that period were not shared between the Axis and the Allied powers during World War II, and access remained limited during the Cold War. Late 20th century During the mid-20th century, the number of new antibiotic substances introduced for medical use increased significantly. From 1935 to 1968, 12 new classes were launched. However, after this, the number of new classes dropped markedly, with only two new classes introduced between 1969 and 2003. Antibiotic pipeline Both the WHO and the Infectious Disease Society of America report that the weak antibiotic pipeline does not match bacteria's increasing ability to develop resistance. The Infectious Disease Society of America report noted that the number of new antibiotics approved for marketing per year had been declining and identified seven antibiotics against the Gram-negative bacilli currently in phase 2 or phase 3 clinical trials. However, these drugs did not address the entire spectrum of resistance of Gram-negative bacilli. According to the WHO, fifty-one new therapeutic entities (antibiotics, including combinations) were in phase 1–3 clinical trials as of May 2017. Antibiotics targeting multidrug-resistant Gram-positive pathogens remain a high priority. A few antibiotics have received marketing authorization in the last seven years, including the cephalosporin ceftaroline and the lipoglycopeptides oritavancin and telavancin for the treatment of acute bacterial skin and skin structure infection and community-acquired bacterial pneumonia. The lipoglycopeptide dalbavancin and the oxazolidinone tedizolid have also been approved for use for the treatment of acute bacterial skin and skin structure infection. The first in a new class of narrow-spectrum macrocyclic antibiotics, fidaxomicin, has been approved for the treatment of C. difficile colitis. New cephalosporin–β-lactamase inhibitor combinations also approved include ceftazidime-avibactam and ceftolozane-tazobactam for complicated urinary tract infection and intra-abdominal infection. Possible improvements include clarification of clinical trial regulations by the FDA.
Furthermore, appropriate economic incentives could persuade pharmaceutical companies to invest in this endeavor. In the US, the Antibiotic Development to Advance Patient Treatment (ADAPT) Act was introduced with the aim of fast tracking the drug development of antibiotics to combat the growing threat of 'superbugs'. Under this Act, FDA can approve antibiotics and antifungals treating life-threatening infections based on smaller clinical trials. The CDC will monitor the use of antibiotics and the emerging resistance, and publish the data. The FDA antibiotics labeling process, 'Susceptibility Test Interpretive Criteria for Microbial Organisms' or 'breakpoints', will provide accurate data to healthcare professionals. According to Allan Coukell, senior director for health programs at The Pew Charitable Trusts, "By allowing drug developers to rely on smaller datasets, and clarifying FDA's authority to tolerate a higher level of uncertainty for these drugs when making a risk/benefit calculation, ADAPT would make the clinical trials more feasible." Replenishing the antibiotic pipeline and developing other new therapies Because antibiotic-resistant bacterial strains continue to emerge and spread, there is a constant need to develop new antibacterial treatments. Current strategies include traditional chemistry-based approaches such as natural product-based drug discovery, newer chemistry-based approaches such as drug design, traditional biology-based approaches such as immunoglobulin therapy, and experimental biology-based approaches such as phage therapy, fecal microbiota transplants, antisense RNA-based treatments, and CRISPR-Cas9-based treatments. Natural product-based antibiotic discovery Most of the antibiotics in current use are natural products or natural product derivatives, and bacterial, fungal, plant and animal extracts are being screened in the search for new antibiotics. Organisms may be selected for testing based on ecological, ethnomedical, genomic or historical rationales. Medicinal plants, for example, are screened on the basis that they are used by traditional healers to prevent or cure infection and may therefore contain antibacterial compounds. Also, soil bacteria are screened on the basis that, historically, they have been a very rich source of antibiotics (with 70 to 80% of antibiotics in current use derived from the actinomycetes). In addition to screening natural products for direct antibacterial activity, they are sometimes screened for the ability to suppress antibiotic resistance and antibiotic tolerance. For example, some secondary metabolites inhibit drug efflux pumps, thereby increasing the concentration of antibiotic able to reach its cellular target and decreasing bacterial resistance to the antibiotic. Natural products known to inhibit bacterial efflux pumps include the alkaloid lysergol, the carotenoids capsanthin and capsorubin, and the flavonoids rotenone and chrysin. Other natural products, this time primary metabolites rather than secondary metabolites, have been shown to eradicate antibiotic tolerance. For example, glucose, mannitol, and fructose reduce antibiotic tolerance in Escherichia coli and Staphylococcus aureus, rendering them more susceptible to killing by aminoglycoside antibiotics. Natural products may be screened for the ability to suppress bacterial virulence factors too. Virulence factors are molecules, cellular structures and regulatory systems that enable bacteria to evade the body's immune defenses (e.g. 
urease, staphyloxanthin), move towards, attach to, and/or invade human cells (e.g. type IV pili, adhesins, internalins), coordinate the activation of virulence genes (e.g. quorum sensing), and cause disease (e.g. exotoxins). Examples of natural products with antivirulence activity include the flavonoid epigallocatechin gallate (which inhibits listeriolysin O), the quinone tetrangomycin (which inhibits staphyloxanthin), and the sesquiterpene zerumbone (which inhibits Acinetobacter baumannii motility). Immunoglobulin therapy Antibodies (anti-tetanus immunoglobulin) have been used in the treatment and prevention of tetanus since the 1910s, and this approach continues to be a useful way of controlling bacterial disease. The monoclonal antibody bezlotoxumab, for example, has been approved by the US FDA and EMA for recurrent Clostridium difficile infection, and other monoclonal antibodies are in development (e.g. AR-301 for the adjunctive treatment of S. aureus ventilator-associated pneumonia). Antibody treatments act by binding to and neutralizing bacterial exotoxins and other virulence factors. Phage therapy Phage therapy is under investigation as a method of treating antibiotic-resistant strains of bacteria. Phage therapy involves infecting bacterial pathogens with viruses. Bacteriophages and their host ranges are extremely specific for certain bacteria, thus, unlike antibiotics, they do not disturb the host organism's intestinal microbiota. Bacteriophages, also known simply as phages, infect and kill bacteria primarily during lytic cycles. Phages insert their DNA into the bacterium, where it is transcribed and used to make new phages, after which the cell will lyse, releasing new phage that are able to infect and destroy further bacteria of the same strain. The high specificity of phage protects "good" bacteria from destruction. Some disadvantages to the use of bacteriophages also exist, however. Bacteriophages may harbour virulence factors or toxic genes in their genomes and, prior to use, it may be prudent to identify genes with similarity to known virulence factors or toxins by genomic sequencing. In addition, the oral and IV administration of phages for the eradication of bacterial infections poses a much higher safety risk than topical application. Also, there is the additional concern of uncertain immune responses to these large antigenic cocktails. There are considerable regulatory hurdles that must be cleared for such therapies. Despite numerous challenges, the use of bacteriophages as a replacement for antimicrobial agents against MDR pathogens that no longer respond to conventional antibiotics, remains an attractive option. Fecal microbiota transplants Fecal microbiota transplants involve transferring the full intestinal microbiota from a healthy human donor (in the form of stool) to patients with C. difficile infection. Although this procedure has not been officially approved by the US FDA, its use is permitted under some conditions in patients with antibiotic-resistant C. difficile infection. Cure rates are around 90%, and work is underway to develop stool banks, standardized products, and methods of oral delivery. Antisense RNA-based treatments Antisense RNA-based treatment (also known as gene silencing therapy) involves (a) identifying bacterial genes that encode essential proteins (e.g. 
the Pseudomonas aeruginosa genes acpP, lpxC, and rpsJ), (b) synthesizing single stranded RNA that is complementary to the mRNA encoding these essential proteins, and (c) delivering the single stranded RNA to the infection site using cell-penetrating peptides or liposomes. The antisense RNA then hybridizes with the bacterial mRNA and blocks its translation into the essential protein. Antisense RNA-based treatment has been shown to be effective in in vivo models of P. aeruginosa pneumonia. In addition to silencing essential bacterial genes, antisense RNA can be used to silence bacterial genes responsible for antibiotic resistance. For example, antisense RNA has been developed that silences the S. aureus mecA gene (the gene that encodes modified penicillin-binding protein 2a and renders S. aureus strains methicillin-resistant). Antisense RNA targeting mecA mRNA has been shown to restore the susceptibility of methicillin-resistant staphylococci to oxacillin in both in vitro and in vivo studies. CRISPR-Cas9-based treatments In the early 2000s, a system was discovered that enables bacteria to defend themselves against invading viruses. The system, known as CRISPR-Cas9, consists of (a) an enzyme that destroys DNA (the nuclease Cas9) and (b) the DNA sequences of previously encountered viral invaders (CRISPR). These viral DNA sequences enable the nuclease to target foreign (viral) rather than self (bacterial) DNA. Although the function of CRISPR-Cas9 in nature is to protect bacteria, the DNA sequences in the CRISPR component of the system can be modified so that the Cas9 nuclease targets bacterial resistance genes or bacterial virulence genes instead of viral genes. The modified CRISPR-Cas9 system can then be administered to bacterial pathogens using plasmids or bacteriophages. This approach has successfully been used to silence antibiotic resistance and reduce the virulence of enterohemorrhagic E. coli in an in vivo model of infection. Reducing the selection pressure for antibiotic resistance In addition to developing new antibacterial treatments, it is important to reduce the selection pressure for the emergence and spread of antibiotic resistance. Strategies to accomplish this include well-established infection control measures such as infrastructure improvement (e.g. less crowded housing), better sanitation (e.g. safe drinking water and food) and vaccine development, other approaches such as antibiotic stewardship, and experimental approaches such as the use of prebiotics and probiotics to prevent infection. Antibiotic cycling, where antibiotics are alternated by clinicians to treat microbial diseases, is proposed, but recent studies revealed such strategies are ineffective against antibiotic resistance. Vaccines Vaccines rely on immune modulation or augmentation. Vaccination either excites or reinforces the immune competence of a host to ward off infection, leading to the activation of macrophages, the production of antibodies, inflammation, and other classic immune reactions. Antibacterial vaccines have been responsible for a drastic reduction in global bacterial diseases. Vaccines made from attenuated whole cells or lysates have been replaced largely by less reactogenic, cell-free vaccines consisting of purified components, including capsular polysaccharides and their conjugates, to protein carriers, as well as inactivated toxins (toxoids) and proteins. See also References Further reading External links Anti-infective agents .
Antibiotic
Allotropy or allotropism () is the property of some chemical elements to exist in two or more different forms, in the same physical state, known as allotropes of the elements. Allotropes are different structural modifications of an element: the atoms of the element are bonded together in a different manner. For example, the allotropes of carbon include diamond (the carbon atoms are bonded together to form a cubic lattice of tetrahedra), graphite (the carbon atoms are bonded together in sheets of a hexagonal lattice), graphene (single sheets of graphite), and fullerenes (the carbon atoms are bonded together in spherical, tubular, or ellipsoidal formations). The term allotropy is used for elements only, not for compounds. The more general term, used for any compound, is polymorphism, although its use is usually restricted to solid materials such as crystals. Allotropy refers only to different forms of an element within the same physical phase (the state of matter, such as a solid, liquid or gas). The differences between these states of matter would not alone constitute examples of allotropy. Allotropes of chemical elements are frequently referred to as polymorphs or as phases of the element. For some elements, allotropes have different molecular formulae or different crystalline structures, as well as a difference in physical phase; for example, two allotropes of oxygen (dioxygen, O2, and ozone, O3) can both exist in the solid, liquid and gaseous states. Other elements do not maintain distinct allotropes in different physical phases; for example, phosphorus has numerous solid allotropes, which all revert to the same P4 form when melted to the liquid state. History The concept of allotropy was originally proposed in 1840 by the Swedish scientist Baron Jöns Jakob Berzelius (1779–1848). The term is derived from the Greek allos ("other") and tropos ("manner, form"). After the acceptance of Avogadro's hypothesis in 1860, it was understood that elements could exist as polyatomic molecules, and two allotropes of oxygen were recognized as O2 and O3. In the early 20th century, it was recognized that other cases such as carbon were due to differences in crystal structure. By 1912, Ostwald noted that the allotropy of elements is just a special case of the phenomenon of polymorphism known for compounds, and proposed that the terms allotrope and allotropy be abandoned and replaced by polymorph and polymorphism. Although many other chemists have repeated this advice, IUPAC and most chemistry texts still favour the usage of allotrope and allotropy for elements only. Differences in properties of an element's allotropes Allotropes are different structural forms of the same element and can exhibit quite different physical properties and chemical behaviours. The change between allotropic forms is triggered by the same forces that affect other structures, i.e., pressure, light, and temperature. Therefore, the stability of the particular allotropes depends on particular conditions. For instance, iron changes from a body-centered cubic structure (ferrite) to a face-centered cubic structure (austenite) above 906 °C, and tin undergoes a modification known as tin pest from a metallic form to a semiconductor form below 13.2 °C (55.8 °F). As an example of allotropes having different chemical behaviour, ozone (O3) is a much stronger oxidizing agent than dioxygen (O2). List of allotropes Typically, elements capable of variable coordination number and/or oxidation states tend to exhibit greater numbers of allotropic forms.
Another contributing factor is the ability of an element to catenate. Examples of allotropes include: Non-metals Metalloids Metals Among the metallic elements that occur in nature in significant quantities (56 up to U, without Tc and Pm), almost half (27) are allotropic at ambient pressure: Li, Be, Na, Ca, Ti, Mn, Fe, Co, Sr, Y, Zr, Sn, La, Ce, Pr, Nd, Sm, Gd, Tb, Dy, Yb, Hf, Tl, Th, Pa and U. Some phase transitions between allotropic forms of technologically relevant metals are those of Ti at 882 °C, Fe at 912 °C and 1394 °C, Co at 422 °C, Zr at 863 °C, Sn at 13 °C and U at 668 °C and 776 °C. Lanthanides and actinides Cerium, samarium, dysprosium and ytterbium have three allotropes. Praseodymium, neodymium, gadolinium and terbium have two allotropes. Plutonium has six distinct solid allotropes under "normal" pressures. Their densities vary within a ratio of some 4:3, which vastly complicates all kinds of work with the metal (particularly casting, machining, and storage). A seventh plutonium allotrope exists at very high pressures. The transuranium metals Np, Am, and Cm are also allotropic. Promethium, americium, berkelium and californium have three allotropes each. Nanoallotropes In 2017, the concept of nanoallotropy was proposed by Prof. Rafal Klajn of the Organic Chemistry Department of the Weizmann Institute of Science. Nanoallotropes, or allotropes of nanomaterials, are nanoporous materials that have the same chemical composition (e.g., Au), but differ in their architecture at the nanoscale (that is, on a scale 10 to 100 times the dimensions of individual atoms). Such nanoallotropes may help create ultra-small electronic devices and find other industrial applications. The different nanoscale architectures translate into different properties, as was demonstrated for surface-enhanced Raman scattering performed on several different nanoallotropes of gold. A two-step method for generating nanoallotropes was also created. See also Isomer Polymorphism (materials science) Notes References External links Allotropes – Chemistry Encyclopedia Chemistry Inorganic chemistry Physical chemistry
Allotropy
Baron Augustin-Louis Cauchy (; ; 21 August 178923 May 1857) was a French mathematician, engineer, and physicist who made pioneering contributions to several branches of mathematics, including mathematical analysis and continuum mechanics. He was one of the first to state and rigorously prove theorems of calculus, rejecting the heuristic principle of the generality of algebra of earlier authors. He almost singlehandedly founded complex analysis and the study of permutation groups in abstract algebra. A profound mathematician, Cauchy had a great influence over his contemporaries and successors; Hans Freudenthal stated: "More concepts and theorems have been named for Cauchy than for any other mathematician (in elasticity alone there are sixteen concepts and theorems named for Cauchy)." Cauchy was a prolific writer; he wrote approximately eight hundred research articles and five complete textbooks on a variety of topics in the fields of mathematics and mathematical physics. Biography Youth and education Cauchy was the son of Louis François Cauchy (1760–1848) and Marie-Madeleine Desestre. Cauchy had two brothers: Alexandre Laurent Cauchy (1792–1857), who became a president of a division of the court of appeal in 1847 and a judge of the court of cassation in 1849, and Eugene François Cauchy (1802–1877), a publicist who also wrote several mathematical works. Cauchy married Aloise de Bure in 1818. She was a close relative of the publisher who published most of Cauchy's works. They had two daughters, Marie Françoise Alicia (1819) and Marie Mathilde (1823). Cauchy's father was a high official in the Parisian Police of the Ancien Régime, but lost this position due to the French Revolution (July 14, 1789), which broke out one month before Augustin-Louis was born. The Cauchy family survived the revolution and the following Reign of Terror (1793–94) by escaping to Arcueil, where Cauchy received his first education, from his father. After the execution of Robespierre (1794), it was safe for the family to return to Paris. There Louis-François Cauchy found himself a new bureaucratic job in 1800, and quickly moved up the ranks. When Napoleon Bonaparte came to power (1799), Louis-François Cauchy was further promoted, and became Secretary-General of the Senate, working directly under Laplace (who is now better known for his work on mathematical physics). The famous mathematician Lagrange was also a friend of the Cauchy family. On Lagrange's advice, Augustin-Louis was enrolled in the École Centrale du Panthéon, the best secondary school of Paris at that time, in the fall of 1802. Most of the curriculum consisted of classical languages; the young and ambitious Cauchy, being a brilliant student, won many prizes in Latin and the humanities. In spite of these successes, Augustin-Louis chose an engineering career, and prepared himself for the entrance examination to the École Polytechnique. In 1805, he placed second of 293 applicants on this exam and was admitted. One of the main purposes of this school was to give future civil and military engineers a high-level scientific and mathematical education. The school functioned under military discipline, which caused the young and pious Cauchy some problems in adapting. Nevertheless, he finished the Polytechnique in 1807, at the age of 18, and went on to the École des Ponts et Chaussées (School for Bridges and Roads). He graduated in civil engineering, with the highest honors. 
Engineering days After finishing school in 1810, Cauchy accepted a job as a junior engineer in Cherbourg, where Napoleon intended to build a naval base. Here Augustin-Louis stayed for three years, and was assigned the Ourcq Canal project and the Saint-Cloud Bridge project, and worked at the Harbor of Cherbourg. Although he had an extremely busy managerial job, he still found time to prepare three mathematical manuscripts, which he submitted to the Première Classe (First Class) of the Institut de France. Cauchy's first two manuscripts (on polyhedra) were accepted; the third one (on directrices of conic sections) was rejected. In September 1812, now 23 years old, Cauchy returned to Paris after becoming ill from overwork. Another reason for his return to the capital was that he was losing his interest in his engineering job, being more and more attracted to the abstract beauty of mathematics; in Paris, he would have a much better chance to find a mathematics related position. Therefore, when his health improved in 1813, Cauchy chose to not return to Cherbourg. Although he formally kept his engineering position, he was transferred from the payroll of the Ministry of the Marine to the Ministry of the Interior. The next three years Augustin-Louis was mainly on unpaid sick leave, and spent his time quite fruitfully, working on mathematics (on the related topics of symmetric functions, the symmetric group and the theory of higher-order algebraic equations). He attempted admission to the First Class of the Institut de France but failed on three different occasions between 1813 and 1815. In 1815 Napoleon was defeated at Waterloo, and the newly installed Bourbon king Louis XVIII took the restoration in hand. The Académie des Sciences was re-established in March 1816; Lazare Carnot and Gaspard Monge were removed from this Academy for political reasons, and the king appointed Cauchy to take the place of one of them. The reaction of Cauchy's peers was harsh; they considered the acceptance of his membership in the Academy an outrage, and Cauchy thereby created many enemies in scientific circles. Professor at École Polytechnique In November 1815, Louis Poinsot, who was an associate professor at the École Polytechnique, asked to be exempted from his teaching duties for health reasons. Cauchy was by then a rising mathematical star, who certainly merited a professorship. One of his great successes at that time was the proof of Fermat's polygonal number theorem. However, the fact that Cauchy was known to be very loyal to the Bourbons doubtless also helped him in becoming the successor of Poinsot. He finally quit his engineering job, and received a one-year contract for teaching mathematics to second-year students of the École Polytechnique. In 1816, this Bonapartist, non-religious school was reorganized, and several liberal professors were fired; the reactionary Cauchy was promoted to full professor. When Cauchy was 28 years old, he was still living with his parents. His father found it high time for his son to marry; he found him a suitable bride, Aloïse de Bure, five years his junior. The de Bure family were printers and booksellers, and published most of Cauchy's works. Aloïse and Augustin were married on April 4, 1818, with great Roman Catholic pomp and ceremony, in the Church of Saint-Sulpice. In 1819 the couple's first daughter, Marie Françoise Alicia, was born, and in 1823 the second and last daughter, Marie Mathilde. The conservative political climate that lasted until 1830 suited Cauchy perfectly. 
In 1824 Louis XVIII died, and was succeeded by his even more reactionary brother Charles X. During these years Cauchy was highly productive, and published one important mathematical treatise after another. He received cross-appointments at the Collège de France, and the . In exile In July 1830, the July Revolution occurred in France. Charles X fled the country, and was succeeded by the non-Bourbon king Louis-Philippe (of the House of Orléans). Riots, in which uniformed students of the École Polytechnique took an active part, raged close to Cauchy's home in Paris. These events marked a turning point in Cauchy's life, and a break in his mathematical productivity. Cauchy, shaken by the fall of the government, and moved by a deep hatred of the liberals who were taking power, left Paris to go abroad, leaving his family behind. He spent a short time at Fribourg in Switzerland, where he had to decide whether he would swear a required oath of allegiance to the new regime. He refused to do this, and consequently lost all his positions in Paris, except his membership of the Academy, for which an oath was not required. In 1831 Cauchy went to the Italian city of Turin, and after some time there, he accepted an offer from the King of Sardinia (who ruled Turin and the surrounding Piedmont region) for a chair of theoretical physics, which was created especially for him. He taught in Turin during 1832–1833. In 1831, he was elected a foreign member of the Royal Swedish Academy of Sciences, and the following year a Foreign Honorary Member of the American Academy of Arts and Sciences. In August 1833 Cauchy left Turin for Prague, to become the science tutor of the thirteen-year-old Duke of Bordeaux Henri d'Artois (1820–1883), the exiled Crown Prince and grandson of Charles X. As a professor of the École Polytechnique, Cauchy had been a notoriously bad lecturer, assuming levels of understanding that only a few of his best students could reach, and cramming his allotted time with too much material. The young Duke had neither taste nor talent for either mathematics or science, so student and teacher were a perfect mismatch. Although Cauchy took his mission very seriously, he did this with great clumsiness, and with surprising lack of authority over the Duke. During his civil engineering days, Cauchy once had been briefly in charge of repairing a few of the Parisian sewers, and he made the mistake of mentioning this to his pupil; with great malice, the young Duke went about saying Mister Cauchy started his career in the sewers of Paris. His role as tutor lasted until the Duke became eighteen years old, in September 1838. Cauchy did hardly any research during those five years, while the Duke acquired a lifelong dislike of mathematics. The only good that came out of this episode was Cauchy's promotion to baron, a title by which Cauchy set great store. In 1834, his wife and two daughters moved to Prague, and Cauchy was finally reunited with his family after four years in exile. Last years Cauchy returned to Paris and his position at the Academy of Sciences late in 1838. He could not regain his teaching positions, because he still refused to swear an oath of allegiance. In August 1839 a vacancy appeared in the Bureau des Longitudes. This Bureau bore some resemblance to the Academy; for instance, it had the right to co-opt its members. Further, it was believed that members of the Bureau could "forget about" the oath of allegiance, although formally, unlike the Academicians, they were obliged to take it. 
The Bureau des Longitudes was an organization founded in 1795 to solve the problem of determining position at sea — mainly the longitudinal coordinate, since latitude is easily determined from the position of the sun. Since it was thought that position at sea was best determined by astronomical observations, the Bureau had developed into an organization resembling an academy of astronomical sciences. In November 1839 Cauchy was elected to the Bureau, and discovered immediately that the matter of the oath was not so easily dispensed with. Without his oath, the king refused to approve his election. For four years Cauchy was in the position of being elected but not approved; accordingly, he was not a formal member of the Bureau, did not receive payment, could not participate in meetings, and could not submit papers. Still Cauchy refused to take any oaths; however, he did feel loyal enough to direct his research to celestial mechanics. In 1840, he presented a dozen papers on this topic to the Academy. He also described and illustrated the signed-digit representation of numbers, an innovation presented in England in 1727 by John Colson. The confounded membership of the Bureau lasted until the end of 1843, when Cauchy was finally replaced by Poinsot. Throughout the nineteenth century the French educational system struggled over the separation of church and state. After losing control of the public education system, the Catholic Church sought to establish its own branch of education and found in Cauchy a staunch and illustrious ally. He lent his prestige and knowledge to the École Normale Écclésiastique, a school in Paris run by Jesuits, for training teachers for their colleges. He also took part in the founding of the Institut Catholique. The purpose of this institute was to counter the effects of the absence of Catholic university education in France. These activities did not make Cauchy popular with his colleagues, who, on the whole, supported the Enlightenment ideals of the French Revolution. When a chair of mathematics became vacant at the Collège de France in 1843, Cauchy applied for it, but received just three of 45 votes. The year 1848 was the year of revolution all over Europe; revolutions broke out in numerous countries, beginning in France. King Louis-Philippe, fearful of sharing the fate of Louis XVI, fled to England. The oath of allegiance was abolished, and the road to an academic appointment was finally clear for Cauchy. On March 1, 1849, he was reinstated at the Faculté de Sciences, as a professor of mathematical astronomy. After political turmoil all through 1848, France chose to become a Republic, under the Presidency of Louis Napoleon Bonaparte, nephew of Napoleon Bonaparte, and son of Napoleon's brother, who had been installed as the first king of Holland. Soon (early 1852) the President made himself Emperor of France, and took the name Napoleon III. Not unexpectedly, the idea came up in bureaucratic circles that it would be useful to again require a loyalty oath from all state functionaries, including university professors. This time a cabinet minister was able to convince the Emperor to exempt Cauchy from the oath. Cauchy remained a professor at the university until his death at the age of 67. He received the Last Rites and died of a bronchial condition at 4 a.m. on 23 May 1857. His name is one of the 72 names inscribed on the Eiffel Tower. 
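The signed-digit representation mentioned above (allowing negative digits, so that no digit need exceed half the base in magnitude) can be illustrated with a short sketch. The following Python function is a minimal illustration under my own assumptions — the digit range and the function name are illustrative choices, not Cauchy's or Colson's original notation:

```python
def to_signed_digits(n, base=10):
    """Convert a non-negative integer to a signed-digit representation.

    Digits are chosen roughly in the range -base//2 .. +base//2, so large
    digits are traded for a negative digit plus a carry into the next place
    (the particular digit set used here is an illustrative convention).
    """
    digits = []
    while n != 0:
        d = n % base
        if d > base // 2:          # fold a large digit into a negative one
            d -= base              # e.g. 7 becomes -3, with a carry
        digits.append(d)
        n = (n - d) // base
    return list(reversed(digits)) or [0]

# 1987 = 2*1000 + 0*100 + (-1)*10 + (-3)*1  ->  [2, 0, -1, -3]
print(to_signed_digits(1987))
```

Writing 1987 as 2·1000 − 1·10 − 3 keeps every digit small in magnitude, which is the usual practical appeal of such representations.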
Work Early work The genius of Cauchy was illustrated in his simple solution of the problem of Apollonius—describing a circle touching three given circles—which he discovered in 1805, his generalization of Euler's formula on polyhedra in 1811, and in several other elegant problems. More important is his memoir on wave propagation, which obtained the Grand Prix of the French Academy of Sciences in 1816. Cauchy's writings covered notable topics including: the theory of series, where he developed the notion of convergence and discovered many of the basic formulas for q-series. In the theory of numbers and complex quantities, he was the first to define complex numbers as pairs of real numbers. He also wrote on the theory of groups and substitutions, the theory of functions, differential equations and determinants. Wave theory, mechanics, elasticity In the theory of light he worked on Fresnel's wave theory and on the dispersion and polarization of light. He also contributed research in mechanics, substituting the notion of the continuity of geometrical displacements for the principle of the continuity of matter. He wrote on the equilibrium of rods and elastic membranes and on waves in elastic media. He introduced a 3 × 3 symmetric matrix of numbers that is now known as the Cauchy stress tensor. In elasticity, he originated the theory of stress, and his results are nearly as valuable as those of Siméon Poisson. Number theory Other significant contributions include being the first to prove the Fermat polygonal number theorem. Complex functions Cauchy is most famous for his single-handed development of complex function theory. The first pivotal theorem proved by Cauchy, now known as Cauchy's integral theorem, was the following: $\oint_C f(z)\, dz = 0$, where f(z) is a complex-valued function holomorphic on and within the non-self-intersecting closed curve C (contour) lying in the complex plane. The contour integral is taken along the contour C. The rudiments of this theorem can already be found in a paper that the 24-year-old Cauchy presented to the Académie des Sciences (then still called "First Class of the Institute") on August 11, 1814. In full form the theorem was given in 1825. The 1825 paper is seen by many as Cauchy's most important contribution to mathematics. In 1826 Cauchy gave a formal definition of a residue of a function. This concept concerns functions that have poles—isolated singularities, i.e., points where a function goes to positive or negative infinity. If the complex-valued function f(z) can be expanded in the neighborhood of a singularity a as $f(z) = \varphi(z) + \frac{B_1}{z-a} + \frac{B_2}{(z-a)^2} + \cdots + \frac{B_n}{(z-a)^n}$, where φ(z) is analytic (i.e., well-behaved without singularities), then f is said to have a pole of order n in the point a. If n = 1, the pole is called simple. The coefficient B1 is called by Cauchy the residue of function f at a. If f is non-singular at a then the residue of f is zero at a. Clearly, in the case of a simple pole the residue is equal to $\lim_{z \to a} (z-a)\, f(z) = B_1 = \operatorname{Res}_{z=a} f(z)$, where we replaced B1 by the modern notation of the residue. In 1831, while in Turin, Cauchy submitted two papers to the Academy of Sciences of Turin. In the first he proposed the formula now known as Cauchy's integral formula, $f(a) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z-a}\, dz$, where f(z) is analytic on C and within the region bounded by the contour C and the complex number a is somewhere in this region. The contour integral is taken counter-clockwise. Clearly, the integrand has a simple pole at z = a. In the second paper he presented the residue theorem, $\frac{1}{2\pi i} \oint_C f(z)\, dz = \sum_{k=1}^{n} \operatorname{Res}_{z=a_k} f(z)$, where the sum is over all the n poles of f(z) on and within the contour C. 
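As a quick numerical sanity check of Cauchy's integral formula stated above, the following Python sketch (an illustrative verification, not part of the original text; the choice of f(z) = exp(z), the point a, and the use of numpy are my assumptions) approximates the contour integral on the unit circle and compares it with the directly evaluated f(a):

```python
import numpy as np

# Numerical illustration of Cauchy's integral formula
#   f(a) = (1 / (2*pi*i)) * \oint_C f(z) / (z - a) dz
# with f(z) = exp(z), C the unit circle traversed counter-clockwise, a inside C.
f = np.exp
a = 0.3 + 0.2j

t = np.linspace(0.0, 2.0 * np.pi, 20001)  # parameter along the contour
z = np.exp(1j * t)                        # points on the unit circle
dz_dt = 1j * np.exp(1j * t)               # derivative dz/dt

integrand = f(z) / (z - a) * dz_dt
contour_integral = np.trapz(integrand, t)  # approximate \oint_C f(z)/(z-a) dz

print(contour_integral / (2j * np.pi))  # numerically close to exp(a)
print(f(a))
```

The two printed values agree to many decimal places, as expected: the integrand has a single simple pole at z = a inside the contour, with residue f(a).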
These results of Cauchy's still form the core of complex function theory as it is taught today to physicists and electrical engineers. For quite some time, contemporaries of Cauchy ignored his theory, believing it to be too complicated. Only in the 1840s did the theory start to get a response, with Pierre Alphonse Laurent being the first mathematician, besides Cauchy, to make a substantial contribution (his Laurent series, published in 1843). Cours d'Analyse In his book Cours d'Analyse Cauchy stressed the importance of rigor in analysis. Rigor in this case meant the rejection of the principle of the generality of algebra (of earlier authors such as Euler and Lagrange) and its replacement by geometry and infinitesimals. Judith Grabiner wrote that Cauchy was "the man who taught rigorous analysis to all of Europe". The book is frequently noted as being the first place that inequalities, and δ–ε arguments, were introduced into calculus. Here Cauchy defined continuity as follows: The function f(x) is continuous with respect to x between the given limits if, between these limits, an infinitely small increment in the variable always produces an infinitely small increment in the function itself. M. Barany claims that the École mandated the inclusion of infinitesimal methods against Cauchy's better judgement. Gilain notes that when the portion of the curriculum devoted to Analyse Algébrique was reduced in 1825, Cauchy insisted on placing the topic of continuous functions (and therefore also infinitesimals) at the beginning of the Differential Calculus. Laugwitz (1989) and Benis-Sinaceur (1973) point out that Cauchy continued to use infinitesimals in his own research as late as 1853. Cauchy gave an explicit definition of an infinitesimal in terms of a sequence tending to zero. There has been a vast body of literature written about Cauchy's notion of "infinitesimally small quantities", arguing that it leads to everything from the usual "epsilontic" definitions to the notions of non-standard analysis. The consensus is that Cauchy omitted or left implicit the important ideas needed to make clear the precise meaning of the infinitely small quantities he used. Taylor's theorem He was the first to prove Taylor's theorem rigorously, establishing his well-known form of the remainder. He wrote a textbook for his students at the École Polytechnique in which he developed the basic theorems of mathematical analysis as rigorously as possible. In this book he gave the necessary and sufficient condition for the existence of a limit in the form that is still taught. Cauchy's well-known test for absolute convergence, the Cauchy condensation test, also stems from this book. In 1829 he defined for the first time a complex function of a complex variable in another textbook. In spite of these, Cauchy's own research papers often used intuitive, not rigorous, methods; thus one of his theorems was exposed to a "counter-example" by Abel, later fixed by the introduction of the notion of uniform convergence. Argument principle, stability In a paper published in 1855, two years before Cauchy's death, he discussed some theorems, one of which is similar to the "Argument Principle" in many modern textbooks on complex analysis. In modern control theory textbooks, the Cauchy argument principle is quite frequently used to derive the Nyquist stability criterion, which can be used to predict the stability of negative feedback amplifiers and negative feedback control systems. 
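To make the link to the Nyquist criterion concrete, here is a short Python sketch (an illustration under assumed test data, not Cauchy's or Nyquist's original formulation; the example function and its zero/pole locations are my own choices). By the argument principle, the number of times the image of a closed contour under a meromorphic function winds around the origin equals the number of enclosed zeros minus the number of enclosed poles:

```python
import numpy as np

def winding_number_about_origin(w):
    """Count how many times the closed curve w (array of complex points) winds around 0."""
    phase_steps = np.angle(w[1:] / w[:-1])   # small phase increments between samples
    return phase_steps.sum() / (2.0 * np.pi)

# f(z) = (z - 0.5)^2 / (z + 0.25) has two zeros and one pole inside the unit circle,
# so the argument principle predicts a winding number of Z - P = 2 - 1 = 1.
t = np.linspace(0.0, 2.0 * np.pi, 20001)
z = np.exp(1j * t)                            # the unit circle as the contour
f = (z - 0.5) ** 2 / (z + 0.25)

print(round(winding_number_about_origin(f)))  # -> 1
```

In control-theory terms, applying the same count to 1 + G(s)H(s) along the Nyquist contour gives the familiar encirclement test for closed-loop stability.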
Thus Cauchy's work has a strong impact on both pure mathematics and practical engineering. Published works Cauchy was very productive, in number of papers second only to Leonhard Euler. It took almost a century to collect all his writings into the 27 large volumes of his Œuvres complètes (Paris: Gauthier-Villars et fils, 1882–1974). His greatest contributions to mathematical science are enveloped in the rigorous methods which he introduced; these are mainly embodied in his three great treatises: the Cours d'analyse de l'École royale polytechnique (1821); Le Calcul infinitésimal (1823); and the Leçons sur les applications du calcul infinitésimal à la géométrie (1826–1828). His other works include: Exercices d'analyse et de physique mathématique (Volume 1) Exercices d'analyse et de physique mathématique (Volume 2) Exercices d'analyse et de physique mathématique (Volume 3) Exercices d'analyse et de physique mathématique (Volume 4) (Paris: Bachelier, 1840–1847) Analyse algébrique (Imprimerie Royale, 1821) Nouveaux exercices de mathématiques (Paris: Gauthier-Villars, 1895) Courses of mechanics (for the École Polytechnique) Higher algebra (for the ) Mathematical physics (for the Collège de France). Mémoire sur l'emploi des équations symboliques dans le calcul infinitésimal et dans le calcul aux différences finies, C. R. Acad. Sci. Paris, t. XVII, 449–458 (1843), credited as originating the operational calculus. Politics and religious beliefs Augustin-Louis Cauchy grew up in the house of a staunch royalist. This made his father flee with the family to Arcueil during the French Revolution. Their life there during that time was apparently hard; Augustin-Louis's father, Louis François, spoke of living on rice, bread, and crackers during the period. A paragraph from an undated letter from Louis François to his mother in Rouen says: In any event, he inherited his father's staunch royalism and hence refused to take oaths to any government after the overthrow of Charles X. He was an equally staunch Catholic and a member of the Society of Saint Vincent de Paul. He also had links to the Society of Jesus and defended them at the Academy when it was politically unwise to do so. His zeal for his faith may have led to his caring for Charles Hermite during his illness and leading Hermite to become a faithful Catholic. It also inspired Cauchy to plead on behalf of the Irish during the Great Famine of Ireland. His royalism and religious zeal also made him contentious, which caused difficulties with his colleagues. He felt that he was mistreated for his beliefs, but his opponents felt he intentionally provoked people by berating them over religious matters or by defending the Jesuits after they had been suppressed. Niels Henrik Abel called him a "bigoted Catholic" and added he was "mad and there is nothing that can be done about him", but at the same time praised him as a mathematician. Cauchy's views were widely unpopular among mathematicians, and when Guglielmo Libri Carucci dalla Sommaja was made chair in mathematics before him, he and many others felt his views were the cause. When Libri was accused of stealing books he was replaced by Joseph Liouville rather than Cauchy, which caused a rift between Liouville and Cauchy. Another dispute with political overtones concerned Jean-Marie Constant Duhamel and a claim on inelastic shocks. Cauchy was later shown, by Jean-Victor Poncelet, to be wrong. 
See also List of topics named after Augustin-Louis Cauchy Cauchy–Binet formula Cauchy boundary condition Cauchy's convergence test Cauchy (crater) Cauchy determinant Cauchy distribution Cauchy's equation Cauchy–Euler equation Cauchy's functional equation Cauchy horizon Cauchy formula for repeated integration Cauchy–Frobenius lemma Cauchy–Hadamard theorem Cauchy–Kovalevskaya theorem Cauchy momentum equation Cauchy–Peano theorem Cauchy principal value Cauchy problem Cauchy product Cauchy's radical test Cauchy–Rassias stability Cauchy–Riemann equations Cauchy–Schwarz inequality Cauchy sequence Cauchy surface Cauchy's theorem (geometry) Cauchy's theorem (group theory) Maclaurin–Cauchy test References Further reading Boyer, C.: The concepts of the calculus. Hafner Publishing Company, 1949. Benis-Sinaceur, Hourya: Cauchy et Bolzano. Revue d'histoire des sciences, 1973, Tome 26, n°2, pp. 97–112. External links Augustin-Louis Cauchy – Œuvres complètes (in 2 series), Gallica-Math; Augustin-Louis Cauchy – Cauchy's Life by Robin Hartshorne
Alternative medicine is any practice that aims to achieve the healing effects of medicine, but which lacks biological plausibility and is untested, untestable or proven ineffective. Complementary medicine (CM), complementary and alternative medicine (CAM), integrated medicine or integrative medicine (IM), and holistic medicine are among many rebrandings of the same phenomenon. Alternative therapies share in common that they reside outside of medical science and instead rely on pseudoscience. Traditional practices become "alternative" when used outside their original settings without proper scientific explanation and evidence. Frequently used derogatory terms for the alternative are new-age or pseudo, with little distinction from quackery. Some alternative practices are based on theories that contradict the science of how the human body works; others resort to the supernatural or superstitious to explain their effect. In others, the practice is plausibly effective but has too many side effects. Alternative medicine is distinct from scientific medicine, which employs the scientific method to test plausible therapies by way of responsible and ethical clinical trials, producing evidence of either effect or of no effect. Research into alternative therapies often fails to follow proper research protocols (such as placebo-controlled trials, blind experiments and calculation of prior probability), providing invalid results. Much of the perceived effect of an alternative practice arises from a belief that it will be effective (the placebo effect), or from the treated condition resolving on its own (the natural course of disease). This is further exacerbated by the tendency to turn to alternative therapies upon the failure of medicine, at which point the condition will be at its worst and most likely to spontaneously improve. In the absence of this bias, especially for diseases that are not expected to get better by themselves such as cancer or HIV infection, multiple studies have shown significantly worse outcomes if patients turn to alternative therapies. While this may be because these patients avoid effective treatment, some alternative therapies are actively harmful (e.g. cyanide poisoning from amygdalin, or the intentional ingestion of hydrogen peroxide) or actively interfere with effective treatments. The alternative sector is a highly profitable industry with a strong lobby, and faces far less regulation over the use and marketing of unproven treatments. Its marketing often advertises the treatments as being "natural" or "holistic", in comparison to those offered by medical science. Billions of dollars have been spent studying alternative medicine, with few or no positive results. Some of the successful practices are only considered alternative under very specific definitions, such as those which include all physical activity under the umbrella of "alternative medicine". Definitions and terminology The terms alternative medicine, complementary medicine, integrative medicine, holistic medicine, natural medicine, unorthodox medicine, fringe medicine, unconventional medicine, and new age medicine are used interchangeably as having the same meaning and are almost synonymous in most contexts. Terminology has shifted over time, reflecting the preferred branding of practitioners. 
For example, the United States National Institutes of Health department studying alternative medicine, currently named the National Center for Complementary and Integrative Health (NCCIH), was established as the Office of Alternative Medicine (OAM) and was renamed the National Center for Complementary and Alternative Medicine (NCCAM) before obtaining its current name. Therapies are often framed as "natural" or "holistic", implicitly and intentionally suggesting that conventional medicine is "artificial" and "narrow in scope". The meaning of the term "alternative" in the expression "alternative medicine", is not that it is an effective alternative to medical science, although some alternative medicine promoters may use the loose terminology to give the appearance of effectiveness. Loose terminology may also be used to suggest meaning that a dichotomy exists when it does not, e.g., the use of the expressions "Western medicine" and "Eastern medicine" to suggest that the difference is a cultural difference between the Asiatic east and the European west, rather than that the difference is between evidence-based medicine and treatments that do not work. Alternative medicine Alternative medicine is defined loosely as a set of products, practices, and theories that are believed or perceived by their users to have the healing effects of medicine, but whose effectiveness has not been established using scientific methods, or whose theory and practice is not part of biomedicine, or whose theories or practices are directly contradicted by scientific evidence or scientific principles used in biomedicine. "Biomedicine" or "medicine" is that part of medical science that applies principles of biology, physiology, molecular biology, biophysics, and other natural sciences to clinical practice, using scientific methods to establish the effectiveness of that practice. Unlike medicine, an alternative product or practice does not originate from using scientific methods, but may instead be based on hearsay, religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, fraud, or other unscientific sources. Some other definitions seek to specify alternative medicine in terms of its social and political marginality to mainstream healthcare. This can refer to the lack of support that alternative therapies receive from medical scientists regarding access to research funding, sympathetic coverage in the medical press, or inclusion in the standard medical curriculum. For example, a widely used definition devised by the US NCCIH calls it "a group of diverse medical and health care systems, practices, and products that are not generally considered part of conventional medicine". However, these descriptive definitions are inadequate in the present-day when some conventional doctors offer alternative medical treatments and introductory courses or modules can be offered as part of standard undergraduate medical training; alternative medicine is taught in more than half of US medical schools and US health insurers are increasingly willing to provide reimbursement for alternative therapies. Complementary or integrative medicine Complementary medicine (CM) or integrative medicine (IM) is when alternative medicine is used together with functional medical treatment in a belief that it improves the effect of treatments. 
For example, acupuncture (piercing the body with needles to influence the flow of a supernatural energy) might be believed to increase the effectiveness or "complement" science-based medicine when used at the same time. Instead, significant drug interactions caused by alternative therapies may make treatments less effective, notably in cancer therapy. Besides the usual issues with alternative medicine, integrative medicine has been described as an attempt to bring pseudoscience into academic science-based medicine, leading to the pejorative term "quackademic medicine". Due to its many names, the field has been criticized for intense rebranding of what are essentially the same practices. CAM is an abbreviation of the phrase complementary and alternative medicine. It has also been called sCAM or SCAM with the addition of "so-called" or "supplements". Other terms Traditional medicine refers to the pre-scientific practices of a certain culture, in contrast to what is typically practiced in cultures where medical science dominates. "Eastern medicine" typically refers to the traditional medicines of Asia where evidence-based medicine penetrated much later. Holistic medicine is another rebranding of alternative medicine. In this case, the words balance and holism are often used alongside complementary or integrative, claiming to take into account a "whole" person, in contrast to the supposed reductionism of medicine. Challenges in defining alternative medicine Prominent members of the science and biomedical science community say that it is not meaningful to define an alternative medicine that is separate from a conventional medicine because the expressions "conventional medicine", "alternative medicine", "complementary medicine", "integrative medicine", and "holistic medicine" do not refer to any medicine at all. Others say that alternative medicine cannot be precisely defined because of the diversity of theories and practices it includes, and because the boundaries between alternative and conventional medicine overlap, are porous, and change. Healthcare practices categorized as alternative may differ in their historical origin, theoretical basis, diagnostic technique, therapeutic practice and in their relationship to the medical mainstream. Under a definition of alternative medicine as "non-mainstream", treatments considered alternative in one location may be considered conventional in another. Critics say the expression is deceptive because it implies there is an effective alternative to science-based medicine, and that complementary is deceptive because it implies that the treatment increases the effectiveness of (complements) science-based medicine, while alternative medicines that have been tested nearly always have no measurable positive effect compared to a placebo. John Diamond wrote that "there is really no such thing as alternative medicine, just medicine that works and medicine that doesn't", a notion later echoed by Paul Offit: "The truth is there's no such thing as conventional or alternative or complementary or integrative or holistic medicine. There's only medicine that works and medicine that doesn't. And the best way to sort it out is by carefully evaluating scientific studies—not by visiting Internet chat rooms, reading magazine articles, or talking to friends." Comedian Tim Minchin has also taken to the issue in his viral animation short Storm: "By definition alternative medicine has either not been proved to work, or been proved not to work. 
Do you know what they call alternative medicine that's been proved to work? Medicine." Types Alternative medicine consists of a wide range of health care practices, products, and therapies. The shared feature is a claim to heal that is not based on the scientific method. Alternative medicine practices are diverse in their foundations and methodologies. Alternative medicine practices may be classified by their cultural origins or by the types of beliefs upon which they are based. Methods may incorporate or be based on traditional medicinal practices of a particular culture, folk knowledge, superstition, spiritual beliefs, belief in supernatural energies (antiscience), pseudoscience, errors in reasoning, propaganda, fraud, new or different concepts of health and disease, and any bases other than being proven by scientific methods. Different cultures may have their own unique traditional or belief based practices developed recently or over thousands of years, and specific practices or entire systems of practices. Unscientific belief systems Alternative medicine, such as using naturopathy or homeopathy in place of conventional medicine, is based on belief systems not grounded in science. Traditional ethnic systems Alternative medical systems may be based on traditional medicine practices, such as traditional Chinese medicine (TCM), Ayurveda in India, or practices of other cultures around the world. Some useful applications of traditional medicines have been researched and accepted within ordinary medicine, however the underlying belief systems are seldom scientific and are not accepted. Traditional medicine is considered alternative when it is used outside its home region; or when it is used together with or instead of known functional treatment; or when it can be reasonably expected that the patient or practitioner knows or should know that it will not work – such as knowing that the practice is based on superstition. Supernatural energies Bases of belief may include belief in existence of supernatural energies undetected by the science of physics, as in biofields, or in belief in properties of the energies of physics that are inconsistent with the laws of physics, as in energy medicine. Herbal remedies and other substances Substance based practices use substances found in nature such as herbs, foods, non-vitamin supplements and megavitamins, animal and fungal products, and minerals, including use of these products in traditional medical practices that may also incorporate other methods. Examples include healing claims for non-vitamin supplements, fish oil, Omega-3 fatty acid, glucosamine, echinacea, flaxseed oil, and ginseng. Herbal medicine, or phytotherapy, includes not just the use of plant products, but may also include the use of animal and mineral products. It is among the most commercially successful branches of alternative medicine, and includes the tablets, powders and elixirs that are sold as "nutritional supplements". Only a very small percentage of these have been shown to have any efficacy, and there is little regulation as to standards and safety of their contents. Religion, faith healing, and prayer NCCIH classification A US agency, National Center on Complementary and Integrative Health (NCCIH), has created a classification system for branches of complementary and alternative medicine that divides them into five major groups. 
These groups have some overlap, and distinguish two types of energy medicine: veritable, which involves scientifically observable energy (including magnet therapy, colorpuncture and light therapy), and putative, which invokes physically undetectable or unverifiable energy. None of these energies have any evidence to support that they affect the body in any positive or health-promoting way. Whole medical systems: Cut across more than one of the other groups; examples include traditional Chinese medicine, naturopathy, homeopathy, and ayurveda. Mind-body interventions: Explore the interconnection between the mind, body, and spirit, under the premise that they affect "bodily functions and symptoms". A connection between mind and body is conventional medical fact, and this classification does not include therapies with proven function such as cognitive behavioral therapy. "Biology"-based practices: Use substances found in nature such as herbs, foods, vitamins, and other natural substances. (Note that as used here, "biology" does not refer to the science of biology, but is a usage newly coined by NCCIH. "Biology-based" as coined by NCCIH may refer to chemicals from a nonbiological source, such as use of the poison lead in traditional Chinese medicine, and to other nonbiological substances.) Manipulative and body-based practices: Feature manipulation or movement of body parts, such as is done in bodywork, chiropractic, and osteopathic manipulation. Energy medicine: A domain that deals with putative and verifiable energy fields: Biofield therapies are intended to influence energy fields that are purported to surround and penetrate the body. The existence of such energy fields has been disproven. Bioelectromagnetic-based therapies use verifiable electromagnetic fields, such as pulsed fields, alternating-current, or direct-current fields in a non-scientific manner. History The history of alternative medicine may refer to the history of a group of diverse medical practices that were collectively promoted as "alternative medicine" beginning in the 1970s, to the collection of individual histories of members of that group, or to the history of western medical practices that were labeled "irregular practices" by the western medical establishment. It includes the histories of complementary medicine and of integrative medicine. Before the 1970s, western practitioners who were not part of the increasingly science-based medical establishment were referred to as "irregular practitioners", and were dismissed by the medical establishment as unscientific and as practicing quackery. Until the 1970s, irregular practice became increasingly marginalized as quackery and fraud, as western medicine increasingly incorporated scientific methods and discoveries, and had a corresponding increase in success of its treatments. In the 1970s, irregular practices were grouped with traditional practices of nonwestern cultures and with other unproven or disproven practices that were not part of biomedicine, with the entire group collectively marketed and promoted under the single expression "alternative medicine". Use of alternative medicine in the west began to rise following the counterculture movement of the 1960s, as part of the rising new age movement of the 1970s. 
This was due to misleading mass marketing of "alternative medicine" being an effective "alternative" to biomedicine, changing social attitudes about not using chemicals and challenging the establishment and authority of any kind, sensitivity to giving equal measure to beliefs and practices of other cultures (cultural relativism), and growing frustration and desperation by patients about limitations and side effects of science-based medicine. At the same time, in 1975, the American Medical Association, which played the central role in fighting quackery in the United States, abolished its quackery committee and closed down its Department of Investigation. By the early to mid 1970s the expression "alternative medicine" came into widespread use, and the expression became mass marketed as a collection of "natural" and effective treatment "alternatives" to science-based biomedicine. By 1983, mass marketing of "alternative medicine" was so pervasive that the British Medical Journal (BMJ) pointed to "an apparently endless stream of books, articles, and radio and television programmes urge on the public the virtues of (alternative medicine) treatments ranging from meditation to drilling a hole in the skull to let in more oxygen". An analysis of trends in the criticism of complementary and alternative medicine (CAM) in five prestigious American medical journals during the period of reorganization within medicine (1965–1999) was reported as showing that the medical profession had responded to the growth of CAM in three phases, and that in each phase, changes in the medical marketplace had influenced the type of response in the journals. Changes included relaxed medical licensing, the development of managed care, rising consumerism, and the establishment of the USA Office of Alternative Medicine (later National Center for Complementary and Alternative Medicine, currently National Center for Complementary and Integrative Health). Medical education Mainly as a result of reforms following the Flexner Report of 1910 medical education in established medical schools in the US has generally not included alternative medicine as a teaching topic. Typically, their teaching is based on current practice and scientific knowledge about: anatomy, physiology, histology, embryology, neuroanatomy, pathology, pharmacology, microbiology and immunology. Medical schools' teaching includes such topics as doctor-patient communication, ethics, the art of medicine, and engaging in complex clinical reasoning (medical decision-making). Writing in 2002, Snyderman and Weil remarked that by the early twentieth century the Flexner model had helped to create the 20th-century academic health center, in which education, research, and practice were inseparable. While this had much improved medical practice by defining with increasing certainty the pathophysiological basis of disease, a single-minded focus on the pathophysiological had diverted much of mainstream American medicine from clinical conditions that were not well understood in mechanistic terms, and were not effectively treated by conventional therapies. By 2001 some form of CAM training was being offered by at least 75 out of 125 medical schools in the US. Exceptionally, the School of Medicine of the University of Maryland, Baltimore, includes a research institute for integrative medicine (a member entity of the Cochrane Collaboration). 
Medical schools are responsible for conferring medical degrees, but a physician typically may not legally practice medicine until licensed by the local government authority. Licensed physicians in the US who have attended one of the established medical schools there have usually graduated Doctor of Medicine (MD). All states require that applicants for MD licensure be graduates of an approved medical school and complete the United States Medical Licensing Exam (USMLE). Efficacy There is a general scientific consensus that alternative therapies lack the requisite scientific validation, and their effectiveness is either unproved or disproved. Many of the claims regarding the efficacy of alternative medicines are controversial, since research on them is frequently of low quality and methodologically flawed. Selective publication bias, marked differences in product quality and standardisation, and some companies making unsubstantiated claims call into question the claims of efficacy of isolated examples where there is evidence for alternative therapies. The Scientific Review of Alternative Medicine points to confusions in the general population – a person may attribute symptomatic relief to an otherwise-ineffective therapy just because they are taking something (the placebo effect); the natural recovery from or the cyclical nature of an illness (the regression fallacy) gets misattributed to an alternative medicine being taken; a person not diagnosed with science-based medicine may never originally have had a true illness diagnosed as an alternative disease category. Edzard Ernst characterized the evidence for many alternative techniques as weak, nonexistent, or negative and in 2011 published his estimate that about 7.4% were based on "sound evidence", although he believes that may be an overestimate. Ernst has concluded that 95% of the alternative therapies he and his team studied, including acupuncture, herbal medicine, homeopathy, and reflexology, are "statistically indistinguishable from placebo treatments", but he also believes there is something that conventional doctors can usefully learn from the chiropractors and homeopath: this is the therapeutic value of the placebo effect, one of the strangest phenomena in medicine. In 2003, a project funded by the CDC identified 208 condition-treatment pairs, of which 58% had been studied by at least one randomized controlled trial (RCT), and 23% had been assessed with a meta-analysis. According to a 2005 book by a US Institute of Medicine panel, the number of RCTs focused on CAM has risen dramatically. , the Cochrane Library had 145 CAM-related Cochrane systematic reviews and 340 non-Cochrane systematic reviews. An analysis of the conclusions of only the 145 Cochrane reviews was done by two readers. In 83% of the cases, the readers agreed. In the 17% in which they disagreed, a third reader agreed with one of the initial readers to set a rating. These studies found that, for CAM, 38.4% concluded positive effect or possibly positive (12.4%), 4.8% concluded no effect, 0.7% concluded harmful effect, and 56.6% concluded insufficient evidence. An assessment of conventional treatments found that 41.3% concluded positive or possibly positive effect, 20% concluded no effect, 8.1% concluded net harmful effects, and 21.3% concluded insufficient evidence. However, the CAM review used the more developed 2004 Cochrane database, while the conventional review used the initial 1998 Cochrane database. 
Alternative therapies do not "complement" (improve the effect of, or mitigate the side effects of) functional medical treatment. Significant drug interactions caused by alternative therapies may instead negatively impact functional treatment by making prescription drugs less effective, such as interference by herbal preparations with warfarin. In the same way as for conventional therapies, drugs, and interventions, it can be difficult to test the efficacy of alternative medicine in clinical trials. In instances where an established, effective, treatment for a condition is already available, the Helsinki Declaration states that withholding such treatment is unethical in most circumstances. Use of standard-of-care treatment in addition to an alternative technique being tested may produce confounded or difficult-to-interpret results. Cancer researcher Andrew J. Vickers has stated: Perceived mechanism of effect Anything classified as alternative medicine by definition does not have a healing or medical effect. However, there are different mechanisms through which it can be perceived to "work". The common denominator of these mechanisms is that effects are mis-attributed to the alternative treatment. Placebo effect A placebo is a treatment with no intended therapeutic value. An example of a placebo is an inert pill, but it can include more dramatic interventions like sham surgery. The placebo effect is the concept that patients will perceive an improvement after being treated with an inert treatment. The opposite of the placebo effect is the nocebo effect, when patients who expect a treatment to be harmful will perceive harmful effects after taking it. Placebos do not have a physical effect on diseases or improve overall outcomes, but patients may report improvements in subjective outcomes such as pain and nausea. A 1955 study suggested that a substantial part of a medicine's impact was due to the placebo effect. However, reassessments found the study to have flawed methodology. This and other modern reviews suggest that other factors like natural recovery and reporting bias should also be considered. All of these are reasons why alternative therapies may be credited for improving a patient's condition even though the objective effect is non-existent, or even harmful. David Gorski argues that alternative treatments should be treated as a placebo, rather than as medicine. Almost none have performed significantly better than a placebo in clinical trials. Furthermore, distrust of conventional medicine may lead to patients experiencing the nocebo effect when taking effective medication. Regression to the mean A patient who receives an inert treatment may report improvements afterwards that it did not cause. Assuming it was the cause without evidence is an example of the regression fallacy. This may be due to a natural recovery from the illness, or a fluctuation in the symptoms of a long-term condition. The concept of regression toward the mean implies that an extreme result is more likely to be followed by a less extreme result. Other factors There are also reasons why a placebo treatment group may outperform a "no-treatment" group in a test which are not related to a patient's experience. These include patients reporting more favourable results than they really felt due to politeness or "experimental subordination", observer bias, and misleading wording of questions. In their 2010 systematic review of studies into placebos, Asbjørn Hróbjartsson and Peter C. 
Gøtzsche write that "even if there were no true effect of placebo, one would expect to record differences between placebo and no-treatment groups due to bias associated with lack of blinding." Alternative therapies may also be credited for perceived improvement through decreased use or effect of medical treatment, and therefore either decreased side effects or nocebo effects towards standard treatment. Use and regulation Appeal Practitioners of complementary medicine usually discuss and advise patients as to available alternative therapies. Patients often express interest in mind-body complementary therapies because they offer a non-drug approach to treating some health conditions. In addition to the social-cultural underpinnings of the popularity of alternative medicine, there are several psychological issues that are critical to its growth, notably psychological effects, such as the will to believe, cognitive biases that help maintain self-esteem and promote harmonious social functioning, and the post hoc, ergo propter hoc fallacy. Marketing Alternative medicine is a profitable industry with large media advertising expenditures. Accordingly, alternative practices are often portrayed positively and compared favorably to "big pharma". The popularity of complementary & alternative medicine (CAM) may be related to other factors that Edzard Ernst mentioned in an interview in The Independent: Paul Offit proposed that "alternative medicine becomes quackery" in four ways: by recommending against conventional therapies that are helpful, promoting potentially harmful therapies without adequate warning, draining patients' bank accounts, or by promoting "magical thinking." Promoting alternative medicine has been called dangerous and unethical. Social factors Authors have speculated on the socio-cultural and psychological reasons for the appeal of alternative medicines among the minority using them in lieu of conventional medicine. There are several socio-cultural reasons for the interest in these treatments centered on the low level of scientific literacy among the public at large and a concomitant increase in antiscientific attitudes and new age mysticism. Related to this are vigorous marketing of extravagant claims by the alternative medical community combined with inadequate media scrutiny and attacks on critics. Alternative medicine is criticized for taking advantage of the least fortunate members of society. There is also an increase in conspiracy theories toward conventional medicine and pharmaceutical companies, mistrust of traditional authority figures, such as the physician, and a dislike of the current delivery methods of scientific biomedicine, all of which have led patients to seek out alternative medicine to treat a variety of ailments. Many patients lack access to contemporary medicine, due to a lack of private or public health insurance, which leads them to seek out lower-cost alternative medicine. Medical doctors are also aggressively marketing alternative medicine to profit from this market. Patients can be averse to the painful, unpleasant, and sometimes-dangerous side effects of biomedical treatments. Treatments for severe diseases such as cancer and HIV infection have well-known, significant side-effects. Even low-risk medications such as antibiotics can have potential to cause life-threatening anaphylactic reactions in a very few individuals. Many medications may cause minor but bothersome symptoms such as cough or upset stomach. 
In all of these cases, patients may be seeking out alternative therapies to avoid the adverse effects of conventional treatments. Prevalence of use According to recent research, the increasing popularity of the CAM needs to be explained by moral convictions or lifestyle choices rather than by economic reasoning. In developing nations, access to essential medicines is severely restricted by lack of resources and poverty. Traditional remedies, often closely resembling or forming the basis for alternative remedies, may comprise primary healthcare or be integrated into the healthcare system. In Africa, traditional medicine is used for 80% of primary healthcare, and in developing nations as a whole over one-third of the population lack access to essential medicines. Some have proposed adopting a prize system to reward medical research. However, public funding for research exists. In the US increasing the funding for research on alternative medicine is the purpose of the US National Center for Complementary and Alternative Medicine (NCCAM). NCCAM has spent more than US$2.5 billion on such research since 1992 and this research has not demonstrated the efficacy of alternative therapies. The NCCAM's sister organization in the NIC Office of Cancer Complementary and Alternative Medicine gives grants of around $105 million every year. Testing alternative medicine that has no scientific basis has been called a waste of scarce research resources. That alternative medicine has been on the rise "in countries where Western science and scientific method generally are accepted as the major foundations for healthcare, and 'evidence-based' practice is the dominant paradigm" was described as an "enigma" in the Medical Journal of Australia. In the US In the United States, the 1974 Child Abuse Prevention and Treatment Act (CAPTA) required that for states to receive federal money, they had to grant religious exemptions to child neglect and abuse laws regarding religion-based healing practices. Thirty-one states have child-abuse religious exemptions. The use of alternative medicine in the US has increased, with a 50 percent increase in expenditures and a 25 percent increase in the use of alternative therapies between 1990 and 1997 in America. According to a national survey conducted in 2002, "36 percent of U.S. adults aged 18 years and over use some form of complementary and alternative medicine." Americans spend many billions on the therapies annually. Most Americans used CAM to treat and/or prevent musculoskeletal conditions or other conditions associated with chronic or recurring pain. In America, women were more likely than men to use CAM, with the biggest difference in use of mind-body therapies including prayer specifically for health reasons". In 2008, more than 37% of American hospitals offered alternative therapies, up from 27 percent in 2005, and 25% in 2004. More than 70% of the hospitals offering CAM were in urban areas. A survey of Americans found that 88 percent thought that "there are some good ways of treating sickness that medical science does not recognize". Use of magnets was the most common tool in energy medicine in America, and among users of it, 58 percent described it as at least "sort of scientific", when it is not at all scientific. In 2002, at least 60 percent of US medical schools have at least some class time spent teaching alternative therapies. 
"Therapeutic touch" was taught at more than 100 colleges and universities in 75 countries before the practice was debunked by a nine-year-old child for a school science project. Prevalence of use of specific therapies The most common CAM therapies used in the US in 2002 were prayer (45%), herbalism (19%), breathing meditation (12%), meditation (8%), chiropractic medicine (8%), yoga (5–6%), body work (5%), diet-based therapy (4%), progressive relaxation (3%), mega-vitamin therapy (3%) and Visualization (2%) In Britain, the most often used alternative therapies were Alexander technique, aromatherapy, Bach and other flower remedies, body work therapies including massage, Counseling stress therapies, hypnotherapy, meditation, reflexology, Shiatsu, Ayurvedic medicine, nutritional medicine, and Yoga. Ayurvedic medicine remedies are mainly plant based with some use of animal materials. Safety concerns include the use of herbs containing toxic compounds and the lack of quality control in Ayurvedic facilities. According to the National Health Service (England), the most commonly used complementary and alternative medicines (CAM) supported by the NHS in the UK are: acupuncture, aromatherapy, chiropractic, homeopathy, massage, osteopathy and clinical hypnotherapy. In palliative care Complementary therapies are often used in palliative care or by practitioners attempting to manage chronic pain in patients. Integrative medicine is considered more acceptable in the interdisciplinary approach used in palliative care than in other areas of medicine. "From its early experiences of care for the dying, palliative care took for granted the necessity of placing patient values and lifestyle habits at the core of any design and delivery of quality care at the end of life. If the patient desired complementary therapies, and as long as such treatments provided additional support and did not endanger the patient, they were considered acceptable." The non-pharmacologic interventions of complementary medicine can employ mind-body interventions designed to "reduce pain and concomitant mood disturbance and increase quality of life." Regulation The alternative medicine lobby has successfully pushed for alternative therapies to be subject to far less regulation than conventional medicine. Some professions of complementary/traditional/alternative medicine, such as chiropractic, have achieved full regulation in North America and other parts of the world and are regulated in a manner similar to that governing science-based medicine. In contrast, other approaches may be partially recognized and others have no regulation at all. In some cases, promotion of alternative therapies is allowed when there is demonstrably no effect, only a tradition of use. Despite laws making it illegal to market or promote alternative therapies for use in cancer treatment, many practitioners promote them. Regulation and licensing of alternative medicine ranges widely from country to country, and state to state. In Austria and Germany complementary and alternative medicine is mainly in the hands of doctors with MDs, and half or more of the American alternative practitioners are licensed MDs. In Germany herbs are tightly regulated: half are prescribed by doctors and covered by health insurance. Government bodies in the US and elsewhere have published information or guidance about alternative medicine. The U.S. Food and Drug Administration (FDA), has issued online warnings for consumers about medication health fraud. 
This includes a section on Alternative Medicine Fraud, such as a warning that Ayurvedic products generally have not been approved by the FDA before marketing. Risks and problems Negative outcomes According to the Institute of Medicine, use of alternative medical techniques may result in several types of harm: "Economic harm, which results in monetary loss but presents no health hazard;" "Indirect harm, which results in a delay of appropriate treatment, or in unreasonable expectations that discourage patients and their families from accepting and dealing effectively with their medical conditions;" "Direct harm, which results in adverse patient outcome." Interactions with conventional pharmaceuticals Forms of alternative medicine that are biologically active can be dangerous even when used in conjunction with conventional medicine. Examples include immuno-augmentation therapy, shark cartilage, bioresonance therapy, oxygen and ozone therapies, and insulin potentiation therapy. Some herbal remedies can cause dangerous interactions with chemotherapy drugs, radiation therapy, or anesthetics during surgery, among other problems. An example of these dangers was reported by Associate Professor Alastair MacLennan of Adelaide University, Australia regarding a patient who almost bled to death on the operating table after neglecting to mention that she had been taking "natural" potions to "build up her strength" before the operation, including a powerful anticoagulant that nearly caused her death. To ABC Online, MacLennan also gives another possible mechanism: Side-effects Conventional treatments are subjected to testing for undesired side-effects, whereas alternative therapies, in general, are not subjected to such testing at all. Any treatment – whether conventional or alternative – that has a biological or psychological effect on a patient may also have potential to possess dangerous biological or psychological side-effects. Attempts to refute this fact with regard to alternative therapies sometimes use the appeal to nature fallacy, i.e., "That which is natural cannot be harmful." Specific groups of patients such as patients with impaired hepatic or renal function are more susceptible to side effects of alternative remedies. An exception to the normal thinking regarding side-effects is Homeopathy. Since 1938, the U.S. Food and Drug Administration (FDA) has regulated homeopathic products in "several significantly different ways from other drugs." Homeopathic preparations, termed "remedies", are extremely dilute, often far beyond the point where a single molecule of the original active (and possibly toxic) ingredient is likely to remain. They are, thus, considered safe on that count, but "their products are exempt from good manufacturing practice requirements related to expiration dating and from finished product testing for identity and strength", and their alcohol concentration may be much higher than allowed in conventional drugs. Treatment delay Alternative medicine may discourage people from getting the best possible treatment. Those having experienced or perceived success with one alternative therapy for a minor ailment may be convinced of its efficacy and persuaded to extrapolate that success to some other alternative therapy for a more serious, possibly life-threatening illness. For this reason, critics argue that therapies that rely on the placebo effect to define success are very dangerous. 
Writing in 2002, mental health journalist Scott Lilienfeld stated that "unvalidated or scientifically unsupported mental health practices can lead individuals to forgo effective treatments", and referred to this as opportunity cost. Individuals who spend large amounts of time and money on ineffective treatments may be left with precious little of either, and may forfeit the opportunity to obtain treatments that could be more helpful. In short, even innocuous treatments can indirectly produce negative outcomes. Between 2001 and 2003, four children died in Australia because their parents chose ineffective naturopathic, homeopathic, or other alternative medicines and diets rather than conventional therapies. Unconventional cancer "cures" There have always been "many therapies offered outside of conventional cancer treatment centers and based on theories not found in biomedicine. These alternative cancer cures have often been described as 'unproven,' suggesting that appropriate clinical trials have not been conducted and that the therapeutic value of the treatment is unknown." However, "many alternative cancer treatments have been investigated in good-quality clinical trials, and they have been shown to be ineffective. ... The label 'unproven' is inappropriate for such therapies; it is time to assert that many alternative cancer therapies have been 'disproven'." Rejection of science Complementary and alternative medicine (CAM) is not as well researched as conventional medicine, which undergoes intense research before release to the public. Practitioners of science-based medicine also discard practices and treatments when they are shown ineffective, while alternative practitioners do not. Funding for research is also sparse, making it difficult to investigate the effectiveness of CAM further. Most funding for CAM research comes from government agencies. Proposed CAM research is rejected by most private funding agencies because the results of such research are not reliable. CAM research also has to meet certain standards from research ethics committees, which most CAM researchers find almost impossible to meet. Even with the little research done on it, CAM has not been proven to be effective. Studies that have been done are cited by CAM practitioners in an attempt to claim a basis in science. These studies tend to have a variety of problems, such as small samples, various biases, poor research design, lack of controls, negative results, etc. Even those with positive results can be better explained as false positives arising from bias and noisy data. Alternative medicine may lead to a false understanding of the body and of the process of science. Steven Novella, a neurologist at Yale School of Medicine, wrote that government-funded studies of integrating alternative medicine techniques into the mainstream are "used to lend an appearance of legitimacy to treatments that are not legitimate." Marcia Angell considered that critics felt that healthcare practices should be classified based solely on scientific evidence, and that if a treatment had been rigorously tested and found safe and effective, science-based medicine would adopt it regardless of whether it was considered "alternative" to begin with. It is possible for a method to change categories (proven vs. unproven) based on increased knowledge of its effectiveness or lack thereof. A prominent supporter of this position is George D. Lundberg, former editor of the Journal of the American Medical Association (JAMA).
Writing in 1999 in CA: A Cancer Journal for Clinicians, Barrie R. Cassileth mentioned a 1997 letter to the US Senate Subcommittee on Public Health and Safety that had deplored the lack of critical thinking and scientific rigor in OAM-supported research and had been signed by four Nobel Laureates and other prominent scientists. (This was supported by the National Institutes of Health (NIH).) In March 2009, a staff writer for the Washington Post reported that the impending national discussion about broadening access to health care, improving medical practice and saving money was giving a group of scientists an opening to propose shutting down the National Center for Complementary and Alternative Medicine. They quoted one of these scientists, Steven Salzberg, a genome researcher and computational biologist at the University of Maryland, as saying "One of our concerns is that NIH is funding pseudoscience." They noted that the vast majority of studies were based on fundamental misunderstandings of physiology and disease, and had shown little or no effect. Writers such as Carl Sagan, a noted astrophysicist, advocate of scientific skepticism and the author of The Demon-Haunted World: Science as a Candle in the Dark (1996), have lambasted the lack of empirical evidence to support the existence of the putative energy fields on which these therapies are predicated. Sampson has also pointed out that CAM tolerates contradiction without thorough reason and experiment. Barrett has pointed out that there is a policy at the NIH of never saying something does not work, only that a different version or dose might give different results. Barrett also expressed concern that, just because some "alternatives" have merit, there is the impression that the rest deserve equal consideration and respect even though most are worthless, since they are all classified under the one heading of alternative medicine. Some critics of alternative medicine are focused upon health fraud, misinformation, and quackery as public health problems, notably Wallace Sampson and Paul Kurtz, founders of the Scientific Review of Alternative Medicine, and Stephen Barrett, co-founder of The National Council Against Health Fraud and webmaster of Quackwatch. Grounds for opposing alternative medicine include that: It is usually based on religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, or fraud. Alternative therapies typically lack any scientific validation, and their effectiveness is either unproved or disproved. Treatments are not part of the conventional, science-based healthcare system. Research on alternative medicine is frequently of low quality and methodologically flawed. Where alternative therapies have replaced conventional science-based medicine, even with the safest alternative medicines, failure to use or delay in using conventional science-based medicine has caused deaths. Methods may incorporate or base themselves on traditional medicine, folk knowledge, spiritual beliefs, ignorance or misunderstanding of scientific principles, errors in reasoning, or newly conceived approaches claiming to heal. Many alternative medical treatments are not patentable, which may lead to less research funding from the private sector. In addition, in most countries, alternative therapies (in contrast to pharmaceuticals) can be marketed without any proof of efficacy – also a disincentive for manufacturers to fund scientific research.
English evolutionary biologist Richard Dawkins, in his 2003 book A Devil's Chaplain, defined alternative medicine as a "set of practices that cannot be tested, refuse to be tested, or consistently fail tests." Dawkins argued that if a technique is demonstrated effective in properly performed trials then it ceases to be alternative and simply becomes medicine. CAM is also often less regulated than conventional medicine. There are ethical concerns about whether people who perform CAM have the proper knowledge to treat patients. CAM is often done by non-physicians who do not operate under the same medical licensing laws that govern conventional medicine, and it is often described as an issue of non-maleficence. According to two writers, Wallace Sampson and K. Butler, marketing is part of the training required in alternative medicine, and propaganda methods in alternative medicine have been traced back to those used by Hitler and Goebbels in their promotion of pseudoscience in medicine. In November 2011, Edzard Ernst stated that the "level of misinformation about alternative medicine has now reached the point where it has become dangerous and unethical. So far, alternative medicine has remained an ethics-free zone. It is time to change this." Conflicts of interest Some commentators have said that special consideration must be given to the issue of conflicts of interest in alternative medicine. Edzard Ernst has said that most researchers into alternative medicine are at risk of "unidirectional bias" because of a generally uncritical belief in their chosen subject. Ernst cites as evidence the phenomenon whereby 100% of a sample of acupuncture trials originating in China had positive conclusions. David Gorski contrasts evidence-based medicine, in which researchers try to disprove hypotheses, with what he says is the frequent practice in pseudoscience-based research of striving to confirm pre-existing notions. Harriet Hall writes that there is a contrast between the circumstances of alternative medicine practitioners and disinterested scientists: in the case of acupuncture, for example, an acupuncturist would have "a great deal to lose" if acupuncture were rejected by research; but the disinterested skeptic would not lose anything if its effects were confirmed; rather their change of mind would enhance their skeptical credentials. Use of health and research resources Research into alternative therapies has been criticized for "diverting research time, money, and other resources from more fruitful lines of investigation in order to pursue a theory that has no basis in biology." Research methods expert and author of Snake Oil Science, R. Barker Bausell, has stated that "it's become politically correct to investigate nonsense." A commonly cited statistic is that the US National Institutes of Health had spent $2.5 billion on investigating alternative therapies prior to 2009, with none being found to be effective. See also Allopathic medicine Conservation medicine Ethnomedicine Gallbladder flush Homeopathy Hypnotherapy Osteopathic medicine Psychic surgery Siddha medicine Vertebral subluxation Notes References Bibliography Further reading World Health Organization Summary. Benchmarks for training in traditional / complementary and alternative medicine Journals Alternative Therapies in Health and Medicine. Aliso Viejo, California: InnoVision Communications, c1995- NLM ID: 9502013 Alternative Medicine Review: A Journal of Clinical Therapeutics. Sandpoint, Idaho: Thorne Research, c.
1996 NLM ID: 9705340 BMC Complementary and Alternative Medicine. London: BioMed Central, 2001 NLM ID: 101088661 Complementary Therapies in Medicine. Edinburgh; New York : Churchill Livingstone, c. 1993 NLM ID: 9308777 Evidence Based Complementary and Alternative Medicine: eCAM. New York: Hindawi, c. 2004 NLM ID: 101215021 Forschende Komplementärmedizin / Research in Complementary Medicine Journal for Alternative and Complementary Medicine New York : Mary Ann Liebert, c. 1995 Scientific Review of Alternative Medicine (SRAM) External links Pseudoscience
Alternative medicine
In evolutionary biology, adaptive radiation is a process in which organisms diversify rapidly from an ancestral species into a multitude of new forms, particularly when a change in the environment makes new resources available, alters biotic interactions or opens new environmental niches. Starting with a single ancestor, this process results in the speciation and phenotypic adaptation of an array of species exhibiting different morphological and physiological traits. The prototypical example of adaptive radiation is finch speciation on the Galapagos ("Darwin's finches"), but examples are known from around the world. Characteristics Four features can be used to identify an adaptive radiation: A common ancestry of component species: specifically a recent ancestry. Note that this is not the same as a monophyly in which all descendants of a common ancestor are included. A phenotype-environment correlation: a significant association between environments and the morphological and physiological traits used to exploit those environments. Trait utility: the performance or fitness advantages of trait values in their corresponding environments. Rapid speciation: presence of one or more bursts in the emergence of new species around the time that ecological and phenotypic divergence is underway. Conditions Adaptive radiations are thought to be triggered by an ecological opportunity or a new adaptive zone. Sources of ecological opportunity can be the loss of antagonists (competitors or predators), the evolution of a key innovation or dispersal to a new environment. Any one of these ecological opportunities has the potential to result in an increase in population size and relaxed stabilizing (constraining) selection. As genetic diversity is positively correlated with population size the expanded population will have more genetic diversity compared to the ancestral population. With reduced stabilizing selection phenotypic diversity can also increase. In addition, intraspecific competition will increase, promoting divergent selection to use a wider range of resources. This ecological release provides the potential for ecological speciation and thus adaptive radiation. Occupying a new environment might take place under the following conditions: A new habitat has opened up: a volcano, for example, can create new ground in the middle of the ocean. This is the case in places like Hawaii and the Galapagos. For aquatic species, the formation of a large new lake habitat could serve the same purpose; the tectonic movement that formed the East African Rift, ultimately leading to the creation of the Rift Valley Lakes, is an example of this. An extinction event could effectively achieve this same result, opening up niches that were previously occupied by species that no longer exist. This new habitat is relatively isolated. When a volcano erupts on the mainland and destroys an adjacent forest, it is likely that the terrestrial plant and animal species that used to live in the destroyed region will recolonize without evolving greatly. However, if a newly formed habitat is isolated, the species that colonize it will likely be somewhat random and uncommon arrivals. The new habitat has a wide availability of niche space. The rare colonist can only adaptively radiate into as many forms as there are niches. 
Relationship between mass-extinctions and mass adaptive radiations A 2020 study found there to be no direct causal relationship between the proportionally most comparable mass radiations and extinctions in terms of "co-occurrence of species", substantially challenging the hypothesis of "creative mass extinctions". Examples Darwin's finches Darwin's finches are an often-used textbook example of adaptive radiation. Today represented by approximately 15 species, Darwin's finches are Galapagos endemics famously adapted for a specialized feeding behavior (although one species, the Cocos finch (Pinaroloxias inornata), is not found in the Galapagos but on the island of Cocos south of Costa Rica). Darwin's finches are not actually finches in the true sense, but are members of the tanager family Thraupidae, and are derived from a single ancestor that arrived in the Galapagos from mainland South America perhaps just 3 million years ago. Excluding the Cocos finch, each species of Darwin's finch is generally widely distributed in the Galapagos and fills the same niche on each island. For the ground finches, this niche is a diet of seeds, and they have thick bills to facilitate the consumption of these hard materials. The ground finches are further specialized to eat seeds of a particular size: the large ground finch (Geospiza magnirostris) is the largest species of Darwin's finch and has the thickest beak for breaking open the toughest seeds, the small ground finch (Geospiza fuliginosa) has a smaller beak for eating smaller seeds, and the medium ground finch (Geospiza fortis) has a beak of intermediate size for optimal consumption of intermediately sized seeds (relative to G. magnirostris and G. fuliginosa). There is some overlap: for example, the most robust medium ground finches could have beaks larger than those of the smallest large ground finches. Because of this overlap, it can be difficult to tell the species apart by eye, though their songs differ. These three species often occur sympatrically, and during the rainy season in the Galapagos when food is plentiful, they specialize little and eat the same, easily accessible foods. It was not well-understood why their beaks were so adapted until Peter and Rosemary Grant studied their feeding behavior in the long dry season, and discovered that when food is scarce, the ground finches use their specialized beaks to eat the seeds that they are best suited to eat and thus avoid starvation. The other finches in the Galapagos are similarly uniquely adapted for their particular niche. The cactus finches (Geospiza sp.) have somewhat longer beaks than the ground finches that serve the dual purpose of allowing them to feed on Opuntia cactus nectar and pollen while these plants are flowering, but on seeds during the rest of the year. The warbler-finches (Certhidea sp.) have short, pointed beaks for eating insects. The woodpecker finch (Camarhynchus pallidus) has a slender beak which it uses to pick at wood in search of insects; it also uses small sticks to reach insect prey inside the wood, making it one of the few animals that use tools. The mechanism by which the finches initially diversified is still an area of active research. One proposition is that the finches were able to have a non-adaptive, allopatric speciation event on separate islands in the archipelago, such that when they reconverged on some islands, they were able to maintain reproductive isolation. 
Once they occurred in sympatry, niche specialization was favored so that the different species competed less directly for resources. This second, sympatric event was adaptive radiation. Cichlids of the African Great Lakes The haplochromine cichlid fishes in the Great Lakes of the East African Rift (particularly in Lake Tanganyika, Lake Malawi, and Lake Victoria) form the most speciose modern example of adaptive radiation. These lakes are believed to be home to about 2,000 different species of cichlid, spanning a wide range of ecological roles and morphological characteristics. Cichlids in these lakes fill nearly all of the roles typically filled by many fish families, including those of predators, scavengers, and herbivores, with varying dentitions and head shapes to match their dietary habits. In each case, the radiation events are only a few million years old, making the high level of speciation particularly remarkable. Several factors could be responsible for this diversity: the availability of a multitude of niches probably favored specialization, as few other fish taxa are present in the lakes (meaning that sympatric speciation was the most probable mechanism for initial specialization). Also, continual changes in the water level of the lakes during the Pleistocene (which often turned the largest lakes into several smaller ones) could have created the conditions for secondary allopatric speciation. Tanganyika cichlids Lake Tanganyika is the site from which nearly all the cichlid lineages of East Africa (including both riverine and lake species) originated. Thus, the species in the lake constitute a single adaptive radiation event but do not form a single monophyletic clade. Lake Tanganyika is also the least speciose of the three largest African Great Lakes, with only around 200 species of cichlid; however, these cichlids are more morphologically divergent and ecologically distinct than their counterparts in lakes Malawi and Victoria, an artifact of Lake Tanganyika's older cichlid fauna. Lake Tanganyika itself is believed to have formed 9–12 million years ago, putting a recent cap on the age of the lake's cichlid fauna. Many of Tanganyika's cichlids live very specialized lifestyles. The giant or emperor cichlid (Boulengerochromis microlepis) is a piscivore often ranked the largest of all cichlids (though it competes for this title with South America's Cichla temensis, the speckled peacock bass). It is thought that giant cichlids spawn only a single time, breeding in their third year and defending their young until they reach a large size, before dying of starvation some time thereafter. The three species of Altolamprologus are also piscivores, but with laterally compressed bodies and thick scales enabling them to chase prey into thin cracks in rocks without damaging their skin. Plecodus straeleni has evolved large, strangely curved teeth that are designed to scrape scales off of the sides of other fish, scales being its main source of food. Gnathochromis permaxillaris possesses a large mouth with a protruding upper lip, and feeds by opening this mouth downward onto the sandy lake bottom, sucking in small invertebrates. A number of Tanganyika's cichlids are shell-brooders, meaning that mating pairs lay and fertilize their eggs inside of empty shells on the lake bottom. Lamprologus callipterus is a unique egg-brooding species, with 15 cm-long males amassing collections of shells and guarding them in the hopes of attracting females (about 6 cm in length) to lay eggs in these shells. 
These dominant males must defend their territories from three types of rival: (1) other dominant males looking to steal shells; (2) younger, "sneaker" males looking to fertilize eggs in a dominant male's territory; and (3) tiny, 2–4 cm "parasitic dwarf" males that also attempt to rush in and fertilize eggs in the dominant male's territory. These parasitic dwarf males never grow to the size of dominant males, and the male offspring of dominant and parasitic dwarf males grow with 100% fidelity into the form of their fathers. A number of other highly specialized Tanganyika cichlids exist aside from these examples, including those adapted for life in open lake water up to 200 m deep. Malawi cichlids The cichlids of Lake Malawi constitute a "species flock" of up to 1000 endemic species. Only seven cichlid species in Lake Malawi are not a part of the species flock: the Eastern happy (Astatotilapia calliptera), the sungwa (Serranochromis robustus), and five tilapia species (genera Oreochromis and Coptodon). All of the other cichlid species in the lake are descendants of a single original colonist species, which itself was descended from Tanganyikan ancestors. The common ancestor of Malawi's species flock is believed to have reached the lake 3.4 million years ago at the earliest, making Malawi cichlids' diversification into their present numbers particularly rapid. Malawi's cichlids span a similarly broad range of feeding behaviors to those of Tanganyika, but also show signs of a much more recent origin. For example, all members of the Malawi species flock are mouth-brooders, meaning the female keeps her eggs in her mouth until they hatch; in almost all species, the eggs are also fertilized in the female's mouth, and in a few species, the females continue to guard their fry in their mouth after they hatch. Males of most species display predominantly blue coloration when mating. However, a number of particularly divergent species are known from Malawi, including the piscivorous Nimbochromis livingstonii, which lies on its side in the substrate until small cichlids, perhaps drawn to its broken white patterning, come to inspect the predator, at which point they are swiftly eaten. Victoria cichlids Lake Victoria's cichlids are also a species flock, once composed of some 500 or more species. The deliberate introduction of the Nile perch (Lates niloticus) in the 1950s proved disastrous for Victoria cichlids, and the collective biomass of the Victoria cichlid species flock has decreased substantially, and an unknown number of species have become extinct. However, the original range of morphological and behavioral diversity seen in the lake's cichlid fauna is still mostly present today, if endangered. These again include cichlids specialized for niches across the trophic spectrum, as in Tanganyika and Malawi, but again, there are standouts. Victoria is famously home to many piscivorous cichlid species, some of which feed by sucking the contents out of mouthbrooding females' mouths. Victoria's cichlids constitute a far younger radiation than even that of Lake Malawi, with estimates of the age of the flock ranging from 200,000 years to as little as 14,000. Adaptive radiation in Hawaii Hawaii has served as the site of a number of adaptive radiation events, owing to its isolation, recent origin, and large land area. The three most famous examples of these radiations are presented below, though insects like the Hawaiian drosophilid flies and Hyposmocoma moths have also undergone adaptive radiation.
Hawaiian honeycreepers The Hawaiian honeycreepers form a large, highly morphologically diverse species group of birds that began radiating in the early days of the Hawaiian archipelago. While today only 17 species are known to persist in Hawaii (3 more may or may not be extinct), there were more than 50 species prior to Polynesian colonization of the archipelago (between 18 and 21 species have gone extinct since the discovery of the islands by westerners). The Hawaiian honeycreepers are known for their beaks, which are specialized to satisfy a wide range of dietary needs: for example, the beak of the ʻakiapōlāʻau (Hemignathus wilsoni) is characterized by a short, sharp lower mandible for scraping bark off trees, and the much longer, curved upper mandible is used to probe the wood underneath for insects. Meanwhile, the ʻiʻiwi (Drepanis coccinea) has a very long curved beak for reaching nectar deep in Lobelia flowers. An entire clade of Hawaiian honeycreepers, the tribe Psittirostrini, is composed of thick-billed, mostly seed-eating birds, like the Laysan finch (Telespiza cantans). In at least some cases, similar morphologies and behaviors appear to have evolved convergently among the Hawaiian honeycreepers; for example, the short, pointed beaks of Loxops and Oreomystis evolved separately despite once forming the justification for lumping the two genera together. The Hawaiian honeycreepers are believed to have descended from a single common ancestor some 15 to 20 million years ago, though estimates range as low as 3.5 million years. Hawaiian silverswords Adaptive radiation is not a strictly vertebrate phenomenon, and examples are also known from among plants. The most famous example of adaptive radiation in plants is quite possibly the Hawaiian silverswords, named for alpine desert-dwelling Argyroxiphium species with long, silvery leaves that live for up to 20 years before growing a single flowering stalk and then dying. The Hawaiian silversword alliance consists of twenty-eight species of Hawaiian plants which, aside from the namesake silverswords, include trees, shrubs, vines, cushion plants, and more. The silversword alliance is believed to have originated in Hawaii no more than 6 million years ago, making this one of Hawaii's youngest adaptive radiation events. This means that the silverswords evolved on Hawaii's modern high islands, and descended from a single common ancestor that arrived on Kauai from western North America. The closest modern relatives of the silverswords today are California tarweeds of the family Asteraceae. Hawaiian lobelioids Hawaii is also the site of a separate major floral adaptive radiation event: the Hawaiian lobelioids. The Hawaiian lobelioids are significantly more speciose than the silverswords, perhaps because they have been present in Hawaii for so much longer: they descended from a single common ancestor that arrived in the archipelago up to 15 million years ago. Today the Hawaiian lobelioids form a clade of over 125 species, including succulents, trees, shrubs, epiphytes, etc. Many species have been lost to extinction, and many of the surviving species are endangered.
Anole radiation on the mainland has largely been a process of speciation, and is not adaptive to any great degree, but anoles on each of the Greater Antilles (Cuba, Hispaniola, Puerto Rico, and Jamaica) have adaptively radiated in separate, convergent ways. On each of these islands, anoles have evolved with such a consistent set of morphological adaptations that each species can be assigned to one of six "ecomorphs": trunk–ground, trunk–crown, grass–bush, crown–giant, twig, and trunk. Take, for example, crown–giants from each of these islands: the Cuban Anolis luteogularis, Hispaniola's Anolis ricordii, Puerto Rico's Anolis cuvieri, and Jamaica's Anolis garmani (Cuba and Hispaniola are both home to more than one species of crown–giant). These anoles are all large, canopy-dwelling species with large heads and large lamellae (scales on the undersides of the fingers and toes that are important for traction in climbing), and yet none of these species are particularly closely related and appear to have evolved these similar traits independently. The same can be said of the other five ecomorphs across the Caribbean's four largest islands. Much like in the case of the cichlids of the three largest African Great Lakes, each of these islands is home to its own convergent Anolis adaptive radiation event. Other examples Presented above are the most well-documented examples of modern adaptive radiation, but other examples are known. Populations of three-spined sticklebacks have repeatedly diverged and evolved into distinct ecotypes. On Madagascar, birds of the family Vangidae are marked by very distinct beak shapes to suit their ecological roles. Madagascan mantellid frogs have radiated into forms that mirror other tropical frog faunas, with the brightly colored mantellas (Mantella) having evolved convergently with the Neotropical poison dart frogs of Dendrobatidae, while the arboreal Boophis species are the Madagascan equivalent of tree frogs and glass frogs. The pseudoxyrhophiine snakes of Madagascar have evolved into fossorial, arboreal, terrestrial, and semi-aquatic forms that converge with the colubroid faunas in the rest of the world. These Madagascan examples are significantly older than most of the other examples presented here: Madagascar's fauna has been evolving in isolation since the island split from India some 88 million years ago, and the Mantellidae originated around 50 mya. Older examples are known: the K-Pg extinction event, which caused the disappearance of the dinosaurs and most other reptilian megafauna 65 million years ago, is seen as having triggered a global adaptive radiation event that created the mammal diversity that exists today. See also Cambrian explosion—the most notable evolutionary radiation event Evolutionary radiation—a more general term to describe any radiation List of adaptive radiated Hawaiian honeycreepers by form List of adaptive radiated marsupials by form Nonadaptive radiation References Further reading Wilson, E. et al. Life on Earth, by Wilson, E.; Eisner, T.; Briggs, W.; Dickerson, R.; Metzenberg, R.; O'Brien, R.; Susman, M.; Boggs, W. (Sinauer Associates, Inc., Publishers, Stamford, Connecticut), c 1974. Chapters: The Multiplication of Species; Biogeography, pp 824–877. 40 Graphs, w species pictures, also Tables, Photos, etc. Includes Galápagos Islands, Hawaii, and Australia subcontinent, (plus St. Helena Island, etc.). Leakey, Richard. The Origin of Humankind—on adaptive radiation in biology and human evolution, pp. 28–32, 1994, Orion Publishing. 
Grant, P.R. 1999. The ecology and evolution of Darwin's Finches. Princeton University Press, Princeton, NJ. Mayr, Ernst. 2001. What evolution is. Basic Books, New York, NY. Gavrilets, S. and A. Vose. 2009. Dynamic patterns of adaptive radiation: evolution of mating preferences. In Butlin, R.K., J. Bridle, and D. Schluter (eds) Speciation and Patterns of Diversity, Cambridge University Press, page. 102–126. Pinto, Gabriel, Luke Mahler, Luke J. Harmon, and Jonathan B. Losos. "Testing the Island Effect in Adaptive Radiation: Rates and Patterns of Morphological Diversification in Caribbean and Mainland Anolis Lizards." NCBI (2008): n. pag. Web. 28 Oct. 2014. Schluter, Dolph. The ecology of adaptive radiation. Oxford University Press, 2000. Speciation Evolutionary biology terminology
Adaptive radiation
Agarose gel electrophoresis is a method of gel electrophoresis used in biochemistry, molecular biology, genetics, and clinical chemistry to separate a mixed population of macromolecules such as DNA or proteins in a matrix of agarose, one of the two main components of agar. The proteins may be separated by charge and/or size (isoelectric focusing agarose electrophoresis is essentially size independent), and the DNA and RNA fragments by length. Biomolecules are separated by applying an electric field to move the charged molecules through an agarose matrix, and the biomolecules are separated by size in the agarose gel matrix. Agarose gel is easy to cast, has relatively fewer charged groups, and is particularly suitable for separating DNA of size range most often encountered in laboratories, which accounts for the popularity of its use. The separated DNA may be viewed with stain, most commonly under UV light, and the DNA fragments can be extracted from the gel with relative ease. Most agarose gels used are between 0.7–2% dissolved in a suitable electrophoresis buffer. Properties of agarose gel Agarose gel is a three-dimensional matrix formed of helical agarose molecules in supercoiled bundles that are aggregated into three-dimensional structures with channels and pores through which biomolecules can pass. The 3-D structure is held together with hydrogen bonds and can therefore be disrupted by heating back to a liquid state. The melting temperature is different from the gelling temperature, depending on the sources, agarose gel has a gelling temperature of 35–42 °C and a melting temperature of 85–95 °C. Low-melting and low-gelling agaroses made through chemical modifications are also available. Agarose gel has large pore size and good gel strength, making it suitable as an anticonvection medium for the electrophoresis of DNA and large protein molecules. The pore size of a 1% gel has been estimated from 100 nm to 200–500 nm, and its gel strength allows gels as dilute as 0.15% to form a slab for gel electrophoresis. Low-concentration gels (0.1–0.2%) however are fragile and therefore hard to handle. Agarose gel has lower resolving power than polyacrylamide gel for DNA but has a greater range of separation, and is therefore used for DNA fragments of usually 50–20,000 bp in size. The limit of resolution for standard agarose gel electrophoresis is around 750 kb, but resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). It can also be used to separate large proteins, and it is the preferred matrix for the gel electrophoresis of particles with effective radii larger than 5–10 nm. A 0.9% agarose gel has pores large enough for the entry of bacteriophage T4. The agarose polymer contains charged groups, in particular pyruvate and sulphate. These negatively charged groups create a flow of water in the opposite direction to the movement of DNA in a process called electroendosmosis (EEO), and can therefore retard the movement of DNA and cause blurring of bands. Higher concentration gels would have higher electroendosmotic flow. Low EEO agarose is therefore generally preferred for use in agarose gel electrophoresis of nucleic acids, but high EEO agarose may be used for other purposes. The lower sulphate content of low EEO agarose, particularly low-melting point (LMP) agarose, is also beneficial in cases where the DNA extracted from gel is to be used for further manipulation as the presence of contaminating sulphates may affect some subsequent procedures, such as ligation and PCR. 
Zero EEO agaroses however are undesirable for some applications as they may be made by adding positively charged groups and such groups can affect subsequent enzyme reactions. Electroendosmosis is a reason agarose is used in preference to agar as the agaropectin component in agar contains a significant amount of negatively charged sulphate and carboxyl groups. The removal of agaropectin in agarose substantially reduces the EEO, as well as reducing the non-specific adsorption of biomolecules to the gel matrix. However, for some applications such as the electrophoresis of serum proteins, a high EEO may be desirable, and agaropectin may be added in the gel used. Migration of nucleic acids in agarose gel Factors affecting migration of nucleic acid in gel A number of factors can affect the migration of nucleic acids: the dimension of the gel pores (gel concentration), size of DNA being electrophoresed, the voltage used, the ionic strength of the buffer, and the concentration of intercalating dye such as ethidium bromide if used during electrophoresis. Smaller molecules travel faster than larger molecules in gel, and double-stranded DNA moves at a rate that is inversely proportional to the logarithm of the number of base pairs. This relationship however breaks down with very large DNA fragments, and separation of very large DNA fragments requires the use of pulsed field gel electrophoresis (PFGE), which applies alternating current from different directions and the large DNA fragments are separated as they reorient themselves with the changing field. For standard agarose gel electrophoresis, larger molecules are resolved better using a low concentration gel while smaller molecules separate better at high concentration gel. Higher concentration gels, however, require longer run times (sometimes days). The movement of the DNA may be affected by the conformation of the DNA molecule, for example, supercoiled DNA usually moves faster than relaxed DNA because it is tightly coiled and hence more compact. In a normal plasmid DNA preparation, multiple forms of DNA may be present. Gel electrophoresis of the plasmids would normally show the negatively supercoiled form as the main band, while nicked DNA (open circular form) and the relaxed closed circular form appears as minor bands. The rate at which the various forms move however can change using different electrophoresis conditions, and the mobility of larger circular DNA may be more strongly affected than linear DNA by the pore size of the gel. Ethidium bromide which intercalates into circular DNA can change the charge, length, as well as the superhelicity of the DNA molecule, therefore its presence in gel during electrophoresis can affect its movement. For example, the positive charge of ethidium bromide can reduce the DNA movement by 15%. Agarose gel electrophoresis can be used to resolve circular DNA with different supercoiling topology. DNA damage due to increased cross-linking will also reduce electrophoretic DNA migration in a dose-dependent way. The rate of migration of the DNA is proportional to the voltage applied, i.e. the higher the voltage, the faster the DNA moves. The resolution of large DNA fragments however is lower at high voltage. The mobility of DNA may also change in an unsteady field – in a field that is periodically reversed, the mobility of DNA of a particular size may drop significantly at a particular cycling frequency. 
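The roughly log-linear relationship between migration distance and fragment size described above is what makes size estimation against a DNA ladder possible in practice. The following minimal sketch in Python shows that interpolation; the ladder sizes, migration distances, and function name are illustrative assumptions for the example, not measured values or a standard protocol.
import math
# Illustrative ladder: marker fragment sizes (bp) and their measured migration distances (mm).
ladder_bp = [10000, 5000, 2000, 1000, 500]
ladder_mm = [12.0, 18.5, 27.0, 34.0, 42.5]
def estimate_size(distance_mm):
    # Interpolate log10(bp) linearly against migration distance, reflecting the
    # approximately log-linear size/distance relationship for linear double-stranded DNA.
    log_sizes = [math.log10(bp) for bp in ladder_bp]
    for i in range(len(ladder_mm) - 1):
        if ladder_mm[i] <= distance_mm <= ladder_mm[i + 1]:
            frac = (distance_mm - ladder_mm[i]) / (ladder_mm[i + 1] - ladder_mm[i])
            return 10 ** (log_sizes[i] + frac * (log_sizes[i + 1] - log_sizes[i]))
    raise ValueError("distance lies outside the range covered by the ladder")
print(round(estimate_size(30.0)))  # a band at 30 mm interpolates to roughly 1.5 kb
As the surrounding text notes, this simple relationship holds only for linear DNA of moderate size run under standard conditions; very large fragments, circular forms, and unsteady fields all deviate from it.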
In field inversion gel electrophoresis (FIGE), this drop in mobility at particular cycling frequencies can result in band inversion, whereby larger DNA fragments move faster than smaller ones. Migration anomalies "Smiley" gels – this edge effect occurs when the voltage applied is too high for the gel concentration used. Overloading of DNA – overloading of DNA slows down the migration of DNA fragments. Contamination – the presence of impurities, such as salts or proteins, can affect the movement of the DNA. Mechanism of migration and separation The negative charge of its phosphate backbone moves the DNA towards the positively charged anode during electrophoresis. However, the migration of DNA molecules in solution, in the absence of a gel matrix, is independent of molecular weight during electrophoresis. The gel matrix is therefore responsible for the separation of DNA by size during electrophoresis, and a number of models exist to explain the mechanism of separation of biomolecules in the gel matrix. A widely accepted one is the Ogston model, which treats the polymer matrix as a sieve. A globular protein or a random-coil DNA moves through the interconnected pores, and the movement of larger molecules is more likely to be impeded and slowed down by collisions with the gel matrix, so molecules of different sizes can be separated in this sieving process. The Ogston model, however, breaks down for large molecules for which the pores are significantly smaller than the size of the molecule. For DNA molecules of size greater than 1 kb, a reptation model (or its variants) is most commonly used. This model assumes that the DNA can crawl in a "snake-like" fashion (hence "reptation") through the pores as an elongated molecule. A biased reptation model applies at higher electric field strength, whereby the leading end of the molecule becomes strongly biased in the forward direction and pulls the rest of the molecule along. Real-time fluorescence microscopy of stained molecules, however, showed more subtle dynamics during electrophoresis, with the DNA showing considerable elasticity as it alternately stretches in the direction of the applied field and then contracts into a ball, or becomes hooked into a U-shape when it gets caught on the polymer fibres. General procedure The details of an agarose gel electrophoresis experiment may vary depending on methods, but most follow a general procedure. Casting of gel The gel is prepared by dissolving the agarose powder in an appropriate buffer, such as TAE or TBE, to be used in electrophoresis. The agarose is dispersed in the buffer before being heated to near boiling point, while avoiding boiling. The melted agarose is allowed to cool sufficiently before pouring the solution into a cast, as the cast may warp or crack if the agarose solution is too hot. A comb is placed in the cast to create wells for loading samples, and the gel should be completely set before use. The concentration of gel affects the resolution of DNA separation. The agarose gel is composed of microscopic pores through which the molecules travel, and there is an inverse relationship between the pore size of the agarose gel and the concentration – pore size decreases as the density of agarose fibers increases. High gel concentration improves separation of smaller DNA molecules, while lowering gel concentration permits large DNA molecules to be separated. The process allows fragments ranging from 50 base pairs to several megabases to be separated depending on the gel concentration used.
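As a rough illustration of how such guideline percentages translate into a bench recipe, the sketch below (with illustrative threshold values and helper names; the percentage is weight per volume, as formalized in the next paragraph) computes the mass of agarose powder for a chosen gel percentage and buffer volume:
def agarose_grams(percent_w_v, buffer_ml):
    # Percent (w/v) is treated as grams of agarose per 100 ml of buffer.
    return percent_w_v * buffer_ml / 100.0
def suggested_percent(fragment_bp):
    # Very rough rules of thumb echoing the guideline percentages in this section.
    if fragment_bp > 5000:
        return 0.8   # dilute gels resolve large fragments better
    if fragment_bp < 1000:
        return 2.0   # concentrated gels resolve small fragments better
    return 1.0       # a common general-purpose percentage
pct = suggested_percent(3000)           # e.g. a hypothetical 3 kb fragment
print(pct, agarose_grams(pct, 50.0))    # a 1.0% gel in 50 ml of buffer needs 0.5 g of agarose
The thresholds above are only a sketch of the guidance given in this section; in practice the percentage is chosen from the supplier's or protocol's own recommendations for the fragment sizes of interest.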
The concentration is measured as weight of agarose per volume of buffer used (w/v); a 1% gel, for example, corresponds to 1 g of agarose in 100 ml of buffer. For a standard agarose gel electrophoresis, a 0.8% gel gives good separation or resolution of large 5–10 kb DNA fragments, while a 2% gel gives good resolution for small 0.2–1 kb fragments. A 1% gel is often used for standard electrophoresis. High-percentage gels are often brittle and may not set evenly, while low-percentage gels (0.1–0.2%) are fragile and not easy to handle. Low-melting-point (LMP) agarose gels are also more fragile than normal agarose gel. Low-melting-point agarose may be used on its own or simultaneously with standard agarose for the separation and isolation of DNA. PFGE and FIGE are often done with high-percentage agarose gels. Loading of samples Once the gel has set, the comb is removed, leaving wells where DNA samples can be loaded. Loading buffer is mixed with the DNA sample before the mixture is loaded into the wells. The loading buffer contains a dense compound, which may be glycerol, sucrose, or Ficoll, that raises the density of the sample so that the DNA sample may sink to the bottom of the well. If the DNA sample contains residual ethanol after its preparation, it may float out of the well. The loading buffer also includes colored dyes such as xylene cyanol and bromophenol blue used to monitor the progress of the electrophoresis. The DNA samples are loaded using a pipette. Electrophoresis Agarose gel electrophoresis is most commonly done horizontally in a submarine mode, whereby the slab gel is completely submerged in buffer during electrophoresis. It is also possible, but less common, to perform the electrophoresis vertically, as well as horizontally with the gel raised on agarose legs using an appropriate apparatus. The buffer used in the gel is the same as the running buffer in the electrophoresis tank, which is why electrophoresis in the submarine mode is possible with agarose gel. For optimal resolution of DNA greater than 2 kb in size in standard gel electrophoresis, 5 to 8 V/cm is recommended (the distance in cm refers to the distance between electrodes, therefore the recommended voltage would be 5 to 8 multiplied by the distance between the electrodes in cm). Voltage may also be limited by the fact that it heats the gel and may cause the gel to melt if it is run at high voltage for a prolonged period, especially if the gel used is LMP agarose gel. Too high a voltage may also reduce resolution, as well as causing band streaking for large DNA molecules. Too low a voltage may lead to broadening of bands for small DNA fragments due to dispersion and diffusion. Since DNA is not visible in natural light, the progress of the electrophoresis is monitored using colored dyes. Xylene cyanol (light blue color) comigrates with large DNA fragments, while bromophenol blue (dark blue) comigrates with the smaller fragments. Less commonly used dyes include Cresol Red and Orange G, which migrate ahead of bromophenol blue. A DNA marker is also run alongside the samples for the estimation of the molecular weight of the DNA fragments. Note, however, that the size of a circular DNA such as a plasmid cannot be accurately gauged using standard markers unless it has been linearized by restriction digest; alternatively, a supercoiled DNA marker may be used. Staining and visualization DNA as well as RNA are normally visualized by staining with ethidium bromide, which intercalates into the major grooves of the DNA and fluoresces under UV light.
The intercalation depends on the concentration of DNA; thus, a band with high intensity indicates a higher amount of DNA than a band of lower intensity. The ethidium bromide may be added to the agarose solution before it gels, or the DNA gel may be stained later after electrophoresis. Destaining of the gel is not necessary but may produce better images. Other methods of staining are available; examples are SYBR Green, GelRed, methylene blue, brilliant cresyl blue, Nile blue sulphate, and crystal violet. SYBR Green, GelRed and other similar commercial products are sold as safer alternatives to ethidium bromide, as it has been shown to be mutagenic in the Ames test, although the carcinogenicity of ethidium bromide has not actually been established. SYBR Green requires the use of a blue-light transilluminator. DNA stained with crystal violet can be viewed under natural light without the use of a UV transilluminator, which is an advantage; however, it may not produce strong bands. When stained with ethidium bromide, the gel is viewed with an ultraviolet (UV) transilluminator. The UV light excites the electrons within the aromatic ring of ethidium bromide, and once they return to the ground state, light is released, making the DNA and ethidium bromide complex fluoresce. Standard transilluminators use wavelengths of 302/312 nm (UV-B); however, exposure of DNA to UV radiation for as little as 45 seconds can produce damage to DNA and affect subsequent procedures, for example reducing the efficiency of transformation, in vitro transcription, and PCR. Exposure of the DNA to UV radiation should therefore be limited. Using a longer wavelength of 365 nm (UV-A range) causes less damage to the DNA but also produces much weaker fluorescence with ethidium bromide. Where multiple wavelengths can be selected on the transilluminator, the shorter wavelength would be used to capture images, while the longer wavelength should be used if it is necessary to work on the gel for any extended period of time. The transilluminator apparatus may also contain image capture devices, such as a digital or Polaroid camera, that allow an image of the gel to be taken or printed. For gel electrophoresis of proteins, the bands may be visualised with Coomassie or silver stains. Downstream procedures The separated DNA bands are often used for further procedures, and a DNA band may be cut out of the gel as a slice, dissolved and purified. Contaminants, however, may affect some downstream procedures such as PCR, and low-melting-point agarose may be preferred in some cases as it contains fewer of the sulphates that can affect some enzymatic reactions. The gels may also be used for blotting techniques. Buffers In general, the ideal buffer should have good conductivity, produce less heat and have a long life. There are a number of buffers used for agarose electrophoresis; common ones for nucleic acids include Tris/Acetate/EDTA (TAE) and Tris/Borate/EDTA (TBE). The buffers used contain EDTA to inactivate many nucleases, which require divalent cations for their function. The borate in TBE buffer can be problematic as borate can polymerize and/or interact with cis diols such as those found in RNA. TAE has the lowest buffering capacity, but it provides the best resolution for larger DNA. This means a lower voltage and more time, but a better product. Many other buffers have been proposed, e.g.
lithium borate (LB), isoelectric histidine, pK-matched Good's buffers, etc.; in most cases the purported rationale is lower current (less heat) and/or matched ion mobilities, which leads to longer buffer life. Tris-phosphate buffer has a high buffering capacity but cannot be used if the extracted DNA is to be used in a phosphate-sensitive reaction. LB is relatively new and is ineffective in resolving fragments larger than 5 kbp; however, with its low conductivity, a much higher voltage can be used (up to 35 V/cm), which means a shorter analysis time for routine electrophoresis. A size difference as small as one base pair could be resolved in a 3% agarose gel with an extremely low conductivity medium (1 mM lithium borate). Other buffering systems may be used in specific applications; for example, barbituric acid–sodium barbiturate or Tris-barbiturate buffers may be used in agarose gel electrophoresis of proteins, for example in the detection of abnormal protein distributions. Applications Estimation of the size of DNA molecules following digestion with restriction enzymes, e.g., in restriction mapping of cloned DNA. Estimation of the DNA concentration by comparing the intensity of the nucleic acid band with the corresponding band of the size marker. Analysis of products of a polymerase chain reaction (PCR), e.g., in molecular genetic diagnosis or genetic fingerprinting. Separation of DNA fragments for extraction and purification. Separation of restricted genomic DNA prior to Southern transfer, or of RNA prior to Northern transfer. Separation of proteins, for example, screening of protein abnormalities in clinical chemistry. Agarose gels are easily cast and handled compared to other matrices, and nucleic acids are not chemically altered during electrophoresis. Samples are also easily recovered. After the experiment is finished, the resulting gel can be stored in a plastic bag in a refrigerator. Electrophoresis is performed in buffer solutions to reduce pH changes due to the electric field, which is important because the charge of DNA and RNA depends on pH, but running for too long can exhaust the buffering capacity of the solution. Further, different preparations of genetic material may not migrate consistently with each other, for morphological or other reasons. History In the mid- to late 1960s, agarose and related gels were first found to be effective matrices for DNA and RNA electrophoresis. See also Gel electrophoresis Immunodiffusion, Immunoelectrophoresis SDD-AGE Northern blot SDS-polyacrylamide gel electrophoresis Southern blot References External links How to run a DNA or RNA gel Animation of gel analysis of DNA restriction fragments Video and article of agarose gel electrophoresis Step by step photos of running a gel and extracting DNA Drinking straw electrophoresis! A typical method from wikiversity Building a gel electrophoresis chamber Biological techniques and tools Molecular biology Electrophoresis Polymerase chain reaction Articles containing video clips
Agarose gel electrophoresis
Antimicrobial resistance (AMR) occurs when microbes evolve mechanisms that protect them from the effects of antimicrobials. Antibiotic resistance is a subset of AMR that applies specifically to bacteria that become resistant to antibiotics. Infections due to AMR cause millions of deaths each year. Infections caused by resistant microbes are more difficult to treat, requiring higher doses of antimicrobial drugs or alternative medications, which may prove more toxic. These approaches may also be more expensive. Microbes resistant to multiple antimicrobials are called multidrug resistant (MDR). All classes of microbes can evolve resistance. Fungi evolve antifungal resistance. Viruses evolve antiviral resistance. Protozoa evolve antiprotozoal resistance, and bacteria evolve antibiotic resistance. Those bacteria that are considered extensively drug resistant (XDR) or totally drug-resistant (TDR) are sometimes called "superbugs". Resistance in bacteria can arise naturally by genetic mutation, or by one species acquiring resistance from another. Resistance can appear spontaneously because of random mutations. However, extended use of antimicrobials appears to encourage selection for mutations which can render antimicrobials ineffective. The prevention of antibiotic misuse, which can lead to antibiotic resistance, includes taking antibiotics only when prescribed. Narrow-spectrum antibiotics are preferred over broad-spectrum antibiotics when possible, as effectively and accurately targeting specific organisms is less likely to cause resistance, as well as side effects. For people who take these medications at home, education about proper use is essential. Health care providers can minimize spread of resistant infections by use of proper sanitation and hygiene, including handwashing and disinfecting between patients, and should encourage the same of the patient, visitors, and family members. Rising drug resistance is caused mainly by use of antimicrobials in humans and other animals, and spread of resistant strains between the two. Growing resistance has also been linked to releasing inadequately treated effluents from the pharmaceutical industry, especially in countries where bulk drugs are manufactured. Antibiotics increase selective pressure in bacterial populations, causing vulnerable bacteria to die; this increases the percentage of resistant bacteria, which continue to grow. Even at very low levels of antibiotic, resistant bacteria can have a growth advantage and grow faster than vulnerable bacteria. As resistance to antibiotics becomes more common, there is a greater need for alternative treatments. Calls for new antibiotic therapies have been issued, but new drug development is becoming rarer. Antimicrobial resistance is increasing globally due to increased prescription and dispensing of antibiotic drugs in developing countries. Estimates are that 700,000 to several million deaths result per year, and AMR continues to pose a major public health threat worldwide. Each year in the United States, at least 2.8 million people become infected with bacteria that are resistant to antibiotics, at least 35,000 people die as a result, and the infections add an estimated US$55 billion in increased health care costs and lost productivity. According to World Health Organization (WHO) estimates, 350 million deaths could be caused by AMR by 2050. By then, the yearly death toll could be 10 million, according to a United Nations report.
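The growth advantage described above can be illustrated with a deliberately simplified two-strain model. In the Python sketch below, the starting counts, growth rate, and kill rate are arbitrary illustrative numbers rather than clinical parameters; the point is only to show how a tiny resistant subpopulation can come to dominate under sustained antibiotic exposure.
# Toy selection model: susceptible cells are killed faster than they grow,
# resistant cells keep growing, so the resistant fraction rises over time.
susceptible, resistant = 1.0e9, 1.0e3   # resistant cells start as a tiny minority
growth = 0.7                            # per-hour growth rate for both strains (illustrative)
kill = 1.5                              # extra per-hour killing of susceptible cells by the drug (illustrative)
for hour in range(24):                  # 24 hours of continuous antibiotic exposure
    susceptible = max(susceptible * (1.0 + growth - kill), 0.0)
    resistant = resistant * (1.0 + growth)
fraction = resistant / (resistant + susceptible)
print(f"resistant fraction after 24 h: {fraction:.4f}")
Run as written, the resistant strain makes up essentially the whole population after a day of simulated exposure, mirroring the qualitative dynamic described in this section.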
There are public calls for global collective action to address the threat, including proposals for international treaties on antimicrobial resistance. The worldwide extent of antibiotic resistance is not completely known, but poorer countries with weaker healthcare systems are more affected. During the COVID-19 pandemic, action against antimicrobial resistance slowed due to scientists focusing more on SARS-CoV-2 research. Definition The WHO defines antimicrobial resistance as a microorganism's resistance to an antimicrobial drug that was once able to treat an infection by that microorganism. A person cannot become resistant to antibiotics. Resistance is a property of the microbe, not of a person or other organism infected by a microbe. Antibiotic resistance is a subset of antimicrobial resistance. This more specific form of resistance is linked to pathogenic bacteria and is thus broken down into two further subsets, microbiological and clinical. Microbiological resistance is the most common and arises from genes, mutated or inherited, that allow the bacteria to resist the mechanism of action of certain antibiotics. Clinical resistance is shown through the failure of therapeutic techniques, where bacteria that are normally susceptible to a treatment become resistant after surviving it. In both cases of acquired resistance, the bacteria can pass the genetic catalyst for resistance through conjugation, transduction, or transformation. This allows the resistance to spread across the same pathogen or even similar bacterial pathogens. Overview A WHO report released in April 2014 stated, "this serious threat is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country. Antibiotic resistance—when bacteria change so antibiotics no longer work in people who need them to treat infections—is now a major threat to public health." Global deaths attributable to AMR numbered 1.27 million in 2019. That year, AMR may have contributed to 5 million deaths, and one in five people who died due to AMR were children under five years old. In 2018, WHO considered antibiotic resistance to be one of the biggest threats to global health, food security and development. Deaths attributable to AMR vary by area: the European Centre for Disease Prevention and Control calculated that in 2015 there were 671,689 infections in the EU and European Economic Area caused by antibiotic-resistant bacteria, resulting in 33,110 deaths. Most were acquired in healthcare settings. Causes Antimicrobial resistance is mainly caused by the overuse of antimicrobials. This leads to microbes either evolving a defense against drugs used to treat them, or certain strains of microbes that have a natural resistance to antimicrobials becoming much more prevalent than the ones that are easily defeated with medication. While antimicrobial resistance does occur naturally over time, the use of antimicrobial agents in a variety of settings, both within and outside the healthcare industry, has led to antimicrobial resistance becoming increasingly prevalent. Natural occurrence Antimicrobial resistance can evolve naturally due to continued exposure to antimicrobials. Natural selection means that organisms that are able to adapt to their environment survive and continue to produce offspring.
As a result, the types of microorganisms that are able to survive continued attack by certain antimicrobial agents will naturally become more prevalent in the environment, while those without this resistance will die out. Some contemporary antibiotic resistance mechanisms also evolved naturally, before the clinical use of the respective antimicrobials. For instance, methicillin resistance evolved in a pathogen of hedgehogs, possibly as a co-evolutionary adaptation of the pathogen to hedgehogs that are infected by a dermatophyte that naturally produces antibiotics. Over time, most of the bacterial strains and infections present will be of the type resistant to the antimicrobial agent being used to treat them, making this agent ineffective against most microbes. Increased use of antimicrobial agents speeds up this natural process. Self-medication Self-medication by consumers is defined as "the taking of medicines on one's own initiative or on another person's suggestion, who is not a certified medical professional", and it has been identified as one of the primary reasons for the evolution of antimicrobial resistance. In an effort to manage their own illness, patients take the advice of unreliable media sources, friends, and family, causing them to take antimicrobials unnecessarily or in excess. Many people resort to this out of necessity when they have limited money to see a doctor; in many developing countries, a poorly developed economy and a lack of doctors are the cause of self-medication. In these developing countries, governments allow the sale of antimicrobials as over-the-counter medications so that people can have access to them without having to find or pay to see a medical professional. This increased access makes it extremely easy to obtain antimicrobials without the advice of a physician, and as a result many antimicrobials are taken incorrectly, leading to resistant microbial strains. One major example of a place that faces these challenges is India, where in the state of Punjab 73% of the population resorted to treating their minor health issues and chronic illnesses through self-medication. The major issue with self-medication is the public's lack of knowledge of the dangerous effects of antimicrobial resistance, and of how they can contribute to it by mistreating or misdiagnosing themselves. In order to determine the public's knowledge and preconceived notions about antibiotic resistance, a major type of antimicrobial resistance, a screening of 3,537 articles published in Europe, Asia, and North America was done. Of the 55,225 total people surveyed, 70% had heard of antibiotic resistance previously, but 88% of those people thought it referred to some type of physical change in the body. With so many people around the world able to self-medicate using antibiotics, and a vast majority unaware of what antimicrobial resistance is, further increases in antimicrobial resistance become much more likely. Clinical misuse Clinical misuse by healthcare professionals is another cause of increased antimicrobial resistance. Studies done by the CDC show that the indication for antibiotic treatment, the choice of the agent used, and the duration of therapy were incorrect in up to 50% of the cases studied. In another study, done in an intensive care unit in a major hospital in France, it was shown that 30% to 60% of prescribed antibiotics were unnecessary. 
These inappropriate uses of antimicrobial agents promote the evolution of antimicrobial resistance by selecting for bacteria that have developed genetic alterations conferring resistance. In a study published in the American Journal of Infection Control that aimed to evaluate physicians' attitudes and knowledge on antimicrobial resistance in ambulatory settings, only 63% of those surveyed reported antibiotic resistance as a problem in their local practices, while 23% reported the aggressive prescription of antibiotics as necessary to avoid failing to provide adequate care. This demonstrates how a majority of doctors underestimate the impact that their own prescribing habits have on antimicrobial resistance as a whole. It also confirms that some physicians may be overly cautious when it comes to prescribing antibiotics, for medical or legal reasons, even when the indication for use of these medications is not always confirmed. This can lead to unnecessary antimicrobial use. Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse. Pandemics, disinfectants and healthcare systems Increased antibiotic use during the COVID-19 pandemic may exacerbate this global health challenge. Moreover, pandemic burdens on some healthcare systems may contribute to antibiotic-resistant infections. On the other hand, a study suggests that "increased hand hygiene, decreased international travel, and decreased elective hospital procedures may reduce AMR pathogen selection and spread in the short term". Disinfectants, such as alcohol-based hand sanitizers and antiseptic hand washes, may also have the potential to increase antimicrobial resistance. According to a study, "Extensive disinfectant use leads to mutations that induce antimicrobial resistance". Environmental pollution Untreated effluents from pharmaceutical manufacturing industries, hospitals and clinics, and inappropriate disposal of unused or expired medication can expose microbes in the environment to antibiotics and trigger the evolution of resistance. Food production Livestock The antimicrobial resistance crisis also extends to the food industry, specifically to food-producing animals. Antibiotics are fed to livestock to act as growth supplements and as a preventative measure to decrease the likelihood of infections. This can result in the transfer of resistant bacterial strains into the food that humans eat, potentially causing fatal transmission of disease. While this practice does result in better yields and meat products, it is a major obstacle to preventing antimicrobial resistance. Though the evidence linking antimicrobial usage in livestock to antimicrobial resistance is limited, the World Health Organization Advisory Group on Integrated Surveillance of Antimicrobial Resistance strongly recommended the reduction of use of medically important antimicrobials in livestock. Additionally, the Advisory Group stated that such antimicrobials should be expressly prohibited for both growth promotion and disease prevention. In a study published by the National Academy of Sciences mapping antimicrobial consumption in livestock globally, it was predicted that in the 228 countries studied there would be a total 67% increase in consumption of antibiotics by livestock by 2030. In some countries, such as Brazil, Russia, India, China, and South Africa, a 99% increase is predicted. 
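To make those projected increases concrete, the short calculation below converts a cumulative percentage rise over 2010 to 2030 into a projected total and the implied compound annual growth rate. The tonnage baselines are hypothetical placeholders chosen only for illustration; they are not figures taken from the study.

```python
def project(baseline_tonnes, total_increase, years=20):
    """Convert a cumulative percentage increase into a projected total and the
    implied compound annual growth rate (CAGR)."""
    projected = baseline_tonnes * (1 + total_increase)
    cagr = (1 + total_increase) ** (1 / years) - 1
    return projected, cagr

# Baselines here are hypothetical placeholders, not figures from the study.
for label, baseline, increase in [("worldwide", 60_000, 0.67),
                                  ("a fast-growing market", 5_000, 0.99)]:
    total, cagr = project(baseline, increase)
    print(f"{label}: {baseline:,} t in 2010 -> {total:,.0f} t in 2030 "
          f"(+{increase:.0%}, about {cagr:.1%} per year)")
```

A 67% rise over two decades corresponds to roughly 2.6% compound growth per year, and a 99% rise to roughly 3.5% per year.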
Several countries have restricted the use of antibiotics in livestock, including Canada, China, Japan, and the US. These restrictions are sometimes associated with a reduction in the prevalence of antimicrobial resistance in humans. Pesticides Most pesticides protect crops against insects and weeds, but in some cases antimicrobial pesticides are used to protect against various microorganisms such as bacteria, viruses, fungi, algae, and protozoa. The overuse of many pesticides in an effort to obtain higher crop yields has resulted in many of these microbes evolving a tolerance to these antimicrobial agents. Currently there are over 4,000 antimicrobial pesticides registered with the EPA and sold to market, showing the widespread use of these agents. It is estimated that 0.3 g of pesticides is used for every single meal a person consumes, as 90% of all pesticides are used in agriculture. A majority of these products are used to help defend against the spread of infectious diseases, and thereby to protect public health. But out of the large amount of pesticides used, it is also estimated that less than 0.1% of those antimicrobial agents actually reach their targets. That leaves over 99% of all pesticides used available to contaminate other resources. In soil, air, and water these antimicrobial agents are able to spread, coming into contact with more microorganisms and leading to these microbes evolving mechanisms to tolerate and further resist pesticides. Prevention There have been increasing public calls for global collective action to address the threat, including a proposal for an international treaty on antimicrobial resistance. Further detail and attention are still needed in order to recognize and measure trends in resistance at the international level; the idea of a global tracking system has been suggested, but implementation has yet to occur. A system of this nature would provide insight into areas of high resistance as well as information necessary for evaluating programs and other changes made to fight or reverse antibiotic resistance. Duration of antibiotics Antibiotic treatment duration should be based on the infection and other health problems a person may have. For many infections, once a person has improved, there is little evidence that stopping treatment causes more resistance. Some, therefore, feel that stopping early may be reasonable in some cases. Other infections, however, do require long courses regardless of whether a person feels better. Monitoring and mapping There are multiple national and international monitoring programs for drug-resistant threats, including methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant S. aureus (VRSA), extended-spectrum beta-lactamase (ESBL), vancomycin-resistant Enterococcus (VRE), and multidrug-resistant Acinetobacter baumannii (MRAB). ResistanceOpen is an online global map of antimicrobial resistance developed by HealthMap which displays aggregated data on antimicrobial resistance from publicly available and user-submitted data. The website can display data for a chosen radius around a location. Users may submit data from antibiograms for individual hospitals or laboratories. European data is from EARS-Net (the European Antimicrobial Resistance Surveillance Network), part of the ECDC. ResistanceMap is a website by the Center for Disease Dynamics, Economics & Policy and provides data on antimicrobial resistance on a global level. 
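As a rough illustration of the kind of aggregation such surveillance tools perform, the sketch below computes percent-resistant figures per pathogen and antibiotic from individual susceptibility results. The records are invented, and the snippet does not reflect the actual data formats, submission pipelines, or APIs of ResistanceOpen, EARS-Net, or ResistanceMap.

```python
# Minimal sketch: aggregate individual susceptibility results into the
# "% resistant" figures that surveillance dashboards typically display.
# The isolate records below are invented for illustration.
from collections import defaultdict

isolates = [
    # (pathogen, antibiotic, result)  where result is "R" (resistant) or "S" (susceptible)
    ("E. coli", "ciprofloxacin", "R"),
    ("E. coli", "ciprofloxacin", "S"),
    ("E. coli", "ciprofloxacin", "S"),
    ("S. aureus", "methicillin", "R"),
    ("S. aureus", "methicillin", "R"),
    ("S. aureus", "methicillin", "S"),
]

counts = defaultdict(lambda: {"R": 0, "total": 0})
for pathogen, antibiotic, result in isolates:
    key = (pathogen, antibiotic)
    counts[key]["total"] += 1
    if result == "R":
        counts[key]["R"] += 1

for (pathogen, antibiotic), c in sorted(counts.items()):
    pct = 100 * c["R"] / c["total"]
    print(f"{pathogen} vs {antibiotic}: {pct:.0f}% resistant ({c['R']}/{c['total']} isolates)")
```

Real systems add metadata such as location, date, and specimen type, which is what allows resistance rates to be mapped by region and tracked over time.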
Limiting antibiotic use Antibiotic stewardship programmes appear useful in reducing rates of antibiotic resistance. Such programmes also give pharmacists the knowledge to educate patients that antibiotics will not work for a virus. Excessive antibiotic use has become one of the top contributors to the evolution of antibiotic resistance. Since the beginning of the antibiotic era, antibiotics have been used to treat a wide range of diseases. Overuse of antibiotics has become the primary cause of rising levels of antibiotic resistance. The main problem is that doctors are willing to prescribe antibiotics to ill-informed individuals who believe that antibiotics can cure nearly all illnesses, including viral infections like the common cold. In an analysis of drug prescriptions, 36% of individuals with a cold or an upper respiratory infection (both viral in origin) were given prescriptions for antibiotics. These prescriptions accomplished nothing other than increasing the risk of further evolution of antibiotic-resistant bacteria. Using antibiotics without a prescription is another driving force leading to the overuse of antibiotics to self-treat diseases like the common cold, cough, fever, and dysentery, resulting in an epidemic of antibiotic resistance in countries like Bangladesh and risking its spread around the globe. Introducing strict antibiotic stewardship in the outpatient setting may reduce emerging bacterial resistance. At the hospital level Antimicrobial stewardship teams in hospitals are encouraging optimal use of antimicrobials. The goals of antimicrobial stewardship are to help practitioners pick the right drug at the right dose and duration of therapy while preventing misuse and minimizing the development of resistance. Stewardship may reduce the length of stay by an average of slightly over 1 day while not increasing the risk of death. At the farming level It is established that the use of antibiotics in animal husbandry can give rise to resistance, among bacteria found in food animals, to the antibiotics being administered (through injections or medicated feeds). For this reason only antimicrobials that are deemed "not-clinically relevant" are used in these practices. Recent studies have shown that the prophylactic use of "non-priority" or "non-clinically relevant" antimicrobials in feeds can potentially, under certain conditions, lead to co-selection of environmental AMR bacteria with resistance to medically important antibiotics. The possibility of such co-selection in the food chain may have far-reaching implications for human health. At the level of GP Given the volume of care provided in primary care (General Practice), recent strategies have focused on reducing unnecessary antibiotic prescribing in this setting. Simple interventions, such as written information explaining the futility of antibiotics for common infections such as upper respiratory tract infections, have been shown to reduce antibiotic prescribing. The prescriber should closely adhere to the five rights of drug administration: the right patient, the right drug, the right dose, the right route, and the right time. Cultures should be taken before treatment when indicated, and treatment potentially changed based on the susceptibility report. About a third of antibiotic prescriptions written in outpatient settings in the United States were not appropriate in 2010 and 2011. Doctors in the U.S. 
wrote 506 annual antibiotic scripts for every 1,000 people, of which only 353 were medically necessary. Health workers and pharmacists can help tackle resistance by: enhancing infection prevention and control; only prescribing and dispensing antibiotics when they are truly needed; and prescribing and dispensing the right antibiotic(s) to treat the illness. At the individual level People can help tackle resistance by using antibiotics only when prescribed by a doctor; completing the full prescription, even if they feel better; and never sharing antibiotics with others or using leftover prescriptions. Country examples The Netherlands has the lowest rate of antibiotic prescribing in the OECD, at 11.4 defined daily doses (DDD) per 1,000 people per day in 2011. Germany and Sweden also have lower prescribing rates, with Sweden's rate declining since 2007. Greece, France and Belgium have high prescribing rates of more than 28 DDD. Water, sanitation, hygiene Infectious disease control through improved water, sanitation and hygiene (WASH) infrastructure needs to be included in the antimicrobial resistance (AMR) agenda. The "Interagency Coordination Group on Antimicrobial Resistance" stated in 2018 that "the spread of pathogens through unsafe water results in a high burden of gastrointestinal disease, increasing even further the need for antibiotic treatment." This is particularly a problem in developing countries, where the spread of infectious diseases caused by inadequate WASH standards is a major driver of antibiotic demand. Growing usage of antibiotics together with persistent infectious disease levels has led to a dangerous cycle in which reliance on antimicrobials increases while the efficacy of drugs diminishes. The proper use of infrastructure for water, sanitation and hygiene (WASH) can result in a 47–72 percent decrease in diarrhea cases treated with antibiotics, depending on the type of intervention and its effectiveness. A reduction of the diarrhea disease burden through improved infrastructure would result in large decreases in the number of diarrhea cases treated with antibiotics. This was estimated as ranging from 5 million cases in Brazil to up to 590 million in India by the year 2030. The strong link between increased consumption and resistance indicates that this will directly mitigate the accelerating spread of AMR. Sanitation and water for all by 2030 is Goal Number 6 of the Sustainable Development Goals. An increase in hand washing compliance by hospital staff results in decreased rates of resistant organisms. Water supply and sanitation infrastructure in health facilities offer significant co-benefits for combatting AMR, and investment should be increased. There is much room for improvement: WHO and UNICEF estimated in 2015 that globally 38% of health facilities did not have a source of water, nearly 19% had no toilets and 35% had no water and soap or alcohol-based hand rub for handwashing. Industrial wastewater treatment Manufacturers of antimicrobials need to improve the treatment of their wastewater (by using industrial wastewater treatment processes) to reduce the release of residues into the environment. Management in animal use Europe In 1997, European Union health ministers voted to ban avoparcin, and in 1999 they banned four additional antibiotics used to promote animal growth. In 2006 a ban on the use of antibiotics in European feed, with the exception of two antibiotics in poultry feeds, became effective. 
In Scandinavia, there is evidence that the ban has led to a lower prevalence of antibiotic resistance in (nonhazardous) animal bacterial populations. As of 2004, several European countries had established a decline of antimicrobial resistance in humans by limiting the use of antimicrobials in agriculture and the food industries, without jeopardizing animal health or incurring significant economic cost. United States The United States Department of Agriculture (USDA) and the Food and Drug Administration (FDA) collect data on antibiotic use in humans and, in a more limited fashion, in animals. The FDA first determined in 1977 that there was evidence of the emergence of antibiotic-resistant bacterial strains in livestock. The long-established practice of permitting OTC sales of antibiotics (including penicillin and other drugs) to lay animal owners for administration to their own animals nonetheless continued in all states. In 2000, the FDA announced their intention to revoke approval of fluoroquinolone use in poultry production because of substantial evidence linking it to the emergence of fluoroquinolone-resistant Campylobacter infections in humans. Legal challenges from the food animal and pharmaceutical industries delayed the final decision to do so until 2006. Fluoroquinolones have been banned from extra-label use in food animals in the USA since 2007. However, they remain widely used in companion and exotic animals. Global action plans and awareness The increasing interconnectedness of the world and the fact that new classes of antibiotics have not been developed and approved for more than 25 years highlight the extent to which antimicrobial resistance is a global health challenge. A global action plan to tackle the growing problem of resistance to antibiotics and other antimicrobial medicines was endorsed at the Sixty-eighth World Health Assembly in May 2015. One of the key objectives of the plan is to improve awareness and understanding of antimicrobial resistance through effective communication, education and training. This global action plan, developed by the World Health Organization, was created to combat the issue of antimicrobial resistance and was guided by the advice of countries and key stakeholders. The WHO's global action plan is composed of five key objectives that can be targeted through different means, and represents countries coming together to solve a major problem that can have future health consequences. These objectives are as follows: improve awareness and understanding of antimicrobial resistance through effective communication, education and training; strengthen the knowledge and evidence base through surveillance and research; reduce the incidence of infection through effective sanitation, hygiene and infection prevention measures; optimize the use of antimicrobial medicines in human and animal health; and develop the economic case for sustainable investment that takes account of the needs of all countries, and increase investment in new medicines, diagnostic tools, vaccines and other interventions. Steps towards progress React, based in Sweden, has produced informative material on AMR for the general public. Videos are being produced for the general public to generate interest and awareness. The Irish Department of Health published a National Action Plan on Antimicrobial Resistance in October 2017. 
The Strategy for the Control of Antimicrobial Resistance in Ireland (SARI), launched in 2001, developed Guidelines for Antimicrobial Stewardship in Hospitals in Ireland in conjunction with the Health Protection Surveillance Centre; these were published in 2009. Following their publication, a public information campaign 'Action on Antibiotics' was launched to highlight the need for a change in antibiotic prescribing. Despite this, antibiotic prescribing remains high, with variance in adherence to guidelines. Antibiotic Awareness Week The World Health Organization promoted the first World Antibiotic Awareness Week, which ran from 16 to 22 November 2015. The aim of the week is to increase global awareness of antibiotic resistance. It also seeks to promote the correct usage of antibiotics across all fields in order to prevent further instances of antibiotic resistance. World Antibiotic Awareness Week has been held every November since 2015. For 2017, the Food and Agriculture Organization of the United Nations (FAO), the World Health Organization (WHO) and the World Organisation for Animal Health (OIE) together called for responsible use of antibiotics in humans and animals to reduce the emergence of antibiotic resistance. United Nations In 2016 the Secretary-General of the United Nations convened the Interagency Coordination Group (IACG) on Antimicrobial Resistance. The IACG worked with international organizations and experts in human, animal, and plant health to create a plan to fight antimicrobial resistance. Its report, released in April 2019, highlights the seriousness of antimicrobial resistance and the threat it poses to world health. It suggests five recommendations for member states to follow in order to tackle this increasing threat. The IACG recommendations are as follows: accelerate progress in countries; innovate to secure the future; collaborate for more effective action; invest for a sustainable response; and strengthen accountability and global governance. Mechanisms and organisms Bacteria The five main mechanisms by which bacteria exhibit resistance to antibiotics are: Drug inactivation or modification: for example, enzymatic deactivation of penicillin G in some penicillin-resistant bacteria through the production of β-lactamases. Drugs may also be chemically modified through the addition of functional groups by transferase enzymes; for example, acetylation, phosphorylation, or adenylation are common resistance mechanisms to aminoglycosides. Acetylation is the most widely used mechanism and can affect a number of drug classes. Alteration of target or binding site: for example, alteration of PBP—the binding target site of penicillins—in MRSA and other penicillin-resistant bacteria. Another protective mechanism found among bacterial species is ribosomal protection proteins. These proteins protect the bacterial cell from antibiotics that target the cell's ribosomes to inhibit protein synthesis. The mechanism involves the binding of the ribosomal protection proteins to the ribosomes of the bacterial cell, which in turn changes their conformational shape. This allows the ribosomes to continue synthesizing proteins essential to the cell while preventing antibiotics from binding to the ribosome to inhibit protein synthesis. 
Alteration of metabolic pathway: for example, some sulfonamide-resistant bacteria do not require para-aminobenzoic acid (PABA), an important precursor for the synthesis of folic acid and nucleic acids in bacteria inhibited by sulfonamides; instead, like mammalian cells, they turn to using preformed folic acid. Reduced drug accumulation: by decreasing drug permeability or increasing active efflux (pumping out) of the drugs across the cell surface. These efflux pumps within the cellular membrane of certain bacterial species are used to pump antibiotics out of the cell before they are able to do any damage. They are often activated by a specific substrate associated with an antibiotic, as in fluoroquinolone resistance. Ribosome splitting and recycling: for example, drug-mediated stalling of the ribosome by lincomycin and erythromycin is relieved by a heat shock protein found in Listeria monocytogenes, which is a homologue of HflX from other bacteria. Liberation of the ribosome from the drug allows further translation and consequent resistance to the drug. Several types of pathogen have developed resistance over time. The six pathogens causing the most deaths associated with resistance are Escherichia coli, Staphylococcus aureus, Klebsiella pneumoniae, Streptococcus pneumoniae, Acinetobacter baumannii, and Pseudomonas aeruginosa. They were responsible for 929,000 deaths attributable to resistance and 3.57 million deaths associated with resistance in 2019. Penicillinase-producing Neisseria gonorrhoeae developed resistance to penicillin in 1976. Another example is azithromycin-resistant Neisseria gonorrhoeae, which developed resistance to azithromycin in 2011. In gram-negative bacteria, plasmid-mediated resistance genes produce proteins that can bind to DNA gyrase, protecting it from the action of quinolones. Finally, mutations at key sites in DNA gyrase or topoisomerase IV can decrease their binding affinity to quinolones, decreasing the drug's effectiveness. Some bacteria are naturally resistant to certain antibiotics; for example, gram-negative bacteria are resistant to most β-lactam antibiotics due to the presence of β-lactamase. Antibiotic resistance can also be acquired as a result of either genetic mutation or horizontal gene transfer. Although mutations are rare, with spontaneous mutations in the pathogen genome occurring at a rate of about 1 in 10⁵ to 1 in 10⁸ per chromosomal replication, the fact that bacteria reproduce at a high rate allows for the effect to be significant. Given that lifespans and production of new generations can be on a timescale of mere hours, a new (de novo) mutation in a parent cell can quickly become an inherited mutation of widespread prevalence, resulting in the microevolution of a fully resistant colony. However, chromosomal mutations also confer a cost of fitness. For example, a ribosomal mutation may protect a bacterial cell by changing the binding site of an antibiotic but may result in a slower growth rate. Moreover, some adaptive mutations can propagate not only through inheritance but also through horizontal gene transfer. The most common mechanism of horizontal gene transfer is the transferring of plasmids carrying antibiotic resistance genes between bacteria of the same or different species via conjugation. 
However, bacteria can also acquire resistance through transformation, as when Streptococcus pneumoniae takes up naked fragments of extracellular DNA containing genes for resistance to streptomycin; through transduction, as in the bacteriophage-mediated transfer of tetracycline resistance genes between strains of S. pyogenes; or through gene transfer agents, which are particles produced by the host cell that resemble bacteriophage structures and are capable of transferring DNA. Antibiotic resistance can be introduced artificially into a microorganism through laboratory protocols, sometimes used as a selectable marker to examine the mechanisms of gene transfer or to identify individuals that absorbed a piece of DNA that included the resistance gene and another gene of interest. Recent findings show that large populations of bacteria are not necessary for antibiotic resistance to appear. Small populations of Escherichia coli in an antibiotic gradient can become resistant. Any environment that is heterogeneous with respect to nutrient and antibiotic gradients may facilitate antibiotic resistance in small bacterial populations. Researchers hypothesize that the mechanism of resistance evolution is based on four SNP mutations in the genome of E. coli produced by the gradient of antibiotic. In one study, which has implications for space microbiology, the non-pathogenic E. coli strain MG1655 was exposed to trace levels of the broad-spectrum antibiotic chloramphenicol under simulated microgravity (LSMMG, or Low Shear Modeled Microgravity) over 1,000 generations. The adapted strain acquired not only resistance to chloramphenicol but also cross-resistance to other antibiotics; in contrast, the same strain adapted over 1,000 generations under LSMMG without any antibiotic exposure did not acquire any such resistance. Thus, irrespective of where an antibiotic is used, its use would likely result in persistent resistance to that antibiotic, as well as cross-resistance to other antimicrobials. In recent years, the emergence and spread of β-lactamases called carbapenemases has become a major health crisis. One such carbapenemase is New Delhi metallo-beta-lactamase 1 (NDM-1), an enzyme that makes bacteria resistant to a broad range of beta-lactam antibiotics. The most common bacteria that make this enzyme are gram-negative, such as E. coli and Klebsiella pneumoniae, but the gene for NDM-1 can spread from one strain of bacteria to another by horizontal gene transfer. Viruses Specific antiviral drugs are used to treat some viral infections. These drugs prevent viruses from reproducing by inhibiting essential stages of the virus's replication cycle in infected cells. Antivirals are used to treat HIV, hepatitis B, hepatitis C, influenza, and herpes viruses including varicella zoster virus, cytomegalovirus and Epstein-Barr virus. With each virus, some strains have become resistant to the administered drugs. Antiviral drugs typically target key components of viral reproduction; for example, oseltamivir targets influenza neuraminidase, while guanosine analogs inhibit viral DNA polymerase. Resistance to antivirals is thus acquired through mutations in the genes that encode the protein targets of the drugs. Resistance to HIV antivirals is problematic, and even multi-drug resistant strains have evolved. 
One source of resistance is that many current HIV drugs, including NRTIs and NNRTIs, target reverse transcriptase; however, HIV-1 reverse transcriptase is highly error-prone, and thus mutations conferring resistance arise rapidly. Resistant strains of HIV emerge rapidly if only one antiviral drug is used. Using three or more drugs together, termed combination therapy, has helped to control this problem, but new drugs are needed because of the continuing emergence of drug-resistant HIV strains. Fungi Infections by fungi are a cause of high morbidity and mortality in immunocompromised persons, such as those with HIV/AIDS or tuberculosis, or those receiving chemotherapy. The fungi Candida, Cryptococcus neoformans and Aspergillus fumigatus cause most of these infections, and antifungal resistance occurs in all of them. Multidrug resistance in fungi is increasing because of the widespread use of antifungal drugs to treat infections in immunocompromised individuals. Of particular note, fluconazole-resistant Candida species have been highlighted as a growing problem by the CDC. More than 20 species of Candida can cause candidiasis, the most common of which is Candida albicans. Candida yeasts normally inhabit the skin and mucous membranes without causing infection. However, overgrowth of Candida can lead to candidiasis. Some Candida strains are becoming resistant to first-line and second-line antifungal agents such as azoles and echinocandins. Parasites The protozoan parasites that cause the diseases malaria, trypanosomiasis, toxoplasmosis, cryptosporidiosis and leishmaniasis are important human pathogens. Malarial parasites that are resistant to the drugs currently available to treat infections are common, and this has led to increased efforts to develop new drugs. Resistance to recently developed drugs such as artemisinin has also been reported. The problem of drug resistance in malaria has driven efforts to develop vaccines. Trypanosomes are parasitic protozoa that cause African trypanosomiasis and Chagas disease (American trypanosomiasis). There are no vaccines to prevent these infections, so drugs such as pentamidine and suramin, benznidazole and nifurtimox are used to treat them. These drugs are effective, but infections caused by resistant parasites have been reported. Leishmaniasis is caused by protozoa and is an important public health problem worldwide, especially in sub-tropical and tropical countries. Drug resistance has "become a major concern". History The 1950s to 1970s represented the golden age of antibiotic discovery, when numerous new classes of antibiotics were discovered to treat previously incurable diseases such as tuberculosis and syphilis. However, since that time the discovery of new classes of antibiotics has been almost nonexistent, a situation that is especially problematic considering the resilience bacteria have shown over time and the continued misuse and overuse of antibiotics in treatment. The phenomenon of antimicrobial resistance caused by overuse of antibiotics was predicted as early as 1945 by Alexander Fleming, who said "The time may come when penicillin can be bought by anyone in the shops. Then there is the danger that the ignorant man may easily under-dose himself and by exposing his microbes to nonlethal quantities of the drug make them resistant." 
Without the creation of new and stronger antibiotics, an era in which common infections and minor injuries can kill, and in which complex procedures such as surgery and chemotherapy become too risky, is a very real possibility. Antimicrobial resistance threatens the world as we know it, and can lead to epidemics of enormous proportions if preventive actions are not taken. Already, antimicrobial resistance leads to longer hospital stays, higher medical costs, and increased mortality. Society and culture Since the mid-1980s, pharmaceutical companies have invested in medications for cancer or chronic disease that have greater potential to make money and have "de-emphasized or dropped development of antibiotics". On 20 January 2016 at the World Economic Forum in Davos, Switzerland, more than "80 pharmaceutical and diagnostic companies" from around the world called for "transformational commercial models" at a global level to spur research and development on antibiotics and on the "enhanced use of diagnostic tests that can rapidly identify the infecting organism". Legal frameworks Some global health scholars have argued that a global, legal framework is needed to prevent and control antimicrobial resistance. For instance, binding global policies could be used to create antimicrobial use standards, regulate antibiotic marketing, and strengthen global surveillance systems. Ensuring compliance of involved parties is a challenge. Global antimicrobial resistance policies could take lessons from the environmental sector by adopting strategies that have made international environmental agreements successful in the past, such as: sanctions for non-compliance, assistance for implementation, majority-vote decision-making rules, an independent scientific panel, and specific commitments. United States For the United States 2016 budget, U.S. president Barack Obama proposed to nearly double the amount of federal funding to "combat and prevent" antibiotic resistance to more than $1.2 billion. Many international funding agencies like USAID, DFID, SIDA and the Bill & Melinda Gates Foundation have pledged money for developing strategies to counter antimicrobial resistance. On 27 March 2015, the White House released a comprehensive plan to address the increasing need for agencies to combat the rise of antibiotic-resistant bacteria. The Task Force for Combating Antibiotic-Resistant Bacteria developed the National Action Plan for Combating Antibiotic-Resistant Bacteria with the intent of providing a roadmap to guide the US in the antibiotic resistance challenge and with hopes of saving many lives. This plan outlines steps to be taken by the federal government over the next five years to prevent and contain outbreaks of antibiotic-resistant infections; maintain the efficacy of antibiotics already on the market; and help to develop future diagnostics, antibiotics, and vaccines. The Action Plan was developed around five goals, with focuses on strengthening health care, public health, veterinary medicine, agriculture, food safety and research, and manufacturing. 
These goals, as listed by the White House, are as follows: Slow the Emergence of Resistant Bacteria and Prevent the Spread of Resistant Infections; Strengthen National One-Health Surveillance Efforts to Combat Resistance; Advance Development and Use of Rapid and Innovative Diagnostic Tests for Identification and Characterization of Resistant Bacteria; Accelerate Basic and Applied Research and Development for New Antibiotics, Other Therapeutics, and Vaccines; and Improve International Collaboration and Capacities for Antibiotic Resistance Prevention, Surveillance, Control and Antibiotic Research and Development. The following are goals set to be met by 2020: establishment of antimicrobial stewardship programs within acute care hospital settings; reduction of inappropriate antibiotic prescription and use by at least 50% in outpatient settings and 20% in inpatient settings; establishment of State Antibiotic Resistance (AR) Prevention Programs in all 50 states; and elimination of the use of medically important antibiotics for growth promotion in food-producing animals. United Kingdom Public Health England reported that the total number of antibiotic-resistant infections in England rose by 9% from 55,812 in 2017 to 60,788 in 2018, but antibiotic consumption had fallen by 9% from 20.0 to 18.2 defined daily doses per 1,000 inhabitants per day between 2014 and 2018. Policies According to the World Health Organization, policymakers can help tackle resistance by strengthening resistance-tracking and laboratory capacity and by regulating and promoting the appropriate use of medicines. Policymakers and industry can help tackle resistance by fostering innovation and research and development of new tools, and by promoting cooperation and information sharing among all stakeholders. Further research Rapid viral testing Clinical investigations to rule out bacterial infections are often done for patients with pediatric acute respiratory infections. Currently it is unclear if rapid viral testing affects antibiotic use in children. Vaccines Microorganisms usually do not develop resistance to vaccines because vaccines reduce the spread of the infection and target the pathogen in multiple ways in the same host and possibly in different ways between different hosts. Furthermore, if the use of vaccines increases, there is evidence that antibiotic-resistant strains of pathogens will decrease; the need for antibiotics will naturally decrease as vaccines prevent infection before it occurs. However, there are well-documented cases of vaccine resistance, although these are usually much less of a problem than antimicrobial resistance. While theoretically promising, antistaphylococcal vaccines have shown limited efficacy, because of immunological variation between Staphylococcus species and the limited duration of effectiveness of the antibodies produced. Development and testing of more effective vaccines is underway. Two registrational trials have evaluated vaccine candidates in active immunization strategies against S. aureus infection. In a phase II trial, a bivalent vaccine of capsular proteins 5 and 8 was tested in 1,804 hemodialysis patients with a primary fistula or synthetic graft vascular access. At 40 weeks after vaccination a protective effect was seen against S. aureus bacteremia, but not at 54 weeks after vaccination. Based on these results, a second trial was conducted, which failed to show efficacy. Merck tested V710, a vaccine targeting IsdB, in a blinded randomized trial in patients undergoing median sternotomy. 
The trial was terminated after a higher rate of multiorgan system failure–related deaths was found in the V710 recipients. Vaccine recipients who developed S. aureus infection were five times more likely to die than control recipients who developed S. aureus infection. Numerous investigators have suggested that a multiple-antigen vaccine would be more effective, but a lack of biomarkers defining human protective immunity keeps these proposals in the logical but strictly hypothetical arena. Alternating therapy Alternating therapy is a proposed method in which two or three antibiotics are taken in rotation, rather than taking just one antibiotic, so that bacteria resistant to one antibiotic are killed when the next antibiotic is taken. Studies have found that this method reduces the rate at which antibiotic-resistant bacteria emerge in vitro relative to a single drug for the entire duration. Studies have found that bacteria that evolve antibiotic resistance towards one group of antibiotics may become more sensitive to others. This phenomenon can be used to select against resistant bacteria using an approach termed collateral sensitivity cycling, which has recently been found to be relevant in developing treatment strategies for chronic infections caused by Pseudomonas aeruginosa. Despite its promise, large-scale clinical and experimental studies have revealed only limited evidence of susceptibility to antibiotic cycling across various pathogens. Development of new drugs Since the discovery of antibiotics, research and development (R&D) efforts have provided new drugs in time to treat bacteria that became resistant to older antibiotics, but in the 2000s there has been concern that development has slowed enough that seriously ill people may run out of treatment options. Another concern is that doctors may become reluctant to perform routine surgeries because of the increased risk of harmful infection. Backup treatments can have serious side-effects; for example, treatment of multi-drug-resistant tuberculosis can cause deafness or psychological disability. The potential crisis at hand is the result of a marked decrease in industry R&D. Poor financial investment in antibiotic research has exacerbated the situation. The pharmaceutical industry has little incentive to invest in antibiotics because of the high risk and because the potential financial returns are less likely to cover the cost of development than for other pharmaceuticals. In 2011, Pfizer, one of the last major pharmaceutical companies developing new antibiotics, shut down its primary research effort, citing poor shareholder returns relative to drugs for chronic illnesses. However, small and medium-sized pharmaceutical companies are still active in antibiotic drug research. In particular, apart from classical synthetic chemistry methodologies, researchers have developed a combinatorial synthetic biology platform, operating at the single-cell level in a high-throughput screening manner, to diversify novel lanthipeptides. In the United States, drug companies and the administration of President Barack Obama had been proposing changing the standards by which the FDA approves antibiotics targeted at resistant organisms. On 18 September 2014, Obama signed an executive order to implement the recommendations proposed in a report by the President's Council of Advisors on Science and Technology (PCAST), which outlines strategies to streamline clinical trials and speed up the R&D of new antibiotics. 
Among the proposals: Create a 'robust, standing national clinical trials network for antibiotic testing', which would promptly enroll patients once they are identified as having dangerous bacterial infections. The network would allow multiple new agents from different companies to be tested simultaneously for safety and efficacy. Establish a 'Special Medical Use (SMU)' pathway for the FDA to approve new antimicrobial agents for use in limited patient populations, and shorten the approval timeline for new drugs so that patients with severe infections could benefit as quickly as possible. Provide economic incentives, especially for the development of new classes of antibiotics, to offset the steep R&D costs that deter the industry from developing antibiotics. Scientists have started using advanced computational approaches with supercomputers for the development of new antibiotic derivatives to deal with antimicrobial resistance. Biomaterials Using antibiotic-free alternatives in bone infection treatment may help decrease the use of antibiotics and thus antimicrobial resistance. The bone regeneration material bioactive glass S53P4 has been shown to effectively inhibit the growth of up to 50 clinically relevant bacteria, including MRSA and MRSE. Nanomaterials During the last few decades, copper and silver nanomaterials have demonstrated appealing features for the development of a new family of antimicrobial agents. Rediscovery of ancient treatments Similar to the situation in malaria therapy, where successful treatments based on ancient recipes have been found, there has already been some success in finding and testing ancient drugs and other treatments that are effective against AMR bacteria. Rapid diagnostics Distinguishing infections requiring antibiotics from self-limiting ones is clinically challenging. In order to guide appropriate use of antibiotics and prevent the evolution and spread of antimicrobial resistance, diagnostic tests that provide clinicians with timely, actionable results are needed. Acute febrile illness is a common reason for seeking medical care worldwide and a major cause of morbidity and mortality. In areas with decreasing malaria incidence, many febrile patients are inappropriately treated for malaria, and in the absence of a simple diagnostic test to identify alternative causes of fever, clinicians presume that a non-malarial febrile illness is most likely a bacterial infection, leading to inappropriate use of antibiotics. Multiple studies have shown that the use of malaria rapid diagnostic tests without reliable tools to distinguish other fever causes has resulted in increased antibiotic use. Antimicrobial susceptibility testing (AST) can help practitioners avoid prescribing unnecessary antibiotics in the style of precision medicine and help them prescribe effective antibiotics, but with the traditional approach it can take 12 to 48 hours. Rapid testing, made possible by molecular diagnostics innovations, is defined as "being feasible within an 8-h working shift". Progress has been slow due to a range of reasons, including cost and regulation. Optical techniques such as phase contrast microscopy in combination with single-cell analysis are another powerful method to monitor bacterial growth. In 2017, scientists from Sweden published a method that applies principles of microfluidics and cell tracking to monitor bacterial response to antibiotics in less than 30 minutes of overall manipulation time. 
Recently, this platform has been advanced by coupling a microfluidic chip with optical tweezing in order to isolate bacteria with an altered phenotype directly from the analytical matrix. Phage therapy Phage therapy is the therapeutic use of bacteriophages to treat pathogenic bacterial infections. Phage therapy has many potential applications in human medicine as well as dentistry, veterinary science, and agriculture. Phage therapy relies on the use of naturally occurring bacteriophages to infect and lyse bacteria at the site of infection in a host. Due to current advances in genetics and biotechnology, these bacteriophages can possibly be manufactured to treat specific infections. Phages can be bioengineered to target multidrug-resistant bacterial infections, and their use involves the added benefit of preventing the elimination of beneficial bacteria in the human body. Phages destroy bacterial cell walls and membranes through the use of lytic proteins, which kill bacteria by making many holes from the inside out. Bacteriophages can even digest the biofilm that many bacteria develop to protect themselves from antibiotics, allowing the phages to effectively infect and kill the bacteria. Bioengineering can play a role in creating successful bacteriophages. Understanding the mutual interactions and evolution of bacterial and phage populations in the environment of a human or animal body is essential for rational phage therapy. Bacteriophages are used against antibiotic-resistant bacteria in Georgia (at the George Eliava Institute) and at one institute in Wrocław, Poland. Bacteriophage cocktails are common drugs sold over the counter in pharmacies in eastern European countries. In Belgium, four patients with severe musculoskeletal infections received bacteriophage therapy with concomitant antibiotics. After a single course of phage therapy, no recurrence of infection occurred and no severe side-effects related to the therapy were detected. 
Antimicrobial resistance
The word aeon, also spelled eon (in American and Australian English), originally meant "life", "vital force" or "being", "generation" or "a period of time", though it tended to be translated as "age" in the sense of "ages", "forever", "timeless" or "for eternity". It is a Latin transliteration from the koine Greek word (ho aion), from the archaic (aiwon). In Homer it typically refers to life or lifespan. Its latest meaning is more or less similar to the Sanskrit word kalpa and Hebrew word olam. A cognate Latin word aevum or aeuum for "age" is present in words such as longevity and mediaeval. Although the term aeon may be used in reference to a period of a million years (especially in geology, cosmology and astronomy), its more common usage is for any long, indefinite period. Aeon can also refer to the four aeons on the geologic time scale that make up the Earth's history: the Hadean, Archean, Proterozoic, and the current aeon, the Phanerozoic. Astronomy and cosmology In astronomy an aeon is defined as a billion years (10⁹ years, abbreviated AE). Roger Penrose uses the word aeon to describe the period between successive and cyclic Big Bangs within the context of conformal cyclic cosmology. Philosophy and mysticism In Buddhism, an "aeon" is defined as 1,334,240,000 years, the life cycle of the earth. Plato used the word aeon to denote the eternal world of ideas, which he conceived was "behind" the perceived world, as demonstrated in his famous allegory of the cave. Christianity's idea of "eternal life" comes from the word for life, zoe, and a form of aeon, which could mean life in the next aeon, the Kingdom of God, or Heaven, just as much as immortality. According to the Christian doctrine of universal reconciliation, the Greek New Testament scriptures use the word "aeon" to mean a long period (perhaps 1000 years) and the word "aeonian" to mean "during a long period"; thus there was a time before the aeons, and the aeonian period is finite. After each man's mortal life ends, he is judged worthy of aeonian life or aeonian punishment. That is, after the period of the aeons, all punishment will cease and death is overcome and then God becomes the all in each one. This contrasts with the conventional Christian belief in eternal life and eternal punishment. Occultists of the Thelema and O.T.O. traditions sometimes speak of a "magical Aeon" that may last for far less time, perhaps as little as 2,000 years. Aeon may also be an archaic name for omnipotent beings, such as gods. Gnosticism In many Gnostic systems, the various emanations of God, who is also known by such names as the One, the Monad, Aion teleos ("The Broadest Aeon"), Bythos ("depth or profundity"), Proarkhe ("before the beginning"), the Arkhe ("the beginning"), Sophia (wisdom), and Christos (the Anointed One), are called Aeons. In the different systems these emanations are differently named, classified, and described, but the emanation theory itself is common to all forms of Gnosticism. In the Basilidian Gnosis they are called sonships (υἱότητες huiotetes; sing.: huiotes); according to Marcus, they are numbers and sounds; in Valentinianism they form male/female pairs called "syzygies" (from σύζυγοι syzygoi). See also Aion (deity) Kalpa (aeon) Saeculum – comparable Latin concept
Aeon
Sir Andrew John Wiles (born 11 April 1953) is an English mathematician and a Royal Society Research Professor at the University of Oxford, specializing in number theory. He is best known for proving Fermat's Last Theorem, for which he was awarded the 2016 Abel Prize and the 2017 Copley Medal by the Royal Society. He was appointed Knight Commander of the Order of the British Empire in 2000, and in 2018, was appointed as the first Regius Professor of Mathematics at Oxford. Wiles is also a 1997 MacArthur Fellow. Education and early life Wiles was born on 11 April 1953 in Cambridge, England, the son of Maurice Frank Wiles (1923–2005) and Patricia Wiles (née Mowll). From 1952 to 1955, his father worked as the chaplain at Ridley Hall, Cambridge, and later became the Regius Professor of Divinity at the University of Oxford. Wiles attended King's College School, Cambridge, and The Leys School, Cambridge. Wiles states that he came across Fermat's Last Theorem on his way home from school when he was 10 years old. He stopped at his local library, where he found a book, The Last Problem by Eric Temple Bell, about the theorem. Fascinated by the existence of a theorem that was so easy to state that he, a ten-year-old, could understand it, but that no one had proven, he decided to be the first person to prove it. However, he soon realised that his knowledge was too limited, so he abandoned his childhood dream until it was brought back to his attention at the age of 33 by Ken Ribet's 1986 proof of the epsilon conjecture, which Gerhard Frey had previously linked to Fermat's famous equation. Career and research In 1974, Wiles earned his bachelor's degree in mathematics at Merton College, Oxford. Wiles's graduate research was guided by John Coates, beginning in the summer of 1975. Together they worked on the arithmetic of elliptic curves with complex multiplication by the methods of Iwasawa theory. He further worked with Barry Mazur on the main conjecture of Iwasawa theory over the rational numbers, and soon afterward, he generalized this result to totally real fields. In 1980, Wiles earned a PhD while at Clare College, Cambridge. After a stay at the Institute for Advanced Study in Princeton, New Jersey, in 1981, Wiles became a Professor of Mathematics at Princeton University. In 1985–86, Wiles was a Guggenheim Fellow at the Institut des Hautes Études Scientifiques near Paris and at the École Normale Supérieure. From 1988 to 1990, Wiles was a Royal Society Research Professor at the University of Oxford, and then he returned to Princeton. From 1994 to 2009, Wiles was a Eugene Higgins Professor at Princeton. He rejoined Oxford in 2011 as Royal Society Research Professor. In May 2018, Wiles was appointed Regius Professor of Mathematics at Oxford, the first in the university's history. Proof of Fermat's Last Theorem Starting in mid-1986, based on the successive progress of Gerhard Frey, Jean-Pierre Serre and Ken Ribet over the previous few years, it became clear that Fermat's Last Theorem could be proven as a corollary of a limited form of the modularity theorem (unproven at the time and then known as the "Taniyama–Shimura–Weil conjecture"). The modularity theorem involved elliptic curves, Wiles's own specialist area. The conjecture was seen by contemporary mathematicians as important, but extraordinarily difficult or perhaps impossible to prove. 
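In outline (this is the standard summary of the Frey–Serre–Ribet reduction, not a quotation from Wiles's own papers), the logical chain runs as follows. Fermat's Last Theorem asserts that

\[
x^n + y^n = z^n
\]

has no solutions in positive integers \(x, y, z\) for any exponent \(n > 2\). Since the cases \(n = 3\) and \(n = 4\) were settled classically, it suffices to rule out solutions \(a^p + b^p = c^p\) for primes \(p \geq 5\). Any such solution would give rise to the associated Frey curve

\[
E_{a,b,c} : \; y^2 = x \, (x - a^p)(x + b^p),
\]

a semistable elliptic curve over the rationals which, by Ribet's proof of the epsilon conjecture, cannot be modular. Proving that every semistable elliptic curve over the rationals is modular, the limited form of the Taniyama–Shimura–Weil conjecture that Wiles set out to establish, therefore leaves no room for a counterexample and yields Fermat's Last Theorem. It was the difficulty of this modularity statement that contemporaries emphasised.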
For example, Wiles's ex-supervisor John Coates stated that it seemed "impossible to actually prove", and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely inaccessible", adding that "Andrew Wiles was probably one of the few people on earth who had the audacity to dream that you can actually go and prove [it]." Despite this, Wiles, with his from-childhood fascination with Fermat's Last Theorem, decided to undertake the challenge of proving the conjecture, at least to the extent needed for Frey's curve. He dedicated all of his research time to this problem for over six years in near-total secrecy, covering up his efforts by releasing prior work in small segments as separate papers and confiding only in his wife. In June 1993, he presented his proof to the public for the first time at a conference in Cambridge. In August 1993, it was discovered that the proof contained a flaw in one area. Wiles tried and failed for over a year to repair his proof. According to Wiles, the crucial idea for circumventing, rather than closing this area came to him on 19 September 1994, when he was on the verge of giving up. Together with his former student Richard Taylor, he published a second paper which circumvented the problem and thus completed the proof. Both papers were published in May 1995 in a dedicated issue of the Annals of Mathematics. Awards and honours Wiles's proof of Fermat's Last Theorem has stood up to the scrutiny of the world's other mathematical experts. Wiles was interviewed for an episode of the BBC documentary series Horizon about Fermat's Last Theorem. This was broadcast as an episode of the PBS science television series Nova with the title "The Proof". His work and life are also described in great detail in Simon Singh's popular book Fermat's Last Theorem. Wiles has been awarded a number of major prizes in mathematics and science: Junior Whitehead Prize of the London Mathematical Society (1988) Elected a Fellow of the Royal Society (FRS) in 1989 Elected member of the American Academy of Arts and Sciences (1994) Schock Prize (1995) Fermat Prize (1995) Wolf Prize in Mathematics (1995/6) Elected a Foreign Associate of the National Academy of Sciences (1996) NAS Award in Mathematics from the National Academy of Sciences (1996) Royal Medal (1996) Ostrowski Prize (1996) Cole Prize (1997) MacArthur Fellowship (1997) Wolfskehl Prize (1997) – see Paul Wolfskehl Elected member of the American Philosophical Society (1997) A silver plaque from the International Mathematical Union (1998) recognising his achievements, in place of the Fields Medal, which is restricted to those under 40 (Wiles was 41 when he proved the theorem in 1994) King Faisal Prize (1998) Clay Research Award (1999) Premio Pitagora (Croton, 2004) Shaw Prize (2005) The asteroid 9999 Wiles was named after Wiles in 1999. Knight Commander of the Order of the British Empire (2000) The building at the University of Oxford housing the Mathematical Institute is named after Wiles. 
Abel Prize (2016) Copley Medal (2017) Wiles's 1987 certificate of election to the Royal Society reads: References External links Profile from Oxford Profile from Princeton 1953 births Living people 20th-century mathematicians 21st-century mathematicians Abel Prize laureates Alumni of Clare College, Cambridge Alumni of King's College, Cambridge Alumni of Merton College, Oxford Clay Research Award recipients English mathematicians Fellows of Merton College, Oxford Fellows of the Royal Society Fermat's Last Theorem Foreign associates of the National Academy of Sciences Institute for Advanced Study visiting scholars Knights Commander of the Order of the British Empire MacArthur Fellows Members of the American Philosophical Society Members of the French Academy of Sciences Number theorists People educated at The Leys School People from Cambridge Princeton University faculty Recipients of the Copley Medal Regius Professors of Mathematics (University of Oxford) Rolf Schock Prize laureates Royal Medal winners Trustees of the Institute for Advanced Study Whitehead Prize winners Wolf Prize in Mathematics laureates
Andrew Wiles
Avionics (a blend of aviation and electronics) are the electronic systems used on aircraft, artificial satellites, and spacecraft. Avionic systems include communications, navigation, the display and management of multiple systems, and the hundreds of systems that are fitted to aircraft to perform individual functions. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform. History The term "avionics" was coined in 1949 by Philip J. Klass, senior editor at Aviation Week & Space Technology magazine as a portmanteau of "aviation electronics". Radio communication was first used in aircraft just prior to World War I. The first airborne radios were in zeppelins, but the military sparked development of light radio sets that could be carried by heavier-than-air craft, so that aerial reconnaissance biplanes could report their observations immediately in case they were shot down. The first experimental radio transmission from an airplane was conducted by the US Navy in August 1910. The first aircraft radios transmitted by radiotelegraphy, so they required two-seat aircraft with a second crewman to tap on a telegraph key to spell out messages by Morse code. During World War I, AM voice two way radio sets were made possible in 1917 by the development of the triode vacuum tube, which were simple enough that the pilot in a single seat aircraft could use it while flying. Radar, the central technology used today in aircraft navigation and air traffic control, was developed by several nations, mainly in secret, as an air defense system in the 1930s during the runup to World War II. Many modern avionics have their origins in World War II wartime developments. For example, autopilot systems that are commonplace today began as specialized systems to help bomber planes fly steadily enough to hit precision targets from high altitudes. Britain's 1940 decision to share its radar technology with its US ally, particularly the magnetron vacuum tube, in the famous Tizard Mission, significantly shortened the war. Modern avionics is a substantial portion of military aircraft spending. Aircraft like the F-15E and the now retired F-14 have roughly 20 percent of their budget spent on avionics. Most modern helicopters now have budget splits of 60/40 in favour of avionics. The civilian market has also seen a growth in cost of avionics. Flight control systems (fly-by-wire) and new navigation needs brought on by tighter airspaces, have pushed up development costs. The major change has been the recent boom in consumer flying. As more people begin to use planes as their primary method of transportation, more elaborate methods of controlling aircraft safely in these high restrictive airspaces have been invented. Modern avionics Avionics plays a heavy role in modernization initiatives like the Federal Aviation Administration's (FAA) Next Generation Air Transportation System project in the United States and the Single European Sky ATM Research (SESAR) initiative in Europe. 
The Joint Planning and Development Office put forth a roadmap for avionics in six areas: Published Routes and Procedures – Improved navigation and routing Negotiated Trajectories – Adding data communications to create preferred routes dynamically Delegated Separation – Enhanced situational awareness in the air and on the ground LowVisibility/CeilingApproach/Departure – Allowing operations with weather constraints with less ground infrastructure Surface Operations – To increase safety in approach and departure ATM Efficiencies – Improving the ATM process Market The Aircraft Electronics Association reports $1.73 billion avionics sales for the first three quarters of 2017 in business and general aviation, a 4.1% yearly improvement: 73.5% came from North America, forward-fit represented 42.3% while 57.7% were retrofits as the U.S. deadline of January 1, 2020 for mandatory ADS-B out approach. Aircraft avionics The cockpit of an aircraft is a typical location for avionic equipment, including control, monitoring, communication, navigation, weather, and anti-collision systems. The majority of aircraft power their avionics using 14- or 28‑volt DC electrical systems; however, larger, more sophisticated aircraft (such as airliners or military combat aircraft) have AC systems operating at 400 Hz, 115 volts AC. There are several major vendors of flight avionics, including Panasonic Avionics Corporation, Honeywell (which now owns Bendix/King), Universal Avionics Systems Corporation, Rockwell Collins (now Collins Aerospace), Thales Group, GE Aviation Systems, Garmin, Raytheon, Parker Hannifin, UTC Aerospace Systems (now Collins Aerospace), Selex ES (now Leonardo S.p.A.), Shadin Avionics, and Avidyne Corporation. International standards for avionics equipment are prepared by the Airlines Electronic Engineering Committee (AEEC) and published by ARINC. Communications Communications connect the flight deck to the ground and the flight deck to the passengers. On‑board communications are provided by public-address systems and aircraft intercoms. The VHF aviation communication system works on the airband of 118.000 MHz to 136.975 MHz. Each channel is spaced from the adjacent ones by 8.33 kHz in Europe, 25 kHz elsewhere. VHF is also used for line of sight communication such as aircraft-to-aircraft and aircraft-to-ATC. Amplitude modulation (AM) is used, and the conversation is performed in simplex mode. Aircraft communication can also take place using HF (especially for trans-oceanic flights) or satellite communication. Navigation Air navigation is the determination of position and direction on or above the surface of the Earth. Avionics can use satellite navigation systems (such as GPS and WAAS), inertial navigation system (INS), ground-based radio navigation systems (such as VOR or LORAN), or any combination thereof. Some navigation systems such as GPS calculate the position automatically and display it to the flight crew on moving map displays. Older ground-based Navigation systems such as VOR or LORAN requires a pilot or navigator to plot the intersection of signals on a paper map to determine an aircraft's location; modern systems calculate the position automatically and display it to the flight crew on moving map displays. Monitoring The first hints of glass cockpits emerged in the 1970s when flight-worthy cathode ray tube (CRT) screens began to replace electromechanical displays, gauges and instruments. A "glass" cockpit refers to the use of computer monitors instead of gauges and other analog displays. 
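The channel arithmetic behind the airband figures above is simple to check. The following sketch (Python; a naive raster count only) divides the 118.000–136.975 MHz band by the two channel spacings; note that the operational 8.33 kHz plan actually defines three named channels within each 25 kHz slot, i.e. 760 × 3 = 2,280 channels, which the plain grid count below only approximates:

```python
def channel_count(start_mhz=118.000, end_mhz=136.975, spacing_khz=25.0):
    """Count evenly spaced channel frequencies across the VHF airband."""
    span_khz = (end_mhz - start_mhz) * 1000.0
    return int(round(span_khz / spacing_khz)) + 1

print(channel_count(spacing_khz=25.0))      # 760 channels on the 25 kHz raster
print(channel_count(spacing_khz=25.0 / 3))  # 2278 grid points at 8.33 kHz spacing
```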
Aircraft were getting progressively more displays, dials and information dashboards that eventually competed for space and pilot attention. In the 1970s, the average aircraft had more than 100 cockpit instruments and controls. Glass cockpits started to come into being with the Gulfstream G‑IV private jet in 1985. One of the key challenges in glass cockpits is to balance how much control is automated and how much the pilot should do manually. Generally they try to automate flight operations while keeping the pilot constantly informed. Aircraft flight-control system Aircraft have means of automatically controlling flight. Autopilot was first invented by Lawrence Sperry during World War I to fly bomber planes steady enough to hit accurate targets from 25,000 feet. When it was first adopted by the U.S. military, a Honeywell engineer sat in the back seat with bolt cutters to disconnect the autopilot in case of emergency. Nowadays most commercial planes are equipped with aircraft flight control systems in order to reduce pilot error and workload at landing or takeoff. The first simple commercial auto-pilots were used to control heading and altitude and had limited authority on things like thrust and flight control surfaces. In helicopters, auto-stabilization was used in a similar way. The first systems were electromechanical. The advent of fly-by-wire and electro-actuated flight surfaces (rather than the traditional hydraulic) has increased safety. As with displays and instruments, critical devices that were electro-mechanical had a finite life. With safety critical systems, the software is very strictly tested. Fuel Systems Fuel Quantity Indication System (FQIS) monitors the amount of fuel aboard. Using various sensors, such as capacitance tubes, temperature sensors, densitometers & level sensors, the FQIS computer calculates the mass of fuel remaining on board. Fuel Control and Monitoring System (FCMS) reports fuel remaining on board in a similar manner, but, by controlling pumps & valves, also manages fuel transfers around various tanks. Refuelling control to upload to a certain total mass of fuel and distribute it automatically. Transfers during flight to the tanks that feed the engines. E.G. from fuselage to wing tanks Centre of gravity control transfers from the tail (Trim) tanks forward to the wings as fuel is expended Maintaining fuel in the wing tips (to help stop the wings bending due to lift in flight) & transferring to the main tanks after landing Controlling fuel jettison during an emergency to reduce the aircraft weight. Collision-avoidance systems To supplement air traffic control, most large transport aircraft and many smaller ones use a traffic alert and collision avoidance system (TCAS), which can detect the location of nearby aircraft, and provide instructions for avoiding a midair collision. Smaller aircraft may use simpler traffic alerting systems such as TPAS, which are passive (they do not actively interrogate the transponders of other aircraft) and do not provide advisories for conflict resolution. To help avoid controlled flight into terrain (CFIT), aircraft use systems such as ground-proximity warning systems (GPWS), which use radar altimeters as a key element. One of the major weaknesses of GPWS is the lack of "look-ahead" information, because it only provides altitude above terrain "look-down". In order to overcome this weakness, modern aircraft use a terrain awareness warning system (TAWS). 
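As a rough illustration of the FQIS computation described above, the sketch below (Python; the function name, tank volumes and density value are invented for the example) turns sensed per-tank volumes and a measured fuel density into a total fuel mass. A real FQIS additionally compensates for temperature, aircraft attitude, dielectric effects and failed sensors:

```python
def total_fuel_mass_kg(tank_volumes_l, density_kg_per_l):
    """Simplified fuel-quantity computation: mass = total sensed volume x density."""
    return sum(tank_volumes_l) * density_kg_per_l

# Two wing tanks and a centre tank, with Jet A-1 at a measured 0.804 kg/L.
print(total_fuel_mass_kg([5200.0, 5200.0, 8000.0], 0.804))  # about 14,794 kg
```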
Flight recorders Commercial aircraft cockpit data recorders, commonly known as "black boxes", store flight information and audio from the cockpit. They are often recovered from an aircraft after a crash to determine control settings and other parameters during the incident. Weather systems Weather systems such as weather radar (typically Arinc 708 on commercial aircraft) and lightning detectors are important for aircraft flying at night or in instrument meteorological conditions, where it is not possible for pilots to see the weather ahead. Heavy precipitation (as sensed by radar) or severe turbulence (as sensed by lightning activity) are both indications of strong convective activity and severe turbulence, and weather systems allow pilots to deviate around these areas. Lightning detectors like the Stormscope or Strikefinder have become inexpensive enough that they are practical for light aircraft. In addition to radar and lightning detection, observations and extended radar pictures (such as NEXRAD) are now available through satellite data connections, allowing pilots to see weather conditions far beyond the range of their own in-flight systems. Modern displays allow weather information to be integrated with moving maps, terrain, and traffic onto a single screen, greatly simplifying navigation. Modern weather systems also include wind shear and turbulence detection and terrain and traffic warning systems. In‑plane weather avionics are especially popular in Africa, India, and other countries where air-travel is a growing market, but ground support is not as well developed. Aircraft management systems There has been a progression towards centralized control of the multiple complex systems fitted to aircraft, including engine monitoring and management. Health and usage monitoring systems (HUMS) are integrated with aircraft management computers to give maintainers early warnings of parts that will need replacement. The integrated modular avionics concept proposes an integrated architecture with application software portable across an assembly of common hardware modules. It has been used in fourth generation jet fighters and the latest generation of airliners. Mission or tactical avionics Military aircraft have been designed either to deliver a weapon or to be the eyes and ears of other weapon systems. The vast array of sensors available to the military is used for whatever tactical means required. As with aircraft management, the bigger sensor platforms (like the E‑3D, JSTARS, ASTOR, Nimrod MRA4, Merlin HM Mk 1) have mission-management computers. Police and EMS aircraft also carry sophisticated tactical sensors. Military communications While aircraft communications provide the backbone for safe flight, the tactical systems are designed to withstand the rigors of the battle field. UHF, VHF Tactical (30–88 MHz) and SatCom systems combined with ECCM methods, and cryptography secure the communications. Data links such as Link 11, 16, 22 and BOWMAN, JTRS and even TETRA provide the means of transmitting data (such as images, targeting information etc.). Radar Airborne radar was one of the first tactical sensors. The benefit of altitude providing range has meant a significant focus on airborne radar technologies. Radars include airborne early warning (AEW), anti-submarine warfare (ASW), and even weather radar (Arinc 708) and ground tracking/proximity radar. The military uses radar in fast jets to help pilots fly at low levels. 
While the civil market has had weather radar for a while, there are strict rules about using it to navigate the aircraft. Sonar Dipping sonar fitted to a range of military helicopters allows the helicopter to protect shipping assets from submarines or surface threats. Maritime support aircraft can drop active and passive sonar devices (sonobuoys) and these are also used to determine the location of enemy submarines. Electro-optics Electro-optic systems include devices such as the head-up display (HUD), forward looking infrared (FLIR), infrared search and track and other passive infrared devices (Passive infrared sensor). These are all used to provide imagery and information to the flight crew. This imagery is used for everything from search and rescue to navigational aids and target acquisition. ESM/DAS Electronic support measures and defensive aids systems are used extensively to gather information about threats or possible threats. They can be used to launch devices (in some cases automatically) to counter direct threats against the aircraft. They are also used to determine the state of a threat and identify it. Aircraft networks The avionics systems in military, commercial and advanced models of civilian aircraft are interconnected using an avionics databus. Common avionics databus protocols, with their primary application, include: Aircraft Data Network (ADN): Ethernet derivative for Commercial Aircraft Avionics Full-Duplex Switched Ethernet (AFDX): Specific implementation of ARINC 664 (ADN) for Commercial Aircraft ARINC 429: Generic Medium-Speed Data Sharing for Private and Commercial Aircraft ARINC 664: See ADN above ARINC 629: Commercial Aircraft (Boeing 777) ARINC 708: Weather Radar for Commercial Aircraft ARINC 717: Flight Data Recorder for Commercial Aircraft ARINC 825: CAN bus for commercial aircraft (for example Boeing 787 and Airbus A350) Commercial Standard Digital Bus IEEE 1394b: Military Aircraft MIL-STD-1553: Military Aircraft MIL-STD-1760: Military Aircraft TTP – Time-Triggered Protocol: Boeing 787, Airbus A380, Fly-By-Wire Actuation Platforms from Parker Aerospace TTEthernet – Time-Triggered Ethernet: Orion spacecraft See also ACARS Acronyms and abbreviations in avionics ARINC Avionics software DO-178C Emergency locator beacon Emergency position-indicating radiobeacon station Flight recorder Integrated modular avionics Notes Further reading Avionics: Development and Implementation by Cary R. Spitzer (Hardcover – December 15, 2006) Principles of Avionics, 4th Edition by Albert Helfrick, Len Buckwalter, and Avionics Communications Inc. (Paperback – July 1, 2007) Avionics Training: Systems, Installation, and Troubleshooting by Len Buckwalter (Paperback – June 30, 2005) Avionics Made Simple, by Mouhamed Abdulla, Ph.D.; Jaroslav V. Svoboda, Ph.D. and Luis Rodrigues, Ph.D. (Coursepack – Dec. 2005 - ). External links Avionics in Commercial Aircraft Aircraft Electronics Association (AEA) Pilot's Guide to Avionics The Avionic Systems Standardisation Committee Space Shuttle Avionics Aviation Today Avionics magazine RAES Avionics homepage Aircraft instruments Spacecraft components Electronic engineering
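To give one of the buses listed above a concrete shape, here is a hedged sketch of packing a 32-bit ARINC 429 word (8-bit label, 2-bit SDI, 19-bit data field, 2-bit SSM, odd-parity bit). It is illustrative only: it skips the label bit-reversal used on the physical wire and the per-label data encodings (BNR, BCD, discrete) that the standard defines:

```python
def arinc429_word(label, sdi, data19, ssm):
    """Pack label/SDI/data/SSM into a 32-bit ARINC 429 word with odd parity."""
    word = ((label & 0xFF)                 # bits 1-8: label (conventionally quoted in octal)
            | ((sdi & 0x3) << 8)           # bits 9-10: source/destination identifier
            | ((data19 & 0x7FFFF) << 10)   # bits 11-29: data field
            | ((ssm & 0x3) << 29))         # bits 30-31: sign/status matrix
    parity = 1 if bin(word).count("1") % 2 == 0 else 0  # bit 32: make the 1-bit count odd
    return word | (parity << 31)

print(hex(arinc429_word(0o203, 0b00, 0x12345, 0b11)))
```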
Avionics
Asterix or The Adventures of Asterix ( or , ; "Asterix the Gaul") is a Bande dessinée comic book series about Gaulish warriors, who adventure around the world and fight the Roman Republic, with the aid of a magic potion, during the era of Julius Caesar in an ahistorical telling of the time after the Gallic Wars. The series first appeared in the Franco-Belgian comic magazine Pilote on 29 October 1959. It was written by René Goscinny and illustrated by Albert Uderzo until Goscinny's death in 1977. Uderzo then took over the writing until 2009, when he sold the rights to publishing company Hachette; he died in 2020. In 2013, a new team consisting of Jean-Yves Ferri (script) and Didier Conrad (artwork) took over. , 39 volumes have been released, with the most recent released in October 2021. Description Asterix comics usually start with the following introduction: The year is 50 BC. Gaul is entirely occupied by the Romans. Well, not entirely... One small village of indomitable Gauls still holds out against the invaders. And life is not easy for the Roman legionaries who garrison the fortified camps of Totorum, Aquarium, Laudanum and Compendium... The series follows the adventures of a village of Gauls as they resist Roman occupation in 50 BC. They do so using a magic potion, brewed by their druid Getafix (Panoramix in the French version), which temporarily gives the recipient superhuman strength. The protagonists, the title character Asterix and his friend Obelix, have various adventures. The "-ix" ending of both names (as well as all the other pseudo-Gaulish "-ix" names in the series) alludes to the "-rix" suffix (meaning "king") present in the names of many real Gaulish chieftains such as Vercingetorix, Orgetorix, and Dumnorix. In many of the stories, they travel to foreign countries, while other tales are set in and around their village. For much of the history of the series (Volumes 4 through 29), settings in Gaul and abroad alternated, with even-numbered volumes set abroad and odd-numbered volumes set in Gaul, mostly in the village. The Asterix series is one of the most popular Franco-Belgian comics in the world, with the series being translated into 111 languages and dialects . The success of the series has led to the adaptation of its books into 13 films: nine animated, and four live action (two of which, Asterix & Obelix: Mission Cleopatra and Asterix and Obelix vs. Caesar, were major box office successes in France). There have also been a number of games based on the characters, and a theme park near Paris, Parc Astérix. The very first French satellite, Astérix, launched in 1965, was named after the character. As of 2017, 370million copies of Asterix books had been sold worldwide, with co-creators René Goscinny and Albert Uderzo being France's best-selling authors abroad. History Prior to creating the Asterix series, Goscinny and Uderzo had previously had success with their series Oumpah-pah, which was published in Tintin magazine. Astérix was originally serialised in Pilote magazine, debuting in the first issue on 29 October 1959. In 1961 the first book was put together, titled Asterix the Gaul. From then on, books were released generally on a yearly basis. Their success was exponential; the first book sold 6,000 copies in its year of publication; a year later, the second sold 20,000. In 1963, the third sold 40,000; the fourth, released in 1964, sold 150,000. A year later, the fifth sold 300,000; 1966's Asterix and the Big Fight sold 400,000 upon initial publication. 
The ninth Asterix volume, when first released in 1967, sold 1.2 million copies in two days. Uderzo's first preliminary sketches portrayed Asterix as a huge and strong traditional Gaulish warrior. But Goscinny had a different picture in his mind, visualizing Asterix as a shrewd, compact warrior who would possess intelligence and wit more than raw strength. However, Uderzo felt that the downsized hero needed a strong but dim companion, to which Goscinny agreed. Hence, Obelix was born. Despite the growing popularity of Asterix with the readers, the financial backing for the publication Pilote ceased. Pilote was taken over by Georges Dargaud. When Goscinny died in 1977, Uderzo continued the series by popular demand of the readers, who implored him to continue. He continued to issue new volumes of the series, but on a less frequent basis. Many critics and fans of the series prefer the earlier collaborations with Goscinny. Uderzo created his own publishing company, Éditions Albert René, which published every album drawn and written by Uderzo alone since then. However, Dargaud, the initial publisher of the series, kept the publishing rights on the 24 first albums made by both Uderzo and Goscinny. In 1990, the Uderzo and Goscinny families decided to sue Dargaud to take over the rights. In 1998, after a long trial, Dargaud lost the rights to publish and sell the albums. Uderzo decided to sell these rights to Hachette instead of Albert-René, but the publishing rights on new albums were still owned by Albert Uderzo (40%), Sylvie Uderzo (20%) and Anne Goscinny (40%). In December 2008, Uderzo sold his stake to Hachette, which took over the company. In a letter published in the French newspaper Le Monde in 2009, Uderzo's daughter, Sylvie, attacked her father's decision to sell the family publishing firm and the rights to produce new Astérix adventures after his death. She said: ... the co-creator of Astérix, France's comic strip hero, has betrayed the Gaulish warrior to the modern-day Romans – the men of industry and finance. However, René Goscinny's daughter, Anne, also gave her agreement to the continuation of the series and sold her rights at the same time. She is reported to have said that "Asterix has already had two lives: one during my father's lifetime and one after it. Why not a third?". A few months later, Uderzo appointed three illustrators, who had been his assistants for many years, to continue the series. In 2011, Uderzo announced that a new Asterix album was due out in 2013, with Jean-Yves Ferri writing the story and Frédéric Mébarki drawing it. A year later, in 2012, the publisher Albert-René announced that Frédéric Mébarki had withdrawn from drawing the new album, due to the pressure he felt in following in the steps of Uderzo. Comic artist Didier Conrad was officially announced to take over drawing duties from Mébarki, with the due date of the new album in 2013 unchanged. In January 2015, after the murders of seven cartoonists at the satirical Paris weekly Charlie Hebdo, Astérix creator Albert Uderzo came out of retirement to draw two Astérix pictures honouring the memories of the victims. List of titles Numbers 1–24, 32 and 34 are by Goscinny and Uderzo. Numbers 25–31 and 33 are by Uderzo alone. Numbers 35–39 are by Jean-Yves Ferri and Didier Conrad. Years stated are for their initial album release. 
Asterix the Gaul (1961) Asterix and the Golden Sickle (1962) Asterix and the Goths (1963) Asterix the Gladiator (1964) Asterix and the Banquet (1965) Asterix and Cleopatra (1965) Asterix and the Big Fight (1966) Asterix in Britain (1966) Asterix and the Normans (1966) Asterix the Legionary (1967) Asterix and the Chieftain's Shield (1968) Asterix at the Olympic Games (1968) Asterix and the Cauldron (1969) Asterix in Spain (1969) Asterix and the Roman Agent (1970) Asterix in Switzerland (1970) The Mansions of the Gods (1971) Asterix and the Laurel Wreath (1972) Asterix and the Soothsayer (1972) Asterix in Corsica (1973) Asterix and Caesar's Gift (1974) Asterix and the Great Crossing (1975) Obelix and Co. (1976) Asterix in Belgium (1979) Asterix and the Great Divide (1980) Asterix and the Black Gold (1981) Asterix and Son (1983) Asterix and the Magic Carpet (1987) Asterix and the Secret Weapon (1991) Asterix and Obelix All at Sea (1996) Asterix and the Actress (2001) Asterix and the Class Act (2003) Asterix and the Falling Sky (2005) Asterix and Obelix's Birthday: The Golden Book (2009) Asterix and the Picts (2013) Asterix and the Missing Scroll (2015) Asterix and the Chariot Race (2017) Asterix and the Chieftain's Daughter (2019) Asterix and the Griffin (2021) Non-canonical volumes: Asterix Conquers Rome, to be the 23rd volume, before Obelix and Co. (1976) - comic How Obelix Fell into the Magic Potion When he was a Little Boy (1989) - special issue album The Twelve Tasks of Asterix (2016) - special issue album, illustrated text Uderzo Croqué par ses Amis - (Uderzo sketched by his friends) Tribute album by various artists (1996) Asterix Conquers Rome is a comics adaptation of the animated film The Twelve Tasks of Asterix. It was released in 1976 and was the 23rd volume to be published, but it has been rarely reprinted and is not considered to be canonical to the series. The only English translations ever to be published were in the Asterix Annual 1980 and never an English standalone volume. A picture-book version of the same story was published in English translation as The Twelve Tasks of Asterix by Hodder & Stoughton in 1978. In 1996, a tribute album in honour of Albert Uderzo was released titled "Uderzo Croqué par ses Amis", a volume containing 21 short stories with Uderzo in Ancient Gaul. This volume was published by Soleil Productions and has not been translated into English. In 2007, Éditions Albert René released a tribute volume titled Astérix et ses Amis, a 60-page volume of one-to-four-page short stories. It was a tribute to Albert Uderzo on his 80th birthday by 34 European cartoonists. The volume was translated into nine languages. , it has not been translated into English. In 2016, the French publisher Hachette, along with Anne Goscinny and Albert Uderzo decided to make the special issue album The XII Tasks of Asterix for the 40th anniversary of the film The Twelve Tasks of Asterix. There was no English edition. Synopsis and characters The main setting for the series is an unnamed coastal village, rumoured to be inspired by Erquy in Armorica (present-day Brittany), a province of Gaul (modern France), in the year 50 BC. Julius Caesar has conquered nearly all of Gaul for the Roman Empire during the Gallic Wars. The little Armorican village, however, has held out because the villagers can gain temporary superhuman strength by drinking a magic potion brewed by the local village druid, Getafix. His chief is Vitalstatistix. 
The main protagonist and hero of the village is Asterix, who, because of his shrewdness, is usually entrusted with the most important affairs of the village. He is aided in his adventures by his rather corpulent and slower thinking friend, Obelix, who, because he fell into the druid's cauldron of the potion as a baby, has permanent superhuman strength (because of this, Getafix steadfastly refuses to allow Obelix to drink the potion, as doing so would have a dangerous and unpredictable result, as shown in Asterix and Obelix All at Sea). Obelix is usually accompanied by Dogmatix, his little dog. (Except for Asterix and Obelix, the names of the characters change with the language. For example, Obelix's dog's name is "Dogmatix" in English, but "Idéfix" in the original French edition.) Asterix and Obelix (and sometimes other members of the village) go on various adventures both within the village and in far away lands. Places visited in the series include parts of Gaul (Lutetia, Corsica etc.), neighbouring nations (Belgium, Spain, Britain, Germany etc.), and far away lands (North America, Middle East, India etc.). The series employs science-fiction and fantasy elements in the more recent books; for instance, the use of extraterrestrials in Asterix and the Falling Sky and the city of Atlantis in Asterix and Obelix All at Sea. With rare exceptions, the ending of the albums usually shows a big banquet with the village's inhabitants gathering - the sole exception is the bard Cacofonix restrained and gagged to prevent him from singing (but in Asterix and the Normans the ironsmith Fulliautomatix was tied up). Mostly the banquets are held under the starry nights in the village, where roast boar is devoured and all (but one) are set about in merrymaking. However, there are a few exceptions, such as in Asterix and Cleopatra. Humour The humour encountered in the Asterix comics often centers around puns, caricatures, and tongue-in-cheek stereotypes of contemporary European nations and French regions. Much of the humour in the initial Asterix books was French-specific, which delayed the translation of the books into other languages for fear of losing the jokes and the spirit of the story. Some translations have actually added local humour: In the Italian translation, the Roman legionaries are made to speak in 20th-century Roman dialect, and Obelix's famous Ils sont fous ces Romains ("These Romans are crazy") is translated properly as Sono pazzi questi romani, humorously alluding to the Roman abbreviation SPQR. In another example: Hiccups are written onomatopoeically in French as hips, but in English as "hic", allowing Roman legionaries in more than one of the English translations to decline their hiccups absurdly in Latin (hic, haec, hoc). The newer albums share a more universal humour, both written and visual. Character names All the fictional characters in Asterix have names which are puns on their roles or personalities, and which follow certain patterns specific to nationality. Certain rules are followed (most of the time) such as Gauls (and their neighbours) having an "-ix" suffix for the men and ending in "-a" for the women; for example, Chief Vitalstatistix (so called due to his portly stature) and his wife Impedimenta (often at odds with the chief). The male Roman names end in "-us", echoing Latin nominative male singular form, as in Gluteus Maximus, a muscle-bound athlete whose name is literally the butt of the joke. 
Gothic names (present-day Germany) end in "-ic", after Gothic chiefs such as Alaric and Theoderic; for example Rhetoric the interpreter. Greek names end in "-os" or "-es"; for example, Thermos the restaurateur. British names end in "-ax" and are often puns on the taxation associated with the later United Kingdom; examples include Valuaddedtax the druid, and Selectivemploymentax the mercenary. Vikings names end with af, for example necaf or cenotaf. Other nationalities are treated to pidgin translations from their language, like Huevos y Bacon, a Spanish chieftain (whose name, meaning eggs and bacon, is often guidebook Spanish for tourists), or literary and other popular media references, like Dubbelosix (a sly reference to James Bond's codename "007"). Most of these jokes, and hence the names of the characters, are specific to the translation; for example, the druid named Getafix in English translation - "get a fix", referring to the character's role in dispensing the magic potion - is Panoramix in the original French and Miraculix in German. Even so, occasionally the wordplay has been preserved: Obelix's dog, known in the original French as Idéfix (from idée fixe, a "fixed idea" or obsession), is called Dogmatix in English, which not only renders the original meaning strikingly closely ("dogmatic") but in fact adds another layer of wordplay with the syllable "Dog-" at the beginning of the name. The name Asterix, French Astérix, comes from , meaning "asterisk", which is the typographical symbol * indicating a footnote, from the Greek word αστήρ (aster), meaning a "star". His name is usually left unchanged in translations, aside from accents and the use of local alphabets. For example, in Esperanto, Polish, Slovene, Latvian, and Turkish it is Asteriks (in Turkish he was first named Bücür meaning "shorty", but the name was then standardised). Two exceptions include Icelandic, in which he is known as Ástríkur ("Rich of love"), and Sinhala, where he is known as (Soora Pappa), which can be interpreted as "Hero". The name Obelix (Obélix) may refer to "obelisk", a stone column from ancient Egypt, but also to another typographical symbol, the obelisk or obelus (). For explanations of some of the other names, see List of Asterix characters. Ethnic stereotypes Many of the Asterix adventures take place in other countries away from their homeland in Gaul. In every album that takes place abroad, the characters meet (usually modern-day) stereotypes for each country, as seen by the French. Italics (Italians) are the inhabitants of Italy. In the adventures of Asterix, the term "Romans" is used by non-Italics to refer to all inhabitants of Italy, who at that time had extended their dominion over a large part of the Mediterranean basin. But as can be seen in Asterix and the Chariot Race, in the Italic peninsula this term is used only to the people from the capital, with many Italics preferring to identify themselves as Umbrians, Etruscans, Venetians, etc. Various topics from this country are explored, as in this example, Italian gastronomy (pasta, pizza, wine), art, famous people (Pavarotti, Berlusconi, Mona Lisa), and even the controversial issue of political corruption.Romans in general appear more similar to the historical Romans, than to modern-age Italians. 
Goths (Germans) are disciplined and militaristic, they are composed of many factions that fight amongst each other (which is a reference to Germany before Otto von Bismarck, and to East and West Germany after the Second World War), and they wear the Pickelhaube helmet common during the German Empire. In later appearances, the Goths tend to be more good-natured. Helvetians (Swiss) are neutral, eat fondue, and are obsessed with cleaning, accurate time-keeping, and banks. The Britons (English) are phlegmatic, and speak with early 20th-century aristocratic slang (similar to Bertie Wooster). They stop for tea every day (making it with hot water and a drop of milk until Asterix brings them actual tea leaves), drink lukewarm beer (Bitter), eat tasteless foods with mint sauce (Rosbif), and live in streets containing rows of identical houses. In Asterix and Obelix: God Save Britannia the Britons all wore woollen pullovers and Tam o' shanters. Hibernians (Irish) inhabit Hibernia, the Latin name of Ireland and they fight against the Romans alongside the Britons to defend the British Isles. Iberians (Spanish) are filled with pride and have rather choleric tempers. They produce olive oil, provide very slow aid for chariot problems on the Roman roads and (thanks to Asterix) adopt bullfighting as a tradition. When the Gauls visited North America in Asterix and the Great Crossing, Obelix punches one of the attacking Native Americans with a knockout blow. The warrior first hallucinates American-style emblematic eagles; the second time, he sees stars in the formation of the Stars and Stripes; the third time, he sees stars shaped like the United States Air Force roundel. Asterix's inspired idea for getting the attention of a nearby Viking ship (which could take them back to Gaul) is to hold up a torch; this refers to the Statue of Liberty (which was a gift from France). Corsicans are proud, patriotic, and easily aroused but lazy, making decisions by using pre-filled ballot boxes. They harbour vendettas against each other, and always take their siesta. Greeks are chauvinists and consider Romans, Gauls, and all others to be barbarians. They eat stuffed grape leaves (dolma), drink resinated wine (retsina), and are hospitable to tourists. Most seem to be related by blood, and often suggest some cousin appropriate for a job. Greek characters are often depicted in side profile, making them resemble figures from classical Greek vase paintings. Normans (Vikings) drink endlessly, they always use cream in their cuisine, they don't know what fear is (which they're trying to discover), and in their home territory (Scandinavia), the night lasts for 6 months.Their depiction in the albums is a mix of stereotypes of Swedish Vikings and the Norman French. Cimbres (Danes) are very similar to the Normans with the greatest difference being that the Gauls are unable to communicate with them. Their names end in "-sen", a common ending of surnames in Denmark and Norway akin to "-son". Belgians speak with a funny accent, snub the Gauls, and always eat sliced roots deep-fried in bear fat. They also tell Belgian jokes. Lusitanians (Portuguese) are short in stature and polite (Uderzo said all the Portuguese who he had met were like that). The Indians have elephant trainers, as well as gurus who can fast for weeks and levitate on magic carpets. They worship thirty-three million deities and consider cows as sacred. They also bathe in the Ganges river. 
Egyptians are short with prominent noses, endlessly engaged in building pyramids and palaces. Their favorite food is lentil soup and they sail feluccas along the banks of the Nile River. Persians (Iranians) produce carpets and staunchly refuse to mend foreign ones. They eat caviar, as well as roasted camel and the women wear burqas. Hittites (Turks), Sumerians, Akkadians, Assyrians, and Babylonians (the last four peoples: Iraqis) are perpetually at war with each other and attack strangers because they confuse them with their enemies, but they later apologize when they realize that the strangers are not their enemies. This is likely a criticism of the constant conflicts among the Middle Eastern peoples. The Jews are all depicted as Yemenite Jews, with dark skin, black eyes, and beards, a tribute to Marc Chagall, the famous painter whose painting of King David hangs at the Knesset (Israeli Parliament). Numidians, contrary to the Berber inhabitants of ancient Numidia (located in North Africa), are obviously Africans from sub-Saharan Africa. The names end in "-tha", similar to the historical king Jugurtha of Numidia. The Picts (Scots) wear a typical dress with a kilt (skirt), have the habit of drinking "malt water" (whisky) and throwing logs (caber tossing) as a popular sport and their names all start with "Mac-". Sarmatians (Ukrainians), inhabit the North Black Sea area, which represents present-day Ukraine. Their names end in "-ov", like many Ukrainian surnames. When the Gauls see foreigners speaking their foreign languages, these have different representations in the cartoon speech bubbles: Iberian: Same as Spanish, with inversion of exclamation marks ('¡') and question marks ("¿") Goth language: Gothic script (incomprehensible to the Gauls, except Getafix) Viking (Normans and Cimbres): "Ø" and "Å" instead of "O" and "A" (incomprehensible to the Gauls) Amerindian: Pictograms and sign language (generally incomprehensible to the Gauls) Egyptians and Kushites: Hieroglyphs with explanatory footnotes (incomprehensible to the Gauls) Greek: Straight letters, carved as if in stone Sarmatian: In their speech balloons, some letters (E, F, N, R ...) are written in a mirror-reversed form, which evokes the modern Cyrillic alphabet. Translations The various volumes have been translated into more than 100 languages and dialects. Besides the original French language, most albums are available in Bengali, Estonian, English, Czech, Dutch, German, Galician, Danish, Icelandic, Norwegian, Swedish, Finnish, Spanish, Catalan, Basque, Portuguese, Italian, Greek, Hungarian, Polish, Romanian, Turkish, Slovene, Bulgarian, Serbian, Croatian, Latvian, Welsh, as well as Latin. Selected albums have also been translated into languages such as Esperanto, Scottish Gaelic, Irish, Scots, Indonesian, Persian, Mandarin, Korean, Japanese, Bengali, Afrikaans, Arabic, Hindi, Hebrew, Frisian, Romansch, Vietnamese, Sinhala, Ancient Greek, and Luxembourgish. In Europe, several volumes were translated into a variety of regional languages and dialects, such as Alsatian, Breton, Chtimi (Picard), and Corsican in France; Bavarian, Swabian, and Low German in Germany; and Savo, Karelia, Rauma, and Helsinki slang dialects in Finland. Also, in Portugal, a special edition of the first volume, Asterix the Gaul, was translated into local language Mirandese. In Greece, a number of volumes have appeared in the Cretan Greek, Cypriot Greek, and Pontic Greek dialects. 
In the Italian version, while the Gauls speak standard Italian, the legionaries speak in the Romanesque dialect. In the former Yugoslavia, the "Forum" publishing house translated Corsican text in Asterix in Corsica into the Montenegrin dialect of Serbo-Croatian (today called Montenegrin). In the Netherlands, several volumes were translated into West Frisian, a Germanic language spoken in the province of Friesland; into Limburgish, a regional language spoken not only in Dutch Limburg but also in Belgian Limburg and North Rhine-Westphalia, Germany; and into Tweants, a dialect in the region of Twente in the eastern province of Overijssel. Hungarian-language books have been published in Yugoslavia for the Hungarian minority living in Serbia. Although not translated into a fully autonomous dialect, the books differ slightly from the language of the books issued in Hungary. In Sri Lanka, the cartoon series was adapted into Sinhala as Sura Pappa. Most volumes have been translated into Latin and Ancient Greek, with accompanying teachers' guides, as a way of teaching these ancient languages. English translation Before Asterix became famous, translations of some strips were published in British comics including Valiant, Ranger, and Look & Learn, under names Little Fred and Big Ed and Beric the Bold, set in Roman-occupied Britain. These were included in an exhibition on Goscinny's life and career, and Asterix, in London's Jewish Museum in 2018. In 1970 William Morrow published English translations in hardback of three Asterix albums for the American market. These were Asterix the Gaul, Asterix and Cleopatra and Asterix the Legionary. Lawrence Hughes in a letter to The New York Times stated, "Sales were modest, with the third title selling half the number of the first. I was publisher at the time, and Bill Cosby tried to buy film and television rights. When that fell through, we gave up the series." The first 33 Asterix albums were translated into English by Anthea Bell and Derek Hockridge (including the three volumes reprinted by William Morrow), who were widely praised for maintaining the spirit and humour of the original French versions. Hockridge died in 2013, so Bell translated books 34 to 36 by herself, before retiring in 2016 for health reasons. She died in 2018. Adriana Hunter is the present translator. US publisher Papercutz in December 2019 announced it would begin publishing "all-new more American translations" of the Asterix books, starting on 19 May 2020. The launch was postponed to 15 July 2020 as a result of the COVID-19 pandemic. The new translator is Joe Johnson (Dr. Edward Joseph Johnson), a Professor of French and Spanish at Clayton State University. Adaptations The series has been adapted into various media. There are 18 films, 15 board games, 40 video games, and 1 theme park. Films Deux Romains en Gaule, 1967 black and white television film, mixed media, live-action with Asterix and Obelix animated. Released on DVD in 2002. Asterix the Gaul, 1967, animated, based on the album Asterix the Gaul. Asterix and the Golden Sickle, 1967, animated, based upon the album Asterix and the Golden Sickle, incomplete and never released. Asterix and Cleopatra, 1968, animated, based on the album Asterix and Cleopatra. The Dogmatix Movie, 1973, animated, a unique story based on Dogmatix and his animal friends, Albert Uderzo created a comic version (consisting of eight comics, as the film is a combination of 8 different stories) of the never-released movie in 2003. 
The Twelve Tasks of Asterix, 1976, animated, a unique story not based on an existing comic. Asterix Versus Caesar, 1985, animated, based on both Asterix the Legionary and Asterix the Gladiator. Asterix in Britain, 1986, animated, based upon the album Asterix in Britain. Asterix and the Big Fight, 1989, animated, based on both Asterix and the Big Fight and Asterix and the Soothsayer. Asterix Conquers America, 1994, animated, loosely based upon the album Asterix and the Great Crossing. Asterix and Obelix vs. Caesar, 1999, live-action, based primarily upon Asterix the Gaul, Asterix and the Soothsayer, Asterix and the Goths, Asterix the Legionary, and Asterix the Gladiator. Asterix & Obelix: Mission Cleopatra, 2002, live-action, based upon the album Asterix and Cleopatra. Asterix and Obelix in Spain, 2004, live-action, based upon the album Asterix in Spain, incomplete and never released because of disagreement with the team behind the movie and the creator of the comics. Asterix and the Vikings, 2006, animated, loosely based upon the album Asterix and the Normans. Asterix at the Olympic Games, 2008, live-action, loosely based upon the album Asterix at the Olympic Games. Asterix and Obelix: God Save Britannia, 2012, live-action, loosely based upon the album Asterix in Britain and Asterix and the Normans. Asterix: The Mansions of the Gods, 2014, computer-animated, based upon the album The Mansions of the Gods and is the first animated Asterix movie in stereoscopic 3D. Asterix: The Secret of the Magic Potion, 2018, computer-animated, original story. Television series On 17 November, 2018, a 52 eleven-minute episode computer-animated series centred around Dogmatix was announced to be in production by Studio 58 and Futurikon for broadcast on France Télévisions in 2020. On 21 December, 2020, it was confirmed that Dogmatix and the Indomitables had been pushed back to fall 2021, with o2o Studio producing the animation. The show is distributed globally by LS Distribution. The series premiered on the Okoo streaming service on 2 July before beginning its linear broadcast on France 4 on 28 August 2021. On 3 March, 2021, it was announced that Asterix the Gaul is to star in a new Netflix animated series directed by Alain Chabat. The series will be adapted from one of the classic volumes, Asterix and the Big Fight, where the Romans, after being constantly embarrassed by Asterix and his village cohorts, organize a brawl between rival Gaulish chiefs and try to fix the result by kidnapping a druid along with his much-needed magic potion. The series will debut in 2023. The series will be CG-Animated. Games Many gamebooks, board games and video games are based upon the Asterix series. In particular, many video games were released by various computer game publishers. Theme park Parc Astérix, a theme park 22 miles north of Paris, based upon the series, was opened in 1989. It is one of the most visited sites in France, with around 1.6  million visitors per year. Influence in popular culture The first French satellite, which was launched in 1965, was named Astérix-1 in honour of Asterix. Asteroids 29401 Asterix and 29402 Obelix were also named in honour of the characters. Coincidentally, the word Asterix/Asterisk originates from the Greek for Little Star. During the campaign for Paris to host the 1992 Summer Olympics, Asterix appeared in many posters over the Eiffel Tower. The French company Belin introduced a series of Asterix crisps shaped in the forms of Roman shields, gourds, wild boar, and bones. 
In the UK in 1995, Asterix coins were presented free in every Nutella jar. In 1991, Asterix and Obelix appeared on the cover of Time for a special edition about France, art directed by Mirko Ilic. In a 2009 issue of the same magazine, Asterix is described as being seen by some as a symbol for France's independence and defiance of globalisation. Despite this, Asterix has made several promotional appearances for fast food chain McDonald's, including one advertisement which featured members of the village enjoying the traditional story-ending feast at a McDonald's restaurant. Version 4.0 of the operating system OpenBSD features a parody of an Asterix story. Action Comics Issue #579, published by DC Comics in 1986, written by Lofficier and Illustrated by Keith Giffen, featured a homage to Asterix where Superman and Jimmy Olsen are drawn back in time to a small village of indomitable Gauls. In 2005, the Mirror World Asterix exhibition was held in Brussels. The Belgian post office also released a set of stamps to coincide with the exhibition. A book was released to coincide with the exhibition, containing sections in French, Dutch and English. On 29 October 2009, the Google homepage of a great number of countries displayed a logo (called Google Doodle) commemorating 50 years of Asterix. Although they have since changed, the #2 and #3 heralds in the Society for Creative Anachronism's Kingdom of Ansteorra were the Asterisk and Obelisk Heralds. Asterix and Obelix were the official mascots of the 2017 Ice Hockey World Championships, jointly hosted by France and Germany. In 2019, France issued a commemorative €2 coin to celebrate the 60th anniversary of Asterix. The Royal Canadian Navy has a supply vessel named MV Asterix. A second Resolve-Class ship, to have been named MV Obelix, was cancelled. See also List of Asterix characters Bande dessinée English translations of Asterix List of Asterix games List of Asterix volumes Kajko i Kokosz Potion Roman Gaul, after Julius Caesar's conquest of 58–51 BC that consisted of five provinces Commentarii de Bello Gallico References Sources Astérix publications in Pilote BDoubliées Astérix albums Bedetheque External links Official site Asterix the Gaul at Don Markstein's Toonopedia, from the original on 6 April 2012. Asterix around the World – The many languages Alea Jacta Est (Asterix for grown-ups) Each Asterix book is examined in detail Les allusions culturelles dans Astérix - Cultural allusions The Asterix Annotations – album-by-album explanations of all the historical references and obscure in-jokes French comic strips Pilote titles Dargaud titles Alternate history comics Lagardère SCA franchises Satirical comics Comic franchises Fantasy comics Historical comics Humor comics Pirate comics 1959 comics debuts Fiction set in Roman Gaul Comics set in ancient Rome Comics set in France Brittany in fiction Comics set in the 1st century BC French comics adapted into films Comics adapted into animated films Comics adapted into animated series Comics adapted into video games 1959 establishments in France Works about rebels Works about rebellions Rebellions in fiction Gallia Lugdunensis Comics by Albert Uderzo Armorica
Asterix
In organic chemistry, hydrocarbons (compounds composed solely of carbon and hydrogen) are divided into two classes: aromatic compounds and aliphatic compounds (G. aleiphar, fat, oil). Aliphatic compounds can be saturated, like hexane, or unsaturated, like hexene and hexyne. Open-chain compounds, whether straight or branched, which contain no rings of any type, are always aliphatic. Cyclic compounds can be aliphatic if they are not aromatic. Structure Aliphatic compounds can be saturated, joined by single bonds (alkanes), or unsaturated, with double bonds (alkenes) or triple bonds (alkynes). Besides hydrogen, other elements can be bound to the carbon chain, the most common being oxygen, nitrogen, sulfur, and chlorine. The least complex aliphatic compound is methane (CH4). Properties Most aliphatic compounds are flammable, which allows the use of hydrocarbons as fuel, such as methane in Bunsen burners and as liquefied natural gas (LNG), and ethyne (acetylene) in welding. Examples of aliphatic compounds / non-aromatic The most important aliphatic compounds are: n-, iso- and cyclo-alkanes (saturated hydrocarbons), and n-, iso- and cyclo-alkenes and -alkynes (unsaturated hydrocarbons). Important examples of low-molecular-weight aliphatic compounds can be found in the list below (sorted by the number of carbon atoms): References Organic compounds
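The saturated/unsaturated distinction mentioned above can be read off a hydrocarbon's molecular formula with the standard degree-of-unsaturation formula, (2C + 2 − H) / 2; a minimal sketch applying it to the compounds named in this article:

```python
def degree_of_unsaturation(c, h):
    """Rings plus pi bonds for a hydrocarbon CcHh: 0 means saturated (alkane);
    each double bond or ring adds 1, each triple bond adds 2."""
    return (2 * c + 2 - h) // 2

print(degree_of_unsaturation(1, 4))   # methane CH4   -> 0 (saturated)
print(degree_of_unsaturation(6, 14))  # hexane C6H14  -> 0 (saturated)
print(degree_of_unsaturation(6, 12))  # hexene C6H12  -> 1 (one double bond)
print(degree_of_unsaturation(6, 10))  # hexyne C6H10  -> 2 (one triple bond)
print(degree_of_unsaturation(2, 2))   # ethyne C2H2   -> 2 (acetylene)
```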
Aliphatic compound
Abiotic stress is the negative impact of non-living factors on the living organisms in a specific environment. The non-living variable must influence the environment beyond its normal range of variation to adversely affect the population performance or individual physiology of the organism in a significant way. Whereas a biotic stress would include living disturbances such as fungi or harmful insects, abiotic stress factors, or stressors, are naturally occurring, often intangible and inanimate factors such as intense sunlight, temperature or wind that may cause harm to the plants and animals in the area affected. Abiotic stress is essentially unavoidable. Abiotic stress affects animals, but plants are especially dependent, if not solely dependent, on environmental factors, so it is particularly constraining. Abiotic stress is the most harmful factor concerning the growth and productivity of crops worldwide. Research has also shown that abiotic stressors are at their most harmful when they occur together, in combinations of abiotic stress factors. Examples Abiotic stress comes in many forms. The most common of the stressors are the easiest for people to identify, but there are many other, less recognizable abiotic stress factors which affect environments constantly. The most basic stressors include: High winds Extreme temperatures Drought Flood Other natural disasters, such as tornadoes and wildfires. Cold Heat Lesser-known stressors generally occur on a smaller scale. They include: poor edaphic conditions like rock content and pH levels, high radiation, compaction, contamination, and other, highly specific conditions like rapid rehydration during seed germination. Effects Abiotic stress, as a natural part of every ecosystem, will affect organisms in a variety of ways. Although these effects may be either beneficial or detrimental, the location of the area is crucial in determining the extent of the impact that abiotic stress will have. The higher the latitude of the area affected, the greater the impact of abiotic stress will be on that area. So, a taiga or boreal forest is at the mercy of whatever abiotic stress factors may come along, while tropical zones are much less susceptible to such stressors. Benefits One example of a situation where abiotic stress plays a constructive role in an ecosystem is in natural wildfires. While they can be a human safety hazard, it is productive for these ecosystems to burn out every once in a while so that new organisms can begin to grow and thrive. Even though it is healthy for an ecosystem, a wildfire can still be considered an abiotic stressor, because it puts an obvious stress on individual organisms within the area. Every tree that is scorched and each bird nest that is devoured is a sign of the abiotic stress. On the larger scale, though, natural wildfires are positive manifestations of abiotic stress. What also needs to be taken into account when looking for benefits of abiotic stress, is that one phenomenon may not affect an entire ecosystem in the same way. While a flood will kill most plants living low on the ground in a certain area, if there is rice there, it will thrive in the wet conditions. Another example of this is in phytoplankton and zooplankton. The same types of conditions are usually considered stressful for these two types of organisms. 
They act very similarly when exposed to ultraviolet light and most toxins, but at elevated temperatures the phytoplankton reacts negatively, while the thermophilic zooplankton reacts positively to the increase in temperature. The two may be living in the same environment, but an increase in temperature of the area would prove stressful only for one of the organisms. Lastly, abiotic stress has enabled species to grow, develop, and evolve, furthering natural selection as it picks out the weakest of a group of organisms. Both plants and animals have evolved mechanisms allowing them to survive extremes. Detriments The most obvious detriment concerning abiotic stress involves farming. One study has claimed that abiotic stress causes more crop loss than any other factor and that most major crops are reduced in their yield by more than 50% from their potential yield. Because abiotic stress is widely considered a detrimental effect, the research on this branch of the issue is extensive. For more information on the harmful effects of abiotic stress, see the sections below on plants and animals. In plants A plant's first line of defense against abiotic stress is in its roots. If the soil holding the plant is healthy and biologically diverse, the plant will have a higher chance of surviving stressful conditions. The plant responses to stress are dependent on the tissue or organ affected by the stress. For example, transcriptional responses to stress are tissue or cell specific in roots and are quite different depending on the stress involved. One of the primary responses to abiotic stress such as high salinity is the disruption of the Na+/K+ ratio in the cytoplasm of the plant cell. High concentrations of Na+, for example, can decrease the capacity for the plant to take up water and also alter enzyme and transporter functions. Evolved adaptations to efficiently restore cellular ion homeostasis have led to a wide variety of stress tolerant plants. Facilitation, or the positive interactions between different species of plants, is an intricate web of association in a natural environment. It is how plants work together. In areas of high stress, the level of facilitation is especially high as well. This could possibly be because the plants need a stronger network to survive in a harsher environment, so their interactions between species, such as cross-pollination or mutualistic actions, become more common to cope with the severity of their habitat. Plants also adapt very differently from one another, even from a plant living in the same area. When a group of different plant species was prompted by a variety of different stress signals, such as drought or cold, each plant responded uniquely. Hardly any of the responses were similar, even though the plants had become accustomed to exactly the same home environment. Serpentine soils (media with low concentrations of nutrients and high concentrations of heavy metals) can be a source of abiotic stress. Initially, the absorption of toxic metal ions is limited by cell membrane exclusion. Ions that are absorbed into tissues are sequestered in cell vacuoles. This sequestration mechanism is facilitated by proteins on the vacuole membrane. An example of plants that adapt to serpentine soil are metallophytes, or hyperaccumulators, which are known for their ability to absorb heavy metals via root-to-shoot translocation (the metals are moved into the shoots rather than accumulating in the roots).
They are also distinguished by their ability to tolerate and absorb toxic heavy metals. Chemical priming has been proposed to increase tolerance to abiotic stresses in crop plants. In this method, which is analogous to vaccination, stress-inducing chemical agents are introduced to the plant in brief doses so that the plant begins preparing defense mechanisms. Thus, when the abiotic stress occurs, the plant has already prepared defense mechanisms that can be activated faster and increase tolerance. Prior exposure to tolerable doses of biotic stresses, such as phloem-feeding insect infestation, has also been shown to increase tolerance to abiotic stresses in plants. Impact on Food Production Abiotic stress particularly affects crop plants in the agricultural industry, largely because these plants must constantly adjust their physiological mechanisms to the effects of climate change, such as cold, drought, soil salinity, heat, and toxins. Rice (Oryza sativa) is a classic example. Rice is a staple food throughout the world, especially in China and India. Rice plants experience different types of abiotic stresses, like drought and high salinity. These stress conditions have a negative impact on rice production. Genetic diversity has been studied among several rice varieties with different genotypes using molecular markers. Chickpea, considered one of the most significant foods consumed around the globe, experiences drought that affects its production. Wheat is one of the major crops most affected by drought, because a lack of water impairs plant development and causes the leaves to wither. Maize is affected by several factors, primarily high temperature and drought, which are responsible for changes in plant development and for the loss of maize crops, respectively. Drought affects not only the soybean plant itself but also agricultural production more broadly, since much of the world relies on soybeans as a source of protein. Salt stress in plants Soil salinization, the accumulation of water-soluble salts to levels that negatively impact plant production, is a global phenomenon affecting approximately 831 million hectares of land. More specifically, the phenomenon threatens 19.5% of the world's irrigated agricultural land and 2.1% of the world's non-irrigated (dry-land) agricultural lands. High soil salinity content can be harmful to plants because water-soluble salts can alter osmotic potential gradients and consequently inhibit many cellular functions. For example, high soil salinity content can inhibit the process of photosynthesis by limiting a plant's water uptake; high levels of water-soluble salts in the soil can decrease the osmotic potential of the soil and consequently decrease the difference in water potential between the soil and the plant's roots, thereby limiting electron flow from H2O to P680 in Photosystem II's reaction center. Over generations, many plants have mutated and built different mechanisms to counter salinity effects. A good combatant of salinity in plants is the hormone ethylene. Ethylene is known for regulating plant growth and development and dealing with stress conditions. Many central membrane proteins in plants, such as ETO2, ERS1 and EIN2, are used for ethylene signaling in many plant growth processes. Mutations in these proteins can lead to heightened salt sensitivity and can limit plant growth.
The effects of salinity have been studied on Arabidopsis plants that have mutated ERS1, ERS2, ETR1, ETR2 and EIN4 proteins. These proteins are used for ethylene signaling under certain stress conditions, such as salt stress, and the ethylene precursor ACC is used to suppress any sensitivity to the salt stress. Phosphate starvation in plants Phosphorus (P) is an essential macronutrient required for plant growth and development, but most of the world's soil is limited in this important plant nutrient. Plants can utilize P mainly in the form of soluble inorganic phosphate (Pi) but are subjected to the abiotic stress of P-limitation when there is not sufficient soluble PO4 available in the soil. Phosphorus forms insoluble complexes with Ca and Mg in alkaline soils and with Al and Fe in acidic soils, making it unavailable to plant roots. When there is limited bioavailable P in the soil, plants show an extensive abiotic stress phenotype, such as short primary roots and more lateral roots and root hairs (making more surface area available for Pi absorption), and the exudation of organic acids and phosphatase to release Pi from complex P-containing molecules and make it available for the plant's growing organs. It has been shown that PHR1, a MYB-related transcription factor, is a master regulator of the P-starvation response in plants. PHR1 has also been shown to regulate extensive remodeling of lipids and metabolites during phosphorus limitation stress. Drought stress Drought stress, defined as a naturally occurring water deficit, is one of the main causes of crop losses within the agricultural world. This is due to water's necessity in so many fundamental processes in plant growth. It has become especially important in recent years to find a way to combat drought stress. A decrease in precipitation and a subsequent increase in drought are extremely likely in the future due to global warming. Plants have come up with many mechanisms and adaptations to try to deal with drought stress. One of the leading ways that plants combat drought stress is by closing their stomata. A key hormone regulating stomatal opening and closing is abscisic acid (ABA). Once synthesized, ABA binds to receptors. This binding then affects the opening of ion channels, thereby decreasing turgor pressure in the stomata and causing them to close. A recent study by Gonzalez-Villagra et al. (2018) showed that ABA levels increased in drought-stressed plants. They showed that when plants were placed in a stressful situation they produced more ABA to try to conserve any water they had in their leaves. Another extremely important factor in dealing with drought stress and regulating the uptake and export of water is aquaporins (AQPs). AQPs are integral membrane proteins that make up channels. These channels' main job is the transport of water and other necessary solutes. AQPs are both transcriptionally and post-transcriptionally regulated by many different factors, such as ABA, GA3, pH and Ca2+, and the specific levels of AQPs in certain parts of the plant, such as roots or leaves, help to draw as much water into the plant as possible. By understanding both the mechanism of AQPs and the hormone ABA, scientists will be better able to produce drought-resistant plants in the future. One interesting thing that has been found in plants that are consistently exposed to drought is their ability to form a sort of "memory".
In a study by Tombesi et al., they found plants who had previously been exposed to drought were able to come up with a sort of strategy to minimize water loss and decrease water use. They found that plants who were exposed to drought conditions actually changed the way they regulated their stomata and what they called "hydraulic safety margin" so as to decrease the vulnerability of the plant. By changing the regulation of stomata and subsequently the transpiration, plants were able to function better in situations where the availability of water decreased. In animals For animals, the most stressful of all the abiotic stressors is heat. This is because many species are unable to regulate their internal body temperature. Even in the species that are able to regulate their own temperature, it is not always a completely accurate system. Temperature determines metabolic rates, heart rates, and other very important factors within the bodies of animals, so an extreme temperature change can easily distress the animal's body. Animals can respond to extreme heat, for example, through natural heat acclimation or by burrowing into the ground to find a cooler space. It is also possible to see in animals that a high genetic diversity is beneficial in providing resiliency against harsh abiotic stressors. This acts as a sort of stock room when a species is plagued by the perils of natural selection. A variety of galling insects are among the most specialized and diverse herbivores on the planet, and their extensive protections against abiotic stress factors have helped the insect in gaining that position of honor. In endangered species Biodiversity is determined by many things, and one of them is abiotic stress. If an environment is highly stressful, biodiversity tends to be low. If abiotic stress does not have a strong presence in an area, the biodiversity will be much higher. This idea leads into the understanding of how abiotic stress and endangered species are related. It has been observed through a variety of environments that as the level of abiotic stress increases, the number of species decreases. This means that species are more likely to become population threatened, endangered, and even extinct, when and where abiotic stress is especially harsh. See also Ecophysiology References Stress (biological and psychological) Biodiversity Habitat Agriculture Botany
Abiotic stress
Sir Arthur Stanley Eddington (28 December 1882 – 22 November 1944) was an English astronomer, physicist, and mathematician. He was also a philosopher of science and a populariser of science. The Eddington limit, the natural limit to the luminosity of stars, or the radiation generated by accretion onto a compact object, is named in his honour. Around 1920, he foreshadowed the discovery and mechanism of nuclear fusion processes in stars, in his paper "The Internal Constitution of the Stars". At that time, the source of stellar energy was a complete mystery; Eddington was the first to correctly speculate that the source was fusion of hydrogen into helium. Eddington wrote a number of articles that announced and explained Einstein's theory of general relativity to the English-speaking world. World War I had severed many lines of scientific communication, and new developments in German science were not well known in England. He also conducted an expedition to observe the solar eclipse of 29 May 1919 that provided one of the earliest confirmations of general relativity, and he became known for his popular expositions and interpretations of the theory. Early years Eddington was born 28 December 1882 in Kendal, Westmorland (now Cumbria), England, the son of Quaker parents, Arthur Henry Eddington, headmaster of the Quaker School, and Sarah Ann Shout. His father taught at a Quaker training college in Lancashire before moving to Kendal to become headmaster of Stramongate School. He died in the typhoid epidemic which swept England in 1884. His mother was left to bring up her two children with relatively little income. The family moved to Weston-super-Mare where at first Stanley (as his mother and sister always called Eddington) was educated at home before spending three years at a preparatory school. The family lived at a house called Varzin, 42 Walliscote Road, Weston-super-Mare. There is a commemorative plaque on the building explaining Sir Arthur's contribution to science. In 1893 Eddington entered Brynmelyn School. He proved to be a most capable scholar, particularly in mathematics and English literature. His performance earned him a scholarship to Owens College, Manchester (what was later to become the University of Manchester) in 1898, which he was able to attend, having turned 16 that year. He spent the first year in a general course, but turned to physics for the next three years. Eddington was greatly influenced by his physics and mathematics teachers, Arthur Schuster and Horace Lamb. At Manchester, Eddington lived at Dalton Hall, where he came under the lasting influence of the Quaker mathematician J. W. Graham. His progress was rapid, winning him several scholarships and he graduated with a BSc in physics with First Class Honours in 1902. Based on his performance at Owens College, he was awarded a scholarship to Trinity College, Cambridge, in 1902. His tutor at Cambridge was Robert Alfred Herman and in 1904 Eddington became the first ever second-year student to be placed as Senior Wrangler. After receiving his M.A. in 1905, he began research on thermionic emission in the Cavendish Laboratory. This did not go well, and meanwhile he spent time teaching mathematics to first year engineering students. This hiatus was brief. Through a recommendation by E. T. 
Whittaker, his senior colleague at Trinity College, he secured a position at the Royal Observatory in Greenwich where he was to embark on his career in astronomy, a career whose seeds had been sown even as a young child when he would often "try to count the stars". Astronomy In January 1906, Eddington was nominated to the post of chief assistant to the Astronomer Royal at the Royal Greenwich Observatory. He left Cambridge for Greenwich the following month. He was put to work on a detailed analysis of the parallax of 433 Eros on photographic plates that had started in 1900. He developed a new statistical method based on the apparent drift of two background stars, winning him the Smith's Prize in 1907. The prize won him a fellowship of Trinity College, Cambridge. In December 1912 George Darwin, son of Charles Darwin, died suddenly and Eddington was promoted to his chair as the Plumian Professor of Astronomy and Experimental Philosophy in early 1913. Later that year, Robert Ball, holder of the theoretical Lowndean chair, also died, and Eddington was named the director of the entire Cambridge Observatory the next year. In May 1914 he was elected a fellow of the Royal Society: he was awarded the Royal Medal in 1928 and delivered the Bakerian Lecture in 1926. Eddington also investigated the interior of stars through theory, and developed the first true understanding of stellar processes. He began this in 1916 with investigations of possible physical explanations for Cepheid variable stars. He began by extending Karl Schwarzschild's earlier work on radiation pressure in Emden polytropic models. These models treated a star as a sphere of gas held up against gravity by internal thermal pressure, and one of Eddington's chief additions was to show that radiation pressure was necessary to prevent collapse of the sphere. He developed his model despite knowingly lacking firm foundations for understanding opacity and energy generation in the stellar interior. However, his results allowed for calculation of temperature, density and pressure at all points inside a star (thermodynamic anisotropy), and Eddington argued that his theory was so useful for further astrophysical investigation that it should be retained despite not being based on completely accepted physics. James Jeans contributed the important suggestion that stellar matter would certainly be ionized, but that was the end of any collaboration between the pair, who became famous for their lively debates. Eddington defended his method by pointing to the utility of his results, particularly his important mass–luminosity relation. This had the unexpected result of showing that virtually all stars, including giants and dwarfs, behaved as ideal gases. In the process of developing his stellar models, he sought to overturn current thinking about the sources of stellar energy. Jeans and others defended the Kelvin–Helmholtz mechanism, which was based on classical mechanics, while Eddington speculated broadly about the qualitative and quantitative consequences of possible proton–electron annihilation and nuclear fusion processes. Around 1920, he anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper "The Internal Constitution of the Stars". At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc².
This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even the fact that stars are largely composed of hydrogen (see metallicity), had not yet been discovered. Eddington's paper, based on knowledge at the time, reasoned that: The leading theory of stellar energy, the contraction hypothesis, should cause stars' rotation to visibly speed up due to conservation of angular momentum. But observations of Cepheid variable stars showed this was not happening. The only other known plausible source of energy was conversion of matter to energy; Einstein had shown some years earlier that a small amount of matter was equivalent to a large amount of energy. Francis Aston had also recently shown that the mass of a helium atom was about 0.8% less than the mass of the four hydrogen atoms which would, combined, form a helium atom, suggesting that if such a combination could happen, it would release considerable energy as a byproduct. If a star contained just 5% of fusible hydrogen, it would suffice to explain how stars got their energy. (We now know that most "ordinary" stars contain far more than 5% hydrogen.) Further elements might also be fused, and other scientists had speculated that stars were the "crucible" in which light elements combined to create heavy elements, but without more accurate measurements of their atomic masses nothing more could be said at the time. All of these speculations were proven correct in the following decades. With these assumptions, he demonstrated that the interior temperature of stars must be millions of degrees. In 1924, he discovered the mass–luminosity relation for stars (see Lecchini in ). Despite some disagreement, Eddington's models were eventually accepted as a powerful tool for further investigation, particularly in issues of stellar evolution. The confirmation of his estimated stellar diameters by Michelson in 1920 proved crucial in convincing astronomers unused to Eddington's intuitive, exploratory style. Eddington's theory appeared in mature form in 1926 as The Internal Constitution of the Stars, which became an important text for training an entire generation of astrophysicists. Eddington's work in astrophysics in the late 1920s and the 1930s continued his work in stellar structure, and precipitated further clashes with Jeans and Edward Arthur Milne. An important topic was the extension of his models to take advantage of developments in quantum physics, including the use of degeneracy physics in describing dwarf stars. Dispute with Chandrasekhar on existence of black holes The topic of extension of his models precipitated his dispute with Subrahmanyan Chandrasekhar, who was then a student at Cambridge. Chandrasekhar's work presaged the discovery of black holes, which at the time seemed so absurdly non-physical that Eddington refused to believe that Chandrasekhar's purely mathematical derivation had consequences for the real world. Eddington was wrong and his motivation is controversial. Chandrasekhar's narrative of this incident, in which his work is harshly rejected, portrays Eddington as rather cruel and dogmatic. It is not clear if his actions were anything to do with Chandra's race as his treatment of many other notable scientists such as E.A Milne and James Jeans was no less scathing. Chandra benefited from his friendship with Eddington. It was Eddington and Milne who put up Chandra's name for the fellowship for the Royal Society which Chandra obtained. 
An FRS meant he was at the Cambridge high-table with all the luminaries and a very comfortable endowment for research. Eddington's criticism seems to have been based partly on a suspicion that a purely mathematical derivation from relativity theory was not enough to explain the seemingly daunting physical paradoxes that were inherent to degenerate stars, but to have "raised irrelevant objections" in addition, as Thanu Padmanabhan puts it. Relativity During World War I, Eddington was secretary of the Royal Astronomical Society, which meant he was the first to receive a series of letters and papers from Willem de Sitter regarding Einstein's theory of general relativity. Eddington was fortunate in being not only one of the few astronomers with the mathematical skills to understand general relativity, but owing to his internationalist and pacifist views inspired by his Quaker religious beliefs, one of the few at the time who was still interested in pursuing a theory developed by a German physicist. He quickly became the chief supporter and expositor of relativity in Britain. He and Astronomer Royal Frank Watson Dyson organized two expeditions to observe a solar eclipse in 1919 to make the first empirical test of Einstein's theory: the measurement of the deflection of light by the sun's gravitational field. In fact, Dyson's argument for the indispensability of Eddington's expertise in this test was what prevented Eddington from eventually having to enter military service. When conscription was introduced in Britain on 2 March 1916, Eddington intended to apply for an exemption as a conscientious objector. Cambridge University authorities instead requested and were granted an exemption on the ground of Eddington's work being of national interest. In 1918, this was appealed against by the Ministry of National Service. Before the appeal tribunal in June, Eddington claimed conscientious objector status, which was not recognized and would have ended his exemption in August 1918. A further two hearings took place in June and July, respectively. Eddington's personal statement at the June hearing about his objection to war based on religious grounds is on record. The Astronomer Royal, Sir Frank Dyson, supported Eddington at the July hearing with a written statement, emphasising Eddington's essential role in the solar eclipse expedition to Príncipe in May 1919. Eddington made clear his willingness to serve in the Friends' Ambulance Unit, under the jurisdiction of the British Red Cross, or as a harvest labourer. However, the tribunal's decision to grant a further twelve months' exemption from military service was on condition of Eddington continuing his astronomy work, in particular in preparation for the Príncipe expedition. The war ended before the end of his exemption. After the war, Eddington travelled to the island of Príncipe off the west coast of Africa to watch the solar eclipse of 29 May 1919. During the eclipse, he took pictures of the stars (several stars in the Hyades cluster include Kappa Tauri of the constellation Taurus) in the region around the Sun. According to the theory of general relativity, stars with light rays that passed near the Sun would appear to have been slightly shifted because their light had been curved by its gravitational field. This effect is noticeable only during eclipses, since otherwise the Sun's brightness obscures the affected stars. Eddington showed that Newtonian gravitation could be interpreted to predict half the shift predicted by Einstein. 
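To put numbers on that comparison, the short calculation below is a sketch added here for illustration (it is not from the article): it uses the standard general-relativistic grazing deflection formula 4GM/(c^2 R) with rounded values for the Sun's mass and radius, giving roughly 1.75 arcseconds for Einstein's prediction and half that for the Newtonian value Eddington cited.

```python
# Illustrative calculation (standard textbook formula, not taken from the article):
# light grazing the Sun's limb is deflected by about 4GM/(c^2 R) in general
# relativity, and by half that amount in the Newtonian treatment.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg (rounded)
R_SUN = 6.957e8    # solar radius, m (rounded)
RAD_TO_ARCSEC = 206265.0

deflection_gr = 4 * G * M_SUN / (c**2 * R_SUN)  # radians
deflection_newtonian = deflection_gr / 2        # half the relativistic value

print(f"General relativity: {deflection_gr * RAD_TO_ARCSEC:.2f} arcsec")        # ~1.75
print(f"Newtonian estimate: {deflection_newtonian * RAD_TO_ARCSEC:.2f} arcsec")  # ~0.88
```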
Eddington's observations published the next year allegedly confirmed Einstein's theory, and were hailed at the time as evidence of general relativity over the Newtonian model. The news was reported in newspapers all over the world as a major story. Afterward, Eddington embarked on a campaign to popularize relativity and the expedition as landmarks both in scientific development and international scientific relations. It has been claimed that Eddington's observations were of poor quality, and he had unjustly discounted simultaneous observations at Sobral, Brazil, which appeared closer to the Newtonian model, but a 1979 re-analysis with modern measuring equipment and contemporary software validated Eddington's results and conclusions. The quality of the 1919 results was indeed poor compared to later observations, but was sufficient to persuade contemporary astronomers. The rejection of the results from the Brazil expedition was due to a defect in the telescopes used which, again, was completely accepted and well understood by contemporary astronomers. Throughout this period, Eddington lectured on relativity, and was particularly well known for his ability to explain the concepts in lay as well as scientific terms. He collected many of these into the Mathematical Theory of Relativity in 1923, which Albert Einstein suggested was "the finest presentation of the subject in any language." He was an early advocate of Einstein's general relativity, and an interesting anecdote well illustrates his humour and personal intellectual investment: Ludwik Silberstein, a physicist who thought of himself as an expert on relativity, approached Eddington at the Royal Society's meeting of 6 November 1919, at which Eddington had defended Einstein's relativity with his Brazil-Príncipe solar eclipse calculations. With some degree of scepticism, Silberstein ruefully charged Arthur with claiming to be one of three men who actually understood the theory (Silberstein, of course, was including himself and Einstein as the other two). When Eddington refrained from replying, he insisted Arthur not be "so shy", whereupon Eddington replied, "Oh, no! I was wondering who the third one might be!" Cosmology Eddington was also heavily involved with the development of the first generation of general relativistic cosmological models. He had been investigating the instability of the Einstein universe when he learned of both Lemaître's 1927 paper postulating an expanding or contracting universe and Hubble's work on the recession of the spiral nebulae. He felt the cosmological constant must have played the crucial role in the universe's evolution from an Einsteinian steady state to its current expanding state, and most of his cosmological investigations focused on the constant's significance and characteristics. In The Mathematical Theory of Relativity, Eddington interpreted the cosmological constant to mean that the universe is "self-gauging". Fundamental theory and the Eddington number During the 1920s until his death, Eddington increasingly concentrated on what he called "fundamental theory", which was intended to be a unification of quantum theory, relativity, cosmology, and gravitation. At first he progressed along "traditional" lines, but turned increasingly to an almost numerological analysis of the dimensionless ratios of fundamental constants. His basic approach was to combine several fundamental constants in order to produce a dimensionless number. In many cases these would result in numbers close to 10^40, its square, or its square root.
He was convinced that the mass of the proton and the charge of the electron were a "natural and complete specification for constructing a Universe" and that their values were not accidental. One of the discoverers of quantum mechanics, Paul Dirac, also pursued this line of investigation, which has become known as the Dirac large numbers hypothesis. A somewhat damaging statement in his defence of these concepts involved the fine-structure constant, α. At the time it was measured to be very close to 1/136, and he argued that the value should in fact be exactly 1/136 for epistemological reasons. Later measurements placed the value much closer to 1/137, at which point he switched his line of reasoning to argue that one more should be added to the degrees of freedom, so that the value should in fact be exactly 1/137, the Eddington number. Wags at the time started calling him "Arthur Adding-one". This change of stance detracted from Eddington's credibility in the physics community. The current measured value is estimated at 1/137.035 999 074(44). Eddington believed he had identified an algebraic basis for fundamental physics, which he termed "E-numbers" (representing a certain group – a Clifford algebra). These in effect incorporated spacetime into a higher-dimensional structure. While his theory has long been neglected by the general physics community, similar algebraic notions underlie many modern attempts at a grand unified theory. Moreover, Eddington's emphasis on the values of the fundamental constants, and specifically upon dimensionless numbers derived from them, is nowadays a central concern of physics. In particular, he predicted the number of hydrogen atoms in the Universe to be 136 × 2^256 ≈ 1.57 × 10^79, or equivalently half of the total number of particles (protons plus electrons). He did not complete this line of research before his death in 1944; his book Fundamental Theory was published posthumously in 1948. Eddington number for cycling Eddington is credited with devising a measure of a cyclist's long-distance riding achievements. The Eddington number in the context of cycling is defined as the maximum number E such that the cyclist has cycled at least E miles on at least E days. For example, an Eddington number of 70 would imply that the cyclist has cycled at least 70 miles in a day on at least 70 occasions. Achieving a high Eddington number is difficult since moving from, say, 70 to 75 will (probably) require more than five new long-distance rides, since any rides shorter than 75 miles will no longer be included in the reckoning. Eddington's own lifetime E-number was 84. The Eddington number for cycling involves units of both distance and time. The significance of E is tied to its units. For example, in cycling an E of 62 with distance measured in miles means a cyclist has covered 62 miles at least 62 times. The distance 62 miles is equivalent to 100 kilometers. However, an E of 62 when distance is measured in miles may not be equivalent to an E of 100 when distance is measured in kilometres. A cyclist with an E of 100 measured in kilometres would have achieved 100 or more rides of at least 100 kilometers. While the distances 100 kilometers and 62 miles are equivalent, an E of 100 kilometers would require 38 more rides of that length than an E of 62 miles. The Eddington number for cycling is analogous to the h-index that quantifies both the actual scientific productivity and the apparent scientific impact of a scientist.
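As a concrete illustration of the definition above, the cycling Eddington number can be computed directly from a list of daily ride distances. The sketch below is not from the article; the function name and the sample distances are made up for the example.

```python
# Minimal sketch: compute a cyclist's Eddington number E, i.e. the largest E
# such that at least E rides were each of at least E distance units.
def eddington_number(distances):
    rides = sorted(distances, reverse=True)
    e = 0
    for i, d in enumerate(rides, start=1):
        if d >= i:
            e = i   # the i longest rides are all of at least i units
        else:
            break   # rides are sorted descending, so no larger E is possible
    return e

# Hypothetical daily distances in miles:
daily_miles = [5, 30, 72, 81, 15, 64, 70, 90, 66, 45]
print(eddington_number(daily_miles))  # 9: nine rides of at least 9 miles, but not ten of 10+
```

Because the definition couples a count of days with a distance threshold, re-running the same ride history with distances converted to kilometres will generally give a different E, which is the unit dependence the passage above points out.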
Philosophy Idealism Eddington wrote in his book The Nature of the Physical World that "The stuff of the world is mind-stuff." The idealist conclusion was not integral to his epistemology but was based on two main arguments. The first derives directly from current physical theory. Briefly, mechanical theories of the ether and of the behaviour of fundamental particles have been discarded in both relativity and quantum physics. From this, Eddington inferred that a materialistic metaphysics was outmoded and that, in consequence, since the disjunction of materialism or idealism are assumed to be exhaustive, an idealistic metaphysics is required. The second, and more interesting argument, was based on Eddington's epistemology, and may be regarded as consisting of two parts. First, all we know of the objective world is its structure, and the structure of the objective world is precisely mirrored in our own consciousness. We therefore have no reason to doubt that the objective world too is "mind-stuff". Dualistic metaphysics, then, cannot be evidentially supported. But, second, not only can we not know that the objective world is nonmentalistic, we also cannot intelligibly suppose that it could be material. To conceive of a dualism entails attributing material properties to the objective world. However, this presupposes that we could observe that the objective world has material properties. But this is absurd, for whatever is observed must ultimately be the content of our own consciousness, and consequently, nonmaterial. Ian Barbour, in his book Issues in Science and Religion (1966), p. 133, cites Eddington's The Nature of the Physical World (1928) for a text that argues the Heisenberg Uncertainty Principles provides a scientific basis for "the defense of the idea of human freedom" and his Science and the Unseen World (1929) for support of philosophical idealism "the thesis that reality is basically mental". Charles De Koninck points out that Eddington believed in objective reality existing apart from our minds, but was using the phrase "mind-stuff" to highlight the inherent intelligibility of the world: that our minds and the physical world are made of the same "stuff" and that our minds are the inescapable connection to the world. As De Koninck quotes Eddington, Indeterminism Against Albert Einstein and others who advocated determinism, indeterminism—championed by Eddington—says that a physical object has an ontologically undetermined component that is not due to the epistemological limitations of physicists' understanding. The uncertainty principle in quantum mechanics, then, would not necessarily be due to hidden variables but to an indeterminism in nature itself. Popular and philosophical writings Eddington wrote a parody of The Rubaiyat of Omar Khayyam, recounting his 1919 solar eclipse experiment. It contained the following quatrain: During the 1920s and 30s, Eddington gave numerous lectures, interviews, and radio broadcasts on relativity, in addition to his textbook The Mathematical Theory of Relativity, and later, quantum mechanics. Many of these were gathered into books, including The Nature of the Physical World and New Pathways in Science. His use of literary allusions and humour helped make these difficult subjects more accessible. Eddington's books and lectures were immensely popular with the public, not only because of his clear exposition, but also for his willingness to discuss the philosophical and religious implications of the new physics. 
He argued for a deeply rooted philosophical harmony between scientific investigation and religious mysticism, and also that the positivist nature of relativity and quantum physics provided new room for personal religious experience and free will. Unlike many other spiritual scientists, he rejected the idea that science could provide proof of religious propositions. He is sometimes misunderstood as having promoted the infinite monkey theorem in his 1928 book The Nature of the Physical World, with the phrase "If an army of monkeys were strumming on typewriters, they might write all the books in the British Museum". It is clear from the context that Eddington is not suggesting that the probability of this happening is worthy of serious consideration. On the contrary, it was a rhetorical illustration of the fact that below certain levels of probability, the term improbable is functionally equivalent to impossible. His popular writings made him a household name in Great Britain between the world wars. Death Eddington died of cancer in the Evelyn Nursing Home, Cambridge, on 22 November 1944. He was unmarried. His body was cremated at Cambridge Crematorium (Cambridgeshire) on 27 November 1944; the cremated remains were buried in the grave of his mother in the Ascension Parish Burial Ground in Cambridge. Cambridge University's North West Cambridge Development has been named "Eddington" in his honour. The actor Paul Eddington was a relative, mentioning in his autobiography (in light of his own weakness in mathematics) "what I then felt to be the misfortune" of being related to "one of the foremost physicists in the world". Obituaries Obituary 1 by Henry Norris Russell, Astrophysical Journal 101 (1943–46) 133 Obituary 2 by A. Vibert Douglas, Journal of the Royal Astronomical Society of Canada, 39 (1943–46) 1 Obituary 3 by Harold Spencer Jones and E. T. Whittaker, Monthly Notices of the Royal Astronomical Society 105 (1943–46) 68 Obituary 4 by Herbert Dingle, The Observatory 66 (1943–46) 1 The Times, Thursday, 23 November 1944; pg. 7; Issue 49998; col D: Obituary (unsigned) – Image of cutting available at Honours Awards Smith's Prize (1907) Bruce Medal of Astronomical Society of the Pacific (1924) Henry Draper Medal of the National Academy of Sciences (1924) Gold Medal of the Royal Astronomical Society (1924) Foreign membership of the Royal Netherlands Academy of Arts and Sciences (1926) Prix Jules Janssen of the Société astronomique de France (French Astronomical Society) (1928) Royal Medal of the Royal Society (1928) Knighthood (1930) Order of Merit (1938) Hon. 
Freeman of Kendal, 1930 Named after him Lunar crater Eddington asteroid 2761 Eddington Royal Astronomical Society's Eddington Medal Eddington mission, now cancelled Eddington Tower, halls of residence at the University of Essex Eddington Astronomical Society, an amateur society based in his hometown of Kendal Eddington, a house (group of students, used for in-school sports matches) of Kirkbie Kendal School Eddington, new suburb of North West Cambridge, opened in 2017 Service Gave the Swarthmore Lecture in 1929 Chairman of the National Peace Council 1941–1943 President of the International Astronomical Union; of the Physical Society, 1930–32; of the Royal Astronomical Society, 1921–23 Romanes Lecturer, 1922 Gifford Lecturer, 1927 In popular culture Eddington is a central figure in the short story "The Mathematician's Nightmare: The Vision of Professor Squarepunt" by Bertrand Russell, a work featured in The Mathematical Magpie by Clifton Fadiman. He was portrayed by David Tennant in the television film Einstein and Eddington, a co-production of the BBC and HBO, broadcast in the United Kingdom on Saturday, 22 November 2008, on BBC2. His thoughts on humour and religious experience were quoted in the adventure game The Witness, a production of the Thelka, Inc., released on 26 January 2016. Time placed him on the cover on 16 April 1934. Publications 1914. Stellar Movements and the Structure of the Universe. London: Macmillan. 1918. Report on the relativity theory of gravitation. London, Fleetway Press, Ltd. 1920. Space, Time and Gravitation: An Outline of the General Relativity Theory. Cambridge University Press. 1923, 1952. The Mathematical Theory of Relativity. Cambridge University Press. 1925. The Domain of Physical Science. 2005 reprint: 1926. Stars and Atoms. Oxford: British Association. 1926. The Internal Constitution of Stars. Cambridge University Press. 1928. The Nature of the Physical World. MacMillan. 1935 replica edition: , University of Michigan 1981 edition: (1926–27 Gifford lectures) 1929. Science and the Unseen World. US Macmillan, UK Allen & Unwin. 1980 Reprint Arden Library . 2004 US reprint – Whitefish, Montana : Kessinger Publications: . 2007 UK reprint London, Allen & Unwin (Swarthmore Lecture), with a new foreword by George Ellis. 1930. Why I Believe in God: Science and Religion, as a Scientist Sees It. Arrow/scrollable preview. 1933. The Expanding Universe: Astronomy's 'Great Debate', 1900–1931. Cambridge University Press. 1935. New Pathways in Science. Cambridge University Press. 1936. Relativity Theory of Protons and Electrons. Cambridge Univ. Press. 1939. Philosophy of Physical Science. Cambridge University Press. (1938 Tarner lectures at Cambridge) 1946. Fundamental Theory. Cambridge University Press. 
See also Astronomy Chandrasekhar limit Eddington luminosity (also called the Eddington limit) Gravitational lens Outline of astronomy Stellar nucleosynthesis Timeline of stellar astronomy List of astronomers Science Arrow of time Classical unified field theories Degenerate matter Dimensionless physical constant Dirac large numbers hypothesis (also called the Eddington–Dirac number) Eddington number Introduction to quantum mechanics Luminiferous aether Parameterized post-Newtonian formalism Special relativity Theory of everything (also called "final theory" or "ultimate theory") Timeline of gravitational physics and relativity List of experiments People List of science and religion scholars Other Infinite monkey theorem Numerology Ontic structural realism References Further reading Durham, Ian T., "Eddington & Uncertainty". Physics in Perspective (September – December). Arxiv, History of Physics Lecchini, Stefano, "How Dwarfs Became Giants. The Discovery of the Mass–Luminosity Relation" Bern Studies in the History and Philosophy of Science, pp. 224. (2007) Stanley, Matthew. "An Expedition to Heal the Wounds of War: The 1919 Eclipse Expedition and Eddington as Quaker Adventurer." Isis 94 (2003): 57–89. Stanley, Matthew. "So Simple a Thing as a Star: Jeans, Eddington, and the Growth of Astrophysical Phenomenology" in British Journal for the History of Science, 2007, 40: 53–82. External links Trinity College Chapel Arthur Stanley Eddington (1882–1944). University of St Andrews, Scotland. Quotations by Arthur Eddington Arthur Stanley Eddington The Bruce Medalists. Russell, Henry Norris, "Review of The Internal Constitution of the Stars by A.S. Eddington". Ap.J. 67, 83 (1928). Experiments of Sobral and Príncipe repeated in the space project in proceeding in fórum astronomical. Biography and bibliography of Bruce medalists: Arthur Stanley Eddington Eddington books: The Nature of the Physical World, The Philosophy of Physical Science, Relativity Theory of Protons and Electrons, and Fundamental Theory 1882 births 1944 deaths Alumni of Trinity College, Cambridge Alumni of the Victoria University of Manchester British anti–World War I activists British astrophysicists British conscientious objectors British Christian pacifists Corresponding Members of the Russian Academy of Sciences (1917–1925) Corresponding Members of the USSR Academy of Sciences British Quakers 20th-century British astronomers Fellows of Trinity College, Cambridge Fellows of the Royal Astronomical Society Fellows of the Royal Society Foreign associates of the National Academy of Sciences Knights Bachelor Members of the Order of Merit Members of the Royal Netherlands Academy of Arts and Sciences People from Kendal Presidents of the Physical Society Presidents of the Royal Astronomical Society Recipients of the Bruce Medal Recipients of the Gold Medal of the Royal Astronomical Society British relativity theorists Royal Medal winners Senior Wranglers 20th-century British physicists
Arthur Eddington
Ayahuasca is a South American (pan-Amazonian) psychoactive brew used both socially and as ceremonial spiritual medicine among the indigenous peoples of the Amazon basin. It is a psychedelic and entheogenic mixed drink brew commonly made out of the Banisteriopsis caapi vine, the Psychotria viridis shrub or a substitute, and possibly other ingredients. A chemically similar preparation, sometimes called "pharmahuasca", can be prepared using N,N-Dimethyltryptamine (DMT) and a pharmaceutical monoamine oxidase inhibitor (MAOI), such as isocarboxazid. B. caapi contains several alkaloids that act as MAOIs, which are required for DMT to be orally active. Ayahuasca is prepared in a tea that, when consumed, causes an altered state of consciousness or "high", including visual hallucinations and altered perceptions of reality. The other required ingredient is a plant that contains the primary psychoactive, DMT. This is usually the shrub Psychotria viridis, but Diplopterys cabrerana may be used as a substitute. Other plant ingredients often or occasionally used in the production of ayahuasca include Justicia pectoralis, one of the Brugmansia (especially Brugmansia insignis and Brugmansia versicolor, or a hybrid breed) or Datura species, and mapacho (Nicotiana rustica). Nomenclature Ayahuasca is known by many names throughout Northern South America and Brazil. Ayahuasca is the hispanicized (traditional) spelling of a word in the Quechuan languages, which are spoken in the Andean states of Ecuador, Bolivia, Peru, and Colombia—speakers of Quechuan languages who use the modern Alvarado orthography spell it ayawaska. This word refers both to the liana Banisteriopsis caapi, and to the brew prepared from it. In the Quechua languages, aya means "spirit, soul", or "corpse, dead body", and waska means "rope" or "woody vine", "liana". The word ayahuasca has been variously translated as "liana of the soul", "liana of the dead", and "spirit liana". It is also referred to as "la purge" due to the belief that it cures the soul, offering a deep introspective journey that allows the user to examine their emotions and ways of thinking. In Brazil, the brew and the liana are informally called either caapi or cipó; the latter is the Portuguese word for liana (or woody climbing vine). In the União do Vegetal of Brazil, an organised spiritual tradition in which people drink ayahuasca, the brew is prepared exclusively from B. caapi and Psychotria viridis. Adherents of União do Vegetal call this brew hoasca or vegetal; Brazilian Yawanawa call the brew "uní". The Achuar people and Shuar people of Ecuador and Peru call it natem, while the Sharanahua peoples of Peru call it shori. History Evidence of ayahuasca use dates back 1,000 years, as demonstrated by a bundle containing the residue of ayahuasca ingredients and various other preserved shamanic substances in a cave in southwestern Bolivia, discovered in 2010. In the 16th century, Christian missionaries from Spain first encountered indigenous western Amazonian basin South Americans using ayahuasca; their earliest reports described it as "the work of the devil". In the 20th century, the active chemical constituent of B. caapi was named telepathine, but it was found to be identical to a chemical already isolated from Peganum harmala and was given the name harmine. Beat writer William S. 
Burroughs read a paper by Richard Evans Schultes on the subject and while traveling through South America in the early 1950s sought out ayahuasca in the hopes that it could relieve or cure opiate addiction (see The Yage Letters). Ayahuasca became more widely known when the McKenna brothers published their experience in the Amazon in True Hallucinations. Dennis McKenna later studied pharmacology, botany, and chemistry of ayahuasca and oo-koo-he, which became the subject of his master's thesis. Richard Evans Schultes allowed Claudio Naranjo to make a special journey by canoe up the Amazon River to study ayahuasca with the South American Indians. He brought back samples of the beverage and published the first scientific description of the effects of its active alkaloids. In Brazil, a number of modern religious movements based on the use of ayahuasca have emerged, the most famous being Santo Daime and the União do Vegetal (or UDV), usually in an animistic context that may be shamanistic or, more often (as with Santo Daime and the UDV), integrated with Christianity. Both Santo Daime and União do Vegetal now have members and churches throughout the world. Similarly, the US and Europe have started to see new religious groups develop in relation to increased ayahuasca use. Some Westerners have teamed up with shamans in the Amazon forest regions, forming ayahuasca healing retreats that claim to be able to cure mental and physical illness and allow communication with the spirit world. In recent years, the brew has been popularized by Wade Davis (One River), English novelist Martin Goodman in I Was Carlos Castaneda, Chilean novelist Isabel Allende, writer Kira Salak, author Jeremy Narby (The Cosmic Serpent), author Jay Griffiths (Wild: An Elemental Journey), American novelist Steven Peck, radio personality Robin Quivers, and writer Paul Theroux (Figures in a Landscape: People and Places). Preparation Sections of Banisteriopsis caapi vine are macerated and boiled alone or with leaves from any of a number of other plants, including Psychotria viridis (chacruna), Diplopterys cabrerana (also known as chaliponga and chacropanga), and Mimosa tenuiflora, among other ingredients which can vary greatly from one shaman to the next. The resulting brew may contain the powerful psychedelic drug DMT and MAO inhibiting harmala alkaloids, which are necessary to make the DMT orally active. The traditional making of ayahuasca follows a ritual process that requires the user to pick the lower Chacruna leaf at sunrise, then say a prayer. The vine must be "cleaned meticulously with wooden spoons" and pounded "with wooden mallets until it's fibre." Brews can also be made with plants that do not contain DMT, Psychotria viridis being replaced by plants such as Justicia pectoralis, Brugmansia, or sacred tobacco, also known as mapacho (Nicotiana rustica), or sometimes left out with no replacement. This brew varies radically from one batch to the next, both in potency and psychoactive effect, based mainly on the skill of the shaman or brewer, as well as other admixtures sometimes added and the intent of the ceremony. Natural variations in plant alkaloid content and profiles also affect the final concentration of alkaloids in the brew, and the physical act of cooking may also serve to modify the alkaloid profile of harmala alkaloids. The actual preparation of the brew takes several hours, often taking place over the course of more than one day. 
After adding the plant material, each separately at this stage, to a large pot of water it is boiled until the water is reduced by half in volume. The individual brews are then added together and brewed until reduced significantly. This combined brew is what is taken by participants in ayahuasca ceremonies. Traditional use The uses of ayahuasca in traditional societies in South America vary greatly. Some cultures do use it for shamanic purposes, but in other cases, it is consumed socially among friends, in order to learn more about the natural environment, and even in order to visit friends and family who are far away. Nonetheless, people who work with ayahuasca in non-traditional contexts often align themselves with the philosophies and cosmologies associated with ayahuasca shamanism, as practiced among indigenous peoples like the Urarina of the Peruvian Amazon. Dietary taboos are often associated with the use of ayahuasca, although these seem to be specific to the culture around Iquitos, Peru, a major center of ayahuasca tourism. In the rainforest, these taboos tend towards the purification of one's self—abstaining from spicy and heavily seasoned foods, excess fat, salt, caffeine, acidic foods (such as citrus) and sex before, after, or during a ceremony. A diet low in foods containing tyramine has been recommended, as the speculative interaction of tyramine and MAOIs could lead to a hypertensive crisis; however, evidence indicates that harmala alkaloids act only on MAO-A, in a reversible way similar to moclobemide (an antidepressant that does not require dietary restrictions). Dietary restrictions are not used by the highly urban Brazilian ayahuasca church União do Vegetal, suggesting the risk is much lower than perceived and probably non-existent. Ceremony and the role of shamans Shamans, curanderos and experienced users of ayahuasca advise against consuming ayahuasca when not in the presence of one or several well-trained shamans. In some areas, there are purported brujos (Spanish for "witches") who masquerade as real shamans and who entice tourists to drink ayahuasca in their presence. Shamans believe one of the purposes for this is to steal one's energy and/or power, of which they believe every person has a limited stockpile. The shamans lead the ceremonial consumption of the ayahuasca beverage, in a rite that typically takes place over the entire night. During the ceremony, the effect of the drink lasts for hours. Prior to the ceremony, participants are instructed to abstain from spicy foods, red meat and sex. The ceremony is usually accompanied with purging which include vomiting and diarrhea, which is believed to release built-up emotions and negative energy. Traditional brew Traditional ayahuasca brews are usually made with Banisteriopsis caapi as an MAOI, while dimethyltryptamine sources and other admixtures vary from region to region. There are several varieties of caapi, often known as different "colors", with varying effects, potencies, and uses. DMT admixtures: Psychotria viridis (Chacruna) – leaves Diplopterys cabrerana (Chaliponga, Chagropanga, Banisteriopsis rusbyana) – leaves Psychotria carthagenensis (Amyruca) – leaves Mimosa tenuiflora (M. hostilis) - root bark Other common admixtures: Justicia pectoralis Brugmansia (Toé) Nicotiana rustica (Mapacho, variety of tobacco) Ilex guayusa, a relative of yerba mate Common admixtures with their associated ceremonial values and spirits: Ayahuma bark: Cannon Ball tree. 
Provides protection and is used in healing susto (soul loss from spiritual fright or trauma). Capirona bark: Provides cleansing, balance and protection. It is noted for its smooth bark, white flowers, and hard wood. Chullachaki caspi bark (Brysonima christianeae): Provides cleansing to the physical body. Used to transcend physical body ailments. Lopuna blanca bark: Provides protection. Punga amarilla bark: Yellow Punga. Provides protection. Used to pull or draw out negative spirits or energies. Remo caspi bark: Oar Tree. Used to move dense or dark energies. Wyra (huaira) caspi bark (Cedrelinga catanaeformis): Air Tree. Used to create purging, transcend gastro/intestinal ailments, calm the mind, and bring tranquility. Shiwawaku bark: Brings purple medicine to the ceremony. Uchu sanango: Head of the sanango plants. Huacapurana: Giant tree of the Amazon with very hard bark. Bobinsana: Mermaid Spirit. Provides major heart chakra opening, healing of emotions and relationships. Non-traditional usage In the late 20th century, the practice of ayahuasca drinking began spreading to Europe, North America and elsewhere. The first ayahuasca churches, affiliated with the Brazilian Santo Daime, were established in the Netherlands. A legal case was filed against two of the Church's leaders, Hans Bogers (one of the original founders of the Dutch Santo Daime community) and Geraldine Fijneman (the head of the Amsterdam Santo Daime community). Bogers and Fijneman were charged with distributing a controlled substance (DMT); however, the prosecution was unable to prove that the use of ayahuasca by members of the Santo Daime constituted a sufficient threat to public health and order such that it warranted denying their rights to religious freedom under ECHR Article 9. The 2001 verdict of the Amsterdam district court is an important precedent. Since then groups that are not affiliated to the Santo Daime have used ayahuasca, and a number of different "styles" have been developed, including non-religious approaches. Ayahuasca analogs In modern Europe and North America, ayahuasca analogs are often prepared using non-traditional plants which contain the same alkaloids. For example, seeds of the Syrian rue plant can be used as a substitute for the ayahuasca vine, and the DMT-rich Mimosa hostilis is used in place of chacruna. Australia has several indigenous plants which are popular among modern ayahuasqueros there, such as various DMT-rich species of Acacia. The name "ayahuasca" specifically refers to a botanical decoction that contains Banisteriopsis caapi. A synthetic version, known as pharmahuasca, is a combination of an appropriate MAOI and typically DMT. In this usage, the DMT is generally considered the main psychoactive active ingredient, while the MAOI merely preserves the psychoactivity of orally ingested DMT, which would otherwise be destroyed in the gut before it could be absorbed in the body. In contrast, traditionally among Amazonian tribes, the B. Caapi vine is considered to be the "spirit" of ayahuasca, the gatekeeper, and guide to the otherworldly realms. Brews similar to ayahuasca may be prepared using several plants not traditionally used in South America: DMT admixtures: Acacia maidenii (Maiden's wattle) – bark *not all plants are "active strains", meaning some plants will have very little DMT and others larger amounts Acacia phlebophylla, and other Acacias, most commonly employed in Australia – bark Anadenanthera peregrina, A. colubrina, A. excelsa, A. 
macrocarpa Desmanthus illinoensis (Illinois bundleflower) – root bark is mixed with a native source of beta-Carbolines (e.g., passion flower in North America) to produce a hallucinogenic drink called prairiehuasca. MAOI admixtures: Harmal (Peganum harmala, Syrian rue) – seeds Passion flower synthetic MAOIs, especially RIMAs Effects People who have consumed ayahuasca report having mystical experiences and spiritual revelations regarding their purpose on earth, the true nature of the universe, and deep insight into how to be the best person they possibly can. This is viewed by many as a spiritual awakening and what is often described as a near-death experience or rebirth. It is often reported that individuals feel they gain access to higher spiritual dimensions and make contact with various spiritual or extra-dimensional beings who can act as guides or healers. The experiences that people have while under the influence of ayahuasca are also culturally influenced. Westerners typically describe experiences with psychological terms like "ego death" and understand the hallucinations as repressed memories or metaphors of mental states. However, at least in Iquitos, Peru (a center of ayahuasca ceremonies), those from the area describe the experiences more in terms of the actions in the body and understand the visions as reflections of their environment—sometimes including the person who they believe caused their illness—as well as interactions with spirits. Recently, ayahuasca has been found to interact specifically with the visual cortex of the brain. In one study, de Araujo et al. measured the activity in the visual cortex when they showed participants photographs. Then, they measured the activity when the individuals closed their eyes. In the control group, the cortex was activated when looking at the photos, and less active when the participant closed his eyes; however, under the influence of ayahuasca and DMT, even with closed eyes, the cortex was just as active as when looking at the photographs. This study suggests that ayahuasca activates a complicated network of vision and memory which heightens the internal reality of the participants. It is claimed that people may experience profound positive life changes subsequent to consuming ayahuasca, by author Don Jose Campos and others. Vomiting can follow ayahuasca ingestion; this is considered by many shamans and experienced users of ayahuasca to be a purging and an essential part of the experience, representing the release of negative energy and emotions built up over the course of one's life. Others report purging in the form of diarrhea and hot/cold flashes. The ingestion of ayahuasca can also cause significant but temporary emotional and psychological distress. Excessive use could possibly lead to serotonin syndrome (although serotonin syndrome has never been specifically caused by ayahuasca except in conjunction with certain anti-depressants like SSRIs). Depending on dosage, the temporary non-entheogenic effects of ayahuasca can include tremors, nausea, vomiting, diarrhea, autonomic instability, hyperthermia, sweating, motor function impairment, sedation, relaxation, vertigo, dizziness, and muscle spasms which are primarily caused by the harmala alkaloids in ayahuasca. Long-term negative effects are not known. A few deaths linked to participation in the consumption of ayahuasca have been reported. 
Some of the deaths may have been due to unscreened preexisting heart conditions; interactions with drugs such as antidepressants, recreational drugs, caffeine (due to the CYP1A2 inhibition of the harmala alkaloids) or nicotine (from drinking tobacco tea for purging/cleansing); or improper and irresponsible use involving behavioral risks or possible drug-to-drug interactions. Potential therapeutic effects Ayahuasca has potential antidepressant and anxiolytic effects. For example, in 2018 it was reported that a single dose of ayahuasca significantly reduced symptoms of treatment-resistant depression in a small placebo-controlled trial. More specifically, statistically significant reductions of up to 82% in depressive scores were observed between baseline and 1, 7, and 21 days after ayahuasca administration, as measured on the Hamilton Rating Scale for Depression (HAM-D), the Montgomery-Åsberg Depression Rating Scale (MADRS), and the Anxious-Depression subscale of the Brief Psychiatric Rating Scale (BPRS). Other placebo-controlled research has provided evidence that ayahuasca can help improve self-perceptions in those with social anxiety disorder. Ayahuasca has also been studied for the treatment of addictions and shown to be effective, with lower Addiction Severity Index scores seen in users of ayahuasca compared to controls. Ayahuasca users have also been seen to consume less alcohol. Both in vitro and in vivo experiments have shown the DMT component of ayahuasca may induce the production of new neurons in the hippocampus. Murine test subjects performed better on memory tasks compared to a control group. Future research may lead to treatments for psychiatric and neurological disorders. Chemistry and pharmacology Harmala alkaloids are MAO-inhibiting beta-carbolines. The three most studied harmala alkaloids in the B. caapi vine are harmine, harmaline and tetrahydroharmine. Harmine and harmaline are selective and reversible inhibitors of monoamine oxidase A (MAO-A), while tetrahydroharmine is a weak serotonin reuptake inhibitor (SRI). This inhibition of MAO-A allows DMT to diffuse unmetabolized past the membranes in the stomach and small intestine, and eventually cross the blood–brain barrier (which, by itself, requires no MAO-A inhibition) to activate receptor sites in the brain. Without RIMAs or the non-selective, irreversible monoamine oxidase inhibition produced by drugs like phenelzine and tranylcypromine, DMT would be oxidized (and thus rendered biologically inactive) by monoamine oxidase enzymes in the digestive tract. Polymorphisms of the cytochrome P450-2D6 enzyme affect individuals' ability to metabolize harmine. Some natural tolerance to habitual use of ayahuasca (roughly once weekly) may develop through upregulation of the serotonergic system. A phase 1 pharmacokinetic study on ayahuasca (as Hoasca) with 15 volunteers was conducted in 1993, during the Hoasca Project. A review of the Hoasca Project has been published. The compound N,N-dimethyltryptamine (DMT) found in ayahuasca has been shown to be immunoregulatory by preventing severe hypoxia and oxidative stress in in vitro macrophages, cortical neurons, and dendritic cells by binding to the Sigma-1 receptor. In vitro co-treatment of monocyte-derived dendritic cells with DMT and 5-MeO-DMT inhibited the production of the pro-inflammatory cytokines IL-1β, IL-6, TNFα and the chemokine IL-8, while increasing the secretion of the anti-inflammatory cytokine IL-10 by activating the Sigma-1 receptor. 
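The pharmacological point above, that MAO-A inhibition is what makes DMT orally active, can be illustrated with a deliberately simplified model. The following Python sketch treats first-pass degradation of DMT by gut and liver MAO-A as a single first-order process; the rate constants, time window and function name are arbitrary illustrative assumptions for this example, not measured pharmacokinetic parameters.

```python
import math

def fraction_surviving(mao_activity: float, exposure_time: float) -> float:
    """Toy model: fraction of an oral DMT dose escaping first-pass metabolism,
    assuming simple first-order degradation at a rate proportional to MAO-A activity.
    All quantities are in arbitrary illustrative units."""
    return math.exp(-mao_activity * exposure_time)

exposure = 1.0       # relative time the dose is exposed to gut/liver MAO-A (illustrative)
uninhibited = 5.0    # nominal MAO-A activity with no inhibitor present (illustrative)
inhibited = 0.5      # residual activity when harmala alkaloids reversibly inhibit MAO-A (illustrative)

print(f"DMT alone:          {fraction_surviving(uninhibited, exposure):.1%} survives first pass")
print(f"DMT with B. caapi:  {fraction_surviving(inhibited, exposure):.1%} survives first pass")
```

With these made-up numbers the surviving fraction rises from under 1% without an inhibitor to roughly 60% when MAO-A activity is largely suppressed, which is the qualitative reason a beta-carboline-containing component is needed for the brew to work orally.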
Neurogenesis Several studies have shown the alkaloids in the B. caapi vine promote neurogenesis. More specifically, in vitro studies showed that harmine, tetrahydroharmine and harmaline, stimulated neural stem cell proliferation, migration, and differentiation into adult neurons. In vivo studies conducted on the dentate gyrus of the hippocampus noted an increase in the proliferation of BrdU positive cells in response to 100 μg of 5-MeO-DMT injected intravenously in the adult mouse brain. Legal status Internationally, DMT is a Schedule I drug under the Convention on Psychotropic Substances. The Commentary on the Convention on Psychotropic Substances notes, however, that the plants containing it are not subject to international control: The cultivation of plants from which psychotropic substances are obtained is not controlled by the Vienna Convention... Neither the crown (fruit, mescal button) of the Peyote cactus nor the roots of the plant Mimosa hostilis nor Psilocybe mushrooms themselves are included in Schedule 1, but only their respective principals, mescaline, DMT, and psilocin. A fax from the Secretary of the International Narcotics Control Board (INCB) to the Netherlands Ministry of Public Health sent in 2001 goes on to state that "Consequently, preparations (e.g. decoctions) made of these plants, including ayahuasca, are not under international control and, therefore, not subject to any of the articles of the 1971 Convention." Despite the INCB's 2001 affirmation that ayahuasca is not subject to drug control by international convention, in its 2010 Annual Report the Board recommended that governments consider controlling (i.e. criminalizing) ayahuasca at the national level. This recommendation by the INCB has been criticized as an attempt by the Board to overstep its legitimate mandate and as establishing a reason for governments to violate the human rights (i.e., religious freedom) of ceremonial ayahuasca drinkers. Under American federal law, DMT is a Schedule I drug that is illegal to possess or consume; however, certain religious groups have been legally permitted to consume ayahuasca. A court case allowing the União do Vegetal to import and use the tea for religious purposes in the United States, Gonzales v. O Centro Espirita Beneficente Uniao do Vegetal, was heard by the U.S. Supreme Court on November 1, 2005; the decision, released February 21, 2006, allows the UDV to use the tea in its ceremonies pursuant to the Religious Freedom Restoration Act. In a similar case an Ashland, Oregon-based Santo Daime church sued for their right to import and consume ayahuasca tea. In March 2009, U.S. District Court Judge Panner ruled in favor of the Santo Daime, acknowledging its protection from prosecution under the Religious Freedom Restoration Act. In 2017 the Santo Daime Church Céu do Montréal in Canada received religious exemption to use ayahuasca as a sacrament in their rituals. Religious use in Brazil was legalized after two official inquiries into the tea in the mid-1980s, which concluded that ayahuasca is not a recreational drug and has valid spiritual uses. In France, Santo Daime won a court case allowing them to use the tea in early 2005; however, they were not allowed an exception for religious purposes, but rather for the simple reason that they did not perform chemical extractions to end up with pure DMT and harmala and the plants used were not scheduled. 
Four months after the court victory, the common ingredients of ayahuasca as well as harmala were declared stupéfiants, or narcotic schedule I substances, making the tea and its ingredients illegal to use or possess. In June 2019, Oakland, California, decriminalized natural entheogens. The City Council passed the resolution in a unanimous vote, ending the investigation and imposition of criminal penalties for use and possession of entheogens derived from plants or fungi. The resolution states: "Practices with Entheogenic Plants have long existed and have been considered to be sacred to human cultures and human interrelationships with nature for thousands of years, and continue to be enhanced and improved to this day by religious and spiritual leaders, practicing professionals, mentors, and healers throughout the world, many of whom have been forced underground." In January 2020, Santa Cruz, California, and in September 2020, Ann Arbor, Michigan, decriminalized natural entheogens. Intellectual property issues Ayahuasca has also stirred debate regarding intellectual property protection of traditional knowledge. In 1986 the US Patent and Trademarks Office allowed the granting of a patent on the ayahuasca vine B. caapi. It allowed this patent based on the assumption that ayahuasca's properties had not been previously described in writing. Several public interest groups, including the Coordinating Body of Indigenous Organizations of the Amazon Basin (COICA) and the Coalition for Amazonian Peoples and Their Environment (Amazon Coalition) objected. In 1999 they brought a legal challenge to this patent which had granted a private US citizen "ownership" of the knowledge of a plant that is well-known and sacred to many indigenous peoples of the Amazon, and used by them in religious and healing ceremonies. Later that year the PTO issued a decision rejecting the patent, on the basis that the petitioners' arguments that the plant was not "distinctive or novel" were valid; however, the decision did not acknowledge the argument that the plant's religious or cultural values prohibited a patent. In 2001, after an appeal by the patent holder, the US Patent Office reinstated the patent. The law at the time did not allow a third party such as COICA to participate in that part of the reexamination process. The patent, held by US entrepreneur Loren Miller, expired in 2003. See also Icaro Kambo Mariri Footnotes Notes References Further reading Burroughs, William S. and Allen Ginsberg. The Yage Letters. San Francisco: City Lights, 1963. Langdon, E. Jean Matteson & Gerhard Baer, eds. Portals of Power: Shamanism in South America. Albuquerque: University of New Mexico Press, 1992. Shannon, Benny. The Antipodes of the Mind: Charting the Phenomenology of the Ayahuasca Experience. Oxford: Oxford University Press, 2002. Taussig, Michael. Shamanism, Colonialism, and the Wild Man: A Study in Terror and Healing. Chicago: University of Chicago Press, 1986. External links Entheogens Herbal and fungal hallucinogens Indigenous culture of the Amazon Mixed drinks Polysubstance drinks
Ayahuasca
Anointing of the sick, known also by other names, is a form of religious anointing or "unction" (an older term with the same meaning) for the benefit of a sick person. It is practiced by many Christian churches and denominations. Anointing of the sick was a customary practice in many civilizations, including among the ancient Greeks and early Jewish communities. The use of oil for healing purposes is referred to in the writings of Hippocrates. Anointing of the sick should be distinguished from other religious anointings that occur in relation to other sacraments, in particular baptism, confirmation and ordination, and also in the coronation of a monarch. Names Since 1972, the Roman Catholic Church has used the name "Anointing of the Sick" both in the English translations issued by the Holy See of its official documents in Latin and in the English official documents of Episcopal conferences. It does not, of course, forbid the use of other names, for example the more archaic term "Unction of the Sick" or the term "Extreme Unction". Cardinal Walter Kasper used the latter term in his intervention at the 2005 Assembly of the Synod of Bishops. However, the Church declared that "'Extreme unction' ... may also and more fittingly be called 'anointing of the sick'", and has itself adopted the latter term, while not outlawing the former. This is to emphasize that the sacrament is available, and recommended, to all those suffering from any serious illness, and to dispel the common misconception that it is exclusively for those at or very near the point of death. Extreme Unction was the usual name for the sacrament in the West from the late twelfth century until 1972, and was thus used at the Council of Trent and in the 1913 Catholic Encyclopedia. Peter Lombard (died 1160) is the first writer known to have used the term, which did not become the usual name in the West till towards the end of the twelfth century, and never became current in the East. The word "extreme" (final) indicated either that it was the last of the sacramental unctions (after the anointings at Baptism, Confirmation and, if received, Holy Orders) or because at that time it was normally administered only when a patient was in extremis. Other names used in the West include the unction or blessing of consecrated oil, the unction of God, and the office of the unction. Among some Protestant bodies, who do not consider it a sacrament, but instead as a practice suggested rather than commanded by Scripture, it is called anointing with oil. In the Greek Church the sacrament is called Euchelaion (Greek Εὐχέλαιον, from εὐχή, "prayer", and ἔλαιον, "oil"). Other names are also used, such as ἅγιον ἔλαιον (holy oil), ἡγιασμένον ἔλαιον (consecrated oil), and χρῖσις or χρῖσμα (anointing). The Community of Christ uses the term administration to the sick. The term "last rites" refers to administration to a dying person not only of this sacrament but also of Penance and Holy Communion, the last of which, when administered in such circumstances, is known as "Viaticum", a word whose original meaning in Latin was "provision for the journey". The normal order of administration is: first Penance (if the dying person is physically unable to confess, absolution, conditional on the existence of contrition, is given); next, Anointing; finally, Viaticum (if the person can receive it). Biblical texts The chief biblical text concerning the rite is James 5:14–15: "Is any among you sick? 
Let him call for the elders of the church, and let them pray over him, anointing him with oil in the name of the Lord; and the prayer of faith will save the sick man, and the Lord will raise him up; and if he has committed sins, he will be forgiven" (RSV). Matthew 10:8, Luke 10:8–9 and Mark 6:13 are also quoted in this context. Sacramental beliefs The Catholic, Eastern Orthodox, Coptic and Old Catholic Churches consider this anointing to be a sacrament. Other Christians, too, in particular Lutherans, Anglicans and some other Protestant and Christian communities, use a rite of anointing the sick, without necessarily classifying it as a sacrament. In the Churches mentioned here by name, the oil used (called "oil of the sick" in both West and East) is blessed specifically for this purpose. Roman Catholic Church An extensive account of the teaching of the Catholic Church on Anointing of the Sick is given in the Catechism of the Catholic Church. Anointing of the Sick is one of the seven Sacraments recognized by the Catholic Church, and is associated with not only bodily healing but also forgiveness of sins. Only ordained priests can administer it, and "any priest may carry the holy oil with him, so that in a case of necessity he can administer the sacrament of anointing of the sick." Sacramental graces The Catholic Church sees the effects of the sacrament as follows. As the sacrament of Marriage gives grace for the married state, the sacrament of Anointing of the Sick gives grace for the state into which people enter through sickness. Through the sacrament a gift of the Holy Spirit is given that renews confidence and faith in God and strengthens against temptations to discouragement, despair and anguish at the thought of death and the struggle of death; it prevents the sick person from losing Christian hope in God's justice, truth and salvation. The special grace of the sacrament of the Anointing of the Sick has as its effects: the uniting of the sick person to the passion of Christ, for his own good and that of the whole Church; the strengthening, peace, and courage to endure, in a Christian manner, the sufferings of illness or old age; the forgiveness of sins, if the sick person was not able to obtain it through the sacrament of penance; the restoration of health, if it is conducive to the salvation of his soul; the preparation for passing over to eternal life. Sacramental oil The duly blessed oil used in the sacrament is, as laid down in the Apostolic Constitution, Sacram unctionem infirmorum, pressed from olives or from other plants. It is blessed by the bishop of the diocese at the Chrism Mass he celebrates on Holy Thursday or on a day close to it. If oil blessed by the bishop is not available, the priest administering the sacrament may bless the oil, but only within the framework of the celebration. Ordinary Form of the Roman Rite (1972) The Roman Rite Anointing of the Sick, as revised in 1972, puts greater stress than in the immediately preceding centuries on the sacrament's aspect of healing, primarily spiritual but also physical, and points to the place sickness holds in the normal life of Christians and its part in the redemptive work of the Church. Canon law permits its administration to a Catholic who has reached the age of reason and is beginning to be put in danger by illness or old age, unless the person in question obstinately persists in a manifestly grave sin. 
"If there is any doubt as to whether the sick person has reached the use of reason, or is dangerously ill, or is dead, this sacrament is to be administered". There is an obligation to administer it to the sick who, when they were in possession of their faculties, at least implicitly asked for it. A new illness or a renewal or worsening of the first illness enables a person to receive the sacrament a further time. The ritual book on pastoral care of the sick provides three rites: anointing outside Mass, anointing within Mass, and anointing in a hospital or institution. The rite of anointing outside Mass begins with a greeting by the priest, followed by sprinkling of all present with holy water, if deemed desirable, and a short instruction. There follows a penitential act, as at the beginning of Mass. If the sick person wishes to receive the sacrament of penance, it is preferable that the priest make himself available for this during a previous visit; but if the sick person must confess during the celebration of the sacrament of anointing, this confession replaces the penitential rite A passage of Scripture is read, and the priest may give a brief explanation of the reading, a short litany is said, and the priest lays his hands on the head of the sick person and then says a prayer of thanksgiving over the already blessed oil or, if necessary, blesses the oil himself. The actual anointing of the sick person is done on the forehead, with the prayer: PER ISTAM SANCTAM UNCTIONEM ET SUAM PIISSIMAM MISERICORDIAM ADIUVET TE DOMINUS GRATIA SPIRITUS SANCTI, UT A PECCATIS LIBERATUM TE SALVET ATQUE PROPITIUS ALLEVIET. AMEN. "Through this holy anointing may the Lord in his love and mercy help you with the grace of the Holy Spirit," and on the hands, with the prayer "May the Lord who frees you from sin save you and raise you up". To each prayer the sick person, if able, responds: "Amen." It is permitted, in accordance with local culture and traditions and the condition of the sick person, to anoint other parts of the body in addition, such as the area of pain or injury, but without repeating the sacramental form. In case of emergency, a single anointing, if possible but not absolutely necessary if not possible on the forehead, is sufficient. Extraordinary Form of the Roman Rite From the early Middle Ages until after the Second Vatican Council the sacrament was administered, within the Latin Church, only when death was approaching and, in practice, bodily recovery was not ordinarily looked for, giving rise, as mentioned above to the name "Extreme Unction" (i.e. final anointing). The extraordinary form of the Roman Rite includes anointing of seven parts of the body while saying in Latin: Per istam sanctam Unctiónem + et suam piisimam misericórdiam, indúlgeat tibi Dóminus quidquid per (visum, auditorum, odorátum, gustum et locutiónem, tactum, gressum, lumborum delectationem) deliquisti. Through this holy unction and His own most tender mercy may the Lord pardon thee whatever sins thou hast committed by (sight by hearing, smell, taste, touch, walking, carnal delectation), the last phrase corresponding to the part of the body that was touched. The 1913 Catholic Encyclopedia explains that "the unction of the loins is generally, if not universally, omitted in English-speaking countries, and it is of course everywhere forbidden in case of women". Anointing in the extraordinary form is still permitted under the conditions mentioned in article 9 of the 2007 motu proprio Summorum Pontificum. 
In the case of necessity when only a single anointing on the forehead is possible, it suffices for valid administration of the sacrament to use the shortened form: Per istam sanctam unctionem indulgeat tibi Dominus, quidquid deliquisti. Amen. Through this holy anointing, may the Lord pardon thee whatever sins thou hast committed. Amen. When it become opportune, all the anointings are to be supplied together with their respective forms for the integrity of the sacrament. If the sacrament is conferred conditionally, for example, if a person is unconscious, "Si es capax (If you are capable)” is added to the beginning of the form, not "Si dispositus es (if you are disposed)." In doubt if the soul has left the body through death, the priest adds, "Si vivis (If you are alive)." Other Western historical forms Liturgical rites of the Catholic Church, both Western and Eastern, other than the Roman, have a variety of other forms for celebrating the sacrament. For example, according to Giovanni Diclich who cites De Rubeis, De Ritibus vestutis &c. cap. 28 p. 381, the Aquileian Rite, also called Rito Patriarchino, had twelve anointings, namely, of the head, forehead, eyes, ears, nose, lips, throat, chest, heart, shoulders, hands, and feet. The form used to anoint is the first person plural indicative, except for the anointing on the head which could be either in the first person singular or plural. For example, the form is given as: Ungo caput tuum Oleo benedicto + in nomine Patris, et Filii, et Spiritus Sancti. Vel Ungimus caput tuum Oleo divinitus sanctificato + in nomine Sanctae et Individuae Trinitatis ut more militis praeparatus ad luctamen, possis aereas superare catervas: per Christum Dominum nostrum. Amen. I anoint your head with blessed Oil + in the name of the Father, and of the Son, and of the Holy Spirit. Or We anoint your head with divinely sanctified Oil + in the name of the Holy and Undivided Trinity so that prepared for the conflict in the way of a soldier, you might be able to overcome the aereal throng: through Christ our Lord. Amen. The other anointings all mention an anointing with oil and are all made "through Christ our Lord," and "in the name of the Father, and of the Son, and of the Holy Spirit," except the anointing of the heart which, as in the second option for anointing of the head, is "in the name of the Holy and Undivided Trinity." the Latin forms are as follows: (Ad frontem) Ungimus frontem tuam Oleo sancto in nomine Patris, et Filii, et Spiritus Sancti, in remissionem omnium peccatorum; ut sit tibi haec unction sanctificationis ad purificationem mentis et corporis; ut non lateat in te spiritus immundus neque in membris, neque in medullis, neque in ulla compagine membrorum: sed habitet in te virtus Christi Altissimi et Spiritus Sancti: per Christum Dominum nostrum. Amen. (Ad oculos) Ungimus oculos tuos Oleo sanctificato, in nomine Patris, et Filii, et Spiritus Sancti: ut quidquid illicito visu deliquisti, hac unctione expietur per Christum Dominum nostrum. Amen. (Ad aures) Ungimus has aures sacri Olei liquore in nomine Patris, et Filii, et Spiritus Sancti: ut quidquid peccati delectatione nocivi auditus admissum est, medicina hac spirituali evacuetur: per Christum Dominum nostrum. Amen. (Ad nares) Ungimus has nares Olei hujus liquore in nomine Patris, et Filii, et Spiritus Sancti: ut quidquid noxio vapore contractum est, vel odore superfluo, ista evacuet unctio vel medicatio: per Christum Dominum nostrum. Amen. 
(Ad labia) Ungimus labia ista consecrati Olei medicamento, in nomine Patris, et Filii, et Spiritus Sancti: ut quidquid otiose, vel etiam crimnosa peccasti locutione, divina clementia miserante expurgetur: per Christum Dominum nostrum. Amen. (Ad guttur) Ungimus te in gutture Oleo sancto in nomine Patris, et Filii, et Spiritus Sancti, ut non lateat in te spiritus immundus, neque in membris, neque in medullis, neque in ulla compagine membrorum: sed habitet in te virtus Christi Altissimi et Spiritus Sancti:quatenus per hujus operationem mysterii, et per hanc sacrati Olei unctionem, atque nostrum deprecationem virtute Sanctae Trinitatis medicatus, sive fotus; pristinam, et meliorem percipere merearis sanitatem: per Christum Dominum nostrum. Amen. (Ad pectus) Ungimus pectus tuum Oleo divinitus sanctificato in nomine Patris, et Filii, et Spiritus Sancti, ut hac unctione pectoris fortiter certare valeas adversus aereas potestates: per Christum Dominum nostrum. Amen. (Ad cor) Ungimus locum cordis Oleo divinitus sanctificato, coelesti munere nobis attributo, in nomine Sanctae et Individuae Trinitatis, ut ipsa interius exteriusque te sanando vivificet, quae universum ne pereat continent: per Christum Dominum nostrum. Amen. (Ad scapulas) Ungimus has scapulas, sive in medio scapularum Oleo sacrato, in nomine Patris, et Filii, et Spiritus Sancti, ut ex omni parte spirituali protectione munitus, jacula diabolici impetus viriliter contemnere, ac procul possis cum robore superni juvaminis repellere: per Christum Dominum nostrum. Amen. (Ad manus) Ungimus has manus Oleo sacro, in nomine Patris, et Filii, et Spiritus Sancti, ut quidquid illicito opera, vel noxio peregerunt, per hanc sanctam unctionem evacuetur: per Christum Dominum nostrum. Amen. (Ad pedes) Ungimus hos pedes Oleo benedicto, in nomine Patris, et Filii, et Spiritus Sancti, ut quidquid superfluo, vel nocivo incessu commiserunt, ista aboleat perunctio: per Christum Dominum nostrum. Amen. Eastern Orthodox Church The teaching of the Eastern Orthodox Church on the Holy Mystery (sacrament) of Unction is similar to that of the Roman Catholic Church. However, the reception of the Mystery is not limited to those who are enduring physical illness. The Mystery is given for healing (both physical and spiritual) and for the forgiveness of sin. For this reason, it is normally required that one go to confession before receiving Unction. Because it is a Sacred Mystery of the Church, only Orthodox Christians may receive it. The solemn form of Eastern Christian anointing requires the ministry of seven priests. A table is prepared, upon which is set a vessel containing wheat. Into the wheat has been placed an empty shrine-lamp, seven candles, and seven anointing brushes. Candles are distributed for all to hold during the service. The rite begins with reading Psalm 50 (the great penitential psalm), followed by the chanting of a special canon. After this, the senior priest (or bishop) pours pure olive oil and a small amount of wine into the shrine lamp, and says the "Prayer of the Oil", which calls upon God to "...sanctify this Oil, that it may be effectual for those who shall be anointed therewith, unto healing, and unto relief from every passion, every malady of the flesh and of the spirit, and every ill..." Then follow seven series of epistles, gospels, long prayers, Ektenias (litanies) and anointings. Each series is served by one of the seven priests in turn. 
The afflicted one is anointed with the sign of the cross on seven places: the forehead, the nostrils, the cheeks, the lips, the breast, the palms of both hands, and the back of the hands. After the last anointing, the Gospel Book is opened and placed with the writing down upon the head of the one who was anointed, and the senior priest reads the "Prayer of the Gospel". At the end, the anointed kisses the Gospel, the Cross and the right hands of the priests, receiving their blessing. Anointing is considered to be a public rather than a private sacrament, and so as many of the faithful who are able are encouraged to attend. It should be celebrated in the church when possible, but if this is impossible, it may be served in the home or hospital room of the afflicted. Unction in the Greek Orthodox Church and Churches of Hellenic custom (Antiochian Eastern Orthodox, Melkite, etc.) is usually given with a minimum of ceremony. Anointing may also be given during Forgiveness Vespers and Great Week, on Great and Holy Wednesday, to all who are prepared. Those who receive Unction on Holy Wednesday should go to Holy Communion on Great Thursday. The significance of receiving Unction on Holy Wednesday is shored up by the hymns in the Triodion for that day, which speak of the sinful woman who anointed the feet of Christ. Just as her sins were forgiven because of her penitence, so the faithful are exhorted to repent of their sins. In the same narrative, Jesus says, "in that she hath poured this ointment on my body, she did it for my burial" (Id., v. 12), linking the unction with Christ's death and resurrection. In some dioceses of the Russian Orthodox Church it is customary for the bishop to visit each parish or region of the diocese some time during Great Lent and give Anointing for the faithful, together with the local clergy. Hussite Church The Hussite Church regards anointing of the sick as one of the seven sacraments. Lutheran churches Anointing of the sick has been retained in Lutheran churches since the Reformation. Although it is not considered a sacrament like baptism, confession and the Eucharist, it is known as a ritual in the same respect as confirmation, holy orders, and matrimony. Liturgy After the penitent has received absolution following confession, the presiding minister recites James 5:14-16. He goes on to recite the following: [Name], you have confessed your sins and received Holy Absolution. In remembrance of the grace of God given by the Holy Spirit in the waters of Holy Baptism, I will anoint you with oil. Confident in our Lord and in love for you, we also pray for you that you will not lose faith. Knowing that in Godly patience the Church endures with you and supports you during this affliction. We firmly believe that this illness is for the glory of God and that the Lord will both hear our prayer and work according to His good and gracious will. He anoints the person on the forehead and says this blessing: Almighty God, the Father of our Lord Jesus Christ, who has given you the new birth of water and the Spirit and has forgiven you all your sins, strengthen you with His grace to life everlasting. Amen. Anglican churches The 1552 and later editions of the Book of Common Prayer omitted the form of anointing given in the original (1549) version in its Order for the Visitation of the Sick, but most twentieth-century Anglican prayer books do have anointing of the sick. 
The Book of Common Prayer (1662) and the proposed revision of 1928 include the "visitation of the sick" and "communion of the sick" (which consist of various prayers, exhortations and psalms). Some Anglicans accept that anointing of the sick has a sacramental character and is therefore a channel of God's grace, seeing it as an "outward and visible sign of an inward and spiritual grace" which is the definition of a sacrament. The Catechism of the Episcopal Church of the United States of America includes Unction of the Sick as among the "other sacramental rites" and it states that unction can be done with oil or simply with laying on of hands. The rite of anointing is included in the Episcopal Church's "Ministration to the Sick" Article 25 of the Thirty-Nine Articles, which are one of the historical formularies of the Church of England (and as such, the Anglican Communion), speaking of the sacraments, says: "Those five commonly called Sacraments, that is to say, Confirmation, Penance, Orders, Matrimony, and extreme Unction, are not to be counted for Sacraments of the Gospel, being such as have grown partly of the corrupt following of the Apostles, partly are states of life allowed in the Scriptures; but yet have not like nature of Sacraments with Baptism, and the Lord's Supper, for that they have not any visible sign or ceremony ordained of God." Other Protestant communities Protestants provide anointing in a wide variety of formats. Protestant communities generally vary widely on the sacramental character of anointing. Most Mainline Protestants recognize only two sacraments, the eucharist and baptism, deeming anointing only a humanly-instituted rite. Non-traditional Protestant communities generally use the term ordinance rather than sacrament. Mainline beliefs Liturgical or Mainline Protestant communities (e.g. Presbyterian, Congregationalist/United Church of Christ, Methodist, etc.) all have official yet often optional liturgical rites for the anointing of the sick partly on the model of Western pre-Reformation rites. Anointing need not be associated with grave illness or imminent danger of death. Charismatic and Pentecostal beliefs In Charismatic and Pentecostal communities, anointing of the sick is a frequent practice and has been an important ritual in these communities since the respective movements were founded in the 19th and 20th centuries. These communities use extemporaneous forms of administration at the discretion of the minister, who need not be a pastor. There is minimal ceremony attached to its administration. Usually, several people physically touch (laying on of hands) the recipient during the anointing. It may be part of a worship service with the full assembly of the congregation present, but may also be done in more private settings, such as homes or hospital rooms. Some Pentecostals believe that physical healing is within the anointing and so there is often great expectation or at least great hope that a miraculous cure or improvement will occur when someone is being prayed over for healing. Evangelical and fundamentalist beliefs In Evangelical and Fundamentalist communities, anointing of the sick is performed with varying degrees of frequency, although laying on of hands may be more common than anointing. The rite would be similar to that of Pentecostals in its simplicity, but would usually not have the same emotionalism attached to it. Unlike some Pentecostals, Evangelicals and Fundamentalists generally do not believe that physical healing is within the anointing. 
Therefore, God may or may not grant physical healing to the sick. The healing conferred by anointing is thus a spiritual event that may not result in physical recovery. The Church of the Brethren practices Anointing with Oil as an ordinance along with Baptism, Communion, Laying on of Hands, and the Love Feast. Evangelical Protestants who use anointing differ about whether the person doing the anointing must be an ordained member of the clergy, whether the oil must necessarily be olive oil and have been previously specially consecrated, and about other details. Several Evangelical groups reject the practice so as not to be identified with charismatic and Pentecostal groups, which practice it widely. Latter Day Saint movement The Church of Jesus Christ of Latter-day Saints Latter-day Saints, who consider themselves restorationists, also practice ritual anointing of the sick, as well as other forms of anointing. Members of The Church of Jesus Christ of Latter-day Saints (LDS Church) consider anointing to be an ordinance. Members of the LDS Church who hold the Melchizedek priesthood may use consecrated olive oil in performing the ordinance of blessing of the "sick or afflicted", though oil is not required if it is unavailable. The priesthood holder anoints the recipient's head with a drop of oil, then lays hands upon that head and declare their act of anointing. Then another priesthood holder joins in, if available, and pronounces a "sealing" of the anointing and other words of blessing, as he feels inspired. Melchizedek priesthood holders are also authorized to consecrate any pure olive oil and often carry a personal supply in case they have need to perform an anointing. Oil is not used in other blessings, such as for people seeking comfort or counsel. In addition to the James 5:14-15 reference, the Doctrine and Covenants contains numerous references to the anointing and healing of the sick by those with authority to do so. Community of Christ Administration to the sick is one of the eight sacraments of the Community of Christ, in which it has also been used for people seeking spiritual, emotional or mental healing. See also Anointing of the Sick (Catholic Church) Faith healing References External links Church Fathers on the Anointing of the Sick Western The Anointing of the Sick Sacrament of the Anointing of the Sick "Extreme Unction" in Catholic Encyclopedia (1913) Apostolic Constitution "Sacram unctionem infirmorum" Eastern Holy Anointing of the Sick article from the Moscow Patriarchate Unction of the Sick article from the Sydney, Australia diocese of the Russian Orthodox Church Outside of Russia The Mystery of Unction Russian Orthodox Cathedral of St. John the Baptist, Washington, DC Coptic Unction on Holy Saturday (Photo) Christian terminology New Testament words and phrases Sacraments Supernatural healing
Anointing of the sick
An antibody (Ab), also known as an immunoglobulin (Ig), is a large, Y-shaped protein used by the immune system to identify and neutralize foreign objects such as pathogenic bacteria and viruses. The antibody recognizes a unique molecule of the pathogen, called an antigen. Each tip of the "Y" of an antibody contains a paratope (analogous to a lock) that is specific for one particular epitope (analogous to a key) on an antigen, allowing these two structures to bind together with precision. Using this binding mechanism, an antibody can tag a microbe or an infected cell for attack by other parts of the immune system, or can neutralize it directly (for example, by blocking a part of a virus that is essential for its invasion). To allow the immune system to recognize millions of different antigens, the antigen-binding sites at both tips of the antibody come in an equally wide variety. In contrast, the remainder of the antibody is relatively constant. It only occurs in a few variants, which define the antibody's class or isotype: IgA, IgD, IgE, IgG, or IgM. The constant region at the trunk of the antibody includes sites involved in interactions with other components of the immune system. The class hence determines the function triggered by an antibody after binding to an antigen, in addition to some structural features. Antibodies from different classes also differ in where they are released in the body and at what stage of an immune response. Together with B and T cells, antibodies comprise the most important part of the adaptive immune system. They occur in two forms: one that is attached to a B cell, and the other, a soluble form, that is unattached and found in extracellular fluids such as blood plasma. Initially, all antibodies are of the first form, attached to the surface of a B cell – these are then referred to as B-cell receptors (BCR). After an antigen binds to a BCR, the B cell activates to proliferate and differentiate into either plasma cells, which secrete soluble antibodies with the same paratope, or memory B cells, which survive in the body to enable long-lasting immunity to the antigen. Soluble antibodies are released into the blood and tissue fluids, as well as many secretions. Because these fluids were traditionally known as humors, antibody-mediated immunity is sometimes known as, or considered a part of, humoral immunity. The soluble Y-shaped units can occur individually as monomers, or in complexes of two to five units. Antibodies are glycoproteins belonging to the immunoglobulin superfamily. The terms antibody and immunoglobulin are often used interchangeably, though the term 'antibody' is sometimes reserved for the secreted, soluble form, i.e. excluding B-cell receptors. Structure Antibodies are heavy (~150 kDa) proteins of about 10 nm in size, arranged in three globular regions that roughly form a Y shape. In humans and most mammals, an antibody unit consists of four polypeptide chains; two identical heavy chains and two identical light chains connected by disulfide bonds. Each chain is a series of domains: somewhat similar sequences of about 110 amino acids each. These domains are usually represented in simplified schematics as rectangles. Light chains consist of one variable domain VL and one constant domain CL, while heavy chains contain one variable domain VH and three to four constant domains CH1, CH2, ... 
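The chain composition just described can be captured in a small data structure. The Python sketch below is only an illustrative model of the text, assuming an IgG-like heavy chain with three constant domains and rough per-chain masses of about 50 kDa (heavy) and 25 kDa (light); the class and field names are invented for this example and are not a standard representation used in immunology software.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LightChain:
    # one variable domain (VL) and one constant domain (CL)
    domains: List[str] = field(default_factory=lambda: ["VL", "CL"])
    approx_kda: float = 25.0  # rough mass of one light chain (assumed for illustration)

@dataclass
class HeavyChain:
    # one variable domain (VH) and, for an IgG-like chain, three constant domains
    domains: List[str] = field(default_factory=lambda: ["VH", "CH1", "CH2", "CH3"])
    approx_kda: float = 50.0  # rough mass of one heavy chain (assumed for illustration)

@dataclass
class Antibody:
    # two identical heavy chains and two identical light chains, linked by disulfide bonds
    heavy_chains: List[HeavyChain] = field(default_factory=lambda: [HeavyChain(), HeavyChain()])
    light_chains: List[LightChain] = field(default_factory=lambda: [LightChain(), LightChain()])

    def approx_mass_kda(self) -> float:
        return (sum(c.approx_kda for c in self.heavy_chains)
                + sum(c.approx_kda for c in self.light_chains))

ig = Antibody()
print(ig.approx_mass_kda())  # ~150 kDa, in line with the size quoted above
```

Swapping in a heavy-chain domain list with a fourth constant domain would model the classes described above as carrying four constant domains rather than three.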
Structurally an antibody is also partitioned into two antigen-binding fragments (Fab), containing one VL, VH, CL, and CH1 domain each, as well as the crystallisable fragment (Fc), forming the trunk of the Y shape. In between them is a hinge region of the heavy chains, whose flexibility allows antibodies to bind to pairs of epitopes at various distances, to form complexes (dimers, trimers, etc.), and to bind effector molecules more easily. In an electrophoresis test of blood proteins, antibodies mostly migrate to the last, gamma globulin fraction. Conversely, most gamma-globulins are antibodies, which is why the two terms were historically used as synonyms, as were the symbols Ig and γ. This variant terminology fell out of use due to the correspondence being inexact and due to confusion with γ heavy chains which characterize the IgG class of antibodies. Antigen-binding site The variable domains can also be referred to as the FV region. It is the subregion of Fab that binds to an antigen. More specifically, each variable domain contains three hypervariable regions – the amino acids seen there vary the most from antibody to antibody. When the protein folds, these regions give rise to three loops of β-strands, localized near one another on the surface of the antibody. These loops are referred to as the complementarity-determining regions (CDRs), since their shape complements that of an antigen. Three CDRs from each of the heavy and light chains together form an antibody-binding site whose shape can be anything from a pocket to which a smaller antigen binds, to a larger surface, to a protrusion that sticks out into a groove in an antigen. Typically however only a few residues contribute to most of the binding energy. The existence of two identical antibody-binding sites allows antibody molecules to bind strongly to multivalent antigen (repeating sites such as polysaccharides in bacterial cell walls, or other sites at some distance apart), as well as to form antibody complexes and larger antigen-antibody complexes. The resulting cross-linking plays a role in activating other parts of the immune system. The structures of CDRs have been clustered and classified by Chothia et al. and more recently by North et al. and Nikoloudis et al. However, describing an antibody's binding site using only one single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities. In the framework of the immune network theory, CDRs are also called idiotypes. According to immune network theory, the adaptive immune system is regulated by interactions between idiotypes. Fc region The Fc region (the trunk of the Y shape) is composed of constant domains from the heavy chains. Its role is in modulating immune cell activity: it is where effector molecules bind to, triggering various effects after the antibody Fab region binds to an antigen. Effector cells (such as macrophages or natural killer cells) bind via their Fc receptors (FcR) to the Fc region of an antibody, while the complement system is activated by binding the C1q protein complex. IgG or IgM can bind to C1q, but IgA cannot, therefore IgA does not activate the classical complement pathway. Another role of the Fc region is to selectively distribute different antibody classes across the body. 
In particular, the neonatal Fc receptor (FcRn) binds to the Fc region of IgG antibodies to transport it across the placenta, from the mother to the fetus. Antibodies are glycoproteins, that is, they have carbohydrates (glycans) added to conserved amino acid residues. These conserved glycosylation sites occur in the Fc region and influence interactions with effector molecules. Protein structure The N-terminus of each chain is situated at the tip. Each immunoglobulin domain has a similar structure, characteristic of all the members of the immunoglobulin superfamily: it is composed of between 7 (for constant domains) and 9 (for variable domains) β-strands, forming two beta sheets in a Greek key motif. The sheets create a "sandwich" shape, the immunoglobulin fold, held together by a disulfide bond. Antibody complexes Secreted antibodies can occur as a single Y-shaped unit, a monomer. However, some antibody classes also form dimers with two Ig units (as with IgA), tetramers with four Ig units (like teleost fish IgM), or pentamers with five Ig units (like shark IgW or mammalian IgM, which occasionally forms hexamers as well, with six units). Antibodies also form complexes by binding to antigen: this is called an antigen-antibody complex or immune complex. Small antigens can cross-link two antibodies, also leading to the formation of antibody dimers, trimers, tetramers, etc. Multivalent antigens (e.g., cells with multiple epitopes) can form larger complexes with antibodies. An extreme example is the clumping, or agglutination, of red blood cells with antibodies in the Coombs test to determine blood groups: the large clumps become insoluble, leading to visually apparent precipitation. B cell receptors The membrane-bound form of an antibody may be called a surface immunoglobulin (sIg) or a membrane immunoglobulin (mIg). It is part of the B cell receptor (BCR), which allows a B cell to detect when a specific antigen is present in the body and triggers B cell activation. The BCR is composed of surface-bound IgD or IgM antibodies and associated Ig-α and Ig-β heterodimers, which are capable of signal transduction. A typical human B cell will have 50,000 to 100,000 antibodies bound to its surface. Upon antigen binding, they cluster in large patches, which can exceed 1 micrometer in diameter, on lipid rafts that isolate the BCRs from most other cell signaling receptors. These patches may improve the efficiency of the cellular immune response. In humans, the cell surface is bare around the B cell receptors for several hundred nanometers, which further isolates the BCRs from competing influences. Classes Antibodies can come in different varieties known as isotypes or classes. In placental mammals there are five antibody classes known as IgA, IgD, IgE, IgG, and IgM, which are further subdivided into subclasses such as IgA1, IgA2. The prefix "Ig" stands for immunoglobulin, while the suffix denotes the type of heavy chain the antibody contains: the heavy chain types α (alpha), γ (gamma), δ (delta), ε (epsilon), μ (mu) give rise to IgA, IgG, IgD, IgE, IgM, respectively. The distinctive features of each class are determined by the part of the heavy chain within the hinge and Fc region. The classes differ in their biological properties, functional locations and ability to deal with different antigens, as depicted in the table. 
For example, IgE antibodies are responsible for an allergic response consisting of histamine release from mast cells, often a sole contributor to asthma (though other pathways exist, as do conditions with symptoms very similar to, yet not technically, asthma). The antibody's variable region binds to the allergen, for example house dust mite particles, while its Fc region (in the ε heavy chains) binds to the Fcε receptor on a mast cell, triggering its degranulation: the release of molecules stored in its granules. The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, express only the IgM isotype in a cell-surface-bound form. The B lymphocyte, in this ready-to-respond form, is known as a "naive B lymphocyte." The naive B lymphocyte expresses both surface IgM and IgD. The co-expression of both of these immunoglobulin isotypes renders the B cell ready to respond to antigen. B cell activation follows engagement of the cell-bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody-producing cell called a plasma cell. In this activated form, the B cell starts to produce antibody in a secreted form rather than a membrane-bound form. Some daughter cells of the activated B cells undergo isotype switching, a mechanism that causes the production of antibodies to change from IgM or IgD to the other antibody isotypes, IgE, IgA, or IgG, that have defined roles in the immune system. Light chain types In mammals there are two types of immunoglobulin light chain, which are called lambda (λ) and kappa (κ). However, there is no known functional difference between them, and both can occur with any of the five major types of heavy chains. Each antibody contains two identical light chains: both κ or both λ. Proportions of κ and λ types vary by species and can be used to detect abnormal proliferation of B cell clones. Other types of light chains, such as the iota (ι) chain, are found in other vertebrates like sharks (Chondrichthyes) and bony fishes (Teleostei). In non-mammalian animals In most placental mammals, the structure of antibodies is generally the same. Jawed fish appear to be the most primitive animals that are able to make antibodies similar to those of mammals, although many features of their adaptive immunity appeared somewhat earlier. Cartilaginous fish (such as sharks) produce heavy-chain-only antibodies (i.e., lacking light chains) which moreover feature longer chain pentamers (with five constant units per molecule). Camelids (such as camels, llamas, alpacas) are also notable for producing heavy-chain-only antibodies. Antibody–antigen interactions The antibody's paratope interacts with the antigen's epitope. An antigen usually contains different epitopes along its surface arranged discontinuously, and dominant epitopes on a given antigen are called determinants. Antibody and antigen interact by spatial complementarity (lock and key). The molecular forces involved in the Fab-epitope interaction are weak and non-specific – for example electrostatic forces, hydrogen bonds, hydrophobic interactions, and van der Waals forces. This means binding between antibody and antigen is reversible, and the antibody's affinity towards an antigen is relative rather than absolute. Relatively weak binding also means it is possible for an antibody to cross-react with different antigens of different relative affinities. 
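One way to make "relative rather than absolute" affinity concrete is the standard single-site binding relation, in which the fraction of antibody sites occupied equals [antigen] / ([antigen] + Kd), where Kd is the equilibrium dissociation constant (lower Kd means higher affinity). The Python sketch below applies this relation to one high-affinity and one weakly cross-reactive interaction; the concentrations and Kd values are illustrative assumptions, not measurements.

```python
def fraction_of_sites_occupied(antigen_nm: float, kd_nm: float) -> float:
    """Single-site equilibrium binding: occupancy = [Ag] / ([Ag] + Kd)."""
    return antigen_nm / (antigen_nm + kd_nm)

antigen_concentration = 10.0   # free antigen concentration in nM (illustrative)
kd_primary_target = 1.0        # high-affinity interaction: low Kd, in nM (illustrative)
kd_cross_reactive = 500.0      # weak cross-reactive interaction: high Kd, in nM (illustrative)

print(f"primary target:  {fraction_of_sites_occupied(antigen_concentration, kd_primary_target):.0%} of sites occupied")
print(f"cross-reactant:  {fraction_of_sites_occupied(antigen_concentration, kd_cross_reactive):.1%} of sites occupied")
```

At the same antigen concentration the high-affinity interaction occupies most of the binding sites while the cross-reactive one occupies only a few percent, and because the underlying forces are weak and non-covalent, both interactions remain reversible.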
Function The main categories of antibody action include the following:
Neutralisation, in which neutralizing antibodies block parts of the surface of a bacterial cell or virion to render its attack ineffective
Agglutination, in which antibodies "glue together" foreign cells into clumps that are attractive targets for phagocytosis
Precipitation, in which antibodies "glue together" serum-soluble antigens, forcing them to precipitate out of solution in clumps that are attractive targets for phagocytosis
Complement activation (fixation), in which antibodies that are latched onto a foreign cell encourage complement to attack it with a membrane attack complex, which leads to the following:
Lysis of the foreign cell
Encouragement of inflammation by chemotactically attracting inflammatory cells
More indirectly, an antibody can signal immune cells to present antibody fragments to T cells, or downregulate other immune cells to avoid autoimmunity. Activated B cells differentiate into either antibody-producing cells called plasma cells that secrete soluble antibody or memory cells that survive in the body for years afterward in order to allow the immune system to remember an antigen and respond faster upon future exposures. At the prenatal and neonatal stages of life, the presence of antibodies is provided by passive immunization from the mother. Early endogenous antibody production varies for different kinds of antibodies, and usually appears within the first years of life. Since antibodies exist freely in the bloodstream, they are said to be part of the humoral immune system. Circulating antibodies are produced by clonal B cells that specifically respond to only one antigen (an example is a virus capsid protein fragment). Antibodies contribute to immunity in three ways: They prevent pathogens from entering or damaging cells by binding to them; they stimulate removal of pathogens by macrophages and other cells by coating the pathogen; and they trigger destruction of pathogens by stimulating other immune responses such as the complement pathway. Antibodies will also trigger vasoactive amine degranulation to contribute to immunity against certain types of antigens (helminths, allergens). Activation of complement Antibodies that bind to surface antigens (for example, on bacteria) will attract the first component of the complement cascade with their Fc region and initiate activation of the "classical" complement system. This results in the killing of bacteria in two ways. First, the binding of the antibody and complement molecules marks the microbe for ingestion by phagocytes in a process called opsonization; these phagocytes are attracted by certain complement molecules generated in the complement cascade. Second, some complement system components form a membrane attack complex to assist antibodies to kill the bacterium directly (bacteriolysis). Activation of effector cells To combat pathogens that replicate outside cells, antibodies bind to pathogens to link them together, causing them to agglutinate. Since an antibody has at least two paratopes, it can bind more than one antigen by binding identical epitopes carried on the surfaces of these antigens. By coating the pathogen, antibodies stimulate effector functions against the pathogen in cells that recognize their Fc region. Those cells that recognize coated pathogens have Fc receptors, which, as the name suggests, interact with the Fc region of IgA, IgG, and IgE antibodies. 
The engagement of a particular antibody with the Fc receptor on a particular cell triggers an effector function of that cell: phagocytes will phagocytose, mast cells and neutrophils will degranulate, and natural killer cells will release cytokines and cytotoxic molecules, ultimately resulting in destruction of the invading microbe. The activation of natural killer cells by antibodies initiates a cytotoxic mechanism known as antibody-dependent cell-mediated cytotoxicity (ADCC) – this process may explain the efficacy of monoclonal antibodies used in biological therapies against cancer. The Fc receptors are isotype-specific, which gives greater flexibility to the immune system, invoking only the appropriate immune mechanisms for distinct pathogens. Natural antibodies Humans and higher primates also produce "natural antibodies" that are present in serum before viral infection. Natural antibodies have been defined as antibodies that are produced without any previous infection, vaccination, other foreign antigen exposure or passive immunization. These antibodies can activate the classical complement pathway, leading to lysis of enveloped virus particles long before the adaptive immune response is activated. Many natural antibodies are directed against the disaccharide galactose α(1,3)-galactose (α-Gal), which is found as a terminal sugar on glycosylated cell-surface proteins, and are generated in response to production of this sugar by bacteria contained in the human gut. Rejection of xenotransplanted organs is thought to be, in part, the result of natural antibodies circulating in the serum of the recipient binding to α-Gal antigens expressed on the donor tissue. Immunoglobulin diversity Virtually all microbes can trigger an antibody response. Successful recognition and eradication of many different types of microbes requires diversity among antibodies: their amino acid composition varies, allowing them to interact with many different antigens. It has been estimated that humans generate about 10 billion different antibodies, each capable of binding a distinct epitope of an antigen. Although a huge repertoire of different antibodies is generated in a single individual, the number of genes available to make these proteins is limited by the size of the human genome. Several complex genetic mechanisms have evolved that allow vertebrate B cells to generate a diverse pool of antibodies from a relatively small number of antibody genes. Domain variability The chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody—the chromosome region containing heavy chain genes (IGH@) is found on chromosome 14, and the loci containing lambda and kappa light chain genes (IGL@ and IGK@) are found on chromosomes 22 and 2 in humans. One of these domains is called the variable domain, which is present in each heavy and light chain of every antibody, but can differ in different antibodies generated from distinct B cells. Differences between the variable domains are located on three loops known as hypervariable regions (HV-1, HV-2 and HV-3) or complementarity-determining regions (CDR1, CDR2 and CDR3). CDRs are supported within the variable domains by conserved framework regions. The heavy chain locus contains about 65 different variable domain genes that all differ in their CDRs. Combining these genes with an array of genes for other domains of the antibody generates a large repertoire of antibodies with a high degree of variability; a rough calculation of this combinatorial effect is sketched below. 
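The following back-of-the-envelope sketch is not taken from the article; the functional gene-segment counts are approximate, commonly cited textbook figures (they vary between sources and between individuals) and serve only to show how a limited set of segments yields millions of combinations even before junctional diversity and somatic hypermutation are taken into account.

```python
# Rough combinatorial estimate of antibody diversity from gene-segment counts.
# All counts are approximate textbook figures used purely for illustration.

HEAVY_V, HEAVY_D, HEAVY_J = 40, 25, 6   # assumed functional heavy-chain segments
KAPPA_V, KAPPA_J = 40, 5                # assumed functional kappa light-chain segments
LAMBDA_V, LAMBDA_J = 30, 4              # assumed functional lambda light-chain segments

heavy_combinations = HEAVY_V * HEAVY_D * HEAVY_J
light_combinations = KAPPA_V * KAPPA_J + LAMBDA_V * LAMBDA_J

# Each B cell pairs one rearranged heavy chain with one rearranged light chain.
paired_combinations = heavy_combinations * light_combinations

print(f"Heavy-chain V-D-J combinations: {heavy_combinations:,}")   # 6,000
print(f"Light-chain V-J combinations:   {light_combinations:,}")   # 320
print(f"Heavy/light pairings:           {paired_combinations:,}")  # 1,920,000

# Imprecise joining, N/P nucleotide addition and somatic hypermutation multiply
# this figure by several further orders of magnitude, which is how estimates on
# the order of 10^10 distinct antibodies arise.
```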
Combining segments in this way is called V(D)J recombination, discussed below. V(D)J recombination Somatic recombination of immunoglobulins, also known as V(D)J recombination, involves the generation of a unique immunoglobulin variable region. The variable region of each immunoglobulin heavy or light chain is encoded in several pieces—known as gene segments (subgenes). These segments are called variable (V), diversity (D) and joining (J) segments. V, D and J segments are found in Ig heavy chains, but only V and J segments are found in Ig light chains. Multiple copies of the V, D and J gene segments exist, and are tandemly arranged in the genomes of mammals. In the bone marrow, each developing B cell will assemble an immunoglobulin variable region by randomly selecting and combining one V, one D and one J gene segment (or one V and one J segment in the light chain). As there are multiple copies of each type of gene segment, and different combinations of gene segments can be used to generate each immunoglobulin variable region, this process generates a huge number of antibodies, each with different paratopes, and thus different antigen specificities. The rearrangement of several subgenes (e.g., the V2 family) for lambda light chain immunoglobulin is coupled with the activation of microRNA miR-650, which further influences the biology of B cells. RAG proteins play an important role in V(D)J recombination by cutting the DNA at particular regions. Without the presence of these proteins, V(D)J recombination would not occur. After a B cell produces a functional immunoglobulin gene during V(D)J recombination, it cannot express any other variable region (a process known as allelic exclusion); thus each B cell can produce antibodies containing only one kind of variable chain. Somatic hypermutation and affinity maturation Following activation with antigen, B cells begin to proliferate rapidly. In these rapidly dividing cells, the genes encoding the variable domains of the heavy and light chains undergo a high rate of point mutation, by a process called somatic hypermutation (SHM). SHM results in approximately one nucleotide change per variable gene, per cell division. As a consequence, any daughter B cells will acquire slight amino acid differences in the variable domains of their antibody chains. This serves to increase the diversity of the antibody pool and impacts the antibody's antigen-binding affinity. Some point mutations will result in the production of antibodies that have a weaker interaction (low affinity) with their antigen than the original antibody, and some mutations will generate antibodies with a stronger interaction (high affinity). B cells that express high affinity antibodies on their surface will receive a strong survival signal during interactions with other cells, whereas those with low affinity antibodies will not, and will die by apoptosis. Thus, B cells expressing antibodies with a higher affinity for the antigen will outcompete those with weaker affinities for function and survival, allowing the average affinity of antibodies to increase over time. The process of generating antibodies with increased binding affinities is called affinity maturation. Affinity maturation occurs in mature B cells after V(D)J recombination, and is dependent on help from helper T cells. Class switching Isotype or class switching is a biological process occurring after activation of the B cell, which allows the cell to produce different classes of antibody (IgA, IgE, or IgG). 
The different classes of antibody, and thus effector functions, are defined by the constant (C) regions of the immunoglobulin heavy chain. Initially, naive B cells express only cell-surface IgM and IgD with identical antigen-binding regions. Each isotype is adapted for a distinct function; therefore, after activation, an antibody with an IgG, IgA, or IgE effector function might be required to effectively eliminate an antigen. Class switching allows different daughter cells from the same activated B cell to produce antibodies of different isotypes. Only the constant region of the antibody heavy chain changes during class switching; the variable regions, and therefore antigen specificity, remain unchanged. Thus the progeny of a single B cell can produce antibodies, all specific for the same antigen, but with the ability to produce the effector function appropriate for each antigenic challenge. Class switching is triggered by cytokines; the isotype generated depends on which cytokines are present in the B cell environment. Class switching occurs in the heavy chain gene locus by a mechanism called class switch recombination (CSR). This mechanism relies on conserved nucleotide motifs, called switch (S) regions, found in DNA upstream of each constant region gene (except in the δ-chain). The DNA strand is broken by the activity of a series of enzymes at two selected S-regions. The variable domain exon is rejoined through a process called non-homologous end joining (NHEJ) to the desired constant region (γ, α or ε). This process results in an immunoglobulin gene that encodes an antibody of a different isotype. Specificity designations An antibody can be called monospecific if it has specificity for a single antigen or epitope, or bispecific if it has affinity for two different antigens or two different epitopes on the same antigen. A group of antibodies can be called polyvalent (or unspecific) if they have affinity for various antigens or microorganisms. Intravenous immunoglobulin, if not otherwise noted, consists of a variety of different IgG (polyclonal IgG). In contrast, monoclonal antibodies are identical antibodies produced by a single B cell. Asymmetrical antibodies Heterodimeric antibodies, which are also asymmetrical antibodies, allow for greater flexibility and new formats for attaching a variety of drugs to the antibody arms. One of the general formats for a heterodimeric antibody is the "knobs-into-holes" format. This format is specific to the heavy chain part of the constant region in antibodies. The "knobs" part is engineered by replacing a small amino acid with a larger one. It fits into the "hole", which is engineered by replacing a large amino acid with a smaller one. What connects the "knobs" to the "holes" are the disulfide bonds between each chain. The "knobs-into-holes" shape facilitates antibody-dependent cell-mediated cytotoxicity. In a single-chain variable fragment (scFv), the variable domains of the heavy and light chains are connected via a short linker peptide. The linker is rich in glycine, which gives it more flexibility, and serine/threonine, which gives it specificity. Two different scFv fragments can be connected together, via a hinge region, to the constant domain of the heavy chain or the constant domain of the light chain. This gives the antibody bispecificity, allowing it to bind two different antigens. The "knobs-into-holes" format enhances heterodimer formation but does not suppress homodimer formation. 
To further improve the function of heterodimeric antibodies, many scientists are looking towards artificial constructs. Artificial antibodies are largely diverse protein motifs that use the functional strategy of the antibody molecule, but are not limited by the loop and framework structural constraints of the natural antibody. Being able to control the combinational design of the sequence and three-dimensional space could transcend the natural design and allow for the attachment of different combinations of drugs to the arms. Heterodimeric antibodies have a greater range in the shapes they can take, and the drugs attached to the arms do not have to be the same on each arm, allowing different combinations of drugs to be used in cancer treatment. Pharmaceutical companies are able to produce highly functional bispecific, and even multispecific, antibodies. The degree to which they can function is impressive given that such a change of shape from the natural form might be expected to lead to decreased functionality. History The first use of the term "antibody" occurred in a text by Paul Ehrlich. The term Antikörper (the German word for antibody) appears in the conclusion of his article "Experimental Studies on Immunity", published in October 1891, which states that, "if two substances give rise to two different Antikörper, then they themselves must be different". However, the term was not accepted immediately and several other terms for antibody were proposed; these included Immunkörper, Amboceptor, Zwischenkörper, substance sensibilisatrice, copula, Desmon, philocytase, fixateur, and Immunisin. The word antibody is formally analogous to the word antitoxin and a similar concept to Immunkörper (immune body in English). As such, the original construction of the word contains a logical flaw; the antitoxin is something directed against a toxin, while the antibody is a body directed against something. The study of antibodies began in 1890 when Emil von Behring and Kitasato Shibasaburō described antibody activity against diphtheria and tetanus toxins. Von Behring and Kitasato put forward the theory of humoral immunity, proposing that a mediator in serum could react with a foreign antigen. Their idea prompted Paul Ehrlich to propose the side-chain theory for antibody and antigen interaction in 1897, when he hypothesized that receptors (described as "side-chains") on the surface of cells could bind specifically to toxins – in a "lock-and-key" interaction – and that this binding reaction is the trigger for the production of antibodies. Other researchers believed that antibodies existed freely in the blood and, in 1904, Almroth Wright suggested that soluble antibodies coated bacteria to label them for phagocytosis and killing; a process that he named opsonization. In the 1920s, Michael Heidelberger and Oswald Avery observed that antigens could be precipitated by antibodies and went on to show that antibodies are made of protein. The biochemical properties of antigen-antibody-binding interactions were examined in more detail in the late 1930s by John Marrack. The next major advance was in the 1940s, when Linus Pauling confirmed the lock-and-key theory proposed by Ehrlich by showing that the interactions between antibodies and antigens depend more on their shape than their chemical composition. In 1948, Astrid Fagraeus discovered that B cells, in the form of plasma cells, were responsible for generating antibodies. Further work concentrated on characterizing the structures of the antibody proteins. 
A major advance in these structural studies was the discovery in the early 1960s by Gerald Edelman and Joseph Gally of the antibody light chain, and their realization that this protein is the same as the Bence-Jones protein described in 1845 by Henry Bence Jones. Edelman went on to discover that antibodies are composed of disulfide bond-linked heavy and light chains. Around the same time, antibody-binding (Fab) and antibody tail (Fc) regions of IgG were characterized by Rodney Porter. Together, these scientists deduced the structure and complete amino acid sequence of IgG, a feat for which they were jointly awarded the 1972 Nobel Prize in Physiology or Medicine. The Fv fragment was prepared and characterized by David Givol. While most of these early studies focused on IgM and IgG, other immunoglobulin isotypes were identified in the 1960s: Thomas Tomasi discovered secretory antibody (IgA); David S. Rowe and John L. Fahey discovered IgD; and Kimishige Ishizaka and Teruko Ishizaka discovered IgE and showed it was a class of antibodies involved in allergic reactions. In a landmark series of experiments beginning in 1976, Susumu Tonegawa showed that genetic material can rearrange itself to form the vast array of available antibodies. Medical applications Disease diagnosis Detection of particular antibodies is a very common form of medical diagnostics, and applications such as serology depend on these methods. For example, in biochemical assays for disease diagnosis, a titer of antibodies directed against Epstein-Barr virus or Lyme disease is estimated from the blood. If those antibodies are not present, either the person is not infected or the infection occurred a very long time ago, and the B cells generating these specific antibodies have naturally decayed. In clinical immunology, levels of individual classes of immunoglobulins are measured by nephelometry (or turbidimetry) to characterize the antibody profile of patient. Elevations in different classes of immunoglobulins are sometimes useful in determining the cause of liver damage in patients for whom the diagnosis is unclear. For example, elevated IgA indicates alcoholic cirrhosis, elevated IgM indicates viral hepatitis and primary biliary cirrhosis, while IgG is elevated in viral hepatitis, autoimmune hepatitis and cirrhosis. Autoimmune disorders can often be traced to antibodies that bind the body's own epitopes; many can be detected through blood tests. Antibodies directed against red blood cell surface antigens in immune mediated hemolytic anemia are detected with the Coombs test. The Coombs test is also used for antibody screening in blood transfusion preparation and also for antibody screening in antenatal women. Practically, several immunodiagnostic methods based on detection of complex antigen-antibody are used to diagnose infectious diseases, for example ELISA, immunofluorescence, Western blot, immunodiffusion, immunoelectrophoresis, and magnetic immunoassay. Antibodies raised against human chorionic gonadotropin are used in over the counter pregnancy tests. New dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies, which allows for positron emission tomography (PET) imaging of cancer. Disease therapy Targeted monoclonal antibody therapy is employed to treat diseases such as rheumatoid arthritis, multiple sclerosis, psoriasis, and many forms of cancer including non-Hodgkin's lymphoma, colorectal cancer, head and neck cancer and breast cancer. 
Some immune deficiencies, such as X-linked agammaglobulinemia and hypogammaglobulinemia, result in partial or complete lack of antibodies. These diseases are often treated by inducing a short-term form of immunity called passive immunity. Passive immunity is achieved through the transfer of ready-made antibodies, in the form of human or animal serum, pooled immunoglobulin or monoclonal antibodies, into the affected individual. Prenatal therapy Rh factor, also known as Rh D antigen, is an antigen found on red blood cells; individuals that are Rh-positive (Rh+) have this antigen on their red blood cells and individuals that are Rh-negative (Rh–) do not. During normal childbirth, delivery trauma or complications during pregnancy, blood from a fetus can enter the mother's system. In the case of an Rh-incompatible mother and child, consequential blood mixing may sensitize an Rh- mother to the Rh antigen on the blood cells of the Rh+ child, putting the remainder of the pregnancy, and any subsequent pregnancies, at risk for hemolytic disease of the newborn. Rho(D) immune globulin antibodies are specific for human RhD antigen. Anti-RhD antibodies are administered as part of a prenatal treatment regimen to prevent sensitization that may occur when an Rh-negative mother has an Rh-positive fetus. Treatment of the mother with anti-RhD antibodies prior to and immediately after trauma and delivery destroys fetal Rh-positive cells, and with them the Rh antigen, in the mother's system before the antigen can stimulate maternal B cells to "remember" Rh antigen by generating memory B cells. Therefore, her humoral immune system will not make anti-Rh antibodies, and will not attack the Rh antigens of the current or subsequent babies. Rho(D) immune globulin treatment prevents sensitization that can lead to Rh disease, but does not prevent or treat the underlying disease itself. Research applications Specific antibodies are produced by injecting an antigen into a mammal, such as a mouse, rat, rabbit, goat, sheep, or horse (the last for large quantities of antibody). Blood isolated from these animals contains polyclonal antibodies—multiple antibodies that bind to the same antigen—in the serum, which can now be called antiserum. Antigens are also injected into chickens for generation of polyclonal antibodies in egg yolk. To obtain antibody that is specific for a single epitope of an antigen, antibody-secreting lymphocytes are isolated from the animal and immortalized by fusing them with a cancer cell line. The fused cells are called hybridomas, and will continually grow and secrete antibody in culture. Single hybridoma cells are isolated by dilution cloning to generate cell clones that all produce the same antibody; these antibodies are called monoclonal antibodies. Polyclonal and monoclonal antibodies are often purified using Protein A/G or antigen-affinity chromatography. In research, purified antibodies are used in many applications. Antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine. Research antibodies are most commonly used to identify and locate intracellular and extracellular proteins. Antibodies are used in flow cytometry to differentiate cell types by the proteins they express; different types of cell express different combinations of cluster of differentiation molecules on their surface, and produce different intracellular and secretable proteins. 
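As a purely illustrative aside (not drawn from the article), the sketch below shows in schematic form how antibody-staining intensities can be turned into crude cell-type labels in flow cytometry. The marker names, the threshold and the example events are hypothetical; real analyses involve spectral compensation and far more careful gating.

```python
# Toy gating example: label events from assumed anti-CD3 and anti-CD19
# staining intensities. The threshold and events are hypothetical.

from typing import Dict

GATE_THRESHOLD = 1_000  # arbitrary fluorescence-intensity cut-off (assumed)

def classify_event(event: Dict[str, float]) -> str:
    """Assign a crude lineage label from two antibody staining intensities."""
    cd3_positive = event["anti_CD3"] > GATE_THRESHOLD
    cd19_positive = event["anti_CD19"] > GATE_THRESHOLD
    if cd3_positive and not cd19_positive:
        return "T cell (CD3+ CD19-)"
    if cd19_positive and not cd3_positive:
        return "B cell (CD19+ CD3-)"
    if not cd3_positive and not cd19_positive:
        return "CD3- CD19- (other lineage)"
    return "double positive (possible doublet or staining artifact)"

events = [
    {"anti_CD3": 8_500, "anti_CD19": 120},
    {"anti_CD3": 90, "anti_CD19": 6_200},
    {"anti_CD3": 150, "anti_CD19": 300},
]
for event in events:
    print(classify_event(event))
```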
They are also used in immunoprecipitation to separate proteins and anything bound to them (co-immunoprecipitation) from other molecules in a cell lysate, in Western blot analyses to identify proteins separated by electrophoresis, and in immunohistochemistry or immunofluorescence to examine protein expression in tissue sections or to locate proteins within cells with the assistance of a microscope. Proteins can also be detected and quantified with antibodies, using ELISA and ELISpot techniques. Antibodies used in research are some of the most powerful, yet most problematic, reagents, with a tremendous number of factors that must be controlled in any experiment, including cross-reactivity (the antibody recognizing multiple epitopes) and affinity, which can vary widely depending on experimental conditions such as pH, solvent and the state of the tissue. Multiple attempts have been made to improve both the way that researchers validate antibodies and the ways in which they report on antibodies. Researchers using antibodies in their work need to record them correctly in order to allow their research to be reproducible (and therefore tested, and qualified by other researchers). Less than half of research antibodies referenced in academic papers can be easily identified. Papers published in F1000 in 2014 and 2015 provide researchers with a guide for reporting research antibody use. The RRID paper is co-published in four journals that implemented the RRID standard for research resource citation, which draws data from antibodyregistry.org as the source of antibody identifiers (see also the group at Force11). Regulations Production and testing Traditionally, most antibodies are produced by hybridoma cell lines through immortalization of antibody-producing cells by chemically induced fusion with myeloma cells. In some cases, additional fusions with other lines have created "triomas" and "quadromas". The manufacturing process should be appropriately described and validated. Validation studies should at least include: the demonstration that the process is able to produce antibody of good quality (the process should be validated); the efficiency of the antibody purification (all impurities and viruses must be eliminated); the characterization of the purified antibody (physicochemical characterization, immunological properties, biological activities, contaminants, ...); and determination of virus clearance. Before clinical trials Product safety testing: sterility (bacteria and fungi), in vitro and in vivo testing for adventitious viruses, murine retrovirus testing, and so on. Product safety data are needed before the initiation of feasibility trials in serious or immediately life-threatening conditions; they serve to evaluate the dangerous potential of the product. Feasibility testing: these are pilot studies whose objectives include, among others, early characterization of safety and initial proof of concept in a small specific patient population (in vitro or in vivo testing). Preclinical studies Testing cross-reactivity of the antibody: to highlight unwanted interactions (toxicity) of antibodies with previously characterized tissues. This study can be performed in vitro (reactivity of the antibody or immunoconjugate should be determined with quick-frozen adult tissues) or in vivo (with appropriate animal models). 
Preclinical pharmacology and toxicity testing: preclinical safety testing of an antibody is designed to identify possible toxicity in humans, to estimate the likelihood and severity of potential adverse events in humans, and to identify a safe starting dose and dose escalation, when possible. Animal toxicity studies: acute toxicity testing, repeat-dose toxicity testing, long-term toxicity testing. Pharmacokinetics and pharmacodynamics testing: used to determine clinical dosages and antibody activities, and to evaluate potential clinical effects. Structure prediction and computational antibody design The importance of antibodies in health care and the biotechnology industry demands knowledge of their structures at high resolution. This information is used for protein engineering, modifying the antigen binding affinity, and identifying an epitope of a given antibody. X-ray crystallography is one commonly used method for determining antibody structures. However, crystallizing an antibody is often laborious and time-consuming. Computational approaches provide a cheaper and faster alternative to crystallography, but their results are more equivocal, since they do not produce empirical structures. Online web servers such as Web Antibody Modeling (WAM) and Prediction of Immunoglobulin Structure (PIGS) enable computational modeling of antibody variable regions. Rosetta Antibody is an antibody Fv region structure prediction server, which incorporates sophisticated techniques to minimize CDR loops and optimize the relative orientation of the light and heavy chains, as well as homology models that predict successful docking of antibodies with their unique antigen. However, describing an antibody's binding site using only a single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities. The ability to describe the antibody through binding affinity to the antigen is supplemented by information on antibody structure and amino acid sequences for the purpose of patent claims. Several methods have been presented for computational design of antibodies based on structural bioinformatics studies of antibody CDRs. A variety of methods are used to sequence an antibody, including Edman degradation and cDNA-based approaches; however, one of the most common modern approaches for peptide/protein identification is liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). High-volume antibody sequencing methods require computational approaches for the data analysis, including de novo sequencing directly from tandem mass spectra and database search methods that use existing protein sequence databases. Many versions of shotgun protein sequencing are able to increase the coverage by utilizing CID/HCD/ETD fragmentation methods and other techniques, and they have achieved substantial progress in the attempt to fully sequence proteins, especially antibodies. Other methods have assumed the existence of similar proteins, a known genome sequence, or combined top-down and bottom-up approaches. Current technologies have the ability to assemble protein sequences with high accuracy by integrating de novo sequencing peptides, intensity, and positional confidence scores from database and homology searches. 
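To make the database-search idea concrete, the sketch below shows the core arithmetic only; it is an added illustration, not a description of any particular software. The residue masses are standard monoisotopic values, while the candidate peptide, the simulated observed mass and the tolerance are assumptions chosen for the example.

```python
# Minimal sketch of peptide-mass matching as used in LC-MS/MS database search:
# compute a candidate peptide's neutral monoisotopic mass from residue masses
# and test it against an observed mass within a ppm tolerance.

MONOISOTOPIC_RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406,
    "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
    "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
    "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.01056  # mass of H2O accounting for the peptide termini

def peptide_monoisotopic_mass(sequence: str) -> float:
    """Neutral monoisotopic mass of an unmodified peptide."""
    return sum(MONOISOTOPIC_RESIDUE_MASS[aa] for aa in sequence) + WATER

candidate = "GFYPSDIAVEWESNGQPENNYK"       # example peptide chosen arbitrarily
theoretical = peptide_monoisotopic_mass(candidate)
observed = theoretical + 0.004             # simulated instrument reading (assumed)
tolerance_ppm = 10.0

error_ppm = abs(observed - theoretical) / theoretical * 1e6
print(f"theoretical mass: {theoretical:.4f} Da, "
      f"error: {error_ppm:.2f} ppm, "
      f"match within {tolerance_ppm} ppm: {error_ppm <= tolerance_ppm}")
```

De novo approaches work in the opposite direction, inferring the sequence from the fragment-mass ladder itself rather than from a database of candidate sequences.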
Antibody mimetic Antibody mimetics are organic compounds that, like antibodies, can specifically bind antigens. They consist of artificial peptides or proteins, or aptamer-based nucleic acid molecules, with a molar mass of about 3 to 20 kDa. Antibody fragments, such as Fab fragments and nanobodies, are not considered antibody mimetics. Common advantages over antibodies are better solubility, tissue penetration, stability towards heat and enzymes, and comparatively low production costs. Antibody mimetics have been developed and commercialized as research, diagnostic and therapeutic agents. Optimer ligands Optimer ligands are a novel class of antibody mimetics: nucleic acid-based affinity ligands developed in vitro to be specific and sensitive, and they are being applied across therapeutics, drug delivery, bioprocessing, diagnostics, and basic research. Binding antibody unit BAU (binding antibody unit, often expressed as BAU/mL) is a measurement unit defined by the WHO for the comparison of assays detecting the same class of immunoglobulins with the same specificity. See also Affimer Anti-mitochondrial antibodies Anti-nuclear antibodies Antibody mimetic Aptamer Colostrum ELISA Humoral immunity Immunology Immunosuppressive drug Intravenous immunoglobulin (IVIg) Magnetic immunoassay Microantibody Monoclonal antibody Neutralizing antibody Optimer Ligand Secondary antibodies Single-domain antibody Slope spectroscopy Synthetic antibody Western blot normalization External links Mike's Immunoglobulin Structure/Function Page at University of Cambridge Antibodies as the PDB molecule of the month Discussion of the structure of antibodies at RCSB Protein Data Bank A hundred years of antibody therapy History and applications of antibodies in the treatment of disease at University of Oxford How Lymphocytes Produce Antibody from Cells Alive!
Antibody
Albrecht Dürer (; ; ; 21 May 1471 – 6 April 1528), sometimes spelled in English as Durer (without an umlaut) or Duerer, was a German painter, printmaker, and theorist of the German Renaissance. Born in Nuremberg, Dürer established his reputation and influence across Europe in his twenties due to his high-quality woodcut prints. He was in contact with the major Italian artists of his time, including Raphael, Giovanni Bellini, and Leonardo da Vinci, and from 1512 was patronized by Emperor Maximilian I. Dürer's vast body of work includes engravings, his preferred technique in his later prints, altarpieces, portraits and self-portraits, watercolours and books. The woodcuts series are more Gothic than the rest of his work. His well-known engravings include the three Meisterstiche (master prints) Knight, Death and the Devil (1513), Saint Jerome in his Study (1514), and Melencolia I (1514). His watercolours mark him as one of the first European landscape artists, while his woodcuts revolutionised the potential of that medium. Dürer's introduction of classical motifs into Northern art, through his knowledge of Italian artists and German humanists, has secured his reputation as one of the most important figures of the Northern Renaissance. This is reinforced by his theoretical treatises, which involve principles of mathematics, perspective, and ideal proportions. Biography Early life (1471–1490) Dürer was born on 21 May 1471, the third child and second son of Albrecht Dürer the Elder and Barbara Holper, who married in 1467 and had eighteen children together. Albrecht Dürer the Elder (originally Albrecht Ajtósi), was a successful goldsmith who by 1455 had moved to Nuremberg from Ajtós, near Gyula in Hungary. He married Holper, his master's daughter, when he himself qualified as a master. One of Albrecht's brothers, Hans Dürer, was also a painter and trained under him. Another of Albrecht's brothers, Endres Dürer, took over their father's business and was a master goldsmith. The German name "Dürer" is a translation from the Hungarian, "Ajtósi". Initially, it was "Türer", meaning doormaker, which is "ajtós" in Hungarian (from "ajtó", meaning door). A door is featured in the coat-of-arms the family acquired. Albrecht Dürer the Younger later changed "Türer", his father's diction of the family's surname, to "Dürer", to adapt to the local Nuremberg dialect. Dürer's godfather Anton Koberger left goldsmithing to become a printer and publisher in the year of Dürer's birth. He became the most successful publisher in Germany, eventually owning twenty-four printing-presses and a number of offices in Germany and abroad. Koberger's most famous publication was the Nuremberg Chronicle, published in 1493 in German and Latin editions. It contained an unprecedented 1,809 woodcut illustrations (albeit with many repeated uses of the same block) by the Wolgemut workshop. Dürer may have worked on some of these, as the work on the project began while he was with Wolgemut. Because Dürer left autobiographical writings and was widely known by his mid-twenties, his life is well documented in several sources. After a few years of school, Dürer learned the basics of goldsmithing and drawing from his father. Though his father wanted him to continue his training as a goldsmith, he showed such a precocious talent in drawing that he started as an apprentice to Michael Wolgemut at the age of fifteen in 1486. 
A self-portrait, a drawing in silverpoint, is dated 1484 (Albertina, Vienna) "when I was a child", as his later inscription says. The drawing is one of the earliest surviving children's drawings of any kind, and, as Dürer's Opus One, has helped define his oeuvre as deriving from, and always linked to, himself. Wolgemut was the leading artist in Nuremberg at the time, with a large workshop producing a variety of works of art, in particular woodcuts for books. Nuremberg was then an important and prosperous city, a centre for publishing and many luxury trades. It had strong links with Italy, especially Venice, a relatively short distance across the Alps. Wanderjahre and marriage (1490–1494) After completing his apprenticeship, Dürer followed the common German custom of taking Wanderjahre—in effect gap years—in which the apprentice learned skills from artists in other areas; Dürer was to spend about four years away. He left in 1490, possibly to work under Martin Schongauer, the leading engraver of Northern Europe, but who died shortly before Dürer's arrival at Colmar in 1492. It is unclear where Dürer travelled in the intervening period, though it is likely that he went to Frankfurt and the Netherlands. In Colmar, Dürer was welcomed by Schongauer's brothers, the goldsmiths Caspar and Paul and the painter Ludwig. In 1493 Dürer went to Strasbourg, where he would have experienced the sculpture of Nikolaus Gerhaert. Dürer's first painted self-portrait (now in the Louvre) was painted at this time, probably to be sent back to his fiancée in Nuremberg. In early 1492 Dürer travelled to Basel to stay with another brother of Martin Schongauer, the goldsmith Georg. Very soon after his return to Nuremberg, on 7 July 1494, at the age of 23, Dürer was married to Agnes Frey following an arrangement made during his absence. Agnes was the daughter of a prominent brass worker (and amateur harpist) in the city. However, no children resulted from the marriage, and with Albrecht the Dürer name died out. The marriage between Agnes and Albrecht was not a generally happy one, as indicated by the letters of Dürer in which he quipped to Willibald Pirckheimer in an extremely rough tone about his wife. He called her an "old crow" and made other vulgar remarks. Pirckheimer also made no secret of his antipathy towards Agnes, describing her as a miserly shrew with a bitter tongue, who helped cause Dürer's death at a young age. One author speculates that Albrecht was bisexual, if not homosexual, due to several of his works containing themes of homosexual desire, as well as the intimate nature of his correspondence with certain very close male friends. First journey to Italy (1494–1495) Within three months of his marriage, Dürer left for Italy, alone, perhaps stimulated by an outbreak of plague in Nuremberg. He made watercolour sketches as he traveled over the Alps. Some have survived and others may be deduced from accurate landscapes of real places in his later work, for example his engraving Nemesis. In Italy, he went to Venice to study its more advanced artistic world. Through Wolgemut's tutelage, Dürer had learned how to make prints in drypoint and design woodcuts in the German style, based on the works of Schongauer and the Housebook Master. He also would have had access to some Italian works in Germany, but the two visits he made to Italy had an enormous influence on him. He wrote that Giovanni Bellini was the oldest and still the best of the artists in Venice. 
His drawings and engravings show the influence of others, notably Antonio Pollaiuolo, with his interest in the proportions of the body; Lorenzo di Credi; and Andrea Mantegna, whose work he produced copies of while training. Dürer probably also visited Padua and Mantua on this trip. Return to Nuremberg (1495–1505) On his return to Nuremberg in 1495, Dürer opened his own workshop (being married was a requirement for this). Over the next five years, his style increasingly integrated Italian influences into underlying Northern forms. Arguably his best works in the first years of the workshop were his woodcut prints, mostly religious, but including secular scenes such as The Men's Bath House (ca. 1496). These were larger and more finely cut than the great majority of German woodcuts hitherto, and far more complex and balanced in composition. It is now thought unlikely that Dürer cut any of the woodblocks himself; this task would have been performed by a specialist craftsman. However, his training in Wolgemut's studio, which made many carved and painted altarpieces and both designed and cut woodblocks for woodcut, evidently gave him great understanding of what the technique could be made to produce, and how to work with block cutters. Dürer either drew his design directly onto the woodblock itself, or glued a paper drawing to the block. Either way, his drawings were destroyed during the cutting of the block. His series of sixteen designs for the Apocalypse is dated 1498, as is his engraving of St. Michael Fighting the Dragon. He made the first seven scenes of the Great Passion in the same year, and a little later, a series of eleven on the Holy Family and saints. The Seven Sorrows Polyptych, commissioned by Frederick III of Saxony in 1496, was executed by Dürer and his assistants c. 1500. In 1502, Dürer's father died. Around 1503–1505 Dürer produced the first 17 of a set illustrating the Life of the Virgin, which he did not finish for some years. Neither these nor the Great Passion were published as sets until several years later, but prints were sold individually in considerable numbers. During the same period Dürer trained himself in the difficult art of using the burin to make engravings. It is possible he had begun learning this skill during his early training with his father, as it was also an essential skill of the goldsmith. In 1496 he executed the Prodigal Son, which the Italian Renaissance art historian Giorgio Vasari singled out for praise some decades later, noting its Germanic quality. He was soon producing some spectacular and original images, notably Nemesis (1502), The Sea Monster (1498), and Saint Eustace (c. 1501), with a highly detailed landscape background and animals. His landscapes of this period, such as Pond in the Woods and Willow Mill, are quite different from his earlier watercolours. There is a much greater emphasis on capturing atmosphere, rather than depicting topography. He made a number of Madonnas, single religious figures, and small scenes with comic peasant figures. Prints are highly portable and these works made Dürer famous throughout the main artistic centres of Europe within a very few years. The Venetian artist Jacopo de' Barbari, whom Dürer had met in Venice, visited Nuremberg in 1500, and Dürer said that he learned much about the new developments in perspective, anatomy, and proportion from him. De' Barbari was unwilling to explain everything he knew, so Dürer began his own studies, which would become a lifelong preoccupation. 
A series of extant drawings show Dürer's experiments in human proportion, leading to the famous engraving of Adam and Eve (1504), which shows his subtlety while using the burin in the texturing of flesh surfaces. This is the only existing engraving signed with his full name. Dürer created large numbers of preparatory drawings, especially for his paintings and engravings, and many survive, most famously the Betende Hände (Praying Hands) from circa 1508, a study for an apostle in the Heller altarpiece. He continued to make images in watercolour and bodycolour (usually combined), including a number of still lifes of meadow sections or animals, including his Young Hare (1502) and the Great Piece of Turf (1503). Second journey to Italy (1505–1507) In Italy, he returned to painting, at first producing a series of works executed in tempera on linen. These include portraits and altarpieces, notably, the Paumgartner altarpiece and the Adoration of the Magi. In early 1506, he returned to Venice and stayed there until the spring of 1507. By this time Dürer's engravings had attained great popularity and were being copied. In Venice he was given a valuable commission from the emigrant German community for the church of San Bartolomeo. This was the altar-piece known as the Adoration of the Virgin or the Feast of Rose Garlands. It includes portraits of members of Venice's German community, but shows a strong Italian influence. It was later acquired by the Emperor Rudolf II and taken to Prague. Nuremberg and the masterworks (1507–1520) Despite the regard in which he was held by the Venetians, Dürer returned to Nuremberg by mid-1507, remaining in Germany until 1520. His reputation had spread throughout Europe and he was on friendly terms and in communication with most of the major artists including Raphael. Between 1507 and 1511 Dürer worked on some of his most celebrated paintings: Adam and Eve (1507), Martyrdom of the Ten Thousand (1508, for Frederick of Saxony), Virgin with the Iris (1508), the altarpiece Assumption of the Virgin (1509, for Jacob Heller of Frankfurt), and Adoration of the Trinity (1511, for Matthaeus Landauer). During this period he also completed two woodcut series, the Great Passion and the Life of the Virgin, both published in 1511 together with a second edition of the Apocalypse series. The post-Venetian woodcuts show Dürer's development of chiaroscuro modelling effects, creating a mid-tone throughout the print to which the highlights and shadows can be contrasted. Other works from this period include the thirty-seven Little Passion woodcuts, first published in 1511, and a set of fifteen small engravings on the same theme in 1512. Complaining that painting did not make enough money to justify the time spent when compared to his prints, he produced no paintings from 1513 to 1516. In 1513 and 1514 Dürer created his three most famous engravings: Knight, Death and the Devil (1513, probably based on Erasmus's Handbook of a Christian Knight), St. Jerome in His Study, and the much-debated Melencolia I (both 1514, the year Dürer's mother died). Further outstanding pen and ink drawings of Dürer's period of art work of 1513 were drafts for his friend Pirckheimer. These drafts were later used to design Lusterweibchen chandeliers, combining an antler with a wooden sculpture. In 1515, he created his woodcut of a Rhinoceros which had arrived in Lisbon from a written description and sketch by another artist, without ever seeing the animal himself. 
Depicting the Indian rhinoceros, the image has such force that it remains one of his best-known works and was still used in some German school science textbooks as late as the last century. In the years leading to 1520 he produced a wide range of works, including the woodblocks for the first western printed star charts in 1515 and portraits in tempera on linen in 1516. His only experiments with etching came in this period: he produced five etchings between 1515 and 1516 and a sixth in 1518, a technique he may have abandoned as unsuited to his aesthetic of methodical, classical form. Patronage of Maximilian I From 1512, Maximilian I became Dürer's major patron. He commissioned The Triumphal Arch, a vast work printed from 192 separate blocks, the symbolism of which is partly informed by Pirckheimer's translation of Horapollo's Hieroglyphica. The design program and explanations were devised by Johannes Stabius, the architectural design by the master builder and court-painter Jörg Kölderer, and the woodcutting itself by Hieronymus Andreae, with Dürer as designer-in-chief. The Arch was followed by The Triumphal Procession, the program of which was worked out in 1512 by Marx Treitz-Saurwein and includes woodcuts by Albrecht Altdorfer and Hans Springinklee, as well as Dürer. Dürer worked with pen on the marginal images for an edition of the Emperor's printed Prayer-Book; these were quite unknown until facsimiles were published in 1808 as part of the first book published in lithography. Dürer's work on the book was halted for an unknown reason, and the decoration was continued by artists including Lucas Cranach the Elder and Hans Baldung. Dürer also made several portraits of the Emperor, including one shortly before Maximilian's death in 1519. Maximilian was a very cash-strapped prince who sometimes failed to pay, yet turned out to be Dürer's most important patron. In his court, artists and learned men were respected, which was not common at that time (later, Dürer commented that in Germany, as a non-noble, he was treated as a parasite). Pirckheimer (whom Dürer had met in 1495, before entering the service of Maximilian) was also an important personage in the court and a great cultural patron, who had a strong influence on Dürer as his tutor in classical knowledge and humanistic critical methodology, as well as a collaborator. In Maximilian's court, Dürer also collaborated with a great number of other brilliant artists and scholars of the time who became his friends, such as Johannes Stabius, Konrad Peutinger, Conrad Celtes, and Hans Tscherte (an imperial architect). Dürer took strong pride in his ability, regarding himself as a prince of his profession. One day, the emperor, trying to show Dürer an idea, attempted to sketch with the charcoal himself, but kept breaking it. Dürer took the charcoal from Maximilian's hand, finished the drawing and told him: "This is my scepter." On another occasion, Maximilian noticed that the ladder Dürer was using was too short and unstable, and told a noble to hold it for him. The noble refused, saying that it was beneath him to serve a non-noble. Maximilian then came to hold the ladder himself, and told the noble that he could make a noble out of a peasant any day, but he could not make an artist like Dürer out of a noble. Cartographic and astronomical works Dürer's exploration of space led to a relationship and cooperation with the court astronomer Johannes Stabius. Stabius also often acted as Dürer's and Maximilian's go-between for their financial problems. 
In 1515 Dürer and Stabius created the first world map projected onto a solid geometric sphere. Also in 1515, Stabius, Dürer and the astronomer Konrad Heinfogel produced the first planispheres of both the southern and northern hemispheres, as well as the first printed celestial maps, which prompted a revival of interest in the field of uranometry throughout Europe. Journey to the Netherlands (1520–1521) Maximilian's death came at a time when Dürer was concerned he was losing "my sight and freedom of hand" (perhaps caused by arthritis) and increasingly affected by the writings of Martin Luther. In July 1520 Dürer made his fourth and last major journey, to renew the Imperial pension Maximilian had given him and to secure the patronage of the new emperor, Charles V, who was to be crowned at Aachen. Dürer journeyed with his wife and her maid via the Rhine to Cologne and then to Antwerp, where he was well received and produced numerous drawings in silverpoint, chalk and charcoal. In addition to attending the coronation, he visited Cologne (where he admired the painting of Stefan Lochner), Nijmegen, 's-Hertogenbosch, Bruges (where he saw Michelangelo's Madonna of Bruges), Ghent (where he admired van Eyck's Ghent altarpiece), and Zeeland. Dürer took a large stock of prints with him and wrote in his diary to whom he gave, exchanged or sold them, and for how much. This provides rare information on the monetary value placed on prints at this time. Unlike paintings, their sale was very rarely documented. While providing valuable documentary evidence, Dürer's Netherlandish diary also reveals that the trip was not a profitable one. For example, Dürer offered his last portrait of Maximilian to Maximilian's daughter, Margaret of Austria, but eventually traded the picture for some white cloth after Margaret disliked the portrait and declined to accept it. During this trip he also met Bernard van Orley, Jan Provoost, Gerard Horenbout, Jean Mone, Joachim Patinir and Tommaso Vincidor, though he did not, it seems, meet Quentin Matsys. Having secured his pension, Dürer returned home in July 1521, having caught an undetermined illness which afflicted him for the rest of his life and greatly reduced his rate of work. Final years, Nuremberg (1521–1528) On his return to Nuremberg, Dürer worked on a number of grand projects with religious themes, including a crucifixion scene and a Sacra conversazione, though neither was completed. This may have been due in part to his declining health, but perhaps also because of the time he gave to the preparation of his theoretical works on geometry and perspective, the proportions of men and horses, and fortification. However, one consequence of this shift in emphasis was that during the last years of his life, Dürer produced comparatively little as an artist. In painting, there was only a portrait of Hieronymus Holtzschuher, a Madonna and Child (1526), Salvator Mundi (1526), and two panels showing St. John with St. Peter in the background and St. Paul with St. Mark in the background. This last great work, the Four Apostles, was given by Dürer to the City of Nuremberg—although he was given 100 guilders in return. As for engravings, Dürer's work was restricted to portraits and illustrations for his treatise. The portraits include Cardinal-Elector Albert of Mainz; Frederick the Wise, elector of Saxony; the humanist scholar Willibald Pirckheimer; Philipp Melanchthon; and Erasmus of Rotterdam. 
For those of the Cardinal, Melanchthon, and Dürer's final major work, a drawn portrait of the Nuremberg patrician Ulrich Starck, Dürer depicted the sitters in profile. Despite complaining of his lack of a formal classical education, Dürer was greatly interested in intellectual matters and learned much from his boyhood friend Willibald Pirckheimer, whom he no doubt consulted on the content of many of his images. He also derived great satisfaction from his friendships and correspondence with Erasmus and other scholars. Dürer succeeded in producing two books during his lifetime. "The Four Books on Measurement" were published at Nuremberg in 1525 and was the first book for adults on mathematics in German, as well as being cited later by Galileo and Kepler. The other, a work on city fortifications, was published in 1527. "The Four Books on Human Proportion" were published posthumously, shortly after his death in 1528. Dürer died in Nuremberg at the age of 56, leaving an estate valued at 6,874 florins – a considerable sum. He is buried in the Johannisfriedhof cemetery. His large house (purchased in 1509 from the heirs of the astronomer Bernhard Walther), where his workshop was located and where his widow lived until her death in 1539, remains a prominent Nuremberg landmark. Dürer and the Reformation Dürer's writings suggest that he may have been sympathetic to Luther's ideas, though it is unclear if he ever left the Catholic Church. Dürer wrote of his desire to draw Luther in his diary in 1520: "And God help me that I may go to Dr. Martin Luther; thus I intend to make a portrait of him with great care and engrave him on a copper plate to create a lasting memorial of the Christian man who helped me overcome so many difficulties." In a letter to Nicholas Kratzer in 1524, Dürer wrote, "because of our Christian faith we have to stand in scorn and danger, for we are reviled and called heretics". Most tellingly, Pirckheimer wrote in a letter to Johann Tscherte in 1530: "I confess that in the beginning I believed in Luther, like our Albert of blessed memory ... but as anyone can see, the situation has become worse." Dürer may even have contributed to the Nuremberg City Council's mandating Lutheran sermons and services in March 1525. Notably, Dürer had contacts with various reformers, such as Zwingli, Andreas Karlstadt, Melanchthon, Erasmus and Cornelius Grapheus from whom Dürer received Luther's Babylonian Captivity in 1520. Yet Erasmus and C. Grapheus are better said to be Catholic change agents. Also, from 1525, "the year that saw the peak and collapse of the Peasants' War, the artist can be seen to distance himself somewhat from the [Lutheran] movement..." Dürer's later works have also been claimed to show Protestant sympathies. His 1523 The Last Supper woodcut has often been understood to have an evangelical theme, focusing as it does on Christ espousing the Gospel, as well as the inclusion of the Eucharistic cup, an expression of Protestant utraquism, although this interpretation has been questioned. The delaying of the engraving of St Philip, completed in 1523 but not distributed until 1526, may have been due to Dürer's uneasiness with images of saints; even if Dürer was not an iconoclast, in his last years he evaluated and questioned the role of art in religion. 
Legacy and influence Dürer exerted a huge influence on the artists of succeeding generations, especially in printmaking, the medium through which his contemporaries mostly experienced his art, as his paintings were predominantly in private collections located in only a few cities. His success in spreading his reputation across Europe through prints was undoubtedly an inspiration for major artists such as Raphael, Titian, and Parmigianino, all of whom collaborated with printmakers to promote and distribute their work. His engravings seem to have had an intimidating effect upon his German successors; the "Little Masters" who attempted few large engravings but continued Dürer's themes in small, rather cramped compositions. Lucas van Leyden was the only Northern European engraver to successfully continue to produce large engravings in the first third of the 16th century. The generation of Italian engravers who trained in the shadow of Dürer all either directly copied parts of his landscape backgrounds (Giulio Campagnola, Giovanni Battista Palumba, Benedetto Montagna and Cristofano Robetta), or whole prints (Marcantonio Raimondi and Agostino Veneziano). However, Dürer's influence became less dominant after 1515, when Marcantonio perfected his new engraving style, which in turn travelled over the Alps to also dominate Northern engraving. In painting, Dürer had relatively little influence in Italy, where probably only his altarpiece in Venice was seen, and his German successors were less effective in blending German and Italian styles. His intense and self-dramatizing self-portraits have continued to have a strong influence up to the present, especially on painters in the 19th and 20th century who desired a more dramatic portrait style. Dürer has never fallen from critical favour, and there have been significant revivals of interest in his works in Germany in the Dürer Renaissance of about 1570 to 1630, in the early nineteenth century, and in German nationalism from 1870 to 1945. The Lutheran Church commemorates Dürer annually on 6 April, along with Michelangelo, Lucas Cranach the Elder and Hans Burgkmair. The liturgical calendar of the Episcopal Church (United States) remembers him, Cranach and Matthias Grünewald on 5 August. Theoretical works In all his theoretical works, in order to communicate his theories in the German language rather than in Latin, Dürer used graphic expressions based on a vernacular, craftsmen's language. For example, "Schneckenlinie" ("snail-line") was his term for a spiral form. Thus, Dürer contributed to the expansion in German prose which Luther had begun with his translation of the Bible. Four Books on Measurement Dürer's work on geometry is called the Four Books on Measurement (Underweysung der Messung mit dem Zirckel und Richtscheyt or Instructions for Measuring with Compass and Ruler). The first book focuses on linear geometry. Dürer's geometric constructions include helices, conchoids and epicycloids. He also draws on Apollonius, and Johannes Werner's 'Libellus super viginti duobus elementis conicis' of 1522. The second book moves onto two-dimensional geometry, i.e. the construction of regular polygons. Here Dürer favours the methods of Ptolemy over Euclid. The third book applies these principles of geometry to architecture, engineering and typography. In architecture Dürer cites Vitruvius but elaborates his own classical designs and columns. In typography, Dürer depicts the geometric construction of the Latin alphabet, relying on Italian precedent. 
However, his construction of the Gothic alphabet is based upon an entirely different modular system. The fourth book completes the progression of the first and second by moving to three-dimensional forms and the construction of polyhedra. Here Dürer discusses the five Platonic solids, as well as seven Archimedean semi-regular solids, as well as several of his own invention. In all these, Dürer shows the objects as nets. Finally, Dürer discusses the Delian Problem and moves on to the 'construzione legittima', a method of depicting a cube in two dimensions through linear perspective. He is thought to be the first to describe a visualization technique used in modern computers, ray tracing. It was in Bologna that Dürer was taught (possibly by Luca Pacioli or Bramante) the principles of linear perspective, and evidently became familiar with the 'costruzione legittima' in a written description of these principles found only, at this time, in the unpublished treatise of Piero della Francesca. He was also familiar with the 'abbreviated construction' as described by Alberti and the geometrical construction of shadows, a technique of Leonardo da Vinci. Although Dürer made no innovations in these areas, he is notable as the first Northern European to treat matters of visual representation in a scientific way, and with understanding of Euclidean principles. In addition to these geometrical constructions, Dürer discusses in this last book of Underweysung der Messung an assortment of mechanisms for drawing in perspective from models and provides woodcut illustrations of these methods that are often reproduced in discussions of perspective. Four Books on Human Proportion Dürer's work on human proportions is called the Four Books on Human Proportion (Vier Bücher von Menschlicher Proportion) of 1528. The first book was mainly composed by 1512/13 and completed by 1523, showing five differently constructed types of both male and female figures, all parts of the body expressed in fractions of the total height. Dürer based these constructions on both Vitruvius and empirical observations of "two to three hundred living persons", in his own words. The second book includes eight further types, broken down not into fractions but an Albertian system, which Dürer probably learned from Francesco di Giorgio's 'De harmonica mundi totius' of 1525. In the third book, Dürer gives principles by which the proportions of the figures can be modified, including the mathematical simulation of convex and concave mirrors; here Dürer also deals with human physiognomy. The fourth book is devoted to the theory of movement. Appended to the last book, however, is a self-contained essay on aesthetics, which Dürer worked on between 1512 and 1528, and it is here that we learn of his theories concerning 'ideal beauty'. Dürer rejected Alberti's concept of an objective beauty, proposing a relativist notion of beauty based on variety. Nonetheless, Dürer still believed that truth was hidden within nature, and that there were rules which ordered beauty, even though he found it difficult to define the criteria for such a code. In 1512/13 his three criteria were function ('Nutz'), naïve approval ('Wohlgefallen') and the happy medium ('Mittelmass'). However, unlike Alberti and Leonardo, Dürer was most troubled by understanding not just the abstract notions of beauty but also as to how an artist can create beautiful images. 
Between 1512 and the final draft in 1528, Dürer's belief developed from an understanding of human creativity as spontaneous or inspired to a concept of 'selective inward synthesis'. In other words, that an artist builds on a wealth of visual experiences in order to imagine beautiful things. Dürer's belief in the abilities of a single artist over inspiration prompted him to assert that "one man may sketch something with his pen on half a sheet of paper in one day, or may cut it into a tiny piece of wood with his little iron, and it turns out to be better and more artistic than another's work at which its author labours with the utmost diligence for a whole year". Book on Fortification In 1527, Dürer also published Various Lessons on the Fortification of Cities, Castles, and Localities (Etliche Underricht zu Befestigung der Stett, Schloss und Flecken). It was printed in Nuremberg, probably by Hieronymus Andreae and reprinted in 1603 by Johan Janssenn in Arnhem. In 1535 it was also translated into Latin as On Cities, Forts, and Castles, Designed and Strengthened by Several Manners: Presented for the Most Necessary Accommodation of War (De vrbibus, arcibus, castellisque condendis, ac muniendis rationes aliquot : praesenti bellorum necessitati accommodatissimae), published by Christian Wechel (Wecheli/Wechelus) in Paris. The work is less proscriptively theoretical than his other works, and was soon overshadowed by the Italian theory of polygonal fortification (the trace italienne – see Bastion fort), though his designs seem to have had some influence in the eastern German lands and up into the Baltic States. Fencing Dürer created many sketches and woodcuts of soldiers and knights over the course of his life. His most significant martial works, however, were made in 1512 as part of his efforts to secure the patronage of Maximilian I. Using existing manuscripts from the Nuremberg Group as his reference, his workshop produced the extensive Οπλοδιδασκαλια sive Armorvm Tractandorvm Meditatio Alberti Dvreri ("Weapon Training, or Albrecht Dürer's Meditation on the Handling of Weapons", MS 26-232). Another manuscript based on the Nuremberg texts as well as one of Hans Talhoffer's works, the untitled Berlin Picture Book (Libr.Pict.A.83), is also thought to have originated in his workshop around this time. These sketches and watercolors show the same careful attention to detail and human proportion as Dürer's other work, and his illustrations of grappling, long sword, dagger, and messer are among the highest-quality in any fencing manual. Gallery List of works List of paintings by Albrecht Dürer List of engravings by Albrecht Dürer List of woodcuts by Albrecht Dürer References Notes Citations Sources Bartrum, Giulia. Albrecht Dürer and his Legacy. London: British Museum Press, 2002. Brand Philip, Lotte; Anzelewsky, Fedja. "The Portrait Diptych of Dürer's parents". Simiolus: Netherlands Quarterly for the History of Art, Volume 10, No. 1, 1978–79. 5–18 Brion, Marcel. Dürer. London: Thames and Hudson, 1960 Harbison, Craig. "Dürer and the Reformation: The Problem of the Re-dating of the St. Philip Engraving". The Art Bulletin, Vol. 58, No. 3, 368–373. September 1976 Koerner, Joseph Leo. The Moment of Self-Portraiture in German Renaissance Art. Chicago/London: University of Chicago Press, 1993. Landau David; Parshall, Peter. The Renaissance Print. Yale, 1996. Panofsky, Erwin. The Life and Art of Albrecht Dürer. NJ: Princeton, 1945. Price, David Hotchkiss. 
Albrecht Dürer's Renaissance: Humanism, Reformation and the Art of Faith. Michigan, 2003. Strauss, Walter L. (ed.). The Complete Engravings, Etchings and Drypoints of Albrecht Durer. Mineola NY: Dover Publications, 1973. Borchert, Till-Holger. Van Eyck to Dürer: The Influence of Early Netherlandish painting on European Art, 1430–1530. London: Thames & Hudson, 2011. Wolf, Norbert. Albrecht Dürer. Taschen, 2010. Hoffmann, Rainer. Im Paradies - Adam und Eva und der Sündenfall - Albrecht Dürers Darstellungen, Böhlau-Verlag, 2021, ISBN 9783412523852 Further reading Campbell Hutchison, Jane. Albrecht Dürer: A Biography. Princeton University Press, 1990. Demele, Christine. Dürers Nacktheit – Das Weimarer Selbstbildnis. Rhema Verlag, Münster 2012. Dürer, Albrecht (translated by R.T. Nichol from the Latin text), Of the Just Shaping of Letters, Dover Publications. Hart, Vaughan. 'Navel Gazing. On Albrecht Dürer's Adam and Eve (1504)', The International Journal of Arts Theory and History, 2016, vol.12.1 pp. 1–10 https://doi.org/10.18848/2326-9960/CGP/v12i01/1-10 Korolija Fontana-Giusti, Gordana. "The Unconscious and Space: Venice and the work of Albrecht Dürer", in Architecture and the Unconscious, eds. J. Hendrix and L.Holm, Farnham Surrey: Ashgate, 2016, pp. 27–44. Wilhelm, Kurth (ed.). The Complete Woodcuts of Albrecht Durer, Dover Publications, 2000. External links The Strange World of Albrecht Dürer at the Sterling and Francine Clark Art Institute. 14 November 2010 – 13 March 2011 Dürer Prints Close-up. Made to accompany The Strange World of Albrecht Dürer at the Sterling and Francine Clark Art Institute. 14 November 2010 – 13 March 2011 Albrecht Dürer: Vier Bücher von menschlicher Proportion (Nuremberg, 1528). Selected pages scanned from the original work. Historical Anatomies on the Web. US National Library of Medicine. "Albrecht Dürer (1471–1528)". In Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art Albrecht Durer, Exhibition, Albertina, Vienna. 20 September 2019 – 6 January 2020
Albrecht Dürer
Analytical chemistry studies and uses instruments and methods used to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry is the science of obtaining, processing, and communicating information about the composition and structure of matter. In other words, it is the art and science of determining what matter is and how much of it exists. ... It is one of the most popular fields of work for ACS chemists. Analytical chemistry consists of classical, wet chemical methods and modern, instrumental methods. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Then qualitative and quantitative analysis can be performed, often with the same instrument and may use light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte. Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering. History Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. During this period, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups. The first instrumental analysis was flame emissive spectrometry developed by Robert Bunsen and Gustav Kirchhoff who discovered rubidium (Rb) and caesium (Cs) in 1860. Most of the major developments in analytical chemistry take place after 1900. During this period instrumental analysis becomes progressively dominant in the field. In particular many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century. The separation sciences follow a similar time line of development and also become increasingly transformed into high performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples. Starting in approximately the 1970s into the present day analytical chemistry has progressively become more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules. Lasers have been increasingly used in chemistry as probes and even to initiate and influence a wide variety of reactions. The late 20th century also saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental, industrial and medical questions, such as in histology. Modern analytical chemistry is dominated by instrumental analysis. 
Many analytical chemists focus on a single type of instrument. Academics tend to either focus on new applications and discoveries or on new methods of analysis. The discovery of a chemical present in blood that increases the risk of cancer would be a discovery that an analytical chemist might be involved in. An effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time. This is particularly true in industrial quality assurance (QA), forensic and environmental applications. Analytical chemistry plays an increasingly important role in the pharmaceutical industry where, aside from QA, it is used in discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient are critical. Classical methods Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments are from traditional techniques, many of which are still used today. These techniques also tend to form the backbone of most undergraduate analytical chemistry educational labs. Qualitative analysis Qualitative analysis determines the presence or absence of a particular compound, but not the mass or concentration. By definition, qualitative analyses do not measure quantity. Chemical tests There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood. Flame test Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate ranges of possibilities and then confirms suspected ions with a confirming test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation, these tests are rarely used but can be useful for educational purposes and in fieldwork or other situations where access to state-of-the-art instruments is not available or expedient. Quantitative analysis Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Quantities can be measured by mass (gravimetric analysis) or volume (volumetric analysis). Gravimetric analysis Gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water such that the difference in weight is due to the loss of water. Volumetric analysis Titration involves the addition of a reactant to a solution being analyzed until some equivalence point is reached. Often the amount of material in the solution being analyzed may be determined. Most familiar to those who have taken chemistry during secondary education is the acid-base titration involving a color-changing indicator. There are many other types of titrations, for example, potentiometric titrations. These titrations may use different types of indicators to reach some equivalence point. Instrumental methods Spectroscopy Spectroscopy measures the interaction of the molecules with electromagnetic radiation. 
Spectroscopy consists of many different applications such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, x-ray spectroscopy, fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual polarization interferometry, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy and so on. Mass spectrometry Mass spectrometry measures the mass-to-charge ratio of molecules using electric and magnetic fields. There are several ionization methods: electron ionization, chemical ionization, electrospray ionization, fast atom bombardment, matrix assisted laser desorption/ionization, and others. Mass spectrometry is also categorized by the type of mass analyzer: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on. Electrochemical analysis Electroanalytical methods measure the potential (volts) and/or current (amps) in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The four main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the transferred charge is measured over time), amperometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential). Thermal analysis Calorimetry and thermogravimetric analysis measure the interaction of a material and heat. Separation Separation processes are used to decrease the complexity of material mixtures. Chromatography, electrophoresis and field flow fractionation are representative of this field. Hybrid techniques Combinations of the above techniques produce a "hybrid" or "hyphenated" technique. Several examples are in popular use today and new hybrid techniques are under development. Examples include gas chromatography-mass spectrometry, gas chromatography-infrared spectroscopy, liquid chromatography-mass spectrometry, liquid chromatography-NMR spectroscopy, liquid chromatography-infrared spectroscopy and capillary electrophoresis-mass spectrometry. Hyphenated separation techniques refer to a combination of two (or more) techniques to detect and separate chemicals from solutions. Most often the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of a hyphen, especially if the name of one of the methods contains a hyphen itself. Microscopy The visualization of single molecules, single cells, biological tissues, and nanomaterials is an important and attractive approach in analytical science. Also, hybridization with other traditional analytical tools is revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. Recently, this field has been progressing rapidly because of the development of the computer and camera industries. Lab-on-a-chip These devices integrate (multiple) laboratory functions on a single chip of only millimeters to a few square centimeters in size and are capable of handling extremely small fluid volumes, down to less than picoliters. Errors Error can be defined as the numerical difference between the observed value and the true value. The experimental error can be divided into two types: systematic error and random error. 
Systematic error results from a flaw in equipment or the design of an experiment, while random error results from uncontrolled or uncontrollable variables in the experiment. In chemical analysis the true value and the observed value are related by the equation E_a = O - T, where E_a is the absolute error, T is the true value, and O is the observed value. The error of a measurement is an inverse measure of its accuracy: the smaller the error, the greater the accuracy of the measurement. Errors can also be expressed relatively, as the relative error E_r = E_a / T = (O - T) / T. The percent error is the relative error multiplied by 100: %E = E_r × 100. If we want to use these values in a function, we may also want to calculate the error of the function. Let f be a function of the n variables x_1, ..., x_n. Then the propagation of uncertainty must be calculated in order to know the error in f: σ_f = sqrt((∂f/∂x_1)^2 σ_1^2 + (∂f/∂x_2)^2 σ_2^2 + ... + (∂f/∂x_n)^2 σ_n^2), where σ_i is the uncertainty in x_i. Standards Standard curve A general method for analysis of concentration involves the creation of a calibration curve. This allows for determination of the amount of a chemical in a material by comparing the results of an unknown sample to those of a series of known standards. If the concentration of an element or compound in a sample is too high for the detection range of the technique, it can simply be diluted in a pure solvent. If the amount in the sample is below an instrument's range of measurement, the method of addition can be used. In this method, a known quantity of the element or compound under study is added, and the difference between the concentration added and the concentration observed is the amount actually in the sample. Internal standards Sometimes an internal standard is added at a known concentration directly to an analytical sample to aid in quantitation. The amount of analyte present is then determined relative to the internal standard as a calibrant. An ideal internal standard is an isotopically enriched analyte, which gives rise to the method of isotope dilution. Standard addition The method of standard addition is used in instrumental analysis to determine the concentration of a substance (analyte) in an unknown sample by comparison to a set of samples of known concentration, similar to using a calibration curve. Standard addition can be applied to most analytical techniques and is used instead of a calibration curve to solve the matrix effect problem. Signals and noise One of the most important components of analytical chemistry is maximizing the desired signal while minimizing the associated noise. The analytical figure of merit is known as the signal-to-noise ratio (S/N or SNR). Noise can arise from environmental factors as well as from fundamental physical processes. Thermal noise Thermal noise results from the random thermal motion of charge carriers (usually electrons) in an electrical circuit. Thermal noise is white noise, meaning that the power spectral density is constant throughout the frequency spectrum. The root mean square value of the thermal noise voltage in a resistor is given by v_RMS = sqrt(4 k_B T R Δf), where k_B is Boltzmann's constant, T is the temperature, R is the resistance, and Δf is the bandwidth over which the noise is measured. Shot noise Shot noise is a type of electronic noise that occurs when the finite number of particles (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to statistical fluctuations in a signal. Shot noise is a Poisson process, and the charge carriers that make up the current follow a Poisson distribution. 
The root mean square current fluctuation is given by i_RMS = sqrt(2 e I Δf), where e is the elementary charge, I is the average current, and Δf is the measurement bandwidth. Shot noise is white noise. Flicker noise Flicker noise is electronic noise with a 1/f frequency spectrum; as f increases, the noise decreases. Flicker noise arises from a variety of sources, such as impurities in a conductive channel, generation and recombination noise in a transistor due to base current, and so on. This noise can be avoided by modulation of the signal at a higher frequency, for example through the use of a lock-in amplifier. Environmental noise Environmental noise arises from the surroundings of the analytical instrument. Sources of electromagnetic noise are power lines, radio and television stations, wireless devices, compact fluorescent lamps and electric motors. Many of these noise sources are narrow bandwidth and therefore can be avoided. Temperature and vibration isolation may be required for some instruments. Noise reduction Noise reduction can be accomplished either in computer hardware or software. Examples of hardware noise reduction are the use of shielded cable, analog filtering, and signal modulation. Examples of software noise reduction are digital filtering, ensemble averaging, boxcar averaging, and correlation methods. Applications Analytical chemistry has applications in fields including forensic science, bioanalysis, clinical analysis, environmental analysis, and materials analysis. Analytical chemistry research is largely driven by performance (sensitivity, detection limit, selectivity, robustness, dynamic range, linear range, accuracy, precision, and speed) and cost (purchase, operation, training, time, and space). Among the main branches of contemporary analytical atomic spectrometry, the most widespread and universal are optical and mass spectrometry. In the direct elemental analysis of solid samples, the new leaders are laser-induced breakdown and laser ablation mass spectrometry, and the related techniques with transfer of the laser ablation products into inductively coupled plasma. Advances in the design of diode lasers and optical parametric oscillators promote developments in fluorescence and ionization spectrometry and also in absorption techniques, where the use of optical cavities for increased effective absorption pathlength is expected to expand. The use of plasma- and laser-based methods is increasing. Interest in absolute (standardless) analysis has revived, particularly in emission spectrometry. Great effort is being put into shrinking the analysis techniques to chip size (micro total analysis systems (µTAS) or lab-on-a-chip). Although there are few examples of such systems competitive with traditional analysis techniques, potential advantages include size/portability, speed, and cost. Microscale chemistry reduces the amounts of chemicals used. Many developments improve the analysis of biological systems. Examples of rapidly expanding fields in this area are genomics, DNA sequencing and related research in genetic fingerprinting and DNA microarrays; proteomics, the analysis of protein concentrations and modifications, especially in response to various stressors, at various developmental stages, or in various parts of the body; metabolomics, which deals with metabolites; transcriptomics, including mRNA and associated fields; lipidomics, lipids and their associated fields; peptidomics, peptides and their associated fields; and metallomics, dealing with metal concentrations and especially with their binding to proteins and other molecules. 
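The payoff of the software noise-reduction methods listed above, ensemble averaging in particular, can be illustrated with a short, self-contained Python sketch; the peak shape and noise level below are invented for the illustration. Averaging N repeated scans improves the signal-to-noise ratio roughly as the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" signal: a Gaussian peak, as might appear in a chromatogram or spectrum.
t = np.linspace(0, 10, 1000)
true_signal = np.exp(-((t - 5.0) ** 2) / 0.5)

def snr(n_scans, noise_rms=0.5):
    """Estimate S/N after ensemble-averaging n_scans noisy acquisitions."""
    scans = true_signal + rng.normal(0.0, noise_rms, size=(n_scans, t.size))
    averaged = scans.mean(axis=0)
    residual_noise = averaged - true_signal
    return true_signal.max() / residual_noise.std()

for n in (1, 4, 16, 64):
    # S/N grows roughly as sqrt(n): averaging 16 scans gains about a factor of 4 over one scan.
    print(f"{n:3d} scans: S/N ~ {snr(n):.1f}")
```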
Analytical chemistry has played critical roles in areas ranging from the understanding of basic science to a variety of practical applications, such as biomedical applications, environmental monitoring, quality control of industrial manufacturing, forensic science, and so on. Recent developments in computer automation and information technologies have extended analytical chemistry into a number of new biological fields. For example, automated DNA sequencing machines were the basis for completing human genome projects, leading to the birth of genomics. Protein identification and peptide sequencing by mass spectrometry opened a new field of proteomics. In addition to automating specific processes, there are efforts to automate larger sections of lab testing, such as at companies like Emerald Cloud Lab and Transcriptic. Analytical chemistry has been an indispensable area in the development of nanotechnology. Surface characterization instruments, electron microscopes and scanning probe microscopes enable scientists to visualize atomic structures with chemical characterizations. See also Important publications in analytical chemistry List of chemical analysis methods List of materials analysis methods Measurement uncertainty Metrology Sensory analysis - in the field of Food science Virtual instrumentation Microanalysis Quality of analytical results Working range References Further reading Gurdeep, Chatwal Anand (2008). Instrumental Methods of Chemical Analysis. Himalaya Publishing House (India) Ralph L. Shriner, Reynold C. Fuson, David Y. Curtin, Terence C. Morill: The systematic identification of organic compounds - a laboratory manual, Verlag Wiley, New York 1980, 6th edition. Bettencourt da Silva, R; Bulska, E; Godlewska-Zylkiewicz, B; Hedrich, M; Majcen, N; Magnusson, B; Marincic, S; Papadakis, I; Patriarca, M; Vassileva, E; Taylor, P; Analytical measurement: measurement uncertainty and statistics, 2012. External links Infografik and animation showing the progress of analytical chemistry AAS Atomic Absorption Spectrophotometer
Analytical chemistry
An analog computer or analogue computer is a type of computer that uses the continuous variation aspect of physical phenomena such as electrical, mechanical, or hydraulic quantities (analog signals) to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude (digital signals). Analog computers can have a very wide range of complexity. Slide rules and nomograms are the simplest, while naval gunfire control computers and large hybrid digital/analog computers were among the most complicated. Systems for process control and protective relays used analog computation to perform control and protective functions. Analog computers were widely used in scientific and industrial applications even after the advent of digital computers, because at the time they were typically much faster, but they started to become obsolete as early as the 1950s and 1960s, although they remained in use in some specific applications, such as aircraft flight simulators, the flight computer in aircraft, and for teaching control systems in universities. Perhaps the most relatable example of an analog computer is the mechanical watch, where the continuous and periodic rotation of interlinked gears drives the second, minute and hour hands. More complex applications, such as aircraft flight simulators and synthetic-aperture radar, remained the domain of analog computing (and hybrid computing) well into the 1980s, since digital computers were insufficient for the task. Timeline of analog computers Precursors This is a list of examples of early computation devices considered precursors of modern computers. Some of them may even have been dubbed 'computers' by the press, though they may fail to fit modern definitions. The Antikythera mechanism was an orrery and is considered an early mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to the Hellenistic period of Greece. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was first described by Ptolemy in the 2nd century AD. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd century BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia, in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels. The castle clock, a hydropowered mechanical astronomical clock invented by Al-Jazari in 1206, was the first programmable analog computer. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. 
The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft. In 1831–1835, mathematician and engineer Giovanni Plana devised a perpetual-calendar machine, which, through a system of pulleys and cylinders could predict the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of leap years and varying day length. The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 James Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. Modern era The Dumaresq was a mechanical calculating device invented around 1902 by Lieutenant John Dumaresq of the Royal Navy. It was an analog computer that related vital variables of the fire control problem to the movement of one's own ship and that of a target ship. It was often used with other devices, such as a Vickers range clock to generate range and deflection data so the gun sights of the ship could be continuously set. A number of versions of the Dumaresq were produced of increasing complexity as development proceeded. By 1912 Arthur Pollen had developed an electrically driven mechanical analog computer for fire-control systems, based on the differential analyser. It was used by the Imperial Russian Navy in World War I. Starting in 1929, AC network analyzers were constructed to solve calculation problems related to electrical power systems that were too large to solve with numerical methods at the time. These were essentially scale models of the electrical properties of the full-size system. Since network analyzers could handle problems too large for analytic methods or hand computation, they were also used to solve problems in nuclear physics and in the design of structures. More than 50 large network analyzers were built by the end of the 1950s. World War II era gun directors, gun data computers, and bomb sights used mechanical analog computers. In 1942 Helmut Hölzer built a fully electronic analog computer at Peenemünde Army Research Center as an embedded control system (mixing device) to calculate V-2 rocket trajectories from the accelerations and orientations (measured by gyroscopes) and to stabilize and guide the missile. 
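The tide-predicting machine mentioned above worked by mechanically summing harmonic constituents of the tide with its pulleys and wires. A minimal numerical sketch of that superposition principle follows; the amplitudes, periods and phases are placeholder values, not measured tidal constants for any real port.

```python
import math

# Hypothetical harmonic constituents: (amplitude in metres, period in hours, phase in radians).
# A real prediction would use constituents such as M2, S2, K1, O1 with measured constants.
constituents = [
    (1.20, 12.42, 0.3),   # stand-in for the principal lunar semidiurnal term
    (0.50, 12.00, 1.1),   # stand-in for the principal solar semidiurnal term
    (0.30, 23.93, 2.0),   # stand-in for a diurnal term
]

def tide_height(t_hours, mean_level=2.0):
    """Sum of cosines: the quantity the machine's pulley-and-wire train accumulated."""
    return mean_level + sum(a * math.cos(2 * math.pi * t_hours / period + phase)
                            for a, period, phase in constituents)

# Predict a day of heights at three-hour intervals, as the machine traced out on its paper roll.
for hour in range(0, 25, 3):
    print(f"t = {hour:2d} h  height ~ {tide_height(hour):.2f} m")
```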
Mechanical analog computers were very important in gun fire control in World War II, the Korean War, and well past the Vietnam War; they were made in significant numbers. In the period 1930–1945, Johan van Veen in the Netherlands developed an analogue computer to calculate and predict tidal currents when the geometry of the channels is changed. Around 1950 this idea was developed into the Deltar, an analogue computer supporting the closure of estuaries in the southwest of the Netherlands (the Delta Works). The FERMIAC was an analog computer invented by physicist Enrico Fermi in 1947 to aid in his studies of neutron transport. Project Cyclone was an analog computer developed by Reeves in 1950 for the analysis and design of dynamic systems. Project Typhoon was an analog computer developed by RCA in 1952. It consisted of over 4000 electron tubes and used 100 dials and 6000 plug-in connectors for programming. The MONIAC Computer was a hydraulic model of a national economy first unveiled in 1949. Computer Engineering Associates was spun out of Caltech in 1950 to provide commercial services using the "Direct Analogy Electric Analog Computer" ("the largest and most impressive general-purpose analyzer facility for the solution of field problems") developed there by Gilbert D. McCann, Charles H. Wilts, and Bart Locanthi. Educational analog computers illustrated the principles of analog calculation. The Heathkit EC-1, a $199 educational analog computer, was made by the Heath Company, US. It was programmed using patch cords that connected nine operational amplifiers and other components. General Electric also marketed an "educational" analog computer kit of a simple design in the early 1960s consisting of a two-transistor tone generator and three potentiometers wired such that the frequency of the oscillator was nulled when the potentiometer dials were positioned by hand to satisfy an equation. The relative resistance of the potentiometer was then equivalent to the formula of the equation being solved. Multiplication or division could be performed, depending on which dials were inputs and which was the output. Accuracy and resolution were limited and a simple slide rule was more accurate; however, the unit did demonstrate the basic principle. Analog computer designs were published in electronics magazines. One example is the PE Analogue Computer, published in Practical Electronics in the September 1978 edition. Another more modern hybrid computer design was published in Everyday Practical Electronics in 2002. An example described in the EPE Hybrid Computer was the flight of a VTOL aircraft like the Harrier jump jet. The altitude and speed of the aircraft were calculated by the analog part of the computer and sent to a PC via a digital microprocessor and displayed on the PC screen. In industrial process control, analog loop controllers were used to automatically regulate temperature, flow, pressure, or other process conditions. The technology of these controllers ranged from purely mechanical integrators, through vacuum-tube and solid-state devices, to emulation of analog controllers by microprocessors. 
Complex systems often are not amenable to pen-and-paper analysis, and require some form of testing or simulation. Complex mechanical systems, such as suspensions for racing cars, are expensive to fabricate and hard to modify. And taking precise mechanical measurements during high-speed tests adds further difficulty. By contrast, it is very inexpensive to build an electrical equivalent of a complex mechanical system, to simulate its behavior. Engineers arrange a few operational amplifiers (op amps) and some passive linear components to form a circuit that follows the same equations as the mechanical system being simulated. All measurements can be taken directly with an oscilloscope. In the circuit, the (simulated) stiffness of the spring, for instance, can be changed by adjusting the parameters of an integrator. The electrical system is an analogy to the physical system, hence the name, but it is much less expensive than a mechanical prototype, much easier to modify, and generally safer. The electronic circuit can also be made to run faster or slower than the physical system being simulated. Experienced users of electronic analog computers said that they offered a comparatively intimate control and understanding of the problem, relative to digital simulations. Electronic analog computers are especially well-suited to representing situations described by differential equations. Historically, they were often used when a system of differential equations proved very difficult to solve by traditional means. As a simple example, the dynamics of a spring-mass system can be described by the equation m y'' + d y' + k y = m g, with y as the vertical position of a mass m, d the damping coefficient, k the spring constant and g the gravity of Earth. For analog computing, the equation is programmed as y'' = g - (d/m) y' - (k/m) y. The equivalent analog circuit consists of two integrators for the state variables y' (speed) and y (position), one inverter, and three potentiometers. Electronic analog computers have drawbacks: the value of the circuit's supply voltage limits the range over which the variables may vary (since the value of a variable is represented by a voltage on a particular wire). Therefore, each problem must be scaled so its parameters and dimensions can be represented using voltages that the circuit can supply, e.g., the expected magnitudes of the velocity and the position of a spring pendulum. Improperly scaled variables can have their values "clamped" by the limits of the supply voltage. Or if scaled too small, they can suffer from higher noise levels. Either problem can cause the circuit to produce an incorrect simulation of the physical system. (Modern digital simulations are much more robust to widely varying values of their variables, but are still not entirely immune to these concerns: floating-point digital calculations support a huge dynamic range, but can suffer from imprecision if tiny differences of huge values lead to numerical instability.) The precision of the analog computer readout was limited chiefly by the precision of the readout equipment used, generally three or four significant figures. (Modern digital simulations are much better in this area. Digital arbitrary-precision arithmetic can provide any desired degree of precision.) However, in most cases the precision of an analog computer is absolutely sufficient given the uncertainty of the model characteristics and its technical parameters. 
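To make the two-integrator spring-mass setup above concrete, here is a small numerical sketch in Python that imitates it with explicit Euler integration; the parameter values are arbitrary illustrative numbers, not taken from any particular machine.

```python
# Numerical imitation of the analog patch for  y'' = g - (d/m) y' - (k/m) y
# using two explicit (Euler) integrators, mirroring the two op-amp integrators.

m, d, k, g = 1.0, 0.5, 20.0, 9.81   # arbitrary mass, damping, spring constant, gravity
dt = 0.001                          # integration step (the analog machine integrates continuously)

y, v = 0.0, 0.0                     # initial position and speed (integrator initial conditions)
for step in range(int(5.0 / dt)):   # simulate 5 "seconds" of machine time
    a = g - (d / m) * v - (k / m) * y   # summing junction: acceleration
    v += a * dt                         # first integrator: speed
    y += v * dt                         # second integrator: position
    if step % 500 == 0:
        print(f"t = {step * dt:4.1f} s   y = {y:6.3f}   y' = {v:6.3f}")

print(f"steady state ~ m*g/k = {m * g / k:.3f}")   # where the mass finally settles
```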
Many small computers dedicated to specific computations are still part of industrial regulation equipment, but from the 1950s to the 1970s, general-purpose analog computers were the only systems fast enough for real time simulation of dynamic systems, especially in the aircraft, military and aerospace field. In the 1960s, the major manufacturer was Electronic Associates of Princeton, New Jersey, with its 231R Analog Computer (vacuum tubes, 20 integrators) and subsequently its EAI 8800 Analog Computer (solid state operational amplifiers, 64 integrators). Its challenger was Applied Dynamics of Ann Arbor, Michigan. Although the basic technology for analog computers is usually operational amplifiers (also called "continuous current amplifiers" because they have no low frequency limitation), in the 1960s an attempt was made in the French ANALAC computer to use an alternative technology: medium frequency carrier and non dissipative reversible circuits. In the 1970s every big company and administration concerned with problems in dynamics had a big analog computing center, for example: In the US: NASA (Huntsville, Houston), Martin Marietta (Orlando), Lockheed, Westinghouse, Hughes Aircraft In Europe: CEA (French Atomic Energy Commission), MATRA, Aérospatiale, BAC (British Aircraft Corporation). Analog–digital hybrids Analog computing devices are fast, digital computing devices are more versatile and accurate, so the idea is to combine the two processes for the best efficiency. An example of such hybrid elementary device is the hybrid multiplier where one input is an analog signal, the other input is a digital signal and the output is analog. It acts as an analog potentiometer upgradable digitally. This kind of hybrid technique is mainly used for fast dedicated real time computation when computing time is very critical as signal processing for radars and generally for controllers in embedded systems. In the early 1970s analog computer manufacturers tried to tie together their analog computer with a digital computer to get the advantages of the two techniques. In such systems, the digital computer controlled the analog computer, providing initial set-up, initiating multiple analog runs, and automatically feeding and collecting data. The digital computer may also participate to the calculation itself using analog-to-digital and digital-to-analog converters. The largest manufacturer of hybrid computers was Electronics Associates. Their hybrid computer model 8900 was made of a digital computer and one or more analog consoles. These systems were mainly dedicated to large projects such as the Apollo program and Space Shuttle at NASA, or Ariane in Europe, especially during the integration step where at the beginning everything is simulated, and progressively real components replace their simulated part. Only one company was known as offering general commercial computing services on its hybrid computers, CISI of France, in the 1970s. The best reference in this field is the 100,000 simulation runs for each certification of the automatic landing systems of Airbus and Concorde aircraft. After 1980, purely digital computers progressed more and more rapidly and were fast enough to compete with analog computers. One key to the speed of analog computers was their fully parallel computation, but this was also a limitation. The more equations required for a problem, the more analog components were needed, even when the problem wasn't time critical. 
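A minimal software model of the hybrid multiplier described above (one analog input, one digital input, analog output) treats it as a digitally set potentiometer; the bit width and example voltages are assumptions for illustration only.

```python
def hybrid_multiply(v_analog, digital_code, n_bits=10):
    """Model of a multiplying-DAC style hybrid multiplier:
    the analog input is scaled by digital_code / 2**n_bits,
    i.e. a potentiometer whose wiper position is set digitally."""
    if not 0 <= digital_code < 2 ** n_bits:
        raise ValueError("digital code out of range")
    return v_analog * digital_code / 2 ** n_bits

# Example: a 10-bit code of 512 passes half of a 5 V analog signal.
print(hybrid_multiply(5.0, 512))    # 2.5
print(hybrid_multiply(5.0, 1023))   # just under 5 V, full scale
```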
"Programming" a problem meant interconnecting the analog operators; even with a removable wiring panel this was not very versatile. Today there are no more big hybrid computers, but only hybrid components. Implementations Mechanical analog computers While a wide variety of mechanisms have been developed throughout history, some stand out because of their theoretical importance, or because they were manufactured in significant quantities. Most practical mechanical analog computers of any significant complexity used rotating shafts to carry variables from one mechanism to another. Cables and pulleys were used in a Fourier synthesizer, a tide-predicting machine, which summed the individual harmonic components. Another category, not nearly as well known, used rotating shafts only for input and output, with precision racks and pinions. The racks were connected to linkages that performed the computation. At least one U.S. Naval sonar fire control computer of the later 1950s, made by Librascope, was of this type, as was the principal computer in the Mk. 56 Gun Fire Control System. Online, there is a remarkably clear illustrated reference (OP 1140) that describes the fire control computer mechanisms. For adding and subtracting, precision miter-gear differentials were in common use in some computers; the Ford Instrument Mark I Fire Control Computer contained about 160 of them. Integration with respect to another variable was done by a rotating disc driven by one variable. Output came from a pick-off device (such as a wheel) positioned at a radius on the disc proportional to the second variable. (A carrier with a pair of steel balls supported by small rollers worked especially well. A roller, its axis parallel to the disc's surface, provided the output. It was held against the pair of balls by a spring.) Arbitrary functions of one variable were provided by cams, with gearing to convert follower movement to shaft rotation. Functions of two variables were provided by three-dimensional cams. In one good design, one of the variables rotated the cam. A hemispherical follower moved its carrier on a pivot axis parallel to that of the cam's rotating axis. Pivoting motion was the output. The second variable moved the follower along the axis of the cam. One practical application was ballistics in gunnery. Coordinate conversion from polar to rectangular was done by a mechanical resolver (called a "component solver" in US Navy fire control computers). Two discs on a common axis positioned a sliding block with pin (stubby shaft) on it. One disc was a face cam, and a follower on the block in the face cam's groove set the radius. The other disc, closer to the pin, contained a straight slot in which the block moved. The input angle rotated the latter disc (the face cam disc, for an unchanging radius, rotated with the other (angle) disc; a differential and a few gears did this correction). Referring to the mechanism's frame, the location of the pin corresponded to the tip of the vector represented by the angle and magnitude inputs. Mounted on that pin was a square block. Rectilinear-coordinate outputs (both sine and cosine, typically) came from two slotted plates, each slot fitting on the block just mentioned. The plates moved in straight lines, the movement of one plate at right angles to that of the other. The slots were at right angles to the direction of movement. Each plate, by itself, was like a Scotch yoke, known to steam engine enthusiasts. 
During World War II, a similar mechanism converted rectilinear to polar coordinates, but it was not particularly successful and was eliminated in a significant redesign (USN, Mk. 1 to Mk. 1A). Multiplication was done by mechanisms based on the geometry of similar right triangles. Using the trigonometric terms for a right triangle, specifically opposite, adjacent, and hypotenuse, the adjacent side was fixed by construction. One variable changed the magnitude of the opposite side. In many cases, this variable changed sign; the hypotenuse could coincide with the adjacent side (a zero input), or move beyond the adjacent side, representing a sign change. Typically, a pinion-operated rack moving parallel to the (trig.-defined) opposite side would position a slide with a slot coincident with the hypotenuse. A pivot on the rack let the slide's angle change freely. At the other end of the slide (the angle, in trig. terms), a block on a pin fixed to the frame defined the vertex between the hypotenuse and the adjacent side. At any distance along the adjacent side, a line perpendicular to it intersects the hypotenuse at a particular point. The distance between that point and the adjacent side is some fraction that is the product of 1 the distance from the vertex, and 2 the magnitude of the opposite side. The second input variable in this type of multiplier positions a slotted plate perpendicular to the adjacent side. That slot contains a block, and that block's position in its slot is determined by another block right next to it. The latter slides along the hypotenuse, so the two blocks are positioned at a distance from the (trig.) adjacent side by an amount proportional to the product. To provide the product as an output, a third element, another slotted plate, also moves parallel to the (trig.) opposite side of the theoretical triangle. As usual, the slot is perpendicular to the direction of movement. A block in its slot, pivoted to the hypotenuse block positions it. A special type of integrator, used at a point where only moderate accuracy was needed, was based on a steel ball, instead of a disc. It had two inputs, one to rotate the ball, and the other to define the angle of the ball's rotating axis. That axis was always in a plane that contained the axes of two movement pick-off rollers, quite similar to the mechanism of a rolling-ball computer mouse (in that mechanism, the pick-off rollers were roughly the same diameter as the ball). The pick-off roller axes were at right angles. A pair of rollers "above" and "below" the pick-off plane were mounted in rotating holders that were geared together. That gearing was driven by the angle input, and established the rotating axis of the ball. The other input rotated the "bottom" roller to make the ball rotate. Essentially, the whole mechanism, called a component integrator, was a variable-speed drive with one motion input and two outputs, as well as an angle input. The angle input varied the ratio (and direction) of coupling between the "motion" input and the outputs according to the sine and cosine of the input angle. Although they did not accomplish any computation, electromechanical position servos were essential in mechanical analog computers of the "rotating-shaft" type for providing operating torque to the inputs of subsequent computing mechanisms, as well as driving output data-transmission devices such as large torque-transmitter synchros in naval computers. 
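The ball-type component integrator just described resolves each small increment of the motion input into two outputs weighted by the cosine and sine of the angle input, and accumulates them. A rough numerical imitation follows, with an invented test input: one unit of motion while the angle setting sweeps a quarter turn.

```python
import math

def integrate_components(steps):
    """Accumulate the two pick-off roller outputs of a ball component integrator.
    Each step supplies an increment of motion dm and the current angle setting."""
    x = y = 0.0
    for dm, angle in steps:
        x += dm * math.cos(angle)   # output of the roller aligned with one axis
        y += dm * math.sin(angle)   # output of the roller at right angles to it
    return x, y

# Invented input: 1000 small motion increments while the angle sweeps from 0 to 90 degrees.
n = 1000
path = [(0.001, math.pi / 2 * i / n) for i in range(n)]
x, y = integrate_components(path)
# Analytically both integrals tend to 2/pi of the total motion, about 0.6366.
print(f"x ~ {x:.4f}, y ~ {y:.4f}")
```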
Other readout mechanisms, not directly part of the computation, included internal odometer-like counters with interpolating drum dials for indicating internal variables, and mechanical multi-turn limit stops. Considering that accurately controlled rotational speed in analog fire-control computers was a basic element of their accuracy, there was a motor with its average speed controlled by a balance wheel, hairspring, jeweled-bearing differential, a twin-lobe cam, and spring-loaded contacts (ship's AC power frequency was not necessarily accurate, nor dependable enough, when these computers were designed). Electronic analog computers Electronic analog computers typically have front panels with numerous jacks (single-contact sockets) that permit patch cords (flexible wires with plugs at both ends) to create the interconnections that define the problem setup. In addition, there are precision high-resolution potentiometers (variable resistors) for setting up (and, when needed, varying) scale factors. In addition, there is usually a zero-center analog pointer-type meter for modest-accuracy voltage measurement. Stable, accurate voltage sources provide known magnitudes. Typical electronic analog computers contain anywhere from a few to a hundred or more operational amplifiers ("op amps"), named because they perform mathematical operations. Op amps are a particular type of feedback amplifier with very high gain and stable input (low and stable offset). They are always used with precision feedback components that, in operation, all but cancel out the currents arriving from input components. The majority of op amps in a representative setup are summing amplifiers, which add and subtract analog voltages, providing the result at their output jacks. As well, op amps with capacitor feedback are usually included in a setup; they integrate the sum of their inputs with respect to time. Integrating with respect to another variable is the nearly exclusive province of mechanical analog integrators; it is almost never done in electronic analog computers. However, given that a problem solution does not change with time, time can serve as one of the variables. Other computing elements include analog multipliers, nonlinear function generators, and analog comparators. Electrical elements such as inductors and capacitors used in electrical analog computers had to be carefully manufactured to reduce non-ideal effects. For example, in the construction of AC power network analyzers, one motive for using higher frequencies for the calculator (instead of the actual power frequency) was that higher-quality inductors could be more easily made. Many general-purpose analog computers avoided the use of inductors entirely, re-casting the problem in a form that could be solved using only resistive and capacitive elements, since high-quality capacitors are relatively easy to make. The use of electrical properties in analog computers means that calculations are normally performed in real time (or faster), at a speed determined mostly by the frequency response of the operational amplifiers and other computing elements. In the history of electronic analog computers, there were some special high-speed types. Nonlinear functions and calculations can be constructed to a limited precision (three or four digits) by designing function generators—special circuits of various combinations of resistors and diodes to provide the nonlinearity. Typically, as the input voltage increases, progressively more diodes conduct. 
When compensated for temperature, the forward voltage drop of a transistor's base-emitter junction can provide a usably accurate logarithmic or exponential function. Op amps scale the output voltage so that it is usable with the rest of the computer. Any physical process that models some computation can be interpreted as an analog computer. Some examples, invented for the purpose of illustrating the concept of analog computation, include using a bundle of spaghetti as a model of sorting numbers; a board, a set of nails, and a rubber band as a model of finding the convex hull of a set of points; and strings tied together as a model of finding the shortest path in a network. These are all described in Dewdney (1984). Components Analog computers often have a complicated framework, but they have, at their core, a set of key components that perform the calculations. The operator manipulates these through the computer's framework. Key hydraulic components might include pipes, valves and containers. Key mechanical components might include rotating shafts for carrying data within the computer, miter gear differentials, disc/ball/roller integrators, cams (2-D and 3-D), mechanical resolvers and multipliers, and torque servos. Key electrical/electronic components might include precision resistors and capacitors, operational amplifiers, multipliers, potentiometers, and fixed-function generators. The core mathematical operations used in an electric analog computer are addition, integration with respect to time, inversion, multiplication, exponentiation, logarithm, and division. In some analog computer designs, multiplication is much preferred to division; division is carried out with a multiplier in the feedback path of an operational amplifier (a short sketch of this arrangement is given below). Differentiation with respect to time is not frequently used, and in practice is avoided by redefining the problem when possible. It corresponds in the frequency domain to a high-pass filter, which means that high-frequency noise is amplified; differentiation also risks instability. Limitations In general, analog computers are limited by non-ideal effects. An analog signal is composed of four basic components: DC and AC magnitudes, frequency, and phase. The real limits of range on these characteristics limit analog computers. Some of these limits include operational amplifier offset, finite gain and frequency response, noise floor, non-linearities, temperature coefficient, and parasitic effects within semiconductor devices. For commercially available electronic components, the ranges of these aspects of input and output signals are always figures of merit. Decline In the 1950s to 1970s, digital computers based first on vacuum tubes, then transistors, integrated circuits and finally microprocessors became more economical and precise. This led digital computers to largely replace analog computers. Even so, some research in analog computation is still being done. A few universities still use analog computers to teach control system theory. The American company Comdyna manufactured small analog computers. At Indiana University Bloomington, Jonathan Mills has developed the Extended Analog Computer based on sampling voltages in a foam sheet. At the Harvard Robotics Laboratory, analog computation is a research topic. Lyric Semiconductor's error correction circuits use analog probabilistic signals. Slide rules are still popular among aircraft personnel.
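The passage above notes that division is usually carried out by placing a multiplier in the feedback path of an operational amplifier. The toy model below imitates that arrangement: a high-gain stage slews its output until the summing-junction error is cancelled, so the output settles at -numerator/denominator. The gain, step size and the positive-denominator restriction are simplifying assumptions of this sketch, not properties of real divider circuits.

# Sketch of analog division using a multiplier in an op amp's feedback path.
# A high-gain amplifier drives its output y until the summing-junction
# "error" (numerator + y * denominator) is forced to zero, giving
# y = -numerator / denominator.  This is a toy relaxation model, not a
# circuit simulation; the gain and step values are illustrative.

def analog_divide(numerator, denominator, gain=50.0, dt=1e-3, steps=20000):
    if denominator <= 0:
        raise ValueError("this simple feedback model assumes a positive denominator")
    y = 0.0
    for _ in range(steps):
        error = numerator + y * denominator   # current into the summing junction
        y -= gain * dt * error                # high-gain amp slews y to cancel it
    return y

if __name__ == "__main__":
    print(analog_divide(3.0, 4.0))   # converges near -0.75, i.e. -(3 / 4)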
Resurgence With the development of very-large-scale integration (VLSI) technology, Yannis Tsividis' group at Columbia University has been revisiting analog/hybrid computer design in standard CMOS processes. Two VLSI chips have been developed, an 80th-order analog computer (250 nm) by Glenn Cowan in 2005 and a 4th-order hybrid computer (65 nm) developed by Ning Guo in 2015, both targeting energy-efficient ODE/PDE applications. Glenn's chip contains 16 macros, in which there are 25 analog computing blocks, namely integrators, multipliers, fanouts, and a few nonlinear blocks. Ning's chip contains one macro block, in which there are 26 computing blocks including integrators, multipliers, fanouts, ADCs, SRAMs and DACs. Arbitrary nonlinear function generation is made possible by the ADC+SRAM+DAC chain, where the SRAM block stores the nonlinear function data (a minimal lookup-table sketch of this scheme is given below). The experiments from the related publications revealed that VLSI analog/hybrid computers demonstrated an advantage of about one to two orders of magnitude in both solution time and energy while achieving accuracy within 5%, which points to the promise of using analog/hybrid computing techniques in the area of energy-efficient approximate computing. In 2016, a team of researchers developed a compiler to solve differential equations using analog circuits. Analog computers are also used in neuromorphic computing, and in 2021 a group of researchers showed that a specific type of artificial neural network called a spiking neural network was able to work with analog neuromorphic computers. Practical examples These are examples of analog computers that have been constructed or practically used:
Boeing B-29 Superfortress Central Fire Control System
Deltar
E6B flight computer
Kerrison Predictor
Leonardo Torres y Quevedo's Analogue Calculating Machines based on "fusee sans fin"
Librascope, aircraft weight and balance computer
Mechanical computer
Mechanical integrators, for example, the planimeter
Nomogram
Norden bombsight
Rangekeeper, and related fire control computers
Scanimate
Torpedo Data Computer
Torquetum
Water integrator
MONIAC, economic modelling
Ishiguro Storm Surge Computer
Analog (audio) synthesizers can also be viewed as a form of analog computer, and their technology was originally based in part on electronic analog computer technology. The ARP 2600's Ring Modulator was actually a moderate-accuracy analog multiplier. The Simulation Council (or Simulations Council) was an association of analog computer users in the US. It is now known as The Society for Modeling and Simulation International. The Simulation Council newsletters from 1952 to 1963 are available online and show the concerns and technologies at the time, and the common use of analog computers for missilry. See also
Analog neural network
Analogical models
Chaos theory
Differential equation
Dynamical system
Field-programmable analog array
General purpose analog computer
Lotfernrohr 7 series of WW II German bombsights
Signal (electrical engineering)
Voskhod Spacecraft "Globus" IMP navigation instrument
XY-writer
Notes References
A.K. Dewdney. "On the Spaghetti Computer and Other Analog Gadgets for Problem Solving", Scientific American, 250(6):19–26, June 1984. Reprinted in The Armchair Universe, by A.K. Dewdney, published by W.H. Freeman & Company (1988).
Universiteit van Amsterdam Computer Museum. (2007). Analog Computers.
Jackson, Albert S., "Analog Computation". London & New York: McGraw-Hill, 1960.
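As a minimal sketch of the ADC + SRAM + DAC scheme for arbitrary nonlinear function generation mentioned above: the input is quantized, the resulting code addresses a pre-loaded table, and the stored value is converted back to an output. The 8-bit resolution, the ±1 V range and the example tanh nonlinearity are illustrative assumptions, not details of the chips discussed.

import math

# Minimal sketch of the ADC + SRAM + DAC chain described above for arbitrary
# nonlinear function generation: the input voltage is quantized (ADC), the
# code addresses a pre-loaded table (SRAM), and the stored value is converted
# back to a voltage (DAC).  Resolution, range and the example function are
# illustrative assumptions.

BITS = 8
VMIN, VMAX = -1.0, 1.0
LEVELS = 2 ** BITS

def code_to_voltage(code):
    return VMIN + (VMAX - VMIN) * code / (LEVELS - 1)

# "Program the SRAM": tabulate the desired nonlinearity at each ADC code.
TABLE = [math.tanh(3.0 * code_to_voltage(c)) for c in range(LEVELS)]

def function_generator(v_in):
    code = round((v_in - VMIN) / (VMAX - VMIN) * (LEVELS - 1))  # ADC
    code = max(0, min(LEVELS - 1, code))                        # clamp to range
    return TABLE[code]                                          # SRAM lookup + DAC

if __name__ == "__main__":
    for v in (-0.8, -0.2, 0.0, 0.2, 0.8):
        print(f"{v:+.2f} -> {function_generator(v):+.4f}")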
External links Biruni's eight-geared lunisolar calendar in "Archaeology: High tech from Ancient Greece", François Charette, Nature 444, 551–552(30 November 2006), The first computers Large collection of electronic analog computers with lots of pictures, documentation and samples of implementations (some in German) Large collection of old analog and digital computers at Old Computer Museum A great disappearing act: the electronic analogue computer Chris Bissell, The Open University, Milton Keynes, UK Accessed February 2007 German computer museum with still runnable analog computers Analog computer basics Analog computer trumps Turing model Jonathan W. Mills's Analog Notebook Harvard Robotics Laboratory Analog Computation The Enns Power Network Computer – an analog computer for the analysis of electric power systems (advertisement from 1955) Librascope Development Company – Type LC-1 WWII Navy PV-1 "Balance Computor" Kronis Technology More information on Analog and Hybrid computers History of computing hardware Chinese inventions Greek inventions
Analog computer
Apoptosis (from Ancient Greek ἀπόπτωσις, apóptōsis, 'falling off') is a form of programmed cell death that occurs in multicellular organisms. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses between 50 and 70 billion cells each day due to apoptosis. For an average human child between eight and fourteen years old, approximately twenty to thirty billion cells die per day. In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo apoptosis. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them. Because apoptosis cannot stop once it has begun, it is a highly regulated process. Apoptosis can be initiated through one of two pathways. In the intrinsic pathway the cell kills itself because it senses cell stress, while in the extrinsic pathway the cell kills itself because of signals from other cells. Weak external signals may also activate the intrinsic pathway of apoptosis. Both pathways induce cell death by activating caspases, which are proteases, or enzymes that degrade proteins. The two pathways both activate initiator caspases, which then activate executioner caspases, which then kill the cell by degrading proteins indiscriminately. In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in a wide variety of diseases. Excessive apoptosis causes atrophy, whereas an insufficient amount results in uncontrolled cell proliferation, such as cancer. Some factors like Fas receptors and caspases promote apoptosis, while some members of the Bcl-2 family of proteins inhibit apoptosis. Discovery and etymology German scientist Carl Vogt was first to describe the principle of apoptosis in 1842. In 1885, anatomist Walther Flemming delivered a more precise description of the process of programmed cell death. However, it was not until 1965 that the topic was resurrected. While studying tissues using electron microscopy, John Kerr at the University of Queensland was able to distinguish apoptosis from traumatic cell death. Following the publication of a paper describing the phenomenon, Kerr was invited to join Alastair Currie, as well as Andrew Wyllie, who was Currie's graduate student, at University of Aberdeen. In 1972, the trio published a seminal article in the British Journal of Cancer. Kerr had initially used the term programmed cell necrosis, but in the article, the process of natural cell death was called apoptosis. Kerr, Wyllie and Currie credited James Cormack, a professor of Greek language at University of Aberdeen, with suggesting the term apoptosis. Kerr received the Paul Ehrlich and Ludwig Darmstaedter Prize on March 14, 2000, for his description of apoptosis. He shared the prize with Boston biologist H. Robert Horvitz. For many years, neither "apoptosis" nor "programmed cell death" was a highly cited term. 
Two discoveries brought cell death from obscurity to a major field of research: identification of components of the cell death control and effector mechanisms, and linkage of abnormalities in cell death to human disease, in particular cancer. The 2002 Nobel Prize in Medicine was awarded to Sydney Brenner, H. Robert Horvitz and John Sulston for their work identifying genes that control apoptosis. The genes were identified by studies in the nematode C. elegans and homologues of these genes function in humans to regulate apoptosis. In Greek, apoptosis translates to the "falling off" of leaves from a tree. Cormack, professor of Greek language, reintroduced the term for medical use as it had a medical meaning for the Greeks over two thousand years before. Hippocrates used the term to mean "the falling off of the bones". Galen extended its meaning to "the dropping of the scabs". Cormack was no doubt aware of this usage when he suggested the name. Debate continues over the correct pronunciation, with opinion divided between a pronunciation with the second p silent ( ) and the second p pronounced (), as in the original Greek. In English, the p of the Greek -pt- consonant cluster is typically silent at the beginning of a word (e.g. pterodactyl, Ptolemy), but articulated when used in combining forms preceded by a vowel, as in helicopter or the orders of insects: diptera, lepidoptera, etc. In the original Kerr, Wyllie & Currie paper, there is a footnote regarding the pronunciation: We are most grateful to Professor James Cormack of the Department of Greek, University of Aberdeen, for suggesting this term. The word "apoptosis" () is used in Greek to describe the "dropping off" or "falling off" of petals from flowers, or leaves from trees. To show the derivation clearly, we propose that the stress should be on the penultimate syllable, the second half of the word being pronounced like "ptosis" (with the "p" silent), which comes from the same root "to fall", and is already used to describe the drooping of the upper eyelid. Activation mechanisms The initiation of apoptosis is tightly regulated by activation mechanisms, because once apoptosis has begun, it inevitably leads to the death of the cell. The two best-understood activation mechanisms are the intrinsic pathway (also called the mitochondrial pathway) and the extrinsic pathway. The intrinsic pathway is activated by intracellular signals generated when cells are stressed and depends on the release of proteins from the intermembrane space of mitochondria. The extrinsic pathway is activated by extracellular ligands binding to cell-surface death receptors, which leads to the formation of the death-inducing signaling complex (DISC). A cell initiates intracellular apoptotic signaling in response to a stress, which may bring about cell suicide. The binding of nuclear receptors by glucocorticoids, heat, radiation, nutrient deprivation, viral infection, hypoxia, increased intracellular concentration of free fatty acids and increased intracellular calcium concentration, for example, by damage to the membrane, can all trigger the release of intracellular apoptotic signals by a damaged cell. A number of cellular components, such as poly ADP ribose polymerase, may also help regulate apoptosis. Single cell fluctuations have been observed in experimental studies of stress induced apoptosis. Before the actual process of cell death is precipitated by enzymes, apoptotic signals must cause regulatory proteins to initiate the apoptosis pathway. 
This step allows those signals to cause cell death, or the process to be stopped, should the cell no longer need to die. Several proteins are involved, but two main methods of regulation have been identified: the targeting of mitochondria functionality, or directly transducing the signal via adaptor proteins to the apoptotic mechanisms. An extrinsic pathway for initiation identified in several toxin studies is an increase in calcium concentration within a cell caused by drug activity, which also can cause apoptosis via a calcium binding protease calpain. Intrinsic pathway The intrinsic pathway is also known as the mitochondrial pathway. Mitochondria are essential to multicellular life. Without them, a cell ceases to respire aerobically and quickly dies. This fact forms the basis for some apoptotic pathways. Apoptotic proteins that target mitochondria affect them in different ways. They may cause mitochondrial swelling through the formation of membrane pores, or they may increase the permeability of the mitochondrial membrane and cause apoptotic effectors to leak out. They are very closely related to intrinsic pathway, and tumors arise more frequently through intrinsic pathway than the extrinsic pathway because of sensitivity. There is also a growing body of evidence indicating that nitric oxide is able to induce apoptosis by helping to dissipate the membrane potential of mitochondria and therefore make it more permeable. Nitric oxide has been implicated in initiating and inhibiting apoptosis through its possible action as a signal molecule of subsequent pathways that activate apoptosis. During apoptosis, cytochrome c is released from mitochondria through the actions of the proteins Bax and Bak. The mechanism of this release is enigmatic, but appears to stem from a multitude of Bax/Bak homo- and hetero-dimers of Bax/Bak inserted into the outer membrane. Once cytochrome c is released it binds with Apoptotic protease activating factor – 1 (Apaf-1) and ATP, which then bind to pro-caspase-9 to create a protein complex known as an apoptosome. The apoptosome cleaves the pro-caspase to its active form of caspase-9, which in turn cleaves and activates pro-caspase into the effector caspase-3. Mitochondria also release proteins known as SMACs (second mitochondria-derived activator of caspases) into the cell's cytosol following the increase in permeability of the mitochondria membranes. SMAC binds to proteins that inhibit apoptosis (IAPs) thereby deactivating them, and preventing the IAPs from arresting the process and therefore allowing apoptosis to proceed. IAP also normally suppresses the activity of a group of cysteine proteases called caspases, which carry out the degradation of the cell. Therefore, the actual degradation enzymes can be seen to be indirectly regulated by mitochondrial permeability. Extrinsic pathway Two theories of the direct initiation of apoptotic mechanisms in mammals have been suggested: the TNF-induced (tumor necrosis factor) model and the Fas-Fas ligand-mediated model, both involving receptors of the TNF receptor (TNFR) family coupled to extrinsic signals. TNF pathway TNF-alpha is a cytokine produced mainly by activated macrophages, and is the major extrinsic mediator of apoptosis. Most cells in the human body have two receptors for TNF-alpha: TNFR1 and TNFR2. 
The binding of TNF-alpha to TNFR1 has been shown to initiate the pathway that leads to caspase activation via the intermediate membrane proteins TNF receptor-associated death domain (TRADD) and Fas-associated death domain protein (FADD). cIAP1/2 can inhibit TNF-α signaling by binding to TRAF2. FLIP inhibits the activation of caspase-8. Binding of this receptor can also indirectly lead to the activation of transcription factors involved in cell survival and inflammatory responses. However, signalling through TNFR1 might also induce apoptosis in a caspase-independent manner. The link between TNF-alpha and apoptosis shows why an abnormal production of TNF-alpha plays a fundamental role in several human diseases, especially in autoimmune diseases. The TNF-alpha receptor superfamily also includes death receptors (DRs), such as DR4 and DR5. These receptors bind to the protein TRAIL and mediate apoptosis. Apoptosis is known to be one of the primary mechanisms of targeted cancer therapy. Luminescent iridium complex-peptide hybrids (IPHs) have recently been designed, which mimic TRAIL and bind to death receptors on cancer cells, thereby inducing their apoptosis. Fas pathway The Fas receptor (First apoptosis signal; also known as Apo-1 or CD95) is a transmembrane protein of the TNF family which binds the Fas ligand (FasL). The interaction between Fas and FasL results in the formation of the death-inducing signaling complex (DISC), which contains FADD, caspase-8 and caspase-10. In some types of cells (type I), processed caspase-8 directly activates other members of the caspase family, and triggers the execution of apoptosis of the cell. In other types of cells (type II), the Fas-DISC starts a feedback loop that spirals into increasing release of proapoptotic factors from mitochondria and the amplified activation of caspase-8. Common components Following TNF-R1 and Fas activation in mammalian cells, a balance between proapoptotic (BAX, BID, BAK, or BAD) and anti-apoptotic (Bcl-Xl and Bcl-2) members of the Bcl-2 family is established. This balance is the proportion of proapoptotic homodimers that form in the outer membrane of the mitochondrion. The proapoptotic homodimers are required to make the mitochondrial membrane permeable for the release of caspase activators such as cytochrome c and SMAC. Control of proapoptotic proteins under normal conditions in nonapoptotic cells is incompletely understood, but in general, Bax or Bak are activated by the activation of BH3-only proteins, part of the Bcl-2 family. Caspases Caspases play the central role in the transduction of ER apoptotic signals. Caspases are highly conserved, cysteine-dependent aspartate-specific proteases. There are two types of caspases: initiator caspases (caspases 2, 8, 9, 10, 11 and 12) and effector caspases (caspases 3, 6 and 7). The activation of initiator caspases requires binding to a specific oligomeric activator protein. Effector caspases are then activated by these active initiator caspases through proteolytic cleavage. The active effector caspases then proteolytically degrade a host of intracellular proteins to carry out the cell death program. Caspase-independent apoptotic pathway There also exists a caspase-independent apoptotic pathway that is mediated by AIF (apoptosis-inducing factor). Apoptosis model in amphibians The amphibian frog Xenopus laevis serves as an ideal model system for the study of the mechanisms of apoptosis.
In fact, iodine and thyroxine also stimulate the spectacular apoptosis of the cells of the larval gills, tail and fins in amphibians metamorphosis, and stimulate the evolution of their nervous system transforming the aquatic, vegetarian tadpole into the terrestrial, carnivorous frog. Negative regulators of apoptosis Negative regulation of apoptosis inhibits cell death signaling pathways, helping tumors to evade cell death and developing drug resistance. The ratio between anti-apoptotic (Bcl-2) and pro-apoptotic (Bax) proteins determines whether a cell lives or dies. Many families of proteins act as negative regulators categorized into either antiapoptotic factors, such as IAPs and Bcl-2 proteins or prosurvival factors like cFLIP, BNIP3, FADD, Akt, and NF-κB. Proteolytic caspase cascade: Killing the cell Many pathways and signals lead to apoptosis, but these converge on a single mechanism that actually causes the death of the cell. After a cell receives stimulus, it undergoes organized degradation of cellular organelles by activated proteolytic caspases. In addition to the destruction of cellular organelles, mRNA is rapidly and globally degraded by a mechanism that is not yet fully characterized. mRNA decay is triggered very early in apoptosis. A cell undergoing apoptosis shows a series of characteristic morphological changes. Early alterations include: Cell shrinkage and rounding occur because of the retraction lamellipodia and the breakdown of the proteinaceous cytoskeleton by caspases. The cytoplasm appears dense, and the organelles appear tightly packed. Chromatin undergoes condensation into compact patches against the nuclear envelope (also known as the perinuclear envelope) in a process known as pyknosis, a hallmark of apoptosis. The nuclear envelope becomes discontinuous and the DNA inside it is fragmented in a process referred to as karyorrhexis. The nucleus breaks into several discrete chromatin bodies or nucleosomal units due to the degradation of DNA. Apoptosis progresses quickly and its products are quickly removed, making it difficult to detect or visualize on classical histology sections. During karyorrhexis, endonuclease activation leaves short DNA fragments, regularly spaced in size. These give a characteristic "laddered" appearance on agar gel after electrophoresis. Tests for DNA laddering differentiate apoptosis from ischemic or toxic cell death. Apoptotic cell disassembly Before the apoptotic cell is disposed of, there is a process of disassembly. There are three recognized steps in apoptotic cell disassembly: Membrane blebbing: The cell membrane shows irregular buds known as blebs. Initially these are smaller surface blebs. Later these can grow into larger so-called dynamic membrane blebs. An important regulator of apoptotic cell membrane blebbing is ROCK1 (rho associated coiled-coil-containing protein kinase 1). Formation of membrane protrusions: Some cell types, under specific conditions, may develop different types of long, thin extensions of the cell membrane called membrane protrusions. Three types have been described: microtubule spikes, apoptopodia (feet of death), and beaded apoptopodia (the latter having a beads-on-a-string appearance). Pannexin 1 is an important component of membrane channels involved in the formation of apoptopodia and beaded apoptopodia. Fragmentation: The cell breaks apart into multiple vesicles called apoptotic bodies, which undergo phagocytosis. The plasma membrane protrusions may help bring apoptotic bodies closer to phagocytes. 
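As a deliberately simplified toy model of the convergence described above, in which upstream signals activate initiator caspases that in turn activate executioner caspases, the sketch below integrates two coupled activation equations and reports whether the effector level crosses an arbitrary "commitment" threshold. All rate constants, the cooperative (squared) activation term and the threshold are invented for illustration; this is not a calibrated biological model.

# Toy model of the caspase logic described above: an upstream signal (DISC
# formation in the extrinsic pathway, or apoptosome assembly in the intrinsic
# pathway) activates an initiator caspase, which in turn activates an effector
# caspase.  The squared term is an invented cooperativity assumption so that
# weak signals do not commit the cell; nothing here is biologically calibrated.

def simulate_cascade(signal, k_init=0.8, k_exec=1.2, decay=0.1,
                     dt=0.01, t_end=30.0, threshold=0.5):
    initiator, effector = 0.0, 0.0
    t = 0.0
    while t < t_end:
        # initiator caspase activated in proportion to the upstream signal
        d_init = k_init * signal * (1.0 - initiator) - decay * initiator
        # effector caspase activated (cooperatively) by the active initiator
        d_exec = k_exec * (initiator ** 2) * (1.0 - effector) - decay * effector
        initiator += d_init * dt
        effector += d_exec * dt
        t += dt
    return effector, effector > threshold

if __name__ == "__main__":
    for s in (0.02, 0.2, 1.0):
        level, committed = simulate_cascade(s)
        print(f"signal={s:.2f}  effector caspase={level:.2f}  cell death={committed}")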
Removal of dead cells The removal of dead cells by neighboring phagocytic cells has been termed efferocytosis. Dying cells that undergo the final stages of apoptosis display phagocytotic molecules, such as phosphatidylserine, on their cell surface. Phosphatidylserine is normally found on the inner leaflet surface of the plasma membrane, but is redistributed during apoptosis to the extracellular surface by a protein known as scramblase. These molecules mark the cell for phagocytosis by cells possessing the appropriate receptors, such as macrophages. The removal of dying cells by phagocytes occurs in an orderly manner without eliciting an inflammatory response. During apoptosis cellular RNA and DNA are separated from each other and sorted to different apoptotic bodies; separation of RNA is initiated as nucleolar segregation. Pathway knock-outs Many knock-outs have been made in the apoptosis pathways to test the function of each of the proteins. Several caspases, in addition to APAF1 and FADD, have been mutated to determine the new phenotype. In order to create a tumor necrosis factor (TNF) knockout, an exon containing the nucleotides 3704–5364 was removed from the gene. This exon encodes a portion of the mature TNF domain, as well as the leader sequence, which is a highly conserved region necessary for proper intracellular processing. TNF-/- mice develop normally and have no gross structural or morphological abnormalities. However, upon immunization with SRBC (sheep red blood cells), these mice demonstrated a deficiency in the maturation of an antibody response; they were able to generate normal levels of IgM, but could not develop specific IgG levels. Apaf-1 is the protein that turns on caspase 9 by cleavage to begin the caspase cascade that leads to apoptosis. Since a -/- mutation in the APAF-1 gene is embryonic lethal, a gene trap strategy was used in order to generate an APAF-1 -/- mouse. This assay is used to disrupt gene function by creating an intragenic gene fusion. When an APAF-1 gene trap is introduced into cells, many morphological changes occur, such as spina bifida, the persistence of interdigital webs, and open brain. In addition, after embryonic day 12.5, the brain of the embryos showed several structural changes. APAF-1 cells are protected from apoptosis stimuli such as irradiation. A BAX-1 knock-out mouse exhibits normal forebrain formation and a decreased programmed cell death in some neuronal populations and in the spinal cord, leading to an increase in motor neurons. The caspase proteins are integral parts of the apoptosis pathway, so it follows that knock-outs made have varying damaging results. A caspase 9 knock-out leads to a severe brain malformation. A caspase 8 knock-out leads to cardiac failure and thus embryonic lethality. However, with the use of cre-lox technology, a caspase 8 knock-out has been created that exhibits an increase in peripheral T cells, an impaired T cell response, and a defect in neural tube closure. These mice were found to be resistant to apoptosis mediated by CD95, TNFR, etc. but not resistant to apoptosis caused by UV irradiation, chemotherapeutic drugs, and other stimuli. Finally, a caspase 3 knock-out was characterized by ectopic cell masses in the brain and abnormal apoptotic features such as membrane blebbing or nuclear fragmentation. 
A remarkable feature of these KO mice is that they have a very restricted phenotype: Casp3, Casp9 and APAF-1 KO mice show deformations of neural tissue, while FADD and Casp8 KO mice show defective heart development. In both types of KO, however, other organs developed normally and some cell types were still sensitive to apoptotic stimuli, suggesting that unknown proapoptotic pathways exist. Methods for distinguishing apoptotic from necrotic (necroptotic) cells Apoptotic and necrotic (necroptotic) cells can be distinguished by analysis of morphology using label-free live cell imaging, time-lapse microscopy, flow fluorocytometry, and transmission electron microscopy. There are also various biochemical techniques for analysis of cell surface markers (phosphatidylserine exposure versus cell permeability by flow cytometry), cellular markers such as DNA fragmentation (flow cytometry), caspase activation, Bid cleavage, and cytochrome c release (Western blotting). Primary and secondary necrotic cells can be distinguished by analysis of the supernatant for caspases, HMGB1, and released cytokeratin 18. However, no distinct surface or biochemical markers of necrotic cell death have been identified yet, and only negative markers are available. These include the absence of apoptotic markers (caspase activation, cytochrome c release, and oligonucleosomal DNA fragmentation) and differential kinetics of cell death markers (phosphatidylserine exposure and cell membrane permeabilization). A selection of techniques that can be used to distinguish apoptotic from necroptotic cells can be found in these references. Implication in disease Defective pathways The many different types of apoptotic pathways contain a multitude of different biochemical components, many of them not yet understood. As a pathway is more or less sequential in nature, removing or modifying one component leads to an effect in another. In a living organism, this can have disastrous effects, often in the form of disease or disorder. A discussion of every disease caused by modification of the various apoptotic pathways would be impractical, but the concept underlying each one is the same: the normal functioning of the pathway has been disrupted in such a way as to impair the ability of the cell to undergo normal apoptosis. This results in a cell that lives past its "use-by date" and is able to replicate and pass on any faulty machinery to its progeny, increasing the likelihood of the cell's becoming cancerous or diseased. A recently described example of this concept in action can be seen in the development of the NCI-H460 lung cancer cell line. The X-linked inhibitor of apoptosis protein (XIAP) is overexpressed in cells of the H460 cell line. XIAPs bind to the processed form of caspase-9 and suppress the activity of the apoptotic activator cytochrome c; overexpression therefore leads to a decrease in the number of proapoptotic agonists. As a consequence, the balance of anti-apoptotic and proapoptotic effectors is upset in favour of the former, and the damaged cells continue to replicate despite being directed to die. Defects in regulation of apoptosis in cancer cells often occur at the level of control of transcription factors. As a particular example, defects in molecules that control the transcription factor NF-κB in cancer change the mode of transcriptional regulation and the response to apoptotic signals, curtailing dependence on the tissue to which the cell belongs.
This degree of independence from external survival signals can enable cancer metastasis. Dysregulation of p53 The tumor-suppressor protein p53 accumulates when DNA is damaged due to a chain of biochemical factors. Part of this pathway includes alpha-interferon and beta-interferon, which induce transcription of the p53 gene, resulting in the increase of p53 protein level and enhancement of cancer cell apoptosis. p53 prevents the cell from replicating by stopping the cell cycle at G1, or interphase, to give the cell time to repair; however, it will induce apoptosis if the damage is extensive and repair efforts fail. Any disruption to the regulation of the p53 or interferon genes will result in impaired apoptosis and the possible formation of tumors. Inhibition Inhibition of apoptosis can result in a number of cancers, inflammatory diseases, and viral infections. It was originally believed that the associated accumulation of cells was due to an increase in cellular proliferation, but it is now known that it is also due to a decrease in cell death. The most common of these diseases is cancer, the disease of excessive cellular proliferation, which is often characterized by an overexpression of IAP family members. As a result, the malignant cells experience an abnormal response to apoptosis induction: cycle-regulating genes (such as p53, ras or c-myc) are mutated or inactivated in diseased cells, and further genes (such as bcl-2) also modify their expression in tumors. Some apoptotic factors are vital during mitochondrial respiration, e.g. cytochrome c. Pathological inactivation of apoptosis in cancer cells is correlated with frequent respiratory metabolic shifts toward glycolysis (an observation known as the "Warburg hypothesis"). HeLa cell Apoptosis in HeLa cells is inhibited by proteins produced by the cell; these inhibitory proteins target retinoblastoma tumor-suppressing proteins. These tumor-suppressing proteins regulate the cell cycle, but are rendered inactive when bound to an inhibitory protein. HPV E6 and E7 are inhibitory proteins expressed by the human papillomavirus, HPV being responsible for the formation of the cervical tumor from which HeLa cells are derived. HPV E6 causes p53, which regulates the cell cycle, to become inactive. HPV E7 binds to retinoblastoma tumor-suppressing proteins and limits their ability to control cell division. These two inhibitory proteins are partially responsible for HeLa cells' immortality by preventing apoptosis from occurring. Canine distemper virus (CDV) is able to induce apoptosis despite the presence of these inhibitory proteins. This is an important oncolytic property of CDV: this virus is capable of killing canine lymphoma cells. Oncoproteins E6 and E7 still leave p53 inactive, but they are not able to avoid the activation of caspases induced by the stress of viral infection. These oncolytic properties provide a promising link between CDV and lymphoma apoptosis, which may lead to the development of alternative treatment methods for both canine lymphoma and human non-Hodgkin lymphoma. Defects in the cell cycle are thought to be responsible for the resistance to chemotherapy or radiation by certain tumor cells, so a virus that can induce apoptosis despite defects in the cell cycle is useful for cancer treatment.
Treatments The main approach to treating diseases of apoptotic signaling involves either increasing or decreasing the susceptibility of diseased cells to apoptosis, depending on whether the disease is caused by inhibited or excessive apoptosis. For instance, treatments aim to restore apoptosis to treat diseases with deficient cell death, and to increase the apoptotic threshold to treat diseases involving excessive cell death. To stimulate apoptosis, one can increase the number of death receptor ligands (such as TNF or TRAIL), antagonize the anti-apoptotic Bcl-2 pathway, or introduce Smac mimetics to inhibit the inhibitors (IAPs). The addition of agents such as Herceptin, Iressa, or Gleevec works to stop cells from cycling and causes apoptosis activation by blocking growth and survival signaling further upstream. Finally, agents that disrupt p53–MDM2 complexes displace p53 from MDM2 and activate the p53 pathway, leading to cell cycle arrest and apoptosis. Many different methods can be used either to stimulate or to inhibit apoptosis in various places along the death signaling pathway. Apoptosis is a multi-step, multi-pathway cell-death programme that is inherent in every cell of the body. In cancer, the ratio of apoptosis to cell division is altered. Cancer treatment by chemotherapy and irradiation kills target cells primarily by inducing apoptosis. Hyperactive apoptosis On the other hand, loss of control of cell death (resulting in excess apoptosis) can lead to neurodegenerative diseases, hematologic diseases, and tissue damage. Notably, neurons that rely on mitochondrial respiration undergo apoptosis in neurodegenerative diseases such as Alzheimer's and Parkinson's (an observation known as the "inverse Warburg hypothesis"). Moreover, there is an inverse epidemiological comorbidity between neurodegenerative diseases and cancer. The progression of HIV is directly linked to excess, unregulated apoptosis. In a healthy individual, the number of CD4+ lymphocytes is in balance with the cells generated by the bone marrow; however, in HIV-positive patients, this balance is lost due to an inability of the bone marrow to regenerate CD4+ cells. In the case of HIV, CD4+ lymphocytes die at an accelerated rate through uncontrolled apoptosis when stimulated. At the molecular level, hyperactive apoptosis can be caused by defects in signaling pathways that regulate the Bcl-2 family proteins. Increased expression of apoptotic proteins such as BIM, or their decreased proteolysis, leads to cell death and can cause a number of pathologies, depending on the cells where excessive activity of BIM occurs. Cancer cells can escape apoptosis through mechanisms that suppress BIM expression or by increased proteolysis of BIM. Treatments Treatments aiming to inhibit apoptosis work by blocking specific caspases. Finally, the Akt protein kinase promotes cell survival through two pathways. Akt phosphorylates and inhibits Bad (a Bcl-2 family member), causing Bad to interact with the 14-3-3 scaffold; the resulting dissociation of Bad from Bcl-2/Bcl-xL promotes cell survival. Akt also activates IKKα, which leads to NF-κB activation and cell survival. Active NF-κB induces the expression of anti-apoptotic genes such as Bcl-2, resulting in inhibition of apoptosis. NF-κB has been found to play both an antiapoptotic role and a proapoptotic role depending on the stimuli utilized and the cell type.
HIV progression The progression of the human immunodeficiency virus infection into AIDS is due primarily to the depletion of CD4+ T-helper lymphocytes in a manner that is too rapid for the body's bone marrow to replenish the cells, leading to a compromised immune system. One of the mechanisms by which T-helper cells are depleted is apoptosis, which results from a series of biochemical pathways: HIV enzymes deactivate anti-apoptotic Bcl-2. This does not directly cause cell death but primes the cell for apoptosis should the appropriate signal be received. In parallel, these enzymes activate proapoptotic procaspase-8, which does directly activate the mitochondrial events of apoptosis. HIV may increase the level of cellular proteins that prompt Fas-mediated apoptosis. HIV proteins decrease the amount of CD4 glycoprotein marker present on the cell membrane. Released viral particles and proteins present in extracellular fluid are able to induce apoptosis in nearby "bystander" T helper cells. HIV decreases the production of molecules involved in marking the cell for apoptosis, giving the virus time to replicate and continue releasing apoptotic agents and virions into the surrounding tissue. The infected CD4+ cell may also receive the death signal from a cytotoxic T cell. Cells may also die as direct consequences of viral infections. HIV-1 expression induces tubular cell G2/M arrest and apoptosis. The progression from HIV to AIDS is not immediate or even necessarily rapid; HIV's cytotoxic activity toward CD4+ lymphocytes is classified as AIDS once a given patient's CD4+ cell count falls below 200. Researchers from Kumamoto University in Japan have developed a new method to eradicate HIV in viral reservoir cells, named "Lock-in and apoptosis." Using the synthesized compound Heptanoylphosphatidyl L-Inositol Pentakisphophate (or L-Hippo) to bind strongly to the HIV protein PR55Gag, they were able to suppress viral budding. By suppressing viral budding, the researchers were able to trap the HIV virus in the cell and allow for the cell to undergo apoptosis (natural cell death). Associate Professor Mikako Fujita has stated that the approach is not yet available to HIV patients because the research team has to conduct further research on combining the drug therapy that currently exists with this "Lock-in and apoptosis" approach to lead to complete recovery from HIV. Viral infection Viral induction of apoptosis occurs when one or several cells of a living organism are infected with a virus, leading to cell death. Cell death in organisms is necessary for the normal development of cells and the cell cycle maturation. It is also important in maintaining the regular functions and activities of cells. Viruses can trigger apoptosis of infected cells via a range of mechanisms including: Receptor binding Activation of protein kinase R (PKR) Interaction with p53 Expression of viral proteins coupled to MHC proteins on the surface of the infected cell, allowing recognition by cells of the immune system (such as Natural Killer and cytotoxic T cells) that then induce the infected cell to undergo apoptosis. Canine distemper virus (CDV) is known to cause apoptosis in central nervous system and lymphoid tissue of infected dogs in vivo and in vitro. Apoptosis caused by CDV is typically induced via the extrinsic pathway, which activates caspases that disrupt cellular function and eventually leads to the cells death. 
In normal cells, CDV activates caspase-8 first, which works as the initiator protein followed by the executioner protein caspase-3. However, apoptosis induced by CDV in HeLa cells does not involve the initiator protein caspase-8. HeLa cell apoptosis caused by CDV follows a different mechanism than that in vero cell lines. This change in the caspase cascade suggests CDV induces apoptosis via the intrinsic pathway, excluding the need for the initiator caspase-8. The executioner protein is instead activated by the internal stimuli caused by viral infection not a caspase cascade. The Oropouche virus (OROV) is found in the family Bunyaviridae. The study of apoptosis brought on by Bunyaviridae was initiated in 1996, when it was observed that apoptosis was induced by the La Crosse virus into the kidney cells of baby hamsters and into the brains of baby mice. OROV is a disease that is transmitted between humans by the biting midge (Culicoides paraensis). It is referred to as a zoonotic arbovirus and causes febrile illness, characterized by the onset of a sudden fever known as Oropouche fever. The Oropouche virus also causes disruption in cultured cells – cells that are cultivated in distinct and specific conditions. An example of this can be seen in HeLa cells, whereby the cells begin to degenerate shortly after they are infected. With the use of gel electrophoresis, it can be observed that OROV causes DNA fragmentation in HeLa cells. It can be interpreted by counting, measuring, and analyzing the cells of the Sub/G1 cell population. When HeLA cells are infected with OROV, the cytochrome C is released from the membrane of the mitochondria, into the cytosol of the cells. This type of interaction shows that apoptosis is activated via an intrinsic pathway. In order for apoptosis to occur within OROV, viral uncoating, viral internalization, along with the replication of cells is necessary. Apoptosis in some viruses is activated by extracellular stimuli. However, studies have demonstrated that the OROV infection causes apoptosis to be activated through intracellular stimuli and involves the mitochondria. Many viruses encode proteins that can inhibit apoptosis. Several viruses encode viral homologs of Bcl-2. These homologs can inhibit proapoptotic proteins such as BAX and BAK, which are essential for the activation of apoptosis. Examples of viral Bcl-2 proteins include the Epstein-Barr virus BHRF1 protein and the adenovirus E1B 19K protein. Some viruses express caspase inhibitors that inhibit caspase activity and an example is the CrmA protein of cowpox viruses. Whilst a number of viruses can block the effects of TNF and Fas. For example, the M-T2 protein of myxoma viruses can bind TNF preventing it from binding the TNF receptor and inducing a response. Furthermore, many viruses express p53 inhibitors that can bind p53 and inhibit its transcriptional transactivation activity. As a consequence, p53 cannot induce apoptosis, since it cannot induce the expression of proapoptotic proteins. The adenovirus E1B-55K protein and the hepatitis B virus HBx protein are examples of viral proteins that can perform such a function. Viruses can remain intact from apoptosis in particular in the latter stages of infection. They can be exported in the apoptotic bodies that pinch off from the surface of the dying cell, and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favours the spread of the virus. 
Plants Programmed cell death in plants has a number of molecular similarities to that of animal apoptosis, but it also has differences, notable ones being the presence of a cell wall and the lack of an immune system that removes the pieces of the dead cell. Instead of an immune response, the dying cell synthesizes substances to break itself down and places them in a vacuole that ruptures as the cell dies. Whether this whole process resembles animal apoptosis closely enough to warrant using the name apoptosis (as opposed to the more general programmed cell death) is unclear. Caspase-independent apoptosis The characterization of the caspases allowed the development of caspase inhibitors, which can be used to determine whether a cellular process involves active caspases. Using these inhibitors it was discovered that cells can die while displaying a morphology similar to apoptosis without caspase activation. Later studies linked this phenomenon to the release of AIF (apoptosis-inducing factor) from the mitochondria and its translocation into the nucleus mediated by its NLS (nuclear localization signal). Inside the mitochondria, AIF is anchored to the inner membrane. In order to be released, the protein is cleaved by a calcium-dependent calpain protease. See also Anoikis Apaf-1 Apo2.7 Apoptotic DNA fragmentation Atromentin induces apoptosis in human leukemia U937 cells. Autolysis Autophagy Cisplatin Cytotoxicity Entosis Ferroptosis Homeostasis Immunology Necrobiosis Necrosis Necrotaxis Nemosis p53 Paraptosis Pseudoapoptosis PI3K/AKT/mTOR pathway Explanatory footnotes Citations General bibliography External links Apoptosis & cell surface Apoptosis & Caspase 3, The Proteolysis Map – animation Apoptosis & Caspase 8, The Proteolysis Map – animation Apoptosis & Caspase 7, The Proteolysis Map – animation Apoptosis MiniCOPE Dictionary – list of apoptosis terms and acronyms Apoptosis (Programmed Cell Death) – The Virtual Library of Biochemistry, Molecular Biology and Cell Biology Apoptosis Research Portal Apoptosis Info Apoptosis protocols, articles, news, and recent publications. Database of proteins involved in apoptosis Apoptosis Video Apoptosis Video (WEHI on YouTube ) The Mechanisms of Apoptosis Kimball's Biology Pages. Simple explanation of the mechanisms of apoptosis triggered by internal signals (bcl-2), along the caspase-9, caspase-3 and caspase-7 pathway; and by external signals (FAS and TNF), along the caspase 8 pathway. Accessed 25 March 2007. WikiPathways – Apoptosis pathway "Finding Cancer's Self-Destruct Button". CR magazine (Spring 2007). Article on apoptosis and cancer. Xiaodong Wang's lecture: Introduction to Apoptosis Robert Horvitz's Short Clip: Discovering Programmed Cell Death The Bcl-2 Database DeathBase: a database of proteins involved in cell death, curated by experts European Cell Death Organization Apoptosis signaling pathway created by Cusabio Cell signaling Cellular senescence Immunology Medical aspects of death Programmed cell death
Apoptosis
ATM or atm often refers to: Atmosphere (unit) or atm, a unit of atmospheric pressure Automated teller machine, a cash dispenser or cash machine ATM or atm may also refer to: Computing ATM (computer), a ZX Spectrum clone developed in Moscow in 1991 Adobe Type Manager, a computer program for managing fonts Accelerated Turing machine, or Zeno machine, a model of computation used in theoretical computer science Alternating Turing machine, a model of computation used in theoretical computer science Asynchronous Transfer Mode, a telecommunications protocol used in networking ATM adaptation layer ATM Adaptation Layer 5 Media Amateur Telescope Making, a series of books by Albert Graham Ingalls ATM (2012 film), an American film ATM (2015 film), a Malayalam film ATM: Er Rak Error, a 2012 Thai film Azhagiya Tamil Magan, a 2007 Indian film "ATM" (song), a 2018 song by J. Cole from KOD People and organizations Abiding Truth Ministries, in Springfield, Massachusetts, US Association of Teachers of Mathematics, UK Acrylic Tank Manufacturing, US aquarium manufacturer, televised in Tanked ATM FA, a football club in Malaysia A. T. M. Wilson (1906–1978), British psychiatrist African Transformation Movement, South African political party founded in 2018 The a2 Milk Company (NZX ticker symbol ATM) Science Apollo Telescope Mount, a solar observatory ATM serine/threonine kinase, a serine/threonine kinase activated by DNA damage The Airborne Topographic Mapper, a laser altimeter among the instruments used by NASA's Operation IceBridge Transportation Active traffic management, a motorway scheme on the M42 in England Air traffic management, a concept in aviation Altamira Airport, in Brazil (IATA code ATM) Azienda Trasporti Milanesi, the municipal public transport company of Milan Airlines of Tasmania (ICAO code ATM) Catalonia, Spain Autoritat del Transport Metropolità (ATM Àrea de Barcelona), in the Barcelona metropolitan area Autoritat Territorial de la Mobilitat del Camp de Tarragona (ATM Camp de Tarragona), in the Camp de Tarragona area Autoritat Territorial de la Mobilitat de l'Àrea de Girona (ATM Àrea de Girona), in the Girona area Autoritat Territorial de la Mobilitat de l'Àrea de Lleida (ATM Àrea de Lleida), in the Lleida area Other uses Actun Tunichil Muknal, a cave in Belize Anti-tank missile, a missile designed to destroy tanks Ass to mouth, a sexual act At the money, moneyness where the strike price is the same as the current spot price Automatenmarken, a variable value stamp Contracted form of Atlético Madrid, football club in Spain Common abbreviation in SMS language for "at the moment"
ATM
Asynchronous Transfer Mode (ATM) is a telecommunications standard defined by ANSI and ITU-T (formerly CCITT) for digital transmission of multiple types of traffic, including telephony (voice), data, and video signals in one network without the use of separate overlay networks. ATM was developed to meet the needs of the Broadband Integrated Services Digital Network, as defined in the late 1980s, and designed to integrate telecommunication networks. It can handle both traditional high-throughput data traffic and real-time, low-latency content such as voice and video. ATM provides functionality that uses features of circuit switching and packet switching networks. It uses asynchronous time-division multiplexing. In the OSI reference model data link layer (layer 2), the basic transfer units are generically called frames. In ATM these frames are of a fixed (53 octets or bytes) length and specifically called cells. This differs from approaches such as IP or Ethernet that use variable sized packets or frames. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the data exchange begins. These virtual circuits may be either permanent, i.e. dedicated connections that are usually preconfigured by the service provider, or switched, i.e. set up on a per-call basis using signaling and disconnected when the call is terminated. The ATM network reference model approximately maps to the three lowest layers of the OSI model: physical layer, data link layer, and network layer. ATM is a core protocol used in the SONET/SDH backbone of the public switched telephone network (PSTN) and in the Integrated Services Digital Network (ISDN), but has largely been superseded in favor of next-generation networks based on Internet Protocol (IP) technology, while wireless and mobile ATM never established a significant foothold. Protocol architecture If a speech signal is reduced to packets, and it is forced to share a link with bursty data traffic (traffic with an abnormally large number of packets over a brief period of time, such as could occur during a large scale emergency and the cellular network has become oversubscribed) then no matter how small the speech packets could be made, they would always encounter full-size data packets. Under normal queuing conditions the cells might experience maximum queuing delays. To avoid this issue, all ATM packets, or "cells," are the same small size. In addition, the fixed cell structure means that ATM can be readily switched by hardware without the inherent delays introduced by software switched and routed frames. Thus, the designers of ATM utilized small data cells to reduce jitter (large changes in the packet round trip time or one-way time, in this case) in the multiplexing of data streams. Reduction of jitter (and also end-to-end round-trip delays) is particularly important when carrying voice traffic, because the conversion of digitized voice into an analogue audio signal is an inherently real-time process, and to do a good job, the decoder that does this needs an evenly spaced (in time) stream of data items. If the next data item is not available when it is needed, the codec has no choice but to produce silence or guess – and if the data is late, it is useless, because the time period when it should have been converted to a signal has already passed. 
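A small worked calculation may help make the queuing-delay argument above concrete: it compares the serialization time of a full-size packet with that of a 53-byte cell at the link rates discussed in this article (a 155 Mbit/s SDH link and a 1.544 Mbit/s T1 line). The figures are straightforward arithmetic, not measurements.

# Worked arithmetic for the delay figures discussed in this article:
# serialization time of a full-length 1,500-byte packet versus a 53-byte ATM
# cell, on a 155 Mbit/s SDH link and on a 1.544 Mbit/s T1 line.

def serialization_time(size_bytes, rate_bit_s):
    return size_bytes * 8 / rate_bit_s

RATES = {"155 Mbit/s": 155e6, "1.544 Mbit/s T1": 1.544e6}
SIZES = {"1500-byte packet": 1500, "53-byte ATM cell": 53}

for rate_name, rate in RATES.items():
    for size_name, size in SIZES.items():
        t = serialization_time(size, rate)
        print(f"{size_name:18s} at {rate_name:15s}: {t * 1e6:10.2f} microseconds")

# Worst-case contention jitter improves by roughly the ratio of the two sizes:
print("jitter reduction factor ~", round(1500 / 53))   # about 28, i.e. almost 30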
At the time of the design of ATM, 155 Mbit/s synchronous digital hierarchy (SDH) with 135 Mbit/s payload was considered a fast optical network link, and many plesiochronous digital hierarchy (PDH) links in the digital network were considerably slower, ranging from 1.544 to 45 Mbit/s in the US, and 2 to 34 Mbit/s in Europe. At 155 Mbit/s, a typical full-length 1,500 byte (12,000-bit) data packet, sufficient to contain a maximum-sized IP packet for Ethernet, would take 77.42 µs to transmit. In a lower-speed link, such as a 1.544 Mbit/s T1 line, the same packet would take up to 7.8 milliseconds. A queuing delay induced by several such data packets might exceed the figure of 7.8 ms several times over, in addition to any packet generation delay in the shorter speech packet. This was considered unacceptable for speech traffic, which needs to have low jitter in the data stream being fed into the codec if it is to produce good-quality sound. A packet voice system can produce this low jitter in a number of ways: Using a playback buffer between the network and the codec, one large enough to tide the codec over almost all the jitter in the data. This allows smoothing out the jitter, but the delay introduced by passage through the buffer require echo cancellers even in local networks; this was considered too expensive at the time. Also, it increased the delay across the channel, and made conversation difficult over high-delay channels. Using a system that inherently provides low jitter (and minimal overall delay) to traffic that needs it. Operate on a 1:1 user basis (i.e., a dedicated pipe). The design of ATM aimed for a low-jitter network interface. However, "cells" were introduced into the design to provide short queuing delays while continuing to support datagram traffic. ATM broke up all packets, data, and voice streams into 48-byte chunks, adding a 5-byte routing header to each one so that they could be reassembled later. The choice of 48 bytes was political rather than technical. When the CCITT (now ITU-T) was standardizing ATM, parties from the United States wanted a 64-byte payload because this was felt to be a good compromise in larger payloads optimized for data transmission and shorter payloads optimized for real-time applications like voice; parties from Europe wanted 32-byte payloads because the small size (and therefore short transmission times) simplify voice applications with respect to echo cancellation. Most of the European parties eventually came around to the arguments made by the Americans, but France and a few others held out for a shorter cell length. With 32 bytes, France would have been able to implement an ATM-based voice network with calls from one end of France to the other requiring no echo cancellation. 48 bytes (plus 5 header bytes = 53) was chosen as a compromise between the two sides. 5-byte headers were chosen because it was thought that 10% of the payload was the maximum price to pay for routing information. ATM multiplexed these 53-byte cells instead of packets which reduced worst-case cell contention jitter by a factor of almost 30, reducing the need for echo cancellers. Cell structure An ATM cell consists of a 5-byte header and a 48-byte payload. The payload size of 48 bytes was chosen as described above. ATM defines two different cell formats: user–network interface (UNI) and network–network interface (NNI). Most ATM links use UNI cell format. 
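As an illustration of the fixed 48-byte-payload, 5-byte-header cell format just described, the sketch below carves an arbitrary packet into 53-byte cells. The header here is an opaque placeholder rather than a bit-accurate UNI header, and zero-padding the final cell is a simplification: real AAL5 segmentation also appends a trailer carrying the original length and a CRC.

# Minimal sketch of ATM's fixed-cell idea: a variable-length packet is carved
# into 48-byte payloads, each carried behind a 5-byte header.  The header is a
# placeholder, and padding the last cell with zeros is a simplification.

CELL_PAYLOAD = 48
HEADER_LEN = 5

def segment(packet: bytes, header: bytes = b"\x00" * HEADER_LEN):
    assert len(header) == HEADER_LEN
    cells = []
    for i in range(0, len(packet), CELL_PAYLOAD):
        chunk = packet[i:i + CELL_PAYLOAD]
        chunk = chunk.ljust(CELL_PAYLOAD, b"\x00")   # pad the final cell
        cells.append(header + chunk)
    return cells

if __name__ == "__main__":
    pkt = bytes(range(256)) * 5                      # a 1280-byte "packet"
    cells = segment(pkt)
    print(len(cells), "cells of", len(cells[0]), "bytes each")   # 27 cells of 53 bytes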
Cell structure An ATM cell consists of a 5-byte header and a 48-byte payload. The payload size of 48 bytes was chosen as described above. ATM defines two different cell formats: user–network interface (UNI) and network–network interface (NNI). Most ATM links use the UNI cell format. The header fields are:
GFC = Generic flow control (4 bits). The GFC field was originally added to support the connection of ATM networks to shared access networks such as a distributed queue dual bus (DQDB) ring. It was designed to give the user–network interface (UNI) 4 bits in which to negotiate multiplexing and flow control among the cells of various ATM connections. However, the use and exact values of the GFC field have not been standardized, and the field is always set to 0000.
VPI = Virtual path identifier (8 bits UNI, or 12 bits NNI)
VCI = Virtual channel identifier (16 bits)
PT = Payload type (3 bits). PT bit 3 (msbit): network management cell. If 0, user data cell and the following apply: PT bit 2: explicit forward congestion indication (EFCI), 1 = network congestion experienced; PT bit 1 (lsbit): ATM user-to-user (AAU) bit, used by AAL5 to indicate packet boundaries.
CLP = Cell loss priority (1 bit)
HEC = Header error control (8-bit CRC, polynomial x^8 + x^2 + x + 1)
ATM uses the PT field to designate various special kinds of cells for operations, administration and management (OAM) purposes, and to delineate packet boundaries in some ATM adaptation layers (AAL). If the most significant bit (MSB) of the PT field is 0, this is a user data cell, and the other two bits are used to indicate network congestion and as a general purpose header bit available for ATM adaptation layers. If the MSB is 1, this is a management cell, and the other two bits indicate the type (network management segment, network management end-to-end, resource management, or reserved for future use). Several ATM link protocols use the HEC field to drive a CRC-based framing algorithm, which allows locating the ATM cells with no overhead beyond what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit header errors and detect multi-bit header errors. When multi-bit header errors are detected, the current and subsequent cells are dropped until a cell with no header errors is found. A UNI cell reserves the GFC field for a local flow control/submultiplexing system between users. This was intended to allow several terminals to share a single network connection, in the same way that two Integrated Services Digital Network (ISDN) phones can share a single basic rate ISDN connection. All four GFC bits must be zero by default. The NNI cell format replicates the UNI format almost exactly, except that the 4-bit GFC field is re-allocated to the VPI field, extending the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost 2^12 VPs of up to almost 2^16 VCs each (in practice some of the VP and VC numbers are reserved). Service types ATM supports different types of services via AALs. Standardized AALs include AAL1, AAL2, and AAL5, and the rarely used AAL3 and AAL4. AAL1 is used for constant bit rate (CBR) services and circuit emulation. Synchronization is also maintained at AAL1. AAL2 through AAL4 are used for variable bit rate (VBR) services, and AAL5 for data. Which AAL is in use for a given cell is not encoded in the cell. Instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis. 
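The UNI header layout described earlier in this section can be made concrete with a short parsing sketch. This is not drawn from any particular ATM implementation; it simply unpacks the five header octets into the GFC, VPI, VCI, PT, CLP and HEC fields using the bit widths given above, and it extracts the HEC without verifying it.

```python
# Minimal sketch of unpacking the 5-byte UNI cell header described above
# (GFC, VPI, VCI, PT, CLP, HEC). Bytes are taken most-significant-bit first;
# the HEC value is extracted but not checked here.

def parse_uni_header(hdr: bytes) -> dict:
    if len(hdr) != 5:
        raise ValueError("UNI header must be exactly 5 bytes")
    b0, b1, b2, b3, b4 = hdr
    return {
        "gfc": b0 >> 4,                                      # 4 bits
        "vpi": ((b0 & 0x0F) << 4) | (b1 >> 4),               # 8 bits
        "vci": ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4),  # 16 bits
        "pt":  (b3 >> 1) & 0x07,                             # 3 bits
        "clp": b3 & 0x01,                                    # 1 bit
        "hec": b4,                                           # 8-bit CRC over bytes 0-3
    }

# Example with arbitrary illustrative values: GFC=0, VPI=1, VCI=42, PT=0,
# CLP=1, HEC left at 0 (not computed in this sketch).
print(parse_uni_header(bytes([0x00, 0x10, 0x02, 0xA1, 0x00])))
```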
Following the initial design of ATM, networks have become much faster. A 1500-byte (12,000-bit) full-size Ethernet frame takes only 1.2 µs to transmit on a 10 Gbit/s network, reducing the need for small cells to reduce jitter due to contention. The increased link speeds by themselves do not alleviate jitter due to queuing. Additionally, the hardware for implementing the service adaptation for IP packets is expensive at very high speeds. ATM provides a useful ability to carry multiple logical circuits on a single physical or virtual medium, although other techniques exist, such as Multi-link PPP, Ethernet VLANs, and multi-protocol support over SONET. Virtual circuits A network must establish a connection before two parties can send cells to each other. In ATM this is called a virtual circuit (VC). It can be a permanent virtual circuit (PVC), which is created administratively on the end points, or a switched virtual circuit (SVC), which is created as needed by the communicating parties. SVC creation is managed by signaling, in which the requesting party indicates the address of the receiving party, the type of service requested, and whatever traffic parameters may be applicable to the selected service. "Call admission" is then performed by the network to confirm that the requested resources are available and that a route exists for the connection. Motivation ATM operates as a channel-based transport layer, using VCs. This is encompassed in the concept of the virtual paths (VP) and virtual channels. Every ATM cell has an 8- or 12-bit virtual path identifier (VPI) and a 16-bit virtual channel identifier (VCI) pair defined in its header. The VCI, together with the VPI, is used to identify the next destination of a cell as it passes through a series of ATM switches on its way to its destination. The length of the VPI varies according to whether the cell is sent on the user–network interface (on the edge of the network), or if it is sent on the network–network interface (inside the network). As these cells traverse an ATM network, switching takes place by changing the VPI/VCI values (label swapping). Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit is consistent (unlike IP, where any given packet could get to its destination by a different route than the others). ATM switches use the VPI/VCI fields to identify the virtual channel link (VCL) of the next network that a cell needs to transit on its way to its final destination. The function of the VCI is similar to that of the data link connection identifier (DLCI) in frame relay and the logical channel number and logical channel group number in X.25. Another advantage of virtual circuits is the ability to use them as a multiplexing layer, allowing different services (such as voice, frame relay, n × 64 kbit/s channels, and IP) to share the same ATM connection. The VPI is also useful for reducing the size of switching tables, since virtual circuits that share a common path can be switched together on the basis of their VPI alone. Types ATM can build virtual circuits and virtual paths either statically or dynamically. Static circuits (permanent virtual circuits or PVCs) or paths (permanent virtual paths or PVPs) require that the circuit be composed of a series of segments, one for each pair of interfaces through which it passes. PVPs and PVCs, though conceptually simple, require significant effort in large networks. They also do not support the re-routing of service in the event of a failure. Dynamically built PVPs (soft PVPs or SPVPs) and PVCs (soft PVCs or SPVCs), in contrast, are built by specifying the characteristics of the circuit (the service "contract") and the two endpoints. ATM networks create and remove switched virtual circuits (SVCs) on demand when requested by an end piece of equipment. One application for SVCs is to carry individual telephone calls when a network of telephone switches is interconnected using ATM. SVCs were also used in attempts to replace local area networks with ATM. 
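The label swapping described above can be pictured as a per-switch lookup table keyed on the incoming interface and VPI/VCI pair, which returns the outgoing interface and the rewritten VPI/VCI. The sketch below is a toy illustration with invented table entries, not a model of any real switch.

```python
# Toy illustration of VPI/VCI label swapping: each switch maps
# (input port, VPI, VCI) to (output port, new VPI, new VCI).
# The table contents are invented for the example.

switching_table = {
    # (in_port, in_vpi, in_vci): (out_port, out_vpi, out_vci)
    (1, 2, 34): (5, 7, 190),
    (1, 2, 35): (6, 7, 201),
}

def switch_cell(in_port: int, vpi: int, vci: int):
    entry = switching_table.get((in_port, vpi, vci))
    if entry is None:
        return None  # no virtual channel link configured; cell is discarded
    return entry     # cell leaves on a new port with swapped VPI/VCI labels

print(switch_cell(1, 2, 34))   # -> (5, 7, 190)
print(switch_cell(1, 9, 99))   # -> None (unknown virtual channel)
```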
Routing Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private Network Node Interface or Private Network-to-Network Interface (PNNI) protocol to share topology information between switches and select a route through a network. PNNI is a link-state routing protocol like OSPF and IS-IS. PNNI also includes a very powerful route summarization mechanism to allow construction of very large networks, as well as a call admission control (CAC) algorithm which determines the availability of sufficient bandwidth on a proposed route through a network in order to satisfy the service requirements of a VC or VP. Traffic engineering Another key ATM concept involves the traffic contract. When an ATM circuit is set up, each switch on the circuit is informed of the traffic class of the connection. ATM traffic contracts form part of the mechanism by which "quality of service" (QoS) is ensured. There are four basic types (and several variants), each of which has a set of parameters describing the connection:
CBR - Constant bit rate: a Peak Cell Rate (PCR) is specified, which is constant.
VBR - Variable bit rate: an average or Sustainable Cell Rate (SCR) is specified, which can peak at a certain level, a PCR, for a maximum interval before becoming problematic.
ABR - Available bit rate: a minimum guaranteed rate is specified.
UBR - Unspecified bit rate: traffic is allocated to all remaining transmission capacity.
VBR has real-time and non-real-time variants, and serves for "bursty" traffic. Non-real-time is sometimes abbreviated to vbr-nrt. Most traffic classes also introduce the concept of cell delay variation tolerance (CDVT), which defines the "clumping" of cells in time. Traffic policing To maintain network performance, networks may apply traffic policing to virtual circuits to limit them to their traffic contracts at the entry points to the network, i.e. the user–network interfaces (UNIs) and network-to-network interfaces (NNIs), using usage/network parameter control (UPC and NPC). The reference model given by the ITU-T and ATM Forum for UPC and NPC is the generic cell rate algorithm (GCRA), which is a version of the leaky bucket algorithm. CBR traffic will normally be policed to a PCR and CDVT alone, whereas VBR traffic will normally be policed using a dual leaky bucket controller to a PCR and CDVT and an SCR and maximum burst size (MBS). The MBS will normally be the packet (SAR-SDU) size for the VBR VC in cells. If the traffic on a virtual circuit exceeds its traffic contract, as determined by the GCRA, the network can either drop the cells or mark the cell loss priority (CLP) bit (to identify a cell as potentially redundant). Basic policing works on a cell-by-cell basis, but this is sub-optimal for encapsulated packet traffic (as discarding a single cell will invalidate the whole packet). As a result, schemes such as partial packet discard (PPD) and early packet discard (EPD) have been created that will discard a whole series of cells until the next packet starts. This reduces the number of useless cells in the network, saving bandwidth for full packets. EPD and PPD work with AAL5 connections as they use the end-of-packet marker: the ATM user-to-ATM user (AUU) indication bit in the payload-type field of the header, which is set in the last cell of a SAR-SDU. 
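The GCRA referred to above is usually stated in a "virtual scheduling" form, in which each arriving cell is checked against a theoretical arrival time; the same algorithm is reused for traffic shaping in the next section. The sketch below is a minimal illustration of that form rather than any standard's pseudocode: T is the expected inter-cell interval (the reciprocal of the PCR), tau is the tolerance (CDVT), and the parameter values and arrival times are invented.

```python
# Minimal sketch of the generic cell rate algorithm (GCRA) in its
# "virtual scheduling" form, as used for UPC/NPC policing decisions
# (and, with the same parameters, for shaping). Values are illustrative.

class GCRA:
    def __init__(self, T: float, tau: float):
        self.T = T          # increment: expected inter-cell time at the PCR
        self.tau = tau      # limit: tolerated clumping of cells (CDVT)
        self.tat = 0.0      # theoretical arrival time of the next cell

    def conforming(self, t_arrival: float) -> bool:
        if t_arrival < self.tat - self.tau:
            return False    # cell arrived too early: drop it or tag CLP=1
        self.tat = max(t_arrival, self.tat) + self.T
        return True

policer = GCRA(T=1.0, tau=0.5)               # PCR of one cell per time unit
arrivals = [0.0, 0.6, 1.8, 2.0, 2.1, 5.0]
print([policer.conforming(t) for t in arrivals])
# -> [True, True, True, False, False, True]
```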
Traffic shaping Traffic shaping usually takes place in the network interface card (NIC) in user equipment, and attempts to ensure that the cell flow on a VC will meet its traffic contract, i.e. that cells will not be dropped or reduced in priority at the UNI. Since the reference model given for traffic policing in the network is the GCRA, this algorithm is normally used for shaping as well, and single and dual leaky bucket implementations may be used as appropriate. Reference model The ATM network reference model approximately maps to the three lowest layers of the OSI reference model. It specifies the following layers: at the physical network level, ATM specifies a layer that is equivalent to the OSI physical layer; the ATM layer roughly corresponds to the OSI data link layer; and the OSI network layer is implemented as the ATM adaptation layer (AAL). Deployment ATM became popular with telephone companies and many computer makers in the 1990s. However, even by the end of the decade, the better price/performance of Internet Protocol-based products was competing with ATM technology for integrating real-time and bursty network traffic. Companies such as FORE Systems focused on ATM products, while other large vendors such as Cisco Systems provided ATM as an option. After the burst of the dot-com bubble, some still predicted that "ATM is going to dominate". However, in 2005 the ATM Forum, which had been the trade organization promoting the technology, merged with groups promoting other technologies, and eventually became the Broadband Forum. Wireless or mobile ATM Wireless ATM, or mobile ATM, consists of an ATM core network with a wireless access network. ATM cells are transmitted from base stations to mobile terminals. Mobility functions are performed at an ATM switch in the core network, known as a "crossover switch", which is similar to the MSC (mobile switching center) of GSM networks. The advantage of wireless ATM is its high bandwidth and high-speed handoffs performed at layer 2. In the early 1990s, Bell Labs and NEC research labs worked actively in this field. Andy Hopper from the Cambridge University Computer Laboratory also worked in this area. A wireless ATM forum was formed to standardize the technology behind wireless ATM networks. The forum was supported by several telecommunication companies, including NEC, Fujitsu and AT&T. Mobile ATM aimed to provide high-speed multimedia communications technology, capable of delivering broadband mobile communications beyond that of GSM and WLANs. Versions One version of ATM is ATM25, where data is transferred at 25 Mbit/s. See also VoATM
Asynchronous Transfer Mode
Amphetamine (contracted from alpha-methylphenethylamine) is a central nervous system (CNS) stimulant that is used in the treatment of attention deficit hyperactivity disorder (ADHD), narcolepsy, and obesity. Amphetamine was discovered in 1887 and exists as two enantiomers: levoamphetamine and dextroamphetamine. Amphetamine properly refers to a specific chemical, the racemic free base, which is equal parts of the two enantiomers in their pure amine forms. The term is frequently used informally to refer to any combination of the enantiomers, or to either of them alone. Historically, it has been used to treat nasal congestion and depression. Amphetamine is also used as an athletic performance enhancer and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. It is a prescription drug in many countries, and unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with recreational use. The first amphetamine pharmaceutical was Benzedrine, a brand which was used to treat a variety of conditions. Currently, pharmaceutical amphetamine is prescribed as racemic amphetamine, Adderall, dextroamphetamine, or the inactive prodrug lisdexamfetamine. Amphetamine increases monoamine and excitatory neurotransmission in the brain, with its most pronounced effects targeting the norepinephrine and dopamine neurotransmitter systems. At therapeutic doses, amphetamine causes emotional and cognitive effects such as euphoria, change in desire for sex, increased wakefulness, and improved cognitive control. It induces physical effects such as improved reaction time, fatigue resistance, and increased muscle strength. Larger doses of amphetamine may impair cognitive function and induce rapid muscle breakdown. Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses. Very high doses can result in psychosis (e.g., delusions and paranoia) which rarely occurs at therapeutic doses even during long-term use. Recreational doses are generally much larger than prescribed therapeutic doses and carry a far greater risk of serious side effects. Amphetamine belongs to the phenethylamine class. It is also the parent compound of its own structural class, the substituted amphetamines, which includes prominent substances such as bupropion, cathinone, MDMA, and methamphetamine. As a member of the phenethylamine class, amphetamine is also chemically related to the naturally occurring trace amine neuromodulators, specifically phenethylamine and , both of which are produced within the human body. Phenethylamine is the parent compound of amphetamine, while is a positional isomer of amphetamine that differs only in the placement of the methyl group. Uses Medical Amphetamine is used to treat attention deficit hyperactivity disorder (ADHD), narcolepsy (a sleep disorder), and obesity, and is sometimes prescribed for its past medical indications, particularly for depression and chronic pain. Long-term amphetamine exposure at sufficiently high doses in some animal species is known to produce abnormal dopamine system development or nerve damage, but, in humans with ADHD, pharmaceutical amphetamines, at therapeutic dosages, appear to improve brain development and nerve growth. 
Reviews of magnetic resonance imaging (MRI) studies suggest that long-term treatment with amphetamine decreases abnormalities in brain structure and function found in subjects with ADHD, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia. Reviews of clinical stimulant research have established the safety and effectiveness of long-term continuous amphetamine use for the treatment of ADHD. Randomized controlled trials of continuous stimulant therapy for the treatment of ADHD spanning 2 years have demonstrated treatment effectiveness and safety. Two reviews have indicated that long-term continuous stimulant therapy for ADHD is effective for reducing the core symptoms of ADHD (i.e., hyperactivity, inattention, and impulsivity), enhancing quality of life and academic achievement, and producing improvements in a large number of functional outcomes across 9 categories of outcomes related to academics, antisocial behavior, driving, non-medicinal drug use, obesity, occupation, self-esteem, service use (i.e., academic, occupational, health, financial, and legal services), and social function. One review highlighted a nine-month randomized controlled trial of amphetamine treatment for ADHD in children that found an average increase of 4.5 IQ points, continued increases in attention, and continued decreases in disruptive behaviors and hyperactivity. Another review indicated that, based upon the longest follow-up studies conducted to date, lifetime stimulant therapy that begins during childhood is continuously effective for controlling ADHD symptoms and reduces the risk of developing a substance use disorder as an adult. Current models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems; these functional impairments involve impaired dopamine neurotransmission in the mesocorticolimbic projection and norepinephrine neurotransmission in the noradrenergic projections from the locus coeruleus to the prefrontal cortex. Psychostimulants like methylphenidate and amphetamine are effective in treating ADHD because they increase neurotransmitter activity in these systems. Approximately 80% of those who use these stimulants see improvements in ADHD symptoms. Children with ADHD who use stimulant medications generally have better relationships with peers and family members, perform better in school, are less distractible and impulsive, and have longer attention spans. The Cochrane reviews on the treatment of ADHD in children, adolescents, and adults with pharmaceutical amphetamines stated that short-term studies have demonstrated that these drugs decrease the severity of symptoms, but they have higher discontinuation rates than non-stimulant medications due to their adverse side effects. A Cochrane review on the treatment of ADHD in children with tic disorders such as Tourette syndrome indicated that stimulants in general do not make tics worse, but high doses of dextroamphetamine could exacerbate tics in some individuals. 
Enhancing performance Cognitive performance In 2015, a systematic review and a meta-analysis of high quality clinical trials found that, when used at low (therapeutic) doses, amphetamine produces modest yet unambiguous improvements in cognition, including working memory, long-term episodic memory, inhibitory control, and some aspects of attention, in normal healthy adults; these cognition-enhancing effects of amphetamine are known to be partially mediated through the indirect activation of both dopamine receptor D1 and adrenoceptor α2 in the prefrontal cortex. A systematic review from 2014 found that low doses of amphetamine also improve memory consolidation, in turn leading to improved recall of information. Therapeutic doses of amphetamine also enhance cortical network efficiency, an effect which mediates improvements in working memory in all individuals. Amphetamine and other ADHD stimulants also improve task saliency (motivation to perform a task) and increase arousal (wakefulness), in turn promoting goal-directed behavior. Stimulants such as amphetamine can improve performance on difficult and boring tasks and are used by some students as a study and test-taking aid. Based upon studies of self-reported illicit stimulant use, of college students use diverted ADHD stimulants, which are primarily used for enhancement of academic performance rather than as recreational drugs. However, high amphetamine doses that are above the therapeutic range can interfere with working memory and other aspects of cognitive control. Physical performance Amphetamine is used by some athletes for its psychological and athletic performance-enhancing effects, such as increased endurance and alertness; however, non-medical amphetamine use is prohibited at sporting events that are regulated by collegiate, national, and international anti-doping agencies. In healthy people at oral therapeutic doses, amphetamine has been shown to increase muscle strength, acceleration, athletic performance in anaerobic conditions, and endurance (i.e., it delays the onset of fatigue), while improving reaction time. Amphetamine improves endurance and reaction time primarily through reuptake inhibition and release of dopamine in the central nervous system. Amphetamine and other dopaminergic drugs also increase power output at fixed levels of perceived exertion by overriding a "safety switch", allowing the core temperature limit to increase in order to access a reserve capacity that is normally off-limits. At therapeutic doses, the adverse effects of amphetamine do not impede athletic performance; however, at much higher doses, amphetamine can induce effects that severely impair performance, such as rapid muscle breakdown and elevated body temperature. Contraindications According to the International Programme on Chemical Safety (IPCS) and the United States Food and Drug Administration (USFDA), amphetamine is contraindicated in people with a history of drug abuse, cardiovascular disease, severe agitation, or severe anxiety. It is also contraindicated in individuals with advanced arteriosclerosis (hardening of the arteries), glaucoma (increased eye pressure), hyperthyroidism (excessive production of thyroid hormone), or moderate to severe hypertension. These agencies indicate that people who have experienced allergic reactions to other stimulants or who are taking monoamine oxidase inhibitors (MAOIs) should not take amphetamine, although safe concurrent use of amphetamine and monoamine oxidase inhibitors has been documented. 
These agencies also state that anyone with anorexia nervosa, bipolar disorder, depression, hypertension, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome should monitor their symptoms while taking amphetamine. Evidence from human studies indicates that therapeutic amphetamine use does not cause developmental abnormalities in the fetus or newborns (i.e., it is not a human teratogen), but amphetamine abuse does pose risks to the fetus. Amphetamine has also been shown to pass into breast milk, so the IPCS and the USFDA advise mothers to avoid breastfeeding when using it. Due to the potential for reversible growth impairments, the USFDA advises monitoring the height and weight of children and adolescents prescribed an amphetamine pharmaceutical. Adverse effects The adverse side effects of amphetamine are many and varied, and the amount of amphetamine used is the primary factor in determining the likelihood and severity of adverse effects. Amphetamine products such as Adderall, Dexedrine, and their generic equivalents are currently approved by the USFDA for long-term therapeutic use. Recreational use of amphetamine generally involves much larger doses, which have a greater risk of serious adverse drug effects than dosages used for therapeutic purposes. Physical Cardiovascular side effects can include hypertension or hypotension from a vasovagal response, Raynaud's phenomenon (reduced blood flow to the hands and feet), and tachycardia (increased heart rate). Sexual side effects in males may include erectile dysfunction, frequent erections, or prolonged erections. Gastrointestinal side effects may include abdominal pain, constipation, diarrhea, and nausea. Other potential physical side effects include appetite loss, blurred vision, dry mouth, excessive grinding of the teeth, nosebleed, profuse sweating, rhinitis medicamentosa (drug-induced nasal congestion), reduced seizure threshold, tics (a type of movement disorder), and weight loss. Dangerous physical side effects are rare at typical pharmaceutical doses. Amphetamine stimulates the medullary respiratory centers, producing faster and deeper breaths. In a normal person at therapeutic doses, this effect is usually not noticeable, but when respiration is already compromised, it may be evident. Amphetamine also induces contraction in the urinary bladder sphincter, the muscle which controls urination, which can result in difficulty urinating. This effect can be useful in treating bed wetting and loss of bladder control. The effects of amphetamine on the gastrointestinal tract are unpredictable. If intestinal activity is high, amphetamine may reduce gastrointestinal motility (the rate at which content moves through the digestive system); however, amphetamine may increase motility when the smooth muscle of the tract is relaxed. Amphetamine also has a slight analgesic effect and can enhance the pain relieving effects of opioids. USFDA-commissioned studies from 2011 indicate that in children, young adults, and adults there is no association between serious adverse cardiovascular events (sudden death, heart attack, and stroke) and the medical use of amphetamine or other ADHD stimulants. However, amphetamine pharmaceuticals are contraindicated in individuals with cardiovascular disease. 
Psychological At normal therapeutic doses, the most common psychological side effects of amphetamine include increased alertness, apprehension, concentration, initiative, self-confidence and sociability, mood swings (elated mood followed by mildly depressed mood), insomnia or wakefulness, and decreased sense of fatigue. Less common side effects include anxiety, change in libido, grandiosity, irritability, repetitive or obsessive behaviors, and restlessness; these effects depend on the user's personality and current mental state. Amphetamine psychosis (e.g., delusions and paranoia) can occur in heavy users. Although very rare, this psychosis can also occur at therapeutic doses during long-term therapy. According to the USFDA, "there is no systematic evidence" that stimulants produce aggressive behavior or hostility. Amphetamine has also been shown to produce a conditioned place preference in humans taking therapeutic doses, meaning that individuals acquire a preference for spending time in places where they have previously used amphetamine. Reinforcement disorders Addiction Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses; in fact, lifetime stimulant therapy for ADHD that begins during childhood reduces the risk of developing substance use disorders as an adult. Pathological overactivation of the mesolimbic pathway, a dopamine pathway that connects the ventral tegmental area to the nucleus accumbens, plays a central role in amphetamine addiction. Individuals who frequently self-administer high doses of amphetamine have a high risk of developing an amphetamine addiction, since chronic use at high doses gradually increases the level of accumbal ΔFosB, a "molecular switch" and "master control protein" for addiction. Once nucleus accumbens ΔFosB is sufficiently overexpressed, it begins to increase the severity of addictive behavior (i.e., compulsive drug-seeking) with further increases in its expression. While there are currently no effective drugs for treating amphetamine addiction, regularly engaging in sustained aerobic exercise appears to reduce the risk of developing such an addiction. Sustained aerobic exercise on a regular basis also appears to be an effective treatment for amphetamine addiction; exercise therapy improves clinical treatment outcomes and may be used as an adjunct therapy with behavioral therapies for addiction. Biomolecular mechanisms Chronic use of amphetamine at excessive doses causes alterations in gene expression in the mesocorticolimbic projection, which arise through transcriptional and epigenetic mechanisms. The most important transcription factors that produce these alterations are Delta FBJ murine osteosarcoma viral oncogene homolog B (ΔFosB), cAMP response element binding protein (CREB), and nuclear factor-kappa B (NF-κB). ΔFosB is the most significant biomolecular mechanism in addiction because ΔFosB overexpression (i.e., an abnormally high level of gene expression which produces a pronounced gene-related phenotype) in the D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and regulates multiple behavioral effects (e.g., reward sensitization and escalating drug self-administration) involved in addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly more severe with further increases in ΔFosB expression. 
It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others. ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both oppose the function of ΔFosB and inhibit increases in its expression. Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug abuse (i.e., the alterations mediated by ΔFosB). Similarly, accumbal G9a hyperexpression results in markedly increased histone 3 lysine residue 9 dimethylation (H3K9me2) and blocks the induction of ΔFosB-mediated neural and behavioral plasticity by chronic drug use, which occurs via H3K9me2-mediated repression of transcription factors for ΔFosB and H3K9me2-mediated repression of various ΔFosB transcriptional targets (e.g., CDK5). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Since both natural rewards and addictive drugs induce the expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. Consequently, ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sexual addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sexual addictions are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs. The effects of amphetamine on gene regulation are both dose- and route-dependent. Most of the research on gene regulation and addiction is based upon animal studies with intravenous amphetamine administration at very high doses. The few studies that have used equivalent (weight-adjusted) human therapeutic doses and oral administration show that these changes, if they occur, are relatively minor. This suggests that medical use of amphetamine does not significantly affect gene regulation. Pharmacological treatments there is no effective pharmacotherapy for amphetamine addiction. Reviews from 2015 and 2016 indicated that TAAR1-selective agonists have significant therapeutic potential as a treatment for psychostimulant addictions; however, the only compounds which are known to function as TAAR1-selective agonists are experimental drugs. Amphetamine addiction is largely mediated through increased activation of dopamine receptors and NMDA receptors in the nucleus accumbens; magnesium ions inhibit NMDA receptors by blocking the receptor calcium channel. One review suggested that, based upon animal testing, pathological (addiction-inducing) psychostimulant use significantly reduces the level of intracellular magnesium throughout the brain. Supplemental magnesium treatment has been shown to reduce amphetamine self-administration (i.e., doses given to oneself) in humans, but it is not an effective monotherapy for amphetamine addiction. A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in RCTs for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. 
There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil. Behavioral treatments A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these. Additionally, research on the neurobiological effects of physical exercise suggests that daily aerobic exercise, especially endurance exercise (e.g., marathon running), prevents the development of drug addiction and is an effective adjunct therapy (i.e., a supplemental treatment) for amphetamine addiction. Exercise leads to better treatment outcomes when used as an adjunct treatment, particularly for psychostimulant addictions. In particular, aerobic exercise decreases psychostimulant self-administration, reduces the reinstatement (i.e., relapse) of drug-seeking, and induces increased dopamine receptor D2 (DRD2) density in the striatum. This is the opposite of pathological stimulant use, which induces decreased striatal DRD2 density. One review noted that exercise may also prevent the development of a drug addiction by altering ΔFosB or immunoreactivity in the striatum or other parts of the reward system. Dependence and withdrawal Drug tolerance develops rapidly in amphetamine abuse (i.e., recreational amphetamine use), so periods of extended abuse require increasingly larger doses of the drug in order to achieve the same effect. According to a Cochrane review on withdrawal in individuals who compulsively use amphetamine and methamphetamine, "when chronic heavy users abruptly discontinue amphetamine use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose." This review noted that withdrawal symptoms in chronic, high-dose users are frequent, occurring in roughly 88% of cases, and persist for  weeks with a marked "crash" phase occurring during the first week. Amphetamine withdrawal symptoms can include anxiety, drug craving, depressed mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and lucid dreams. The review indicated that the severity of withdrawal symptoms is positively correlated with the age of the individual and the extent of their dependence. Mild withdrawal symptoms from the discontinuation of amphetamine treatment at therapeutic doses can be avoided by tapering the dose. Overdose An amphetamine overdose can lead to many different symptoms, but is rarely fatal with appropriate care. The severity of overdose symptoms increases with dosage and decreases with drug tolerance to amphetamine. 
Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day, which is roughly 100 times the maximum daily therapeutic dose. Symptoms of a moderate and extremely large overdose are listed below; fatal amphetamine poisoning usually also involves convulsions and coma. In 2013, overdose on amphetamine, methamphetamine, and other compounds implicated in an "amphetamine use disorder" resulted in an estimated 3,788 deaths worldwide ( deaths, 95% confidence). Toxicity In rodents and primates, sufficiently high doses of amphetamine cause dopaminergic neurotoxicity, or damage to dopamine neurons, which is characterized by dopamine terminal degeneration and reduced transporter and receptor function. There is no evidence that amphetamine is directly neurotoxic in humans. However, large doses of amphetamine may indirectly cause dopaminergic neurotoxicity as a result of hyperpyrexia, the excessive formation of reactive oxygen species, and increased autoxidation of dopamine. Animal models of neurotoxicity from high-dose amphetamine exposure indicate that the occurrence of hyperpyrexia (i.e., core body temperature ≥ 40 °C) is necessary for the development of amphetamine-induced neurotoxicity. Prolonged elevations of brain temperature above 40 °C likely promote the development of amphetamine-induced neurotoxicity in laboratory animals by facilitating the production of reactive oxygen species, disrupting cellular protein function, and transiently increasing blood–brain barrier permeability. Psychosis An amphetamine overdose can result in a stimulant psychosis that may involve a variety of symptoms, such as delusions and paranoia. A Cochrane review on treatment for amphetamine, dextroamphetamine, and methamphetamine psychosis states that about of users fail to recover completely. According to the same review, there is at least one trial that shows antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Psychosis rarely arises from therapeutic use. Drug interactions Many types of substances are known to interact with amphetamine, resulting in altered drug action or metabolism of amphetamine, the interacting substance, or both. Inhibitors of enzymes that metabolize amphetamine (e.g., CYP2D6 and FMO3) will prolong its elimination half-life, meaning that its effects will last longer. Amphetamine also interacts with , particularly monoamine oxidase A inhibitors, since both MAOIs and amphetamine increase plasma catecholamines (i.e., norepinephrine and dopamine); therefore, concurrent use of both is dangerous. Amphetamine modulates the activity of most psychoactive drugs. In particular, amphetamine may decrease the effects of sedatives and depressants and increase the effects of stimulants and antidepressants. Amphetamine may also decrease the effects of antihypertensives and antipsychotics due to its effects on blood pressure and dopamine respectively. Zinc supplementation may reduce the minimum effective dose of amphetamine when it is used for the treatment of ADHD. In general, there is no significant interaction when consuming amphetamine with food, but the pH of gastrointestinal content and urine affects the absorption and excretion of amphetamine, respectively. Acidic substances reduce the absorption of amphetamine and increase urinary excretion, and alkaline substances do the opposite. 
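The direction of these pH effects follows from the standard Henderson–Hasselbalch relationship for a weak base (amphetamine's pKa is about 9.9, as noted in the pharmacokinetics section below). As an illustrative calculation, not taken from the cited sources, the un-ionized (lipid-soluble, readily absorbed) fraction at an assumed pH of 7.4 is

\[
\frac{[\text{un-ionized}]}{[\text{total}]}
 = \frac{1}{1 + 10^{\,\mathrm{p}K_{\mathrm{a}} - \mathrm{pH}}}
 = \frac{1}{1 + 10^{\,9.9 - 7.4}} \approx 0.3\%,
\]

and each one-unit rise in pH increases this fraction roughly ten-fold, while each one-unit fall shrinks it by about the same factor. This is why acidic conditions hinder absorption and speed urinary excretion, and alkaline conditions do the opposite.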
Due to the effect pH has on absorption, amphetamine also interacts with gastric acid reducers such as proton pump inhibitors and H2 antihistamines, which increase gastrointestinal pH (i.e., make it less acidic). Pharmacology Pharmacodynamics Amphetamine exerts its behavioral effects by altering the use of monoamines as neuronal signals in the brain, primarily in catecholamine neurons in the reward and executive function pathways of the brain. The concentrations of the main neurotransmitters involved in reward circuitry and executive functioning, dopamine and norepinephrine, increase dramatically in a dose-dependent manner by amphetamine because of its effects on monoamine transporters. The reinforcing and motivational salience-promoting effects of amphetamine are due mostly to enhanced dopaminergic activity in the mesolimbic pathway. The euphoric and locomotor-stimulating effects of amphetamine are dependent upon the magnitude and speed by which it increases synaptic dopamine and norepinephrine concentrations in the striatum. Amphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a and G protein-coupled receptor (GPCR) discovered in 2001, which is important for regulation of brain monoamines. Activation of increases production via adenylyl cyclase activation and inhibits monoamine transporter function. Monoamine autoreceptors (e.g., D2 short, presynaptic α2, and presynaptic 5-HT1A) have the opposite effect of TAAR1, and together these receptors provide a regulatory system for monoamines. Notably, amphetamine and trace amines possess high binding affinities for TAAR1, but not for monoamine autoreceptors. Imaging studies indicate that monoamine reuptake inhibition by amphetamine and trace amines is site specific and depends upon the presence of TAAR1 in the associated monoamine neurons. In addition to the neuronal monoamine transporters, amphetamine also inhibits both vesicular monoamine transporters, VMAT1 and VMAT2, as well as SLC1A1, SLC22A3, and SLC22A5. SLC1A1 is excitatory amino acid transporter 3 (EAAT3), a glutamate transporter located in neurons, SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes, and SLC22A5 is a high-affinity carnitine transporter. Amphetamine is known to strongly induce cocaine- and amphetamine-regulated transcript (CART) gene expression, a neuropeptide involved in feeding behavior, stress, and reward, which induces observable increases in neuronal development and survival in vitro. The CART receptor has yet to be identified, but there is significant evidence that CART binds to a unique . Amphetamine also inhibits monoamine oxidases at very high doses, resulting in less monoamine and trace amine metabolism and consequently higher concentrations of synaptic monoamines. In humans, the only post-synaptic receptor at which amphetamine is known to bind is the receptor, where it acts as an agonist with low micromolar affinity. The full profile of amphetamine's short-term drug effects in humans is mostly derived through increased cellular communication or neurotransmission of dopamine, serotonin, norepinephrine, epinephrine, histamine, CART peptides, endogenous opioids, adrenocorticotropic hormone, corticosteroids, and glutamate, which it affects through interactions with , , , , , , and possibly other biological targets. Amphetamine also activates seven human carbonic anhydrase enzymes, several of which are expressed in the human brain. 
Dextroamphetamine is a more potent agonist of than levoamphetamine. Consequently, dextroamphetamine produces greater stimulation than levoamphetamine, roughly three to four times more, but levoamphetamine has slightly stronger cardiovascular and peripheral effects. Dopamine In certain brain regions, amphetamine increases the concentration of dopamine in the synaptic cleft. Amphetamine can enter the presynaptic neuron either through or by diffusing across the neuronal membrane directly. As a consequence of DAT uptake, amphetamine produces competitive reuptake inhibition at the transporter. Upon entering the presynaptic neuron, amphetamine activates which, through protein kinase A (PKA) and protein kinase C (PKC) signaling, causes DAT phosphorylation. Phosphorylation by either protein kinase can result in DAT internalization ( reuptake inhibition), but phosphorylation alone induces the reversal of dopamine transport through DAT (i.e., dopamine efflux). Amphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through an unidentified Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent pathway, in turn producing dopamine efflux. Through direct activation of G protein-coupled inwardly-rectifying potassium channels, reduces the firing rate of dopamine neurons, preventing a hyper-dopaminergic state. Amphetamine is also a substrate for the presynaptic vesicular monoamine transporter, . Following amphetamine uptake at VMAT2, amphetamine induces the collapse of the vesicular pH gradient, which results in the release of dopamine molecules from synaptic vesicles into the cytosol via dopamine efflux through VMAT2. Subsequently, the cytosolic dopamine molecules are released from the presynaptic neuron into the synaptic cleft via reverse transport at . Norepinephrine Similar to dopamine, amphetamine dose-dependently increases the level of synaptic norepinephrine, the direct precursor of epinephrine. Based upon neuronal expression, amphetamine is thought to affect norepinephrine analogously to dopamine. In other words, amphetamine induces TAAR1-mediated efflux and reuptake inhibition at phosphorylated , competitive NET reuptake inhibition, and norepinephrine release from . Serotonin Amphetamine exerts analogous, yet less pronounced, effects on serotonin as on dopamine and norepinephrine. Amphetamine affects serotonin via and, like norepinephrine, is thought to phosphorylate via . Like dopamine, amphetamine has low, micromolar affinity at the human 5-HT1A receptor. Other neurotransmitters, peptides, hormones, and enzymes Acute amphetamine administration in humans increases endogenous opioid release in several brain structures in the reward system. Extracellular levels of glutamate, the primary excitatory neurotransmitter in the brain, have been shown to increase in the striatum following exposure to amphetamine. This increase in extracellular glutamate presumably occurs via the amphetamine-induced internalization of EAAT3, a glutamate reuptake transporter, in dopamine neurons. Amphetamine also induces the selective release of histamine from mast cells and efflux from histaminergic neurons through . Acute amphetamine administration can also increase adrenocorticotropic hormone and corticosteroid levels in blood plasma by stimulating the hypothalamic–pituitary–adrenal axis. 
In December 2017, the first study assessing the interaction between amphetamine and human carbonic anhydrase enzymes was published; of the eleven carbonic anhydrase enzymes it examined, it found that amphetamine potently activates seven, four of which are highly expressed in the human brain, with low nanomolar through low micromolar activating effects. Based upon preclinical research, cerebral carbonic anhydrase activation has cognition-enhancing effects; but, based upon the clinical use of carbonic anhydrase inhibitors, carbonic anhydrase activation in other tissues may be associated with adverse effects, such as ocular activation exacerbating glaucoma. Pharmacokinetics The oral bioavailability of amphetamine varies with gastrointestinal pH; it is well absorbed from the gut, and bioavailability is typically over 75% for dextroamphetamine. Amphetamine is a weak base with a pKa of 9.9; consequently, when the pH is basic, more of the drug is in its lipid soluble free base form, and more is absorbed through the lipid-rich cell membranes of the gut epithelium. Conversely, an acidic pH means the drug is predominantly in a water-soluble cationic (salt) form, and less is absorbed. Approximately of amphetamine circulating in the bloodstream is bound to plasma proteins. Following absorption, amphetamine readily distributes into most tissues in the body, with high concentrations occurring in cerebrospinal fluid and brain tissue. The half-lives of amphetamine enantiomers differ and vary with urine pH. At normal urine pH, the half-lives of dextroamphetamine and levoamphetamine are  hours and  hours, respectively. Highly acidic urine will reduce the enantiomer half-lives to 7 hours; highly alkaline urine will increase the half-lives up to 34 hours. The immediate-release and extended release variants of salts of both isomers reach peak plasma concentrations at 3 hours and 7 hours post-dose respectively. Amphetamine is eliminated via the kidneys, with of the drug being excreted unchanged at normal urinary pH. When the urinary pH is basic, amphetamine is in its free base form, so less is excreted. When urine pH is abnormal, the urinary recovery of amphetamine may range from a low of 1% to a high of 75%, depending mostly upon whether urine is too basic or acidic, respectively. Following oral administration, amphetamine appears in urine within 3 hours. Roughly 90% of ingested amphetamine is eliminated 3 days after the last oral dose. CYP2D6, dopamine β-hydroxylase (DBH), flavin-containing monooxygenase 3 (FMO3), butyrate-CoA ligase (XM-ligase), and glycine N-acyltransferase (GLYAT) are the enzymes known to metabolize amphetamine or its metabolites in humans. Amphetamine has a variety of excreted metabolic products, including , , , benzoic acid, hippuric acid, norephedrine, and phenylacetone. Among these metabolites, the active sympathomimetics are , , and norephedrine. The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, N-oxidation, N-dealkylation, and deamination. The known metabolic pathways, detectable metabolites, and metabolizing enzymes in humans include the following: Pharmacomicrobiomics The human metagenome (i.e., the genetic composition of an individual and all microorganisms that reside on or within the individual's body) varies considerably between individuals. 
Since the total number of microbial and viral cells in the human body (over 100 trillion) greatly outnumbers human cells (tens of trillions), there is considerable potential for interactions between drugs and an individual's microbiome, including: drugs altering the composition of the human microbiome, drug metabolism by microbial enzymes modifying the drug's pharmacokinetic profile, and microbial drug metabolism affecting a drug's clinical efficacy and toxicity profile. The field that studies these interactions is known as pharmacomicrobiomics. Similar to most biomolecules and other orally administered xenobiotics (i.e., drugs), amphetamine is predicted to undergo promiscuous metabolism by human gastrointestinal microbiota (primarily bacteria) prior to absorption into the blood stream. The first amphetamine-metabolizing microbial enzyme, tyramine oxidase from a strain of E. coli commonly found in the human gut, was identified in 2019. This enzyme was found to metabolize amphetamine, tyramine, and phenethylamine with roughly the same binding affinity for all three compounds. Related endogenous compounds Amphetamine has a very similar structure and function to the endogenous trace amines, which are naturally occurring neuromodulator molecules produced in the human body and brain. Among this group, the most closely related compounds are phenethylamine, the parent compound of amphetamine, and , an isomer of amphetamine (i.e., it has an identical molecular formula). In humans, phenethylamine is produced directly from by the aromatic amino acid decarboxylase (AADC) enzyme, which converts into dopamine as well. In turn, is metabolized from phenethylamine by phenylethanolamine N-methyltransferase, the same enzyme that metabolizes norepinephrine into epinephrine. Like amphetamine, both phenethylamine and regulate monoamine neurotransmission via ; unlike amphetamine, both of these substances are broken down by monoamine oxidase B, and therefore have a shorter half-life than amphetamine. Chemistry Amphetamine is a methyl homolog of the mammalian neurotransmitter phenethylamine with the chemical formula . The carbon atom adjacent to the primary amine is a stereogenic center, and amphetamine is composed of a racemic 1:1 mixture of two enantiomers. This racemic mixture can be separated into its optical isomers: levoamphetamine and dextroamphetamine. At room temperature, the pure free base of amphetamine is a mobile, colorless, and volatile liquid with a characteristically strong amine odor, and acrid, burning taste. Frequently prepared solid salts of amphetamine include amphetamine adipate, aspartate, hydrochloride, phosphate, saccharate, sulfate, and tannate. Dextroamphetamine sulfate is the most common enantiopure salt. Amphetamine is also the parent compound of its own structural class, which includes a number of psychoactive derivatives. In organic chemistry, amphetamine is an excellent chiral ligand for the stereoselective synthesis of . Substituted derivatives The substituted derivatives of amphetamine, or "substituted amphetamines", are a broad range of chemicals that contain amphetamine as a "backbone"; specifically, this chemical class includes derivative compounds that are formed by replacing one or more hydrogen atoms in the amphetamine core structure with substituents. The class includes amphetamine itself, stimulants like methamphetamine, serotonergic empathogens like MDMA, and decongestants like ephedrine, among other subgroups. 
Synthesis Since the first preparation was reported in 1887, numerous synthetic routes to amphetamine have been developed. The most common route of both legal and illicit amphetamine synthesis employs a non-metal reduction known as the Leuckart reaction (method 1). In the first step, a reaction between phenylacetone and formamide, either using additional formic acid or formamide itself as a reducing agent, yields . This intermediate is then hydrolyzed using hydrochloric acid, and subsequently basified, extracted with organic solvent, concentrated, and distilled to yield the free base. The free base is then dissolved in an organic solvent, sulfuric acid added, and amphetamine precipitates out as the sulfate salt. A number of chiral resolutions have been developed to separate the two enantiomers of amphetamine. For example, racemic amphetamine can be treated with to form a diastereoisomeric salt which is fractionally crystallized to yield dextroamphetamine. Chiral resolution remains the most economical method for obtaining optically pure amphetamine on a large scale. In addition, several enantioselective syntheses of amphetamine have been developed. In one example, optically pure is condensed with phenylacetone to yield a chiral Schiff base. In the key step, this intermediate is reduced by catalytic hydrogenation with a transfer of chirality to the carbon atom alpha to the amino group. Cleavage of the benzylic amine bond by hydrogenation yields optically pure dextroamphetamine. A large number of alternative synthetic routes to amphetamine have been developed based on classic organic reactions. One example is the Friedel–Crafts alkylation of benzene by allyl chloride to yield beta chloropropylbenzene which is then reacted with ammonia to produce racemic amphetamine (method 2). Another example employs the Ritter reaction (method 3). In this route, allylbenzene is reacted with acetonitrile in sulfuric acid to yield an organosulfate which in turn is treated with sodium hydroxide to give amphetamine via an acetamide intermediate. A third route starts with which through a double alkylation with methyl iodide followed by benzyl chloride can be converted into acid. This synthetic intermediate can be transformed into amphetamine using either a Hofmann or Curtius rearrangement (method 4). A significant number of amphetamine syntheses feature a reduction of a nitro, imine, oxime, or other nitrogen-containing functional group. In one such example, a Knoevenagel condensation of benzaldehyde with nitroethane yields . The double bond and nitro group of this intermediate are reduced using either catalytic hydrogenation or treatment with lithium aluminium hydride (method 5). Another method is the reaction of phenylacetone with ammonia, producing an imine intermediate that is reduced to the primary amine using hydrogen over a palladium catalyst or lithium aluminum hydride (method 6). Detection in body fluids Amphetamine is frequently measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Techniques such as immunoassay, which is the most common form of amphetamine test, may cross-react with a number of sympathomimetic drugs. Chromatographic methods specific for amphetamine are employed to prevent false positive results. 
Chiral separation techniques may be employed to help distinguish the source of the drug, whether prescription amphetamine, prescription amphetamine prodrugs, (e.g., selegiline), over-the-counter drug products that contain levomethamphetamine, or illicitly obtained substituted amphetamines. Several prescription drugs produce amphetamine as a metabolite, including benzphetamine, clobenzorex, famprofazone, fenproporex, lisdexamfetamine, mesocarb, methamphetamine, prenylamine, and selegiline, among others. These compounds may produce positive results for amphetamine on drug tests. Amphetamine is generally only detectable by a standard drug test for approximately 24 hours, although a high dose may be detectable for  days. For the assays, a study noted that an enzyme multiplied immunoassay technique (EMIT) assay for amphetamine and methamphetamine may produce more false positives than liquid chromatography–tandem mass spectrometry. Gas chromatography–mass spectrometry (GC–MS) of amphetamine and methamphetamine with the derivatizing agent chloride allows for the detection of methamphetamine in urine. GC–MS of amphetamine and methamphetamine with the chiral derivatizing agent Mosher's acid chloride allows for the detection of both dextroamphetamine and dextromethamphetamine in urine. Hence, the latter method may be used on samples that test positive using other methods to help distinguish between the various sources of the drug. History, society, and culture Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it phenylisopropylamine; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Amphetamine had no medical use until late 1933, when Smith, Kline and French began selling it as an inhaler under the brand name Benzedrine as a decongestant. Benzedrine sulfate was introduced 3 years later and was used to treat a wide variety of medical conditions, including narcolepsy, obesity, low blood pressure, low libido, and chronic pain, among others. During World War II, amphetamine and methamphetamine were used extensively by both the Allied and Axis forces for their stimulant and performance-enhancing effects. As the addictive properties of the drug became known, governments began to place strict controls on the sale of amphetamine. For example, during the early 1970s in the United States, amphetamine became a schedule II controlled substance under the Controlled Substances Act. In spite of strict government controls, amphetamine has been used legally or illicitly by people from a variety of backgrounds, including authors, musicians, mathematicians, and athletes. Amphetamine is still illegally synthesized today in clandestine labs and sold on the black market, primarily in European countries. Among European Union (EU) member states 11.9 million adults of ages have used amphetamine or methamphetamine at least once in their lives and 1.7 million have used either in the last year. During 2012, approximately 5.9 metric tons of illicit amphetamine were seized within EU member states; the "street price" of illicit amphetamine within the EU ranged from  per gram during the same period. Outside Europe, the illicit market for amphetamine is much smaller than the market for methamphetamine and MDMA. 
Legal status As a result of the United Nations 1971 Convention on Psychotropic Substances, amphetamine became a schedule II controlled substance, as defined in the treaty, in all 183 state parties. Consequently, it is heavily regulated in most countries. Some countries, such as South Korea and Japan, have banned substituted amphetamines even for medical use. In other nations, such as Canada (schedule I drug), the Netherlands (List I drug), the United States (schedule II drug), Australia (schedule 8), Thailand (category 1 narcotic), and United Kingdom (class B drug), amphetamine is in a restrictive national drug schedule that allows for its use as a medical treatment. Pharmaceutical products Several currently marketed amphetamine formulations contain both enantiomers, including those marketed under the brand names Adderall, Adderall XR, Mydayis, Adzenys ER, Adzenys XR-ODT, Dyanavel XR, Evekeo, and Evekeo ODT. Of those, Evekeo (including Evekeo ODT) is the only product containing only racemic amphetamine (as amphetamine sulfate), and is therefore the only one whose active moiety can be accurately referred to simply as "amphetamine". Dextroamphetamine, marketed under the brand names Dexedrine and Zenzedi, is the only enantiopure amphetamine product currently available. A prodrug form of dextroamphetamine, lisdexamfetamine, is also available and is marketed under the brand name Vyvanse. As it is a prodrug, lisdexamfetamine is structurally different from dextroamphetamine, and is inactive until it metabolizes into dextroamphetamine. The free base of racemic amphetamine was previously available as Benzedrine, Psychedrine, and Sympatedrine. Levoamphetamine was previously available as Cydril. Many current amphetamine pharmaceuticals are salts due to the comparatively high volatility of the free base. However, oral suspension and orally disintegrating tablet (ODT) dosage forms composed of the free base were introduced in 2015 and 2016, respectively. Some of the current brands and their generic equivalents are listed below.
Amphetamine
Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science. Logical foundations While the roots of formalised logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalised mathematics. Frege's Begriffsschrift (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic. His Foundations of Arithmetic, published 1884, expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published 1910–1913, and with a revised second edition in 1927. Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automatisation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems. In 1929, Mojżesz Presburger showed that the theory of natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable and gave an algorithm that could determine if a given sentence in the language was true or false. However, shortly after this positive result, Kurt Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), showing that in any sufficiently strong axiomatic system there are true statements which cannot be proved in the system. This topic was further developed in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other gave concrete examples for undecidable questions. First implementations Shortly after World War II, the first general purpose computers became available. In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum tube computer at the Institute for Advanced Study in Princeton, New Jersey. According to Davis, "Its great triumph was to prove that the sum of two even numbers is even". More ambitious was the Logic Theory Machine in 1956, a deduction system for the propositional logic of the Principia Mathematica, developed by Allen Newell, Herbert A. Simon and J. C. Shaw. Also running on a JOHNNIAC, the Logic Theory Machine constructed proofs from a small set of propositional axioms and three deduction rules: modus ponens, (propositional) variable substitution, and the replacement of formulas by their definition. The system used heuristic guidance, and managed to prove 38 of the first 52 theorems of the Principia. The "heuristic" approach of the Logic Theory Machine tried to emulate human mathematicians, and could not guarantee that a proof could be found for every valid theorem even in principle. In contrast, other, more systematic algorithms achieved, at least theoretically, completeness for first-order logic. 
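To make the contrast between the heuristic Logic Theory Machine and a systematic, complete procedure concrete, the sketch below decides validity for classical propositional logic by brute-force truth-table evaluation. It is a generic illustration in Python rather than a reconstruction of any system named here; the tuple encoding of formulas is an arbitrary choice made for the example.

```python
# Illustrative decision procedure for propositional validity by truth tables.
# A formula is a variable name (str) or a tuple: ('not', f), ('and', f, g),
# ('or', f, g), ('implies', f, g).
from itertools import product

def variables(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(variables(g) for g in f[1:]))

def evaluate(f, env):
    if isinstance(f, str):
        return env[f]
    op, *args = f
    vals = [evaluate(g, env) for g in args]
    if op == 'not':
        return not vals[0]
    if op == 'and':
        return vals[0] and vals[1]
    if op == 'or':
        return vals[0] or vals[1]
    if op == 'implies':
        return (not vals[0]) or vals[1]
    raise ValueError(f"unknown connective {op!r}")

def is_valid(f):
    """True iff f is a tautology; exponential in the number of variables."""
    vs = sorted(variables(f))
    return all(evaluate(f, dict(zip(vs, row)))
               for row in product([False, True], repeat=len(vs)))

# Peirce's law, ((p -> q) -> p) -> p, is a classical tautology.
peirce = ('implies', ('implies', ('implies', 'p', 'q'), 'p'), 'p')
print(is_valid(peirce))                  # True
print(is_valid(('implies', 'p', 'q')))   # False
```

Such a table has 2^n rows for n variables, and no analogous finite table exists for first-order logic, which is why the early systematic approaches described next instead enumerate ever-larger sets of propositional instances.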
Initial approaches relied on the results of Herbrand and Skolem to convert a first-order formula into successively larger sets of propositional formulae by instantiating variables with terms from the Herbrand universe. The propositional formulas could then be checked for unsatisfiability using a number of methods. Gilmore's program used conversion to disjunctive normal form, a form in which the satisfiability of a formula is obvious. Decidability of the problem Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the frequent case of propositional logic, the problem is decidable but co-NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For a first order predicate calculus, Gödel's completeness theorem states that the theorems (provable statements) are exactly the logically valid well-formed formulas, so identifying valid formulas is recursively enumerable: given unbounded resources, any valid formula can eventually be proven. However, invalid formulas (those that are not entailed by a given theory), cannot always be recognized. The above applies to first order theories, such as Peano arithmetic. However, for a specific model that may be described by a first order theory, some statements may be true but undecidable in the theory used to describe the model. For example, by Gödel's incompleteness theorem, we know that any theory whose proper axioms are true for the natural numbers cannot prove all first order statements true for the natural numbers, even if the list of proper axioms is allowed to be infinite enumerable. It follows that an automated theorem prover will fail to terminate while searching for a proof precisely when the statement being investigated is undecidable in the theory being used, even if it is true in the model of interest. Despite this theoretical limit, in practice, theorem provers can solve many hard problems, even in models that are not fully described by any first order theory (such as the integers). Related problems A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable. Since the proofs generated by automated theorem provers are typically very large, the problem of proof compression is crucial and various techniques aiming at making the prover's output smaller, and consequently more easily understandable and checkable, have been developed. Proof assistants require a human user to give hints to the system. Depending on the degree of automation, the prover can essentially be reduced to a proof checker, with the user providing the proof in a formal way, or significant proof tasks can be performed automatically. Interactive provers are used for a variety of tasks, but even fully automatic systems have proved a number of interesting and hard theorems, including at least one that has eluded human mathematicians for a long time, namely the Robbins conjecture. However, these successes are sporadic, and work on hard problems usually requires a proficient user. Another distinction is sometimes drawn between theorem proving and other techniques, where a process is considered to be theorem proving if it consists of a traditional proof, starting with axioms and producing new inference steps using rules of inference. 
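The definition just given, a proof as a sequence of steps that are either axioms or the result of applying an inference rule to earlier steps, is also what makes proof verification decidable. The sketch below is a generic illustration rather than any particular system's checker: it verifies a Hilbert-style propositional proof whose only rule is modus ponens, reusing the tuple encoding of formulas from the earlier sketch.

```python
# Illustrative proof checker: each line must be a given axiom or follow from
# earlier lines by modus ponens (from f and ('implies', f, g), conclude g).

def follows_by_modus_ponens(step, earlier):
    """True if some earlier lines f and ('implies', f, step) are both present."""
    return any(('implies', f, step) in earlier for f in earlier)

def check_proof(axioms, proof):
    """Verify a Hilbert-style proof: every line is an axiom or follows by MP."""
    earlier = []
    for i, step in enumerate(proof, start=1):
        if step not in axioms and not follows_by_modus_ponens(step, earlier):
            return False, f"line {i} is not an axiom and does not follow by MP"
        earlier.append(step)
    return True, "proof checked"

# Toy instance: from axioms p and (p -> q), derive q in one modus ponens step.
AXIOMS = ['p', ('implies', 'p', 'q')]
PROOF  = ['p', ('implies', 'p', 'q'), 'q']
print(check_proof(AXIOMS, PROOF))   # (True, 'proof checked')
```

Each line requires only a finite scan over earlier lines, so checking always terminates, in contrast to the open-ended search involved in finding a proof in the first place.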
Other techniques would include model checking, which, in the simplest case, involves brute-force enumeration of many possible states (although the actual implementation of model checkers requires much cleverness, and does not simply reduce to brute force). There are hybrid theorem proving systems which use model checking as an inference rule. There are also programs which were written to prove a particular theorem, with a (usually informal) proof that if the program finishes with a certain result, then the theorem is true. A good example of this was the machine-aided proof of the four color theorem, which was very controversial as the first claimed mathematical proof which was essentially impossible to verify by humans due to the enormous size of the program's calculation (such proofs are called non-surveyable proofs). Another example of a program-assisted proof is the one that shows that the game of Connect Four can always be won by the first player. Industrial uses Commercial use of automated theorem proving is mostly concentrated in integrated circuit design and verification. Since the Pentium FDIV bug, the complicated floating point units of modern microprocessors have been designed with extra scrutiny. AMD, Intel and others use automated theorem proving to verify that division and other operations are correctly implemented in their processors. First-order theorem proving In the late 1960s agencies funding research in automated deduction began to emphasize the need for practical applications. One of the first fruitful areas was that of program verification whereby first-order theorem provers were applied to the problem of verifying the correctness of computer programs in languages such as Pascal, Ada, etc. Notable among early program verification systems was the Stanford Pascal Verifier developed by David Luckham at Stanford University. This was based on the Stanford Resolution Prover also developed at Stanford using John Alan Robinson's resolution principle. This was the first automated deduction system to demonstrate an ability to solve mathematical problems that were announced in the Notices of the American Mathematical Society before solutions were formally published. First-order theorem proving is one of the most mature subfields of automated theorem proving. The logic is expressive enough to allow the specification of arbitrary problems, often in a reasonably natural and intuitive way. On the other hand, it is still semi-decidable, and a number of sound and complete calculi have been developed, enabling fully automated systems. More expressive logics, such as Higher-order logics, allow the convenient expression of a wider range of problems than first order logic, but theorem proving for these logics is less well developed. Benchmarks, competitions, and sources The quality of implemented systems has benefited from the existence of a large library of standard benchmark examples — the Thousands of Problems for Theorem Provers (TPTP) Problem Library — as well as from the CADE ATP System Competition (CASC), a yearly competition of first-order systems for many important classes of first-order problems. Some important systems (all have won at least one CASC competition division) are listed below. 
E is a high-performance prover for full first-order logic, but built on a purely equational calculus, originally developed in the automated reasoning group of Technical University of Munich under the direction of Wolfgang Bibel, and now at Baden-Württemberg Cooperative State University in Stuttgart. Otter, developed at the Argonne National Laboratory, is based on first-order resolution and paramodulation. Otter has since been replaced by Prover9, which is paired with Mace4. SETHEO is a high-performance system based on the goal-directed model elimination calculus, originally developed by a team under direction of Wolfgang Bibel. E and SETHEO have been combined (with other systems) in the composite theorem prover E-SETHEO. Vampire was originally developed and implemented at Manchester University by Andrei Voronkov and Krystof Hoder. It is now developed by a growing international team. It has won the FOF division (among other divisions) at the CADE ATP System Competition regularly since 2001. Waldmeister is a specialized system for unit-equational first-order logic developed by Arnim Buch and Thomas Hillenbrand. It won the CASC UEQ division for fourteen consecutive years (1997–2010). SPASS is a first order logic theorem prover with equality. This is developed by the research group Automation of Logic, Max Planck Institute for Computer Science. The Theorem Prover Museum is an initiative to conserve the sources of theorem prover systems for future analysis, since they are important cultural/scientific artefacts. It has the sources of many of the systems mentioned above. Popular techniques First-order resolution with unification Model elimination Method of analytic tableaux Superposition and term rewriting Model checking Mathematical induction Binary decision diagrams DPLL Higher-order unification Software systems Free software Alt-Ergo Automath CVC E GKC Gödel machine iProver IsaPlanner KED theorem prover leanCoP Leo II LCF Logictools online theorem prover LoTREC MetaPRL Mizar NuPRL Paradox Prover9 PVS Simplify SPARK (programming language) Twelf Z3 Theorem Prover Proprietary software Acumen RuleManager (commercial product) ALLIGATOR (CC BY-NC-SA 2.0 UK) CARINE KIV (freely available as a plugin for Eclipse) Prover Plug-In (commercial proof engine product) ProverBox Wolfram Mathematica ResearchCyc Spear modular arithmetic theorem prover See also Curry–Howard correspondence Symbolic computation Ramanujan machine Computer-aided proof Formal verification Logic programming Proof checking Model checking Proof complexity Computer algebra system Program analysis (computer science) General Problem Solver Metamath language for formalized mathematics Notes References External links A list of theorem proving tools Formal methods
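Several of the popular techniques listed above, most notably first-order resolution, rest on syntactic unification of terms. The sketch below gives a plain Robinson-style unification with an occurs check; it is illustrative only, is not taken from any of the systems named in this article, and uses an ad hoc term encoding chosen for brevity (strings beginning with '?' for variables, tuples for function applications).

```python
# Illustrative syntactic unification, the core operation of first-order resolution.
# Variables are strings starting with '?', e.g. '?x'; compound terms are tuples.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Follow variable bindings until an unbound variable or non-variable term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    """Return a most general unifier extending subst, or None if none exists."""
    subst = {} if subst is None else dict(subst)
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return unify(t, s, subst)
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# knows(?x, father(?x)) unified with knows(john, ?y)
print(unify(('knows', '?x', ('father', '?x')), ('knows', 'john', '?y')))
# -> {'?x': 'john', '?y': ('father', '?x')}   (with ?x bound to john)
```

Resolution then combines clauses containing complementary literals whose atoms unify, applying the resulting substitution to the remaining literals; production provers layer term indexing, sharing, and redundancy elimination on top of this basic operation.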
Automated theorem proving
Agent Orange is a herbicide and defoliant chemical, one of the "tactical use" Rainbow Herbicides. It is widely known for its use by the U.S. military as part of its herbicidal warfare program, Operation Ranch Hand, during the Vietnam War from 1961 to 1971. It is a mixture of equal parts of two herbicides, 2,4,5-T and 2,4-D. In addition to its damaging environmental effects, traces of dioxin (mainly TCDD, the most toxic of its type) found in the mixture have caused major health problems for many individuals who were exposed, and their offspring. Agent Orange was produced in the United States from the late 1940s and was used in industrial agriculture and was also sprayed along railroads and power lines to control undergrowth in forests. During the Vietnam War the U.S military procured over 20 million gallons consisting of a fifty-fifty mixture of 2,4-D and Dioxin-contaminated 2,4,5-T. Nine chemical companies produced it: Dow Chemical Company, Monsanto Company, Diamond Shamrock Corporation, Hercules Inc., Thompson Hayward Chemical Co., United States Rubber Company (Uniroyal), Thompson Chemical Co., Hoffman-Taff Chemicals, Inc., and Agriselect. The government of Vietnam says that up to four million people in Vietnam were exposed to the defoliant, and as many as three million people have suffered illness because of Agent Orange, while the Red Cross of Vietnam estimates that up to one million people were disabled or have health problems as a result of exposure to Agent Orange. The United States government has described these figures as unreliable, while documenting cases of leukemia, Hodgkin's lymphoma, and various kinds of cancer in exposed U.S. military veterans. An epidemiological study done by the Centers for Disease Control and Prevention showed that there was an increase in the rate of birth defects of the children of military personnel as a result of Agent Orange. Agent Orange has also caused enormous environmental damage in Vietnam. Over 3,100,000 hectares (31,000 km2 or 11,969 mi2) of forest were defoliated. Defoliants eroded tree cover and seedling forest stock, making reforestation difficult in numerous areas. Animal species diversity sharply reduced in contrast with unsprayed areas. The use of Agent Orange in Vietnam resulted in numerous legal actions. The United Nations ratified United Nations General Assembly Resolution 31/72 and the Environmental Modification Convention. Lawsuits filed on behalf of both U.S. and Vietnamese veterans sought compensation for damages. Agent Orange was first used by the British Armed Forces in Malaya during the Malayan Emergency. It was also used by the U.S. military in Laos and Cambodia during the Vietnam War because forests near the border with Vietnam were used by the Viet Cong. The herbicide was more recently used in Brazil to clear out sections of the Amazon rainforest for agriculture. Chemical composition The active ingredient of Agent Orange was an equal mixture of two phenoxy herbicides – 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) – in iso-octyl ester form, which contained traces of the dioxin 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). TCDD was a trace (typically 2-3 ppm, ranging from 50 ppb to 50 ppm) - but significant - contaminant of Agent Orange. Toxicology TCDD is the most toxic of the dioxins and is classified as a human carcinogen by the U.S. Environmental Protection Agency (EPA). The fat-soluble nature of TCDD causes it to readily enter the body through physical contact or ingestion. 
Dioxin easily accumulates in the food chain. Dioxin enters the body by attaching to a protein called the aryl hydrocarbon receptor (AhR), a transcription factor. When TCDD binds to AhR, the protein moves to the nucleus, where it influences gene expression. According to U.S. government reports, if not bound chemically to a biological surface such as soil, leaves or grass, Agent Orange dries quickly after spraying and breaks down within hours to days when exposed to sunlight and is no longer harmful. Development Several herbicides were developed as part of efforts by the United States and the United Kingdom to create herbicidal weapons for use during World War II. These included 2,4-D, 2,4,5-T, MCPA (2-methyl-4-chlorophenoxyacetic acid, 1414B and 1414A, recoded LN-8 and LN-32), and isopropyl phenylcarbamate (1313, recoded LN-33). In 1943, the United States Department of the Army contracted botanist and bioethicist Arthur Galston, who discovered the defoliants later used in Agent Orange, and his employer University of Illinois Urbana-Champaign to study the effects of 2,4-D and 2,4,5-T on cereal grains (including rice) and broadleaf crops. While a graduate and post-graduate student at the University of Illinois, Galston's research and dissertation focused on finding a chemical means to make soybeans flower and fruit earlier. He discovered both that 2,3,5-triiodobenzoic acid (TIBA) would speed up the flowering of soybeans and that in higher concentrations it would defoliate the soybeans. From these studies arose the concept of using aerial applications of herbicides to destroy enemy crops to disrupt their food supply. In early 1945, the U.S. Army ran tests of various 2,4-D and 2,4,5-T mixtures at the Bushnell Army Airfield in Florida. As a result, the U.S. began a full-scale production of 2,4-D and 2,4,5-T and would have used it against Japan in 1946 during Operation Downfall if the war had continued. In the years after the war, the U.S. tested 1,100 compounds, and field trials of the more promising ones were done at British stations in India and Australia, in order to establish their effects in tropical conditions, as well as at the U.S.'s testing ground in Florida. Between 1950 and 1952, trials were conducted in Tanganyika, at Kikore and Stunyansa, to test arboricides and defoliants under tropical conditions. The chemicals involved were 2,4-D, 2,4,5-T, and endothall (3,6-endoxohexahydrophthalic acid). During 1952–53, the unit supervised the aerial spraying of 2,4,5-T in Kenya to assess the value of defoliants in the eradication of tsetse fly. Early use In Malaya the local unit of Imperial Chemical Industries researched defoliants as weed killers for rubber plantations. Roadside ambushes by the Malayan National Liberation Army were a danger to the British military during the Malayan Emergency (1948–1960) so trials were made to defoliate vegetation that might hide ambush sites, but hand removal was found cheaper. A detailed account of how the British experimented with the spraying of herbicides was written by two scientists, E.K. Woodford of Agricultural Research Council's Unit of Experimental Agronomy and H.G.H. Kearns of the University of Bristol. After the Malayan Emergency ended in 1960, the U.S. considered the British precedent in deciding that the use of defoliants was a legal tactic of warfare. Secretary of State Dean Rusk advised President John F. Kennedy that the British had established a precedent for warfare with herbicides in Malaya. 
Use in the Vietnam War In mid-1961, President Ngo Dinh Diem of South Vietnam asked the United States to help defoliate the lush jungle that was providing cover to his Communist enemies. In August of that year, the Republic of Vietnam Air Force conducted herbicide operations with American help. Diem's request launched a policy debate in the White House and the State and Defense Departments. Many U.S. officials supported herbicide operations, pointing out that the British had already used herbicides and defoliants in Malaya during the 1950s. In November 1961, Kennedy authorized the start of Operation Ranch Hand, the codename for the United States Air Force's herbicide program in Vietnam. The herbicide operations were formally directed by the government of South Vietnam. During the Vietnam War, between 1962 and 1971, the United States military sprayed nearly 20 million U.S. gallons of various chemicals – the "rainbow herbicides" and defoliants – in Vietnam, eastern Laos, and parts of Cambodia as part of Operation Ranch Hand, reaching its peak from 1967 to 1969. For comparison purposes, an Olympic-size pool holds approximately 660,000 U.S. gallons (2.5 million litres). As the British did in Malaya, the goal of the U.S. was to defoliate rural/forested land, depriving guerrillas of food and concealment and clearing sensitive areas such as around base perimeters and possible ambush sites along roads and canals. Samuel P. Huntington argued that the program was also a part of a policy of forced draft urbanization, which aimed to destroy the ability of peasants to support themselves in the countryside, forcing them to flee to the U.S.-dominated cities, depriving the guerrillas of their rural support base. Agent Orange was usually sprayed from helicopters or from low-flying C-123 Provider aircraft, fitted with sprayers and "MC-1 Hourglass" pump systems and chemical tanks. Spray runs were also conducted from trucks, boats, and backpack sprayers. Altogether, over 80 million litres of Agent Orange were applied. The first batch of herbicides was unloaded at Tan Son Nhut Air Base in South Vietnam, on January 9, 1962. U.S. Air Force records show at least 6,542 spraying missions took place over the course of Operation Ranch Hand. By 1971, 12 percent of the total area of South Vietnam had been sprayed with defoliating chemicals, at an average concentration of 13 times the recommended U.S. Department of Agriculture application rate for domestic use. In South Vietnam alone, an estimated of agricultural land was ultimately destroyed. In some areas, TCDD concentrations in soil and water were hundreds of times greater than the levels considered safe by the EPA. The campaign destroyed of upland and mangrove forests and thousands of square kilometres of crops. Overall, more than 20% of South Vietnam's forests were sprayed at least once over the nine-year period. 3.2% of South Vietnam's cultivated land was sprayed at least once between 1965 and 1971. 90% of herbicide use was directed at defoliation. The U.S. military began targeting food crops in October 1962, primarily using Agent Blue; the American public was not made aware of the crop destruction programs until 1965 (and it was then believed that crop spraying had begun that spring). In 1965, 42% of all herbicide spraying was dedicated to food crops. In 1965, members of the U.S. Congress were told, "crop destruction is understood to be the more important purpose ... but the emphasis is usually given to the jungle defoliation in public mention of the program."
The first official acknowledgment of the programs came from the State Department in March 1966. When crops were destroyed, the Viet Cong would compensate for the loss of food by confiscating more food from local villages. Some military personnel reported being told they were destroying crops used to feed guerrillas, only to later discover, most of the destroyed food was actually produced to support the local civilian population. For example, according to Wil Verwey, 85% of the crop lands in Quang Ngai province were scheduled to be destroyed in 1970 alone. He estimated this would have caused famine and left hundreds of thousands of people without food or malnourished in the province. According to a report by the American Association for the Advancement of Science, the herbicide campaign had disrupted the food supply of more than 600,000 people by 1970. Many experts at the time, including Arthur Galston, opposed herbicidal warfare because of concerns about the side effects to humans and the environment by indiscriminately spraying the chemical over a wide area. As early as 1966, resolutions were introduced to the United Nations charging that the U.S. was violating the 1925 Geneva Protocol, which regulated the use of chemical and biological weapons. The U.S. defeated most of the resolutions, arguing that Agent Orange was not a chemical or a biological weapon as it was considered a herbicide and a defoliant and it was used in effort to destroy plant crops and to deprive the enemy of concealment and not meant to target human beings. The U.S. delegation argued that a weapon, by definition, is any device used to injure, defeat, or destroy living beings, structures, or systems, and Agent Orange did not qualify under that definition. It also argued that if the U.S. were to be charged for using Agent Orange, then the United Kingdom and its Commonwealth nations should be charged since they also used it widely during the Malayan Emergency in the 1950s. In 1969, the United Kingdom commented on the draft Resolution 2603 (XXIV): "The evidence seems to us to be notably inadequate for the assertion that the use in war of chemical substances specifically toxic to plants is prohibited by international law." A study carried out by the Bionetic Research Laboratories between 1965 and 1968 found malformations in test animals caused by 2,4,5-T, a component of Agent Orange. The study was later brought to the attention of the White House in October 1969. Other studies reported similar results and the Department of Defense began to reduce the herbicide operation. On April 15, 1970, it was announced that the use of Agent Orange was suspended. Two brigades of the Americal Division in the summer of 1970 continued to use Agent Orange for crop destruction in violation of the suspension. An investigation led to disciplinary action against the brigade and division commanders because they had falsified reports to hide its use. Defoliation and crop destruction were completely stopped by June 30, 1971. Health effects There are various types of cancer associated with Agent Orange, including chronic B-cell leukemia, Hodgkin's lymphoma, multiple myeloma, non-Hodgkin's lymphoma, prostate cancer, respiratory cancer, lung cancer, and soft tissue sarcomas. Vietnamese people The government of Vietnam states that 4 million of its citizens were exposed to Agent Orange, and as many as 3 million have suffered illnesses because of it; these figures include their children who were exposed. 
The Red Cross of Vietnam estimates that up to 1 million people are disabled or have health problems due to contaminated Agent Orange. The United States government has challenged these figures as being unreliable. According to a study by Dr. Nguyen Viet Nhan, children in the areas where Agent Orange was used have been affected and have multiple health problems, including cleft palate, mental disabilities, hernias, and extra fingers and toes. In the 1970s, high levels of dioxin were found in the breast milk of South Vietnamese women, and in the blood of U.S. military personnel who had served in Vietnam. The most affected zones are the mountainous area along Truong Son (Long Mountains) and the border between Vietnam and Cambodia. The affected residents are living in substandard conditions with many genetic diseases. In 2006, Anh Duc Ngo and colleagues of the University of Texas Health Science Center published a meta-analysis that exposed a large amount of heterogeneity (different findings) between studies, a finding consistent with a lack of consensus on the issue. Despite this, statistical analysis of the studies they examined resulted in data that the increase in birth defects/relative risk (RR) from exposure to agent orange/dioxin "appears" to be on the order of 3 in Vietnamese-funded studies, but 1.29 in the rest of the world. There is data near the threshold of statistical significance suggesting Agent Orange contributes to still-births, cleft palate, and neural tube defects, with spina bifida being the most statistically significant defect. The large discrepancy in RR between Vietnamese studies and those in the rest of the world has been ascribed to bias in the Vietnamese studies. Twenty-eight of the former U.S. military bases in Vietnam where the herbicides were stored and loaded onto airplanes may still have high levels of dioxins in the soil, posing a health threat to the surrounding communities. Extensive testing for dioxin contamination has been conducted at the former U.S. airbases in Da Nang, Phù Cát District and Biên Hòa. Some of the soil and sediment on the bases have extremely high levels of dioxin requiring remediation. The Da Nang Air Base has dioxin contamination up to 350 times higher than international recommendations for action. The contaminated soil and sediment continue to affect the citizens of Vietnam, poisoning their food chain and causing illnesses, serious skin diseases and a variety of cancers in the lungs, larynx, and prostate. U.S. veterans While in Vietnam, the veterans were told not to worry and were persuaded the chemical was harmless. After returning home, Vietnam veterans began to suspect their ill health or the instances of their wives having miscarriages or children born with birth defects might be related to Agent Orange and the other toxic herbicides to which they had been exposed in Vietnam. Veterans began to file claims in 1977 to the Department of Veterans Affairs for disability payments for health care for conditions they believed were associated with exposure to Agent Orange, or more specifically, dioxin, but their claims were denied unless they could prove the condition began when they were in the service or within one year of their discharge. 
In order to qualify for compensation, veterans must have served on or near the perimeters of military bases in Thailand during the Vietnam Era, where herbicides were tested and stored outside of Vietnam, veterans who were crew members on C-123 planes flown after the Vietnam War, or were associated with Department of Defense (DoD) projects to test, dispose of, or store herbicides in the U.S. By April 1993, the Department of Veterans Affairs had compensated only 486 victims, although it had received disability claims from 39,419 soldiers who had been exposed to Agent Orange while serving in Vietnam. In a November 2004 Zogby International poll of 987 people, 79% of respondents thought the U.S. chemical companies which produced Agent Orange defoliant should compensate U.S. soldiers who were affected by the toxic chemical used during the war in Vietnam. Also, 51% said they supported compensation for Vietnamese Agent Orange victims. National Academy of Medicine Starting in the early 1990s, the federal government directed the Institute of Medicine (IOM), now known as the National Academy of Medicine, to issue reports every 2 years on the health effects of Agent Orange and similar herbicides. First published in 1994 and titled Veterans and Agent Orange, the IOM reports assess the risk of both cancer and non-cancer health effects. Each health effect is categorized by evidence of association based on available research data. The last update was published in 2016, entitled "Veterans and Agent Orange: Update 2014." The report shows sufficient evidence of an association with soft tissue sarcoma; non-Hodgkin lymphoma (NHL); Hodgkin disease; Chronic lymphocytic leukemia (CLL); including hairy cell leukemia and other chronic B-cell leukemias. Limited or suggested evidence of an association was linked with respiratory cancers (lung, bronchus, trachea, larynx); prostate cancer; multiple myeloma; and bladder cancer. Numerous other cancers were determined to have inadequate or insufficient evidence of links to Agent Orange. The National Academy of Medicine has repeatedly concluded that any evidence suggestive of an association between Agent Orange and prostate cancer is, "limited because chance, bias, and confounding could not be ruled out with confidence." At the request of the Veterans Administration, the Institute Of Medicine evaluated whether service in these C-123 aircraft could have plausibly exposed soldiers and been detrimental to their health. Their report "Post-Vietnam Dioxin Exposure in Agent Orange-Contaminated C-123 Aircraft" confirmed it. U.S. Public Health Service Publications by the United States Public Health Service have shown that Vietnam veterans, overall, have increased rates of cancer, and nerve, digestive, skin, and respiratory disorders. The Centers for Disease Control and Prevention notes that in particular, there are higher rates of acute/chronic leukemia, Hodgkin's lymphoma and non-Hodgkin's lymphoma, throat cancer, prostate cancer, lung cancer, colon cancer, Ischemic heart disease, soft tissue sarcoma, and liver cancer. With the exception of liver cancer, these are the same conditions the U.S. Veterans Administration has determined may be associated with exposure to Agent Orange/dioxin and are on the list of conditions eligible for compensation and treatment. Military personnel who were involved in storage, mixture and transportation (including aircraft mechanics), and actual use of the chemicals were probably among those who received the heaviest exposures. 
Military members who served on Okinawa also claim to have been exposed to the chemical, but there is no verifiable evidence to corroborate these claims. Some studies have suggested that veterans exposed to Agent Orange may be more at risk of developing prostate cancer and potentially more than twice as likely to develop higher-grade, more lethal prostate cancers. However, a critical analysis of these studies and 35 others consistently found that there was no significant increase in prostate cancer incidence or mortality in those exposed to Agent Orange or 2,3,7,8-tetracholorodibenzo-p-dioxin. U.S. Veterans of Laos and Cambodia The United States fought secret wars in Laos and Cambodia, dropping large quantities of Agent Orange in each of those countries. According to one estimate, the U.S. dropped 475,500 gallons of Agent Orange in Laos and 40,900 in Cambodia. Because Laos and Cambodia were neutral during the Vietnam War, the U.S. attempted to keep secret its wars, including its bombing campaigns against those countries, from the American population and has largely avoided compensating American veterans and CIA personnel stationed in Cambodia and Laos who suffered permanent injuries as a result of exposure to Agent Orange there. One noteworthy exception, according to the U.S. Department of Labor, is a claim filed with the CIA by an employee of "a self-insured contractor to the CIA that was no longer in business." The CIA advised the Department of Labor that it "had no objections" to paying the claim and Labor accepted the claim for payment: Ecological impact About 17.8%——of the total forested area of Vietnam was sprayed during the war, which disrupted the ecological equilibrium. The persistent nature of dioxins, erosion caused by loss of tree cover, and loss of seedling forest stock meant that reforestation was difficult (or impossible) in many areas. Many defoliated forest areas were quickly invaded by aggressive pioneer species (such as bamboo and cogon grass), making forest regeneration difficult and unlikely. Animal species diversity was also impacted; in one study a Harvard biologist found 24 species of birds and 5 species of mammals in a sprayed forest, while in two adjacent sections of unsprayed forest there were 145 and 170 species of birds and 30 and 55 species of mammals. Dioxins from Agent Orange have persisted in the Vietnamese environment since the war, settling in the soil and sediment and entering the food chain through animals and fish which feed in the contaminated areas. The movement of dioxins through the food web has resulted in bioconcentration and biomagnification. The areas most heavily contaminated with dioxins are former U.S. air bases. Sociopolitical impact American policy during the Vietnam War was to destroy crops, accepting the sociopolitical impact that that would have. The RAND Corporation's Memorandum 5446-ISA/ARPA states: "the fact that the VC [the Vietcong] obtain most of their food from the neutral rural population dictates the destruction of civilian crops ... if they are to be hampered by the crop destruction program, it will be necessary to destroy large portions of the rural economy – probably 50% or more". Crops were deliberately sprayed with Agent Orange and areas were bulldozed clear of vegetation forcing many rural civilians to cities. 
Legal and diplomatic proceedings International The extensive environmental damage that resulted from usage of the herbicide prompted the United Nations to pass Resolution 31/72 and ratify the Environmental Modification Convention. Many states do not regard this as a complete ban on the use of herbicides and defoliants in warfare, but it does require case-by-case consideration. In the Conference on Disarmament, Article 2(4) Protocol III of the weaponry convention contains "The Jungle Exception", which prohibits states from attacking forests or jungles "except if such natural elements are used to cover, conceal or camouflage combatants or military objectives or are military objectives themselves". This exception voids any protection of any military and civilian personnel from a napalm attack or something like Agent Orange and is clear that it was designed to cover situations like U.S. tactics in Vietnam. Class action lawsuit Since at least 1978, several lawsuits have been filed against the companies which produced Agent Orange, among them Dow Chemical, Monsanto, and Diamond Shamrock. Attorney Hy Mayerson was an early pioneer in Agent Orange litigation, working with environmental attorney Victor Yannacone in 1980 on the first class-action suits against wartime manufacturers of Agent Orange. In meeting Dr. Ronald A. Codario, one of the first civilian doctors to see affected patients, Mayerson, so impressed by the fact a physician would show so much interest in a Vietnam veteran, forwarded more than a thousand pages of information on Agent Orange and the effects of dioxin on animals and humans to Codario's office the day after he was first contacted by the doctor. The corporate defendants sought to escape culpability by blaming everything on the U.S. government. In 1980, Mayerson, with Sgt. Charles E. Hartz as their principal client, filed the first U.S. Agent Orange class-action lawsuit in Pennsylvania, for the injuries military personnel in Vietnam suffered through exposure to toxic dioxins in the defoliant. Attorney Mayerson co-wrote the brief that certified the Agent Orange Product Liability action as a class action, the largest ever filed as of its filing. Hartz's deposition was one of the first ever taken in America, and the first for an Agent Orange trial, for the purpose of preserving testimony at trial, as it was understood that Hartz would not live to see the trial because of a brain tumor that began to develop while he was a member of Tiger Force, special forces, and LRRPs in Vietnam. The firm also located and supplied critical research to the veterans' lead expert, Dr. Codario, including about 100 articles from toxicology journals dating back more than a decade, as well as data about where herbicides had been sprayed, what the effects of dioxin had been on animals and humans, and every accident in factories where herbicides were produced or dioxin was a contaminant of some chemical reaction. The chemical companies involved denied that there was a link between Agent Orange and the veterans' medical problems. However, on May 7, 1984, seven chemical companies settled the class-action suit out of court just hours before jury selection was to begin. The companies agreed to pay $180 million as compensation if the veterans dropped all claims against them. Slightly over 45% of the sum was ordered to be paid by Monsanto alone. 
Many veterans who were victims of Agent Orange exposure were outraged the case had been settled instead of going to court and felt they had been betrayed by the lawyers. "Fairness Hearings" were held in five major American cities, where veterans and their families discussed their reactions to the settlement and condemned the actions of the lawyers and courts, demanding the case be heard before a jury of their peers. Federal Judge Jack B. Weinstein refused the appeals, claiming the settlement was "fair and just". By 1989, the veterans' fears were confirmed when it was decided how the money from the settlement would be paid out. A totally disabled Vietnam veteran would receive a maximum of $12,000 spread out over the course of 10 years. Furthermore, by accepting the settlement payments, disabled veterans would become ineligible for many state benefits that provided far more monetary support than the settlement, such as food stamps, public assistance, and government pensions. A widow of a Vietnam veteran who died of Agent Orange exposure would receive $3,700. In 2004, Monsanto spokesman Jill Montgomery said Monsanto should not be liable at all for injuries or deaths caused by Agent Orange, saying: "We are sympathetic with people who believe they have been injured and understand their concern to find the cause, but reliable scientific evidence indicates that Agent Orange is not the cause of serious long-term health effects." New Jersey Agent Orange Commission In 1980, New Jersey created the New Jersey Agent Orange Commission, the first state commission created to study its effects. The commission's research project in association with Rutgers University was called "The Pointman Project". It was disbanded by Governor Christine Todd Whitman in 1996. During the first phase of the project, commission researchers devised ways to determine small dioxin levels in blood. Prior to this, such levels could only be found in the adipose (fat) tissue. The project studied dioxin (TCDD) levels in blood as well as in adipose tissue in a small group of Vietnam veterans who had been exposed to Agent Orange and compared them to those of a matched control group; the levels were found to be higher in the former group. The second phase of the project continued to examine and compare dioxin levels in various groups of Vietnam veterans, including Army, Marines and brown water riverboat Navy personnel. U.S. Congress In 1991, Congress enacted the Agent Orange Act, giving the Department of Veterans Affairs the authority to declare certain conditions "presumptive" to exposure to Agent Orange/dioxin, making these veterans who served in Vietnam eligible to receive treatment and compensation for these conditions. The same law required the National Academy of Sciences to periodically review the science on dioxin and herbicides used in Vietnam to inform the Secretary of Veterans Affairs about the strength of the scientific evidence showing association between exposure to Agent Orange/dioxin and certain conditions. The authority for the National Academy of Sciences reviews and addition of any new diseases to the presumptive list by the VA expired in 2015 under the sunset clause of the Agent Orange Act of 1991. Through this process, the list of 'presumptive' conditions has grown since 1991, and currently the U.S. 
Department of Veterans Affairs has listed prostate cancer, respiratory cancers, multiple myeloma, type II diabetes mellitus, Hodgkin's disease, non-Hodgkin's lymphoma, soft tissue sarcoma, chloracne, porphyria cutanea tarda, peripheral neuropathy, chronic lymphocytic leukemia, and spina bifida in children of veterans exposed to Agent Orange as conditions associated with exposure to the herbicide. This list now includes B cell leukemias, such as hairy cell leukemia, Parkinson's disease and ischemic heart disease, these last three having been added on August 31, 2010. Several highly placed individuals in government are voicing concerns about whether some of the diseases on the list should, in fact, actually have been included. In 2011, an appraisal of the 20 year long Air Force Health Study that began in 1982 indicates that the results of the AFHS as they pertain to Agent Orange, do not provide evidence of disease in the Operation Ranch Hand veterans caused by "their elevated levels of exposure to Agent Orange". The VA initially denied the applications of post-Vietnam C-123 aircrew veterans because as veterans without "boots on the ground" service in Vietnam, they were not covered under VA's interpretation of "exposed". In June 2015, the Secretary of Veterans Affairs issued an Interim final rule providing presumptive service connection for post-Vietnam C-123 aircrews, maintenance staff and aeromedical evacuation crews. The VA now provides medical care and disability compensation for the recognized list of Agent Orange illnesses. U.S.–Vietnamese government negotiations In 2002, Vietnam and the U.S. held a joint conference on Human Health and Environmental Impacts of Agent Orange. Following the conference, the U.S. National Institute of Environmental Health Sciences (NIEHS) began scientific exchanges between the U.S. and Vietnam, and began discussions for a joint research project on the human health impacts of Agent Orange. These negotiations broke down in 2005, when neither side could agree on the research protocol and the research project was canceled. More progress has been made on the environmental front. In 2005, the first U.S.-Vietnam workshop on remediation of dioxin was held. Starting in 2005, the EPA began to work with the Vietnamese government to measure the level of dioxin at the Da Nang Air Base. Also in 2005, the Joint Advisory Committee on Agent Orange, made up of representatives of Vietnamese and U.S. government agencies, was established. The committee has been meeting yearly to explore areas of scientific cooperation, technical assistance and environmental remediation of dioxin. A breakthrough in the diplomatic stalemate on this issue occurred as a result of United States President George W. Bush's state visit to Vietnam in November 2006. In the joint statement, President Bush and President Triet agreed "further joint efforts to address the environmental contamination near former dioxin storage sites would make a valuable contribution to the continued development of their bilateral relationship." On May 25, 2007, President Bush signed the U.S. Troop Readiness, Veterans' Care, Katrina Recovery, and Iraq Accountability Appropriations Act, 2007 into law for the wars in Iraq and Afghanistan that included an earmark of $3 million specifically for funding for programs for the remediation of dioxin 'hotspots' on former U.S. 
military bases, and for public health programs for the surrounding communities; some authors consider this to be completely inadequate, pointing out that the Da Nang Airbase alone will cost $14 million to clean up, and that three others are estimated to require $60 million for cleanup. The appropriation was renewed in the fiscal year 2009 and again in FY 2010. An additional $12 million was appropriated in the fiscal year 2010 in the Supplemental Appropriations Act and a total of $18.5 million appropriated for fiscal year 2011. Secretary of State Hillary Clinton stated during a visit to Hanoi in October 2010 that the U.S. government would begin work on the clean-up of dioxin contamination at the Da Nang Airbase. In June 2011, a ceremony was held at Da Nang airport to mark the start of U.S.-funded decontamination of dioxin hotspots in Vietnam. Thirty-two million dollars has so far been allocated by the U.S. Congress to fund the program. A $43 million project began in the summer of 2012, as Vietnam and the U.S. forge closer ties to boost trade and counter China's rising influence in the disputed South China Sea. Vietnamese victims class action lawsuit in U.S. courts On January 31, 2004, a victim's rights group, the Vietnam Association for Victims of Agent Orange/dioxin (VAVA), filed a lawsuit in the United States District Court for the Eastern District of New York in Brooklyn, against several U.S. companies for liability in causing personal injury, by developing, and producing the chemical, and claimed that the use of Agent Orange violated the 1907 Hague Convention on Land Warfare, 1925 Geneva Protocol, and the 1949 Geneva Conventions. Dow Chemical and Monsanto were the two largest producers of Agent Orange for the U.S. military and were named in the suit, along with the dozens of other companies (Diamond Shamrock, Uniroyal, Thompson Chemicals, Hercules, etc.). On March 10, 2005, Judge Jack B. Weinstein of the Eastern District – who had presided over the 1984 U.S. veterans class-action lawsuit – dismissed the lawsuit, ruling there was no legal basis for the plaintiffs' claims. He concluded Agent Orange was not considered a poison under international law at the time of its use by the U.S.; the U.S. was not prohibited from using it as a herbicide; and the companies which produced the substance were not liable for the method of its use by the government. In the dismissal statement issued by Weinstein, he wrote "The prohibition extended only to gases deployed for their asphyxiating or toxic effects on man, not to herbicides designed to affect plants that may have unintended harmful side-effects on people." Author and activist George Jackson had written previously that "if the Americans were guilty of war crimes for using Agent Orange in Vietnam, then the British would be also guilty of war crimes as well since they were the first nation to deploy the use of herbicides and defoliants in warfare and used them on a large scale throughout the Malayan Emergency. Not only was there no outcry by other states in response to the United Kingdom's use, but the U.S. viewed it as establishing a precedent for the use of herbicides and defoliants in jungle warfare." The U.S. government was also not a party in the lawsuit because of sovereign immunity, and the court ruled the chemical companies, as contractors of the U.S. government, shared the same immunity. The case was appealed and heard by the Second Circuit Court of Appeals in Manhattan on June 18, 2007. 
Three judges on the court upheld Weinstein's ruling to dismiss the case. They ruled that, though the herbicides contained a dioxin (a known poison), they were not intended to be used as a poison on humans. Therefore, they were not considered a chemical weapon and thus not a violation of international law. A further review of the case by the entire panel of judges of the Court of Appeals also confirmed this decision. The lawyers for the Vietnamese filed a petition to the U.S. Supreme Court to hear the case. On March 2, 2009, the Supreme Court denied certiorari and declined to reconsider the ruling of the Court of Appeals. Help for those affected in Vietnam To assist those who have been affected by Agent Orange/dioxin, the Vietnamese have established "peace villages", which each host between 50 and 100 victims, giving them medical and psychological help. As of 2006, there were 11 such villages, thus granting some social protection to fewer than a thousand victims. U.S. veterans of the war in Vietnam and individuals who are aware of and sympathetic to the impacts of Agent Orange have supported these programs in Vietnam. An international group of veterans from the U.S. and its allies during the Vietnam War working with their former enemy—veterans from the Vietnam Veterans Association—established the Vietnam Friendship Village outside of Hanoi. The center provides medical care, rehabilitation and vocational training for children and veterans from Vietnam who have been affected by Agent Orange. In 1998, the Vietnam Red Cross established the Vietnam Agent Orange Victims Fund to provide direct assistance to families throughout Vietnam that have been affected. In 2003, the Vietnam Association of Victims of Agent Orange (VAVA) was formed. In addition to filing the lawsuit against the chemical companies, VAVA provides medical care, rehabilitation services and financial assistance to those injured by Agent Orange. The Vietnamese government provides small monthly stipends to more than 200,000 Vietnamese believed affected by the herbicides; this totaled $40.8 million in 2008. The Vietnam Red Cross has raised more than $22 million to assist the ill or disabled, and several U.S. foundations, United Nations agencies, European governments and nongovernmental organizations have given a total of about $23 million for site cleanup, reforestation, health care and other services to those in need. Vuong Mo of the Vietnam News Agency described one of the centers: May is 13, but she knows nothing, is unable to talk fluently, nor walk with ease due to her bandy legs. Her father is dead and she has four elder brothers, all mentally retarded ... The students are all disabled, retarded and of different ages. Teaching them is a hard job. They are of the 3rd grade but many of them find it hard to do the reading. Only a few of them can. Their pronunciation is distorted due to their twisted lips and their memory is quite short. They easily forget what they've learned ... In the Village, it is quite hard to tell the kids' exact ages. Some in their twenties have physical statures as small as those of 7- or 8-year-olds. They find it difficult to feed themselves, much less have mental ability or physical capacity for work. No one can hold back the tears when seeing the heads turning round unconsciously, the bandy arms managing to push the spoon of food into the mouths with awful difficulty ... Yet they still keep smiling, singing in their great innocence, at the presence of some visitors, craving for something beautiful. 
On June 16, 2010, members of the U.S.-Vietnam Dialogue Group on Agent Orange/Dioxin unveiled a comprehensive 10-year Declaration and Plan of Action to address the toxic legacy of Agent Orange and other herbicides in Vietnam. The Plan of Action was released as an Aspen Institute publication and calls upon the U.S. and Vietnamese governments to join with other governments, foundations, businesses, and nonprofits in a partnership to clean up dioxin "hot spots" in Vietnam and to expand humanitarian services for people with disabilities there. On September 16, 2010, Senator Patrick Leahy acknowledged the work of the Dialogue Group by releasing a statement on the floor of the United States Senate. The statement urges the U.S. government to take the Plan of Action's recommendations into account in developing a multi-year plan of activities to address the Agent Orange/dioxin legacy. Use outside of Vietnam Australia In 2008, Australian researcher Jean Williams claimed that cancer rates in Innisfail, Queensland, were 10 times higher than the state average because of secret testing of Agent Orange by Australian military scientists during the Vietnam War. Williams, who had won the Order of Australia medal for her research on the effects of chemicals on U.S. war veterans, based her allegations on Australian government reports found in the Australian War Memorial's archives. A former soldier, Ted Bosworth, backed up the claims, saying that he had been involved in the secret testing. Neither Williams nor Bosworth has produced verifiable evidence to support their claims. The Queensland health department determined that cancer rates in Innisfail were no higher than those in other parts of the state. Canada The U.S. military, with the permission of the Canadian government, tested herbicides, including Agent Orange, in the forests near Canadian Forces Base Gagetown in New Brunswick. In 2007, the government of Canada offered a one-time ex gratia payment of $20,000 as compensation for Agent Orange exposure at CFB Gagetown. On July 12, 2005, Merchant Law Group, on behalf of over 1,100 Canadian veterans and civilians who were living in and around CFB Gagetown, filed a lawsuit with the Federal Court of Canada to pursue class action litigation concerning Agent Orange and Agent Purple. On August 4, 2009, the court rejected the case, citing a lack of evidence. In 2007, the Canadian government announced that a research and fact-finding program initiated in 2005 had found the base was safe. On February 17, 2011, the Toronto Star revealed that Agent Orange had been employed to clear extensive plots of Crown land in Northern Ontario. The Toronto Star reported that, "records from the 1950s, 1960s and 1970s show forestry workers, often students and junior rangers, spent weeks at a time as human markers holding red, helium-filled balloons on fishing lines while low-flying planes sprayed toxic herbicides including an infamous chemical mixture known as Agent Orange on the brush and the boys below." In response to the Toronto Star article, the Ontario provincial government launched a probe into the use of Agent Orange. Guam An analysis of chemicals present in the island's soil, together with resolutions passed by Guam's legislature, suggests that Agent Orange was among the herbicides routinely used on and around Andersen Air Force Base and Naval Air Station Agana. Despite the evidence, the Department of Defense continues to deny that Agent Orange was stored or used on Guam. 
Several Guam veterans have collected evidence to assist in their disability claims for direct exposure to dioxin-containing herbicides such as 2,4,5-T, seeking illness associations and disability coverage similar to those that have become standard for veterans harmed by the same chemical contaminant in the Agent Orange used in Vietnam. Korea Agent Orange was used in Korea in the late 1960s. In 1999, about 20,000 South Koreans filed two separate lawsuits against U.S. companies, seeking more than $5 billion in damages. After losing a decision in 2002, they filed an appeal. In January 2006, the South Korean Appeals Court ordered Dow Chemical and Monsanto to pay $62 million in compensation to about 6,800 people. The ruling acknowledged that "the defendants failed to ensure safety as the defoliants manufactured by the defendants had higher levels of dioxins than standard", and, quoting the U.S. National Academy of Sciences report, declared that there was a "causal relationship" between Agent Orange and a range of diseases, including several cancers. The judges failed to acknowledge "the relationship between the chemical and peripheral neuropathy, the disease most widespread among Agent Orange victims". In 2011, the local U.S. television station KPHO-TV in Phoenix, Arizona, alleged that in 1978 the United States Army had buried 250 drums of Agent Orange at Camp Carroll, the U.S. Army base in Gyeongsangbuk-do, Korea. Currently, veterans who provide evidence meeting VA requirements for service in Vietnam and who can medically establish that anytime after this 'presumptive exposure' they developed any medical problems on the list of presumptive diseases may receive compensation from the VA. Certain veterans who served in Korea and are able to prove they were assigned to certain specified units around the DMZ during a specific time frame are afforded a similar presumption. New Zealand The use of Agent Orange has been controversial in New Zealand because of the exposure of New Zealand troops in Vietnam and because herbicide used in Agent Orange was produced at the Ivon Watkins-Dow chemical plant in Paritutu, New Plymouth, and has been alleged at various times to have been exported for use in the Vietnam War and to other users. There have been continuing claims, as yet unproven, that the suburb of Paritutu has also been polluted. There are cases of New Zealand soldiers developing cancers such as bone cancer, but none has been scientifically connected to exposure to herbicides. Philippines Herbicide persistence studies of Agents Orange and White were conducted in the Philippines. Johnston Atoll The U.S. Air Force operation to remove Herbicide Orange from Vietnam in 1972 was named Operation Pacer IVY, while the operation to destroy the Agent Orange stored at Johnston Atoll in 1977 was named Operation Pacer HO. Operation Pacer IVY collected Agent Orange in South Vietnam and removed it by ship in 1972 for storage on Johnston Atoll. The EPA reports that Herbicide Orange was stored at Johnston Island in the Pacific and at Gulfport, Mississippi. Research and studies were initiated to find a safe method to destroy the materials, and it was discovered they could be incinerated safely under special conditions of temperature and dwell time. However, these herbicides were expensive, and the Air Force wanted to resell its surplus instead of dumping it at sea. 
Among the many methods tested was the possibility of salvaging the herbicides by reprocessing them and filtering out the TCDD contaminant with carbonized (charcoaled) coconut fibers. This concept was tested in 1976, and a pilot plant was constructed at Gulfport. From July to September 1977, during Operation Pacer HO, the entire stock of Agent Orange from both Herbicide Orange storage sites at Gulfport and Johnston Atoll was incinerated in four separate burns in the vicinity of Johnston Island aboard a Dutch-owned waste incineration ship. As of 2004, some records of the storage and disposition of Agent Orange at Johnston Atoll have been associated with the historical records of Operation Red Hat. Okinawa, Japan There have been dozens of press reports about the use and/or storage of military-formulated herbicides on Okinawa, based upon statements by former U.S. service members who had been stationed on the island, photographs, government records, and unearthed storage barrels. The U.S. Department of Defense has denied these allegations with statements by military officials and spokespersons, as well as a January 2013 report authored by Dr. Alvin Young that was released in April 2013. In particular, the 2013 report rebuts articles written by journalist Jon Mitchell as well as a statement from "An Ecological Assessment of Johnston Atoll", a 2003 publication produced by the United States Army Chemical Materials Agency, which states, "in 1972, the U.S. Air Force also brought about 25,000 200L drums of the chemical, Herbicide Orange (HO) to Johnston Island that originated from Vietnam and was stored on Okinawa." The 2013 report states: "The authors of the [2003] report were not DoD employees, nor were they likely familiar with the issues surrounding Herbicide Orange or its actual history of transport to the Island." It also details the transport phases and routes of Agent Orange from Vietnam to Johnston Atoll, none of which included Okinawa. Further official confirmation of restricted (dioxin-containing) herbicide storage on Okinawa appeared in a 1971 Fort Detrick report titled "Historical, Logistical, Political and Technical Aspects of the Herbicide/Defoliant Program", which mentions that the environmental statement should consider "Herbicide stockpiles elsewhere in PACOM (Pacific Command) U.S. Government restricted materials Thailand and Okinawa (Kadena AFB)." The 2013 DoD report says that the environmental statement urged by the 1971 report was published in 1974 as "The Department of Air Force Final Environmental Statement", and that the latter did not find Agent Orange was held in either Thailand or Okinawa. Thailand Agent Orange was tested by the United States in Thailand during the Vietnam War. In 1999, buried drums were uncovered and confirmed to be Agent Orange. Workers who uncovered the drums fell ill while upgrading the airport near Hua Hin District, 100 km south of Bangkok. Vietnam-era veterans whose service involved duty on or near the perimeters of military bases in Thailand anytime between February 28, 1961, and May 7, 1975, may have been exposed to herbicides and may qualify for VA benefits. A declassified Department of Defense report written in 1973 suggests that there was significant use of herbicides on the fenced-in perimeters of military bases in Thailand to remove foliage that provided cover for enemy forces. 
In 2013, the VA determined that herbicides used on the Thailand base perimeters may have been tactical and procured from Vietnam, or a strong, commercial type resembling tactical herbicides. United States The University of Hawaii has acknowledged extensive testing of Agent Orange on behalf of the United States Department of Defense in Hawaii, along with mixtures of Agent Orange on Kaua'i Island in 1967–68 and on Hawaii Island in 1966; testing and storage in other U.S. locations have been documented by the United States Department of Veterans Affairs. In 1971, the C-123 aircraft used for spraying Agent Orange were returned to the United States and assigned to various East Coast USAF Reserve squadrons, and then employed in traditional airlift missions between 1972 and 1982. In 1994, testing by the Air Force identified some former spray aircraft as "heavily contaminated" with dioxin residue. Inquiries by aircrew veterans in 2011 brought a decision by the U.S. Department of Veterans Affairs opining that not enough dioxin residue remained to injure these post-Vietnam War veterans. On 26 January 2012, the U.S. Centers for Disease Control and Prevention's Agency for Toxic Substances and Disease Registry challenged this with its finding that former spray aircraft were indeed contaminated and that the aircrews had been exposed to harmful levels of dioxin. In response to veterans' concerns, the VA in February 2014 referred the C-123 issue to the Institute of Medicine for a special study, with results released on January 9, 2015. In 1978, the EPA suspended spraying of Agent Orange in national forests. Agent Orange was sprayed on thousands of acres of brush in the Tennessee Valley for 15 years before scientists discovered the herbicide was dangerous. Monroe County, Tennessee, is one of the locations known to have been sprayed, according to the Tennessee Valley Authority. Forty-four remote acres were doused with Agent Orange along power lines throughout the national forest. In 1983, New Jersey declared a state of emergency at a Passaic River production site. The dioxin pollution in the Passaic River dates back to the Vietnam era, when Diamond Alkali manufactured Agent Orange at a factory along the river. The tidal river carried dioxin upstream and down, tainting a 17-mile stretch of riverbed in one of New Jersey's most populous areas. A December 2006 Department of Defense report listed Agent Orange testing, storage, and disposal sites at 32 locations throughout the United States, as well as in Canada, Thailand, Puerto Rico, Korea, and in the Pacific Ocean. The Veterans Administration has also acknowledged that Agent Orange was used domestically by U.S. forces at test sites throughout the United States. Eglin Air Force Base in Florida was one of the primary testing sites throughout the 1960s. Cleanup programs In February 2012, Monsanto agreed to settle a case covering dioxin contamination around a plant in Nitro, West Virginia, that had manufactured Agent Orange. Monsanto agreed to pay up to $9 million for cleanup of affected homes, $84 million for medical monitoring of people affected, and the community's legal fees. On 9 August 2012, the United States and Vietnam began a cooperative clean-up of the toxic chemical on part of Danang International Airport, marking the first time the U.S. government had been involved in cleaning up Agent Orange in Vietnam. Danang was the primary storage site of the chemical. 
Two other cleanup sites the United States and Vietnam are considering are Biên Hòa, in the southern province of Đồng Nai—a "hotspot" for dioxin—and Phù Cát airport in the central province of Bình Định, according to U.S. Ambassador to Vietnam David Shear. According to the Vietnamese newspaper Nhân Dân, the U.S. government provided $41 million to the project. As of 2017, some 110,000 cubic meters of soil have been "cleaned." The Seabees' Naval Construction Battalion Center at Gulfport, Mississippi, was the largest storage site for Agent Orange in the United States. The site, roughly 30 acres in size, was still being cleaned up in 2013. In 2016, the EPA laid out its plan for cleaning up an 8-mile stretch of the Passaic River in New Jersey, with an estimated cost of $1.4 billion. The contaminants reached Newark Bay and other waterways, according to the EPA, which has designated the area a Superfund site. Since destruction of the dioxin requires temperatures over 1,000 °C (about 1,830 °F), the destruction process is energy-intensive. See also Environmental impact of war Orange Crush (song) Rainbow herbicides Scorched earth Teratology Vietnam Syndrome Notes References NTP (National Toxicology Program); "Toxicology and Carcinogenesis Studies of 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) in Female Harlan Sprague-Dawley Rats (Gavage Studies)", CASRN 1746-01-6, April 2006 Further reading Government/NGO reports "Agent Orange in Vietnam: Recent Developments in Remediation: Testimony of Ms. Tran Thi Hoan", Subcommittee on Asia, the Pacific and the Global Environment, U.S. House of Representatives, Committee on Foreign Affairs, July 15, 2010 "Agent Orange in Vietnam: Recent Developments in Remediation: Testimony of Dr. Nguyen Thi Ngoc Phuong", Subcommittee on Asia, the Pacific and the Global Environment, U.S. House of Representatives, Committee on Foreign Affairs, July 15, 2010 Agent Orange Policy, American Public Health Association, 2007 "Assessment of the health risk of dioxins", World Health Organization/International Programme on Chemical Safety, 1998 Operation Ranch Hand: Herbicides In Southeast Asia, History of Operation Ranch Hand, 1983 "Agent Orange Dioxin Contamination in the Environment and Food Chain at Key Hotspots in Viet Nam", Boivin, TG, et al., 2011 News Fawthrop, Tom; "Agent of suffering", Guardian, February 10, 2008 Cox, Paul; "The Legacy of Agent Orange is a Continuing Focus of VVAW", The Veteran, Vietnam Veterans Against the War, Volume 38, No. 2, Fall 2008 Barlett, Donald P. and Steele, James B.; "Monsanto's Harvest of Fear", Vanity Fair, May 2008 Quick, Ben; "The Boneyard", Orion Magazine, March/April 2008 Cheng, Eva; "Vietnam's Agent Orange victims call for solidarity", Green Left Weekly, September 28, 2005 Children and the Vietnam War 30–40 years after the use of Agent Orange Tokar, Brian; "Monsanto: A Checkered History", Z Magazine, March 1999 Video Agent Orange: The Last Battle, dir. Stephanie Jobe, Adam Scholl, DVD, 2005 "HADES", dir. Caroline Delerue, screenplay Mauro Bellanova, 2011 The Man With The Wooden Face, James Nguyen, short film, 2017 Photojournalism CNN Al Jazeera America External links U.S. Environmental Protection Agency – Dioxin Web site Agent Orange Office of Public Health and Environmental Hazards, U.S. 
Department of Veterans Affairs Report from the National Birth Defects Registry – Birth Defects in Vietnam Veterans' Children "An Ecological Assessment of Johnston Atoll" Aftermath of the Vietnam War Auxinic herbicides Carcinogens Defoliants Dioxins Environmental controversies Environmental impact of war Imperial Chemical Industries Malayan Emergency Medical controversies Military equipment of the Vietnam War Anti-communist terrorism Monsanto Operation Ranch Hand Teratogens Chemical weapons
Agent Orange
Astronomical year numbering is based on AD/CE year numbering, but follows normal decimal integer numbering more strictly. Thus, it has a year 0; the years before that are designated with negative numbers and the years after that are designated with positive numbers. Astronomers use the Julian calendar for years before 1582, including the year 0, and the Gregorian calendar for years after 1582, as exemplified by Jacques Cassini (1740), Simon Newcomb (1898) and Fred Espenak (2007). The prefix AD and the suffixes CE, BC or BCE (Common Era, Before Christ or Before Common Era) are dropped. The year 1 BC/BCE is numbered 0, the year 2 BC is numbered −1, and in general the year n BC/BCE is numbered "−(n − 1)" (a negative number equal to 1 − n). The numbers of AD/CE years are not changed and are written with either no sign or a positive sign; thus in general n AD/CE is simply n or +n. For normal calculation a number zero is often needed, most notably when calculating the number of years in a period that spans the epoch; the end years need only be subtracted from each other. The system is so named due to its use in astronomy. Few disciplines outside history deal with the time before year 1, some exceptions being dendrochronology, archaeology and geology, the latter two of which use 'years before the present'. Although the absolute numerical values of astronomical and historical years only differ by one before year 1, this difference is critical when calculating astronomical events like eclipses or planetary conjunctions to determine when historical events which mention them occurred. Usage of the year zero In his Rudolphine Tables (1627), Johannes Kepler used a prototype of year zero which he labeled Christi (Christ's) between years labeled Ante Christum (Before Christ) and Post Christum (After Christ) on the mean motion tables for the Sun, Moon, Saturn, Jupiter, Mars, Venus and Mercury. In 1702, the French astronomer Philippe de la Hire used a year labeled 0 at the end of years labeled ante Christum (BC), and immediately before years labeled post Christum (AD) on the mean motion pages in his Tabulæ Astronomicæ, thus adding the designation 0 to Kepler's Christi. Finally, in 1740 the French astronomer Jacques Cassini, who is traditionally credited with the invention of year zero, completed the transition in his Tables astronomiques, simply labeling this year 0, which he placed at the end of Julian years labeled avant Jesus-Christ (before Jesus Christ or BC), and immediately before Julian years labeled après Jesus-Christ (after Jesus Christ or AD). Cassini gave practical reasons for using a year 0, and Jean Meeus gives a similar explanation of the convention. Fred Espenak of NASA lists 50 phases of the Moon within year 0, showing that it is a full year, not an instant in time. Signed years without the year zero Although he used the usual French terms "avant J.-C." (before Jesus Christ) and "après J.-C." (after Jesus Christ) to label years elsewhere in his book, the Byzantinist Venance Grumel (1890–1967) used negative years (identified by a minus sign, −) to label BC years and unsigned positive years to label AD years in a table. He may have done so to save space, and he put no year 0 between them. Version 1.0 of the XML Schema language, often used to describe data interchanged between computers in XML, includes built-in primitive datatypes date and dateTime. 
Although these are defined in terms of ISO 8601 which uses the proleptic Gregorian calendar and therefore should include a year 0, the XML Schema specification states that there is no year zero. Version 1.1 of the defining recommendation realigned the specification with ISO 8601 by including a year zero, despite the problems arising from the lack of backward compatibility. See also Julian day, another calendar commonly used by astronomers Astronomical chronology Holocene calendar ISO 8601 References Calendar eras Chronology Specific calendars Year numbering
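The year-numbering correspondence described above reduces to simple arithmetic. The following is a minimal sketch, assuming Python; the function names are illustrative and not taken from any cited source. It converts between historical BC/AD labels and astronomical year numbers and shows how an interval spanning the epoch is obtained by plain subtraction.

    # Illustrative sketch: historical (BC/AD) years versus astronomical year numbers.
    def to_astronomical(year: int, era: str) -> int:
        """Convert a historical year (era 'BC'/'BCE' or 'AD'/'CE') to an astronomical year number."""
        if year < 1:
            raise ValueError("historical years start at 1; there is no historical year 0")
        if era.upper() in ("BC", "BCE"):
            return -(year - 1)   # 1 BC -> 0, 2 BC -> -1, 45 BC -> -44
        return year              # AD/CE years keep their number

    def from_astronomical(a: int) -> str:
        return f"AD {a}" if a >= 1 else f"{1 - a} BC"

    assert to_astronomical(1, "BC") == 0
    assert from_astronomical(-1) == "2 BC"
    # Years elapsed from 10 BC to AD 10: the end years need only be subtracted.
    assert to_astronomical(10, "AD") - to_astronomical(10, "BC") == 19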
Astronomical year numbering
Ab urbe condita ('from the founding of the City'), or anno urbis conditae ('in the year since the city's founding'), abbreviated as AUC or AVC, expresses a date in years since 753 BC, the traditional founding of Rome. It is an expression used in antiquity and by classical historians to refer to a given year in Ancient Rome. In reference to the traditional year of the foundation of Rome, the year 1 BC would be written AUC 753, whereas AD 1 would be AUC 754. The foundation of the Roman Empire in 27 BC would be AUC 727. The common era year 2022 coincides with the AUC year 2775. Usage of the term was more common during the Renaissance, when editors sometimes added AUC to Roman manuscripts they published, giving the false impression that the convention was commonly used in antiquity. In reality, the dominant method of identifying years in Roman times was to name the two consuls who held office that year. In late antiquity, regnal years were also in use, as in Roman Egypt during the Diocletian era after AD 293, and in the Byzantine Empire from AD 537, following a decree by Justinian. Significance The traditional date for the founding of Rome, 21 April 753 BC, is due to Marcus Terentius Varro (1st century BC). Varro may have used the consular list (with its mistakes) and called the year of the first consuls "ab Urbe condita 245," accepting the 244-year interval from Dionysius of Halicarnassus for the kings after the foundation of Rome. The correctness of this calculation has not been confirmed, but it is still used worldwide. From the time of Claudius (reigned AD 41 to AD 54) onward, this calculation superseded other contemporary calculations. Celebrating the anniversary of the city became part of imperial propaganda. Claudius was the first to hold magnificent celebrations in honor of the anniversary of the city, in AD 48, the eight hundredth year from the founding of the city. Hadrian, in AD 121, and Antoninus Pius, in AD 147 and AD 148, held similar celebrations. In AD 248, Philip the Arab celebrated Rome's first millennium, together with Ludi saeculares for Rome's alleged tenth saeculum. Coins from his reign commemorate the celebrations. A coin by a contender for the imperial throne, Pacatianus, explicitly states "[y]ear one thousand and first," which is an indication that the citizens of the empire had a sense of the beginning of a new era, a Sæculum Novum. Calendar era The Anno Domini (AD) year numbering was developed by a monk named Dionysius Exiguus in Rome in AD 525, as a result of his work on calculating the date of Easter. Dionysius did not use the AUC convention, but instead based his calculations on the Diocletian era. This convention had been in use since AD 293, the year the Tetrarchy was instituted, as it had become impractical to use the regnal years of the current emperor. In his Easter table, Dionysius equated the year AD 532 with the 248th regnal year of Diocletian. The table counted the years starting from the presumed birth of Christ, rather than the accession of the emperor Diocletian on 20 November AD 284 or, as stated by Dionysius: "sed magis elegimus ab incarnatione Domini nostri Jesu Christi annorum tempora praenotare" ("but rather we choose to name the times of the years from the incarnation of our Lord Jesus Christ"). Blackburn and Holford-Strevens review interpretations of Dionysius which place the Incarnation in 2 BC, 1 BC, or AD 1. The year AD 1 corresponds to AUC 754, based on the epoch of Varro. 
Thus the year x BC corresponds to AUC (754 − x), and the year AD x corresponds to AUC (753 + x). See also Calendar era History of Italy List of Latin phrases Roman calendar Notes Citations External links 1st-century BC establishments in the Roman Empire 8th century BC in the Roman Kingdom Calendar eras Chronology Latin words and phrases Roman calendar Diocletian
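The correspondences given above (753 BC = AUC 1, 1 BC = AUC 753, AD 1 = AUC 754, AD 2022 = AUC 2775) amount to two small formulas. A minimal sketch, assuming Python; the function names are illustrative.

    # Illustrative sketch of the BC/AD-to-AUC correspondence described above.
    def auc_from_bc(year_bc: int) -> int:
        return 754 - year_bc    # 753 BC -> AUC 1, 27 BC -> AUC 727, 1 BC -> AUC 753

    def auc_from_ad(year_ad: int) -> int:
        return 753 + year_ad    # AD 1 -> AUC 754, AD 2022 -> AUC 2775

    assert auc_from_bc(753) == 1
    assert auc_from_bc(27) == 727
    assert auc_from_ad(2022) == 2775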
Ab urbe condita
Ants are eusocial insects of the family Formicidae and, along with the related wasps and bees, belong to the order Hymenoptera. Ants appear in the fossil record across the globe in considerable diversity during the latest Early Cretaceous and early Late Cretaceous, suggesting an earlier origin. Ants evolved from vespoid wasp ancestors in the Cretaceous period, and diversified after the rise of flowering plants. More than 13,800 of an estimated total of 22,000 species have been classified. They are easily identified by their geniculate (elbowed) antennae and the distinctive node-like structure that forms their slender waists. Ants form colonies that range in size from a few dozen predatory individuals living in small natural cavities to highly organised colonies that may occupy large territories and consist of millions of individuals. Larger colonies consist of various castes of sterile, wingless females, most of which are workers (ergates), as well as soldiers (dinergates) and other specialised groups. Nearly all ant colonies also have some fertile males called "drones" and one or more fertile females called "queens" (gynes). The colonies are described as superorganisms because the ants appear to operate as a unified entity, collectively working together to support the colony. Ants have colonised almost every landmass on Earth. The only places lacking indigenous ants are Antarctica and a few remote or inhospitable islands. Ants thrive in most ecosystems and may form 15–25% of the terrestrial animal biomass. Their success in so many environments has been attributed to their social organisation and their ability to modify habitats, tap resources, and defend themselves. Their long co-evolution with other species has led to mimetic, commensal, parasitic, and mutualistic relationships. Ant societies have division of labour, communication between individuals, and an ability to solve complex problems. These parallels with human societies have long been an inspiration and subject of study. Many human cultures make use of ants in cuisine, medication, and rites. Some species are valued in their role as biological pest control agents. Their ability to exploit resources may bring ants into conflict with humans, however, as they can damage crops and invade buildings. Some species, such as the red imported fire ant (Solenopsis invicta), are regarded as invasive species, establishing themselves in areas where they have been introduced accidentally. Etymology The word ant and the chiefly dialectal form emmet come from Middle English words that in turn derive from Old English, and these are all related to Low Saxon, Old Saxon, German, and Old High German cognates. All of these words descend from a West Germanic root whose original meaning was "the biter" (from Proto-Germanic elements meaning "off, away" and "cut"). The family name Formicidae is derived from the Latin word for "ant", from which the corresponding words in other Romance languages, such as Portuguese, Italian, Spanish, Romanian, and French, are derived. It has been hypothesised that a Proto-Indo-European word *morwi- was used, cf. Sanskrit vamrah, Greek μύρμηξ mýrmēx, Old Church Slavonic mraviji, Old Irish moirb, Old Norse maurr, Dutch mier, Swedish myra, Danish myre, Middle Dutch miere, Crimean Gothic miera. Taxonomy and evolution The family Formicidae belongs to the order Hymenoptera, which also includes sawflies, bees, and wasps. Ants evolved from a lineage within the stinging wasps, and a 2013 study suggests that they are a sister group of the Apoidea. 
In 1966, E. O. Wilson and his colleagues identified the fossil remains of an ant (Sphecomyrma) that lived in the Cretaceous period. The specimen, trapped in amber dating back to around 92 million years ago, has features found in some wasps, but not found in modern ants. Sphecomyrma was possibly a ground forager, while Haidomyrmex and Haidomyrmodes, related genera in subfamily Sphecomyrminae, are reconstructed as active arboreal predators. Older ants in the genus Sphecomyrmodes have been found in 99-million-year-old amber from Myanmar. A 2006 study suggested that ants arose tens of millions of years earlier than previously thought, up to 168 million years ago. After the rise of flowering plants about 100 million years ago they diversified and assumed ecological dominance around 60 million years ago. Some groups, such as the Leptanillinae and Martialinae, are suggested to have diversified from early primitive ants that were likely to have been predators underneath the surface of the soil. During the Cretaceous period, a few species of primitive ants ranged widely on the Laurasian supercontinent (the Northern Hemisphere). Their representation in the fossil record is poor in comparison with that of other insects, making up only about 1% of fossil insect evidence from the era. Ants became dominant after adaptive radiation at the beginning of the Paleogene period. By the Oligocene and Miocene, ants had come to represent 20–40% of all insects found in major fossil deposits. Of the genera that lived in the Eocene epoch, around one in 10 survive to the present. Genera surviving today comprise 56% of the genera in Baltic amber fossils (early Oligocene), and 92% of the genera in Dominican amber fossils (apparently early Miocene). Termites live in colonies and are sometimes called "white ants", but termites are not ants. They are the sub-order Isoptera, and together with cockroaches they form the order Blattodea. Blattodeans are related to mantids, crickets, and other winged insects that do not undergo full metamorphosis. Like ants, termites are eusocial, with sterile workers, but they differ greatly in the genetics of reproduction. The similarity of their social structure to that of ants is attributed to convergent evolution. Velvet ants look like large ants, but are wingless female wasps. Distribution and diversity Ants have a cosmopolitan distribution. They are found on all continents except Antarctica, and only a few large islands, such as Greenland, Iceland, parts of Polynesia and the Hawaiian Islands, lack native ant species. Ants occupy a wide range of ecological niches and exploit many different food resources as direct or indirect herbivores, predators and scavengers. Most ant species are omnivorous generalists, but a few are specialist feeders. Their ecological dominance is demonstrated by their biomass: ants are estimated to contribute 15–20% of terrestrial animal biomass on average, and nearly 25% in the tropics, exceeding that of the vertebrates. Ants vary considerably in size; the largest species known is the fossil Titanomyrma giganteum, whose queens were notable for their great length and wingspan. Ants vary in colour; most ants are red or black, but a few species are green and some tropical species have a metallic lustre. More than 13,800 species are currently known (with upper estimates of the potential existence of about 22,000; see the article List of ant genera), with the greatest diversity in the tropics. 
Taxonomic studies continue to resolve the classification and systematics of ants. Online databases of ant species, including AntWeb and the Hymenoptera Name Server, help to keep track of the known and newly described species. The relative ease with which ants may be sampled and studied in ecosystems has made them useful as indicator species in biodiversity studies. Morphology Ants are distinct in their morphology from other insects in having geniculate (elbowed) antennae, metapleural glands, and a strong constriction of their second abdominal segment into a node-like petiole. The head, mesosoma, and metasoma are the three distinct body segments (formally tagmata). The petiole forms a narrow waist between their mesosoma (thorax plus the first abdominal segment, which is fused to it) and gaster (abdomen less the abdominal segments in the petiole). The petiole may be formed by one or two nodes (the second alone, or the second and third abdominal segments). Like other insects, ants have an exoskeleton, an external covering that provides a protective casing around the body and a point of attachment for muscles, in contrast to the internal skeletons of humans and other vertebrates. Insects do not have lungs; oxygen and other gases, such as carbon dioxide, pass through their exoskeleton via tiny valves called spiracles. Insects also lack closed blood vessels; instead, they have a long, thin, perforated tube along the top of the body (called the "dorsal aorta") that functions like a heart, and pumps haemolymph toward the head, thus driving the circulation of the internal fluids. The nervous system consists of a ventral nerve cord that runs the length of the body, with several ganglia and branches along the way reaching into the extremities of the appendages. Head An ant's head contains many sensory organs. Like most insects, ants have compound eyes made from numerous tiny lenses attached together. Ant eyes are good for acute movement detection, but do not offer a high resolution image. They also have three small ocelli (simple eyes) on the top of the head that detect light levels and polarization. Compared to vertebrates, ants tend to have blurrier eyesight, particularly in smaller species, and a few subterranean taxa are completely blind. However, some ants, such as Australia's bulldog ant, have excellent vision and are capable of discriminating the distance and size of objects moving nearly a meter away. Two antennae ("feelers") are attached to the head; these organs detect chemicals, air currents, and vibrations; they also are used to transmit and receive signals through touch. The head has two strong jaws, the mandibles, used to carry food, manipulate objects, construct nests, and for defence. In some species, a small pocket (infrabuccal chamber) inside the mouth stores food, so it may be passed to other ants or their larvae. Mesosoma Both the legs and wings of the ant are attached to the mesosoma ("thorax"). The legs terminate in a hooked claw which allows them to hook on and climb surfaces. Only reproductive ants (queens and males) have wings. Queens shed their wings after the nuptial flight, leaving visible stubs, a distinguishing feature of queens. In a few species, wingless queens (ergatoids) and males occur. Metasoma The metasoma (the "abdomen") of the ant houses important internal organs, including those of the reproductive, respiratory (tracheae), and excretory systems. 
Workers of many species have their egg-laying structures modified into stings that are used for subduing prey and defending their nests. Polymorphism In the colonies of a few ant species, there are physical castes—workers in distinct size-classes, called minor, median, and major ergates. Often, the larger ants have disproportionately larger heads, and correspondingly stronger mandibles. These are known as macrergates while smaller workers are known as micrergates. Although formally known as dinergates, such individuals are sometimes called "soldier" ants because their stronger mandibles make them more effective in fighting, although they still are workers and their "duties" typically do not vary greatly from the minor or median workers. In a few species, the median workers are absent, creating a sharp divide between the minors and majors. Weaver ants, for example, have a distinct bimodal size distribution. Some other species show continuous variation in the size of workers. The smallest and largest workers in Carebara diversa show nearly a 500-fold difference in their dry weights. Workers cannot mate; however, because of the haplodiploid sex-determination system in ants, workers of a number of species can lay unfertilised eggs that become fully fertile, haploid males. The role of workers may change with their age and in some species, such as honeypot ants, young workers are fed until their gasters are distended, and act as living food storage vessels. These food storage workers are called repletes. For instance, these replete workers develop in the North American honeypot ant Myrmecocystus mexicanus. Usually the largest workers in the colony develop into repletes; and, if repletes are removed from the colony, other workers become repletes, demonstrating the flexibility of this particular polymorphism. This polymorphism in morphology and behaviour of workers initially was thought to be determined by environmental factors such as nutrition and hormones that led to different developmental paths; however, genetic differences between worker castes have been noted in Acromyrmex sp. These polymorphisms are caused by relatively small genetic changes; differences in a single gene of Solenopsis invicta can decide whether the colony will have single or multiple queens. The Australian jack jumper ant (Myrmecia pilosula) has only a single pair of chromosomes (with the males having just one chromosome as they are haploid), the lowest number known for any animal, making it an interesting subject for studies in the genetics and developmental biology of social insects. Genome size Genome size is a fundamental characteristic of an organism. Ants have been found to have tiny genomes, with the evolution of genome size suggested to occur through loss and accumulation of non-coding regions, mainly transposable elements, and occasionally by whole genome duplication. This may be related to colonisation processes, but further studies are needed to verify this. Life cycle The life of an ant starts from an egg; if the egg is fertilised, the progeny will be female diploid, if not, it will be male haploid. Ants develop by complete metamorphosis with the larva stages passing through a pupal stage before emerging as an adult. The larva is largely immobile and is fed and cared for by workers. Food is given to the larvae by trophallaxis, a process in which an ant regurgitates liquid food held in its crop. This is also how adults share food, stored in the "social stomach". 
Larvae, especially in the later stages, may also be provided solid food, such as trophic eggs, pieces of prey, and seeds brought by workers. The larvae grow through a series of four or five moults and enter the pupal stage. The pupa has the appendages free and not fused to the body as in a butterfly pupa. The differentiation into queens and workers (which are both female), and different castes of workers, is influenced in some species by the nutrition the larvae obtain. Genetic influences and the control of gene expression by the developmental environment are complex and the determination of caste continues to be a subject of research. Winged male ants, called drones (termed "aner" in old literature), emerge from pupae along with the usually winged breeding females. Some species, such as army ants, have wingless queens. Larvae and pupae need to be kept at fairly constant temperatures to ensure proper development, and so often are moved around among the various brood chambers within the colony. A new ergate spends the first few days of its adult life caring for the queen and young. She then graduates to digging and other nest work, and later to defending the nest and foraging. These changes are sometimes fairly sudden, and define what are called temporal castes. An explanation for the sequence is suggested by the high casualties involved in foraging, making it an acceptable risk only for ants who are older and are likely to die soon of natural causes. Ant colonies can be long-lived. The queens can live for up to 30 years, and workers live from 1 to 3 years. Males, however, are more transitory, being quite short-lived and surviving for only a few weeks. Ant queens are estimated to live 100 times as long as solitary insects of a similar size. Ants are active all year long in the tropics, but, in cooler regions, they survive the winter in a state of dormancy known as hibernation. The forms of inactivity are varied and some temperate species have larvae going into the inactive state (diapause), while in others, the adults alone pass the winter in a state of reduced activity. Reproduction A wide range of reproductive strategies have been noted in ant species. Females of many species are known to be capable of reproducing asexually through thelytokous parthenogenesis. Secretions from the male accessory glands in some species can plug the female genital opening and prevent females from re-mating. Most ant species have a system in which only the queen and breeding females have the ability to mate. Contrary to popular belief, some ant nests have multiple queens, while others may exist without queens. Workers with the ability to reproduce are called "gamergates" and colonies that lack queens are then called gamergate colonies; colonies with queens are said to be queen-right. Drones can also mate with existing queens by entering a foreign colony, such as in army ants. When the drone is initially attacked by the workers, it releases a mating pheromone. If recognized as a mate, it will be carried to the queen to mate. Males may also patrol the nest and fight others by grabbing them with their mandibles, piercing their exoskeleton and then marking them with a pheromone. The marked male is interpreted as an invader by worker ants and is killed. Most ants are univoltine, producing a new generation each year. During the species-specific breeding period, winged females and winged males, known to entomologists as alates, leave the colony in what is called a nuptial flight. 
The nuptial flight usually takes place in the late spring or early summer when the weather is hot and humid. Heat makes flying easier and freshly fallen rain makes the ground softer for mated queens to dig nests. Males typically take flight before the females. Males then use visual cues to find a common mating ground, for example, a landmark such as a pine tree to which other males in the area converge. Males secrete a mating pheromone that females follow. Males will mount females in the air, but the actual mating process usually takes place on the ground. Females of some species mate with just one male but in others they may mate with as many as ten or more different males, storing the sperm in their spermathecae. In Cardiocondyla elegans, workers may transport newly emerged queens to other conspecific nests where wingless males from unrelated colonies can mate with them, a behavioural adaptation that may reduce the chances of inbreeding. Mated females then seek a suitable place to begin a colony. There, they break off their wings using their tibial spurs and begin to lay and care for eggs. The females can selectively fertilise future eggs with the sperm stored to produce diploid workers or lay unfertilised haploid eggs to produce drones. The first workers to hatch are known as nanitics, and are weaker and smaller than later workers, but they begin to serve the colony immediately. They enlarge the nest, forage for food, and care for the other eggs. Species that have multiple queens may have a queen leaving the nest along with some workers to found a colony at a new site, a process akin to swarming in honeybees. Behaviour and ecology Communication Ants communicate with each other using pheromones, sounds, and touch. The use of pheromones as chemical signals is more developed in ants, such as the red harvester ant, than in other hymenopteran groups. Like other insects, ants perceive smells with their long, thin, and mobile antennae. The paired antennae provide information about the direction and intensity of scents. Since most ants live on the ground, they use the soil surface to leave pheromone trails that may be followed by other ants. In species that forage in groups, a forager that finds food marks a trail on the way back to the colony; this trail is followed by other ants, which then reinforce the trail when they head back to the colony with food. When the food source is exhausted, no new trails are marked by returning ants and the scent slowly dissipates. This behaviour helps ants deal with changes in their environment. For instance, when an established path to a food source is blocked by an obstacle, the foragers leave the path to explore new routes. If an ant is successful, it leaves a new trail marking the shortest route on its return. Successful trails are followed by more ants, reinforcing better routes and gradually identifying the best path. Ants use pheromones for more than just making trails. A crushed ant emits an alarm pheromone that sends nearby ants into an attack frenzy and attracts more ants from farther away. Several ant species even use "propaganda pheromones" to confuse enemy ants and make them fight among themselves. Pheromones are produced by a wide range of structures including Dufour's glands, poison glands and glands on the hindgut, pygidium, rectum, sternum, and hind tibia. Pheromones also are exchanged, mixed with food, and passed by trophallaxis, transferring information within the colony. 
This allows other ants to detect what task group (e.g., foraging or nest maintenance) other colony members belong to. In ant species with queen castes, when the dominant queen stops producing a specific pheromone, workers begin to raise new queens in the colony. Some ants produce sounds by stridulation, using the gaster segments and their mandibles. Sounds may be used to communicate with colony members or with other species. Defence Ants attack and defend themselves by biting and, in many species, by stinging, often injecting or spraying chemicals, such as formic acid in the case of formicine ants, alkaloids and piperidines in fire ants, and a variety of protein components in other ants. Bullet ants (Paraponera), found in Central and South America, are considered to have the most painful sting of any insect, although it is usually not fatal to humans. This sting is given the highest rating on the Schmidt sting pain index. The sting of jack jumper ants can be fatal, and an antivenom has been developed for it. Fire ants, Solenopsis spp., are unique in having a venom sac containing piperidine alkaloids. Their stings are painful and can be dangerous to hypersensitive people. Trap-jaw ants of the genus Odontomachus are equipped with mandibles called trap-jaws, which snap shut faster than any other predatory appendages within the animal kingdom. One study of Odontomachus bauri recorded extremely high peak speeds, with the jaws closing within 130 microseconds on average. The ants were also observed to use their jaws as a catapult to eject intruders or fling themselves backward to escape a threat. Before striking, the ant opens its mandibles extremely widely and locks them in this position by an internal mechanism. Energy is stored in a thick band of muscle and explosively released when triggered by the stimulation of sensory organs resembling hairs on the inside of the mandibles. The mandibles also permit slow and fine movements for other tasks. Trap-jaws also are seen in other ponerines such as Anochetus, as well as some genera in the tribe Attini, such as Daceton, Orectognathus, and Strumigenys, which are viewed as examples of convergent evolution. A Malaysian species of ant in the Camponotus cylindricus group has enlarged mandibular glands that extend into its gaster. If combat takes a turn for the worse, a worker may perform a final act of suicidal altruism by rupturing the membrane of its gaster, causing the content of its mandibular glands to burst from the anterior region of its head, spraying a poisonous, corrosive secretion containing acetophenones and other chemicals that immobilise small insect attackers. The worker subsequently dies. Suicidal defences by workers are also noted in a Brazilian ant, Forelius pusillus, where a small group of ants leaves the security of the nest after sealing the entrance from the outside each evening. In addition to defence against predators, ants need to protect their colonies from pathogens. Some worker ants maintain the hygiene of the colony and their activities include undertaking or necrophory, the disposal of dead nest-mates. Oleic acid has been identified as the compound released from dead ants that triggers necrophoric behaviour in Atta mexicana, while workers of Linepithema humile react to the absence of characteristic chemicals (dolichodial and iridomyrmecin) present on the cuticle of their living nestmates to trigger similar behaviour. Nests may be protected from physical threats such as flooding and overheating by elaborate nest architecture. 
Workers of Cataulacus muticus, an arboreal species that lives in plant hollows, respond to flooding by drinking water inside the nest, and excreting it outside. Camponotus anderseni, which nests in the cavities of wood in mangrove habitats, deals with submergence under water by switching to anaerobic respiration. Learning Many animals can learn behaviours by imitation, but ants may be the only group apart from mammals where interactive teaching has been observed. A knowledgeable forager of Temnothorax albipennis can lead a naïve nest-mate to newly discovered food by the process of tandem running. The follower obtains knowledge through its leading tutor. The leader is acutely sensitive to the progress of the follower and slows down when the follower lags and speeds up when the follower gets too close. Controlled experiments with colonies of Cerapachys biroi suggest that an individual may choose nest roles based on her previous experience. An entire generation of identical workers was divided into two groups whose outcome in food foraging was controlled. One group was continually rewarded with prey, while it was made certain that the other failed. As a result, members of the successful group intensified their foraging attempts while the unsuccessful group ventured out fewer and fewer times. A month later, the successful foragers continued in their role while the others had moved to specialise in brood care. Nest construction Complex nests are built by many ant species, but other species are nomadic and do not build permanent structures. Ants may form subterranean nests or build them on trees. These nests may be found in the ground, under stones or logs, inside logs, hollow stems, or even acorns. The materials used for construction include soil and plant matter, and ants carefully select their nest sites; Temnothorax albipennis will avoid sites with dead ants, as these may indicate the presence of pests or disease. They are quick to abandon established nests at the first sign of threats. The army ants of South America, such as the Eciton burchellii species, and the driver ants of Africa do not build permanent nests, but instead, alternate between nomadism and stages where the workers form a temporary nest (bivouac) from their own bodies, by holding each other together. Weaver ant (Oecophylla spp.) workers build nests in trees by attaching leaves together, first pulling them together with bridges of workers and then inducing their larvae to produce silk as they are moved along the leaf edges. Similar forms of nest construction are seen in some species of Polyrhachis. Formica polyctena, among other ant species, constructs nests that maintain a relatively constant interior temperature that aids in the development of larvae. The ants maintain the nest temperature by choosing the location, nest materials, controlling ventilation and maintaining the heat from solar radiation, worker activity and metabolism, and in some moist nests, microbial activity in the nest materials. Some ant species, such as those that use natural cavities, can be opportunistic and make use of the controlled micro-climate provided inside human dwellings and other artificial structures to house their colonies and nest structures. Cultivation of food Most ants are generalist predators, scavengers, and indirect herbivores, but a few have evolved specialised ways of obtaining nutrition. 
It is believed that many ant species that engage in indirect herbivory rely on specialized symbiosis with their gut microbes to upgrade the nutritional value of the food they collect and allow them to survive in nitrogen poor regions, such as rainforest canopies. Leafcutter ants (Atta and Acromyrmex) feed exclusively on a fungus that grows only within their colonies. They continually collect leaves which are taken to the colony, cut into tiny pieces and placed in fungal gardens. Ergates specialise in related tasks according to their sizes. The largest ants cut stalks, smaller workers chew the leaves and the smallest tend the fungus. Leafcutter ants are sensitive enough to recognise the reaction of the fungus to different plant material, apparently detecting chemical signals from the fungus. If a particular type of leaf is found to be toxic to the fungus, the colony will no longer collect it. The ants feed on structures produced by the fungi called gongylidia. Symbiotic bacteria on the exterior surface of the ants produce antibiotics that kill bacteria introduced into the nest that may harm the fungi. Navigation Foraging ants travel distances of up to from their nest and scent trails allow them to find their way back even in the dark. In hot and arid regions, day-foraging ants face death by desiccation, so the ability to find the shortest route back to the nest reduces that risk. Diurnal desert ants of the genus Cataglyphis such as the Sahara desert ant navigate by keeping track of direction as well as distance travelled. Distances travelled are measured using an internal pedometer that keeps count of the steps taken and also by evaluating the movement of objects in their visual field (optical flow). Directions are measured using the position of the sun. They integrate this information to find the shortest route back to their nest. Like all ants, they can also make use of visual landmarks when available as well as olfactory and tactile cues to navigate. Some species of ant are able to use the Earth's magnetic field for navigation. The compound eyes of ants have specialised cells that detect polarised light from the Sun, which is used to determine direction. These polarization detectors are sensitive in the ultraviolet region of the light spectrum. In some army ant species, a group of foragers who become separated from the main column may sometimes turn back on themselves and form a circular ant mill. The workers may then run around continuously until they die of exhaustion. Locomotion The female worker ants do not have wings and reproductive females lose their wings after their mating flights in order to begin their colonies. Therefore, unlike their wasp ancestors, most ants travel by walking. Some species are capable of leaping. For example, Jerdon's jumping ant (Harpegnathos saltator) is able to jump by synchronising the action of its mid and hind pairs of legs. There are several species of gliding ant including Cephalotes atratus; this may be a common trait among arboreal ants with small colonies. Ants with this ability are able to control their horizontal movement so as to catch tree trunks when they fall from atop the forest canopy. Other species of ants can form chains to bridge gaps over water, underground, or through spaces in vegetation. Some species also form floating rafts that help them survive floods. These rafts may also have a role in allowing ants to colonise islands. 
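Returning briefly to the navigation strategy described above: combining a step-count odometer with a sun compass amounts to vector-based path integration (dead reckoning), in which the ant keeps a running sum of its displacement and simply reverses it to head straight home. A minimal sketch in Python; the data format and function name are illustrative assumptions rather than a model taken from the research literature:

```python
import math

def home_vector(legs):
    """Integrate an outbound path into a single homing vector.

    `legs` is a list of (heading_degrees, distance) pairs: heading from a
    compass cue (for the ants, the sun), distance from a step counter or
    optic flow.  Both the data format and the function name are
    illustrative assumptions.
    """
    x = y = 0.0
    for heading, d in legs:
        x += d * math.cos(math.radians(heading))
        y += d * math.sin(math.radians(heading))
    # The shortest route home is simply the reverse of the accumulated vector.
    return math.hypot(x, y), math.degrees(math.atan2(-y, -x)) % 360

# A meandering three-leg outbound trip reduces to one direct distance and bearing home.
print(home_vector([(0, 10), (90, 5), (45, 7)]))
```

Summing the outbound legs into one vector is what lets the ant return directly rather than retracing its meandering outward path.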
Polyrhachis sokolova, a species of ant found in Australian mangrove swamps, can swim and live in underwater nests. Since they lack gills, they go to trapped pockets of air in the submerged nests to breathe. Cooperation and competition Not all ants have the same kind of societies. The Australian bulldog ants are among the biggest and most basal of ants. Like virtually all ants, they are eusocial, but their social behaviour is poorly developed compared to other species. Each individual hunts alone, using her large eyes instead of chemical senses to find prey. Some species (such as Tetramorium caespitum) attack and take over neighbouring ant colonies. Others are less expansionist, but just as aggressive; they invade colonies to steal eggs or larvae, which they either eat or raise as workers or slaves. Extreme specialists among these slave-raiding ants, such as the Amazon ants, are incapable of feeding themselves and need captured workers to survive. Captured workers of enslaved Temnothorax species have evolved a counter-strategy, destroying just the female pupae of the slave-making Temnothorax americanus, but sparing the males (who do not take part in slave-raiding as adults). Ants identify kin and nestmates through their scent, which comes from hydrocarbon-laced secretions that coat their exoskeletons. If an ant is separated from its original colony, it will eventually lose the colony scent. Any ant that enters a colony without a matching scent will be attacked. Two separate colonies will attack each other even if they are of the same species, because the genes responsible for pheromone production differ between them. The Argentine ant, however, does not have this characteristic, due to lack of genetic diversity, and has become a global pest because of it. Parasitic ant species enter the colonies of host ants and establish themselves as social parasites; species such as Strumigenys xenos are entirely parasitic and do not have workers, but instead rely on the food gathered by their Strumigenys perplexa hosts. This form of parasitism is seen across many ant genera, but the parasitic ant is usually a species that is closely related to its host. A variety of methods are employed to enter the nest of the host ant. A parasitic queen may enter the host nest before the first brood has hatched, establishing herself prior to development of a colony scent. Other species use pheromones to confuse the host ants or to trick them into carrying the parasitic queen into the nest. Some simply fight their way into the nest. Conflict between the sexes is seen in some species of ants, with the reproducers apparently competing to produce offspring that are as closely related to them as possible. The most extreme form involves the production of clonal offspring. An extreme of sexual conflict is seen in Wasmannia auropunctata, where the queens produce diploid daughters by thelytokous parthenogenesis and males produce clones by a process whereby a diploid egg loses its maternal contribution to produce haploid males who are clones of the father. Disposing of their dead Ants either separate the bodies of their dead from the rest of the colony, or they bury them. Workers do this job in species that have them, or the queen might do it in new colonies. This is done for health reasons. Relationships with other organisms Ants form symbiotic associations with a range of species, including other ant species, other insects, plants, and fungi. 
They also are preyed on by many animals and even certain fungi. Some arthropod species spend part of their lives within ant nests, either preying on ants, their larvae, and eggs, consuming the food stores of the ants, or avoiding predators. These inquilines may bear a close resemblance to ants. The nature of this ant mimicry (myrmecomorphy) varies, with some cases involving Batesian mimicry, where the mimic reduces the risk of predation. Others show Wasmannian mimicry, a form of mimicry seen only in inquilines. Aphids and other hemipteran insects secrete a sweet liquid called honeydew, when they feed on plant sap. The sugars in honeydew are a high-energy food source, which many ant species collect. In some cases, the aphids secrete the honeydew in response to ants tapping them with their antennae. The ants in turn keep predators away from the aphids and will move them from one feeding location to another. When migrating to a new area, many colonies will take the aphids with them, to ensure a continued supply of honeydew. Ants also tend mealybugs to harvest their honeydew. Mealybugs may become a serious pest of pineapples if ants are present to protect mealybugs from their natural enemies. Myrmecophilous (ant-loving) caterpillars of the butterfly family Lycaenidae (e.g., blues, coppers, or hairstreaks) are herded by the ants, led to feeding areas in the daytime, and brought inside the ants' nest at night. The caterpillars have a gland which secretes honeydew when the ants massage them. Some caterpillars produce vibrations and sounds that are perceived by the ants. A similar adaptation can be seen in Grizzled skipper butterflies that emit vibrations by expanding their wings in order to communicate with ants, which are natural predators of these butterflies. Other caterpillars have evolved from ant-loving to ant-eating: these myrmecophagous caterpillars secrete a pheromone that makes the ants act as if the caterpillar is one of their own larvae. The caterpillar is then taken into the ant nest where it feeds on the ant larvae. A number of specialized bacteria have been found as endosymbionts in ant guts. Some of the dominant bacteria belong to the order Hyphomicrobiales whose members are known for being nitrogen-fixing symbionts in legumes but the species found in ant lack the ability to fix nitrogen. Fungus-growing ants that make up the tribe Attini, including leafcutter ants, cultivate certain species of fungus in the genera Leucoagaricus or Leucocoprinus of the family Agaricaceae. In this ant-fungus mutualism, both species depend on each other for survival. The ant Allomerus decemarticulatus has evolved a three-way association with the host plant, Hirtella physophora (Chrysobalanaceae), and a sticky fungus which is used to trap their insect prey. Lemon ants make devil's gardens by killing surrounding plants with their stings and leaving a pure patch of lemon ant trees, (Duroia hirsuta). This modification of the forest provides the ants with more nesting sites inside the stems of the Duroia trees. Although some ants obtain nectar from flowers, pollination by ants is somewhat rare, one example being of the pollination of the orchid Leporella fimbriata which induces male Myrmecia urens to pseudocopulate with the flowers, transferring pollen in the process. One theory that has been proposed for the rarity of pollination is that the secretions of the metapleural gland inactivate and reduce the viability of pollen. 
Some plants have special nectar exuding structures, extrafloral nectaries, that provide food for ants, which in turn protect the plant from more damaging herbivorous insects. Species such as the bullhorn acacia (Acacia cornigera) in Central America have hollow thorns that house colonies of stinging ants (Pseudomyrmex ferruginea) who defend the tree against insects, browsing mammals, and epiphytic vines. Isotopic labelling studies suggest that plants also obtain nitrogen from the ants. In return, the ants obtain food from protein- and lipid-rich Beltian bodies. In Fiji Philidris nagasau (Dolichoderinae) are known to selectively grow species of epiphytic Squamellaria (Rubiaceae) which produce large domatia inside which the ant colonies nest. The ants plant the seeds and the domatia of young seedling are immediately occupied and the ant faeces in them contribute to rapid growth. Similar dispersal associations are found with other dolichoderines in the region as well. Another example of this type of ectosymbiosis comes from the Macaranga tree, which has stems adapted to house colonies of Crematogaster ants. Many plant species have seeds that are adapted for dispersal by ants. Seed dispersal by ants or myrmecochory is widespread, and new estimates suggest that nearly 9% of all plant species may have such ant associations. Often, seed-dispersing ants perform directed dispersal, depositing the seeds in locations that increase the likelihood of seed survival to reproduction. Some plants in arid, fire-prone systems are particularly dependent on ants for their survival and dispersal as the seeds are transported to safety below the ground. Many ant-dispersed seeds have special external structures, elaiosomes, that are sought after by ants as food. Ants can substantially alter rate of decomposition and nutrient cycling in their nest. By myrmecochory and modification of soil conditions they substantially alter vegetation and nutrient cycling in surrounding ecosystem. A convergence, possibly a form of mimicry, is seen in the eggs of stick insects. They have an edible elaiosome-like structure and are taken into the ant nest where the young hatch. Most ants are predatory and some prey on and obtain food from other social insects including other ants. Some species specialise in preying on termites (Megaponera and Termitopone) while a few Cerapachyinae prey on other ants. Some termites, including Nasutitermes corniger, form associations with certain ant species to keep away predatory ant species. The tropical wasp Mischocyttarus drewseni coats the pedicel of its nest with an ant-repellent chemical. It is suggested that many tropical wasps may build their nests in trees and cover them to protect themselves from ants. Other wasps, such as A. multipicta, defend against ants by blasting them off the nest with bursts of wing buzzing. Stingless bees (Trigona and Melipona) use chemical defences against ants. Flies in the Old World genus Bengalia (Calliphoridae) prey on ants and are kleptoparasites, snatching prey or brood from the mandibles of adult ants. Wingless and legless females of the Malaysian phorid fly (Vestigipoda myrmolarvoidea) live in the nests of ants of the genus Aenictus and are cared for by the ants. Fungi in the genera Cordyceps and Ophiocordyceps infect ants. Ants react to their infection by climbing up plants and sinking their mandibles into plant tissue. The fungus kills the ants, grows on their remains, and produces a fruiting body. 
It appears that the fungus alters the behaviour of the ant to help disperse its spores in a microhabitat that best suits the fungus. Strepsipteran parasites also manipulate their ant host to climb grass stems, to help the parasite find mates. A nematode (Myrmeconema neotropicum) that infects canopy ants (Cephalotes atratus) causes the black-coloured gasters of workers to turn red. The parasite also alters the behaviour of the ant, causing them to carry their gasters high. The conspicuous red gasters are mistaken by birds for ripe fruits, such as Hyeronima alchorneoides, and eaten. The droppings of the bird are collected by other ants and fed to their young, leading to further spread of the nematode. A study of Temnothorax nylanderi colonies in Germany found that workers parasitized by the tapeworm Anomotaenia brevis (ants are intermediate hosts, the definitive hosts are woodpeckers) lived much longer than unparasitized workers and had a reduced mortality rate, comparable to that of the queens of the same species, which live for as long as two decades. South American poison dart frogs in the genus Dendrobates feed mainly on ants, and the toxins in their skin may come from the ants. Army ants forage in a wide roving column, attacking any animals in that path that are unable to escape. In Central and South America, Eciton burchellii is the swarming ant most commonly attended by "ant-following" birds such as antbirds and woodcreepers. This behaviour was once considered mutualistic, but later studies found the birds to be parasitic. Direct kleptoparasitism (birds stealing food from the ants' grasp) is rare and has been noted in Inca doves which pick seeds at nest entrances as they are being transported by species of Pogonomyrmex. Birds that follow ants eat many prey insects and thus decrease the foraging success of ants. Birds indulge in a peculiar behaviour called anting that, as yet, is not fully understood. Here birds rest on ant nests, or pick and drop ants onto their wings and feathers; this may be a means to remove ectoparasites from the birds. Anteaters, aardvarks, pangolins, echidnas and numbats have special adaptations for living on a diet of ants. These adaptations include long, sticky tongues to capture ants and strong claws to break into ant nests. Brown bears (Ursus arctos) have been found to feed on ants. About 12%, 16%, and 4% of their faecal volume in spring, summer and autumn, respectively, is composed of ants. Relationship with humans Ants perform many ecological roles that are beneficial to humans, including the suppression of pest populations and aeration of the soil. The use of weaver ants in citrus cultivation in southern China is considered one of the oldest known applications of biological control. On the other hand, ants may become nuisances when they invade buildings or cause economic losses. In some parts of the world (mainly Africa and South America), large ants, especially army ants, are used as surgical sutures. The wound is pressed together and ants are applied along it. The ant seizes the edges of the wound in its mandibles and locks in place. The body is then cut off and the head and mandibles remain in place to close the wound. The large heads of the dinergates (soldiers) of the leafcutting ant Atta cephalotes are also used by native surgeons in closing wounds. Some ants have toxic venom and are of medical importance. The species include Paraponera clavata (tocandira) and Dinoponera spp. (false tocandiras) of South America and the Myrmecia ants of Australia. 
In South Africa, ants are used to help harvest the seeds of rooibos (Aspalathus linearis), a plant used to make a herbal tea. The plant disperses its seeds widely, making manual collection difficult. Black ants collect and store these and other seeds in their nest, where humans can gather them en masse. Up to half a pound (200 g) of seeds may be collected from one ant-heap. Although most ants survive attempts by humans to eradicate them, a few are highly endangered. These tend to be island species that have evolved specialized traits and risk being displaced by introduced ant species. Examples include the critically endangered Sri Lankan relict ant (Aneuretus simoni) and Adetomyrma venatrix of Madagascar. E. O. Wilson has estimated that the total number of individual ants alive in the world at any one time is between one and ten quadrillion (short scale), i.e., between 10^15 and 10^16. According to this estimate, the total biomass of all the ants in the world is approximately equal to the total biomass of the entire human race, and there are approximately 1 million ants for every human on Earth. As food Ants and their larvae are eaten in different parts of the world. The eggs of two species of ants are used in Mexican escamoles. They are considered a form of insect caviar and can sell for as much as US$40 per pound ($90/kg) because they are seasonal and hard to find. In the Colombian department of Santander, hormigas culonas (roughly interpreted as "large-bottomed ants") Atta laevigata are toasted alive and eaten. In areas of India, and throughout Burma and Thailand, a paste of the green weaver ant (Oecophylla smaragdina) is served as a condiment with curry. Weaver ant eggs and larvae, as well as the ants, may be used in a Thai salad, yam (), in a dish called yam khai mot daeng () or red ant egg salad, a dish that comes from the Issan or north-eastern region of Thailand. Saville-Kent, in The Naturalist in Australia, wrote "Beauty, in the case of the green ant, is more than skin-deep. Their attractive, almost sweetmeat-like translucency possibly invited the first essays at their consumption by the human species". Mashed up in water, after the manner of lemon squash, "these ants form a pleasant acid drink which is held in high favor by the natives of North Queensland, and is even appreciated by many European palates". In his First Summer in the Sierra, John Muir notes that the Digger Indians of California ate the tickling, acid gasters of the large jet-black carpenter ants. The Mexican Indians eat the repletes, or living honey-pots, of the honey ant (Myrmecocystus). As pests Some ant species are considered pests, primarily those that occur in human habitations, where their presence is often problematic. For example, the presence of ants would be undesirable in sterile places such as hospitals or kitchens. Some species or genera commonly categorized as pests include the Argentine ant, immigrant pavement ant, yellow crazy ant, banded sugar ant, pharaoh ant, red wood ant, black carpenter ant, odorous house ant, red imported fire ant, and European fire ant. Some ants will raid stored food, some will seek water sources, others may damage indoor structures, and some may damage agricultural crops directly or by aiding sucking pests. Some will sting or bite. The adaptive nature of ant colonies makes it nearly impossible to eliminate entire colonies, and most pest management practices aim to control local populations and tend to be temporary solutions. 
Ant populations are managed by a combination of approaches that make use of chemical, biological, and physical methods. Chemical methods include the use of insecticidal bait which is gathered by ants as food and brought back to the nest, where the poison is inadvertently spread to other colony members through trophallaxis. Management is based on the species, and techniques may vary according to the location and circumstance. In science and technology Observed by humans since the dawn of history, the behaviour of ants has been documented and has been the subject of early writings and fables passed from one century to another. Those using scientific methods, myrmecologists, study ants in the laboratory and in their natural conditions. Their complex and variable social structures have made ants ideal model organisms. Ultraviolet vision was first discovered in ants by Sir John Lubbock in 1881. Studies on ants have tested hypotheses in ecology and sociobiology, and have been particularly important in examining the predictions of theories of kin selection and evolutionarily stable strategies. Ant colonies may be studied by rearing or temporarily maintaining them in formicaria, specially constructed glass-framed enclosures. Individuals may be tracked for study by marking them with dots of colours. The successful techniques used by ant colonies have been studied in computer science and robotics to produce distributed and fault-tolerant systems for solving problems, for example ant colony optimization and ant robotics. This area of biomimetics has led to studies of ant locomotion, search engines that make use of "foraging trails", fault-tolerant storage, and networking algorithms. As pets From the late 1950s through the late 1970s, ant farms were popular educational children's toys in the United States. Some later commercial versions use transparent gel instead of soil, allowing greater visibility at the cost of stressing the ants with unnatural light. In culture Anthropomorphised ants have often been used in fables and children's stories to represent industriousness and cooperative effort. They also are mentioned in religious texts. In the Book of Proverbs in the Bible, ants are held up as a good example of hard work and cooperation. Aesop did the same in his fable The Ant and the Grasshopper. In the Quran, Sulayman is said to have heard and understood an ant warning other ants to return home to avoid being accidentally crushed by Sulayman and his marching army. In parts of Africa, ants are considered to be the messengers of the deities. Some Native American mythology, such as the Hopi mythology, considers ants as the very first animals. Ant bites are often said to have curative properties. The sting of some species of Pseudomyrmex is claimed to give fever relief. Ant bites are used in the initiation ceremonies of some Amazon Indian cultures as a test of endurance. Ant society has always fascinated humans and has been written about both humorously and seriously. Mark Twain wrote about ants in his 1880 book A Tramp Abroad. Some modern authors have used the example of the ants to comment on the relationship between society and the individual. Examples are Robert Frost in his poem "Departmental" and T. H. White in his fantasy novel The Once and Future King. The plot in French entomologist and writer Bernard Werber's Les Fourmis science-fiction trilogy is divided between the worlds of ants and humans; ants and their behaviour are described using contemporary scientific knowledge. H.G. 
Wells wrote about intelligent ants destroying human settlements in Brazil and threatening human civilization in his 1905 science-fiction short story, The Empire of the Ants. In more recent times, animated cartoons and 3-D animated films featuring ants have been produced including Antz, A Bug's Life, The Ant Bully, The Ant and the Aardvark, Ferdy the Ant and Atom Ant. Renowned myrmecologist E. O. Wilson wrote a short story, "Trailhead" in 2010 for The New Yorker magazine, which describes the life and death of an ant-queen and the rise and fall of her colony, from an ants' point of view. The French neuroanatomist, psychiatrist and eugenicist Auguste Forel believed that ant societies were models for human society. He published a five volume work from 1921 to 1923 that examined ant biology and society. In the early 1990s, the video game SimAnt, which simulated an ant colony, won the 1992 Codie award for "Best Simulation Program". Ants also are quite popular inspiration for many science-fiction insectoids, such as the Formics of Ender's Game, the Bugs of Starship Troopers, the giant ants in the films Them! and Empire of the Ants, Marvel Comics' super hero Ant-Man, and ants mutated into super-intelligence in Phase IV. In computer strategy games, ant-based species often benefit from increased production rates due to their single-minded focus, such as the Klackons in the Master of Orion series of games or the ChCht in Deadlock II. These characters are often credited with a hive mind, a common misconception about ant colonies. See also Ant venom Glossary of ant terms International Union for the Study of Social Insects Myrmecological News (journal) Task allocation and partitioning of social insects References Cited texts Further reading External links AntWeb from The California Academy of Sciences AntWiki – Bringing Ants to the World Ant Species Fact Sheets from the National Pest Management Association on Argentine, Carpenter, Pharaoh, Odorous, and other ant species Ant Genera of the World – distribution maps The super-nettles. A dermatologist's guide to ants-in-the-plants Symbiosis Extant Albian first appearances Articles containing video clips Insects in culture
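The ant colony optimization mentioned in the science and technology section above works by letting simulated ants lay and follow virtual pheromone trails, so that shorter routes accumulate more pheromone over time. Below is a minimal, illustrative Python sketch of the idea applied to a toy travelling-salesman instance; the city coordinates and all parameter values are invented for demonstration and are not drawn from any particular publication:

```python
import math, random

# Toy travelling-salesman instance; coordinates and parameters are illustrative.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
n = len(cities)
dist = [[math.dist(a, b) for b in cities] for a in cities]
pher = [[1.0] * n for _ in range(n)]            # pheromone on every edge
alpha, beta, rho, n_ants, n_iter = 1.0, 2.0, 0.5, 10, 50

def tour_length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

def build_tour():
    tour = [random.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tour]
        weights = [pher[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in choices]
        tour.append(random.choices(choices, weights)[0])
    return tour

best = None
for _ in range(n_iter):
    tours = [build_tour() for _ in range(n_ants)]
    for row in pher:                            # evaporation: old trails fade
        for j in range(n):
            row[j] *= (1.0 - rho)
    for t in tours:                             # deposit: shorter tours reinforce more
        d = tour_length(t)
        for k in range(n):
            a, b = t[k], t[(k + 1) % n]
            pher[a][b] += 1.0 / d
            pher[b][a] += 1.0 / d
        if best is None or d < tour_length(best):
            best = t

print(best, round(tour_length(best), 2))
```

Shorter tours deposit more pheromone while evaporation prevents early choices from dominating — the same positive-feedback-plus-decay mechanism that concentrates real ant traffic on shorter trails.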
Ant
Abated, an ancient technical term applied in masonry and metal work to those portions which are sunk beneath the surface, as in inscriptions where the ground is sunk round the letters so as to leave the letters or ornament in relief. See also: Abatement. References Construction Masonry
Abated
Atomic absorption spectroscopy (AAS) and atomic emission spectroscopy (AES) is a spectroanalytical procedure for the quantitative determination of chemical elements using the absorption of optical radiation (light) by free atoms in the gaseous state. Atomic absorption spectroscopy is based on absorption of light by free metallic ions. In analytical chemistry the technique is used for determining the concentration of a particular element (the analyte) in a sample to be analyzed. AAS can be used to determine over 70 different elements in solution, or directly in solid samples via electrothermal vaporization, and is used in pharmacology, biophysics, archaeology and toxicology research. Atomic emission spectroscopy was first used as an analytical technique, and the underlying principles were established in the second half of the 19th century by Robert Wilhelm Bunsen and Gustav Robert Kirchhoff, both professors at the University of Heidelberg, Germany. The modern form of AAS was largely developed during the 1950s by a team of Australian chemists. They were led by Sir Alan Walsh at the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Division of Chemical Physics, in Melbourne, Australia. Atomic absorption spectrometry has many uses in different areas of chemistry such as clinical analysis of metals in biological fluids and tissues such as whole blood, plasma, urine, saliva, brain tissue, liver, hair, muscle tissue. Atomic absorption spectrometry can be used in qualitative and quantitative analysis. Principles The technique makes use of the atomic absorption spectrum of a sample in order to assess the concentration of specific analytes within it. It requires standards with known analyte content to establish the relation between the measured absorbance and the analyte concentration and relies therefore on the Beer–Lambert law. Instrumentation In order to analyze a sample for its atomic constituents, it has to be atomized. The atomizers most commonly used nowadays are flames and electrothermal (graphite tube) atomizers. The atoms should then be irradiated by optical radiation, and the radiation source could be an element-specific line radiation source or a continuum radiation source. The radiation then passes through a monochromator in order to separate the element-specific radiation from any other radiation emitted by the radiation source, which is finally measured by a detector. Atomizers The atomizers most commonly used nowadays are (spectroscopic) flames and electrothermal (graphite tube) atomizers. Other atomizers, such as glow-discharge atomization, hydride atomization, or cold-vapor atomization, might be used for special purposes. Flame atomizers The oldest and most commonly used atomizers in AAS are flames, principally the air-acetylene flame with a temperature of about 2300 °C and the nitrous oxide system (N2O)-acetylene flame with a temperature of about 2700 °C. The latter flame, in addition, offers a more reducing environment, being ideally suited for analytes with high affinity to oxygen. Liquid or dissolved samples are typically used with flame atomizers. The sample solution is aspirated by a pneumatic analytical nebulizer, transformed into an aerosol, which is introduced into a spray chamber, where it is mixed with the flame gases and conditioned in a way that only the finest aerosol droplets (< 10 μm) enter the flame. This conditioning process reduces interference, but only about 5% of the aerosolized solution reaches the flame because of it. 
On top of the spray chamber is a burner head that produces a flame that is laterally long (usually 5–10 cm) and only a few mm deep. The radiation beam passes through this flame at its longest axis, and the flame gas flow-rates may be adjusted to produce the highest concentration of free atoms. The burner height may also be adjusted, so that the radiation beam passes through the zone of highest atom cloud density in the flame, resulting in the highest sensitivity. The processes in a flame include the stages of desolvation (drying) in which the solvent is evaporated and the dry sample nano-particles remain, vaporization (transfer to the gaseous phase) in which the solid particles are converted into gaseous molecule, atomization in which the molecules are dissociated into free atoms, and ionization where (depending on the ionization potential of the analyte atoms and the energy available in a particular flame) atoms may be in part converted to gaseous ions. Each of these stages includes the risk of interference in case the degree of phase transfer is different for the analyte in the calibration standard and in the sample. Ionization is generally undesirable, as it reduces the number of atoms that are available for measurement, i.e., the sensitivity. In flame AAS a steady-state signal is generated during the time period when the sample is aspirated. This technique is typically used for determinations in the mg L−1 range, and may be extended down to a few μg L−1 for some elements. Electrothermal atomizers Electrothermal AAS (ET AAS) using graphite tube atomizers was pioneered by Boris V. L’vov at the Saint Petersburg Polytechnical Institute, Russia, since the late 1950s, and investigated in parallel by Hans Massmann at the Institute of Spectrochemistry and Applied Spectroscopy (ISAS) in Dortmund, Germany. Although a wide variety of graphite tube designs have been used over the years, the dimensions nowadays are typically 20–25 mm in length and 5–6 mm inner diameter. With this technique liquid/dissolved, solid and gaseous samples may be analyzed directly. A measured volume (typically 10–50 μL) or a weighed mass (typically around 1 mg) of a solid sample are introduced into the graphite tube and subject to a temperature program. This typically consists of stages, such as drying – the solvent is evaporated; pyrolysis – the majority of the matrix constituents are removed; atomization – the analyte element is released to the gaseous phase; and cleaning – eventual residues in the graphite tube are removed at high temperature. The graphite tubes are heated via their ohmic resistance using a low-voltage high-current power supply; the temperature in the individual stages can be controlled very closely, and temperature ramps between the individual stages facilitate separation of sample components. Tubes may be heated transversely or longitudinally, where the former ones have the advantage of a more homogeneous temperature distribution over their length. The so-called stabilized temperature platform furnace (STPF) concept, proposed by Walter Slavin, based on research of Boris L’vov, makes ET AAS essentially free from interference. 
The major components of this concept are atomization of the sample from a graphite platform inserted into the graphite tube (L’vov platform) instead of from the tube wall in order to delay atomization until the gas phase in the atomizer has reached a stable temperature; use of a chemical modifier in order to stabilize the analyte to a pyrolysis temperature that is sufficient to remove the majority of the matrix components; and integration of the absorbance over the time of the transient absorption signal instead of using peak height absorbance for quantification. In ET AAS a transient signal is generated, the area of which is directly proportional to the mass of analyte (not its concentration) introduced into the graphite tube. This technique has the advantage that any kind of sample, solid, liquid or gaseous, can be analyzed directly. Its sensitivity is 2–3 orders of magnitude higher than that of flame AAS, so that determinations in the low μg L−1 range (for a typical sample volume of 20 μL) and ng g−1 range (for a typical sample mass of 1 mg) can be carried out. It shows a very high degree of freedom from interferences, so that ET AAS might be considered the most robust technique available nowadays for the determination of trace elements in complex matrices. Specialized atomization techniques While flame and electrothermal vaporizers are the most common atomization techniques, several other atomization methods are utilized for specialized use. Glow-discharge atomization A glow-discharge device (GD) serves as a versatile source, as it can simultaneously introduce and atomize the sample. The glow discharge occurs in a low-pressure argon gas atmosphere between 1 and 10 torr. In this atmosphere lies a pair of electrodes applying a DC voltage of 250 to 1000 V to break down the argon gas into positively charged ions and electrons. These ions, under the influence of the electric field, are accelerated into the cathode surface containing the sample, bombarding the sample and causing neutral sample atom ejection through the process known as sputtering. The atomic vapor produced by this discharge is composed of ions, ground state atoms, and fraction of excited atoms. When the excited atoms relax back into their ground state, a low-intensity glow is emitted, giving the technique its name. The requirement for samples of glow discharge atomizers is that they are electrical conductors. Consequently, atomizers are most commonly used in the analysis of metals and other conducting samples. However, with proper modifications, it can be utilized to analyze liquid samples as well as nonconducting materials by mixing them with a conductor (e.g. graphite). Hydride atomization Hydride generation techniques are specialized in solutions of specific elements. The technique provides a means of introducing samples containing arsenic, antimony, selenium, bismuth, and lead into an atomizer in the gas phase. With these elements, hydride atomization enhances detection limits by a factor of 10 to 100 compared to alternative methods. Hydride generation occurs by adding an acidified aqueous solution of the sample to a 1% aqueous solution of sodium borohydride, all of which is contained in a glass vessel. The volatile hydride generated by the reaction that occurs is swept into the atomization chamber by an inert gas, where it undergoes decomposition. This process forms an atomized form of the analyte, which can then be measured by absorption or emission spectrometry. 
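Whichever atomizer is used, quantification follows the calibration principle stated under Principles: absorbances measured for standards of known analyte content are fitted against concentration (approximately linear at low absorbance, per the Beer–Lambert law), and the sample's absorbance is then read off that calibration line. A minimal Python sketch, with invented numbers:

```python
import numpy as np

# Hypothetical calibration standards: concentration in mg/L and measured absorbance.
# A linear model A = m*c + b follows from the Beer-Lambert law at low absorbance;
# every number here is invented purely for illustration.
conc_std = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
abs_std = np.array([0.002, 0.051, 0.103, 0.198, 0.395])

m, b = np.polyfit(conc_std, abs_std, 1)      # least-squares calibration line

def concentration(absorbance):
    """Invert the calibration line for an unknown sample."""
    return (absorbance - b) / m

print(f"slope = {m:.4f} per mg/L, intercept = {b:.4f}")
print(f"sample with A = 0.150 -> {concentration(0.150):.2f} mg/L")
```

In practice the linear range, matrix matching and background correction all matter, but inverting a fitted calibration function is the common core of the determination.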
Cold-vapor atomization The cold-vapor technique is an atomization method limited to only the determination of mercury, due to it being the only metallic element to have a large enough vapor pressure at ambient temperature. Because of this, it has an important use in determining organic mercury compounds in samples and their distribution in the environment. The method initiates by converting mercury into Hg2+ by oxidation from nitric and sulfuric acids, followed by a reduction of Hg2+ with tin(II) chloride. The mercury, is then swept into a long-pass absorption tube by bubbling a stream of inert gas through the reaction mixture. The concentration is determined by measuring the absorbance of this gas at 253.7 nm. Detection limits for this technique are in the parts-per-billion range making it an excellent mercury detection atomization method. Two types of burners are used: total consumption burner and premix burner. Radiation sources We have to distinguish between line source AAS (LS AAS) and continuum source AAS (CS AAS). In classical LS AAS, as it has been proposed by Alan Walsh, the high spectral resolution required for AAS measurements is provided by the radiation source itself that emits the spectrum of the analyte in the form of lines that are narrower than the absorption lines. Continuum sources, such as deuterium lamps, are only used for background correction purposes. The advantage of this technique is that only a medium-resolution monochromator is necessary for measuring AAS; however, it has the disadvantage that usually a separate lamp is required for each element that has to be determined. In CS AAS, in contrast, a single lamp, emitting a continuum spectrum over the entire spectral range of interest is used for all elements. Obviously, a high-resolution monochromator is required for this technique, as will be discussed later. Hollow cathode lamps Hollow cathode lamps (HCL) are the most common radiation source in LS AAS. Inside the sealed lamp, filled with argon or neon gas at low pressure, is a cylindrical metal cathode containing the element of interest and an anode. A high voltage is applied across the anode and cathode, resulting in an ionization of the fill gas. The gas ions are accelerated towards the cathode and, upon impact on the cathode, sputter cathode material that is excited in the glow discharge to emit the radiation of the sputtered material, i.e., the element of interest. In the majority of cases single element lamps are used, where the cathode is pressed out of predominantly compounds of the target element. Multi-element lamps are available with combinations of compounds of the target elements pressed in the cathode. Multi element lamps produce slightly less sensitivity than single element lamps and the combinations of elements have to be selected carefully to avoid spectral interferences. Most multi-element lamps combine a handful of elements, e.g.: 2 - 8. Atomic Absorption Spectrometers can feature as few as 1-2 hollow cathode lamp positions or in automated multi-element spectrometers, a 8-12 lamp positions may be typically available. Electrodeless discharge lamps Electrodeless discharge lamps (EDL) contain a small quantity of the analyte as a metal or a salt in a quartz bulb together with an inert gas, typically argon gas, at low pressure. The bulb is inserted into a coil that is generating an electromagnetic radio frequency field, resulting in a low-pressure inductively coupled discharge in the lamp. 
The emission from an EDL is higher than that from an HCL, and the line width is generally narrower, but EDLs need a separate power supply and might need a longer time to stabilize. Deuterium lamps Deuterium HCL or even hydrogen HCL and deuterium discharge lamps are used in LS AAS for background correction purposes. The radiation intensity emitted by these lamps decreases significantly with increasing wavelength, so that they can be only used in the wavelength range between 190 and about 320 nm. Continuum sources When a continuum radiation source is used for AAS, it is necessary to use a high-resolution monochromator, as will be discussed later. In addition, it is necessary that the lamp emits radiation of intensity at least an order of magnitude above that of a typical HCL over the entire wavelength range from 190 nm to 900 nm. A special high-pressure xenon short arc lamp, operating in a hot-spot mode has been developed to fulfill these requirements. Spectrometer As already pointed out above, there is a difference between medium-resolution spectrometers that are used for LS AAS and high-resolution spectrometers that are designed for CS AAS. The spectrometer includes the spectral sorting device (monochromator) and the detector. Spectrometers for LS AAS In LS AAS the high resolution that is required for the measurement of atomic absorption is provided by the narrow line emission of the radiation source, and the monochromator simply has to resolve the analytical line from other radiation emitted by the lamp. This can usually be accomplished with a band pass between 0.2 and 2 nm, i.e., a medium-resolution monochromator. Another feature to make LS AAS element-specific is modulation of the primary radiation and the use of a selective amplifier that is tuned to the same modulation frequency, as already postulated by Alan Walsh. This way any (unmodulated) radiation emitted for example by the atomizer can be excluded, which is imperative for LS AAS. Simple monochromators of the Littrow or (better) the Czerny-Turner design are typically used for LS AAS. Photomultiplier tubes are the most frequently used detectors in LS AAS, although solid state detectors might be preferred because of their better signal-to-noise ratio. Spectrometers for CS AAS When a continuum radiation source is used for AAS measurement it is indispensable to work with a high-resolution monochromator. The resolution has to be equal to or better than the half-width of an atomic absorption line (about 2 pm) in order to avoid losses of sensitivity and linearity of the calibration graph. The research with high-resolution (HR) CS AAS was pioneered by the groups of O’Haver and Harnly in the US, who also developed the (up until now) only simultaneous multi-element spectrometer for this technique. The breakthrough, however, came when the group of Becker-Ross in Berlin, Germany, built a spectrometer entirely designed for HR-CS AAS. The first commercial equipment for HR-CS AAS was introduced by Analytik Jena (Jena, Germany) at the beginning of the 21st century, based on the design proposed by Becker-Ross and Florek. These spectrometers use a compact double monochromator with a prism pre-monochromator and an echelle grating monochromator for high resolution. A linear charge-coupled device (CCD) array with 200 pixels is used as the detector. The second monochromator does not have an exit slit; hence the spectral environment at both sides of the analytical line becomes visible at high resolution. 
As typically only 3–5 pixels are used to measure the atomic absorption, the other pixels are available for correction purposes. One of these corrections is that for lamp flicker noise, which is independent of wavelength, resulting in measurements with very low noise level; other corrections are those for background absorption, as will be discussed later. Background absorption and background correction The relatively small number of atomic absorption lines (compared to atomic emission lines) and their narrow width (a few pm) make spectral overlap rare; there are only few examples known that an absorption line from one element will overlap with another. Molecular absorption, in contrast, is much broader, so that it is more likely that some molecular absorption band will overlap with an atomic line. This kind of absorption might be caused by un-dissociated molecules of concomitant elements of the sample or by flame gases. We have to distinguish between the spectra of di-atomic molecules, which exhibit a pronounced fine structure, and those of larger (usually tri-atomic) molecules that don't show such fine structure. Another source of background absorption, particularly in ET AAS, is scattering of the primary radiation at particles that are generated in the atomization stage, when the matrix could not be removed sufficiently in the pyrolysis stage. All these phenomena, molecular absorption and radiation scattering, can result in artificially high absorption and an improperly high (erroneous) calculation for the concentration or mass of the analyte in the sample. There are several techniques available to correct for background absorption, and they are significantly different for LS AAS and HR-CS AAS. Background correction techniques in LS AAS In LS AAS background absorption can only be corrected using instrumental techniques, and all of them are based on two sequential measurements: firstly, total absorption (atomic plus background), secondly, background absorption only. The difference of the two measurements gives the net atomic absorption. Because of this, and because of the use of additional devices in the spectrometer, the signal-to-noise ratio of background-corrected signals is always significantly inferior compared to uncorrected signals. It should also be pointed out that in LS AAS there is no way to correct for (the rare case of) a direct overlap of two atomic lines. In essence there are three techniques used for background correction in LS AAS: Deuterium background correction This is the oldest and still most commonly used technique, particularly for flame AAS. In this case, a separate source (a deuterium lamp) with broad emission is used to measure the background absorption over the entire width of the exit slit of the spectrometer. The use of a separate lamp makes this technique the least accurate one, as it cannot correct for any structured background. It also cannot be used at wavelengths above about 320 nm, as the emission intensity of the deuterium lamp becomes very weak. The use of deuterium HCL is preferable compared to an arc lamp due to the better fit of the image of the former lamp with that of the analyte HCL. Smith-Hieftje background correction This technique (named after their inventors) is based on the line-broadening and self-reversal of emission lines from HCL when high current is applied. 
Total absorption is measured with normal lamp current, i.e., with a narrow emission line, and background absorption after application of a high-current pulse with the profile of the self-reversed line, which has little emission at the original wavelength, but strong emission on both sides of the analytical line. The advantage of this technique is that only one radiation source is used; among the disadvantages are that the high-current pulses reduce lamp lifetime, and that the technique can only be used for relatively volatile elements, as only those exhibit sufficient self-reversal to avoid dramatic loss of sensitivity. Another problem is that background is not measured at the same wavelength as total absorption, making the technique unsuitable for correcting structured background. Zeeman-effect background correction An alternating magnetic field is applied at the atomizer (graphite furnace) to split the absorption line into three components, the π component, which remains at the same position as the original absorption line, and two σ components, which are moved to higher and lower wavelengths, respectively. Total absorption is measured without magnetic field and background absorption with the magnetic field on. The π component has to be removed in this case, e.g. using a polarizer, and the σ components do not overlap with the emission profile of the lamp, so that only the background absorption is measured. The advantages of this technique are that total and background absorption are measured with the same emission profile of the same lamp, so that any kind of background, including background with fine structure can be corrected accurately, unless the molecule responsible for the background is also affected by the magnetic field and using a chopper as a polariser reduces the signal to noise ratio. While the disadvantages are the increased complexity of the spectrometer and power supply needed for running the powerful magnet needed to split the absorption line. Background correction techniques in HR-CS AAS In HR-CS AAS background correction is carried out mathematically in the software using information from detector pixels that are not used for measuring atomic absorption; hence, in contrast to LS AAS, no additional components are required for background correction. Background correction using correction pixels It has already been mentioned that in HR-CS AAS lamp flicker noise is eliminated using correction pixels. In fact, any increase or decrease in radiation intensity that is observed to the same extent at all pixels chosen for correction is eliminated by the correction algorithm. This obviously also includes a reduction of the measured intensity due to radiation scattering or molecular absorption, which is corrected in the same way. As measurement of total and background absorption, and correction for the latter, are strictly simultaneous (in contrast to LS AAS), even the fastest changes of background absorption, as they may be observed in ET AAS, do not cause any problem. In addition, as the same algorithm is used for background correction and elimination of lamp noise, the background corrected signals show a much better signal-to-noise ratio compared to the uncorrected signals, which is also in contrast to LS AAS. Background correction using a least-squares algorithm The above technique can obviously not correct for a background with fine structure, as in this case the absorbance will be different at each of the correction pixels. 
In this case HR-CS AAS is offering the possibility to measure correction spectra of the molecule(s) that is (are) responsible for the background and store them in the computer. These spectra are then multiplied with a factor to match the intensity of the sample spectrum and subtracted pixel by pixel and spectrum by spectrum from the sample spectrum using a least-squares algorithm. This might sound complex, but first of all the number of di-atomic molecules that can exist at the temperatures of the atomizers used in AAS is relatively small, and second, the correction is performed by the computer within a few seconds. The same algorithm can actually also be used to correct for direct line overlap of two atomic absorption lines, making HR-CS AAS the only AAS technique that can correct for this kind of spectral interference. See also Absorption spectroscopy Beer–Lambert law Inductively coupled plasma mass spectrometry Laser absorption spectrometry References Further reading B. Welz, M. Sperling (1999), Atomic Absorption Spectrometry, Wiley-VCH, Weinheim, Germany, . A. Walsh (1955), The application of atomic absorption spectra to chemical analysis, Spectrochim. Acta 7: 108–117. J.A.C. Broekaert (1998), Analytical Atomic Spectrometry with Flames and Plasmas, 3rd Edition, Wiley-VCH, Weinheim, Germany. B.V. L’vov (1984), Twenty-five years of furnace atomic absorption spectroscopy, Spectrochim. Acta Part B, 39: 149–157. B.V. L’vov (2005), Fifty years of atomic absorption spectrometry; J. Anal. Chem., 60: 382–392. H. Massmann (1968), Vergleich von Atomabsorption und Atomfluoreszenz in der Graphitküvette, Spectrochim. Acta Part B, 23: 215–226. W. Slavin, D.C. Manning, G.R. Carnrick (1981), The stabilized temperature platform furnace, At. Spectrosc. 2: 137–145. B. Welz, H. Becker-Ross, S. Florek, U. Heitmann (2005), High-resolution Continuum Source AAS, Wiley-VCH, Weinheim, Germany, . H. Becker-Ross, S. Florek, U. Heitmann, R. Weisse (1996), Influence of the spectral bandwidth of the spectrometer on the sensitivity using continuum source AAS, Fresenius J. Anal. Chem. 355: 300–303. J.M. Harnly (1986), Multi element atomic absorption with a continuum source, Anal. Chem. 58: 933A-943A. Skoog, Douglas (2007). Principles of Instrumental Analysis (6th ed.). Canada: Thomson Brooks/Cole. . External links Absorption spectroscopy Australian inventions Scientific techniques Analytical chemistry
Atomic absorption spectroscopy
The Anatomical Therapeutic Chemical (ATC) Classification System is a drug classification system that classifies the active ingredients of drugs according to the organ or system on which they act and their therapeutic, pharmacological and chemical properties. Its purpose is an aid to monitor drug use and for research to improve quality medication use. It does not imply drug recommendation or efficacy. It is controlled by the World Health Organization Collaborating Centre for Drug Statistics Methodology (WHOCC), and was first published in 1976. Coding system This pharmaceutical coding system divides drugs into different groups according to the organ or system on which they act, their therapeutic intent or nature, and the drug's chemical characteristics. Different brands share the same code if they have the same active substance and indications. Each bottom-level ATC code stands for a pharmaceutically used substance, or a combination of substances, in a single indication (or use). This means that one drug can have more than one code, for example acetylsalicylic acid (aspirin) has as a drug for local oral treatment, as a platelet inhibitor, and as an analgesic and antipyretic; as well as one code can represent more than one active ingredient, for example is the combination of perindopril with amlodipine, two active ingredients that have their own codes ( and respectively) when prescribed alone. The ATC classification system is a strict hierarchy, meaning that each code necessarily has one and only one parent code, except for the 14 codes at the topmost level which have no parents. The codes are semantic identifiers, meaning they depict information by themselves beyond serving as identifiers (namely, the codes depict themselves the complete lineage of parenthood). As of 7 May 2020, there are 6,331 codes in ATC; the table below gives the count per level. History The ATC system is based on the earlier Anatomical Classification System, which is intended as a tool for the pharmaceutical industry to classify pharmaceutical products (as opposed to their active ingredients). This system, confusingly also called ATC, was initiated in 1971 by the European Pharmaceutical Market Research Association (EphMRA) and is being maintained by the EphMRA and Intellus. Its codes are organised into four levels. The WHO's system, having five levels, is an extension and modification of the EphMRA's. It was first published in 1976. Classification In this system, drugs are classified into groups at five different levels: First level The first level of the code indicates the anatomical main group and consists of one letter. There are 14 main groups: Example: C Cardiovascular system Second level The second level of the code indicates the therapeutic subgroup and consists of two digits. Example: C03 Diuretics Third level The third level of the code indicates the therapeutic/pharmacological subgroup and consists of one letter. Example: C03C High-ceiling diuretics Fourth level The fourth level of the code indicates the chemical/therapeutic/pharmacological subgroup and consists of one letter. Example: C03CA Sulfonamides Fifth level The fifth level of the code indicates the chemical substance and consists of two digits. Example: C03CA01 furosemide Other ATC classification systems ATCvet The Anatomical Therapeutic Chemical Classification System for veterinary medicinal products (ATCvet) is used to classify veterinary drugs. ATCvet codes can be created by placing the letter Q in front of the ATC code of most human medications. 
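Because each ATC code encodes its own lineage, the five levels described above can be recovered by simple string slicing, and an ATCvet code is obtained by prefixing the letter Q as just described. A minimal Python sketch (the function names are illustrative and not part of any official tooling):

```python
def atc_levels(code):
    """Split a seven-character ATC code into its five hierarchical levels."""
    return {
        "anatomical main group": code[:1],                          # e.g. C (cardiovascular system)
        "therapeutic subgroup": code[:3],                           # e.g. C03 (diuretics)
        "therapeutic/pharmacological subgroup": code[:4],           # e.g. C03C (high-ceiling diuretics)
        "chemical/therapeutic/pharmacological subgroup": code[:5],  # e.g. C03CA (sulfonamides)
        "chemical substance": code[:7],                             # e.g. C03CA01 (furosemide)
    }

def atcvet(code):
    """Derive the corresponding ATCvet code by prefixing the letter Q."""
    return "Q" + code

print(atc_levels("C03CA01"))
print(atcvet("C03CA01"))   # -> QC03CA01
```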
For example, furosemide for veterinary use has the code QC03CA01. Some codes are used exclusively for veterinary drugs, such as QI Immunologicals, QJ51 Antibacterials for intramammary use or QN05AX90 amperozide. Herbal ATC (HATC) The Herbal ATC system (HATC) is an ATC classification of herbal substances; it differs from the regular ATC system by using 4 digits instead of 2 at the 5th level group. The herbal classification has not been adopted by the WHO. The Uppsala Monitoring Centre is responsible for the Herbal ATC classification, which is part of the WHODrug Global portfolio, available by subscription. Defined daily dose The ATC system also includes defined daily doses (DDDs) for many drugs. This is a measurement of drug consumption based on the usual daily dose for a given drug. According to the definition, "[t]he DDD is the assumed average maintenance dose per day for a drug used for its main indication in adults." Adaptations and updates National issues of the ATC classification, such as the German Anatomisch-therapeutisch-chemische Klassifikation mit Tagesdosen, may include additional codes and DDDs not present in the WHO version. New codes for newly approved drugs are created following WHO guidelines: an application is submitted to the WHO for ATC classification and DDD assignment, and a preliminary or temporary code is assigned and published on the website and in the WHO Drug Information for comment or objection. New ATC/DDD codes are discussed at the semi-annual Working Group meetings. If accepted, a code becomes a final decision, is published semi-annually on the website and in WHO Drug Information, and is implemented in the annual print/online ATC/DDD Index on January 1. Changes to existing ATC codes and DDDs follow a similar process: they first become temporary codes and, if accepted, become final decisions as ATC/DDD alterations. ATC and DDD alterations are only valid and implemented in the coming annual updates; the original codes must continue to be used until the end of the year. An updated version of the complete on-line/print ATC index with DDDs is published annually on January 1. See also Classification of Pharmaco-Therapeutic Referrals (CPR) ICD-10 International Classification of Diseases International Classification of Primary Care (ICPC-2) / ICPC-2 PLUS Medical classification Pharmaceutical care Pharmacotherapy RxNorm References External links Quarterly journal providing an overview of topics relating to medicines development and regulation. Anatomical Classification (ATC and NFC) from EphMRA. atcd: R script to scrape the ATC data from the WHOCC website; contains a link to download the entire ATC tree. Drugs Pharmacological classification systems World Health Organization
Anatomical Therapeutic Chemical Classification System
Parallel ATA (PATA), originally AT Attachment, also known as ATA or IDE, is a standard interface designed for IBM PC-compatible computers. It was first developed by Western Digital and Compaq in 1986 for compatible hard drives and CD or DVD drives. The connection is used for storage devices such as hard disk drives, floppy disk drives, and optical disc drives in computers. The standard is maintained by the X3/INCITS committee. It uses the underlying AT Attachment (ATA) and AT Attachment Packet Interface (ATAPI) standards. The Parallel ATA standard is the result of a long history of incremental technical development, which began with the original AT Attachment interface, developed for use in early PC AT equipment. The ATA interface itself evolved in several stages from Western Digital's original Integrated Drive Electronics (IDE) interface. As a result, many near-synonyms for ATA/ATAPI and its previous incarnations are still in common informal use, in particular Extended IDE (EIDE) and Ultra ATA (UATA). After the introduction of Serial ATA (SATA) in 2003, the original ATA was renamed to Parallel ATA, or PATA for short. Parallel ATA cables have a maximum allowable length of 18 in (457 mm). Because of this limit, the technology normally appears as an internal computer storage interface. For many years, ATA provided the most common and the least expensive interface for this application. It has largely been replaced by SATA in newer systems. History and terminology The standard was originally conceived as the "AT Bus Attachment," officially called "AT Attachment" and abbreviated "ATA" because its primary feature was a direct connection to the 16-bit ISA bus introduced with the IBM PC/AT. The original ATA specifications published by the standards committees use the name "AT Attachment". The "AT" in the IBM PC/AT referred to "Advanced Technology", so ATA has also been referred to as "Advanced Technology Attachment". When the newer Serial ATA (SATA) was introduced in 2003, the original ATA was renamed to Parallel ATA, or PATA for short. Physical ATA interfaces became a standard component in all PCs, initially on host bus adapters, sometimes on a sound card, but ultimately as two physical interfaces embedded in a Southbridge chip on a motherboard. Called the "primary" and "secondary" ATA interfaces, they were assigned to base addresses 0x1F0 and 0x170 on ISA bus systems. They were eventually replaced by SATA interfaces. IDE and ATA-1 The first version of what is now called the ATA/ATAPI interface was developed by Western Digital under the name Integrated Drive Electronics (IDE). Together with Control Data Corporation (the hard drive manufacturer) and Compaq Computer (the initial customer), they developed the connector, the signaling protocols and so on, with the goal of remaining software compatible with the existing ST-506 hard drive interface. The first such drives appeared internally in Compaq PCs in 1986 and were first offered separately by Conner Peripherals as the CP342 in June 1987. The term Integrated Drive Electronics refers not just to the connector and interface definition, but also to the fact that the drive controller is integrated into the drive, as opposed to a separate controller on or connected to the motherboard. The interface cards used to connect a parallel ATA drive to, for example, a PCI slot are not drive controllers: they are merely bridges between the host bus and the ATA interface.
Since the original ATA interface is essentially just a 16-bit ISA bus in disguise, the bridge was especially simple in case of an ATA connector being located on an ISA interface card. The integrated controller presented the drive to the host computer as an array of 512-byte blocks with a relatively simple command interface. This relieved the mainboard and interface cards in the host computer of the chores of stepping the disk head arm, moving the head arm in and out, and so on, as had to be done with earlier ST-506 and ESDI hard drives. All of these low-level details of the mechanical operation of the drive were now handled by the controller on the drive itself. This also eliminated the need to design a single controller that could handle many different types of drives, since the controller could be unique for the drive. The host need only to ask for a particular sector, or block, to be read or written, and either accept the data from the drive or send the data to it. The interface used by these drives was standardized in 1994 as ANSI standard X3.221-1994, AT Attachment Interface for Disk Drives. After later versions of the standard were developed, this became known as "ATA-1". A short-lived, seldom-used implementation of ATA was created for the IBM XT and similar machines that used the 8-bit version of the ISA bus. It has been referred to as "XT-IDE", "XTA" or "XT Attachment". EIDE and ATA-2 In 1994, about the same time that the ATA-1 standard was adopted, Western Digital introduced drives under a newer name, Enhanced IDE (EIDE). These included most of the features of the forthcoming ATA-2 specification and several additional enhancements. Other manufacturers introduced their own variations of ATA-1 such as "Fast ATA" and "Fast ATA-2". The new version of the ANSI standard, AT Attachment Interface with Extensions ATA-2 (X3.279-1996), was approved in 1996. It included most of the features of the manufacturer-specific variants. ATA-2 also was the first to note that devices other than hard drives could be attached to the interface: ATAPI As mentioned in the previous sections, ATA was originally designed for, and worked only with hard disk drives and devices that could emulate them. The introduction of ATAPI (ATA Packet Interface) by a group called the Small Form Factor committee (SFF) allowed ATA to be used for a variety of other devices that require functions beyond those necessary for hard disk drives. For example, any removable media device needs a "media eject" command, and a way for the host to determine whether the media is present, and these were not provided in the ATA protocol. The Small Form Factor committee approached this problem by defining ATAPI, the "ATA Packet Interface". ATAPI is actually a protocol allowing the ATA interface to carry SCSI commands and responses; therefore, all ATAPI devices are actually "speaking SCSI" other than at the electrical interface. In fact, some early ATAPI devices were simply SCSI devices with an ATA/ATAPI to SCSI protocol converter added on. The SCSI commands and responses are embedded in "packets" (hence "ATA Packet Interface") for transmission on the ATA cable. This allows any device class for which a SCSI command set has been defined to be interfaced via ATA/ATAPI. ATAPI devices are also "speaking ATA", as the ATA physical interface and protocol are still being used to send the packets. On the other hand, ATA hard drives and solid state drives do not use ATAPI. 
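The "array of 512-byte blocks" abstraction described above survives unchanged in modern operating systems, which expose a drive as a flat sequence of logical blocks. The following minimal Python sketch reads one such block on Linux; the device path /dev/sda and the need for root privileges are assumptions, and it is the operating system's block layer, not this script, that issues the actual drive commands.

```python
import os

SECTOR_SIZE = 512  # the traditional ATA logical block size

def read_sector(device: str, lba: int) -> bytes:
    """Return the 512-byte block at logical block address `lba`."""
    fd = os.open(device, os.O_RDONLY)
    try:
        os.lseek(fd, lba * SECTOR_SIZE, os.SEEK_SET)
        return os.read(fd, SECTOR_SIZE)
    finally:
        os.close(fd)

if __name__ == "__main__":
    block = read_sector("/dev/sda", 0)       # block 0 holds the Master Boot Record on BIOS-era disks
    print(len(block), block[510:512].hex())  # a classic MBR ends with the 55aa boot signature
```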
ATAPI devices include CD-ROM and DVD-ROM drives, tape drives, and large-capacity floppy drives such as the Zip drive and SuperDisk drive. The SCSI commands and responses used by each class of ATAPI device (CD-ROM, tape, etc.) are described in other documents or specifications specific to those device classes and are not within ATA/ATAPI or the T13 committee's purview. One commonly used set is defined in the MMC SCSI command set. ATAPI was adopted as part of ATA in INCITS 317-1998, AT Attachment with Packet Interface Extension (ATA/ATAPI-4). UDMA and ATA-4 The ATA/ATAPI-4 standard also introduced several "Ultra DMA" transfer modes. These initially supported speeds from 16 MByte/s to 33 MByte/second. In later versions, faster Ultra DMA modes were added, requiring new 80-wire cables to reduce crosstalk. The latest versions of Parallel ATA support up to 133 MByte/s. Ultra ATA Ultra ATA, abbreviated UATA, is a designation that has been primarily used by Western Digital for different speed enhancements to the ATA/ATAPI standards. For example, in 2000 Western Digital published a document describing "Ultra ATA/100", which brought performance improvements for the then-current ATA/ATAPI-5 standard by improving maximum speed of the Parallel ATA interface from 66 to 100 MB/s. Most of Western Digital's changes, along with others, were included in the ATA/ATAPI-6 standard (2002). Current terminology The terms "integrated drive electronics" (IDE), "enhanced IDE" and "EIDE" have come to be used interchangeably with ATA (now Parallel ATA, or PATA). In addition, there have been several generations of "EIDE" drives marketed, compliant with various versions of the ATA specification. An early "EIDE" drive might be compatible with ATA-2, while a later one with ATA-6. Nevertheless, a request for an "IDE" or "EIDE" drive from a computer parts vendor will almost always yield a drive that will work with most Parallel ATA interfaces. Another common usage is to refer to the specification version by the fastest mode supported. For example, ATA-4 supported Ultra DMA modes 0 through 2, the latter providing a maximum transfer rate of 33 megabytes per second. ATA-4 drives are thus sometimes called "UDMA-33" drives, and sometimes "ATA-33" drives. Similarly, ATA-6 introduced a maximum transfer speed of 100 megabytes per second, and some drives complying with this version of the standard are marketed as "PATA/100" drives. x86 BIOS size limitations Initially, the size of an ATA drive was stored in the system x86 BIOS using a type number (1 through 45) that predefined the C/H/S parameters and also often the landing zone, in which the drive heads are parked while not in use. Later, a "user definable" format called C/H/S or cylinders, heads, sectors was made available. These numbers were important for the earlier ST-506 interface, but were generally meaningless for ATA—the CHS parameters for later ATA large drives often specified impossibly high numbers of heads or sectors that did not actually define the internal physical layout of the drive at all. From the start, and up to ATA-2, every user had to specify explicitly how large every attached drive was. From ATA-2 on, an "identify drive" command was implemented that can be sent and which will return all drive parameters. Owing to a lack of foresight by motherboard manufacturers, the system BIOS was often hobbled by artificial C/H/S size limitations due to the manufacturer assuming certain values would never exceed a particular numerical maximum. 
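The arithmetic behind the capacity barriers described next is simple enough to check in a few lines. The following Python sketch only reproduces the multiplication of geometry values by the sector size; no real BIOS or drive is involved.

```python
SECTOR_SIZE = 512  # bytes per sector

def chs_capacity(cylinders: int, heads: int, sectors: int) -> int:
    """Total addressable bytes for a given cylinder/head/sector geometry."""
    return cylinders * heads * sectors * SECTOR_SIZE

MiB = 1024 ** 2

# 1024 cylinders, 16 heads, 63 sectors: the classic 504 MiB (528 MB) BIOS barrier
print(chs_capacity(1024, 16, 63) / MiB)    # 504.0

# 1024 cylinders, 255 heads (the MS-DOS limit), 63 sectors: the 8.4 GB barrier
print(chs_capacity(1024, 255, 63) / MiB)   # 8032.5
```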
The first of these BIOS limits occurred when ATA drives reached sizes in excess of 504 MiB, because some motherboard BIOSes would not allow C/H/S values above 1024 cylinders, 16 heads, and 63 sectors. Multiplied by 512 bytes per sector, this totals 528,482,304 bytes which, divided by 1,048,576 bytes per MiB, equals 504 MiB (528 MB). The second of these BIOS limitations occurred at 1024 cylinders, 256 heads, and 63 sectors, and a problem in MS-DOS limited the number of heads to 255. This totals 8,422,686,720 bytes (8032.5 MiB), commonly referred to as the 8.4 gigabyte barrier. This is again a limit imposed by x86 BIOSes, and not a limit imposed by the ATA interface. It was eventually determined that these size limitations could be overridden with a tiny program loaded at startup from a hard drive's boot sector. Some hard drive manufacturers, such as Western Digital, started including these override utilities with new large hard drives to help overcome these problems. However, if the computer was booted in some other manner without loading the special utility, the invalid BIOS settings would be used and the drive could either be inaccessible or appear to the operating system to be damaged. Later, an extension to the x86 BIOS disk services called the "Enhanced Disk Drive" (EDD) was made available, which makes it possible to address drives as large as 2^64 sectors. Interface size limitations The first drive interface used a 22-bit addressing mode, which resulted in a maximum drive capacity of two gigabytes. Later, the first formalized ATA specification used a 28-bit addressing mode through LBA28, allowing for the addressing of 2^28 (268,435,456) sectors (blocks) of 512 bytes each, resulting in a maximum capacity of 128 GiB (137 GB). ATA-6 introduced 48-bit addressing, increasing the limit to 128 PiB (144 PB). As a consequence, any ATA drive of capacity larger than about 137 GB must be an ATA-6 or later drive. Connecting such a drive to a host with an ATA-5 or earlier interface will limit the usable capacity to the maximum of the interface. Some operating systems, including Windows XP pre-SP1 and Windows 2000 pre-SP3, disable LBA48 by default, requiring the user to take extra steps to use the entire capacity of an ATA drive larger than about 137 gigabytes. Older operating systems, such as Windows 98, do not support 48-bit LBA at all. However, members of the third-party group MSFN have modified the Windows 98 disk drivers to add unofficial support for 48-bit LBA to Windows 95 OSR2, Windows 98, Windows 98 SE and Windows ME. Some 16-bit and 32-bit operating systems supporting LBA48 may still not support disks larger than 2 TiB because they use only 32-bit arithmetic, a limitation that also applies to many boot sectors. Primacy and obsolescence Parallel ATA (then simply called ATA or IDE) became the primary storage device interface for PCs soon after its introduction. In some systems, a third and fourth motherboard interface were provided, allowing up to eight ATA devices to be attached to the motherboard. Often, these additional connectors were implemented by inexpensive RAID controllers. Soon after the introduction of Serial ATA (SATA) in 2003, use of Parallel ATA declined. The first motherboards with built-in SATA interfaces usually had only a single PATA connector (for up to two PATA devices), along with multiple SATA connectors. Some PCs and laptops of the era have a SATA hard disk and an optical drive connected to PATA. As of 2007, some PC chipsets, for example the Intel ICH10, had removed support for PATA.
Motherboard vendors still wishing to offer Parallel ATA with those chipsets must include an additional interface chip. In more recent computers, the Parallel ATA interface is rarely used even if present, as four or more Serial ATA connectors are usually provided on the motherboard and SATA devices of all types are common. With Western Digital's withdrawal from the PATA market, hard disk drives with the PATA interface were no longer in production after December 2013 for other than specialty applications. Parallel ATA interface Parallel ATA cables transfer data 16 bits at a time. The traditional cable uses 40-pin female connectors attached to a 40- or 80-conductor ribbon cable. Each cable has two or three connectors, one of which plugs into a host adapter interfacing with the rest of the computer system. The remaining connector(s) plug into storage devices, most commonly hard disk drives or optical drives. Each connector has 39 physical pins arranged into two rows, with a gap or key at pin 20. Round parallel ATA cables (as opposed to ribbon cables) were eventually made available for 'case modders' for cosmetic reasons, as well as claims of improved computer cooling and were easier to handle; however, only ribbon cables are supported by the ATA specifications. Pin 20 In the ATA standard, pin 20 is defined as a mechanical key and is not used. This pin's socket on the female connector is often obstructed, requiring pin 20 to be omitted from the male cable or drive connector; it is thus impossible to plug it in the wrong way round. However, some flash memory drives can use pin 20 as VCC_in to power the drive without requiring a special power cable; this feature can only be used if the equipment supports this use of pin 20. Pin 28 Pin 28 of the gray (slave/middle) connector of an 80-conductor cable is not attached to any conductor of the cable. It is attached normally on the black (master drive end) and blue (motherboard end) connectors. This enables cable select functionality. Pin 34 Pin 34 is connected to ground inside the blue connector of an 80-conductor cable but not attached to any conductor of the cable, allowing for detection of such a cable. It is attached normally on the gray and black connectors. 44-pin variant A 44-pin variant PATA connector is used for 2.5 inch drives inside laptops. The pins are closer together and the connector is physically smaller than the 40-pin connector. The extra pins carry power. 80-conductor variant ATA's cables have had 40 conductors for most of its history (44 conductors for the smaller form-factor version used for 2.5" drives—the extra four for power), but an 80-conductor version appeared with the introduction of the UDMA/66 mode. All of the additional conductors in the new cable are grounds, interleaved with the signal conductors to reduce the effects of capacitive coupling between neighboring signal conductors, reducing crosstalk. Capacitive coupling is more of a problem at higher transfer rates, and this change was necessary to enable the 66 megabytes per second (MB/s) transfer rate of UDMA4 to work reliably. The faster UDMA5 and UDMA6 modes also require 80-conductor cables. Though the number of conductors doubled, the number of connector pins and the pinout remain the same as 40-conductor cables, and the external appearance of the connectors is identical. 
Internally, the connectors are different; the connectors for the 80-conductor cable connect a larger number of ground conductors to the ground pins, while the connectors for the 40-conductor cable connect ground conductors to ground pins one-to-one. 80-conductor cables usually come with three differently colored connectors (blue, black, and gray for controller, master drive, and slave drive respectively) as opposed to uniformly colored 40-conductor cable's connectors (commonly all gray). The gray connector on 80-conductor cables has pin 28 CSEL not connected, making it the slave position for drives configured cable select. Differences between connectors The image on the right shows PATA connectors after removal of strain relief, cover, and cable. Pin one is at bottom left of the connectors, pin 2 is top left, etc., except that the lower image of the blue connector shows the view from the opposite side, and pin one is at top right. The connector is an insulation-displacement connector: each contact comprises a pair of points which together pierce the insulation of the ribbon cable with such precision that they make a connection to the desired conductor without harming the insulation on the neighboring conductors. The center row of contacts are all connected to the common ground bus and attach to the odd numbered conductors of the cable. The top row of contacts are the even-numbered sockets of the connector (mating with the even-numbered pins of the receptacle) and attach to every other even-numbered conductor of the cable. The bottom row of contacts are the odd-numbered sockets of the connector (mating with the odd-numbered pins of the receptacle) and attach to the remaining even-numbered conductors of the cable. Note the connections to the common ground bus from sockets 2 (top left), 19 (center bottom row), 22, 24, 26, 30, and 40 on all connectors. Also note (enlarged detail, bottom, looking from the opposite side of the connector) that socket 34 of the blue connector does not contact any conductor but unlike socket 34 of the other two connectors, it does connect to the common ground bus. On the gray connector, note that socket 28 is completely missing, so that pin 28 of the drive attached to the gray connector will be open. On the black connector, sockets 28 and 34 are completely normal, so that pins 28 and 34 of the drive attached to the black connector will be connected to the cable. Pin 28 of the black drive reaches pin 28 of the host receptacle but not pin 28 of the gray drive, while pin 34 of the black drive reaches pin 34 of the gray drive but not pin 34 of the host. Instead, pin 34 of the host is grounded. The standard dictates color-coded connectors for easy identification by both installer and cable maker. All three connectors are different from one another. The blue (host) connector has the socket for pin 34 connected to ground inside the connector but not attached to any conductor of the cable. Since the old 40 conductor cables do not ground pin 34, the presence of a ground connection indicates that an 80 conductor cable is installed. The conductor for pin 34 is attached normally on the other types and is not grounded. Installing the cable backwards (with the black connector on the system board, the blue connector on the remote device and the gray connector on the center device) will ground pin 34 of the remote device and connect host pin 34 through to pin 34 of the center device. 
The gray center connector omits the connection to pin 28 but connects pin 34 normally, while the black end connector connects both pins 28 and 34 normally. Multiple devices on a cable If two devices are attached to a single cable, one must be designated as Device 0 (in the past, commonly designated master) and the other as Device 1 (in the past, commonly designated as slave). This distinction is necessary to allow both drives to share the cable without conflict. The Device 0 drive is the drive that usually appears "first" to the computer's BIOS and/or operating system. In most personal computers the drives are often designated as "C:" for Device 0 and "D:" for Device 1, referring to one active primary partition on each. The terms device and drive are used interchangeably in the industry, as in master drive or master device. The mode that a device must use is often set by a jumper setting on the device itself, which must be manually set to Device 0 (Master) or Device 1 (Slave). If there is a single device on a cable, it should be configured as Device 0. However, drives of a certain era (Western Digital drives in particular) have a special setting called Single for this configuration. Also, depending on the hardware and software available, a single drive on a cable will often work reliably even though configured as the Device 1 drive (most often seen where an optical drive is the only device on the secondary ATA interface). The words primary and secondary typically refer to the two IDE cables, which can have two drives each (primary master, primary slave, secondary master, secondary slave). Cable select A drive mode called cable select was described as optional in ATA-1 and has come into fairly widespread use with ATA-5 and later. A drive set to "cable select" automatically configures itself as Device 0 or Device 1, according to its position on the cable. Cable select is controlled by pin 28. The host adapter grounds this pin; if a device sees that the pin is grounded, it becomes the Device 0 device; if it sees that pin 28 is open, it becomes the Device 1 device. This setting is usually chosen by a jumper setting on the drive called "cable select", usually marked CS, which is separate from the Device 0/1 setting. Note that if two drives are configured as Device 0 and Device 1 manually, this configuration does not need to correspond to their position on the cable. Pin 28 is only used to let the drives know their position on the cable; it is not used by the host when communicating with the drives. With the 40-conductor cable, it was very common to implement cable select by simply cutting the pin 28 wire between the two device connectors, putting the Device 1 device at the end of the cable and the Device 0 device on the middle connector. This arrangement eventually was standardized in later versions. If there is just one device on a 2-drive cable, using the middle connector, this results in an unused stub of cable, which is undesirable for physical convenience and electrical reasons. The stub causes signal reflections, particularly at higher transfer rates. Starting with the 80-conductor cable defined for use in ATAPI5/UDMA4, the Device 0 device goes at the far-from-the-host end of the cable on the black connector, the slave Device 1 goes on the gray middle connector, and the blue connector goes to the host (e.g. motherboard IDE connector, or IDE card). So, if there is only one (Device 0) device on a two-drive cable, using the black connector, there is no cable stub to cause reflections.
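The decision a cable-selected drive makes from pin 28 can be summarized in a couple of lines. This is a purely illustrative Python sketch of the logic described above; real drives implement it in firmware, not in host software.

```python
def cable_select_role(pin28_grounded: bool) -> str:
    """Role assumed by a drive jumpered for cable select, from the state of pin 28."""
    return "Device 0 (master)" if pin28_grounded else "Device 1 (slave)"

# On an 80-conductor cable the host grounds pin 28, and the gray middle
# connector omits the pin 28 contact, so:
print(cable_select_role(True))   # black end connector   -> Device 0 (master)
print(cable_select_role(False))  # gray middle connector -> Device 1 (slave)
```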
Also, cable select is now implemented in the Device 1 connector, usually simply by omitting the contact from the connector body. Serialized, overlapped, and queued operations The parallel ATA protocols up through ATA-3 require that once a command has been given on an ATA interface, it must complete before any subsequent command may be given. Operations on the devices must be serialized, with only one operation in progress at a time, with respect to the ATA host interface. A useful mental model is that the host ATA interface is busy with the first request for its entire duration, and therefore cannot be told about another request until the first one is complete. The function of serializing requests to the interface is usually performed by a device driver in the host operating system. The ATA-4 and subsequent versions of the specification have included an "overlapped feature set" and a "queued feature set" as optional features, both being given the name "Tagged Command Queuing" (TCQ), a reference to a set of features from SCSI which the ATA version attempts to emulate. However, support for these is extremely rare in actual parallel ATA products and device drivers, because these feature sets were implemented in such a way as to maintain software compatibility with the interface's heritage as originally an extension of the ISA bus. This implementation resulted in excessive CPU utilization, which largely negated the advantages of command queuing. By contrast, overlapped and queued operations have been common in other storage buses; in particular, SCSI's version of tagged command queuing had no need to be compatible with APIs designed for ISA, allowing it to attain high performance with low overhead on buses which supported first-party DMA, like PCI. This has long been seen as a major advantage of SCSI. The Serial ATA standard has supported native command queueing (NCQ) since its first release, but it is an optional feature for both host adapters and target devices. Many obsolete PC motherboards do not support NCQ, but modern SATA hard disk drives and SATA solid-state drives usually support NCQ; this is not the case for removable (CD/DVD) drives, because the ATAPI command set used to control them prohibits queued operations. Two devices on one cable—speed impact There are many debates about how much a slow device can impact the performance of a faster device on the same cable. There is an effect, but the debate is confused by the blurring of two quite different causes, called here "Lowest speed" and "One operation at a time". "Lowest speed" On early ATA host adapters, both devices' data transfers can be constrained to the speed of the slower device, if two devices of different speed capabilities are on the same cable. For all modern ATA host adapters, this is not true, as modern ATA host adapters support independent device timing. This allows each device on the cable to transfer data at its own best speed. Even with earlier adapters without independent timing, this effect applies only to the data transfer phase of a read or write operation. "One operation at a time" This is caused by the omission of both the overlapped and queued feature sets from most parallel ATA products. Only one device on a cable can perform a read or write operation at one time; therefore, a fast device on the same cable as a slow device under heavy use will find it has to wait for the slow device to complete its task first.
However, most modern devices will report write operations as complete once the data is stored in their onboard cache memory, before the data is written to the (slow) magnetic storage. This allows commands to be sent to the other device on the cable, reducing the impact of the "one operation at a time" limit. The impact of this on a system's performance depends on the application. For example, when copying data from an optical drive to a hard drive (such as during software installation), this effect probably will not matter. Such jobs are necessarily limited by the speed of the optical drive no matter where it is. But if the hard drive in question is also expected to provide good throughput for other tasks at the same time, it probably should not be on the same cable as the optical drive. HDD passwords and security ATA devices may support an optional security feature which is defined in an ATA specification, and thus not specific to any brand or device. The security feature can be enabled and disabled by sending special ATA commands to the drive. If a device is locked, it will refuse all access until it is unlocked. A device can have two passwords: A User Password and a Master Password; either or both may be set. There is a Master Password identifier feature which, if supported and used, can identify the current Master Password (without disclosing it). A device can be locked in two modes: High security mode or Maximum security mode. Bit 8 in word 128 of the IDENTIFY response shows which mode the disk is in: 0 = High, 1 = Maximum. In High security mode, the device can be unlocked with either the User or Master password, using the "SECURITY UNLOCK DEVICE" ATA command. There is an attempt limit, normally set to 5, after which the disk must be power cycled or hard-reset before unlocking can be attempted again. Also in High security mode, the SECURITY ERASE UNIT command can be used with either the User or Master password. In Maximum security mode, the device can be unlocked only with the User password. If the User password is not available, the only remaining way to get at least the bare hardware back to a usable state is to issue the SECURITY ERASE PREPARE command, immediately followed by SECURITY ERASE UNIT. In Maximum security mode, the SECURITY ERASE UNIT command requires the Master password and will completely erase all data on the disk. Word 89 in the IDENTIFY response indicates how long the operation will take. While the ATA lock is intended to be impossible to defeat without a valid password, there are purported workarounds to unlock a device. External parallel ATA devices Due to a short cable length specification and shielding issues it is extremely uncommon to find external PATA devices that directly use PATA for connection to a computer. A device connected externally needs additional cable length to form a U-shaped bend so that the external device may be placed alongside, or on top of the computer case, and the standard cable length is too short to permit this. For ease of reach from motherboard to device, the connectors tend to be positioned towards the front edge of motherboards, for connection to devices protruding from the front of the computer case. This front-edge position makes extension out the back to an external device even more difficult. Ribbon cables are poorly shielded, and the standard relies upon the cabling to be installed inside a shielded computer case to meet RF emissions limits. 
External hard disk drives or optical disk drives that have an internal PATA interface use some other interface technology to bridge the distance between the external device and the computer. USB is the most common external interface, followed by FireWire. A bridge chip inside the external device converts from the USB interface to PATA, and typically only supports a single external device without cable select or master/slave. Compact Flash interface Compact Flash in its IDE mode is essentially a miniaturized ATA interface, intended for use on devices that use flash memory storage. No interfacing chips or circuitry are required, other than to directly adapt the smaller CF socket onto the larger ATA connector (although most CF cards support IDE mode only up to PIO4, making them much slower in IDE mode than at their full CF-capable speed). The ATA connector specification does not include pins for supplying power to a CF device, so power is inserted into the connector from a separate source. The exception to this is when the CF device is connected to a 44-pin ATA bus designed for 2.5-inch hard disk drives, commonly found in notebook computers, as this bus implementation must provide power to a standard hard disk drive. CF devices can be designated as devices 0 or 1 on an ATA interface, though since most CF devices offer only a single socket, it is not necessary to offer this selection to end users. Although CF can be hot-pluggable with additional design methods, by default, when wired directly to an ATA interface, it is not intended to be hot-pluggable. ATA standards versions, transfer rates, and features The following table shows the names of the versions of the ATA standards and the transfer modes and rates supported by each. Note that the transfer rate for each mode (for example, 66.7 MB/s for UDMA4, commonly called "Ultra-DMA 66", defined by ATA-5) gives its maximum theoretical transfer rate on the cable. This is simply two bytes multiplied by the effective clock rate, and presumes that every clock cycle is used to transfer end-user data. In practice, of course, protocol overhead reduces this value. Congestion on the host bus to which the ATA adapter is attached may also limit the maximum burst transfer rate. For example, the maximum data transfer rate for the conventional PCI bus is 133 MB/s, and this is shared among all active devices on the bus. In addition, no ATA hard drives existed in 2005 that were capable of measured sustained transfer rates of above 80 MB/s. Furthermore, sustained transfer rate tests do not give realistic throughput expectations for most workloads: they use I/O loads specifically designed to encounter almost no delays from seek time or rotational latency. Hard drive performance under most workloads is limited first and second by those two factors; the transfer rate on the bus is a distant third in importance. Therefore, transfer speed limits above 66 MB/s really affect performance only when the hard drive can satisfy all I/O requests by reading from its internal cache—a very unusual situation, especially considering that such data is usually already buffered by the operating system. Modern mechanical hard disk drives can transfer data at up to 524 MB/s, which is far beyond the capabilities of the PATA/133 specification. High-performance solid state drives can transfer data at up to 7000–7500 MB/s. Only the Ultra DMA modes use CRC to detect errors in data transfer between the controller and drive. This is a 16-bit CRC, and it is used for data blocks only.
Transmission of command and status blocks do not use the fast signaling methods that would necessitate CRC. For comparison, in Serial ATA, 32-bit CRC is used for both commands and data. Features introduced with each ATA revision Speed of defined transfer modes Related standards, features, and proposals ATAPI Removable Media Device (ARMD) ATAPI devices with removable media, other than CD and DVD drives, are classified as ARMD (ATAPI Removable Media Device) and can appear as either a super-floppy (non-partitioned media) or a hard drive (partitioned media) to the operating system. These can be supported as bootable devices by a BIOS complying with the ATAPI Removable Media Device BIOS Specification, originally developed by Compaq Computer Corporation and Phoenix Technologies. It specifies provisions in the BIOS of a personal computer to allow the computer to be bootstrapped from devices such as Zip drives, Jaz drives, SuperDisk (LS-120) drives, and similar devices. These devices have removable media like floppy disk drives, but capacities more commensurate with hard drives, and programming requirements unlike either. Due to limitations in the floppy controller interface most of these devices were ATAPI devices, connected to one of the host computer's ATA interfaces, similarly to a hard drive or CD-ROM device. However, existing BIOS standards did not support these devices. An ARMD-compliant BIOS allows these devices to be booted from and used under the operating system without requiring device-specific code in the OS. A BIOS implementing ARMD allows the user to include ARMD devices in the boot search order. Usually an ARMD device is configured earlier in the boot order than the hard drive. Similarly to a floppy drive, if bootable media is present in the ARMD drive, the BIOS will boot from it; if not, the BIOS will continue in the search order, usually with the hard drive last. There are two variants of ARMD, ARMD-FDD and ARMD-HDD. Originally ARMD caused the devices to appear as a sort of very large floppy drive, either the primary floppy drive device 00h or the secondary device 01h. Some operating systems required code changes to support floppy disks with capacities far larger than any standard floppy disk drive. Also, standard-floppy disk drive emulation proved to be unsuitable for certain high-capacity floppy disk drives such as Iomega Zip drives. Later the ARMD-HDD, ARMD-"Hard disk device", variant was developed to address these issues. Under ARMD-HDD, an ARMD device appears to the BIOS and the operating system as a hard drive. ATA over Ethernet In August 2004, Sam Hopkins and Brantley Coile of Coraid specified a lightweight ATA over Ethernet protocol to carry ATA commands over Ethernet instead of directly connecting them to a PATA host adapter. This permitted the established block protocol to be reused in storage area network (SAN) applications. See also Advanced Host Controller Interface (AHCI) ATA over Ethernet (AoE) BIOS for BIOS Boot Specification (BBS) CE-ATA Consumer Electronics (CE) ATA FATA (hard drive) INT 13H for BIOS Enhanced Disk Drive Specification (SFF-8039i) IT8212, a low-end Parallel ATA controller Master/slave (technology) SCSI (Small Computer System Interface) Serial ATA List of device bandwidths References External links CE-ATA Workgroup AT Attachment Computer storage buses Computer connectors Computer hardware standards
Parallel ATA
Astrobiology, also known as exobiology, is an interdisciplinary scientific field that studies the origins, early evolution, distribution, and future of life in the universe: the deterministic conditions and the contingent events with which life arises, spreads, and evolves. It considers the question of whether extraterrestrial life exists, and if it does, how humans can detect it. Astrobiology makes use of molecular biology, biophysics, biochemistry, chemistry, astronomy, physical cosmology, exoplanetology, geology, paleontology, and ichnology to investigate the possibility of life on other worlds and to help recognize biospheres that might be different from that on Earth. The origin and early evolution of life is an inseparable part of the discipline of astrobiology. Astrobiology concerns itself with the interpretation of existing scientific data; although speculation is entertained to give context, the field deals primarily with hypotheses that fit firmly into existing scientific theories. This interdisciplinary field encompasses research on the origin of planetary systems, origins of organic compounds in space, rock-water-carbon interactions, abiogenesis on Earth, planetary habitability, research on biosignatures for life detection, and studies on the potential for life to adapt to challenges on Earth and in outer space. Biochemistry may have begun shortly after the Big Bang, 13.8 billion years ago, during a habitable epoch when the Universe was only 10–17 million years old. According to the panspermia hypothesis, microscopic life—distributed by meteoroids, asteroids and other small Solar System bodies—may exist throughout the universe. According to research published in August 2015, very large galaxies may be more favorable to the creation and development of habitable planets than smaller galaxies such as the Milky Way. Nonetheless, Earth is the only place in the universe humans know to harbor life. Estimates of habitable zones around other stars, sometimes referred to as "Goldilocks zones", along with the discovery of thousands of extrasolar planets and new insights into extreme habitats here on Earth, suggest that there may be many more habitable places in the universe than considered possible until very recently. Current studies on the planet Mars by the Curiosity and Perseverance rovers are searching for evidence of ancient life as well as plains related to ancient rivers or lakes that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic molecules on the planet Mars is now a primary NASA and ESA objective. Even if extraterrestrial life is never discovered, the interdisciplinary nature of astrobiology, and the cosmic and evolutionary perspectives engendered by it, may still result in a range of benefits here on Earth. Overview The term was first proposed by the Russian (Soviet) astronomer Gavriil Tikhov in 1953. Astrobiology is etymologically derived from the Greek astron, "constellation, star"; bios, "life"; and -logia, "study". The synonyms of astrobiology are diverse; however, they were structured in relation to the most important sciences implied in its development: astronomy and biology. A close synonym is exobiology, from the Greek exo, "external"; Βίος, bios, "life"; and λογία, -logia, "study". The term exobiology was coined by molecular biologist and Nobel Prize winner Joshua Lederberg.
Exobiology is considered to have a narrow scope limited to search of life external to Earth, whereas subject area of astrobiology is wider and investigates the link between life and the universe, which includes the search for extraterrestrial life, but also includes the study of life on Earth, its origin, evolution and limits. Another term used in the past is xenobiology, ("biology of the foreigners") a word used in 1954 by science fiction writer Robert Heinlein in his work The Star Beast. The term xenobiology is now used in a more specialized sense, to mean "biology based on foreign chemistry", whether of extraterrestrial or terrestrial (possibly synthetic) origin. Since alternate chemistry analogs to some life-processes have been created in the laboratory, xenobiology is now considered as an extant subject. While it is an emerging and developing field, the question of whether life exists elsewhere in the universe is a verifiable hypothesis and thus a valid line of scientific inquiry. Though once considered outside the mainstream of scientific inquiry, astrobiology has become a formalized field of study. Planetary scientist David Grinspoon calls astrobiology a field of natural philosophy, grounding speculation on the unknown, in known scientific theory. NASA's interest in exobiology first began with the development of the U.S. Space Program. In 1959, NASA funded its first exobiology project, and in 1960, NASA founded an Exobiology Program, which is now one of four main elements of NASA's current Astrobiology Program. In 1971, NASA funded the search for extraterrestrial intelligence (SETI) to search radio frequencies of the electromagnetic spectrum for interstellar communications transmitted by extraterrestrial life outside the Solar System. NASA's Viking missions to Mars, launched in 1976, included three biology experiments designed to look for metabolism of present life on Mars. Advancements in the fields of astrobiology, observational astronomy and discovery of large varieties of extremophiles with extraordinary capability to thrive in the harshest environments on Earth, have led to speculation that life may possibly be thriving on many of the extraterrestrial bodies in the universe. A particular focus of current astrobiology research is the search for life on Mars due to this planet's proximity to Earth and geological history. There is a growing body of evidence to suggest that Mars has previously had a considerable amount of water on its surface, water being considered an essential precursor to the development of carbon-based life. Missions specifically designed to search for current life on Mars were the Viking program and Beagle 2 probes. The Viking results were inconclusive, and Beagle 2 failed minutes after landing. A future mission with a strong astrobiology role would have been the Jupiter Icy Moons Orbiter, designed to study the frozen moons of Jupiter—some of which may have liquid water—had it not been cancelled. In late 2008, the Phoenix lander probed the environment for past and present planetary habitability of microbial life on Mars, and researched the history of water there. The European Space Agency's astrobiology roadmap from 2016, identified five main research topics, and specifies several key scientific objectives for each topic. 
The five research topics are: 1) Origin and evolution of planetary systems; 2) Origins of organic compounds in space; 3) Rock-water-carbon interactions, organic synthesis on Earth, and steps to life; 4) Life and habitability; 5) Biosignatures as facilitating life detection. In November 2011, NASA launched the Mars Science Laboratory mission carrying the Curiosity rover, which landed on Mars at Gale Crater in August 2012. The Curiosity rover is currently probing the environment for past and present planetary habitability of microbial life on Mars. On 9 December 2013, NASA reported that, based on evidence from Curiosity studying Aeolis Palus, Gale Crater contained an ancient freshwater lake which could have been a hospitable environment for microbial life. The European Space Agency is currently collaborating with the Russian Federal Space Agency (Roscosmos) and developing the ExoMars astrobiology rover, which was scheduled to be launched in July 2020, but was postponed to 2022. Meanwhile, NASA launched the Mars 2020 astrobiology rover and sample cacher for a later return to Earth. Methodology Planetary habitability When looking for life on other planets like Earth, some simplifying assumptions are useful to reduce the size of the task of the astrobiologist. One is the informed assumption that the vast majority of life forms in our galaxy are based on carbon chemistries, as are all life forms on Earth. Carbon is well known for the unusually wide variety of molecules that can be formed around it. Carbon is the fourth most abundant element in the universe and the energy required to make or break a bond is at just the appropriate level for building molecules which are not only stable, but also reactive. The fact that carbon atoms bond readily to other carbon atoms allows for the building of extremely long and complex molecules. The presence of liquid water is an assumed requirement, as it is a common molecule and provides an excellent environment for the formation of complicated carbon-based molecules that could eventually lead to the emergence of life. Some researchers posit environments of water-ammonia mixtures as possible solvents for hypothetical types of biochemistry. A third assumption is to focus on planets orbiting Sun-like stars for increased probabilities of planetary habitability. Very large stars have relatively short lifetimes, meaning that life might not have time to emerge on planets orbiting them. Very small stars provide so little heat and warmth that only planets in very close orbits around them would not be frozen solid, and in such close orbits these planets would be tidally "locked" to the star. The long lifetimes of red dwarfs could allow the development of habitable environments on planets with thick atmospheres. This is significant, as red dwarfs are extremely common. (See Habitability of red dwarf systems). Since Earth is the only planet known to harbor life, there is no evident way to know if any of these simplifying assumptions are correct. Communication attempts Research on communication with extraterrestrial intelligence (CETI) focuses on composing and deciphering messages that could theoretically be understood by another technological civilization. Communication attempts by humans have included broadcasting mathematical languages, pictorial systems such as the Arecibo message and computational approaches to detecting and deciphering 'natural' language communication. 
The SETI program, for example, uses both radio telescopes and optical telescopes to search for deliberate signals from an extraterrestrial intelligence. While some high-profile scientists, such as Carl Sagan, have advocated the transmission of messages, scientist Stephen Hawking warned against it, suggesting that aliens might simply raid Earth for its resources and then move on. Elements of astrobiology Astronomy Most astronomy-related astrobiology research falls into the category of extrasolar planet (exoplanet) detection, the hypothesis being that if life arose on Earth, then it could also arise on other planets with similar characteristics. To that end, a number of instruments designed to detect Earth-sized exoplanets have been considered, most notably NASA's Terrestrial Planet Finder (TPF) and ESA's Darwin programs, both of which have been cancelled. NASA launched the Kepler mission in March 2009, and the French Space Agency launched the COROT space mission in 2006. There are also several less ambitious ground-based efforts underway. The goal of these missions is not only to detect Earth-sized planets but also to directly detect light from the planet so that it may be studied spectroscopically. By examining planetary spectra, it would be possible to determine the basic composition of an extrasolar planet's atmosphere and/or surface. Given this knowledge, it may be possible to assess the likelihood of life being found on that planet. A NASA research group, the Virtual Planet Laboratory, is using computer modeling to generate a wide variety of virtual planets to see what they would look like if viewed by TPF or Darwin. It is hoped that once these missions come online, their spectra can be cross-checked with these virtual planetary spectra for features that might indicate the presence of life. An estimate for the number of planets with intelligent communicative extraterrestrial life can be gleaned from the Drake equation, essentially an equation expressing the probability of intelligent life as the product of factors such as the fraction of planets that might be habitable and the fraction of planets on which life might arise: N = R* × fp × ne × fl × fi × fc × L, where: N = the number of communicative civilizations; R* = the rate of formation of suitable stars (stars such as our Sun); fp = the fraction of those stars with planets (current evidence indicates that planetary systems may be common for stars like the Sun); ne = the number of Earth-sized worlds per planetary system; fl = the fraction of those Earth-sized planets where life actually develops; fi = the fraction of life sites where intelligence develops; fc = the fraction of communicative planets (those on which electromagnetic communications technology develops); L = the "lifetime" of communicating civilizations. However, whilst the rationale behind the equation is sound, it is unlikely that the equation will be constrained to reasonable limits of error any time soon. The problem with the formula is that it is not used to generate or support hypotheses, because it contains factors that can never be verified. The first term, R*, the number of suitable stars, is generally constrained within a few orders of magnitude. The second and third terms, fp, stars with planets, and ne, planets with habitable conditions, are being evaluated for the star's neighborhood. Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference, but some applications of the formula have been taken literally and related to simplistic or pseudoscientific arguments.
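Because the Drake equation is a plain product of its factors, it is easy to evaluate numerically. The following Python sketch does so with sample values that are arbitrary placeholders chosen only to show how the product works, not estimates endorsed by any survey.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* x fp x ne x fl x fi x fc x L (communicative civilizations)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=1.0,   # suitable stars formed per year
          f_p=0.5,      # fraction of those stars with planets
          n_e=2.0,      # Earth-sized worlds per planetary system
          f_l=0.1,      # fraction of those worlds where life develops
          f_i=0.01,     # fraction of life sites where intelligence develops
          f_c=0.1,      # fraction that develop detectable communication
          L=1000.0)     # lifetime of a communicating civilization, in years
print(N)                # roughly 0.1 with these illustrative inputs
```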
Another associated topic is the Fermi paradox, which suggests that if intelligent life is common in the universe, then there should be obvious signs of it. Another active research area in astrobiology is planetary system formation. It has been suggested that the peculiarities of the Solar System (for example, the presence of Jupiter as a protective shield) may have greatly increased the probability of intelligent life arising on our planet. Biology Biology cannot state that a process or phenomenon, by being mathematically possible, has to exist forcibly in an extraterrestrial body. Biologists specify what is speculative and what is not. The discovery of extremophiles, organisms able to survive in extreme environments, became a core research element for astrobiologists, as they are important to understand four areas in the limits of life in planetary context: the potential for panspermia, forward contamination due to human exploration ventures, planetary colonization by humans, and the exploration of extinct and extant extraterrestrial life. Until the 1970s, life was thought to be entirely dependent on energy from the Sun. Plants on Earth's surface capture energy from sunlight to photosynthesize sugars from carbon dioxide and water, releasing oxygen in the process that is then consumed by oxygen-respiring organisms, passing their energy up the food chain. Even life in the ocean depths, where sunlight cannot reach, was thought to obtain its nourishment either from consuming organic detritus rained down from the surface waters or from eating animals that did. The world's ability to support life was thought to depend on its access to sunlight. However, in 1977, during an exploratory dive to the Galapagos Rift in the deep-sea exploration submersible Alvin, scientists discovered colonies of giant tube worms, clams, crustaceans, mussels, and other assorted creatures clustered around undersea volcanic features known as black smokers. These creatures thrive despite having no access to sunlight, and it was soon discovered that they comprise an entirely independent ecosystem. Although most of these multicellular lifeforms need dissolved oxygen (produced by oxygenic photosynthesis) for their aerobic cellular respiration and thus are not completely independent from sunlight by themselves, the basis for their food chain is a form of bacterium that derives its energy from oxidization of reactive chemicals, such as hydrogen or hydrogen sulfide, that bubble up from the Earth's interior. Other lifeforms entirely decoupled from the energy from sunlight are green sulfur bacteria which are capturing geothermal light for anoxygenic photosynthesis or bacteria running chemolithoautotrophy based on the radioactive decay of uranium. This chemosynthesis revolutionized the study of biology and astrobiology by revealing that life need not be sun-dependent; it only requires water and an energy gradient in order to exist. Biologists have found extremophiles that thrive in ice, boiling water, acid, alkali, the water core of nuclear reactors, salt crystals, toxic waste and in a range of other extreme habitats that were previously thought to be inhospitable for life. This opened up a new avenue in astrobiology by massively expanding the number of possible extraterrestrial habitats. Characterization of these organisms, their environments and their evolutionary pathways, is considered a crucial component to understanding how life might evolve elsewhere in the universe. 
For example, some organisms able to withstand exposure to the vacuum and radiation of outer space include the lichen fungi Rhizocarpon geographicum and Xanthoria elegans, the bacterium Bacillus safensis, Deinococcus radiodurans, Bacillus subtilis, yeast Saccharomyces cerevisiae, seeds from Arabidopsis thaliana ('mouse-ear cress'), as well as the invertebrate animal Tardigrade. While tardigrades are not considered true extremophiles, they are considered extremotolerant microorganisms that have contributed to the field of astrobiology. Their extreme radiation tolerance and presence of DNA protection proteins may provide answers as to whether life can survive away from the protection of the Earth's atmosphere. Jupiter's moon, Europa, and Saturn's moon, Enceladus, are now considered the most likely locations for extant extraterrestrial life in the Solar System due to their subsurface water oceans where radiogenic and tidal heating enables liquid water to exist. The origin of life, known as abiogenesis, distinct from the evolution of life, is another ongoing field of research. Oparin and Haldane postulated that the conditions on the early Earth were conducive to the formation of organic compounds from inorganic elements and thus to the formation of many of the chemicals common to all forms of life we see today. The study of this process, known as prebiotic chemistry, has made some progress, but it is still unclear whether or not life could have formed in such a manner on Earth. The alternative hypothesis of panspermia is that the first elements of life may have formed on another planet with even more favorable conditions (or even in interstellar space, asteroids, etc.) and then have been carried over to Earth. The cosmic dust permeating the universe contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. Further, a scientist suggested that these compounds may have been related to the development of life on Earth and said that, "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." More than 20% of the carbon in the universe may be associated with polycyclic aromatic hydrocarbons (PAHs), possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. PAHs are subjected to interstellar medium conditions and are transformed through hydrogenation, oxygenation and hydroxylation, to more complex organics—"a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". In October 2020, astronomers proposed the idea of detecting life on distant planets by studying the shadows of trees at certain times of the day to find patterns that could be detected through observation of exoplanets. Astroecology Astroecology concerns the interactions of life with space environments and resources, in planets, asteroids and comets. On a larger scale, astroecology concerns resources for life about stars in the galaxy through the cosmological future. Astroecology attempts to quantify future life in space, addressing this area of astrobiology. Experimental astroecology investigates resources in planetary soils, using actual space materials in meteorites. 
The results suggest that Martian and carbonaceous chondrite materials can support bacteria, algae and plant (asparagus, potato) cultures, with high soil fertilities. The results also support the idea that life could have survived in early aqueous asteroids and on similar materials imported to Earth by dust, comets and meteorites, and that such asteroid materials can be used as soil for future space colonies. On the largest scale, cosmoecology concerns life in the universe over cosmological times. The main sources of energy may be red giant stars and white and red dwarf stars, sustaining life for up to 10^20 years. Astroecologists suggest that their mathematical models may quantify the potential amounts of future life in space, allowing a comparable expansion in biodiversity, potentially leading to diverse intelligent life forms. Astrogeology Astrogeology is a planetary science discipline concerned with the geology of celestial bodies such as the planets and their moons, asteroids, comets, and meteorites. The information gathered by this discipline allows the measurement of a planet's or a natural satellite's potential to develop and sustain life, or planetary habitability. An additional discipline of astrogeology is geochemistry, which involves the study of the chemical composition of the Earth and other planets, the chemical processes and reactions that govern the composition of rocks and soils, the cycles of matter and energy, and their interaction with the hydrosphere and the atmosphere of the planet. Specializations include cosmochemistry, biochemistry and organic geochemistry. The fossil record provides the oldest known evidence for life on Earth. By examining the fossil evidence, paleontologists are able to better understand the types of organisms that arose on the early Earth. Some regions on Earth, such as the Pilbara in Western Australia and the McMurdo Dry Valleys of Antarctica, are also considered to be geological analogs to regions of Mars, and as such, might be able to provide clues on how to search for past life on Mars. The various organic functional groups, composed of hydrogen, oxygen, nitrogen, phosphorus, sulfur, and a host of metals, such as iron, magnesium, and zinc, provide the enormous diversity of chemical reactions necessarily catalyzed by a living organism. Silicon, in contrast, interacts with only a few other atoms, and the large silicon molecules are monotonous compared with the combinatorial universe of organic macromolecules. Indeed, it seems likely that the basic building blocks of life anywhere will be similar to those on Earth, in the generality if not in the detail. Although terrestrial life and life that might arise independently of Earth are expected to use many similar, if not identical, building blocks, they also are expected to have some biochemical qualities that are unique. If life has had a comparable impact elsewhere in the Solar System, the relative abundances of chemicals key for its survival—whatever they may be—could betray its presence. Whatever extraterrestrial life may be, its tendency to chemically alter its environment might just give it away. Life in the Solar System People have long speculated about the possibility of life in settings other than Earth; however, speculation on the nature of life elsewhere often has paid little heed to constraints imposed by the nature of biochemistry. The likelihood that life throughout the universe is carbon-based is suggested by the fact that carbon is one of the most abundant of the higher elements.
Only two of the natural atoms, carbon and silicon, are known to serve as the backbones of molecules sufficiently large to carry biological information. As the structural basis for life, one of carbon's important features is that, unlike silicon, it can readily engage in the formation of chemical bonds with many other atoms, thereby allowing for the chemical versatility required to conduct the reactions of biological metabolism and propagation. Discussion on where in the Solar System life might occur was limited historically by the understanding that life relies ultimately on light and warmth from the Sun and, therefore, is restricted to the surfaces of planets. The four most likely candidates for life in the Solar System are the planet Mars, the Jovian moon Europa, and Saturn's moons Titan and Enceladus. Mars, Enceladus and Europa are considered likely candidates in the search for life primarily because they may have underground liquid water, a molecule essential for life as we know it for its use as a solvent in cells. Water on Mars is found frozen in its polar ice caps, and newly carved gullies recently observed on Mars suggest that liquid water may exist, at least transiently, on the planet's surface. At the Martian low temperatures and low pressure, liquid water is likely to be highly saline. As for Europa and Enceladus, large global oceans of liquid water exist beneath these moons' icy outer crusts. This water may be warmed to a liquid state by volcanic vents on the ocean floor, but the primary source of heat is probably tidal heating. On 11 December 2013, NASA reported the detection of "clay-like minerals" (specifically, phyllosilicates), often associated with organic materials, on the icy crust of Europa. The presence of the minerals may have been the result of a collision with an asteroid or comet according to the scientists. Additionally, on 27 June 2018, astronomers reported the detection of complex macromolecular organics on Enceladus and, according to NASA scientists in May 2011, "is emerging as the most habitable spot beyond Earth in the Solar System for life as we know it". Another planetary body that could potentially sustain extraterrestrial life is Saturn's largest moon, Titan. Titan has been described as having conditions similar to those of early Earth. On its surface, scientists have discovered the first liquid lakes outside Earth, but these lakes seem to be composed of ethane and/or methane, not water. Some scientists think it possible that these liquid hydrocarbons might take the place of water in living cells different from those on Earth. After Cassini data were studied, it was reported in March 2008 that Titan may also have an underground ocean composed of liquid water and ammonia. Phosphine has been detected in the atmosphere of the planet Venus. There are no known abiotic processes on the planet that could cause its presence. Given that Venus has the hottest surface temperature of any planet in the solar system, Venusian life, if it exists, is most likely limited to extremophile microorganisms that float in the planet's upper atmosphere, where conditions are almost Earth-like. Measuring the ratio of hydrogen and methane levels on Mars may help determine the likelihood of life on Mars. According to the scientists, "...low H2/CH4 ratios (less than approximately 40) indicate that life is likely present and active." Other scientists have recently reported methods of detecting hydrogen and methane in extraterrestrial atmospheres. 
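The quoted H2/CH4 criterion lends itself to a one-line check. The sketch below applies the approximate threshold of 40 mentioned above; the function name and the abundance inputs are hypothetical, intended only to show how the ratio would be used.

```python
# Sketch of the H2/CH4 biosignature heuristic quoted above: ratios below roughly 40
# are taken to suggest that methanogenic life is likely present and active.

def ratio_suggests_life(h2_abundance: float, ch4_abundance: float, threshold: float = 40.0) -> bool:
    """Return True if the H2/CH4 ratio falls below the quoted ~40 threshold."""
    return (h2_abundance / ch4_abundance) < threshold

# Hypothetical atmospheric abundances in arbitrary (but identical) units; only the ratio matters.
print(ratio_suggests_life(h2_abundance=300.0, ch4_abundance=10.0))   # ratio 30 -> True
print(ratio_suggests_life(h2_abundance=2000.0, ch4_abundance=10.0))  # ratio 200 -> False
```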
Complex organic compounds of life, including uracil, cytosine and thymine, have been formed in a laboratory under outer space conditions, using starting chemicals such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), is among the most carbon-rich chemicals found in the universe. Rare Earth hypothesis The Rare Earth hypothesis postulates that multicellular life forms found on Earth may actually be more of a rarity than scientists assume. According to this hypothesis, life on Earth (and, even more so, multicellular life) is possible only because of a conjunction of the right circumstances (galaxy and location within it, solar system, star, orbit, planetary size, atmosphere, etc.), and the chance for all those circumstances to repeat elsewhere may be rare. It provides a possible answer to the Fermi paradox, which asks, "If extraterrestrial aliens are common, why aren't they obvious?" It is apparently in opposition to the principle of mediocrity, assumed by famed astronomers Frank Drake, Carl Sagan, and others. The principle of mediocrity suggests that life on Earth is not exceptional, and it is more than likely to be found on innumerable other worlds. Research The systematic search for possible life outside Earth is a valid multidisciplinary scientific endeavor. However, hypotheses and predictions as to its existence and origin vary widely, and at present, the development of hypotheses firmly grounded on science may be considered astrobiology's most concrete practical application. It has been proposed that viruses are likely to be encountered on other life-bearing planets, and may be present even if there are no biological cells. Research outcomes To date, no evidence of extraterrestrial life has been identified. Examination of the Allan Hills 84001 meteorite, which was recovered in Antarctica in 1984 and originated from Mars, is thought by David McKay, as well as a few other scientists, to contain microfossils of extraterrestrial origin; this interpretation is controversial. Yamato 000593, the second largest meteorite from Mars, was found on Earth in 2000. At a microscopic level, spheres are found in the meteorite that are rich in carbon compared to surrounding areas that lack such spheres. The carbon-rich spheres may have been formed by biotic activity according to some NASA scientists. On 5 March 2011, Richard B. Hoover, a scientist with the Marshall Space Flight Center, speculated on the finding of alleged microfossils similar to cyanobacteria in CI1 carbonaceous meteorites in the fringe Journal of Cosmology, a story widely reported on by mainstream media. However, NASA formally distanced itself from Hoover's claim. According to American astrophysicist Neil deGrasse Tyson: "At the moment, life on Earth is the only known life in the universe, but there are compelling arguments to suggest we are not alone." Extreme environments on Earth On 17 March 2013, researchers reported that microbial life forms thrive in the Mariana Trench, the deepest spot on Earth. Other researchers reported that microbes thrive inside rocks far below the sea floor, beneath deep ocean off the coast of the northwestern United States. According to one of the researchers, "You can find microbes everywhere—they're extremely adaptable to conditions, and survive wherever they are." Evidence of perchlorates has been found throughout the Solar System, and specifically on Mars. Dr.
Kennda Lynch discovered the first known instance of perchlorates and perchlorates-reducing microbes in a paleolake in Pilot Valley, Utah. These finds expand the potential habitability of certain niches of other planets. Methane In 2004, the spectral signature of methane () was detected in the Martian atmosphere by both Earth-based telescopes as well as by the Mars Express orbiter. Because of solar radiation and cosmic radiation, methane is predicted to disappear from the Martian atmosphere within several years, so the gas must be actively replenished in order to maintain the present concentration. On 7 June 2018, NASA announced a cyclical seasonal variation in atmospheric methane, which may be produced by geological or biological sources. The European ExoMars Trace Gas Orbiter is currently measuring and mapping the atmospheric methane. Planetary systems It is possible that some exoplanets may have moons with solid surfaces or liquid oceans that are hospitable. Most of the planets so far discovered outside the Solar System are hot gas giants thought to be inhospitable to life, so it is not yet known whether the Solar System, with a warm, rocky, metal-rich inner planet such as Earth, is of an aberrant composition. Improved detection methods and increased observation time will undoubtedly discover more planetary systems, and possibly some more like ours. For example, NASA's Kepler Mission seeks to discover Earth-sized planets around other stars by measuring minute changes in the star's light curve as the planet passes between the star and the spacecraft. Progress in infrared astronomy and submillimeter astronomy has revealed the constituents of other star systems. Planetary habitability Efforts to answer questions such as the abundance of potentially habitable planets in habitable zones and chemical precursors have had much success. Numerous extrasolar planets have been detected using the wobble method and transit method, showing that planets around other stars are more numerous than previously postulated. The first Earth-sized extrasolar planet to be discovered within its star's habitable zone is Gliese 581 c. Extremophiles Studying extremophiles is useful for understanding the possible origin of life on Earth as well as for finding the most likely candidates for future colonization of other planets. The aim is to detect those organisms that are able to survive space travel conditions and to maintain the proliferating capacity. The best candidates are extremophiles, since they have adapted to survive in different kind of extreme conditions on earth. During the course of evolution, extremophiles have developed various strategies to survive the different stress conditions of different extreme environments. These stress responses could also allow them to survive in harsh space conditions, although evolution also puts some restrictions on their use as analogues to extraterrestrial life. Thermophilic species G. thermantarcticus is a good example of a microorganism that could survive space travel. It is a bacterium of the spore-forming genus Bacillus. The formation of spores allows for it to survive extreme environments while still being able to restart cellular growth. It is capable of effectively protecting its DNA, membrane and proteins integrity in different extreme conditions (desiccation, temperatures up to -196 °C, UVC and C-ray radiation...). It is also able to repair the damage produced by space environment. 
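Returning briefly to the transit photometry described above for the Kepler mission, the sketch below shows why the light-curve dips are so small: the fractional drop in starlight during a transit is roughly the square of the planet-to-star radius ratio. The radii are standard reference values; the calculation is a simplified illustration that ignores limb darkening and grazing geometries.

```python
# Rough sketch of transit photometry: the fractional dip in a star's light curve
# is approximately (R_planet / R_star)**2 for a central transit.

R_EARTH_KM = 6_371.0    # mean Earth radius
R_SUN_KM = 695_700.0    # nominal solar radius

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Approximate fractional flux drop when the planet crosses the stellar disc."""
    return (planet_radius_km / star_radius_km) ** 2

depth = transit_depth(R_EARTH_KM, R_SUN_KM)
print(f"Earth transiting a Sun-like star: depth ≈ {depth:.2e} (about {depth * 100:.4f} %)")
```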
Some locations on Earth are particularly well-suited for astrobiological studies of extremophiles. For example, Valeria Souza and colleagues proposed that the Cuatro Ciénegas basin in Coahuila, Mexico, could serve as an "astrobiological Precambrian park" due to the similarity of some of its ecosystems to an earlier time in Earth's history when multicellular life began to dominate. By understanding how extremophilic organisms can survive the Earth's extreme environments, we can also understand how microorganisms could have survived space travel and how the panspermia hypothesis could be possible. Missions Research into the environmental limits of life and the workings of extreme ecosystems is ongoing, enabling researchers to better predict what planetary environments might be most likely to harbor life. Missions such as the Phoenix lander, Mars Science Laboratory, ExoMars, Mars 2020 rover to Mars, and the Cassini probe to Saturn's moons aim to further explore the possibilities of life on other planets in the Solar System. Viking program The two Viking landers each carried four types of biological experiments to the surface of Mars in the late 1970s. These were the only Mars landers to carry out experiments looking specifically for metabolism by current microbial life on Mars. The landers used a robotic arm to collect soil samples into sealed test containers on the craft. The two landers were identical, so the same tests were carried out at two places on Mars' surface; Viking 1 near the equator and Viking 2 further north. The result was inconclusive, and is still disputed by some scientists. Norman Horowitz was the chief of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976. Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life. Beagle 2 Beagle 2 was an unsuccessful British Mars lander that formed part of the European Space Agency's 2003 Mars Express mission. Its primary purpose was to search for signs of life on Mars, past or present. Although it landed safely, it was unable to correctly deploy its solar panels and telecom antenna. EXPOSE EXPOSE is a multi-user facility mounted in 2008 outside the International Space Station dedicated to astrobiology. EXPOSE was developed by the European Space Agency (ESA) for long-term spaceflights that allow exposure of organic chemicals and biological samples to outer space in low Earth orbit. Mars Science Laboratory The Mars Science Laboratory (MSL) mission landed the Curiosity rover that is currently in operation on Mars. It was launched 26 November 2011, and landed at Gale Crater on 6 August 2012. Mission objectives are to help assess Mars' habitability and in doing so, determine whether Mars is or has ever been able to support life, collect data for a future human mission, study Martian geology, its climate, and further assess the role that water, an essential ingredient for life as we know it, played in forming minerals on Mars. Tanpopo The Tanpopo mission is an orbital astrobiology experiment investigating the potential interplanetary transfer of life, organic compounds, and possible terrestrial particles in the low Earth orbit. 
The purpose is to assess the panspermia hypothesis and the possibility of natural interplanetary transport of microbial life as well as prebiotic organic compounds. Early mission results show evidence that some clumps of microorganism can survive for at least one year in space. This may support the idea that clumps greater than 0.5 millimeters of microorganisms could be one way for life to spread from planet to planet. ExoMars rover ExoMars is a robotic mission to Mars to search for possible biosignatures of Martian life, past or present. This astrobiological mission is currently under development by the European Space Agency (ESA) in partnership with the Russian Federal Space Agency (Roscosmos); it is planned for a 2022 launch. Mars 2020 Mars 2020 successfully landed its rover Perseverance in Jezero Crater on 18 February 2021. It will investigate environments on Mars relevant to astrobiology, investigate its surface geological processes and history, including the assessment of its past habitability and potential for preservation of biosignatures and biomolecules within accessible geological materials. The Science Definition Team is proposing the rover collect and package at least 31 samples of rock cores and soil for a later mission to bring back for more definitive analysis in laboratories on Earth. The rover could make measurements and technology demonstrations to help designers of a human expedition understand any hazards posed by Martian dust and demonstrate how to collect carbon dioxide (CO2), which could be a resource for making molecular oxygen (O2) and rocket fuel. Europa Clipper Europa Clipper is a mission planned by NASA for a 2025 launch that will conduct detailed reconnaissance of Jupiter's moon Europa and will investigate whether its internal ocean could harbor conditions suitable for life. It will also aid in the selection of future landing sites. Proposed concepts Icebreaker Life Icebreaker Life is a lander mission that was proposed for NASA's Discovery Program for the 2021 launch opportunity, but it was not selected for development. It would have had a stationary lander that would be a near copy of the successful 2008 Phoenix and it would have carried an upgraded astrobiology scientific payload, including a 1-meter-long core drill to sample ice-cemented ground in the northern plains to conduct a search for organic molecules and evidence of current or past life on Mars. One of the key goals of the Icebreaker Life mission is to test the hypothesis that the ice-rich ground in the polar regions has significant concentrations of organics due to protection by the ice from oxidants and radiation. Journey to Enceladus and Titan Journey to Enceladus and Titan (JET) is an astrobiology mission concept to assess the habitability potential of Saturn's moons Enceladus and Titan by means of an orbiter. Enceladus Life Finder Enceladus Life Finder (ELF) is a proposed astrobiology mission concept for a space probe intended to assess the habitability of the internal aquatic ocean of Enceladus, Saturn's sixth-largest moon. Life Investigation For Enceladus Life Investigation For Enceladus (LIFE) is a proposed astrobiology sample-return mission concept. The spacecraft would enter into Saturn orbit and enable multiple flybys through Enceladus' icy plumes to collect icy plume particles and volatiles and return them to Earth on a capsule. The spacecraft may sample Enceladus' plumes, the E ring of Saturn, and the upper atmosphere of Titan. 
Oceanus Oceanus is an orbiter proposed in 2017 for the New Frontiers mission No. 4. It would travel to the moon of Saturn, Titan, to assess its habitability. Oceanus objectives are to reveal Titan's organic chemistry, geology, gravity, topography, collect 3D reconnaissance data, catalog the organics and determine where they may interact with liquid water. Explorer of Enceladus and Titan Explorer of Enceladus and Titan (E2T) is an orbiter mission concept that would investigate the evolution and habitability of the Saturnian satellites Enceladus and Titan. The mission concept was proposed in 2017 by the European Space Agency. See also Astrobiology.com Top ranked news source for Astrobiology The Living Cosmos References Bibliography The International Journal of Astrobiology, published by Cambridge University Press, is the forum for practitioners in this interdisciplinary field. Astrobiology, published by Mary Ann Liebert, Inc., is a peer-reviewed journal that explores the origins of life, evolution, distribution, and destiny in the universe. Loeb, Avi (2021). Extraterrestrial: The First Sign of Intelligent Life Beyond Earth. Houghton Mifflin Harcourt. Further reading D. Goldsmith, T. Owen, The Search For Life in the Universe, Addison-Wesley Publishing Company, 2001 (3rd edition). Andy Weir's best-selling 2021 novel, Project Hail Mary, centers on astrobiology. Dealing with climate change caused by space-dwelling microbes, an astronaut finds that another civilization is suffering from the same problem. External links Astrobiology.nasa.gov UK Centre for Astrobiology Spanish Centro de Astrobiología Astrobiology Research at The Library of Congress Astrobiology Magazine Exploring Solar System and Beyond Astrobiology Survey – An introductory course on astrobiology Summary - Search For Life Beyond Earth (NASA; 25 June 2021) Extraterrestrial life Origin of life Astronomical sub-disciplines Branches of biology Speculative evolution
Astrobiology
Aerodynamics, from Greek ἀήρ aero (air) + δυναμική (dynamics), is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics.The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature. History Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental concepts of continuum, drag, and pressure gradients appear in the work of Aristotle and Archimedes. In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with Hydrodynamica in which he described a fundamental relationship between pressure, density, and flow velocity for incompressible flow known today as Bernoulli's principle, which provides one method for calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations which could be applied to both compressible and incompressible flows. The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier-Stokes equations are the most general governing equations of fluid flow but are difficult to solve for the flow around all but the simplest of shapes. In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight (weight, lift, drag, and thrust), as well as the relationships between them, and in doing so outlined the path toward achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal, the first person to become highly successful with glider flights, was also the first to propose thin, curved airfoils that would produce high lift and low drag. Building on these developments as well as research carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17, 1903. During the time of the first flights, Frederick W. 
Lanchester, Martin Kutta, and Nikolai Zhukovsky independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers. As aircraft speed increased designers began to encounter challenges associated with air compressibility at speeds near the speed of sound. The differences in airflow under such conditions lead to problems in aircraft control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach who was one of the first to investigate the properties of the supersonic flow. Macquorn Rankine and Pierre Henri Hugoniot independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh Latimer Dryden introduced the term transonic to describe flow speeds between the critical Mach number and Mach 1 where drag increases rapidly. This rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the Bell X-1 aircraft. By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low supersonic flow had matured. The Cold War prompted the design of an ever-evolving line of high-performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around complex objects and has rapidly grown to the point where entire aircraft can be designed using computer software, with wind-tunnel tests followed by flight tests to confirm the computer predictions. Understanding of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it interacts predictably with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new research in aerodynamics, while work continues to be done on important problems in basic aerodynamic theory related to flow turbulence and the existence and uniqueness of analytical solutions to the Navier-Stokes equations. Fundamental concepts Understanding the motion of air around an object (often called a flow field) enables the calculation of forces and moments acting on the object. In many aerodynamics problems, the forces of interest are the fundamental forces of flight: lift, drag, thrust, and weight. Of these, lift and drag are aerodynamic forces, i.e. forces due to air flow over a solid body. Calculation of these quantities is often founded upon the assumption that the flow field behaves as a continuum. Continuum flow fields are characterized by properties such as flow velocity, pressure, density, and temperature, which may be functions of position and time. These properties may be directly or indirectly measured in aerodynamics experiments or calculated starting with the equations for conservation of mass, momentum, and energy in air flows. 
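As a small, hedged illustration of how measured quantities and the governing relations mentioned above combine to give flow-field properties, the sketch below uses the ideal gas law to obtain density and the incompressible form of Bernoulli's equation to recover a flow speed from a stagnation-to-static pressure difference (the principle behind a Pitot-static probe). The probe readings are hypothetical, and the incompressible assumption is only reasonable well below Mach 0.3.

```python
# Sketch: density from the ideal gas law, then flow speed from incompressible Bernoulli,
# p0 = p + 0.5 * rho * v**2, as a Pitot-static probe would use. Inputs are hypothetical.

R_AIR = 287.05  # specific gas constant of air, J/(kg*K)

def air_density(pressure_pa: float, temperature_k: float) -> float:
    """Ideal gas law: rho = p / (R * T)."""
    return pressure_pa / (R_AIR * temperature_k)

def bernoulli_speed(stagnation_pa: float, static_pa: float, rho: float) -> float:
    """Flow speed implied by the measured pressure difference (incompressible flow)."""
    return (2.0 * (stagnation_pa - static_pa) / rho) ** 0.5

rho = air_density(101_325.0, 288.15)             # sea-level standard conditions (assumed)
v = bernoulli_speed(102_000.0, 101_325.0, rho)   # hypothetical probe readings
print(f"rho ≈ {rho:.3f} kg/m^3, v ≈ {v:.1f} m/s")
```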
Density, flow velocity, and an additional property, viscosity, are used to classify flow fields. Flow classification Flow velocity is used to classify flows according to speed regime. Subsonic flows are flow fields in which the air speed field is always below the local speed of sound. Transonic flows include both regions of subsonic flow and regions in which the local flow speed is greater than the local speed of sound. Supersonic flows are defined to be flows in which the flow speed is greater than the speed of sound everywhere. A fourth classification, hypersonic flow, refers to flows where the flow speed is much greater than the speed of sound. Aerodynamicists disagree on the precise definition of hypersonic flow. Compressible flow accounts for varying density within the flow. Subsonic flows are often idealized as incompressible, i.e. the density is assumed to be constant. Transonic and supersonic flows are compressible, and calculations that neglect the changes of density in these flow fields will yield inaccurate results. Viscosity is associated with the frictional forces in a flow. In some flow fields, viscous effects are very small, and approximate solutions may safely neglect viscous effects. These approximations are called inviscid flows. Flows for which viscosity is not neglected are called viscous flows. Finally, aerodynamic problems may also be classified by the flow environment. External aerodynamics is the study of flow around solid objects of various shapes (e.g. around an airplane wing), while internal aerodynamics is the study of flow through passages inside solid objects (e.g. through a jet engine). Continuum assumption Unlike liquids and solids, gases are composed of discrete molecules which occupy only a small fraction of the volume filled by the gas. On a molecular level, flow fields are made up of the collisions of many individual of gas molecules between themselves and with solid surfaces. However, in most aerodynamics applications, the discrete molecular nature of gases is ignored, and the flow field is assumed to behave as a continuum. This assumption allows fluid properties such as density and flow velocity to be defined everywhere within the flow. The validity of the continuum assumption is dependent on the density of the gas and the application in question. For the continuum assumption to be valid, the mean free path length must be much smaller than the length scale of the application in question. For example, many aerodynamics applications deal with aircraft flying in atmospheric conditions, where the mean free path length is on the order of micrometers and where the body is orders of magnitude larger. In these cases, the length scale of the aircraft ranges from a few meters to a few tens of meters, which is much larger than the mean free path length. For such applications, the continuum assumption is reasonable. The continuum assumption is less valid for extremely low-density flows, such as those encountered by vehicles at very high altitudes (e.g. 300,000 ft/90 km) or satellites in Low Earth orbit. In those cases, statistical mechanics is a more accurate method of solving the problem than is continuum aerodynamics. The Knudsen number can be used to guide the choice between statistical mechanics and the continuous formulation of aerodynamics. Conservation laws The assumption of a fluid continuum allows problems in aerodynamics to be solved using fluid dynamics conservation laws. 
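Before turning to those conservation principles in detail, here is a quick numerical illustration of the Knudsen-number criterion described above. The mean free path and length scales are assumed, order-of-magnitude values used only to show how the ratio separates continuum from rarefied regimes.

```python
# Sketch of the Knudsen-number check: Kn = mean free path / characteristic length.
# Kn << 1 supports the continuum assumption; Kn near or above 1 calls for statistical methods.

def knudsen_number(mean_free_path_m: float, characteristic_length_m: float) -> float:
    return mean_free_path_m / characteristic_length_m

# Assumed illustrative values: ~68 nm mean free path near sea level with a 10 m wing chord,
# versus a mean free path of order 1 m for a 10 m vehicle at very high altitude.
kn_aircraft = knudsen_number(68e-9, 10.0)
kn_rarefied = knudsen_number(1.0, 10.0)
print(f"Sea-level aircraft: Kn ≈ {kn_aircraft:.1e} (continuum assumption holds)")
print(f"Very high altitude: Kn ≈ {kn_rarefied:.1e} (continuum assumption questionable)")
```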
Three conservation principles are used: Conservation of mass Conservation of mass requires that mass is neither created nor destroyed within a flow; the mathematical formulation of this principle is known as the mass continuity equation. Conservation of momentum The mathematical formulation of this principle can be considered an application of Newton's Second Law. Momentum within a flow is only changed by external forces, which may include both surface forces, such as viscous (frictional) forces, and body forces, such as weight. The momentum conservation principle may be expressed as either a vector equation or separated into a set of three scalar equations (x,y,z components). Conservation of energy The energy conservation equation states that energy is neither created nor destroyed within a flow, and that any addition or subtraction of energy to a volume in the flow is caused by heat transfer, or by work into and out of the region of interest. Together, these equations are known as the Navier-Stokes equations, although some authors define the term to only include the momentum equation(s). The Navier-Stokes equations have no known analytical solution and are solved in modern aerodynamics using computational techniques. Because computational methods using high speed computers were not historically available and the high computational cost of solving these complex equations now that they are available, simplifications of the Navier-Stokes equations have been and continue to be employed. The Euler equations are a set of similar conservation equations which neglect viscosity and may be used in cases where the effect of viscosity is expected to be small. Further simplifications lead to Laplace's equation and potential flow theory. Additionally, Bernoulli's equation is a solution in one dimension to both the momentum and energy conservation equations. The ideal gas law or another such equation of state is often used in conjunction with these equations to form a determined system that allows the solution for the unknown variables. Branches of aerodynamics Aerodynamic problems are classified by the flow environment or properties of the flow, including flow speed, compressibility, and viscosity. External aerodynamics is the study of flow around solid objects of various shapes. Evaluating the lift and drag on an airplane or the shock waves that form in front of the nose of a rocket are examples of external aerodynamics. Internal aerodynamics is the study of flow through passages in solid objects. For instance, internal aerodynamics encompasses the study of the airflow through a jet engine or through an air conditioning pipe. Aerodynamic problems can also be classified according to whether the flow speed is below, near or above the speed of sound. A problem is called subsonic if all the speeds in the problem are less than the speed of sound, transonic if speeds both below and above the speed of sound are present (normally when the characteristic speed is approximately the speed of sound), supersonic when the characteristic flow speed is greater than the speed of sound, and hypersonic when the flow speed is much greater than the speed of sound. Aerodynamicists disagree over the precise definition of hypersonic flow; a rough definition considers flows with Mach numbers above 5 to be hypersonic. The influence of viscosity on the flow dictates a third classification. Some problems may encounter only very small viscous effects, in which case viscosity can be considered to be negligible. 
The approximations to these problems are called inviscid flows. Flows for which viscosity cannot be neglected are called viscous flows. Incompressible aerodynamics An incompressible flow is a flow in which density is constant in both time and space. Although all real fluids are compressible, a flow is often approximated as incompressible if the effect of the density changes cause only small changes to the calculated results. This is more likely to be true when the flow speeds are significantly lower than the speed of sound. Effects of compressibility are more significant at speeds close to or above the speed of sound. The Mach number is used to evaluate whether the incompressibility can be assumed, otherwise the effects of compressibility must be included. Subsonic flow Subsonic (or low-speed) aerodynamics describes fluid motion in flows which are much lower than the speed of sound everywhere in the flow. There are several branches of subsonic flow but one special case arises when the flow is inviscid, incompressible and irrotational. This case is called potential flow and allows the differential equations that describe the flow to be a simplified version of the equations of fluid dynamics, thus making available to the aerodynamicist a range of quick and easy solutions. In solving a subsonic problem, one decision to be made by the aerodynamicist is whether to incorporate the effects of compressibility. Compressibility is a description of the amount of change of density in the flow. When the effects of compressibility on the solution are small, the assumption that density is constant may be made. The problem is then an incompressible low-speed aerodynamics problem. When the density is allowed to vary, the flow is called compressible. In air, compressibility effects are usually ignored when the Mach number in the flow does not exceed 0.3 (about 335 feet (102 m) per second or 228 miles (366 km) per hour at 60 °F (16 °C)). Above Mach 0.3, the problem flow should be described using compressible aerodynamics. Compressible aerodynamics According to the theory of aerodynamics, a flow is considered to be compressible if the density changes along a streamline. This means that – unlike incompressible flow – changes in density are considered. In general, this is the case where the Mach number in part or all of the flow exceeds 0.3. The Mach 0.3 value is rather arbitrary, but it is used because gas flows with a Mach number below that value demonstrate changes in density of less than 5%. Furthermore, that maximum 5% density change occurs at the stagnation point (the point on the object where flow speed is zero), while the density changes around the rest of the object will be significantly lower. Transonic, supersonic, and hypersonic flows are all compressible flows. Transonic flow The term Transonic refers to a range of flow velocities just below and above the local speed of sound (generally taken as Mach 0.8–1.2). It is defined as the range of speeds between the critical Mach number, when some parts of the airflow over an aircraft become supersonic, and a higher speed, typically near Mach 1.2, when all of the airflow is supersonic. Between these speeds, some of the airflow is supersonic, while some of the airflow is not supersonic. Supersonic flow Supersonic aerodynamic problems are those involving flow speeds greater than the speed of sound. Calculating the lift on the Concorde during cruise can be an example of a supersonic aerodynamic problem. 
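The Mach 0.3 / 5 % rule of thumb quoted above can be checked with the standard isentropic stagnation-density relation for a perfect gas, rho0/rho = (1 + (gamma - 1)/2 * M^2)^(1/(gamma - 1)); the sketch below assumes gamma = 1.4 for air.

```python
# Check of the "Mach 0.3 gives < 5 % density change" rule using the isentropic relation
# rho0 / rho = (1 + (gamma - 1) / 2 * M**2) ** (1 / (gamma - 1)) for a perfect gas.

GAMMA = 1.4  # ratio of specific heats for air (assumed)

def stagnation_density_ratio(mach: float, gamma: float = GAMMA) -> float:
    """Ratio of stagnation density to free-stream density in isentropic flow."""
    return (1.0 + 0.5 * (gamma - 1.0) * mach**2) ** (1.0 / (gamma - 1.0))

for m in (0.1, 0.3, 0.5):
    change = (stagnation_density_ratio(m) - 1.0) * 100.0
    print(f"Mach {m:.1f}: density change at the stagnation point ≈ {change:.1f} %")
# Mach 0.3 gives roughly 4.6 %, consistent with the quoted figure of less than 5 %.
```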
Supersonic flow behaves very differently from subsonic flow. Fluids react to differences in pressure; pressure changes are how a fluid is "told" to respond to its environment. Therefore, since sound is, in fact, an infinitesimal pressure difference propagating through a fluid, the speed of sound in that fluid can be considered the fastest speed that "information" can travel in the flow. This difference most obviously manifests itself in the case of a fluid striking an object. In front of that object, the fluid builds up a stagnation pressure as impact with the object brings the moving fluid to rest. In fluid traveling at subsonic speed, this pressure disturbance can propagate upstream, changing the flow pattern ahead of the object and giving the impression that the fluid "knows" the object is there by seemingly adjusting its movement and is flowing around it. In a supersonic flow, however, the pressure disturbance cannot propagate upstream. Thus, when the fluid finally reaches the object it strikes it and the fluid is forced to change its properties – temperature, density, pressure, and Mach number—in an extremely violent and irreversible fashion called a shock wave. The presence of shock waves, along with the compressibility effects of high-flow velocity (see Reynolds number) fluids, is the central difference between the supersonic and subsonic aerodynamics regimes. Hypersonic flow In aerodynamics, hypersonic speeds are speeds that are highly supersonic. In the 1970s, the term generally came to refer to speeds of Mach 5 (5 times the speed of sound) and above. The hypersonic regime is a subset of the supersonic regime. Hypersonic flow is characterized by high temperature flow behind a shock wave, viscous interaction, and chemical dissociation of gas. Associated terminology The incompressible and compressible flow regimes produce many associated phenomena, such as boundary layers and turbulence. Boundary layers The concept of a boundary layer is important in many problems in aerodynamics. The viscosity and fluid friction in the air is approximated as being significant only in this thin layer. This assumption makes the description of such aerodynamics much more tractable mathematically. Turbulence In aerodynamics, turbulence is characterized by chaotic property changes in the flow. These include low momentum diffusion, high momentum convection, and rapid variation of pressure and flow velocity in space and time. Flow that is not turbulent is called laminar flow. Aerodynamics in other fields Engineering design Aerodynamics is a significant element of vehicle design, including road cars and trucks where the main goal is to reduce the vehicle drag coefficient, and racing cars, where in addition to reducing drag the goal is also to increase the overall level of downforce. Aerodynamics is also important in the prediction of forces and moments acting on sailing vessels. It is used in the design of mechanical components such as hard drive heads. Structural engineers resort to aerodynamics, and particularly aeroelasticity, when calculating wind loads in the design of large buildings, bridges, and wind turbines The aerodynamics of internal passages is important in heating/ventilation, gas piping, and in automotive engines where detailed flow patterns strongly affect the performance of the engine. Environmental design Urban aerodynamics are studied by town planners and designers seeking to improve amenity in outdoor spaces, or in creating urban microclimates to reduce the effects of urban pollution. 
The field of environmental aerodynamics describes ways in which atmospheric circulation and flight mechanics affect ecosystems. Aerodynamic equations are used in numerical weather prediction. Ball-control in sports Sports in which aerodynamics are of crucial importance include soccer, table tennis, cricket, baseball, and golf, in which most players can control the trajectory of the ball using the "Magnus effect". See also Aeronautics Aerostatics Aviation Insect flight – how bugs fly List of aerospace engineering topics List of engineering topics Nose cone design Fluid dynamics Computational fluid dynamics References Further reading General aerodynamics Subsonic aerodynamics Obert, Ed (2009). . Delft; About practical aerodynamics in industry and the effects on design of aircraft. . Transonic aerodynamics Supersonic aerodynamics Hypersonic aerodynamics History of aerodynamics Aerodynamics related to engineering Ground vehicles Fixed-wing aircraft Helicopters Missiles Model aircraft Related branches of aerodynamics Aerothermodynamics Aeroelasticity Boundary layers Turbulence External links NASA Beginner's Guide to Aerodynamics Aerodynamics for Students Aerodynamics for Pilots Aerodynamics and Race Car Tuning Aerodynamic Related Projects eFluids Bicycle Aerodynamics Application of Aerodynamics in Formula One (F1) Aerodynamics in Car Racing Aerodynamics of Birds NASA Aerodynamics Index Dynamics Energy in transport
Aerodynamics
The Aberdeen Bestiary (Aberdeen University Library, Univ Lib. MS 24) is a 12th-century English illuminated manuscript bestiary that was first listed in 1542 in the inventory of the Old Royal Library at the Palace of Westminster. Due to similarities, it is often considered to be the "sister" manuscript of the Ashmole Bestiary. The connection between the ancient Greek didactic text Physiologus and similar bestiary manuscripts is also often noted. Information about the manuscript's origins and patrons is circumstantial, although the manuscript most likely originated around 1200 and was owned by a wealthy ecclesiastical patron from northern or southern England. Currently, the Aberdeen Bestiary resides in the Aberdeen University Library in Scotland. History The Aberdeen Bestiary and the Ashmole Bestiary are considered by Xenia Muratova, a professor of art history, to be "the work of different artists belonging to the same artistic milieu." Due to their "striking similarities" they are often compared and described by scholars as "sister manuscripts." The medievalist scholar M. R. James considered the Aberdeen Bestiary "a replica of Ashmole 1511", a view echoed by many other art historians. Provenance The original patron of both the Aberdeen and Ashmole Bestiaries is thought to have been a high-ranking member of society, such as a prince or king, or a senior church official or monastery. However, since the section on monastic life commonly depicted in the Aviarium is missing, the original patron remains uncertain, though a church patron appears less likely. The Aberdeen Bestiary was kept in church and monastic settings for the majority of its history. At some point, however, it entered the English royal library. The bestiary bears the Westminster royal library shelf stamp of King Henry VIII. How King Henry acquired the manuscript remains unknown, although it was probably taken from a monastery. The manuscript appears to have been well read by the family, judging by the wear along the edges of its pages. Around the time King James of Scotland became King of England, the bestiary was passed along to Marischal College, Aberdeen. The manuscript is in fragmented condition, as many illuminations were removed from the folios individually as miniatures, likely not for monetary gain but possibly for personal reasons. The manuscript remains in the Aberdeen University Library in Scotland today. Description Materials The Aberdeen Bestiary is a gilded, decorated manuscript featuring large miniatures and some of the finest pigment, parchment and gold leaf of its time. Some portions of the manuscript, such as folio 8 recto, even feature tarnished silver leaf. The original patron was wealthy enough to afford such materials, so that the artists and scribes could enjoy creative freedom while creating the manuscript. The artists were professionally trained and experimented with new techniques, such as mixing heavy and light washes, thick dark lines, and contrasting colors. The aqua color used in the Aberdeen Bestiary is not present in the Ashmole Bestiary. The Aberdeen manuscript is filled with filigree floral designs and champie-style gold leaf initials. Canterbury is considered the likely place of manufacture, as it was well known for producing high-end luxury books during the thirteenth century.
Its similarities with the Canterbury Paris Psalter tree style also further draws evidence of this relation. Style The craftsmanship of both Ashmole and Aberdeen bestiary suggest similar artists and scribes. Both the Ashmole and Aberdeen bestiary were probably made within 10 years of each other due to their stylistic and material similarities and the fact that both are crafted with the finest materials of their time. Stylistically both manuscripts are very similar but the Aberdeen has figures that are both more voluminous and less energetic than those of the Ashmole Bestiary. The color usage has been suggested as potentially Biblical in meaning as color usage had different interpretations in the early 13th century. The overall style of the human figures as well as color usage is very reminiscent of Roman mosaic art especially with the attention to detail in the drapery. Circles and ovals semi-realistically depict highlights throughout the manuscript. The way that animals are shaded in a Romanesque fashion with the use of bands to depict volume and form, which is similar to an earlier 12th-century Bury Bible made at Bury St.Edmunds. This Bestiary also shows stylistic similarities with the Paris Psalters of Canterbury. The Aviary section is similar to the Aviariium which is a well-known 12th century monastic text. The deviation from traditional color usage can be seen in the tiger, satyr, and unicorn folios as well as many other folios. The satyr in the Aberdeen Bestiary when compared to the satyr section of the slightly older Worksop bestiary is almost identical. There are small color notes in the Aberdeen Bestiary that are often seen in similar manuscripts dating between 1175 and 1250 which help indicate that it was made near the year 1200 or 1210. These notes are similar to many other side notes written on the sides of pages throughout the manuscript and were probably by the painter to remind himself of special circumstances, these note occur irregularly throughout the text. Illuminations Folio page 1 to 3 recto depicts the Genesis 1:1-25 which is represented with a large full page illumination Biblical Creation scene in the manuscript. Folio 5 recto shows Adam, a large figure surrounded by gold leaf and towering over others, with the theme of 'Adam naming the animals' - this starts the compilation of the bestiary portion within the manuscript. Folio 5 verso depicts quadrupeds, livestock, wild beasts, and the concept of the herd. Folio 7 to 18 recto depicts large cats and other beasts such as wolves, foxes and dogs. Many pages from the start of the manuscript's bestiary section such as 11 verso featuring a hyena shows small pin holes which were likely used to map out and copy artwork to a new manuscript. Folio 20 verso to 28 recto depicts livestock such as sheep, horses, and goats. Small animals like cats and mice are depicted on folio 24 to 25. Pages 25 recto to 63 recto feature depictions of birds and folio 64 recto to 80 recto depicts reptiles, worms and fish. 77 recto to 91 verso depicts trees and plants and other elements of nature such as the nature of man. The end folios of the manuscript from 93 recto to 100 recto depicts the nature of stones and rocks. Seventeen of the Aberdeen manuscript pages are pricked for transfer in a process called pouncing such as clearly seen in the hyena folio as well as folio 3 recto and 3 verso depicting Genesis 1:26-1:28, 31, 1:1-2. 
The pricking must have been done shortly after the creation of the Adam and Eve folio pages since there is not damage done to nearby pages. Other pages used for pouncing include folio 7 recto to 18 verso which is the beginning of the beasts portion of the manuscript and likely depicted a lions as well as other big cats such as leopards, panthers and their characteristic as well as other large wild and domesticated beasts. Missing Folios On folio 6 recto there was likely intended to be a depiction of a lion as in the Ashmole bestiary, but in this instance the pages were left blank although there are markings of margin lines. In comparison to the Ashmole bestiary, on 9 verso some leaves are missing which should have likely contained imagery of the antelope (Antalops), unicorn (Unicornis), lynx (Lynx), griffin (Gryps), part of elephant (Elephans). Near folio 21 verso two illuminations of the ox (Bos), camel (Camelus), dromedary (Dromedarius), ass (Asinus), onager (Onager) and part of horse (Equus) are also assumed to be missing. Also missing from folio 15 recto on are some leaves which should have contained crocodile (Crocodilus), manticore (Mantichora) and part of parandrus (Parandrus). These missing folios are assumed from comparisons between the Ashmole and other related bestiaries. Contents Folio 1 recto : Creation of heaven and earth (Genesis, 1: 1–5). (Full page) Folio 1 verso: Creation of the waters and the firmament (Genesis, 1: 6–8) Folio 2 recto : Creation of the birds and fishes (Genesis, 1: 20–23) Folio 2 verso : Creation of the animals (Genesis, 1: 24–25) Folio 3 recto : Creation of man (Genesis, 1: 26–28, 31; 2: 1–2) Folio 5 recto : Adam names the animals (Isidore of Seville, Etymologiae, Book XII, i, 1–2) Folio 5 verso : Animal (Animal) (Isidore of Seville, Etymologiae, Book XII, i, 3) Folio 5 verso : Quadruped (Quadrupes) (Isidore of Seville, Etymologiae, Book XII, i, 4) Folio 5 verso : Livestock (Pecus) (Isidore of Seville, Etymologiae, Book XII, i, 5–6) Folio 5 verso : Beast of burden (Iumentum) (Isidore of Seville, Etymologiae, Book XII, i, 7) Folio 5 verso : Herd (Armentum) (Isidore of Seville, Etymologiae, Book XII, i, 8) Beasts (Bestiae) Folio 7 recto : Lion (Leo) (Physiologus, Chapter 1; Isidore of Seville, Etymologiae, Book XII, ii, 3–6) Folio 8 recto : Tiger (Tigris) (Isidore of Seville, Etymologiae, Book XII, ii, 7) Folio 8 verso : Pard (Pard) (Isidore of Seville, Etymologiae, Book XII, ii, 10–11) Folio 9 recto : Panther (Panther) (Physiologus, Chapter 16; Isidore of Seville, Etymologiae, Book XII, ii, 8–9) Folio 10 recto : Elephant (Elephans) (Isidore of Seville, Etymologiae, Book XII, ii, 14; Physiologus, Chapter 43; Ambrose, Hexaemeron, Book VI, 35; Solinus, Collectanea rerum memorabilium, xxv, 1–7) Folio 11 recto : Beaver (Castor) Folio 11 recto : Ibex (Ibex) (Hugh of Fouilloy, II, 15) Folio 11 verso : Hyena (Yena) (Physiologus, Chapter 24; Solinus, Collectanea rerum memorabilium, xxvii, 23–24) Folio 12 recto : Crocotta (Crocotta) (Solinus, Collectanea rerum memorabilium, xxvii, 26) Folio 12 recto : Bonnacon (Bonnacon) (Solinus, Collectanea rerum memorabilium, xl, 10–11) Folio 12 verso : Ape (Simia) Folio 13 recto : Satyr (Satyrs) Folio 13 recto : Deer (Cervus) Folio 14 recto : Goat (Caper) Folio 14 verso : Wild goat (Caprea) Folio 15 recto : Monoceros (Monoceros) (Solinus, Collectanea rerum memorabilium, lii, 39–40) Folio 15 recto : Bear (Ursus) Folio 15 verso : Leucrota (Leucrota) (Solinus, Collectanea rerum memorabilium, lii, 34) Folio 16 recto : Parandrus 
(Parandrus) (Solinus, Collectanea rerum memorabilium, xxx, 25) Folio 16 recto : Fox (Vulpes) Folio 16 verso : Yale (Eale) (Solinus, Collectanea rerum memorabilium, lii, 35) Folio 16 verso : Wolf (Lupus) Folio 18 recto : Dog (Canis) Livestock (Pecora) Folio 20 verso : Sheep (Ovis) (Isidore of Seville, Etymologiae, Book XII, i, 9; Ambrose, Hexaemeron, Book VI, 20) Folio 21 recto : Wether (Vervex) (Isidore of Seville, Etymologiae, Book XII, i, 10) Folio 21 recto : Ram (Aries) (Isidore of Seville, Etymologiae, Book XII, i, 11) Folio 21 recto : Lamb (Agnus) (Isidore of Seville, Etymologiae, Book XII, i, 12; Ambrose, Hexaemeron, Book VI, 28) Folio 21 recto : He-goat (Hircus) (Isidore of Seville, Etymologiae, Book XII, i, 14) Folio 21 verso : Kid (Hedus) (Isidore of Seville, Etymologiae, Book XII, i, 13) Folio 21 verso : Boar (Aper) (Isidore of Seville, Etymologiae, Book XII, i, 27) Folio 21 verso : Bullock (Iuvencus) (Isidore of Seville, Etymologiae, Book XII, i, 28) Folio 21 verso : Bull (Taurus) (Isidore of Seville, Etymologiae, Book XII, i, 29) Folio 22 recto : Horse (Equus) (Isidore of Seville, Etymologiae, Book XII, i, 41–56; Hugh of Fouilloy, III, xxiii) Folio 23 recto : Mule (Mulus) (Isidore of Seville, Etymologiae, Book XII, i, 57–60) Small animals (Minuta animala) Folio 23 verso : Cat (Musio) (Isidore of Seville, Etymologiae, Book XII, ii, 38) Folio 23 verso : Mouse (Mus) (Isidore of Seville, Etymologiae, Book XII, iii, 1) Folio 23 verso : Weasel (Mustela) (Isidore of Seville, Etymologiae, Book XII, iii, 2; Physiologus, Chapter 21) Folio 24 recto : Mole (Talpa) (Isidore of Seville, Etymologiae, Book XII, iii, 5) Folio 24 recto : Hedgehog (Ericius) (Isidore of Seville, Etymologiae, Book XII, iii, 7; Ambrose, Hexaemeron, VI, 20) Folio 24 verso : Ant (Formica) (Physiologus, 12; Ambrose, Hexaemeron, Book VI, 16, 20) Birds (Aves) Folio 25 recto : Bird (Avis) Folio 25 verso : Dove (Columba) Folio 26 recto : Dove and hawk (Columba et Accipiter) Folio 26 verso : Dove (Columba) Folio 29 verso : North wind and South wind (Aquilo et Auster ventus) Folio 30 recto : Hawk (Accipiter) Folio 31 recto : Turtle dove (Turtur) Folio 32 verso : Palm tree (Palma) Folio 33 verso : Cedar (Cedrus) Folio 34 verso : Pelican (Pellicanus) - Orange and blue Folio 35 verso : Night heron (Nicticorax) Folio 36 recto : Hoopoe (Epops) Folio 36 verso : Magpie (Pica) Folio 37 recto : Raven (Corvus) Folio 38 verso : Cock (Gallus) Folio 41 recto : Ostrich (Strutio) Folio 44 recto : Vulture (Vultur) Folio 45 verso : Crane (Grus) Folio 46 verso : Kite (Milvus) Folio 46 verso : Parrot (Psitacus) Folio 47 recto : Ibis (Ibis) Folio 47 verso : Swallow (Yrundo) Folio 48 verso : Stork (Ciconia) Folio 49 verso : Blackbird (Merula) Folio 50 recto : Eagle-owl (Bubo) Folio 50 verso : Hoopoe (Hupupa) Folio 51 recto : Little owl (Noctua) Folio 51 recto : Bat (Vespertilio) Folio 51 verso : Jay (Gragulus) Folio 52 verso : Nightingale (Lucinia) Folio 53 recto : Goose (Anser) Folio 53 verso : Heron (Ardea) Folio 54 recto : Partridge (Perdix) Folio 54 verso : Halcyon (Alcyon) Folio 55 recto : Coot (Fulica) Folio 55 recto : Phoenix (Fenix) Folio 56 verso : Caladrius (Caladrius) Folio 57 verso : Quail (Coturnix) Folio 58 recto : Crow (Cornix) Folio 58 verso : Swan (Cignus) Folio 59 recto : Duck (Anas) Folio 59 verso : Peacock (Pavo) Folio 61 recto : Eagle (Aquila) Folio 63 recto : Bee (Apis) Snakes and Reptiles (Serpentes) Folio 64 verso : Perindens tree (Perindens) Folio 65 verso : Snake (Serpens) Folio 65 verso : Dragon (Draco) Folio 66 recto 
: Basilisk (Basiliscus) Folio 66 verso : Regulus (Regulus) Folio 66 verso : Viper (Vipera) Folio 67 verso : Asp (Aspis) Folio 68 verso : Scitalis (Scitalis) Folio 68 verso : Amphisbaena (Anphivena) Folio 68 verso : Hydrus (Ydrus) Folio 69 recto : Boa (Boa) Folio 69 recto : Iaculus (Iaculus) Folio 69 verso : Siren (Siren) Folio 69 verso : Seps (Seps) Folio 69 verso : Dipsa (Dipsa) Folio 69 verso : Lizard (Lacertus) Folio 69 verso : Salamander (Salamandra) Folio 70 recto : Saura (Saura) Folio 70 verso : Newt (Stellio) Folio 71 recto : Of the nature of Snakes (De natura serpentium) Worms (Vermes) Folio 72 recto : Worms (Vermis) Fish (Pisces) Folio 72 verso : Fish (Piscis) Folio 73 recto : Whale (Balena) Folio 73 recto : Serra (Serra) Folio 73 recto : Dolphin (Delphinus) Folio 73 verso : Sea-pig (Porcus marinus) Folio 73 verso : Crocodile (Crocodrillus) Folio 73 verso : Mullet (Mullus) Folio 74 recto : Fish (Piscis) Trees and Plants (Arbories) Folio 77 verso : Tree (Arbor) Folio 78 verso : Fig (Ficus) Folio 79 recto : Again of trees (Item de arboribus) Folio 79 recto : Mulberry Folio 79 recto : Sycamore Folio 79 recto : Hazel Folio 79 recto : Nuts Folio 79 recto : Almond Folio 79 recto : Chestnut Folio 79 recto : Oak Folio 79 verso : Beech Folio 79 verso : Carob Folio 79 verso : Pistachio Folio 79 verso : Pitch pine Folio 79 verso : Pine Folio 79 verso : Fir Folio 79 verso : Cedar Folio 80 recto : Cypress Folio 80 recto : Juniper Folio 80 recto : Plane Folio 80 recto : Oak Folio 80 recto : Ash Folio 80 recto : Alder Folio 80 verso : Elm Folio 80 verso : Poplar Folio 80 verso : Willow Folio 80 verso : Osier Folio 80 verso : Box Nature of Man (Natura hominis) Folio 80 verso : Isidorus on the nature of man (Ysidorus de natura hominis) Folio 89 recto : Isidorus on the parts of man's body (Ysidorus de membris hominis) Folio 91 recto : Of the age of man (De etate hominis) Stones (Lapides) Folio 93 verso : Fire-bearing stone (Lapis ignifer) Folio 94 verso : Adamas stone (Lapis adamas) Folio 96 recto : Myrmecoleon (Mermecoleon) Folio 96 verso : Verse (Versus) Folio 97 recto : Stone in the foundation of the wall (Lapis in fundamento muri) Folio 97 recto : The first stone, Jasper Folio 97 recto : The second stone, Sapphire Folio 97 recto : The third stone, Chalcedony Folio 97 verso : The fourth stone, Smaragdus Folio 98 recto : The fifth stone, Sardonyx Folio 98 recto : The sixth stone, Sard Folio 98 verso : The seventh stone, Chrysolite Folio 98 verso : The eighth stone, Beryl Folio 99 recto : The ninth stone, Topaz Folio 99 verso : The tenth stone, Chrysoprase Folio 99 verso : The eleventh stone, Hyacinth Folio 100 recto : The twelfth stone, Amethyst Folio 100 recto : Of stones and what they can do (De effectu lapidum) Gallery See also Bestiary List of medieval bestiaries Physiologus Ashmole Bestiary Paris Psalter Aviarium References External links The Aberdeen Bestiary Project - University of Aberdeen, Online version of the bestiary. David Badke, The Medieval Bestiary : Manuscript: Univ. Lib. MS 24 (Aberdeen Bestiary) Bestiaries University of Aberdeen 12th-century illuminated manuscripts Biology books Works of unknown authorship
Aberdeen Bestiary
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems, assuming intelligence is computational, is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm. AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. This property could be useful, for example, to test for the presence of humans as CAPTCHAs aim to do, and for computer security to circumvent brute-force attacks. History The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory, which formally describes the most famous class of difficult problems. Early uses of the term are in Erik Mueller's 1987 PhD dissertation and in Eric Raymond's 1991 Jargon File. AI-complete problems AI-complete problems are hypothesized to include: AI peer review (composite natural language understanding, automated reasoning, automated theorem proving, formalized logic expert system) Bongard problems Computer vision (and subproblems such as object recognition) Natural language understanding (and subproblems such as text mining, machine translation, and word-sense disambiguation) Dealing with unexpected circumstances while solving any real world problem, whether it's navigation or planning or even the kind of reasoning done by expert systems. Machine translation To translate accurately, a machine must be able to understand the text. It must be able to follow the author's argument, so it must have some ability to reason. It must have extensive world knowledge so that it knows what is being discussed — it must at least be familiar with all the same commonsense facts that the average human translator knows. Some of this knowledge is in the form of facts that can be explicitly represented, but some knowledge is unconscious and closely tied to the human body: for example, the machine may need to understand how an ocean makes one feel to accurately translate a specific metaphor in the text. It must also model the authors' goals, intentions, and emotional states to accurately reproduce them in a new language. In short, the machine is required to have wide variety of human intellectual skills, including reason, commonsense knowledge and the intuitions that underlie motion and manipulation, perception, and social intelligence. Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it. Software brittleness Current AI systems can solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempt to "scale up" their systems to handle more complicated, real-world situations, the programs tend to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they fail as unexpected circumstances outside of its original problem context begin to appear. 
When human beings are dealing with new situations in the world, they are helped immensely by the fact that they know what to expect: they know what all things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. A machine without strong AI has no other skills to fall back on. Formalization Computational complexity theory deals with the relative computational difficulty of computable functions. By definition, it does not cover problems whose solution is unknown or has not been characterised formally. Since many AI problems have no formalisation yet, conventional complexity theory does not allow the definition of AI-completeness. To address this problem, a complexity theory for AI has been proposed. It is based on a model of computation that splits the computational burden between a computer and a human: one part is solved by computer and the other part solved by human. This is formalised by a human-assisted Turing machine. The formalisation defines algorithm complexity, problem complexity and reducibility which in turn allows equivalence classes to be defined. The complexity of executing an algorithm with a human-assisted Turing machine is given by a pair , where the first element represents the complexity of the human's part and the second element is the complexity of the machine's part. Results The complexity of solving the following problems with a human-assisted Turing machine is: Optical character recognition for printed text: Turing test: for an -sentence conversation where the oracle remembers the conversation history (persistent oracle): for an -sentence conversation where the conversation history must be retransmitted: for an -sentence conversation where the conversation history must be retransmitted and the person takes linear time to read the query: ESP game: Image labelling (based on the Arthur–Merlin protocol): Image classification: human only: , and with less reliance on the human: . See also ASR-complete List of unsolved problems in computer science Synthetic intelligence References Artificial intelligence Computational problems
AI-complete
In condensed matter physics and materials science, an amorphous (from the Greek a, "without", and morphé, "shape, form") or non-crystalline solid is a solid that lacks the long-range order, which is a characteristic of a crystal. In some older articles and books, the term was used synonymously with glass. Today, however, "glassy solid" or "amorphous solid" is considered to be the overarching concept, and glass is considered to be a special case: glass is an amorphous solid maintained below its glass transition temperature. Polymers are often amorphous. Amorphous materials have an internal structure comprising interconnected structural blocks that can be similar to the basic structural units found in the corresponding crystalline phase of the same compound. Whether a material is liquid or solid depends primarily on the connectivity between its elementary building blocks; solids are characterized by a high degree of connectivity whereas structural blocks in fluids have lower connectivity. In the pharmaceutical industry, some amorphous drugs have been shown to offer higher bioavailability than their crystalline counterparts as a result of the higher solubility of the amorphous phase. However, certain compounds can undergo precipitation in their amorphous form in vivo, and can then decrease mutual bioavailability if administered together. Nano-structured materials Even amorphous materials have some degree of short-range order at the atomic length scale as a result of the nature of intermolecular chemical bonding (see structure of liquids and glasses for more information on non-crystalline material structure). Furthermore, in very small crystals, short-range order encompasses a large fraction of the atoms; nevertheless, relaxation at the surface, along with interfacial effects, distort the atomic positions and decrease structural order. Even the most advanced structural characterization techniques, such as x-ray diffraction and transmission electron microscopy, have difficulty in distinguishing amorphous and crystalline structures at short length scales. Amorphous thin films Amorphous phases are important constituents of thin films, which are solid layers of a few nanometres to some tens of micrometres thickness deposited upon a substrate. So-called structure zone models were developed to describe the microstructure of thin films and ceramics as a function of the homologous temperature Th that is the ratio of deposition temperature over melting temperature. According to these models, a necessary (but not sufficient) condition for the occurrence of amorphous phases is that Th has to be smaller than 0.3, that is the deposition temperature must be below 30% of the melting temperature. For higher values, the surface diffusion of deposited atomic species would allow for the formation of crystallites with long-range atomic order. Regarding their applications, amorphous metallic layers played an important role in the discovery of superconductivity in amorphous metals by Buckel and Hilsch. The superconductivity of amorphous metals, including amorphous metallic thin films, is now understood to be due to phonon-mediated Cooper pairing, and the role of structural disorder can be rationalized based on the strong-coupling Eliashberg theory of superconductivity. Today, optical coatings made from TiO2, SiO2, Ta2O5 etc. and combinations of them in most cases consist of amorphous phases of these compounds. Much research is carried out into thin amorphous films as a gas separating membrane layer. 
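As a rough numerical illustration of the structure zone criterion described above, the homologous temperature can be checked directly. The sketch below is illustrative only: the threshold of 0.3 comes from the text, while the function names and the example temperatures are placeholder assumptions rather than values from the structure zone literature.

def homologous_temperature(deposition_temp_k: float, melting_temp_k: float) -> float:
    # Th = deposition temperature divided by melting temperature, both in kelvin.
    return deposition_temp_k / melting_temp_k

def amorphous_growth_possible(deposition_temp_k: float, melting_temp_k: float,
                              threshold: float = 0.3) -> bool:
    # Necessary (but not sufficient) condition for amorphous growth: Th < 0.3.
    return homologous_temperature(deposition_temp_k, melting_temp_k) < threshold

# Placeholder example: room-temperature deposition (about 300 K) of a material
# that melts near 2000 K gives Th = 0.15, so an amorphous phase is not ruled
# out by this criterion alone.
print(amorphous_growth_possible(300.0, 2000.0))  # True

Because the condition is only necessary, a True result here merely means an amorphous phase cannot be excluded on temperature grounds; kinetics and other process parameters still decide the outcome.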
The technologically most important thin amorphous film is probably represented by SiO2 layers a few nm thick that serve as the insulator above the conducting channel of a metal-oxide semiconductor field-effect transistor (MOSFET). Also, hydrogenated amorphous silicon, a-Si:H for short, is of technical significance for thin-film solar cells. In the case of a-Si:H the missing long-range order between silicon atoms is partly induced by the presence of hydrogen in the percent range. The occurrence of amorphous phases has turned out to be a phenomenon of particular interest for studying thin-film growth. Remarkably, the growth of polycrystalline films is often preceded by an initial amorphous layer, the thickness of which may amount to only a few nm. The most investigated example is thin polycrystalline silicon films, where an initial amorphous layer was observed in many studies. Wedge-shaped polycrystals were identified by transmission electron microscopy to grow out of the amorphous phase only after the latter has exceeded a certain thickness, the precise value of which depends on deposition temperature, background pressure and various other process parameters. The phenomenon has been interpreted in the framework of Ostwald's rule of stages, which predicts that the formation of phases proceeds with increasing condensation time towards increasing stability. Experimental studies of the phenomenon require a clearly defined state of the substrate surface and its contaminant density, etc., upon which the thin film is deposited. Soils Amorphous materials in soil strongly influence bulk density, aggregate stability, plasticity and water-holding capacity of soils. The low bulk density and high void ratios are mostly due to glass shards and other porous minerals not becoming compacted. Andisol soils contain the highest amounts of amorphous materials. References Further reading External links Journal of Non-crystalline Solids (Elsevier) Phases of matter Unsolved problems in physics
Amorphous solid
Albinism is a congenital condition characterized in humans by the partial or complete absence of pigment in the skin, hair and eyes. Albinism is associated with a number of vision defects, such as photophobia, nystagmus, and amblyopia. Lack of skin pigmentation makes for more susceptibility to sunburn and skin cancers. In rare cases such as Chédiak–Higashi syndrome, albinism may be associated with deficiencies in the transportation of melanin granules. This also affects essential granules present in immune cells leading to increased susceptibility to infection. Albinism results from inheritance of recessive gene alleles and is known to affect all vertebrates, including humans. It is due to absence or defect of tyrosinase, a copper-containing enzyme involved in the production of melanin. Unlike humans, other animals have multiple pigments and for these, albinism is considered to be a hereditary condition characterised by the absence of melanin in particular, in the eyes, skin, hair, scales, feathers or cuticle. While an organism with complete absence of melanin is called an albino, an organism with only a diminished amount of melanin is described as leucistic or albinoid. The term is from the Latin albus, "white". Signs and symptoms There are two principal types of albinism: oculocutaneous, affecting the eyes, skin and hair, and ocular affecting the eyes only. There are different types of oculocutaneous albinism depending on which gene has undergone mutation. With some there is no pigment at all. The other end of the spectrum of albinism is "a form of albinism called rufous oculocutaneous albinism, which usually affects dark-skinned people". According to the National Organization for Albinism and Hypopigmentation, "With ocular albinism, the color of the iris of the eye may vary from blue to green or even brown, and sometimes darkens with age. However, when an optometrist or ophthalmologist examines the eye by shining a light from the side of the eye, the light shines back through the iris since very little pigment is present." Because individuals with albinism have skin that entirely lacks the dark pigment melanin, which helps protect the skin from the sun's ultraviolet radiation, their skin can burn more easily from overexposure. The human eye normally produces enough pigment to color the iris blue, green or brown and lend opacity to the eye. In photographs, those with albinism are more likely to demonstrate "red eye", due to the red of the retina being visible through the iris. Lack of pigment in the eyes also results in problems with vision, both related and unrelated to photosensitivity. Those with albinism are generally as healthy as the rest of the population (but see related disorders below), with growth and development occurring as normal, and albinism by itself does not cause mortality, although the lack of pigment blocking ultraviolet radiation increases the risk of melanomas (skin cancers) and other problems. Visual problems Development of the optical system is highly dependent on the presence of melanin. For this reason, the reduction or absence of this pigment in people with albinism may lead to: Misrouting of the retinogeniculate projections, resulting in abnormal decussation (crossing) of optic nerve fibres Photophobia and decreased visual acuity due to light scattering within the eye (ocular straylight) Photophobia is specifically when light enters the eye, unrestrictedwith full force. It is painful and causes extreme sensitivity to light. 
Reduced visual acuity due to foveal hypoplasia and possibly light-induced retinal damage. Eye conditions common in albinism include: Nystagmus, irregular rapid movement of the eyes back and forth, or in circular motion. Amblyopia, decrease in acuity of one or both eyes due to poor transmission to the brain, often due to other conditions such as strabismus. Optic nerve hypoplasia, underdevelopment of the optic nerve. The improper development of the retinal pigment epithelium (RPE), which in normal eyes absorbs most of the reflected sunlight, further increases glare due to light scattering within the eye. The resulting sensitivity (photophobia) generally leads to discomfort in bright light, but this can be reduced by the use of sunglasses or brimmed hats. Genetics Oculocutaneous albinism is generally the result of the biological inheritance of genetically recessive alleles (genes) passed from both parents of an individual such as OCA1 and OCA2. A mutation in the human TRP-1 gene may result in the deregulation of melanocyte tyrosinase enzymes, a change that is hypothesized to promote brown versus black melanin synthesis, resulting in a third oculocutaneous albinism (OCA) genotype, "OCA3". Some rare forms are inherited from only one parent. There are other genetic mutations which are proven to be associated with albinism. All alterations, however, lead to changes in melanin production in the body. Some of these are associated with increased risk of skin cancer . The chance of offspring with albinism resulting from the pairing of an organism with albinism and one without albinism is low. However, because organisms (including humans) can be carriers of genes for albinism without exhibiting any traits, albinistic offspring can be produced by two non-albinistic parents. Albinism usually occurs with equal frequency in both sexes. An exception to this is ocular albinism, which it is passed on to offspring through X-linked inheritance. Thus, ocular albinism occurs more frequently in males as they have a single X and Y chromosome, unlike females, whose genetics are characterized by two X chromosomes. There are two different forms of albinism: a partial lack of the melanin is known as hypomelanism, or hypomelanosis, and the total absence of melanin is known as amelanism or amelanosis. Enzyme The enzyme defect responsible for OCA1-type albinism is tyrosine 3-monooxygenase (tyrosinase), which synthesizes melanin from the amino acid tyrosine. Evolutionary theories It is suggested that the early genus Homo (humans in the broader sense) started to evolve in East Africa around 3 million years ago. The dramatic phenotypic change from the ape-like Australopithecus to early Homo is hypothesized to have involved the extreme loss of body hair – except for areas most exposed to UV radiation, such as the head – to allow for more efficient thermoregulation in the early hunter-gatherers. The skin that would have been exposed upon general body hair loss in these early proto-humans would have most likely been non-pigmented, reflecting the pale skin underlying the hair of our chimpanzee relatives. A positive advantage would have been conferred to early hominids inhabiting the African continent that were capable of producing darker skin – those who first expressed the eumelanin-producing MC1R allele – which protected them from harmful epithelium-damaging ultraviolet rays. Over time, the advantage conferred to those with darker skin may have led to the prevalence of darker skin on the continent. 
The positive advantage, however, would have had to be strong enough so as to produce a significantly higher reproductive fitness in those who produced more melanin. The cause of a selective pressure strong enough to cause this shift is an area of much debate. Some hypotheses include the existence of significantly lower reproductive fitness in people with less melanin due to lethal skin cancer, lethal kidney disease due to excess vitamin D formation in the skin of people with less melanin, or simply natural selection due to mate preference and sexual selection. When comparing the prevalence of albinism in Africa to its prevalence in other parts of the world, such as Europe and the United States, the potential evolutionary effects of skin cancer as a selective force due to its effect on these populations may not be insignificant. It would follow, then, that there would be stronger selective forces acting on albino individuals in Africa than on albinos in Europe and the US. In two separate studies in Nigeria, very few people with albinism appear to survive to old age. One study found that 89% of people diagnosed with albinism are between 0 and 30 years of age, while the other found that 77% of albinos were under the age of 20. Diagnosis Genetic testing can confirm albinism and what variety it is, but offers no medical benefits, except in the case of non-OCA disorders. Such disorders cause other medical problems in conjunction with albinism, and may be treatable. Genetic tests are currently available for parents who want to find out if they are carriers of ty-neg albinism. Diagnosis of albinism involves carefully examining a person's eyes, skin and hairs. Genealogical analysis can also help. Management Since there is no cure for albinism, it is managed through lifestyle adjustments. People with albinism need to take care not to get sunburnt and should have regular healthy skin checks by a dermatologist. For the most part, treatment of the eye conditions consists of visual rehabilitation. Surgery is possible on the extra-ocular muscles to decrease strabismus. Nystagmus-damping surgery can also be performed, to reduce the "shaking" of the eyes back and forth. The effectiveness of all these procedures varies greatly and depends on individual circumstances. Glasses, low vision aids, large-print materials, and bright angled reading lights can help individuals with albinism. Some people with albinism do well using bifocals (with a strong reading lens), prescription reading glasses, hand-held devices such as magnifiers or monoculars or wearable devices like eSight and Brainport. The condition may lead to abnormal development of the optic nerve and sunlight may damage the retina of the eye as the iris cannot filter out excess light due to a lack of pigmentation. Photophobia may be ameliorated by the use of sunglasses which filter out ultraviolet light. Some use bioptics, glasses which have small telescopes mounted on, in, or behind their regular lenses, so that they can look through either the regular lens or the telescope. Newer designs of bioptics use smaller light-weight lenses. Some US states allow the use of bioptic telescopes for driving motor vehicles. (See also NOAH bulletin "Low Vision Aids".) There are a number of national support groups across the globe which come under the umbrella of the World Albinism Alliance. Epidemiology Albinism affects people of all ethnic backgrounds; its frequency worldwide is estimated to be approximately one in 17,000. 
Prevalence of the different forms of albinism varies considerably by population, and is highest overall in people of sub-Saharan African descent. Today, the prevalence of albinism in sub-Saharan Africa is around 1 in 5,000, while in Europe and the US it's around 1 in 20,000 of the European derived population. Rates as high as 1 in 1,000 have been reported for some populations in Zimbabwe and other parts of Southern Africa. Certain ethnic groups and populations in isolated areas exhibit heightened susceptibility to albinism, presumably due to genetic factors. These include notably the Native American Kuna, Zuni and Hopi nations (respectively of Panama, New Mexico and Arizona); Japan, in which one particular form of albinism is unusually common ; and Ukerewe Island, the population of which shows a very high incidence of albinism. Society and culture In physical terms, humans with albinism commonly have visual problems and need sun protection. Persecution of people with albinism Humans with albinism often face social and cultural challenges (even threats), as the condition is often a source of ridicule, discrimination, or even fear and violence. It is especially socially stigmatised in many African societies. A study conducted in Nigeria on albino children stated that "they experienced alienation, avoided social interactions and were less emotionally stable. Furthermore, affected individuals were less likely to complete schooling, find employment, and find partners". Many cultures around the world have developed beliefs regarding people with albinism. In African countries such as Tanzania and Burundi, there has been an unprecedented rise in witchcraft-related killings of people with albinism in recent years, because their body parts are used in potions sold by witch doctors. Numerous authenticated incidents have occurred in Africa during the 21st century. For example, in Tanzania, in September 2009, three men were convicted of killing a 14-year-old albino boy and severing his legs in order to sell them for witchcraft purposes. Again in Tanzania and Burundi in 2010, the murder and dismemberment of a kidnapped albino child was reported from the courts, as part of a continuing problem. The US-based National Geographic Society estimated that in Tanzania a complete set of albino body parts is worth US$75,000. Another harmful and false belief is that sex with an albinistic woman will cure a man of HIV. This has led, for example in Zimbabwe, to rapes (and subsequent HIV infection). Albinism in popular culture Famous people with albinism include historical figures such as Oxford don William Archibald Spooner; actor-comedian Victor Varnado; musicians such as Johnny and Edgar Winter, Salif Keita, Winston "Yellowman" Foster, Brother Ali, Sivuca, Hermeto Pascoal, Willie "Piano Red" Perryman, Kalash Criminel; actor-rapper Krondon, and fashion models Connie Chiu, Ryan "La Burnt" Byrne and Shaun Ross. Emperor Seinei of Japan is thought to have been an albino because he was said to have been born with white hair. International Albinism Awareness Day International Albinism Awareness Day was established after a motion was accepted on 18 December 2014 by the United Nations General Assembly, proclaiming that as of 2015, 13 June would be known as International Albinism Awareness Day. This was followed by a mandate created by the United Nations Human Rights Council that appointed Ms. Ikponwosa Ero, who is from Nigeria, as the very first Independent Expert on the enjoyment of human rights by persons with albinism. 
See also References External links GeneReview/NCBI/NIH/UW entry on Oculocutaneous Albinism Type 2 GeneReview/NCBI/NIH/UW entry on Oculocutaneous Albinism Type 4 Dermatologic terminology Disturbances of human pigmentation Skin pigmentation Autosomal recessive disorders
Albinism in humans
In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total computable function that is not primitive recursive. All primitive recursive functions are total and computable, but the Ackermann function illustrates that not all total computable functions are primitive recursive. After Ackermann's publication of his function (which had three nonnegative integer arguments), many authors modified it to suit various purposes, so that today "the Ackermann function" may refer to any of numerous variants of the original function. One common version, the two-argument Ackermann–Péter function, is defined as follows for nonnegative integers m and n: A(0, n) = n + 1; A(m + 1, 0) = A(m, 1); A(m + 1, n + 1) = A(m, A(m + 1, n)). Its value grows rapidly, even for small inputs. For example, A(4, 2) is an integer of 19,729 decimal digits (equivalent to 2^65536 − 3, or 2^2^2^2^2 − 3). History In the late 1920s, the mathematicians Gabriel Sudan and Wilhelm Ackermann, students of David Hilbert, were studying the foundations of computation. Both Sudan and Ackermann are credited with discovering total computable functions (termed simply "recursive" in some references) that are not primitive recursive. Sudan published the lesser-known Sudan function, then shortly afterwards and independently, in 1928, Ackermann published his function φ (the Greek letter phi). Ackermann's three-argument function, φ(m, n, p), is defined such that for p = 0, 1, 2 it reproduces the basic operations of addition, multiplication, and exponentiation as φ(m, n, 0) = m + n, φ(m, n, 1) = m · n and φ(m, n, 2) = m^n, and for p > 2 it extends these basic operations in a way that can be compared to the hyperoperations. (Aside from its historic role as a total-computable-but-not-primitive-recursive function, Ackermann's original function is seen to extend the basic arithmetic operations beyond exponentiation, although not as seamlessly as do variants of Ackermann's function that are specifically designed for that purpose, such as Goodstein's hyperoperation sequence.) In On the Infinite, David Hilbert hypothesized that the Ackermann function was not primitive recursive, but it was Ackermann, Hilbert's personal secretary and former student, who actually proved the hypothesis in his paper On Hilbert's Construction of the Real Numbers. Rózsa Péter and Raphael Robinson later developed a two-variable version of the Ackermann function that became preferred by almost all authors. The generalized hyperoperation sequence is a version of the Ackermann function as well. In 1963 R.C. Buck based an intuitive two-variable variant, F(m, n) = 2[m]n, on the hyperoperation sequence. Compared to most other versions Buck's function has no unessential offsets; for example F(1, n) = 2 + n, F(2, n) = 2n and F(3, n) = 2^n. Many other versions of the Ackermann function have been investigated. Definition Definition: as m-ary function Ackermann's original three-argument function φ(m, n, p) is defined recursively as follows for nonnegative integers m, n and p: φ(m, n, 0) = m + n; φ(m, 0, 1) = 0; φ(m, 0, 2) = 1; φ(m, 0, p) = m for p > 2; φ(m, n, p) = φ(m, φ(m, n − 1, p), p − 1) for n > 0 and p > 0. Of the various two-argument versions, the one developed by Péter and Robinson (called "the" Ackermann function by most authors) is defined for nonnegative integers m and n as follows: A(0, n) = n + 1; A(m + 1, 0) = A(m, 1); A(m + 1, n + 1) = A(m, A(m + 1, n)). The Ackermann function has also been expressed in relation to the hyperoperation sequence: A(m, n) = 2[m](n + 3) − 3, where 2[m]x denotes the m-th hyperoperation applied to 2 and x; or, written in Knuth's up-arrow notation (extended to integer indices): A(m, n) = (2 ↑^(m−2) (n + 3)) − 3; or, equivalently, in terms of Buck's function F: A(m, n) = F(m, n + 3) − 3. Definition: as iterated 1-ary function Define f^n as the n-th iterate of a function f: Iteration is the process of composing a function with itself a certain number of times. Function composition is an associative operation, so f^(n+1) = f ∘ f^n = f^n ∘ f. Conceiving the Ackermann function as a sequence of unary functions, one can set A_m(n) = A(m, n). 
The function then becomes a sequence of unary functions, defined from iteration: As function composition is associative, the last line can as well be Computation The recursive definition of the Ackermann function can naturally be transposed to a term rewriting system (TRS). TRS, based on 2-ary function The definition of the 2-ary Ackermann function leads to the obvious reduction rules Example Compute The reduction sequence is To compute one can use a stack, which initially contains the elements . Then repeatedly the two top elements are replaced according to the rules Schematically, starting from : WHILE stackLength <> 1 { POP 2 elements; PUSH 1 or 2 or 3 elements, applying the rules r1, r2, r3 } The pseudocode is published in . For example, on input , Remarks The leftmost-innermost strategy is implemented in 225 computer languages on Rosetta Code. For all the computation of takes no more than steps. pointed out that in the computation of the maximum length of the stack is , as long as . Their own algorithm, inherently iterative, computes within time and within space. TRS, based on iterated 1-ary function The definition of the iterated 1-ary Ackermann functions leads to different reduction rules As function composition is associative, instead of rule r6 one can define Like in the previous section the computation of can be implemented with a stack. Initially the stack contains the three elements . Then repeatedly the three top elements are replaced according to the rules Schematically, starting from : WHILE stackLength <> 1 { POP 3 elements; PUSH 1 or 3 or 5 elements, applying the rules r4, r5, r6; } Example On input the successive stack configurations are The corresponding equalities are When reduction rule r7 is used instead of rule r6, the replacements in the stack will follow The successive stack configurations will then be The corresponding equalities are Remarks On any given input the TRSs presented so far converge in the same number of steps. They also use the same reduction rules (in this comparison the rules r1, r2, r3 are considered "the same as" the rules r4, r5, r6/r7 respectively). For example, the reduction of converges in 14 steps: 6 × r1, 3 × r2, 5 × r3. The reduction of converges in the same 14 steps: 6 × r4, 3 × r5, 5 × r6/r7. The TRSs differ in the order in which the reduction rules are applied. When is computed following the rules {r4, r5, r6}, the maximum length of the stack stays below . When reduction rule r7 is used instead of rule r6, the maximum length of the stack is only . The length of the stack reflects the recursion depth. As the reduction according to the rules {r4, r5, r7} involves a smaller maximum depth of recursion, this computation is more efficient in that respect. TRS, based on hyperoperators As — or — showed explicitly, the Ackermann function can be expressed in terms of the hyperoperation sequence: or, after removal of the constant 2 from the parameter list, in terms of Buck's function Buck's function , a variant of Ackermann function by itself, can be computed with the following reduction rules: Instead of rule b6 one can define the rule To compute the Ackermann function it suffices to add three reduction rules These rules take care of the base case A(0,n), the alignment (n+3) and the fudge (-3). Example Compute The matching equalities are when the TRS with the reduction rule is applied: when the TRS with the reduction rule is applied: Remarks The computation of according to the rules {b1 - b5, b6, r8 - r10} is deeply recursive. 
The maximum depth of nested s is . The culprit is the order in which iteration is executed: . The first disappears only after the whole sequence is unfolded. The computation according to the rules {b1 - b5, b7, r8 - r10} is more efficient in that respect. The iteration simulates the repeated loop over a block of code. The nesting is limited to , one recursion level per iterated function. showed this correspondence. These considerations concern the recursion depth only. Either way of iterating leads to the same number of reduction steps, involving the same rules (when the rules b6 and b7 are considered "the same"). The reduction of for instance converges in 35 steps: 12 × b1, 4 × b2, 1 × b3, 4 × b5, 12 × b6/b7, 1 × r9, 1 × r10. The modus iterandi only affects the order in which the reduction rules are applied. A real gain of execution time can only be achieved by not recalculating subresults over and over again. Memoization is an optimization technique where the results of function calls are cached and returned when the same inputs occur again. See for instance . published a cunning algorithm which computes within time and within space. Huge numbers To demonstrate how the computation of results in many steps and in a large number: Table of values Computing the Ackermann function can be restated in terms of an infinite table. First, place the natural numbers along the top row. To determine a number in the table, take the number immediately to the left. Then use that number to look up the required number in the column given by that number and one row up. If there is no number to its left, simply look at the column headed "1" in the previous row. Here is a small upper-left portion of the table: The numbers here which are only expressed with recursive exponentiation or Knuth arrows are very large and would take up too much space to notate in plain decimal digits. Despite the large values occurring in this early section of the table, some even larger numbers have been defined, such as Graham's number, which cannot be written with any small number of Knuth arrows. This number is constructed with a technique similar to applying the Ackermann function to itself recursively. This is a repeat of the above table, but with the values replaced by the relevant expression from the function definition to show the pattern clearly: Properties General remarks It may not be immediately obvious that the evaluation of always terminates. However, the recursion is bounded because in each recursive application either decreases, or remains the same and decreases. Each time that reaches zero, decreases, so eventually reaches zero as well. (Expressed more technically, in each case the pair decreases in the lexicographic order on pairs, which is a well-ordering, just like the ordering of single non-negative integers; this means one cannot go down in the ordering infinitely many times in succession.) However, when decreases there is no upper bound on how much can increase — and it will often increase greatly. For small values of m like 1, 2, or 3, the Ackermann function grows relatively slowly with respect to n (at most exponentially). For , however, it grows much more quickly; even is about 2, and the decimal expansion of is very large by any typical measure. An interesting aspect is that the only arithmetic operation it ever uses is addition of 1. Its fast growing power is based solely on nested recursion. This also implies that its running time is at least proportional to its output, and so is also extremely huge. 
In actuality, for most cases the running time is far larger than the output; see above. A single-argument version that increases both and at the same time dwarfs every primitive recursive function, including very fast-growing functions such as the exponential function, the factorial function, multi- and superfactorial functions, and even functions defined using Knuth's up-arrow notation (except when the indexed up-arrow is used). It can be seen that is roughly comparable to in the fast-growing hierarchy. This extreme growth can be exploited to show that which is obviously computable on a machine with infinite memory such as a Turing machine and so is a computable function, grows faster than any primitive recursive function and is therefore not primitive recursive. Not primitive recursive The Ackermann function grows faster than any primitive recursive function and therefore is not itself primitive recursive. Specifically, one shows that to every primitive recursive function there exists a non-negative integer such that for all non-negative integers , Once this is established, it follows that itself is not primitive recursive, since otherwise putting would lead to the contradiction The proof proceeds as follows: define the class of all functions that grow slower than the Ackermann function and show that contains all primitive recursive functions. The latter is achieved by showing that contains the constant functions, the successor function, the projection functions and that it is closed under the operations of function composition and primitive recursion. Inverse Since the function considered above grows very rapidly, its inverse function, f, grows very slowly. This inverse Ackermann function f−1 is usually denoted by α. In fact, α(n) is less than 5 for any practical input size n, since is on the order of . This inverse appears in the time complexity of some algorithms, such as the disjoint-set data structure and Chazelle's algorithm for minimum spanning trees. Sometimes Ackermann's original function or other variations are used in these settings, but they all grow at similarly high rates. In particular, some modified functions simplify the expression by eliminating the −3 and similar terms. A two-parameter variation of the inverse Ackermann function can be defined as follows, where is the floor function: This function arises in more precise analyses of the algorithms mentioned above, and gives a more refined time bound. In the disjoint-set data structure, m represents the number of operations while n represents the number of elements; in the minimum spanning tree algorithm, m represents the number of edges while n represents the number of vertices. Several slightly different definitions of exist; for example, is sometimes replaced by n, and the floor function is sometimes replaced by a ceiling. Other studies might define an inverse function of one where m is set to a constant, such that the inverse applies to a particular row. The inverse of the Ackermann function is primitive recursive. Use as benchmark The Ackermann function, due to its definition in terms of extremely deep recursion, can be used as a benchmark of a compiler's ability to optimize recursion. The first published use of Ackermann's function in this way was in 1970 by Dragoș Vaida and, almost simultaneously, in 1971, by Yngve Sundblad. Sundblad's seminal paper was taken up by Brian Wichmann (co-author of the Whetstone benchmark) in a trilogy of papers written between 1975 and 1982. 
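As a concrete companion to the benchmark discussion above, the following is a minimal Python sketch of the two-argument (Ackermann–Péter) function, once as the naive deep recursion that such benchmarks exercise and once with an explicit stack in the spirit of the replacement rules sketched in the Computation section. The function names and the small self-check are illustrative assumptions, not code from the cited benchmark papers.

def ackermann(m, n):
    # Naive recursion straight from the two-argument definition; its deeply
    # nested calls are what make it useful as a compiler/recursion benchmark.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def ackermann_stack(m, n):
    # Iterative evaluation with an explicit stack: repeatedly pop the two top
    # values and push the arguments of the pending call(s) they stand for.
    stack = [m, n]
    while len(stack) > 1:
        n = stack.pop()
        m = stack.pop()
        if m == 0:
            stack.append(n + 1)              # A(0, n) = n + 1
        elif n == 0:
            stack.extend([m - 1, 1])         # A(m, 0) = A(m - 1, 1)
        else:
            stack.extend([m - 1, m, n - 1])  # A(m, n) = A(m - 1, A(m, n - 1))
    return stack[0]

if __name__ == "__main__":
    # Keep the inputs tiny: even A(4, 2) is astronomically large.
    for m in range(4):
        for n in range(4):
            assert ackermann(m, n) == ackermann_stack(m, n)
    print(ackermann_stack(3, 3))  # 61, i.e. 2**(3 + 3) - 3

On small inputs the two versions agree; the explicit stack merely trades call depth for stack length, which is why, as noted above, real gains in execution time come from memoization rather than from the order of evaluation.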
See also Computability theory Double recursion Fast-growing hierarchy Goodstein function Primitive recursive function Recursion (computer science) Notes References Bibliography External links An animated Ackermann function calculator Ackerman function implemented using a for loop Scott Aaronson, Who can name the biggest number? (1999) Ackermann functions. Includes a table of some values. Hyper-operations: Ackermann's Function and New Arithmetical Operation Robert Munafo's Large Numbers describes several variations on the definition of A. Gabriel Nivasch, Inverse Ackermann without pain on the inverse Ackermann function. Raimund Seidel, Understanding the inverse Ackermann function (PDF presentation). The Ackermann function written in different programming languages, (on Rosetta Code) Ackermann's Function (Archived 2009-10-24)—Some study and programming by Harry J. Smith. Arithmetic Large integers Special functions Theory of computation Computability theory
Ackermann function
The Association for Computing Machinery (ACM) is a US-based international learned society for computing. It was founded in 1947 and is the world's largest scientific and educational computing society. The ACM is a non-profit professional membership group, claiming nearly 100,000 student and professional members . Its headquarters are in New York City. The ACM is an umbrella organization for academic and scholarly interests in computer science (informatics). Its motto is "Advancing Computing as a Science & Profession". History The ACM was founded in 1947 under the name Eastern Association for Computing Machinery, which was changed the following year to the Association for Computing Machinery. Activities ACM is organized into over 171 local chapters and 37 Special Interest Groups (SIGs), through which it conducts most of its activities. Additionally, there are over 500 college and university chapters. The first student chapter was founded in 1961 at the University of Louisiana at Lafayette Many of the SIGs, such as SIGGRAPH, SIGDA, SIGPLAN, SIGCSE and SIGCOMM, sponsor regular conferences, which have become famous as the dominant venue for presenting innovations in certain fields. The groups also publish a large number of specialized journals, magazines, and newsletters. ACM also sponsors other computer science related events such as the worldwide ACM International Collegiate Programming Contest (ICPC), and has sponsored some other events such as the chess match between Garry Kasparov and the IBM Deep Blue computer. Services Publications ACM publishes over 50 journals including the prestigious Journal of the ACM, and two general magazines for computer professionals, Communications of the ACM (also known as Communications or CACM) and Queue. Other publications of the ACM include: ACM XRDS, formerly "Crossroads", was redesigned in 2010 and is the most popular student computing magazine in the US. ACM Interactions, an interdisciplinary HCI publication focused on the connections between experiences, people and technology, and the third largest ACM publication. ACM Computing Surveys (CSUR) Computers in Entertainment (CIE) ACM Journal on Emerging Technologies in Computing Systems (JETC) ACM Special Interest Group: Computers and Society (SIGCAS) A number of journals, specific to subfields of computer science, titled ACM Transactions. Some of the more notable transactions include: ACM Transactions on Computer Systems (TOCS) IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB) ACM Transactions on Computational Logic (TOCL) ACM Transactions on Computer-Human Interaction (TOCHI) ACM Transactions on Database Systems (TODS) ACM Transactions on Graphics (TOG) ACM Transactions on Mathematical Software (TOMS) ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) IEEE/ACM Transactions on Networking (TON) ACM Transactions on Programming Languages and Systems (TOPLAS) Although Communications no longer publishes primary research, and is not considered a prestigious venue, many of the great debates and results in computing history have been published in its pages. ACM has made almost all of its publications available to paid subscribers online at its Digital Library and also has a Guide to Computing Literature. Individual members additionally have access to Safari Books Online and Books24x7. ACM also offers insurance, online courses, and other services to its members. 
In 1997, ACM Press published Wizards and Their Wonders: Portraits in Computing (), written by Christopher Morgan, with new photographs by Louis Fabian Bachrach. The book is a collection of historic and current portrait photographs of figures from the computer industry. Portal and Digital Library The ACM Portal is an online service of the ACM. Its core are two main sections: ACM Digital Library and the ACM Guide to Computing Literature. The ACM Digital Library is the full-text collection of all articles published by the ACM in its articles, magazines and conference proceedings. The Guide is a bibliography in computing with over one million entries. The ACM Digital Library contains a comprehensive archive starting in the 1950s of the organization's journals, magazines, newsletters and conference proceedings. Online services include a forum called Ubiquity and Tech News digest. There is an extensive underlying bibliographic database containing key works of all genres from all major publishers of computing literature. This secondary database is a rich discovery service known as The ACM Guide to Computing Literature. ACM adopted a hybrid Open Access (OA) publishing model in 2013. Authors who do not choose to pay the OA fee must grant ACM publishing rights by either a copyright transfer agreement or a publishing license agreement. ACM was a "green" publisher before the term was invented. Authors may post documents on their own websites and in their institutional repositories with a link back to the ACM Digital Library's permanently maintained Version of Record. All metadata in the Digital Library is open to the world, including abstracts, linked references and citing works, citation and usage statistics, as well as all functionality and services. Other than the free articles, the full-texts are accessed by subscription. There is also a mounting challenge to the ACM's publication practices coming from the open access movement. Some authors see a subscription business model as less relevant and publish on their home pages or on unreviewed sites like arXiv. Other organizations have sprung up which do their peer review entirely free and online, such as Journal of Artificial Intelligence Research, Journal of Machine Learning Research and the Journal of Research and Practice in Information Technology. Membership grades In addition to student and regular members, ACM has several advanced membership grades to recognize those with multiple years of membership and "demonstrated performance that sets them apart from their peers". The number of Fellows, Distinguished Members, and Senior Members cannot exceed 1%, 10%, and 25% of the total number of professional members, respectively. Fellows The ACM Fellows Program was established by Council of the Association for Computing Machinery in 1993 "to recognize and honor outstanding ACM members for their achievements in computer science and information technology and for their significant contributions to the mission of the ACM." There are 1310 Fellows out of about 100,000 members. Distinguished Members In 2006, ACM began recognizing two additional membership grades, one which was called Distinguished Members. Distinguished Members (Distinguished Engineers, Distinguished Scientists, and Distinguished Educators) have at least 15 years of professional experience and 5 years of continuous ACM membership and "have made a significant impact on the computing field". 
Note that in 2006 when the Distinguished Members first came out, one of the three levels was called "Distinguished Member" and was changed about two years later to "Distinguished Educator". Those who already had the Distinguished Member title had their titles changed to one of the other three titles. List of Distinguished Members of the Association for Computing Machinery Senior Members Also in 2006, ACM began recognizing Senior Members. According to the ACM, "The Senior Members Grade recognizes those ACM members with at least 10 years of professional experience and 5 years of continuous Professional Membership who have demonstrated performance through technical leadership, and technical or professional contributions". Senior membership also requires 3 letters of reference Distinguished Speakers While not technically a membership grade, the ACM recognizes distinguished speakers on topics in computer science. A distinguished speaker is appointed for a three-year period. There are usually about 125 current distinguished speakers. The ACM website describes these people as 'Renowned International Thought Leaders'. The distinguished speakers program (DSP) has been in existence for over 20 years and serves as an outreach program that brings renowned experts from Academia, Industry and Government to present on the topic of their expertise. The DSP is overseen by a committee Chapters ACM has three kinds of chapters: Special Interest Groups, Professional Chapters, and Student Chapters. , ACM has professional & SIG Chapters in 56 countries. , there exist ACM student chapters in 41 different countries. Special Interest Groups SIGACCESS: Accessible Computing SIGACT: Algorithms and Computation Theory SIGAda: Ada Programming Language SIGAI: Artificial Intelligence SIGAPP: Applied Computing SIGARCH: Computer Architecture SIGBED: Embedded Systems SIGBio: Bioinformatics SIGCAS: Computers and Society SIGCHI: Computer–Human Interaction SIGCOMM: Data Communication SIGCSE: Computer Science Education SIGDA: Design Automation SIGDOC: Design of Communication SIGecom: Electronic Commerce SIGEVO: Genetic and Evolutionary Computation SIGGRAPH: Computer Graphics and Interactive Techniques SIGHPC: High Performance Computing SIGIR: Information Retrieval SIGITE: Information Technology Education SIGKDD: Knowledge Discovery and Data Mining SIGLOG: Logic and Computation SIGMETRICS: Measurement and Evaluation SIGMICRO: Microarchitecture SIGMIS: Management Information Systems SIGMM: Multimedia SIGMOBILE: Mobility of Systems, Users, Data and Computing SIGMOD: Management of Data SIGOPS: Operating Systems SIGPLAN: Programming Languages SIGSAC: Security, Audit, and Control SIGSAM: Symbolic and Algebraic Manipulation SIGSIM: Simulation and Modeling SIGSOFT: Software Engineering SIGSPATIAL: Spatial Information SIGUCCS: University and College Computing Services SIGWEB: Hypertext, Hypermedia, and Web Conferences ACM and its Special Interest Groups (SIGs) sponsors numerous conferences with 170 hosted worldwide in 2017. ACM Conferences page has an up-to-date complete list while a partial list is shown below. Most of the SIGs also have an annual conference. ACM conferences are often very popular publishing venues and are therefore very competitive. For example, the 2007 SIGGRAPH conference attracted about 30000 visitors, and CIKM only accepted 15% of the long papers that were submitted in 2005. 
COMPASS: International Conference on Computing and Sustainable Societies ASPLOS: International Conference on Architectural Support for Programming Languages and Operating Systems CHI: Conference on Human Factors in Computing Systems SIGCSE: SIGCSE Technical Symposium on Computer Science Education CIKM: Conference on Information and Knowledge Management DAC: Design Automation Conference DEBS: Distributed Event Based Systems FCRC: Federated Computing Research Conference GECCO: Genetic and Evolutionary Computation Conference SC: Supercomputing Conference SIGGRAPH: International Conference on Computer Graphics and Interactive Techniques Hypertext: Conference on Hypertext and Hypermedia JCDL: Joint Conference on Digital Libraries TAPIA: Richard Tapia Celebration of Diversity in Computing Conference SIGCOMM: ACM SIGCOMM Conference MobiHoc: International Symposium on Mobile Ad Hoc Networking and Computing The ACM is a co–presenter and founding partner of the Grace Hopper Celebration of Women in Computing (GHC) with the Anita Borg Institute for Women and Technology. Some conferences are hosted by ACM student branches; this includes Reflections Projections, which is hosted by UIUC ACM. In addition, ACM sponsors regional conferences. Regional conferences facilitate collaboration between nearby institutions and are well attended. For additional non-ACM conferences, see this list of computer science conferences. Awards The ACM presents or co–presents a number of awards for outstanding technical and professional achievements and contributions in computer science and information technology. ACM A. M. Turing Award ACM – AAAI Allen Newell Award ACM Athena Lecturer Award ACM/CSTA Cutler-Bell Prize in High School Computing ACM Distinguished Service Award ACM Doctoral Dissertation Award ACM Eugene L. Lawler Award ACM Fellowship, awarded annually since 1993 ACM Gordon Bell Prize ACM Grace Murray Hopper Award ACM – IEEE CS George Michael Memorial HPC Fellowships ACM – IEEE CS Ken Kennedy Award ACM – IEEE Eckert-Mauchly Award ACM India Doctoral Dissertation Award ACM Karl V. Karlstrom Outstanding Educator Award ACM Paris Kanellakis Theory and Practice Award ACM Policy Award ACM Presidential Award ACM Prize in Computing (formerly: ACM – Infosys Foundation Award in the Computing Sciences) ACM Programming Systems and Languages Paper Award ACM Student Research Competition ACM Software System Award International Science and Engineering Fair Outstanding Contribution to ACM Award SIAM/ACM Prize in Computational Science and Engineering Over 30 of ACM's Special Interest Groups also award individuals for their contributions, with a few listed below. ACM Alan D. Berenbaum Distinguished Service Award ACM Maurice Wilkes Award ISCA Influential Paper Award Leadership The President of ACM for 2020–2022 is Gabriele Kotsis, Professor at the Johannes Kepler University Linz. She is the successor of Cherri M. Pancake (2018–2020), Professor Emeritus at Oregon State University and Director of the Northwest Alliance for Computational Science and Engineering (NACSE); Vicki L. Hanson (2016–2018), Distinguished Professor at the Rochester Institute of Technology and Visiting Professor at the University of Dundee; Alexander L. 
Wolf (2014–2016), Dean of the Jack Baskin School of Engineering at the University of California, Santa Cruz; Vint Cerf (2012–2014), an American computer scientist who is recognized as one of "the fathers of the Internet"; Alain Chesnais (2010–2012), a French citizen living in Toronto, Ontario, Canada, where he runs his company named Visual Transitions; and Dame Wendy Hall of the University of Southampton, UK (2008–2010). ACM is led by a Council consisting of the President, Vice-President, Treasurer, Past President, SIG Governing Board Chair, Publications Board Chair, three representatives of the SIG Governing Board, and seven Members–At–Large. This institution is often referred to simply as "Council" in Communications of the ACM. Infrastructure ACM has five "Boards" that make up various committees and subgroups, to help Headquarters staff maintain quality services and products. These boards are as follows: Publications Board SIG Governing Board Education Board Membership Services Board Practitioners Board ACM Council on Women in Computing ACM-W, the ACM council on women in computing, supports, celebrates, and advocates internationally for the full engagement of women in computing. ACM–W's main programs are regional celebrations of women in computing, ACM-W chapters, and scholarships for women CS students to attend research conferences. In India and Europe these activities are overseen by ACM-W India and ACM-W Europe respectively. ACM-W collaborates with organizations such as the Anita Borg Institute, the National Center for Women & Information Technology (NCWIT), and the Committee on the Status of Women in Computing Research (CRA-W). Athena Lectures The ACM-W gives an annual Athena Lecturer Award to honor outstanding women researchers who have made fundamental contributions to computer science. This program began in 2006. Speakers are nominated by SIG officers. 2006–2007: Deborah Estrin of UCLA 2007–2008: Karen Spärck Jones of Cambridge University 2008–2009: Shafi Goldwasser of MIT and the Weizmann Institute of Science 2009–2010: Susan J. Eggers of the University of Washington 2010–2011: Mary Jane Irwin of the Pennsylvania State University 2011–2012: Judith S. Olson of the University of California, Irvine 2012–2013: Nancy Lynch of MIT 2013–2014: Katherine Yelick of LBNL 2014–2015: Susan Dumais of Microsoft Research 2015–2016: Jennifer Widom of Stanford University 2016–2017: Jennifer Rexford of Princeton University 2017–2018: Lydia Kavraki of Rice University 2018–2019: Andrea Goldsmith of Princeton University 2019–2020: Elisa Bertino of Purdue University 2020–2021: Sarit Kraus of Bar-Ilan University 2021–2022: Ayanna Howard of Ohio State University Cooperation ACM's primary partner has been the IEEE Computer Society (IEEE-CS), which is the largest subgroup of the Institute of Electrical and Electronics Engineers (IEEE). The IEEE focuses more on hardware and standardization issues than theoretical computer science, but there is considerable overlap with ACM's agenda. They have many joint activities including conferences, publications and awards. ACM and its SIGs co-sponsor about 20 conferences each year with IEEE-CS and other parts of IEEE. The Eckert-Mauchly Award and the Ken Kennedy Award, both major awards in computer science, are given jointly by ACM and the IEEE-CS. They occasionally cooperate on projects like developing computing curricula. ACM has also jointly sponsored events with other professional organizations like the Society for Industrial and Applied Mathematics (SIAM). 
Criticism In December 2019, the ACM signed a letter to President Trump opposing a proposed executive order that would have mandated immediate open access to federally funded research. A petition against this stance was formed and collected over a thousand signatures. In reaction, ACM clarified its position. The Symposium on Computational Geometry (SoCG), while originally an ACM conference, parted ways with ACM in 2014 because of problems when organizing conferences abroad. See also ACM Classification Scheme Franz Alt, former president Edmund Berkeley, co–founder Computer science Computing Bernard Galler, former president Fellows of the ACM (by year) Fellows of the ACM (category) Grace Murray Hopper Award Presidents of the Association for Computing Machinery Timeline of computing hardware before 1950 Turing Award List of academic databases and search engines References External links ACM portal for publications ACM Digital Library Association for Computing Machinery Records, 1947-2009, Charles Babbage Institute, University of Minnesota. ACM Upsilon Phi Epsilon honor society 1947 establishments in the United States Computer science-related professional associations International learned societies Organizations established in 1947
Association for Computing Machinery
In chemistry, an alkali (; from ) is a basic, ionic salt of an alkali metal or an alkaline earth metal. An alkali can also be defined as a base that dissolves in water. A solution of a soluble base has a pH greater than 7.0. The adjective alkaline is commonly, and alkalescent less often, used in English as a synonym for basic, especially for bases soluble in water. This broad use of the term is likely to have come about because alkalis were the first bases known to obey the Arrhenius definition of a base, and they are still among the most common bases. Etymology The word "alkali" is derived from Arabic al qalīy (or alkali), meaning the calcined ashes (see calcination), referring to the original source of alkaline substances. A water extract of burned plant ashes, called potash and composed mostly of potassium carbonate, was mildly basic. After heating this substance with calcium hydroxide (slaked lime), a far more strongly basic substance known as caustic potash (potassium hydroxide) was produced. Caustic potash was traditionally used in conjunction with animal fats to produce soft soaps through saponification, a caustic process for rendering soap from fats that has been known since antiquity. Plant potash lent the name to the element potassium, which was first derived from caustic potash, and also gave potassium its chemical symbol K (from the German name Kalium), which ultimately derived from alkali. Common properties of alkalis and bases Alkalis are all Arrhenius bases, ones which form hydroxide ions (OH⁻) when dissolved in water. Common properties of alkaline aqueous solutions include: Moderately concentrated solutions (over 10⁻³ M) have a pH of 10 or greater. This means that they will turn phenolphthalein from colorless to pink. Concentrated solutions are caustic (causing chemical burns). Alkaline solutions are slippery or soapy to the touch, due to the saponification of the fatty substances on the surface of the skin. Alkalis are normally water-soluble, although some, like barium carbonate, are only soluble when reacting with an acidic aqueous solution. Difference between alkali and base The terms "base" and "alkali" are often used interchangeably, particularly outside the context of chemistry and chemical engineering. There are various more specific definitions for the concept of an alkali. Alkalis are usually defined as a subset of the bases. One of two subsets is commonly chosen. A basic salt of an alkali metal or alkaline earth metal (This includes Mg(OH)₂ (magnesium hydroxide) but excludes NH₃ (ammonia).) Any base that is soluble in water and forms hydroxide ions, or the solution of a base in water. (This includes both Mg(OH)₂ and NH₃, which forms NH₄OH.) The second subset of bases is also called an "Arrhenius base". Alkali salts Alkali salts are soluble hydroxides of alkali metals and alkaline earth metals, of which common examples are: Sodium hydroxide (NaOH) – often called "caustic soda" Potassium hydroxide (KOH) – commonly called "caustic potash" Lye – generic term for either of the two previous salts or their mixture Calcium hydroxide (Ca(OH)₂) – saturated solution known as "limewater" Magnesium hydroxide (Mg(OH)₂) – an atypical alkali since it has low solubility in water (although the dissolved portion is considered a strong base due to complete dissociation of its ions) Alkaline soil Soils with pH values that are higher than 7.3 are usually defined as being alkaline. These soils can occur naturally, due to the presence of alkali salts. 
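As an aside on the solution pH figures quoted under the common properties above, the values follow directly from the hydroxide-ion concentration. The relations below are a minimal worked sketch, assuming a strong, fully dissociated monoacidic base such as NaOH and the standard ion product of water (Kw = 10⁻¹⁴ at 25 °C); neither assumption is stated in this article.

```latex
\mathrm{pOH} = -\log_{10}[\mathrm{OH^-}], \qquad \mathrm{pH} = 14 - \mathrm{pOH}
```

Under these assumptions a 10⁻³ M NaOH solution has pOH = 3 and pH = 11, above the pH 10 figure quoted earlier, which itself corresponds to a hydroxide concentration of about 10⁻⁴ M.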
Although many plants do prefer slightly basic soil (including vegetables like cabbage and fodder like buffalo grass), most plants prefer a mildly acidic soil (with a pH between 6.0 and 6.8), and alkaline soils can cause problems. Alkali lakes In alkali lakes (also called soda lakes), evaporation concentrates the naturally occurring carbonate salts, giving rise to an alkaline and often saline lake. Examples of alkali lakes: Alkali Lake, Lake County, Oregon Baldwin Lake, San Bernardino County, California Bear Lake on the Utah–Idaho border Lake Magadi in Kenya Lake Turkana in Kenya Mono Lake, near Owens Valley in California Redberry Lake, Saskatchewan Summer Lake, Lake County, Oregon Tramping Lake, Saskatchewan See also Alkali metals Alkaline earth metals Base (chemistry) References Inorganic chemistry
Alkali
An anemometer is a device used for measuring wind speed and direction. It is also a common weather station instrument. The term is derived from the Greek word anemos, which means wind, and is used to describe any wind speed instrument used in meteorology. The first known description of an anemometer was given by Leon Battista Alberti in 1450. History The anemometer has changed little since its development in the 15th century. Leon Battista Alberti (1404–1472) is said to have invented the first mechanical anemometer around 1450. In the ensuing centuries numerous others, including Robert Hooke (1635–1703), developed their own versions, with some being mistakenly credited as the inventor. In 1846, John Thomas Romney Robinson (1792–1882) improved upon the design by using four hemispherical cups and mechanical wheels. In 1926, Canadian meteorologist John Patterson (January 3, 1872 – February 22, 1956) developed a three-cup anemometer, which was improved by Brevoort and Joiner in 1935. In 1991, Derek Weston added the ability to measure wind direction. In 1994, Andreas Pflitsch developed the sonic anemometer. Velocity anemometers Cup anemometers A simple type of anemometer was invented in 1845 by Rev Dr John Thomas Romney Robinson, of Armagh Observatory. It consisted of four hemispherical cups mounted on horizontal arms, which were mounted on a vertical shaft. The air flow past the cups in any horizontal direction turned the shaft at a rate that was roughly proportional to the wind speed. Therefore, counting the turns of the shaft over a set time interval produced a value proportional to the average wind speed for a wide range of speeds. It is also called a rotational anemometer. On an anemometer with four cups, it is easy to see that since the cups are arranged symmetrically on the end of the arms, the wind always has the hollow of one cup presented to it and is blowing on the back of the cup on the opposite end of the cross. Since a hollow hemisphere has a drag coefficient of .38 on the spherical side and 1.42 on the hollow side, more force is generated on the cup that is presenting its hollow side to the wind. Because of this asymmetrical force, torque is generated on the axis of the anemometer, causing it to spin. Theoretically, the speed of rotation of the anemometer should be proportional to the wind speed because the force produced on an object is proportional to the speed of the fluid flowing past it. However, in practice other factors influence the rotational speed, including turbulence produced by the apparatus, increasing drag in opposition to the torque that is produced by the cups and support arms, and friction of the mount point. When Robinson first designed his anemometer, he asserted that the cups moved one-third of the speed of the wind, unaffected by the cup size or arm length. This was apparently confirmed by some early independent experiments, but it was incorrect. Instead, the ratio of the speed of the wind and that of the cups, the anemometer factor, depends on the dimensions of the cups and arms, and may have a value between two and a little over three. Every previous experiment involving an anemometer had to be repeated after the error was discovered. The three-cup anemometer developed by the Canadian John Patterson in 1926 and subsequent cup improvements by Brevoort & Joiner of the United States in 1935 led to a cupwheel design with a nearly linear response and had an error of less than 3% up to . 
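The relationship just described for cup anemometers—shaft rotation roughly proportional to wind speed, scaled by an instrument-specific anemometer factor—can be captured in a few lines. The sketch below is illustrative only: the function name and the example numbers (arm radius, counting interval, a factor of 3.0) are assumptions, and the factor for a real instrument must come from calibration.

```python
import math

def wind_speed_from_turns(turns, interval_s, arm_radius_m, anemometer_factor=3.0):
    """Estimate wind speed (m/s) from a cup anemometer's shaft rotation.

    turns             -- complete shaft revolutions counted during the interval
    interval_s        -- length of the counting interval in seconds
    arm_radius_m      -- distance from the shaft axis to the cup centres (m)
    anemometer_factor -- calibrated ratio of wind speed to cup-centre speed;
                         typically between two and a little over three
    """
    revolutions_per_second = turns / interval_s
    cup_centre_speed = 2 * math.pi * arm_radius_m * revolutions_per_second
    return anemometer_factor * cup_centre_speed

# Hypothetical example: 120 turns counted in 30 s on 7 cm arms
print(round(wind_speed_from_turns(120, 30, 0.07), 2))  # about 5.28 m/s
```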
Patterson found that each cup produced maximum torque when it was at 45° to the wind flow. The three-cup anemometer also had a more constant torque and responded more quickly to gusts than the four-cup anemometer. The three-cup anemometer was further modified by the Australian Dr. Derek Weston in 1991 to measure both wind direction and wind speed. Weston added a tag to one cup, which causes the cupwheel speed to increase and decrease as the tag moves alternately with and against the wind. Wind direction is calculated from these cyclical changes in cupwheel speed, while wind speed is determined from the average cupwheel speed. Three-cup anemometers are currently used as the industry standard for wind resource assessment studies & practice. Vane anemometers One of the other forms of mechanical velocity anemometer is the vane anemometer. It may be described as a windmill or a propeller anemometer. Unlike the Robinson anemometer, whose axis of rotation is vertical, the vane anemometer must have its axis parallel to the direction of the wind and is therefore horizontal. Furthermore, since the wind varies in direction and the axis has to follow its changes, a wind vane or some other contrivance to fulfill the same purpose must be employed. A vane anemometer thus combines a propeller and a tail on the same axis to obtain accurate and precise wind speed and direction measurements from the same instrument. The speed of the fan is measured by a rev counter and converted to a windspeed by an electronic chip. Hence, volumetric flow rate may be calculated if the cross-sectional area is known. In cases where the direction of the air motion is always the same, as in ventilating shafts of mines and buildings, wind vanes known as air meters are employed, and give satisfactory results. Hot-wire anemometers Hot wire anemometers use a fine wire (on the order of several micrometres) electrically heated to some temperature above the ambient. Air flowing past the wire cools the wire. As the electrical resistance of most metals is dependent upon the temperature of the metal (tungsten is a popular choice for hot-wires), a relationship can be obtained between the resistance of the wire and the speed of the air. In most cases, they cannot be used to measure the direction of the airflow, unless coupled with a wind vane. Several ways of implementing this exist, and hot-wire devices can be further classified as CCA (constant current anemometer), CVA (constant voltage anemometer) and CTA (constant-temperature anemometer). The voltage output from these anemometers is thus the result of some sort of circuit within the device trying to maintain the specific variable (current, voltage or temperature) constant, following Ohm's law. Additionally, PWM (pulse-width modulation) anemometers are also used, wherein the velocity is inferred by the time length of a repeating pulse of current that brings the wire up to a specified resistance and then stops until a threshold "floor" is reached, at which time the pulse is sent again. Hot-wire anemometers, while extremely delicate, have extremely high frequency-response and fine spatial resolution compared to other measurement methods, and as such are almost universally employed for the detailed study of turbulent flows, or any flow in which rapid velocity fluctuations are of interest. An industrial version of the fine-wire anemometer is the thermal flow meter, which follows the same concept, but uses two pins or strings to monitor the variation in temperature. 
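For the hot-wire anemometers described above, the output voltage is usually converted to a flow speed through an empirical calibration curve rather than from first principles. A commonly used form (not given in this article, so treat it as an illustrative assumption) is King's law, in which the coefficients A and B and the exponent n (roughly 0.45 to 0.5) are fitted during calibration of a constant-temperature probe; E is the anemometer output voltage and v the air speed past the wire.

```latex
E^{2} = A + B\,v^{\,n}
\quad\Longrightarrow\quad
v = \left(\frac{E^{2} - A}{B}\right)^{1/n}
```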
The strings contain fine wires, but encasing the wires makes them much more durable and capable of accurately measuring air, gas, and emissions flow in pipes, ducts, and stacks. Industrial applications often contain dirt that will damage the classic hot-wire anemometer. Laser Doppler anemometers In laser Doppler velocimetry, laser Doppler anemometers use a beam of light from a laser that is divided into two beams, with one propagated out of the anemometer. Particulates (or deliberately introduced seed material) flowing along with air molecules near where the beam exits reflect, or backscatter, the light back into a detector, where it is measured relative to the original laser beam. The motion of the particles produces a Doppler shift in the backscattered laser light, which is used to calculate the speed of the particles, and therefore of the air around the anemometer. Ultrasonic anemometers Ultrasonic anemometers, first developed in the 1950s, use ultrasonic sound waves to measure wind velocity. They measure wind speed based on the time of flight of sonic pulses between pairs of transducers. Measurements from pairs of transducers can be combined to yield a measurement of velocity in 1-, 2-, or 3-dimensional flow. The spatial resolution is given by the path length between transducers, which is typically 10 to 20 cm. Ultrasonic anemometers can take measurements with very fine temporal resolution, 20 Hz or better, which makes them well suited for turbulence measurements. The lack of moving parts makes them appropriate for long-term use in exposed automated weather stations and weather buoys, where the accuracy and reliability of traditional cup-and-vane anemometers are adversely affected by salty air or dust. Their main disadvantage is the distortion of the air flow by the structure supporting the transducers, which requires a correction based upon wind tunnel measurements to minimize the effect. An international standard for this process, ISO 16622 Meteorology—Ultrasonic anemometers/thermometers—Acceptance test methods for mean wind measurements, is in general circulation. Another disadvantage is lower accuracy due to precipitation, where rain drops may vary the speed of sound. Since the speed of sound varies with temperature, and is virtually unaffected by pressure changes, ultrasonic anemometers are also used as thermometers. Two-dimensional (wind speed and wind direction) sonic anemometers are used in applications such as weather stations, ship navigation, aviation, weather buoys and wind turbines. Monitoring wind turbines usually requires a refresh rate of wind speed measurements of 3 Hz, easily achieved by sonic anemometers. Three-dimensional sonic anemometers are widely used to measure gas emissions and ecosystem fluxes using the eddy covariance method when used with fast-response infrared gas analyzers or laser-based analyzers. Two-dimensional wind sensors are of two types: Two ultrasonic paths: These sensors have four arms. The disadvantage of this type of sensor is that when the wind comes in the direction of an ultrasonic path, the arms disturb the airflow, reducing the accuracy of the resulting measurement. Three ultrasonic paths: These sensors have three arms. They provide one redundant measurement path, which improves the sensor's accuracy and reduces the effect of aerodynamic disturbance. Acoustic resonance anemometers Acoustic resonance anemometers are a more recent variant of sonic anemometer. The technology was invented by Savvas Kapartis and patented in 1999. 
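The time-of-flight principle used by the sonic anemometers above can be made concrete with a short sketch. Along a single transducer path of length L, a pulse travelling with the wind arrives in t1 = L/(c + v) and the opposing pulse in t2 = L/(c − v), so both the along-path wind component v and the speed of sound c (and hence temperature) can be recovered. The function name and the example timings below are hypothetical.

```python
def sonic_path_measurement(path_length_m, t_with_wind_s, t_against_wind_s):
    """Return (wind component, speed of sound) along one transducer path.

    From t1 = L / (c + v) and t2 = L / (c - v):
        v = (L / 2) * (1/t1 - 1/t2)
        c = (L / 2) * (1/t1 + 1/t2)
    """
    L = path_length_m
    v = 0.5 * L * (1.0 / t_with_wind_s - 1.0 / t_against_wind_s)
    c = 0.5 * L * (1.0 / t_with_wind_s + 1.0 / t_against_wind_s)
    return v, c

# Hypothetical example: 15 cm path, 431.0 microseconds with the wind, 437.3 against
v, c = sonic_path_measurement(0.15, 431.0e-6, 437.3e-6)
print(round(v, 2), round(c, 1))  # roughly 2.51 m/s and 345.5 m/s
```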
Whereas conventional sonic anemometers rely on time of flight measurement, acoustic resonance sensors use resonating acoustic (ultrasonic) waves within a small purpose-built cavity in order to perform their measurement. Built into the cavity is an array of ultrasonic transducers, which are used to create separate standing-wave patterns at ultrasonic frequencies. As wind passes through the cavity, a change in the waves' properties occurs (a phase shift). By measuring the amount of phase shift in the signals received by each transducer, and then by mathematically processing the data, the sensor is able to provide an accurate horizontal measurement of wind speed and direction. Because acoustic resonance technology enables measurement within a small cavity, the sensors are typically smaller in size than other ultrasonic sensors. The small size of acoustic resonance anemometers makes them physically strong and easy to heat, and therefore resistant to icing. This combination of features means that they achieve high levels of data availability and are well suited to wind turbine control and to other uses that require small robust sensors, such as battlefield meteorology. One issue with this sensor type is measurement accuracy when compared to a calibrated mechanical sensor. For many end uses, this weakness is compensated for by the sensor's longevity and the fact that it does not require recalibration once installed. Ping-pong ball anemometers A common anemometer for basic use is constructed from a ping-pong ball attached to a string. When the wind blows horizontally, it presses on and moves the ball; because ping-pong balls are very lightweight, they move easily in light winds. Measuring the angle between the string-ball apparatus and the vertical gives an estimate of the wind speed. This type of anemometer is mostly used for middle-school-level instruction, where most students make their own, but a similar device was also flown on the Phoenix Mars Lander. Pressure anemometers The first designs of anemometers that measure the pressure were divided into plate and tube classes. Plate anemometers These are the first modern anemometers. They consist of a flat plate suspended from the top so that the wind deflects the plate. In 1450, the Italian artist and architect Leon Battista Alberti invented the first mechanical anemometer; in 1664 it was re-invented by Robert Hooke (who is often mistakenly considered the inventor of the first anemometer). Later versions of this form consisted of a flat plate, either square or circular, which is kept normal to the wind by a wind vane. The pressure of the wind on its face is balanced by a spring. The compression of the spring determines the actual force which the wind is exerting on the plate, and this is either read off on a suitable gauge, or on a recorder. Instruments of this kind do not respond to light winds, are inaccurate for high wind readings, and are slow at responding to variable winds. Plate anemometers have been used to trigger high wind alarms on bridges. Tube anemometers James Lind's anemometer of 1775 consisted of a glass U tube containing a liquid manometer (pressure gauge), with one end bent in a horizontal direction to face the wind while the other, vertical end remains parallel to the wind flow. Though Lind's was not the first, it was the most practical and best-known anemometer of this type. If the wind blows into the mouth of a tube, it causes an increase of pressure on one side of the manometer. 
The wind over the open end of a vertical tube causes little change in pressure on the other side of the manometer. The resulting elevation difference in the two legs of the U tube is an indication of the wind speed. However, an accurate measurement requires that the wind speed be directly into the open end of the tube; small departures from the true direction of the wind cause large variations in the reading. The successful metal pressure tube anemometer of William Henry Dines in 1892 utilized the same pressure difference between the open mouth of a straight tube facing the wind and a ring of small holes in a vertical tube which is closed at the upper end. Both are mounted at the same height. The pressure differences on which the action depends are very small, and special means are required to register them. The recorder consists of a float in a sealed chamber partially filled with water. The pipe from the straight tube is connected to the top of the sealed chamber and the pipe from the small tubes is directed into the bottom inside the float. Since the pressure difference determines the vertical position of the float, this is a measure of the wind speed. The great advantage of the tube anemometer lies in the fact that the exposed part can be mounted on a high pole, and requires no oiling or attention for years; and the registering part can be placed in any convenient position. Two connecting tubes are required. It might appear at first sight as though one connection would serve, but the differences in pressure on which these instruments depend are so minute that the pressure of the air in the room where the recording part is placed has to be considered. Thus if the instrument depends on the pressure or suction effect alone, and this pressure or suction is measured against the air pressure in an ordinary room in which the doors and windows are carefully closed and a newspaper is then burnt up the chimney, an effect may be produced equal to a wind of 10 mi/h (16 km/h); and the opening of a window in rough weather, or the opening of a door, may entirely alter the registration. While the Dines anemometer had an error of only 1% at , it did not respond very well to low winds due to the poor response of the flat plate vane required to turn the head into the wind. In 1918 an aerodynamic vane with eight times the torque of the flat plate overcame this problem. Pitot tube static anemometers Modern tube anemometers use the same principle as in the Dines anemometer but with a different design. The implementation uses a pitot-static tube, which is a pitot tube with two ports, pitot and static, that is normally used in measuring the airspeed of aircraft. The pitot port measures the dynamic pressure at the open mouth of a tube with a pointed head facing the wind, and the static port measures the static pressure from small holes along the side of that tube. The pitot tube is connected to a tail so that the tube's head always faces the wind. Additionally, the tube is heated to prevent rime ice formation on the tube. There are two lines from the tube down to the devices that measure the difference in pressure between the two lines. The measurement devices can be manometers, pressure transducers, or analog chart recorders. Effect of density on measurements In the tube anemometer the dynamic pressure is actually being measured, although the scale is usually graduated as a velocity scale. 
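Since, as just noted, a tube anemometer really measures dynamic pressure, the velocity graduation rests on the standard incompressible pitot-static relation, sketched below as background rather than taken from this article. The same relation shows why a correction is needed when the actual air density ρ differs from the calibration density ρ₀, which is the subject of the next passage.

```latex
\Delta p = p_{t} - p_{s} = \tfrac{1}{2}\,\rho\,v^{2}
\quad\Longrightarrow\quad
v = \sqrt{\frac{2\,\Delta p}{\rho}},
\qquad
v_{\text{true}} = v_{\text{indicated}}\sqrt{\frac{\rho_{0}}{\rho}}
```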
If the actual air density differs from the calibration value, due to differing temperature, elevation or barometric pressure, a correction is required to obtain the actual wind speed. Approximately 1.5% (1.6% above 6,000 feet) should be added to the velocity recorded by a tube anemometer for each 1000 ft (5% for each kilometer) above sea-level. Effect of icing At airports, it is essential to have accurate wind data under all conditions, including freezing precipitation. Anemometry is also required in monitoring and controlling the operation of wind turbines, which in cold environments are prone to in-cloud icing. Icing alters the aerodynamics of an anemometer and may entirely block it from operating. Therefore, anemometers used in these applications must be internally heated. Both cup anemometers and sonic anemometers are presently available with heated versions. Instrument location In order for wind speeds to be comparable from location to location, the effect of the terrain needs to be considered, especially in regard to height. Other considerations are the presence of trees, and both natural canyons and artificial canyons (urban buildings). The standard anemometer height in open rural terrain is 10 meters. See also Air flow meter Anemoi, for the ancient origin of the name of this technology Anemoscope, ancient device for measuring or predicting wind direction or weather Automated airport weather station Night of the Big Wind Particle image velocimetry Savonius wind turbine Wind power forecasting Wind run Windsock, a simple high-visibility indicator of approximate wind speed and direction Notes References Meteorological Instruments, W.E. Knowles Middleton and Athelstan F. Spilhaus, Third Edition revised, University of Toronto Press, Toronto, 1953 Invention of the Meteorological Instruments, W. E. Knowles Middleton, The Johns Hopkins Press, Baltimore, 1969 External links Description of the development and the construction of an ultrasonic anemometer Animation Showing Sonic Principle of Operation (Time of Flight Theory) – Gill Instruments Collection of historical anemometer Principle of Operation: Acoustic Resonance measurement – FT Technologies Thermopedia, "Anemometers (laser doppler)" Thermopedia, "Anemometers (pulsed thermal)" Thermopedia, "Anemometers (vane)" The Rotorvane Anemometer. Measuring both wind speed and direction using a tagged three-cup sensor Italian inventions Measuring instruments Meteorological instrumentation and equipment Navigational equipment Wind power 15th-century inventions
Anemometer