314575
https://en.wikipedia.org/wiki/Octagon
Octagon
In geometry, an octagon is an eight-sided polygon or 8-gon. A regular octagon has Schläfli symbol {8} and can also be constructed as a quasiregular truncated square, t{4}, which alternates two types of edges. A truncated octagon, t{8}, is a hexadecagon, {16}. A 3D analog of the octagon can be the rhombicuboctahedron, with the triangular faces on it like the replaced edges, if one considers the octagon to be a truncated square. Properties The sum of all the internal angles of any octagon is 1080°. As with all polygons, the external angles total 360°. If squares are constructed all internally or all externally on the sides of an octagon, then the midpoints of the segments connecting the centers of opposite squares form a quadrilateral that is both equidiagonal and orthodiagonal (that is, whose diagonals are equal in length and at right angles to each other). The midpoint octagon of a reference octagon has its eight vertices at the midpoints of the sides of the reference octagon. If squares are constructed all internally or all externally on the sides of the midpoint octagon, then the midpoints of the segments connecting the centers of opposite squares themselves form the vertices of a square. Regularity A regular octagon is a closed figure with sides of the same length and internal angles of the same size. It has eight lines of reflective symmetry and rotational symmetry of order 8. A regular octagon is represented by the Schläfli symbol {8}. The internal angle at each vertex of a regular octagon is 135° (3π/4 radians). The central angle is 45° (π/4 radians). Area The area of a regular octagon of side length a is given by A = 2(1 + √2)a² ≈ 4.828a². In terms of the circumradius R, the area is A = 2√2 R² ≈ 2.828R². In terms of the apothem r (see also inscribed figure), the area is A = 8(√2 − 1)r² ≈ 3.314r². These last two coefficients bracket the value of pi, the area of the unit circle. The area can also be expressed as A = S² − a², where S is the span of the octagon, or the second-shortest diagonal; and a is the length of one of the sides, or bases. This is easily proven if one takes an octagon, draws a square around the outside (making sure that four of the eight sides overlap with the four sides of the square) and then takes the corner triangles (these are 45–45–90 triangles) and places them with right angles pointed inward, forming a square. The edges of this square are each the length of the base. Given the length of a side a, the span S is S = a/√2 + a + a/√2 = (1 + √2)a ≈ 2.414a. The span, then, is equal to the silver ratio times the side, a. The area is then as above: A = ((1 + √2)a)² − a² = 2(1 + √2)a² ≈ 4.828a². Expressed in terms of the span, the area is A = 2(√2 − 1)S² ≈ 0.828S². Another simple formula for the area is A = 2aS. More often the span S is known, and the length of the sides, a, is to be determined, as when cutting a square piece of material into a regular octagon. From the above, a ≈ S/2.414. The two end lengths e on each side (the leg lengths of the triangles truncated from the square), as well as being e = a/√2, may be calculated as e = (S − a)/2. Circumradius and inradius The circumradius of the regular octagon in terms of the side length a is R = (a/2)√(4 + 2√2) ≈ 1.307a, and the inradius is r = (a/2)(1 + √2) ≈ 1.207a (that is, one-half the silver ratio times the side, a, or one-half the span, S). The inradius can be calculated from the circumradius as r = R cos(π/8) ≈ 0.924R. Diagonals The regular octagon, in terms of the side length a, has three different types of diagonals: Short diagonal; Medium diagonal (also called span or height), which is twice the length of the inradius; Long diagonal, which is twice the length of the circumradius. The formula for each of them follows from the basic principles of geometry. 
Here are the formulas for their length: Short diagonal: a√(2 + √2); Medium diagonal: (1 + √2)a (the silver ratio times a); Long diagonal: a√(4 + 2√2). Construction A regular octagon at a given circumcircle may be constructed as follows: Draw a circle and a diameter AOE, where O is the center and A, E are points on the circumcircle. Draw another diameter GOC, perpendicular to AOE. (Note in passing that A, C, E, G are vertices of a square.) Draw the bisectors of the right angles GOA and EOG, making two more diameters HOD and FOB. A, B, C, D, E, F, G, H are the vertices of the octagon. A regular octagon can be constructed using a straightedge and a compass, as 8 = 2³ is a power of two. The regular octagon can be constructed with meccano bars. Twelve bars of size 4, three bars of size 5 and two bars of size 6 are required. Each side of a regular octagon subtends half a right angle at the centre of the circle which connects its vertices. Its area can thus be computed as the sum of eight isosceles triangles, leading to the result Area = 2a²(√2 + 1) for an octagon of side a. Standard coordinates The coordinates for the vertices of a regular octagon centered at the origin and with side length 2 are (±1, ±(1 + √2)) and (±(1 + √2), ±1). Dissectibility Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m − 1)/2 parallelograms. In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the regular octagon, m = 4, and it can be divided into 6 rhombs. This decomposition can be seen as 6 of 24 faces in a Petrie polygon projection plane of the tesseract. The list defines the number of solutions as eight, by the eight orientations of this one dissection. These squares and rhombs are used in the Ammann–Beenker tilings. Skew A skew octagon is a skew polygon with eight vertices and edges but not existing on the same plane. The interior of such an octagon is not generally defined. A skew zig-zag octagon has vertices alternating between two parallel planes. A regular skew octagon is vertex-transitive with equal edge lengths. In three dimensions it is a zig-zag skew octagon and can be seen in the vertices and side edges of a square antiprism with the same D4d, [2+,8] symmetry, order 16. Petrie polygons The regular skew octagon is the Petrie polygon for several higher-dimensional regular and uniform polytopes, shown in skew orthogonal projections in the A7, B4, and D5 Coxeter planes. Symmetry The regular octagon has Dih8 symmetry, order 16. There are three dihedral subgroups: Dih4, Dih2, and Dih1, and four cyclic subgroups: Z8, Z4, Z2, and Z1, the last implying no symmetry. On the regular octagon, there are eleven distinct symmetries. John Conway labels full symmetry as r16. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars). Cyclic symmetries are labeled as g for their central gyration orders. Full symmetry of the regular form is r16, and no symmetry is labeled a1. The most common high-symmetry octagons are p8, an isogonal octagon constructed by four mirrors which can alternate long and short edges, and d8, an isotoxal octagon constructed with equal edge lengths but with vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular octagon. Each subgroup symmetry allows one or more degrees of freedom for irregular forms. 
Only the g8 subgroup has no degrees of freedom, but it can be seen as having directed edges. Use The octagonal shape is used as a design element in architecture. The Dome of the Rock has a characteristic octagonal plan. The Tower of the Winds in Athens is another example of an octagonal structure. The octagonal plan has also been used in church architecture, such as St. George's Cathedral, Addis Ababa, the Basilica of San Vitale (in Ravenna, Italy), Castel del Monte (Apulia, Italy), the Florence Baptistery, Zum Friedefürsten Church (Germany) and a number of octagonal churches in Norway. The central space in Aachen Cathedral, the Carolingian Palatine Chapel, has a regular octagonal floorplan. Uses of octagons in churches also include lesser design elements, such as the octagonal apse of Nidaros Cathedral. Architects such as John Andrews have used octagonal floor layouts in buildings for functionally separating office areas from building services, notably in the Intelsat Headquarters in Washington and the Callam Offices in Canberra. Derived figures Related polytopes The octagon, as a truncated square, is first in a sequence of truncated hypercubes. As an expanded square, it is also first in a sequence of expanded hypercubes.
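As a quick cross-check of the formulas quoted above, the following Python sketch computes the main measurements of a regular octagon from its side length, and converts a known span back to a side length, as when cutting a square piece of material into a regular octagon. The function names are illustrative only (not from any library), and the relations used are exactly those given in the Area, Circumradius and inradius, and Diagonals passages.

```python
import math

SILVER_RATIO = 1 + math.sqrt(2)  # span-to-side ratio, approx. 2.414

def octagon_from_side(a: float) -> dict:
    """Measurements of a regular octagon with side length a."""
    span = SILVER_RATIO * a  # S = (1 + sqrt(2)) a, the medium diagonal
    return {
        "area": 2 * (1 + math.sqrt(2)) * a**2,                      # approx. 4.828 a^2
        "span": span,
        "circumradius": (a / 2) * math.sqrt(4 + 2 * math.sqrt(2)),  # approx. 1.307 a
        "inradius": span / 2,                                       # approx. 1.207 a
        "short_diagonal": a * math.sqrt(2 + math.sqrt(2)),          # approx. 1.848 a
        "long_diagonal": a * math.sqrt(4 + 2 * math.sqrt(2)),       # = 2R, approx. 2.613 a
    }

def side_from_span(S: float) -> float:
    """Side length when the span S is known (a = S / 2.414...)."""
    return S / SILVER_RATIO

def corner_cut(S: float) -> float:
    """Leg length e of the 45-45-90 corner triangles cut from a square of width S."""
    return (S - side_from_span(S)) / 2  # e = (S - a) / 2

# Example: a 10-unit-wide square blank
print(octagon_from_side(side_from_span(10.0))["area"])  # approx. 82.84, matching 2(sqrt(2)-1) S^2
print(corner_cut(10.0))                                 # approx. 2.93 units to trim from each corner
```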
Mathematics
Two-dimensional space
null
314610
https://en.wikipedia.org/wiki/Pebble
Pebble
A pebble is a clast of rock with a particle size of 4 to 64 mm based on the Udden-Wentworth scale of sedimentology. Pebbles are generally considered larger than granules (2 to 4 mm in diameter) and smaller than cobbles (64 to 256 mm in diameter). A rock made predominantly of pebbles is termed a conglomerate. Pebble tools are among the earliest known man-made artifacts, dating from the Palaeolithic period of human history. A beach composed chiefly of surface pebbles is commonly termed a shingle beach. This type of beach has armoring characteristics with respect to wave erosion, as well as ecological niches that provide habitat for animals and plants. Inshore banks of shingle (large quantities of pebbles) exist in some locations, such as the entrance to the River Ore, England, where the moving banks of shingle pose notable navigational challenges. Pebbles come in various colors and textures and can have streaks, known as veins, of quartz or other minerals. Pebbles are mostly smooth but, depending on how frequently they come in contact with the sea, they can have marks of contact with other rocks or other pebbles. Pebbles left above the high water mark may have growths of organisms such as lichen on them, signifying the lack of contact with seawater. Location Pebbles on Earth exist in two types of locations – on the beaches of various oceans and seas, and inland where ancient seas used to cover the land. When the seas retreated, the rocks became landlocked; inland they also form in lakes, ponds and rivers, travelling into estuaries where the smoothing continues in the sea. Beach pebbles and river pebbles (also known as river rock) are distinct in their geological formation and appearance. Manufactured pebbles are made from natural stones such as marble, granite, and sandstone. They are designed to specified sizes and forms, making them suitable for many purposes. Beach Beach pebbles form gradually over time as the ocean water washes over loose rock particles. The result is a smooth, rounded appearance. The typical size range is from 2 mm to 50 mm. The colors range from translucent white to black, and include shades of yellow, brown, red and green. Some of the more plentiful pebble beaches are along the coast of the Pacific Ocean, beginning in Canada and extending down to the tip of South America in Argentina. Other pebble beaches are in northern Europe (particularly on the beaches of the Norwegian Sea), along the coast of the U.K. and Ireland, on the shores of Australia, and around the islands of Indonesia and Japan. Inland Inland pebbles (river pebbles or river rock) are usually found along the shores of large rivers and lakes. These pebbles form as the flowing water washes over rock particles on the bottom and along the shores of the river. The smoothness and color of river pebbles depend on several factors, such as the composition of the soil of the river banks, the chemical characteristics of the water, and the speed of the current. Because river currents are gentler than ocean waves, river pebbles are usually not as smooth as beach pebbles. The most common colors of river rock are black, grey, green, brown and white. Human use Beach pebbles and river pebbles are used for a variety of purposes, both outdoors and indoors. They can be sorted by colour and size, and they can also be polished to improve the texture and colour. Outdoors, beach pebbles are often used for landscaping, construction and as decorative elements. 
Beach pebbles are often used to cover walkways and driveways, around pools, in and around plant containers, on patios and decks. Beach and river pebbles are also used to create water-smart gardens in areas where water is scarce. Small pebbles are also used to create living spaces and gardens on the rooftops of buildings. Indoors, pebbles can be used as bookends and paperweights. Large pebbles are also used to create "pet rocks" for children. Mars On Mars, slabs of pebbly conglomerate rock have been found and have been interpreted by scientists as having formed in an ancient streambed. The gravels, which were discovered by NASA's Mars rover Curiosity, range from the size of sand particles to the size of golf balls. Analysis has shown that the pebbles were deposited by a stream that flowed at walking pace and was ankle- to hip-deep. Gallery
Physical sciences
Sedimentology
Earth science
314650
https://en.wikipedia.org/wiki/Enamel%20paint
Enamel paint
Enamel paint is paint that air-dries to a hard, usually glossy, finish, used for coating surfaces that are outdoors or otherwise subject to hard wear or variations in temperature; it should not be confused with decorated objects in "painted enamel", where vitreous enamel is applied with brushes and fired in a kiln. The name is something of a misnomer, as in reality most commercially available enamel paints are significantly softer than either vitreous enamel or stoved synthetic resins, and are totally different in composition; vitreous enamel is applied as a powder or paste and then fired at high temperature. There is no generally accepted definition or standard for use of the term "enamel paint", and not all enamel-type paints may use it. Paint Typically the term "enamel paint" is used to describe oil-based covering products, usually with a significant amount of gloss, although recently many latex or water-based paints have adopted the term as well. The term today means "hard-surfaced paint" and usually refers to paint brands of higher quality, floor coatings with a high-gloss finish, or spray paints. Most enamel paints are alkyd resin based. Some enamel paints have been made by adding varnish to oil-based paint. Although "enamels" and "painted enamel" in art normally refer to vitreous enamel, in the 20th century some artists used commercial enamel paints in art, including Pablo Picasso (mixing it with oil paint), Hermann-Paul, Jackson Pollock, and Sidney Nolan. The Trial (1947) is one of a number of works by Nolan to use enamel paint, usually Ripolin, a commercial paint not intended for art and also Picasso's usual brand. Some "enamel paints" are now produced specifically for artists. Enamel paint can also refer to nitrocellulose-based paints, among the first modern commercial paints of the 20th century. These have since been superseded by newer synthetic coatings such as alkyd, acrylic and vinyl, due to toxicity, safety, and conservation (tendency to yellow with age) concerns. In art, nitrocellulose enamel was also used by Pollock, under the commercial name Duco; the artist experimented and created with many types of commercial or house paints during his career. Of other artists it has been noted that, "after discovering various types of industrial materials produced in the United States in the 1930s, Siqueiros produced most of his easel works with uncommon materials which include Duco paint, a DuPont brand name for pyroxyline paint, a tough and resilient type of nitro-cellulose paint manufactured for the automotive industry". Nitrocellulose enamels are also commonly known as modern lacquers. Enamel paint comes in a variety of hues and can be custom blended to produce a particular tint. It is also available in water-based and solvent-based formulations, with solvent-based enamel being more prevalent in industrial applications. For the best results, a high-quality brush, roller, or spray gun should be used when applying enamel paint. When dried, enamel paint forms a durable, hard-wearing surface that resists chipping, fading, and discoloration, making it a good choice for a wide range of surfaces and applications. Uses and categories Floor enamel – May be used for concrete, stairs, basements, porches, and patios. Fast dry enamel – Can dry within 10–15 minutes of application. Ideal for refrigerators, counters, and other industrial finishes. High-temp enamel – May be used for engines, brake calipers, exhaust pipes and BBQs. 
Enamel paint is also used on wood to make it resistant to the elements via the waterproofing and rotproofing properties of enamel. Generally, treated surfaces last much longer and are much more resistant to wear than untreated surfaces. Model building – Xtracolor and Humbrol are mainstream UK brands. Colourcoats model paint is a high quality brand with authentic accurate military colours. Testors, a US company, offers the Floquil, Pactra, Model Master and Testors brands. Nail enamel – to color nails, it comes in many varieties for fast drying, color retention, gloss retention, etc. Epoxy enamel, polyurethane enamel, etc. used in protective coating / industrial painting purpose in chemical and petrochemical industries for anti-corrosion purposes.
Technology
Artist's and drafting tools
null
314788
https://en.wikipedia.org/wiki/Clostridium
Clostridium
Clostridium is a genus of anaerobic, Gram-positive bacteria. Species of Clostridium inhabit soils and the intestinal tracts of animals, including humans. This genus includes several significant human pathogens, including the causative agents of botulism and tetanus. It also formerly included an important cause of diarrhea, Clostridioides difficile, which was reclassified into the Clostridioides genus in 2016. History In the late 1700s, Germany experienced several outbreaks of an illness connected to eating specific sausages. In 1817, the German neurologist Justinus Kerner detected rod-shaped cells in his investigations into this so-called sausage poisoning. In 1897, the Belgian biology professor Emile van Ermengem published his finding of an endospore-forming organism he isolated from spoiled ham. Biologists classified van Ermengem's discovery along with other known gram-positive spore formers in the genus Bacillus. This classification presented problems, however, because the isolate grew only in anaerobic conditions, whereas Bacillus grew well in oxygen. Circa 1880, in the course of studying fermentation and butyric acid synthesis, a scientist surnamed Prazmowski first assigned a binomial name to Clostridium butyricum. The mechanisms of anaerobic respiration were not yet well elucidated at that time, so the taxonomy of anaerobes was still developing. In 1924, Ida A. Bengtson separated van Ermengem's microorganisms from the Bacillus group and assigned them to the genus Clostridium. By Bengtson's classification scheme, Clostridium contained all of the anaerobic endospore-forming rod-shaped bacteria, except the genus Desulfotomaculum. Taxonomy As of October 2022, there are 164 validly published species in Clostridium. The genus, as traditionally defined, contains many organisms not closely related to its type species. The issue was originally illustrated in full detail by an rRNA phylogeny from Collins 1994, which split the traditional genus (now corresponding to a large slice of the class Clostridia) into twenty clusters, with cluster I containing the type species Clostridium butyricum and its close relatives. Over the years, this has resulted in many new genera being split out, with the ultimate goal of constraining Clostridium to cluster I. "Clostridium" cluster XIVa (now Lachnospiraceae) and "Clostridium" cluster IV (now Ruminococcaceae) efficiently ferment the plant polysaccharides composing dietary fiber, making them important and abundant taxa in the rumen and the human large intestine. As mentioned before, these clusters are not part of the current Clostridium, and use of these terms should be avoided due to ambiguous or inconsistent usage. Biochemistry Species of Clostridium are obligate anaerobes and capable of producing endospores. They generally stain gram-positive but, like Bacillus, are often described as Gram-variable, because they show an increasing number of gram-negative cells as the culture ages. The normal, reproducing cells of Clostridium, called the vegetative form, are rod-shaped, which gives them their name, from the Greek κλωστήρ, or spindle. Clostridium endospores have a distinct bowling pin or bottle shape, distinguishing them from other bacterial endospores, which are usually ovoid in shape. The Schaeffer–Fulton stain (0.5% malachite green in water) can be used to distinguish endospores of Bacillus and Clostridium from other microorganisms. 
Clostridium can be differentiated from the also endospore forming genus Bacillus by its obligate anaerobic growth, the shape of endospores and the lack of catalase. Species of Desulfotomaculum form similar endospores and can be distinguished by their requirement for sulfur. Glycolysis and fermentation of pyruvic acid by Clostridia yield the end products butyric acid, butanol, acetone, isopropanol, and carbon dioxide. There is a commercially available polymerase chain reaction (PCR) test kit (Bactotype) for the detection of C. perfringens and other pathogenic bacteria. Biology and pathogenesis Clostridium species are readily found inhabiting soils and intestinal tracts. Clostridium species are also a normal inhabitant of the healthy lower reproductive tract of females. The main species responsible for disease in humans are: Clostridium botulinum can produce botulinum toxin in food or wounds and can cause botulism. This same toxin is known as Botox and is used in cosmetic surgery to paralyze facial muscles to reduce the signs of aging; it also has numerous other therapeutic uses. Clostridium perfringens causes a wide range of symptoms, from food poisoning to cellulitis, fasciitis, necrotic enteritis and gas gangrene. Clostridium tetani causes tetanus. Several more pathogenic species, that were previously described in Clostridium, have been found to belong to other genera. Clostridium difficile, now placed in Clostridioides. Clostridium histolyticum, now placed in Hathewaya. Clostridium sordellii, now placed in Paraclostridium, can cause a fatal infection in exceptionally rare cases after medical abortions. Treatment In general, the treatment of clostridial infection is high-dose penicillin G, to which the organism has remained susceptible. Clostridium welchii and Clostridium tetani respond to sulfonamides. Clostridia are also susceptible to tetracyclines, carbapenems (imipenem), metronidazole, vancomycin, and chloramphenicol. The vegetative cells of clostridia are heat-labile and are killed by short heating at temperatures above . The thermal destruction of Clostridium spores requires higher temperatures (above , for example in an autoclave) and longer cooking times (20 min, with a few exceptional cases of more than 50 min recorded in the literature). Clostridia and Bacilli are quite radiation-resistant, requiring doses of about 30 kGy, which is a serious obstacle to the development of shelf-stable irradiated foods for general use in the retail market. The addition of lysozyme, nitrate, nitrite and propionic acid salts inhibits clostridia in various foods. Fructooligosaccharides (fructans) such as inulin, occurring in relatively large amounts in a number of foods such as chicory, garlic, onion, leek, artichoke, and asparagus, have a prebiotic or bifidogenic effect, selectively promoting the growth and metabolism of beneficial bacteria in the colon, such as Bifidobacteria and Lactobacilli, while inhibiting harmful ones, such as clostridia, fusobacteria, and Bacteroides. Use Clostridium thermocellum can use lignocellulosic waste and generate ethanol, thus making it a possible candidate for use in production of ethanol fuel. It also has no oxygen requirement and is thermophilic, which reduces cooling cost. Clostridium acetobutylicum was first used by Chaim Weizmann to produce acetone and biobutanol from starch in 1916 for the production of cordite (smokeless gunpowder). 
Clostridium botulinum produces a potentially lethal neurotoxin used in a diluted form in the drug Botox, which is carefully injected to nerves in the face, which prevents the movement of the expressive muscles of the forehead, to delay the wrinkling effect of aging. It is also used to treat spasmodic torticollis and provides relief for around 12 to 16 weeks. Clostridium butyricum MIYAIRI 588 strain is marketed in Japan, Korea, and China for Clostridium difficile prophylaxis due to its reported ability to interfere with the growth of the latter. Clostridium histolyticum has been used as a source of the enzyme collagenase, which degrades animal tissue. Clostridium species excrete collagenase to eat through tissue and, thus, help the pathogen spread throughout the body. The medical profession uses collagenase for the same reason in the débridement of infected wounds. Hyaluronidase, deoxyribonuclease, lecithinase, leukocidin, protease, lipase, and hemolysin are also produced by some clostridia that cause gas gangrene. Clostridium ljungdahlii, recently discovered in commercial chicken wastes, can produce ethanol from single-carbon sources including synthesis gas, a mixture of carbon monoxide and hydrogen, that can be generated from the partial combustion of either fossil fuels or biomass. Clostridium butyricum converts glycerol to 1,3-propanediol. Genes from Clostridium thermocellum have been inserted into transgenic mice to allow the production of endoglucanase. The experiment was intended to learn more about how the digestive capacity of monogastric animals could be improved. Nonpathogenic strains of Clostridium may help in the treatment of diseases such as cancer. Research shows that Clostridium can selectively target cancer cells. Some strains can enter and replicate within solid tumors. Clostridium could, therefore, be used to deliver therapeutic proteins to tumours. This use of Clostridium has been demonstrated in a variety of preclinical models. Mixtures of Clostridium species, such as Clostridium beijerinckii, Clostridium butyricum, and species from other genera have been shown to produce biohydrogen from yeast waste.
Biology and health sciences
Gram-positive bacteria
Plants
314855
https://en.wikipedia.org/wiki/Cruise%20ship
Cruise ship
Cruise ships are large passenger ships used mainly for vacationing. Unlike ocean liners, which are used for transport, cruise ships typically embark on round-trip voyages to various ports of call, where passengers may go on tours known as "shore excursions". Modern cruise ships tend to have less hull strength, speed, and agility compared to ocean liners. However, they have added amenities to cater to water tourists, with recent vessels being described as "balcony-laden floating condominiums". There were 302 cruise ships operating worldwide, with a combined capacity of 664,602 passengers. Cruising has become a major part of the tourism industry, with an estimated market of $29.4 billion per year and over 19 million passengers carried worldwide annually. The industry's rapid growth saw nine or more newly built ships catering to a North American clientele added every year since 2001, as well as others servicing European clientele, until the COVID-19 pandemic in 2020 saw the entire industry all but shut down. The average age of a cruise ship in 2024 is 17.5 years. The construction market for cruise ships is dominated by three European companies and one Asian company. Operators of cruise ships are known as cruise lines. Cruise ships are organized much like floating hotels, with a complete hospitality staff in addition to the usual ship's crew. Traditionally, the ships' restaurants organize two dinner services per day, early dining and late dining, and passengers are allocated a set dining time for the entire cruise; a recent trend is to allow diners to dine whenever they want. Besides the dining room, modern cruise ships often contain one or more casual buffet-style eateries. Most cruise ships sail the Caribbean or the Mediterranean. Others operate elsewhere in places like Alaska, the South Pacific, and the Baltic Sea. Large cruise ships have been identified as one of the major causes of overtourism. History Origins Italy, a traditional focus of the Grand Tour, offered an early cruise experience on the Francesco I, flying the flag of the Kingdom of the Two Sicilies. Built in 1831, the Francesco I sailed from Naples in early June 1833, preceded by an advertising campaign. Nobles, authorities, and royal princes from all over Europe boarded the cruise ship, which sailed in just over three months to Taormina, Catania, Syracuse, Malta, Corfu, Patras, Delphi, Zante, Athens, Smyrna and Constantinople, providing passengers with excursions and guided tours. P&O first introduced passenger-cruising services in 1844, advertising sea tours to destinations such as Gibraltar, Malta and Athens, sailing from Southampton. The forerunner of modern cruise holidays, these voyages were the first of their kind. P&O Cruises is the world's oldest cruise line. The company later introduced round trips to destinations such as Alexandria and Constantinople. It underwent a period of rapid expansion in the latter half of the 19th century, commissioning larger and more luxurious ships to serve the steadily expanding market. Notable ships of the era include one built in 1880, which became the first ship built with a total steel superstructure, and another built in 1889. An 1891 cruise in the Mediterranean and the Near East, from 22 January to 22 March, with 241 passengers including Albert Ballin and his wife, is often stated to have been the first ever cruise. Christian Wilhelm Allers published an illustrated account of it as Backschisch. 
The first vessel built exclusively for luxury cruising was of the German Empire, designed by Albert Ballin, general manager of the Hamburg-America Line. The ship was completed in 1900. The practice of luxury cruising made steady inroads into the more established market for transatlantic crossings. In the competition for passengers, ocean liners – being the most famous example – added luxuries such as fine dining, luxury services, and staterooms with finer appointments. In the late-19th century, Albert Ballin, director of the Hamburg-America Line, was the first to send his transatlantic ships out on long southern cruises during the worst of the North Atlantic winter seasons. Other companies followed suit. Some of them built specialized ships designed for easy transformation between summer crossings and winter cruising. In 1897 three luxury liners, all European-owned, offered transportation between Europe and North America. In 1906 the number had increased to seven. The British Inman Line owned , the Cunard Line had and . The White Star Line owned and . La Lorraine and La Savoie sailed for the French Compagnie Générale Transatlantique. From luxury ocean liners to "megaship" cruising Modern cruise ships tend to have less hull strength, speed, and agility compared to ocean liners. With the advent of large passenger jet aircraft in the 1960s, intercontinental travelers switched from ships to planes, sending the ocean liner trade into a terminal decline. Certain characteristics of older ocean liners made them unsuitable for cruising duties, such as high fuel consumption, deep draught preventing them from entering shallow ports, and cabins (often windowless) designed to maximize passenger numbers rather than comfort. In the late 1950s and 1960s, ships such as Holland America Line's (1959), the French Line's (1961), and Cunard Line's RMS (1969) were designed to serve the dual purposes of ocean liner during the northern hemisphere summer months and cruise ship in the winter, incorporating doors and baffles that could be open or closed to divide classes or open the ship to one class, wherein all passengers received roughly the same quality berthing and most of the same facilities. (Passengers in cabins in certain grades on the Queen Elizabeth 2 had access only to certain dining rooms). Ocean liner services almost ceased in the 1970s and 1980s. The Rotterdam was put on permanent cruise service in 1968, while the France (at the time the largest passenger vessel in the world) was mothballed in 1974, sold to Norwegian Cruise Line in 1979, and after major renovations relaunched as in 1980, thus becoming the first "mega-cruise ship". The main exception was Cunard's Queen Elizabeth 2: although being put on more cruises, she maintained the regular transatlantic crossing tradition throughout the year, but with a stronger focus on leisure passengers, catering to a niche market of those who appreciated the several days at sea. International celebrities were hired to perform acts on board, along with cabarets, and with the addition of a casino and other entertainment amenities, the crossing was advertised as a vacation in itself. The 1977–1986 television series The Love Boat helped to popularize the concept as a romantic opportunity for couples. Industry experts credit the series with increasing interest in the cruise industry, especially for those that weren't newlyweds or senior citizens, and for the resulting demand to spur investment in new ships instead of conversions. 
The influence was particularly notable for Princess Cruises, a line that partnered with the series and received a great deal of attention as a result. Contemporary cruise ships built in the late 1980s and later, such as the which broke the size record held for decades by Norway, showed characteristics of size once reserved for ocean liners. The Sovereign-class ships were the first "megaships" to be built specifically for the mass cruising market. They also were the first series of cruise ships to include a multi-story lobby with a glass elevator and had a single deck devoted entirely to cabins with private balconies, instead of oceanview cabins. Other cruise lines soon launched ships with similar attributes, such as the , leading up to the Panamax-type , designed such that two-thirds of the oceanview staterooms have balconies. As the veranda suites were particularly lucrative for cruise lines, something which was lacking in older ocean liners, recent cruise ships have been designed to maximize such amenities and have been described as "balcony-laden floating condominiums". Until 1975–1980, cruises offered shuffleboard, deck chairs, "drinks with umbrellas and little else for a few hundred passengers". After 1980, they offered increasing amenities. As of 2010, city-sized ships have dozens of amenities. There have been nine or more new cruise ships added every year since 2001, including the 11 members of the aforementioned Vista class, and all at or greater. The only actual ocean liner to be completed in recent years has been Cunard Line's in 2004. Following the retirement of her running mate Queen Elizabeth 2 in November 2008, Queen Mary 2 is the only liner operating on scheduled transatlantic service, though she also sees significant service on cruise routes. Queen Mary 2 was for a time the largest passenger ship before being surpassed by Royal Caribbean International's vessels in 2006. The Freedom-class ships were in turn overtaken by RCI's own vessels which entered service in 2009 and 2010. A distinctive feature of the Oasis-class ships is the split, atrium structure, made possible by the hull's extraordinary width, with the 6-deck high Central Park and Boardwalk outdoor areas running down the middle of the ship and verandas on all decks. In two short decades (1988–2009), the largest class cruise ships have grown a third longer (), doubled their widths (), nearly tripled the total passenger count (2,744 to 7,600), and more than tripled in volume (73,000 to 248,000 GT). Also, the "megaships" went from a single deck with verandas to all decks with verandas. there were 302 cruise ships operating worldwide, with a combined capacity of 664,602 passengers. Cruising has become a major part of the tourism industry, with an estimated market of $29.4 billion per year, and over 19 million passengers carried worldwide annually . The industry's rapid growth saw nine or more newly built ships catering to a North American clientele added every year since 2001, as well as others servicing European clientele until the COVID-19 pandemic in 2020 saw the entire industry all but shut down. The average age of a cruise ship in 2024 is 17.5 years. Cruise lines Operators of cruise ships are known as cruise lines, which are companies that sell cruises to the public. 
Cruise lines have a dual character; they are partly in the transportation business, and partly in the leisure entertainment business, a duality that carries down into the ships themselves, which have both a crew headed by the ship's captain, and a hospitality staff headed by the equivalent of a hotel manager. Among cruise lines, some are direct descendants of the traditional passenger shipping lines (such as Cunard), while others were founded from the 1960s specifically for cruising. Historically, the cruise ship business has been volatile. The ships are large capital investments with high operating costs. A persistent decrease in bookings can put a company in financial jeopardy. Cruise lines have sold, renovated, or renamed their ships to keep up with travel trends. Cruise lines operate their ships almost constantly. If the maintenance is unscheduled, it can result, potentially, in thousands of dissatisfied customers. A wave of failures and consolidations in the 1990s led to many cruise lines being bought by much larger holding companies and continue to operate as "brands" or subsidiaries of the holding company. Brands continue to be maintained partly because of the expectation of repeat customer loyalty, and also to offer different levels of quality and service. For instance, Carnival Corporation & plc owns both Carnival Cruise Line, whose former image were vessels that had a reputation as "party ships" for younger travelers, but have become large, modern, yet still profitable, as well as Holland America Line and Cunard Line, whose ships cultivate an image of classic elegance. In 2004, Carnival had merged Cunard's headquarters with that of Princess Cruises in Santa Clarita, California so that administrative, financial and technology services could be combined, ending Cunard's history where it had operated as a standalone company (subsidiary) regardless of parent ownership. However, Cunard did regain some independence in 2009 when its headquarters were moved to Carnival House in Southampton. The common practice in the cruise industry in listing cruise ship transfers and orders is to list the smaller operating company, not the larger holding corporation, as the recipient cruise line of the sale, transfer, or new order. In other words, Carnival Cruise Line and Holland America Line, for example, are the cruise lines from this common industry practice point of view; whereas Carnival Corporation & plc and Royal Caribbean Group, for example, can be considered holding corporations of cruise lines. This industry practice of using the smaller operating company, not the larger holding corporation, is also followed in the list of cruise lines and in member-based reviews of cruise lines. Some cruise lines have specialties; for example, Saga Cruises only allows passengers over 50 years old aboard their ships, and Star Clippers and formerly Windjammer Barefoot Cruises and Windstar Cruises only operate tall ships. Regent Seven Seas Cruises operates medium-sized vessels—smaller than the "megaships" of Carnival and Royal Caribbean—designed such that virtually all of their suites are balconies. Several specialty lines offer "expedition cruising" or only operate small ships, visiting certain destinations such as the Arctic and Antarctica, or the Galápagos Islands. 
, which formerly operated as part of the United States Merchant Marine during World War II before being converted to a museum ship, still gets underway several times a year for six-hour "Living History Cruises" that take the ship through Baltimore Harbor, down the Patapsco River, and into the Chesapeake Bay, and she is also the largest cruise ship operating under the American flag on the United States East Coast. Currently the three largest cruise line holding companies and operators in the world are Carnival Corporation & plc, Royal Caribbean Group and Norwegian Cruise Line Holdings. As an industry, the total number of cabins on all of the world's cruise ships amount to less than 2% of the world's hotel rooms. Organization Cruise ships are organized much like floating hotels, with a complete hospitality staff in addition to the usual ship's crew. It is not uncommon for the most luxurious ships to have more crew and staff than passengers. Dining Dining on almost all cruise ships is included in the cruise price. Traditionally, the ships' restaurants organize two dinner services per day, early dining and late dining, and passengers are allocated a set dining time for the entire cruise; a recent trend is to allow diners to dine whenever they want. Having two dinner times allows the ship to have enough time and space to accommodate all of its guests. Having two different dinner services can cause some conflicts with some of the ship's events (such as shows and performances) for the late diners, but this problem is usually fixed by having a shorter version of the event take place before late dinner. Cunard Line ships maintain the class tradition of ocean liners and have separate dining rooms for different types of suites, while Celebrity Cruises and Princess Cruises have a standard dining room and "upgrade" specialty restaurants that require pre-booking and cover charges. Many cruises schedule one or more "formal dining" nights. Guests dress "formally", however, that is defined for the ship, often suits and ties or even tuxedos for men, and formal dresses for women. The menu is more upscale than usual. Besides the dining room, modern cruise ships often contain one or more casual buffet-style eateries, which may be open 24 hours and with menus that vary throughout the day to provide meals ranging from breakfast to late-night snacks. In recent years, cruise lines have started to include a diverse range of ethnically themed restaurants aboard each ship. Ships also feature numerous bars and nightclubs for passenger entertainment; the majority of cruise lines do not include alcoholic beverages in their fares and passengers are expected to pay for drinks as they consume them. Most cruise lines also prohibit passengers from bringing aboard and consuming their own beverages, including alcohol, while aboard. Alcohol purchased duty-free is sealed and returned to passengers when they disembark. There is often a central galley responsible for serving all major restaurants aboard the ship, though specialty restaurants may have their own separate galleys. As with any vessel, adequate provisioning is crucial, especially on a cruise ship serving several thousand meals at each seating. For example, a quasi "military operation" is required to load and unload 3,600 passengers and eight tons of food at the beginning and end of each cruise, for the . 
Other on-board facilities Modern cruise ships typically have aboard some or all of the following facilities: Buffet restaurant Card room Casino – Only open when the ship is at sea to avoid conflict with local laws Child care facilities Cinema Clubs Fitness center Hot tub Indoor and/or outdoor swimming pool with water slides Infirmary and morgue Karaoke Library Lounges Observation lounge Ping pong tables Pool tables Shops – Only open when the ship is at sea to avoid merchandising licensing and local taxes Spa Teen Lounges Theatre with Broadway-style shows Some ships have bowling alleys, ice skating rinks, rock climbing walls, sky-diving simulators, miniature golf courses, video arcades, ziplines, surfing simulators, water slides, basketball courts, tennis courts, chain restaurants, ropes obstacle courses, and even roller coasters. Crew Crew are usually hired on three to eleven month contracts which may then be renewed as mutually agreed, depending on service ratings from passengers as well as the cyclical nature of the cruise line operator. Most staff work 77-hour work weeks for 10 months continuously followed by two months of vacation. There are no paid vacations or pensions for service, non-management crew, depending on the level of the position and the type of the contract. Non-service and management crew members get paid vacation, medical, retirement options, and can participate in the company's group insurance plan. The direct salary is low by North American standards, though restaurant staff have considerable earning potential from passenger tips. Crew members do not have any expenses while on board, because food and accommodation, medical care, and transportation for most employees, are included. Bard College at Simon's Rock professor Francisca Oyogoa states that "Crewing agencies often exploit the desperation of potential employees." Living arrangements vary by cruise line, but mostly by shipboard position. In general two employees share a cabin with a shower, commode and a desk with a television set, while senior officers are assigned single cabins. There is a set of facilities for the crew separate from that for passengers, such as mess rooms and bars, recreation rooms, prayer rooms/mosques, and fitness center, with some larger ships even having a crew deck with a swimming pool and hot tubs. The International Labour Organization's 2006 Maritime Labour Convention is also known as the "Seafarers' Bill of Rights". Business model Most cruise lines since the 2000s have to some extent priced the cruising experience à la carte, as passenger spending aboard generates significantly more than ticket sales. The passenger's ticket includes the stateroom accommodation, room service, unlimited meals in the main dining room (or main restaurant) and buffet, access to shows, and use of pool and gym facilities, while there is a daily gratuity charge to cover housekeeping and waiter service. However, there are extra charges for alcohol and soft drinks, official cruise photos, Internet and wi-fi access, and specialty restaurants. Cruise lines earn significantly from selling onshore excursions offered by local contractors; keeping 50% or more of what passengers spend for these tours. In addition, cruise ships earn significant commissions on sales from onshore stores that are promoted on board as "preferred" (as much as 40% of gross sales). Facilitating this practice are modern cruise terminals with establishments of duty-free shops inside a perimeter accessible only by passengers and not by locals. 
Ports of call have often oriented their own businesses and facilities towards meeting the needs of visiting cruise ships. In one case, Icy Strait Point in Alaska, the entire destination was created explicitly and solely for cruise ship visitors. On "cruises to nowhere" or "nowhere voyages", some cruise ships make two- to three-night round trips without visiting any ports of call. Travel to and from the port of departure is usually the passengers' responsibility, although purchasing a transfer pass from the cruise line for the trip between the airport and cruise terminal will guarantee that the ship will not leave until the passenger is aboard. Similarly, if the passenger books a shore excursion with the cruise line and the tour runs late, the ship is obliged to remain until the passenger returns. Luxury cruise lines such as Regent Seven Seas Cruises and Crystal Cruises market their fares as "all-inclusive". For example, the base fare on Regent Seven Seas ships includes most alcoholic beverages on board ship and most shore excursions in ports of call, as well as all gratuities that would normally be paid to hotel staff on the ship. The fare may also include a one-night hotel stay before boarding, and the air fare to and from the cruise's origin and destination ports. Many cruise lines have loyalty programs. Using these and by booking inexpensive tickets, some people have found it cheaper to live continuously on cruise ships instead of on land. Cruise ship utilization Cruise ships and former liners sometimes find use in applications other than those for which they were built. Due to slower speed and reduced seaworthiness, as well as being largely introduced after several major wars, cruise ships have also been used as troop transport vessels. By contrast, ocean liners were often seen as the pride of their country and used to rival liners of other nations, and have been requisitioned during both World Wars and the Falklands War to transport soldiers and serve as hospital ships. During the 1992 Summer Olympics, eleven cruise ships docked at the Port of Barcelona for an average of 18 days, served as floating hotels to help accommodate the large influx of visitors to the Games. They were available to sponsors and hosted 11,000 guests a day, making it the second largest concentration of Olympic accommodation behind the Olympic Village. This hosting solution has been used since then in Games held in coastal cities, such as at Sydney 2000, Athens 2004, London 2012, Sochi 2014, Rio 2016 and was going to be used at Tokyo 2020. Cruise ships have been used to accommodate displaced persons during hurricanes. For example, on 1 September 2005, the U.S. Federal Emergency Management Agency (FEMA) contracted three Carnival Cruise Lines vessels (, the former , and the ) to house Hurricane Katrina evacuees. In 2017, cruise ships were used to help transport residents from some Caribbean islands destroyed by Hurricane Irma, as well as Puerto Rico residents displaced by Hurricane Maria. The cruise ships have also been used for evacuations. In 2010, in response to the shutdown of UK airspace due to the eruption of Iceland's Eyjafjallajökull volcano, the newly completed was used to rescue 2,000 British tourists stranded in Spain as an act of goodwill by the owners. The ship departed from Southampton for Bilbao on 21 April, and returned on 23 April. A cruise ship was kept on standby in case inhabitants of Kangaroo Island required evacuation in 2020 after a series of fires burned on the island. 
Regional industries Most cruise ships sail the Caribbean or the Mediterranean. Others operate elsewhere in places like Alaska, the South Pacific, the Baltic Sea and New England. A cruise ship that is moving from one of these regions to another will commonly operate a repositioning cruise while doing so. Expedition cruise lines, which usually operate small ships, visit certain more specialized destinations such as the Arctic and Antarctica, or the Galápagos Islands. The number of cruise tourists worldwide in 2005 was estimated at some 14 million. The main region for cruising was North America (70% of cruises), where the Caribbean islands were the most popular destinations. The second most popular region was continental Europe (13%), where the fastest growing segment is cruises in the Baltic Sea. The most visited Baltic ports are Copenhagen, St. Petersburg, Tallinn, Stockholm and Helsinki. The seaport of St. Petersburg, the main Baltic port of call, received 426,500 passengers during the 2009 cruise season. According to 2010 CEMAR statistics the Mediterranean cruise market is going through a fast and fundamental change; Italy has won prime position as a destination for European cruises, and destination for the whole of the Mediterranean basin. The most visited ports in Mediterranean Sea are Barcelona (Spain), Civitavecchia (Italy), Palma (Spain) and Venice (Italy). 2013 saw the entrance of the first Chinese company into the cruise market. China's first luxury cruise ship, Henna, made her maiden voyage from Sanya Phoenix Island International Port in late January. Caribbean cruising industry The Caribbean cruising industry is one of the largest in the world, responsible for over $2 billion in direct revenue to the Caribbean islands in 2012. Over 45,000 people from the Caribbean are directly employed in the cruise industry. An estimated 17,457,600 cruise passengers visited the islands in the 2011–2012 cruise year (May 2011 to April 2012.) Cruise lines operating in the Caribbean include Royal Caribbean International, Princess Cruises, Carnival Cruise Line, Celebrity Cruises, Disney Cruise Line, Holland America, P&O, Cunard and Norwegian Cruise Line. There are also smaller cruise lines that cater to a more intimate feeling among their guests. The three largest cruise operators are Carnival Corporation, Royal Caribbean International, and Star Cruises/Norwegian Cruise Lines. Many American cruise lines to the Caribbean depart out of the Port of Miami, with "nearly one-third of the cruises sailing out of Miami in recent years". Other cruise ships depart from Port Everglades (in Fort Lauderdale), Port Canaveral (approximately east of Orlando), New York, Tampa, Galveston, New Orleans, Cape Liberty, Baltimore, Jacksonville, Charleston, Norfolk, Mobile, and San Juan, Puerto Rico. Some UK cruise lines base their ships out of Barbados for the Caribbean season, operating direct charter flights out of the UK. The busiest ports of call in the Caribbean for cruising in the 2014 year are listed below: Alaskan cruising industry 2016 was the most recent year of CLIA (Cruise Lines International Association) studies conducted around the cruise industry specifically in the US and more specifically Alaska. In 2016, Alaskan cruises generated nearly 5 million passenger and crew visits, 20.3% of all passenger and crew visits in the US. (NASDAQ, 2017) Cruise lines frequently bring passengers to Glacier Bay National Park, Ketchikan, Anchorage, Skagway, and the state's capital, Juneau. 
Visitor volume is represented by overall visitors entering Alaskan waters and/or airspace. Between October 2016 and September 2017 Alaska had about 2.2 million visitors; 49% of those were through the cruise industry. That 2.2 million was a 27% increase since 2009, and the volume overall has steadily increased. Visitors generally spend money when travelling, and this is measured in two distinct areas: the cruising companies themselves and the visitors. There are no current numbers for cruise specific passenger spending ashore, but the overall visitor expenditure can be measured. Tours accounted for $394 million (18%), gifts and souvenirs $427 million (20%), food $428 million (20%), transportation $258 million (12%), lodging $454 million (21%), and other $217 million (10%). The second main area of economic growth comes from what the cruising companies and their crews spend themselves. Cruise liners spend around $297 million on the items that come in their packages on board and ashore as parts of group tours: things like stagecoach rides and boat tours on smaller vessels throughout their ports of call. This money is paid to the service providers by the cruise line company. Cruise liner crew are also a revenue generator, with 27,000 crew members visiting Alaska in 2017 alone, generating about $22 million. 2017 was also a good year for job generation within Alaska: 43,300 jobs were created, bringing in $1.5 billion in labor costs, and a total income of $4.5 billion was generated. These jobs were scattered across all of Alaska. Southeast Alaska had 11,925 jobs ($455 million labor income), Southwest 1,800 jobs ($50 million labor income), South Central 20,700 jobs ($761 million Labor income), Interior 8,500 jobs ($276 million labor income), Far North 375 jobs ($13 M labor income). Labor income is shown in the graph below. Shipyards The construction market for cruise ships is dominated by three European companies and one Asian company: Chantiers de l’Atlantique of France. Fincantieri of Italy with: Ancona shipyards (located at Ancona) Marghera shipyards (located at Marghera, Venice) Monfalcone shipyards (located at Monfalcone, Gorizia) Sestri Ponente shipyards (located at Genoa) VARD Braila shipyards (located at Braila) VARD Søviknes Shipyard (located in Norway) VARD Tulcea shipyards (located at Tulcea) Meyer Werft of Germany with two shipyards: Meyer Turku at Perno shipyard in Turku, Finland Meyer Werft of Germany. Mitsubishi Heavy Industries of Japan. , 54 new ships have been ordered and are due to be delivered by 2028. As of August 2024, there are 62 ships on order until 2036, adding 154,146 berths. Safety and security Piracy As most of the passengers on a cruise are affluent and have considerable ransom potential, not to mention a considerable amount of cash and jewelry on board (for example in casinos and shops), there have been several high-profile pirate attacks on cruise ships, such as on and . As a result, cruise ships have implemented various security measures. While most merchant shipping firms have generally avoided arming crew or security guards for reasons of safety, liability and conformity with the laws of the countries where they dock, cruise ships have small arms (usually semi-automatic pistols) stored in a safe accessible only by the captain who distributes them to authorized personnel such as security or the master-at-arms. The ship's high-pressure fire hoses can be used to keep boarders at bay, and often the vessel itself can be maneuvered to ram pirate craft. 
A more recent technology to deter pirates is the long-range acoustic device (LRAD), or sonic cannon, which was used in the successful defence of Seabourn Spirit. A related risk is terrorism, the most notable incident being the 1985 hijacking of Achille Lauro, an Italian cruise ship.
Crime on-board
Passengers entering the cruise ship are screened by metal detectors. Explosive-detection machines used include X-ray machines and explosives trace-detection portal machines (a.k.a. "puffer machines"), to prevent weapons, drugs and other contraband from being brought on board. Security has been considerably tightened since 11 September 2001, such that these measures are now similar to airport security. In addition to security checkpoints, passengers are often given a ship-specific identification card, which must be shown in order to get on or off the ship. This prevents people boarding who are not entitled to do so, and also ensures the ship's crew know who is on the ship. The cruise ship ID cards are also used as the passengers' room keys. CCTV cameras are mounted frequently throughout the ship.
In 2010, the United States Congress passed the Cruise Vessel Security and Safety Act after numerous incidents of sexual violence, passenger disappearances, physical assault, and other serious crimes. Congress said: "Passengers on cruise vessels have an inadequate appreciation of their potential vulnerability to crime while on ocean voyages, and those who may be victimized lack the information they need to understand their legal rights or to know whom to contact for help in the immediate aftermath of the crime." Congress said both passengers and crew committed crimes. It said data on the problem was lacking because cruise lines did not make it publicly available, multiple countries were involved in investigating incidents on international waters, and crime scenes could not be secured quickly by police. It recommended that owners of cruise vessels:
install acoustic hailing and warning devices capable of working at a distance;
install more security cameras;
install peepholes in passenger room doors; and
limit access to passenger rooms to select staff at specific times.
After investigating the death of Dianne Brimble in 2002, a coroner in Australia recommended that federal police officers travel on ships to ensure a quick response to crime, that scanners and drug-detection dogs check passengers and crew at Australian ports, that overlaps between jurisdictions be ended, and that the flags of ships be disregarded for nations unable to investigate incidents thoroughly and competently. The lobby group International Cruise Victims Association, based in Arizona, pushes for more regulation of the cruise industry and supports victims of crimes committed on cruise ships.
Overboard drownings
Passengers and crew sometimes drown after going overboard, in what the industry calls man-overboard incidents (MOBs). From 2000 to 2018 more than 300 people fell off cruise ships or large ferries, an average of about 1.5 people each month; of those, only about 17 to 25 percent were rescued. Critics of the industry blame alcohol promotion for many passenger deaths, and poor labour conditions for crew suicides. They also point to underinvestment in the latest MOB sensors, a lack of regulation and consumer protection, and a lack of on-board counselling services for crew. The industry blames irresponsible behaviour by passengers, and says overboard sensors are unreliable and generate false alarms.
Maritime lawyer Jim Walker estimates about half of all disappearances at sea involve some factor of foul play, and that a lack of police jurisdiction on international waters allows sexual predators to go unpunished. Stability Modern cruise ships are tall but remain stable due to their relatively low center of mass. This is due to large open spaces and the extensive use of aluminium, high-strength steel and other lightweight materials in the upper parts, and the fact that the heaviest components—engines, propellers, fuel tanks and such—are located at the bottom of the hull. Thus, even though modern cruise ships may appear tall, proper weight distribution ensures that they are not top-heavy. Furthermore, large cruise ships tend to be very wide, which considerably increases their initial stability by increasing the metacentric height. Although most passenger ships utilize stabilizers to reduce rolling in heavy weather, they are only used for crew and passenger comfort and do not contribute to the overall intact stability of the vessel. The ships must fulfill all stability requirements even with the stabilizer fins retracted. According to the Washington Post, a recent study by economic consultant G.P. Wild – commissioned by the cruise industry's trade group and released in March 2019 – argued that cruises are getting safer over time. The study claims that, even as capacity increased 55 percent between 2009 and 2018, the number of overall "operational incidents" declined 37 percent and the rate of man-overboard cases dropped 35 percent. Disease Norovirus Norovirus is a virus that commonly causes gastroenteritis, and is also a cause of gastroenteritis on cruise ships. It is typically transmitted from person to person. Symptoms usually last between 1 and 3 days and generally resolve without treatment or long term consequences. The incubation period of the virus averages about 24 hours. Norovirus outbreaks are often perceived to be associated with cruise ships. According to the United States CDC, the factors that cause norovirus to be associated with cruise ships include the closer tracking and faster reporting of illnesses on cruise ships compared to those on land; the closer living quarters that increases the amount of interpersonal contact; as well as the turnover of passengers that may bring the viruses on board. However, the estimated likelihood of contracting gastroenteritis from any cause on an average 7-day cruise is less than 1%. In 2009, during which more than 13 million people took a cruise, there were nine reported norovirus outbreaks on cruise ships. Outbreak investigations by the United States Centers for Disease Control and Prevention (CDC) have shown that transmission among cruise ship passengers is primarily person-to-person; potable water supplies have not been implicated. In a study published in the Journal of the American Medical Association, the CDC reported that, "Perceptions that cruise ships can be luxury breeding grounds for acute gastroenteritis outbreaks don't hold water. A recent CDC report showed that from 2008 to 2014, only 0.18% of more than 73 million cruise passengers and 0.15% of some 28 million crew members reported symptoms of the illness." Ships docked in port undergo surprise health inspections. In 2009, ships that underwent unannounced inspections by the CDC received an average CDC Vessel Sanitation Program score of approximately 97 out of a total possible 100 points. The minimum passing inspection score is 85. 
Collaboration with the CDC's Vessel Sanitation Program and the development of outbreak prevention and response plans have been credited with decreasing the incidence of norovirus outbreaks on ships.
Legionnaires' disease
Other pathogens that can colonise pools and spas, including those on cruise ships, include Legionella, the bacterium which causes Legionnaires' disease. Legionella, and in particular its most virulent strain, Legionella pneumophila serogroup 1, can cause infections when inhaled as an aerosol or aspirated. Individuals who are immunocompromised and those with pre-existing chronic respiratory and cardiac disease are more susceptible. Legionnaires' disease has only infrequently been associated with cruise ships.
Enterotoxigenic Escherichia coli (ETEC)
Enterotoxigenic Escherichia coli is a form of E. coli and the leading bacterial cause of diarrhea in the developing world, as well as the most common cause of diarrhea for travelers to those areas. Since 2008, at least one incident of E. coli on international cruise ships has been reported each year to the Vessel Sanitation Program of the Centers for Disease Control and Prevention, though there were none in 2015. Causes of E. coli infection include the consumption of food or water contaminated by human waste.
COVID-19
News outlets reported several cases and suspected cases of coronavirus disease 2019 associated with cruise ships in early 2020. Authorities variously turned ships away or quarantined them; cruise operators cancelled some port visits and ultimately suspended global cruise operations. People aboard cruise ships played a role in spreading the disease in some countries.
Environmental impact
Cruise ships generate a number of waste streams that can result in discharges to the marine environment, including sewage, graywater, hazardous wastes, oily bilge water, ballast water, and solid waste. They also emit pollutants into the air and water. These wastes, if not properly treated and disposed of, can be a significant source of pathogens, nutrients, and toxic substances with the potential to threaten human health and damage aquatic life. Most cruise ships run primarily on heavy fuel oil (HFO), or "bunker fuel", which, because of its high sulphur content, results in sulphur dioxide emissions worse than those of equivalent road traffic. The international MARPOL Annex VI (Regulation 14) rules for Sulphur Emission Control Areas require that cruise ships either use fuel containing no more than 0.10% sulphur or fit exhaust gas scrubbers that reduce sulphur oxide emissions to no more than those of an engine running on fuel of less than 0.1% sulphur. Cruise ships may use about 60 percent of their fuel energy for propulsion and 40 percent for hotel functions, but loads and their distribution depend heavily on conditions. It has been claimed that air pollution from maritime transport, including cruise ships, is responsible for 50,000 deaths per year in Europe. Some cruise lines, such as Cunard, are taking steps to reduce environmental impact by refraining from discharges (Queen Mary 2 has a zero-discharge policy) and reducing their carbon dioxide output every year. Cruise ships require electrical power, normally provided by diesel generators, although an increasing number of new ships are fueled by liquefied natural gas (LNG). When docked, ships must run their generators continuously to power on-board facilities, unless they are able to use onshore power where it is available. Some cruise ships already support the use of shore power, while others are being adapted to do so.
Overtourism
Large cruise ships have been identified as one of the major causes of overtourism in places like Venice, Barcelona, and Dubrovnik. Critics of the industry say it overwhelms the cities' infrastructure, causing overcrowding, damaging heritage sites, and changing the character of local neighbourhoods as residential amenities and shops are replaced by tourist cafes and souvenir stands. Cruise tourists contribute little economically to the places they visit: in Venice, short-stay day trippers – including cruise tourists – account for 73% of all visitors yet contribute only 18% of the tourism economy, whereas overnight visitors contribute 50%.
In Venice, campaigners had long been calling for a ban on large cruise ships entering the historic portion of the city, and in 2021 they were successful: ships of over 25,000 tonnes were banned from entering the Venice Lagoon along the Giudecca Canal, in an attempt to protect the fragile lagoon ecosystem and to limit damage to the underwater foundations of the city's historic centre. At the time, UNESCO warned that the city could be placed on its endangered list if ships were not diverted to another port.
In 2023, Barcelona's mayor, Ada Colau, spoke out in favour of limiting the number of cruise ships arriving in the city. Currently up to 200,000 people disembark each month in peak season; Colau's proposed measures could halve this. In a 2019 study by Transport and Environment, Barcelona ranked as the worst cruise port in Europe for air pollution.
From 2024, only 1,000 cruise passengers per day will be allowed to disembark in Bar Harbor, Maine, United States; the average cruise ship holds 3,000 passengers. The move came after a 2021 survey showed that the majority of local residents were unhappy with large cruise ships and felt that the town was overrun by cruise tourists.
Sunken vessels
: caught fire and sank on 24 October 1961, one dead.
: caught fire and sank on 11 October 1980, with no fatalities.
: accidentally hit a rock on 16 February 1986, one dead.
: sank on 21 October 1988 after accidentally colliding with the cargo ship Adige, four dead.
: accidentally sank on 4 August 1991 after suffering uncontrolled flooding, no fatalities.
: caught fire and sank on 30 November 1994, two dead.
: caught fire and sank on 21 May 1999, with no fatalities.
: accidentally hit a reef on 30 April 2000, no fatalities.
: sank en route to the scrapyard on 21 October 2000, with no fatalities.
: sank on 17 December 2000 due to possible sabotage, with no fatalities.
: accidentally hit a reef and capsized on 6 April 2007, two dead.
: accidentally sank on 23 November 2007 after hitting an iceberg, no fatalities.
: sank accidentally on 13 January 2012 after hitting rocks, 32 dead. The wreck was salvaged three years after the incident and towed to the port of Genoa, where it was scrapped.
: capsized and sank in a storm on 1 June 2015, killing 442 people.
: a river cruise ship, sank on 29 May 2019 after accidentally colliding with the river cruise ship Viking Sigyn, 28 dead.
Oryx
Oryx ( ) is a genus consisting of four large antelope species called oryxes. Their pelage is pale with contrasting dark markings in the face and on the legs, and their long horns are almost straight. The exception is the scimitar oryx, which lacks dark markings on the legs, only has faint dark markings on the head, has an ochre neck, and has horns that are clearly decurved. The Arabian oryx was only saved from extinction through a captive-breeding program and reintroduction to the wild. The scimitar oryx, which was listed as extinct in the wild, also relied on a captive-breeding program for its survival. Etymology The term "oryx" comes from the Greek word ὄρυξ, óryx, for a type of antelope. The Greek plural form is óryges, although "oryxes" has been established in English. Herodotus mentions a type of gazelle in Libya called ὄρυς, orus, probably related to the verb ὀρύσσω, orussō, or ὀρύττω, oruttō, meaning "to dig". White oryxes are known to dig holes in the sand. Species Arabian oryx The Arabian oryx (Oryx leucoryx, Arabic: المها), became extinct in the wild in 1972 in the Arabian Peninsula. It was reintroduced in 1982 in Oman, but poaching has reduced its numbers there. One of the largest populations of Arabian oryxes exists on Sir Bani Yas Island in the United Arab Emirates. Additional populations have been reintroduced in Qatar, Bahrain, Israel, Jordan, and Saudi Arabia. As of 2011, the total wild population is over 1,000, and 6,000–7,000 are being held in captivity. In 2011, the IUCN downgraded its threat category from extinct in the wild to vulnerable, the first species to have changed back in this way. Scimitar oryx The scimitar oryx, also called the scimitar-horned oryx (Oryx dammah), of North Africa used to be listed as extinct in the wild, but it is now declared as endangered. Unconfirmed surviving populations have been reported in central Niger and Chad, and a semi-wild population currently inhabiting a fenced nature reserve in Tunisia is being expanded for reintroduction to the wild in that country. Several thousand are held in captivity around the world. East African oryx and gemsbok The East African oryx (Oryx beisa) inhabits eastern Africa and the closely related gemsbok (Oryx gazella) inhabits southern Africa. The gemsbok is monotypic and the East African oryx has two subspecies; the common beisa oryx (O. b. beisa) and the fringe-eared oryx (O. b. callotis). In the past, both were considered subspecies of the gemsbok. The East African oryx is an endangered species, whereas the gemsbok is not. Gemsbok were introduced in New Mexico by the Department of Game and Fish in the late 1960s and early 1970s as an experiment in offering a unique hunting opportunity to New Mexico residents. Between 1969 and 1973, 95 oryx were released onto White Sands Missile Range. White Sands Missile Range, located between the cities of Albuquerque, NM and El Paso, TX, is a 3,200 square mile US Army facility which also hosts White Sands National Park. Researchers believed that the population would never grow beyond 500 to 600 and would remain within the Tularosa Basin. However, the animals proved to be extremely opportunistic, and quickly spread into the San Andres Mountains to the north and west of Tularosa Basin. At one time, numbers of oryx in New Mexico were estimated to be around 6,000 (original release numbers were less than 100). Today, numbers have been held around the 2,000 mark through managed hunting efforts. The success of the oryx in New Mexico is due in part to the abundance of food. 
In Africa, they eat grasses, forbs, and melons. In New Mexico, they feed on desert grasses, yucca, buffalo gourds, and mesquite bean pods. They are especially adapted to desert life and can go a long time without drinking water. The area also lacks effective natural controls on the population: in Africa, lions and other natural predators cull the population, with only 10% of calves reaching one year of age, whereas in New Mexico predators such as coyotes and mountain lions are not effective at controlling numbers, allowing the oryx to reproduce without restriction.
Classification
Family Bovidae
Subfamily Hippotraginae
Genus Oryx
Scimitar oryx, O. dammah
Gemsbok, O. gazella
East African oryx, O. beisa (formerly in O. gazella)
Common beisa oryx, O. b. beisa
Fringe-eared oryx, O. b. callotis
Arabian oryx, O. leucoryx
Ecology
All oryx species prefer near-desert conditions and can survive without water for long periods. They live in herds of up to 600 animals. Newborn calves are able to run with the herd immediately after birth. Both males and females possess permanent horns. The horns are narrow and straight except in the scimitar oryx, where they curve backwards like a scimitar. The horns can be lethal: oryxes have been known to kill lions with them, and they are thus sometimes called sabre antelopes (not to be confused with the sable antelope). The horns also make the animals a prized game trophy, which has led to the near-extinction of the two northern species.
As an introduced species
Between 1969 and 1977, the New Mexico Department of Game and Fish in the US intentionally released 95 gemsbok into the state's White Sands Missile Range, and that population is now estimated at between 3,000 and 6,000 animals. Within the state of New Mexico, oryxes are classified as "big game" and can be hunted.
Oryxes in popular culture
The oryx is the national animal of Namibia and of the State of Qatar, and the airline Qatar Airways uses an oryx as its logo. The main boss of the MMO game Realm of the Mad God is Oryx the Mad God, named after the creator of the game's original sprite sheets, Oryx; his four direct subordinates also bear the names of four South African species of oryx. Oryxes appear briefly, along with many other species of animal, in the Talk Talk music video It's My Life. In the video game Tom Clancy's Rainbow Six Siege, a playable defending operator nicknamed Oryx was introduced in Year 5 Season 1; his ability, "Remah Dash", lets him charge to break holes in walls and knock down enemies. Oryx is a nickname for a character in Margaret Atwood's book Oryx and Crake. Oryx is also the name of the main antagonist of the video game Destiny: The Taken King, a god who seeks vengeance on the player, known as a Guardian, after they killed his son Crota; he is portrayed as "Oryx, the Taken King" and is killed by the player in the raid "King's Fall". The oryx is mentioned in Pliny's Natural History, in which he writes, "There is a wild beast, named by the Egyptians Oryx, which, when the star [Sirius] rises, is said to stand opposite to it, to look steadfastly at it, and then to sneeze, as if it were worshiping it." In the 1994 film The Lion King, its 2019 remake, and The Lion King II: Simba's Pride, the two oryx species seen in Hell's Gate National Park are the East African oryx and the gemsbok.
Myxozoa
Myxozoa (etymology: Greek: μύξα myxa "slime" or "mucus" + thematic vowel o + ζῷον zoon "animal") is a subphylum of aquatic cnidarian animals – all obligate parasites. It contains the smallest animals ever known to have lived. Over 2,180 species have been described and some estimates have suggested at least 30,000 undiscovered species. Many have a two-host lifecycle, involving a fish and an annelid worm or a bryozoan. The average size of a myxosporean spore usually ranges from 10 μm to 20 μm, whereas that of a malacosporean (a subclade of the Myxozoa) spore can be up to 2 mm. Myxozoans can live in both freshwater and marine habitats. Myxozoans are highly derived cnidarians that have undergone dramatic evolution from a free swimming, self-sufficient jellyfish-like creature into their current form of obligate parasites composed of very few cells. As myxozoans evolved into microscopic parasites, they lost many genes responsible for multicellular development, coordination, cell–cell communication, and even, in some cases, aerobic respiration. The genomes of some myxozoans are now among the smallest genomes of any known animal species. Life cycle and pathology Myxozoans are endoparasitic animals exhibiting complex life cycles that, in most of the documented cases, involve an intermediate host, usually a fish, but in rare cases amphibians, reptiles, birds, and mammals; and a definitive host, usually an annelid or an ectoproct. Only about 100 life cycles have been resolved and it is suspected that there may be some exclusively terrestrial. The mechanism of infection occurs through valve spores that have many forms, but their main morphology is the same: one or two sporoplasts, which are the real infectious agent, surrounded by a layer of flattened cells called valve cells, which can secrete a layer protective coating and form float appendages. Integrated into the layer of valve cells are two to four specialized capsulogenic cells (in a few cases, one or even 15), each carrying a polar capsule containing coiled polar filaments, an extrudable organelle used for recognition, contact and infiltration. Myxospores are ingested by annelids, in which the polar filaments extrude to anchor the spore to the gut epithelium. Opening of the shell valves allows the sporoplasms to penetrate into the epithelium. Subsequently, the parasite undergoes reproduction and development in the gut tissue, and finally produces usually eight actinosporean spore stages (actinospores) within a pansporocyst. After mature actinospores are released from their hosts they float in the water column. Upon contact with skin or gills of fish, sporoplasms penetrate through the epithelium, followed by development of the myxosporean stage. Myxosporean trophozoites are characterized by cell-in-cell state, where the secondary (daughter) cells develop in the mother (primary) cells. The presporogonic stages multiply, migrate via nervous or circulatory systems, and develop into sporogonic stages. At the final site of infection, they produce mature spores within mono- or di-sporic pseudoplasmodia, or poly-sporic plasmodia. Relationships between myxosporeans and their hosts are often highly evolved and do not usually result in severe diseases of the natural host. Infection in fish hosts can be extremely long-lasting, potentially persisting for the lifetime of the host. 
However, an increasing number of myxosporeans have become pathogens with significant impact on the commercial fish industry, largely as a result of aquaculture bringing new species into contact with myxosporeans to which they had not previously been exposed, and to which they are highly susceptible. The economic impact of such parasites can be severe, especially where prevalence rates are high; they may also have a severe impact on wild fish stocks. The diseases caused by myxosporeans in cultured fish with the most significant economic impact worldwide are proliferative kidney disease (PKD), caused by the malacosporean T. bryosalmonae, and whirling disease, caused by the myxosporean M. cerebralis; both diseases affect salmonids. Enteromyxosis, caused by E. leei, affects cultured marine sparids; proliferative gill disease (or "hamburger disease") is caused by H. ictaluri in catfish; and S. renicola infections occur in common carp.
Anatomy
Myxozoans are very small animals, typically 10–300 μm in length. Like other cnidarians they possess cnidocysts, which were referred to as "polar capsules" before the discovery that myxozoans are cnidarians. These cnidocysts fire tubules as in other cnidarians, and in some species they inject substances into the host. However, the tubules lack hooks or barbs, and in some species are more elastic than in other cnidarians. Myxozoans have secondarily lost epithelial structures, a nervous system, gut, and cilia. Most lack muscles, though these are retained in some members of the Malacosporea. Those that have lost their muscles move around inside the host using other forms of locomotion, such as filopodia, spore valve contractions, amoeboid movements, and rapidly creating and reabsorbing folds in the cell membrane. Myxozoans do not undergo embryogenesis during development and have lost true gametes. Instead, they reproduce via multicellular spores. These spores contain the polar capsules, which are not typically present in somatic cells. Centrioles are not involved in the nuclear division of myxozoans. Cell division by binary fission is rare; cells divide instead via endogeny. In 2020, the myxozoan Henneguya salminicola was found to lack a mitochondrial genome, and thus to be incapable of aerobic respiration; it was the first animal to be positively identified as such. Its actual metabolism is currently unknown.
Phylogenetics
Myxozoans were originally considered to be protozoans, and were included among other non-motile forms in the group Sporozoa. As their distinct nature became clear through 18S ribosomal DNA (rDNA) sequencing, they were relocated to the Metazoa. Detailed classification within the Metazoa was, however, long hindered by conflicting rDNA evidence: although 18S rDNA suggested an affinity with Cnidaria, other rDNA sampled, and the HOX genes of two species, were more similar to those of the Bilateria. The discovery that Buddenbrockia plumatellae, a worm-like parasite of bryozoans up to 2 mm in length, is a myxozoan initially appeared to strengthen the case for a bilaterian origin, as the body plan is superficially similar. Nevertheless, closer examination reveals that Buddenbrockia's longitudinal symmetry is not twofold but fourfold, casting doubt on this hypothesis. Further testing resolved the genetic conundrum by tracing the first three previously identified discrepant HOX genes (Myx1-3) to the bryozoan Cristatella mucedo and the fourth (Myx4) to the northern pike, the respective hosts of the two corresponding Myxozoa samples.
This explained the confusion: the original experiments had used samples contaminated by tissue from the host organisms, leading to false positives for a position among the Bilateria. More careful cloning of 50 coding genes from Buddenbrockia firmly established the clade as severely modified members of the phylum Cnidaria, with medusozoans as their closest relatives. Similarities between myxozoan polar capsules and cnidarian nematocysts had been drawn for a long time, but were generally assumed to be the result of convergent evolution. Taxonomists now recognize the outdated subgroup Actinosporea as a life-cycle phase of the Myxosporea. Molecular clocks suggest that myxozoans and their closest relatives, the Polypodiozoa, shared their last common ancestor with medusozoans about 600 million years ago, during the Ediacaran period.
Taxonomy
Myxozoan taxonomy has undergone great and important changes at the genus, family and suborder levels. Fiala et al. (2015) proposed a new classification based on spores.
Aquatic animal
An aquatic animal is any animal, whether vertebrate or invertebrate, that lives in a body of water for all or most of its lifetime. Aquatic animals generally conduct gas exchange in water by extracting dissolved oxygen via specialised respiratory organs called gills, through the skin or across enteral mucosae, although some are evolved from terrestrial ancestors that re-adapted to aquatic environments (e.g. marine reptiles and marine mammals), in which case they actually use lungs to breathe air and are essentially holding their breath when living in water. Some species of gastropod mollusc, such as the eastern emerald sea slug, are even capable of kleptoplastic photosynthesis via endosymbiosis with ingested yellow-green algae. Almost all aquatic animals reproduce in water, either oviparously or viviparously, and many species routinely migrate between different water bodies during their life cycle. Some animals have fully aquatic life stages (typically as eggs and larvae), while as adults they become terrestrial or semi-aquatic after undergoing metamorphosis. Such examples include amphibians such as frogs, many flying insects such as mosquitoes, mayflies, dragonflies, damselflies and caddisflies, as well as some species of cephalopod molluscs such as the algae octopus (whose larvae are completely planktonic, but adults are highly terrestrial). Aquatic animals are a diverse polyphyletic group based purely on the natural environments they inhabit, and many morphological and behavioral similarities among them are the result of convergent evolution. They are distinct from terrestrial and semi-aquatic animals, who can survive away from water bodies, while aquatic animals often die of dehydration or hypoxia after prolonged removal out of water due to either gill failure or compressive asphyxia by their own body weight (as in the case of whale beaching). Along with aquatic plants, algae and microbes, aquatic animals form the food webs of various marine, brackish and freshwater aquatic ecosystems. Description The term aquatic can be applied to animals that live in either fresh water or salt water. However, the adjective marine is most commonly used for animals that live in saltwater or sometimes brackish water, i.e. in oceans, shallow seas, estuaries, etc. Aquatic animals can be separated into four main groups according to their positions within the water column. Neustons ("floaters"), more specifically the zooneustons, inhabit the surface ecosystem and use buoyancy to stay at the water surface, sometimes with appendages hanging from the underside for foraging (e.g. Portuguese man o' war, chondrophores and the buoy barnacle). They only move around via passive locomotion, meaning they have vagility but no motility. Planktons ("drifters"), more specifically the metazoan zooplanktons, are suspended within the water column with no motility (most aquatic larvae) or limited motility (e.g. jellyfish, salps, larvaceans, and escape responses of copepods), causing them to be mostly carried by the water currents. Nektons ("swimmers") have active motility that are strong enough to propel and overcome the influence of water currents. These are the aquatic animals most familiar to the common knowledge, as their movements are obvious on the macroscopic scale and the cultivation and harvesting of their biomass is most important to humans as seafoods. Nektons often have powerful tails, paddle/fan-shaped appendages with large wetted surfaces (e.g. 
fins, flippers or webbed feet) and/or jet propulsion (in the case of cephalopods) to achieve aquatic locomotion. Benthos ("bottom dwellers") inhabit the benthic zone at the floor of water bodies, which includes both shallow-sea (coastal, littoral and neritic) and deep-sea communities. These animals include sessile organisms (e.g. sponges, sea anemones, corals, sea pens, sea lilies and sea squirts, some of which are reef-builders crucial to the biodiversity of marine ecosystems), sedentary filter feeders (e.g. bivalve molluscs), ambush predators (e.g. flatfishes and bobbit worms, which often burrow or camouflage themselves within the marine sediment), and more actively moving bottom feeders that swim (e.g. demersal fishes) or crawl around (e.g. decapod crustaceans, marine chelicerates, octopuses, most non-bivalvian molluscs, echinoderms, etc.). Many benthic animals are algivores, detritivores and scavengers, which are important basal consumers and intermediate recyclers in the marine nitrogen cycle.
Aquatic animals (especially freshwater animals) are often of special concern to conservationists because of the fragility of their environments. They are subject to pressure from overfishing and hunting, destructive fishing, water pollution, acidification, climate change and competition from invasive species. Many aquatic ecosystems are at risk of habitat destruction and fragmentation, which puts aquatic animals at risk as well. Aquatic animals play an important role in the world: their biodiversity provides food, energy, and even jobs.
Freshwater aquatic animals
Fresh water creates a hypotonic environment for aquatic organisms. This is problematic for organisms with pervious skins and gills, whose cell membranes may rupture if excess water is not excreted. Some protists accomplish this using contractile vacuoles, while freshwater fish excrete excess water via the kidney. Although most aquatic organisms have a limited ability to regulate their osmotic balance and therefore can only live within a narrow range of salinity, diadromous fish have the ability to migrate between fresh and saline water bodies. During these migrations they undergo changes to adapt to the changed salinities; these processes are hormonally controlled. The European eel (Anguilla anguilla) uses the hormone prolactin, while in salmon (Salmo salar) the hormone cortisol plays a key role during this process. Freshwater molluscs include freshwater snails and freshwater bivalves. Freshwater crustaceans include freshwater shrimps, crabs, crayfish and copepods.
Air-breathing aquatic animals
In addition to water-breathing animals (e.g. fish and most molluscs), the term "aquatic animal" can be applied to air-breathing tetrapods that have evolved for aquatic life. The most prolific extant group are the marine mammals, such as the Cetacea (whales, dolphins and porpoises, with some freshwater species) and Sirenia (dugongs and manatees), which are so fully adapted to aquatic life that they cannot survive on land at all (beached individuals die), as well as the highly aquatically adapted but partly land-dwelling pinnipeds (true seals, eared seals and the walrus). The term "aquatic mammal" is also applied to riparian mammals like the river otter (Lontra canadensis) and beavers (family Castoridae), although these are technically semiaquatic or amphibious.
Unlike the more common gill-bearing aquatic animals, these air-breathing animals have lungs (which are homologous to the swim bladders in bony fish) and need to surface periodically to change breaths, but their ranges are not restricted by oxygen saturation in water, although salinity changes can still affect their physiology to an extent. There are also reptilian animals that are highly evolved for life in water, although most extant aquatic reptiles, including crocodilians, turtles, water snakes and the marine iguana, are technically semi-aquatic rather than fully aquatic, and most of them only inhabit freshwater ecosystems. Marine reptiles were once a dominant group of ocean predators that altered the marine fauna during the Mesozoic, although most of them died out during the Cretaceous-Paleogene extinction event and now only the sea turtles (the only remaining descendants of the Mesozoic marine reptiles) and sea snakes (which only evolved during the Cenozoic) remain fully aquatic in saltwater ecosystems. Amphibians, while still requiring access to water to inhabit, are separated into their own ecological classification. The majority of amphibians — except the order Gymnophiona (caecilians), which are mainly terrestrial burrowers — have a fully aquatic larval form known as tadpoles, but those from the order Anura (frogs and toads) and some of the order Urodela (salamanders) will metamorphosize into lung-bearing and sometimes skin-breathing terrestrial adults, and most of them may return to the water to breed. Axolotl, a Mexican salamander that retains its larval external gills into adulthood, is the only extant amphibian that remains fully aquatic throughout the entire life cycle. Certain amphibious fish also evolved to breathe air to survive oxygen-deprived waters, such as lungfishes, mudskippers, labyrinth fishes, bichirs, arapaima and walking catfish. Their abilities to breathe atmospheric oxygen are achieved via skin-breathing, enteral respiration, or specialized gill organs such as the labyrinth organ and even primitive lungs (lungfish and bichirs). Most molluscs have gills, while some freshwater gastropods (e.g. Planorbidae) have evolved pallial lungs and some amphibious species (e.g. Ampullariidae) have both. Many species of octopus have cutaneous respiration that allows them to survive out of water at the intertidal zones, with at least one species (Abdopus aculeatus) being routinely terrestrial hunting crabs among the tidal pools of rocky shores. Importance Environmental Aquatic animals play an important role for the environment as indicator species, as they are particularly sensitive to deterioration in water quality and climate change. Biodiversity of aquatic animals is also an important factor for the sustainability of aquatic ecosystems as it reflects the food web status and the carrying capacity of the local habitats. Many migratory aquatic animals, predominantly forage fish (such as sardines) and euryhaline fish (such as salmon), are keystone species that accumulate and transfer biomass between marine, freshwater and even to terrestrial ecosystems. Importance to humans As a food source Aquatic animals are important to humans as a source of food (i.e. seafood) and as raw material for fodders (e.g. feeder fish and fish meal), pharmaceuticals (e.g. fish oil, krill oil, cytarabine and bryostatin) and various industrial chemicals (e.g. chitin and bioplastics, formerly also whale oil). 
The harvesting of aquatic animals, especially finfish, shellfish and inkfish, provides direct and indirect employment supporting the livelihoods of over 500 million people in developing countries, and both the fishing industry and aquaculture make up a major component of the primary sector of the economy. The United Nations Food and Agriculture Organization (FAO) estimates that global production of aquatic animals in 2022 was 185 million tonnes (live weight equivalent), an increase of 4 percent from 2020. The value of the 2022 global trade was estimated at USD 452 billion, comprising USD 157 billion for wild fisheries and USD 296 billion for aquaculture. Of the total 185 million tonnes of aquatic animals produced in 2022, about 164.6 million tonnes (89%) were destined for human consumption, equivalent to an estimated 20.7 kg per capita. The remaining 20.8 million tonnes were destined for non-food uses, mainly the production of fishmeal and fish oil. In 2022, China remained the major producer (36% of the total), followed by India (8%), Indonesia (7%), Vietnam (5%) and Peru (3%). Total fish production in 2016 reached an all-time high of 171 million tonnes, of which 88% was utilized for direct human consumption, resulting in a record-high per capita consumption. Since 1961 the annual global growth in fish consumption has been twice as high as population growth. While annual growth of aquaculture has declined in recent years, significant double-digit growth is still recorded in some countries, particularly in Africa and Asia.
Overfishing and destructive fishing practices, fuelled by commercial incentives, have reduced fish stocks beyond sustainable levels in many world regions, causing the fishing industry to maladaptively fish down the food web. It was estimated in 2014 that global fisheries were adding US$270 billion a year to global GDP, but with full implementation of sustainable fishing that figure could rise by as much as US$50 billion. The FAO projects world production of aquatic animals to reach 205 million tonnes by 2032.
Where sex-disaggregated data are available, approximately 24 percent of the total workforce were women; of these, 53 percent were employed in the sector on a full-time basis, a great improvement since 1995, when only 32 percent of women were employed full time.
Aquatic animal products are highly perishable, and several chemical and biological changes take place immediately after death; this can result in spoilage and food safety risks if good handling and preservation practices are not applied all along the supply chain. These practices are based on temperature reduction (chilling and freezing), heat treatment (canning, boiling and smoking), reduction of available water (drying, salting and smoking) and changing of the storage environment (vacuum packing, modified atmosphere packaging and refrigeration). Aquatic animal products also require special facilities such as cold storage and refrigerated transport, and rapid delivery to consumers.
Recreational fishing
In addition to commercial and subsistence fishing, recreational fishing is a popular pastime in both developed and developing countries, and the manufacturing, retail and service sectors associated with recreational fishing have together grown into a multibillion-dollar industry. In 2014 alone, around 11 million saltwater sportfishing participants in the United States generated US$58 billion of retail revenue (comparatively, commercial fishing generated US$141 billion that same year).
In 2021, the total revenue of the recreational fishing industry in the United States overtook those of Lockheed Martin, Intel, Chrysler and Google; together with personnel salaries (about US$39.5 billion) and various tolls and fees collected by fisheries management agencies (about US$17 billion), it contributed almost US$129 billion to the GDP of the United States, roughly 1% of the national GDP and more than the combined economies of 17 U.S. states.
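As a quick back-of-the-envelope check on the FAO figures quoted above, the quoted share and per-capita numbers can be reproduced from the totals. The short Python sketch below is only an illustrative consistency check; the implied world-population figure is an output of the arithmetic, not a sourced statistic:

# Consistency check of the 2022 production figures quoted above (inputs taken from the text).
total_mt = 185.0        # total aquatic-animal production, million tonnes
food_mt = 164.6         # amount destined for human consumption, million tonnes
per_capita_kg = 20.7    # reported per-capita consumption, kg per person

food_share = food_mt / total_mt                      # about 0.89, matching the quoted 89%
implied_population = food_mt * 1e9 / per_capita_kg   # million tonnes converted to kg, divided by kg per person
print(f"food share: {food_share:.2f}, implied population: {implied_population / 1e9:.2f} billion")

Running this gives a food share of about 0.89 and an implied population of roughly 7.95 billion people, in line with the world population in 2022, so the three quoted figures are mutually consistent.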
Yellowjacket
Yellowjacket or yellow jacket is the common name in North America for predatory social wasps of the genera Vespula and Dolichovespula. Members of these genera are known simply as "wasps" in other English-speaking countries. Most of these are black and yellow, like the eastern yellowjacket (Vespula maculifrons) and the aerial yellowjacket (Dolichovespula arenaria); some are black and white, like the bald-faced hornet (Dolichovespula maculata). Some have an abdomen with a red background color instead of black. They can be identified by their distinctive markings, their occurrence only in colonies, and a characteristic, rapid, side-to-side flight pattern prior to landing. All females are capable of stinging. Yellowjackets are important predators of pest insects.
Identification
Yellowjackets may be confused with other wasps, such as hornets and paper wasps such as Polistes dominula. A typical yellowjacket worker is about long, with alternating bands on the abdomen; the queen is larger, about long (the different patterns on their abdomens help separate various species). Yellowjackets are sometimes mistakenly called "bees" (as in "meat bees"), given that they are similar in size and general coloration to honey bees. In contrast to honey bees, yellowjackets have yellow or white markings, are not covered with tan-brown dense hair on their bodies, and do not have the flattened, hairy pollen-carrying hind legs characteristic of honey bees (although they are capable of pollination). Yellowjackets have lance-like stingers with small barbs and typically sting repeatedly, though occasionally a stinger becomes lodged and pulls free of the wasp's body; the venom, like most bee and wasp venoms, is primarily dangerous only to those humans who are allergic or are stung many times. All species have yellow or white on their faces. Their mouthparts are well developed, with strong mandibles for capturing and chewing insects and probosces for sucking nectar, fruit, and other juices. Yellowjackets build nests in trees, shrubs, or in protected places such as inside man-made structures, or in soil cavities, tree stumps, mouse burrows, etc. They build them from wood fiber they chew into a paper-like pulp. Many other insects exhibit protective mimicry of aggressive, stinging yellowjackets; in addition to numerous bees and wasps (Müllerian mimicry), the list includes some flies, moths, and beetles (Batesian mimicry). Yellowjackets' closest relatives, the hornets, closely resemble them but have larger heads, seen especially in the large distance from the eyes to the back of the head.
Life cycle and habits
Yellowjackets are social hunters living in colonies containing workers, queens, and males (drones). Colonies are annual, with only inseminated queens overwintering. Fertilized queens overwinter in protected places such as hollow logs, stumps, under bark, leaf litter, soil cavities, and man-made structures. Queens emerge during the warm days of late spring or early summer, select a nest site, and build a small paper nest in which they lay eggs. After the eggs hatch from the 30 to 50 brood cells, the queen feeds the young larvae for about 18 to 20 days. The larvae pupate, then emerge as small, infertile females called workers, which take over caring for the larvae, feeding them with chewed-up meat or fruit. By midsummer, the first adult workers have emerged and assume the tasks of nest expansion, foraging for food, care of the queen and larvae, and colony defense.
From this time until her death in the autumn, the queen remains inside the nest, laying eggs. The colony then expands rapidly, reaching a maximum size of 4,000–5,000 workers and a nest of 10,000–15,000 cells in late summer. The species V. squamosa, in the southern part of its range, may build much larger perennial colonies populated by dozens of queens, tens of thousands of workers, and hundreds of thousands of cells. At peak size, reproductive cells are built with new males and queens produced. Adult reproductives remain in the nest fed by the workers. New queens build up fat reserves to overwinter. Adult reproductives leave the parent colony to mate. After mating, males quickly die, while fertilized queens seek protected places to overwinter. Parent colony workers dwindle, usually leaving the nest to die, as does the founding queen. Abandoned nests rapidly decompose and disintegrate during the winter. They can persist as long as they are kept dry, but are rarely used again. In the spring, the cycle is repeated; weather in the spring is the most important factor in colony establishment. The adult yellowjacket diet consists primarily of sugars and carbohydrates, such as fruits, flower nectar, and tree sap. Larvae feed on proteins derived from insects, meats, and fish. Workers collect, chew, and condition such foods before feeding them to the larvae. Many of the insects collected by the workers are considered pest species, making the yellowjacket beneficial to agriculture. Larvae, in return, secrete a sugary substance for workers to eat; this exchange is a form of trophallaxis. As insect sources of food diminish in late summer, larvae produce less for workers to eat. Foraging workers pursue sources of sugar outside the nest including ripe fruits and human garbage. Notable species Two of the European yellowjacket species, the German wasp (Vespula germanica), and the common wasp (Vespula vulgaris) were originally native to Europe, but are now established as invasives in southern Africa, New Zealand, eastern Australia, and South America. The North American yellowjacket (Vespula alascensis), eastern yellowjacket (Vespula maculifrons), western yellowjacket (Vespula pensylvanica), and prairie yellowjacket (Vespula atropilosa) are native to North America. Southern yellowjacket (Vespula squamosa), a species that is sometimes free-living and sometimes a social parasite Bald-faced hornets (Dolichovespula maculata) belong among the yellowjackets rather than the true hornets. They are not usually called "yellowjackets" because of their ivory-on-black coloration. Aerial yellowjacket (Dolichovespula arenaria) Tree wasp (Dolichovespula sylvestris) Nest Dolichovespula species such as the aerial yellowjacket, D. arenaria, and the bald-faced hornet, tend to create exposed aerial nests. This feature is shared with some true hornets, which has led to some naming confusion. Vespula species, in contrast, build concealed nests, usually underground. Yellowjacket nests usually last for only one season, dying off in winter. The nest is started by a single queen, called the "foundress". Typically, a nest can reach the size of a basketball by the end of a season. In parts of Australia, New Zealand, the Pacific Islands, and southern coastal areas of the United States, the winters are mild enough to allow nest overwintering. Nests that survive multiple seasons become massive and often possess multiple egg-laying queens. In the United States The German yellowjacket (V. 
germanica) first appeared in Ohio in 1975, and has now become the dominant species over the eastern yellowjacket. It is bold and aggressive, can sting repeatedly and painfully, and will mark aggressors and pursue them. It is often confused with Polistes dominula, another invasive species in the United States, due to their very similar patterns. The German yellowjacket builds its nests in cavities, not necessarily underground, with the peak worker population in temperate areas reaching between 1,000 and 3,000 individuals between May and August. Each colony produces several thousand new reproductives after this point, through November.
The eastern yellowjacket builds its nests underground, also with a peak worker population between 1,000 and 3,000 individuals, similar to the German yellowjacket. Nests are built entirely of wood fiber and are completely enclosed except for a small entrance at the bottom. The color of the paper depends heavily on the source of the wood fibers used. The nests contain multiple horizontal tiers of combs, and the larvae hang within the combs. In the southeastern United States, where southern yellowjacket (Vespula squamosa) nests may persist through the winter, colony sizes of this species may reach 100,000 adult wasps. The same kind of nest expansion has occurred in Hawaii with the invasive western yellowjacket (V. pensylvanica).
In popular culture
The yellowjacket's most visible place in US sporting culture is as a mascot, most famously with the Georgia Tech Yellow Jackets, represented by the mascot Buzz. Other college and university examples include Allen University, the American International College, Baldwin-Wallace University, Black Hills State University, Cedarville University, Defiance College, Graceland University, Howard Payne University, LeTourneau University, Montana State University Billings, Northern Vermont University-Lyndon, Randolph-Macon College, the University of Rochester, the University of Wisconsin–Superior, West Virginia State University, and Waynesburg University. Though not specified by the team, the mascot of the Columbus Blue Jackets, named "Stinger", closely resembles a yellowjacket; in the years since its original yellow incarnation, the mascot's color has been changed to light green, seemingly combining the real insect's yellow and the team's blue. In the United Kingdom, the rugby union team Wasps RFC traditionally used a yellowjacket as its club emblem.
The Marvel Comics character Yellowjacket, who is based on the insect, is one of the various identities adopted by Hank Pym, who is most commonly known as Ant-Man. In addition to being able to fly, emit bio-electricity inspired by a yellowjacket's sting, and shrink down to insect size, Yellowjacket can also control the insects and uses them to aid him in various ways. The television series Yellowjackets features a girls' soccer team that gets stranded in the wilderness and resorts to extreme measures to survive; their mascot is a yellowjacket, and the theme song features images of the insect as well.
Note that yellowjacket is often spelled as two words (yellow jacket) in popular culture and even in some dictionaries. The proper entomological spelling, according to the Entomological Society of America, is as a single word (yellowjacket).
Ricefish
The ricefishes are a family (Adrianichthyidae) of small ray-finned fish that are found in fresh and brackish waters from India to Japan and out into the Malay Archipelago, most notably Sulawesi (where the Lake Poso and Lore Lindu species are known as buntingi). The common name ricefish derives from the fact that some species are found in rice paddies. This family consists of about 37 species in two genera (some recognize a third, Xenopoecilus). Several species are rare and threatened, and some 2–4 may already be extinct. Description Most of these species are quite small, making them of interest for aquaria. Adrianichthys reach lengths of depending on the exact species involved, while the largest Oryzias reaches up to . Most Oryzias species are less than a half this length, with the smallest being up to only long. They have a number of distinctive features, including an unusual structure to the jaw, and the presence of an additional bone in the tail. The Japanese rice fish (O. latipes), also known as the medaka, is a popular model organism used in research in developmental biology. This species has traveled into space, where they have the distinction of being the first vertebrate to mate and produce healthy young in space. Genetic study of the family suggests that it originally evolved on Sulawesi and spread from there to the Asian mainland; the supposed genus Xenopoecilus are apparently unrelated, morphologically divergent species of Oryzias. Taxonomy The ricefish were formerly classified within the order Cyprinodontiformes but in the 1980s workers showed that they were a monophyletic grouping, mainly based on the characters on the bones of the gill arches and the hyoid apparatus, within the Beloniformes as the family Adrianichthyidae, this family making up one of the three suborders of the Beloniformes, the Adrianichthyoidei. Since then some workers have placed them in the Cyprinodontiformes but more recently molecular studies have supported their placement in the Beloniformes. History Ricefish are believed to have been kept as aquarium fishes since the 17th century. The Japanese ricefish was one of the first species to be kept and it has been bred into a golden color, from their original white coloring. Reproduction As with most fish, ricefish typically spawn their eggs, which are fertilised externally. However, some species, including the Japanese ricefish, are known to fertilise the eggs internally, carrying them inside the body as the embryo develops. The female then lays the eggs just before they hatch. Several other species carry their eggs attached to the body between their pelvic fins.
Blade
A blade is the sharp, cutting portion of a tool, weapon, or machine, specifically designed to puncture, chop, slice, or scrape surfaces or materials. Blades are typically made from materials that are harder than those they are intended to cut. This includes early examples made from flaked stones like flint or obsidian, evolving through the ages into metal forms like copper, bronze, and iron, and culminating in modern versions made from steel or ceramics. Serving as one of humanity's oldest tools, blades continue to have wide-ranging applications, including in combat, cooking, and various other everyday and specialized tasks. Blades function by concentrating force at the cutting edge. Design variations, such as serrated edges found on bread knives and saws, serve to enhance this force concentration, adapting blades for specific functions and materials. Blades thus hold a significant place both historically and in contemporary society, reflecting an evolution in material technology and utility. Uses During food preparation, knives are mainly used for slicing, chopping, and piercing. In combat, a blade may be used to slash or puncture, and may also be thrown or otherwise propelled. The function is to sever a nerve, muscle or tendon fibers, or blood vessel to disable or kill the adversary. Severing a major blood vessel typically leads to death due to exsanguination. Blades may be used to scrape, moving the blade sideways across a surface, as in an ink eraser, rather than along or through a surface. For construction equipment such as a grader, the ground-working implement is also referred to as the blade, typically with a replaceable cutting edge. Physics A simple blade intended for cutting has two faces that meet at an edge. Ideally, this edge would have no roundness but in practice, all edges can be seen to be rounded to some degree under magnification either optically or with an electron microscope. Force is applied to the blade, either from the handle or pressing on the back of the blade. The handle or back of the blade has a large area compared to the fine edge. This concentration of applied force onto the small edge area increases the pressure exerted by the edge. It is this high pressure that allows a blade to cut through a material by breaking the bonds between the molecules, crystals, fibers, etc. in the material. This necessitates the blade being strong enough to resist breaking before the other material gives way. Geometry The angle at which the faces meet is important as a larger angle will make for a duller blade while making the edge stronger. A stronger edge is less likely to dull from fracture or have the edge roll out of shape. The shape of the blade is also important. A thicker blade will be heavier and stronger and stiffer than a thinner one of similar design while also making it experience more drag while slicing or piercing. A filleting knife will be thin enough to be very flexible while a carving knife will be thicker and stiffer; a dagger will be thin so it can pierce, while a camping knife will be thicker so it can be stronger and more durable. A strongly curved edge, like a talwar, will allow the user to draw the edge of the blade against an opponent even while close to the opponent where a straight sword would be more difficult to pull in the same fashion. 
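To put rough numbers on the force-concentration principle described in the Physics section above, here is an illustrative order-of-magnitude estimate; the applied force and the edge dimensions are assumptions chosen for the example, not measured values for any particular blade:

$$P = \frac{F}{A} = \frac{20\ \mathrm{N}}{(0.05\ \mathrm{m}) \times (1\times 10^{-6}\ \mathrm{m})} = 4\times 10^{8}\ \mathrm{Pa}$$

That is, a modest 20 N push applied through an edge 5 cm long and roughly 1 µm wide already produces a local pressure of several hundred megapascals, comparable to or above the yield strength of many everyday materials, whereas the same force spread over a spine a few millimetres wide produces pressures thousands of times smaller.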
The curved edge of an axe means that only a small length of the edge will initially strike the tree, concentrating force as does a thinner edge, whereas a straight edge could potentially land with the full length of its edge against a flat section of the tree. A splitting maul has a convex section to avoid getting stuck in the wood where chopping axes can be flat or even concave. A khopesh, falchion, or kukri is angled and/or weighted at the distal end so that force is concentrated at the faster moving, heavier part of the blade, maximizing cutting power and making it largely unsuitable for thrusting, whereas a rapier is thin and tapered, allowing it to pierce and be moved with more agility while reducing its chopping power compared to a similarly sized sword. A serrated edge, such as on a saw or a bread knife, concentrates force onto the tips of the serrations, which increases pressure as well as allowing soft or fibrous material (like wood, rope, bread, and vegetables) to expand into the spaces between serrations. Whereas pushing any knife, even a bread knife, down onto a bread loaf will just squash the loaf as bread has a low elastic modulus (is soft) but high yield strain (loosely, can be stretched or squashed by a large proportion without breaking), drawing serrations across the loaf with little downward force will allow each serration to simultaneously cut the bread with much less deformation of the loaf. Similarly, pushing on a rope tends to squash the rope while drawing serrations across it shears the rope fibers. Drawing a smooth blade is less effective because its edge is parallel to the direction of the draw, whereas the serrations of a serrated blade meet the fibers at an angle. Serrations on knives are often symmetric, allowing the blade to cut on both the forward and reverse strokes of a cut, a notable exception being Veff serrations which are designed to maximize cutting power while moving the blade away from the user. Saw blade serrations, for both wood and metal, are typically asymmetrical so that they cut while moving in only one direction. (Saws act by abrading a material into dust along a narrow channel, the kerf, whereas knives and similar act by forcing the material apart. This means that saws result in a loss of material and the serrations of a saw also serve to carry metal swarf and sawdust out of the cut channel.) Fullers are longitudinal channels either forged into the blade or later machined/milled out of the blade, though the latter process is less desirable. This loss of material necessarily weakens the blade but serves to make the blade lighter without sacrificing stiffness. The same principle is applied in the manufacture of beams such as I-beams. Fullers are only of significant utility in swords. In most knives there is so little material removed by the fuller that it makes little difference to the weight of the blade, and they are largely cosmetic. Materials Typically blades are made from a material that is at least as hard as, and usually harder than, the material to be cut. Insufficiently hard blades will be unable to cut a material or will wear away quickly, as hardness is related to a material's ability to resist abrasion. However, blades must also be tough enough to resist the dynamic load of impact, and as a general rule the harder a blade, the less tough (the more brittle) the material.
For example, a steel axehead is much harder than the wood it is intended to cut and is sufficiently tough to resist the impact resulting when swung against a tree, while a ceramic kitchen knife, harder than steel, is very brittle (has low toughness) and can easily shatter if dropped onto the floor, twisted while inside the food it is cutting, or carelessly stored under other kitchen utensils. This creates a tension between the intended use of the blade, the material it is to be made from, and any manufacturing processes (such as heat treatment in the case of steel blades) that will affect the blade's hardness and toughness. A balance must be found between sharpness and how well the edge lasts. Methods that can circumvent this include differential hardening. This method yields an edge that can hold its sharpness as well as a body that is tough. Non-metals Prehistorically, and in less technologically advanced cultures even into modern times, tool and weapon blades have been made from wood, bone, and stone. Most woods are exceptionally poor at holding edges, and bone and stone are brittle and so prone to fracture when striking or being struck. In modern times stone, in the form of obsidian, is used in some medical scalpels as it is capable of being formed into an exceedingly fine edge. Ceramic knives are non-metallic and non-magnetic. As non-metals do not corrode they remain rust and corrosion free, but they suffer from faults similar to those of stone and bone, being rather brittle and almost entirely inflexible. They are harder than metal knives and so more difficult to sharpen, and some ceramic knives may be as hard as or harder than some sharpening stones. For example, synthetic sapphire is harder than natural sharpening stones and is as hard as alumina sharpening stones. Zirconium dioxide is also harder than garnet sharpening stones and is nearly as hard as alumina. Both require diamond stones or silicon carbide stones to sharpen, and care has to be taken to avoid chipping the blade. As such, ceramic knives are seldom used outside of a kitchen and they are still quite uncommon. Plastic knives are difficult to make sharp and poorly retain an edge. They are largely used as low cost, disposable utensils, as children's utensils, or in environments such as air travel where metal blades are prohibited. They are often serrated to compensate for their general lack of sharpness but, as evidenced by the fact they can cut food, they are still capable of inflicting injury. Plastic blades of designs other than disposable cutlery are prohibited or restricted in some jurisdictions as they are undetectable by metal detectors. Metals Native copper was used to make blades by ancient civilizations due to its availability. Copper's comparative softness causes it to deform easily; it does not hold an edge well and is poorly suited for working stone. Bronze is superior in this regard, and was taken up by later civilizations. Both bronze and copper can be work hardened by hitting the metal with a hammer. With technological advancement in smelting, iron came to be used in the manufacturing of blades. Steel, a range of alloys made from iron, has become the metal of choice for the modern age. Various alloys of steel can be made which offer a wide range of physical and chemical properties desirable for blades.
For example, surgical scalpels are often made of stainless steel so that they remain free of rust and largely chemically inert; tool steels are hard and impact resistant (and often expensive, as retaining toughness and hardness requires expensive alloying materials, and, being hard, they are difficult to make into their finished shape), and some are designed to resist changes to their physical properties at high temperatures. Steels can be further heat treated to optimize their toughness, which is important for impact blades, or their hardness, which allows them to retain an edge well with use (although harder metals require more effort to sharpen). Combined materials and heat-treatments It is possible to combine different materials, or different heat treatments, to produce desirable qualities in a blade. For example, the finest Japanese swords were routinely made of up to seven sections of metal, and even poorer quality swords were often made of two. These would include soft irons that could absorb the energy of impact without fracturing but which would bend and poorly retain an edge, and hard steels more liable to shatter on impact but which retained an edge well. The combination provided a sword that would resist impact while remaining sharp, even though the edge could chip if abused. Pattern welding involved forging together twisted bars of soft (bendable) low-carbon and hard (brittle) higher-carbon iron. This was done because furnaces of the time were typically able to produce only one grade or the other, and neither on its own was well suited for more than a blade of very limited use. The ability of modern steelmakers to produce very high-quality steels of various compositions has largely relegated this technique to either historical recreations or to artistic works. Acid etching and polishing blades made of different grades of steel can be used to produce decorative or artistic effects. Japanese sword makers developed the technique of differential hardening by covering their sword blades in different thicknesses of clay before quenching. Thinner clay allowed the heated metal to cool faster, particularly along the edge. Faster cooling resulted in a finer crystal structure, resulting in a blade with a hard edge but a more flexible body. European sword makers produced similar results using differential tempering. Dulling Blades dull with use and abuse. This is particularly true of acute blades and those made of soft materials. Dulling usually occurs due to contact between the blade and a hard substance such as ceramic, stone, bone, glass, or metal. The more acute the blade, the more easily it will dull. As the blade near the edge is thinner, there is little material to remove before the edge is worn away to a thicker section. Thin edges can also roll over when force is applied to them, forming a section like the bottom part of a letter "J". For this reason, straight edge razors are frequently stropped to straighten the edge. Drawing a blade across any material tends to abrade both the blade, usually making it duller, and the cut material. Though softer than glass or many types of stone used in the kitchen, steel edges can still scratch these surfaces. The resulting scratch is full of very fine particles of ground glass or stone which will very quickly abrade the blade's edge and so dull it. In times when swords were regularly used in warfare, they required frequent sharpening because of dulling from contact with rigid armor, mail, metal rimmed shields, or other swords, for example.
Particularly, hitting the edge of another sword by accident or in an emergency could chip away metal and even cause cracks through the blade. Soft-cored blades are more resistant to fracturing on impact. Nail pulls Folding pocket knives often have a groove cut in the side of the blade near the spine. This is called a nail pull and allows the fingernail to be inserted to swing the blade out of the holder. Knife patterns Some of the most common shapes are listed below. S1 A straight back blade, also called standard or normal, has a curving edge and a straight back. A dull back lets the wielder use fingers to concentrate force; it also makes the knife heavy and strong for its size. The curve concentrates force on a smaller area, making cutting easier. This knife can chop as well as pick and slice. This is also the best single-edged blade shape for thrusting, as the edge cuts a swath that the entire width of the knife can pass through without the spine having to push aside any material on its path, as a sheepsfoot or drop-point knife would. S2 A trailing-point knife has a back edge that curves upward to end above the spine. This lets a lightweight knife have a larger curve on its edge and indeed the whole of the knife may be curved. Such a knife is optimized for slicing or slashing. Trailing point blades provide a larger cutting area, or belly, and are common on skinning knives. S3 A drop point blade has a convex curve of the back towards the point. It handles much like the clip-point, though with a stronger point typically less suitable for piercing. Swiss army pocket knives often have drop-points on their larger blades. S4 A clip-point blade is like a normal blade with the back "clipped". This clip can be either straight or concave. The back edge of the clip may have a false edge that could be sharpened to make a second edge. The sharp tip is useful as a pick, or for cutting in tight places. If the false edge is sharpened it increases the knife's effectiveness in piercing. As well, having the tip closer to the center of the blade allows greater control in piercing. The Bowie knife has a clip point blade and clip-points are common on pocket knives and other folding knives. S5 A sheepsfoot blade has a straight edge and a straight dull back that curves towards the edge at the end. It gives the most control because the dull back edge is made to be held by fingers. Sheepsfoot blades were originally made to trim the hooves of sheep; their shape bears no similarity to the foot of a sheep. S6 A Wharncliffe blade is similar in profile to a sheep's foot but the curve of the back edge starts closer to the handle and is more gradual. Its blade is much thicker than that of a knife of comparable size. Wharncliffes were used by sailors, as the shape of the tip prevented accidental penetration of the work or the user's hand with the sudden motion of a ship. S7 A spey point blade (once used for neutering livestock) has a single, sharp, straight edge that curves strongly upwards at the end to meet a short, dull, straight point from the dull back. Because the curved end of the blade is closer to perpendicular to the blade's axis than on other knives, and the blade lacks a point, making penetration unlikely, spey blades are common on Trapper style pocketknives for skinning fur-bearing animals. C1 A leaf blade has a distinctive recurved "waist" that adds some curved "belly" to the knife, facilitating slicing, and shifts weight towards the tip, which improves chopping ability and makes the shape common on throwing knives.
C2 A spear point blade is a symmetrically-shaped blade with a point aligned with the centerline of the blade's long axis. True spear-point blades are double-edged with a central spine, like a dagger or spear head. The spear point is one of the stronger blade point designs in terms of penetration stress, and is found on many thrusting knives such as the dagger. The term spear point is occasionally and confusingly used to describe small single-edged blades without a central spine, such as that of the pen knife, a small folding-blade pocket knife formerly used in sharpening quills for writing. Pen-knife may nowadays also refer to the blade pattern of some larger pocket knife blades that would otherwise be termed drop-point designs. C3 A needle point blade has a sharply-tapered acuminated point. It is frequently found on daggers such as the stiletto (which had no sharpened edges) and the Fairbairn–Sykes fighting knife. Its long, narrow point reduces friction and increases the blade's penetrative capabilities, but is liable to stick in bone and can break if abused. When the needle point is combined with a reinforced 'T' section running the length of the blade's spine, it is called a reinforced tip. One example of a knife with a reinforced tip is the pesh-kabz. C4 Kris or flame-bladed sword. These blades have a distinct recurved blade form and are sharpened on both sides, typically tapering to (or approximating) a symmetrical point. C5 Referred to in English-speaking countries as a "tanto" or "tanto point", a corruption of the Japanese word tantō (despite the tip bearing no resemblance to that of a tantō), or as a chisel point, referring to the straightness of the edge that comprises the end of the blade (and not to be confused with a blade said to have a "chisel grind", which would refer to a blade ground on only one side, even though chisels can be ground on one or both sides). It is similar to, but not the same as, some early Japanese swords that had kamasu kissaki ("barracuda tip"), a nearly straight edge at the tip, whereas the typical "tanto point" as found in the West has a straight edge. The barracuda tip sword was sharp but also fragile, whereas modern tanto points are often advertised as being stronger at the tip because nearly the whole thickness of the blade is retained until quite close to the end of the knife. The geometry of the angle under the point gives tanto blades excellent penetration capabilities. For this reason, tanto blades are often found on knives designed for combat or fighting applications, where the user may need to pierce heavy clothing or low-level soft body armor. With a modified tanto, the end is clipped and often sharpened. This brings the tip closer to the center of the blade, increasing control, and improves penetration potential through a finer point and a sharpened back edge. C6 A hawkbill blade is sharpened on the inside edge and is similar to carpet and linoleum knives. The point will tear even if the rest of the knife is comparatively dull. The karambit from Far South-East Asia is a hawkbill knife which is held with the blade extending from the bottom of the fist and the tip facing forward. The outside edge of a karambit may be sharp and if so may also feature a backward-facing point. C7 An ulu (lit. 'woman's knife' in Inuktitut) knife is a sharpened segment of a circle. This blade type has no point, and has a handle in the middle. It is good for scraping and sometimes chopping.
The semi-circular version appears elsewhere in the world and is called a head knife. It is used in leatherworking both to scrape down leather (reducing thickness, i.e. skiving), and to make precise, rolling cuts for shapes other than straight lines. The circular version is a popular tool for slicing pizzas. One corner is placed at the edge of the pizza and the blade is rolled across in a diameter cut. Sword patterns The sharp edges of a sword may be either curved or straight. Curved blades tend to glide more easily through soft materials, making these weapons better suited to slicing. Techniques for such weapons feature drawing the blade across the opponent's body and back. For straight-edged weapons, many recorded techniques feature cleaving cuts, which deliver the power out to a point, striking directly in at the target's body, done to split flesh and bone rather than slice it. That being said, there also exist many historical slicing techniques for straight-edged weapons. Hacking cuts can be followed by a drawing action to maximize the cut's effectiveness. For more information see Western Martial Arts or kenjutsu. Some weapons are made with only a single leading edge, such as the sabre or dusack. The dusack has a 'false edge' near the tip, which only extends down a portion of the blade's backside. Other weapons have a blade that is entirely dull except for a sharpened point, like the épée or foil, which prefer thrusts over cuts. A blade cannot perform a proper cut without an edge, and so in competitive fencing such attacks score no points. Some variations include: The flame blade (an undulated blade, for both psychological effect and some tactical advantage of using a non-standard blade: vibrations and easier parry) The colichemarde, found in smallswords Marks and decoration Blades are sometimes marked or inscribed, for decorative purposes, or with the mark of either the maker or the owner. Blade decorations are often realized in inlay in some precious metal (gold or silver). Early blade inscriptions are known from the Bronze Age: a Hittite sword found at Hattusa bears an inscription chiseled into the bronze, stating that the blade was deposited as an offering to the storm-god by king Tuthaliya. Blade inscriptions became particularly popular on the 12th-century knightly sword, building on the earlier, 9th- to 11th-century tradition of the so-called Ulfberht swords.
Technology
Rigid components
null
315428
https://en.wikipedia.org/wiki/Isosceles%20triangle
Isosceles triangle
In geometry, an isosceles triangle () is a triangle that has two sides of equal length or two angles of equal measure. Sometimes it is specified as having exactly two sides of equal length, and sometimes as having at least two sides of equal length, the latter version thus including the equilateral triangle as a special case. Examples of isosceles triangles include the isosceles right triangle, the golden triangle, and the faces of bipyramids and certain Catalan solids. The mathematical study of isosceles triangles dates back to ancient Egyptian mathematics and Babylonian mathematics. Isosceles triangles have been used as decoration from even earlier times, and appear frequently in architecture and design, for instance in the pediments and gables of buildings. The two equal sides are called the legs and the third side is called the base of the triangle. The other dimensions of the triangle, such as its height, area, and perimeter, can be calculated by simple formulas from the lengths of the legs and base. Every isosceles triangle has an axis of symmetry along the perpendicular bisector of its base. The two equal angles at the base (opposite the legs) are always acute, so the classification of the triangle as acute, right, or obtuse depends only on the angle between its two legs. Terminology, classification, and examples Euclid defined an isosceles triangle as a triangle with exactly two equal angles or two equal sides, but modern treatments prefer to define isosceles triangles as having at least two equal sides. The difference between these two definitions is that the modern version makes equilateral triangles (with three equal sides) a special case of isosceles triangles. A triangle that is not isosceles (having three unequal sides) is called scalene. "Isosceles" is made from the Greek roots "isos" (equal) and "skelos" (leg). The same word is used, for instance, for isosceles trapezoids, trapezoids with two equal sides, and for isosceles sets, sets of points every three of which form an isosceles triangle. In an isosceles triangle that has exactly two equal sides, the equal sides are called legs and the third side is called the base. The angle included by the legs is called the vertex angle and the angles that have the base as one of their sides are called the base angles. The vertex opposite the base is called the apex. In the equilateral triangle case, since all sides are equal, any side can be called the base. Whether an isosceles triangle is acute, right or obtuse depends only on the angle at its apex. In Euclidean geometry, the base angles can not be obtuse (greater than 90°) or right (equal to 90°) because their measures would sum to at least 180°, the total of all angles in any Euclidean triangle. Since a triangle is obtuse or right if and only if one of its angles is obtuse or right, respectively, an isosceles triangle is obtuse, right or acute if and only if its apex angle is respectively obtuse, right or acute. In Edwin Abbott's book Flatland, this classification of shapes was used as a satire of social hierarchy: isosceles triangles represented the working class, with acute isosceles triangles higher in the hierarchy than right or obtuse isosceles triangles. As well as the isosceles right triangle, several other specific shapes of isosceles triangles have been studied. 
These include the Calabi triangle (a triangle with three congruent inscribed squares), the golden triangle and golden gnomon (two isosceles triangles whose sides and base are in the golden ratio), the 80-80-20 triangle appearing in the Langley's Adventitious Angles puzzle, and the 30-30-120 triangle of the triakis triangular tiling. Five Catalan solids, the triakis tetrahedron, triakis octahedron, tetrakis hexahedron, pentakis dodecahedron, and triakis icosahedron, each have isosceles-triangle faces, as do infinitely many pyramids and bipyramids. Formulas Height For any isosceles triangle, the following six line segments coincide: the altitude, a line segment from the apex perpendicular to the base, the angle bisector from the apex to the base, the median from the apex to the midpoint of the base, the perpendicular bisector of the base within the triangle, the segment within the triangle of the unique axis of symmetry of the triangle, and the segment within the triangle of the Euler line of the triangle, except when the triangle is equilateral. Their common length is the height of the triangle. If the triangle has equal sides of length and base of length , the general triangle formulas for the lengths of these segments all simplify to This formula can also be derived from the Pythagorean theorem using the fact that the altitude bisects the base and partitions the isosceles triangle into two congruent right triangles. The Euler line of any triangle goes through the triangle's orthocenter (the intersection of its three altitudes), its centroid (the intersection of its three medians), and its circumcenter (the intersection of the perpendicular bisectors of its three sides, which is also the center of the circumcircle that passes through the three vertices). In an isosceles triangle with exactly two equal sides, these three points are distinct, and (by symmetry) all lie on the symmetry axis of the triangle, from which it follows that the Euler line coincides with the axis of symmetry. The incenter of the triangle also lies on the Euler line, something that is not true for other triangles. If any two of an angle bisector, median, or altitude coincide in a given triangle, that triangle must be isosceles. Area The area of an isosceles triangle can be derived from the formula for its height, and from the general formula for the area of a triangle as half the product of base and height: The same area formula can also be derived from Heron's formula for the area of a triangle from its three sides. However, applying Heron's formula directly can be numerically unstable for isosceles triangles with very sharp angles, because of the near-cancellation between the semiperimeter and side length in those triangles. If the apex angle and leg lengths of an isosceles triangle are known, then the area of that triangle is: This is a special case of the general formula for the area of a triangle as half the product of two sides times the sine of the included angle. Perimeter The perimeter of an isosceles triangle with equal sides and base is just As in any triangle, the area and perimeter are related by the isoperimetric inequality This is a strict inequality for isosceles triangles with sides unequal to the base, and becomes an equality for the equilateral triangle. 
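The displayed formulas referred to in the passage above did not survive in this text. As a reconstruction, and assuming the conventional notation of legs of length a, base b, apex angle θ, height h, area T, and perimeter p (the notation is an assumption, not preserved from the original), the standard relations are:

```latex
% Standard isosceles-triangle formulas (notation assumed: legs a, base b,
% apex angle \theta, height h, area T, perimeter p).
\[
  h = \sqrt{a^{2} - \tfrac{b^{2}}{4}}, \qquad
  T = \tfrac{1}{2}\,b\,h = \tfrac{b}{4}\sqrt{4a^{2} - b^{2}}, \qquad
  T = \tfrac{1}{2}\,a^{2}\sin\theta,
\]
\[
  p = 2a + b, \qquad
  T \le \frac{p^{2}}{12\sqrt{3}}
  \quad\text{(isoperimetric inequality; equality only for the equilateral triangle)}.
\]
```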
The area, perimeter, and base can also be related to each other by the equation If the base and perimeter are fixed, then this formula determines the area of the resulting isosceles triangle, which is the maximum possible among all triangles with the same base and perimeter. On the other hand, if the area and perimeter are fixed, this formula can be used to recover the base length, but not uniquely: there are in general two distinct isosceles triangles with given area and perimeter . When the isoperimetric inequality becomes an equality, there is only one such triangle, which is equilateral. Angle bisector length If the two equal sides have length and the other side has length , then the internal angle bisector from one of the two equal-angled vertices satisfies as well as and conversely, if the latter condition holds, an isosceles triangle parametrized by and exists. The Steiner–Lehmus theorem states that every triangle with two angle bisectors of equal lengths is isosceles. It was formulated in 1840 by C. L. Lehmus. Its other namesake, Jakob Steiner, was one of the first to provide a solution. Although originally formulated only for internal angle bisectors, it works for many (but not all) cases when, instead, two external angle bisectors are equal. The 30-30-120 isosceles triangle makes a boundary case for this variation of the theorem, as it has four equal angle bisectors (two internal, two external). Radii The inradius and circumradius formulas for an isosceles triangle may be derived from their formulas for arbitrary triangles. The radius of the inscribed circle of an isosceles triangle with side length , base , and height is: The center of the circle lies on the symmetry axis of the triangle, this distance above the base. An isosceles triangle has the largest possible inscribed circle among the triangles with the same base and apex angle, as well as also having the largest area and perimeter among the same class of triangles. The radius of the circumscribed circle is: The center of the circle lies on the symmetry axis of the triangle, this distance below the apex. Inscribed square For any isosceles triangle, there is a unique square with one side collinear with the base of the triangle and the opposite two corners on its sides. The Calabi triangle is a special isosceles triangle with the property that the other two inscribed squares, with sides collinear with the sides of the triangle, are of the same size as the base square. A much older theorem, preserved in the works of Hero of Alexandria, states that, for an isosceles triangle with base and height , the side length of the inscribed square on the base of the triangle is Isosceles subdivision of other shapes For any integer , any triangle can be partitioned into isosceles triangles. In a right triangle, the median from the hypotenuse (that is, the line segment from the midpoint of the hypotenuse to the right-angled vertex) divides the right triangle into two isosceles triangles. This is because the midpoint of the hypotenuse is the center of the circumcircle of the right triangle, and each of the two triangles created by the partition has two equal radii as two of its sides. Similarly, an acute triangle can be partitioned into three isosceles triangles by segments from its circumcenter, but this method does not work for obtuse triangles, because the circumcenter lies outside the triangle. 
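Before the partition discussion continues, the relations cited in the preceding passage can likewise be reconstructed, again under the assumed notation of legs a, base b, height h, area T, and perimeter p:

```latex
% Reconstructed relations for an isosceles triangle (notation assumed:
% legs a, base b, height h, area T, perimeter p, inradius r, circumradius R).
\[
  2pb^{3} - p^{2}b^{2} + 16T^{2} = 0
  \qquad\text{(area--perimeter--base relation)}
\]
\[
  r = \frac{bh}{2a + b}, \qquad
  R = \frac{a^{2}}{2h}
  \qquad\text{(inradius and circumradius)}
\]
\[
  \frac{bh}{b + h}
  \qquad\text{(side length of the square inscribed on the base)}
\]
```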
Generalizing the partition of an acute triangle, any cyclic polygon that contains the center of its circumscribed circle can be partitioned into isosceles triangles by the radii of this circle through its vertices. The fact that all radii of a circle have equal length implies that all of these triangles are isosceles. This partition can be used to derive a formula for the area of the polygon as a function of its side lengths, even for cyclic polygons that do not contain their circumcenters. This formula generalizes Heron's formula for triangles and Brahmagupta's formula for cyclic quadrilaterals. Either diagonal of a rhombus divides it into two congruent isosceles triangles. Similarly, one of the two diagonals of a kite divides it into two isosceles triangles, which are not congruent except when the kite is a rhombus. Applications In architecture and design Isosceles triangles commonly appear in architecture as the shapes of gables and pediments. In ancient Greek architecture and its later imitations, the obtuse isosceles triangle was used; in Gothic architecture this was replaced by the acute isosceles triangle. In the architecture of the Middle Ages, another isosceles triangle shape became popular: the Egyptian isosceles triangle. This is an isosceles triangle that is acute, but less so than the equilateral triangle; its height is proportional to 5/8 of its base. The Egyptian isosceles triangle was brought back into use in modern architecture by Dutch architect Hendrik Petrus Berlage. Warren truss structures, such as bridges, are commonly arranged in isosceles triangles, although sometimes vertical beams are also included for additional strength. Surfaces tessellated by obtuse isosceles triangles can be used to form deployable structures that have two stable states: an unfolded state in which the surface expands to a cylindrical column, and a folded state in which it folds into a more compact prism shape that can be more easily transported. The same tessellation pattern forms the basis of Yoshimura buckling, a pattern formed when cylindrical surfaces are axially compressed, and of the Schwarz lantern, an example used in mathematics to show that the area of a smooth surface cannot always be accurately approximated by polyhedra converging to the surface. In graphic design and the decorative arts, isosceles triangles have been a frequent design element in cultures around the world from at least the Early Neolithic to modern times. They are a common design element in flags and heraldry, appearing prominently with a vertical base, for instance, in the flag of Guyana, or with a horizontal base in the flag of Saint Lucia, where they form a stylized image of a mountain island. They also have been used in designs with religious or mystic significance, for instance in the Sri Yantra of Hindu meditational practice. In other areas of mathematics If a cubic equation with real coefficients has three roots that are not all real numbers, then when these roots are plotted in the complex plane as an Argand diagram they form vertices of an isosceles triangle whose axis of symmetry coincides with the horizontal (real) axis. This is because the complex roots are complex conjugates and hence are symmetric about the real axis. 
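The claim about cubic roots above is easy to check numerically. The sketch below uses an arbitrary cubic chosen for illustration (not one taken from this article) and verifies that its one real root and two complex-conjugate roots form an isosceles triangle in the complex plane.

```python
# Minimal numerical check: the non-real roots of a real cubic are complex
# conjugates, so the three roots form an isosceles triangle about the real
# axis. The cubic below is an arbitrary illustrative choice.
import numpy as np

coeffs = [1.0, -2.0, 4.0, -8.0]          # x^3 - 2x^2 + 4x - 8 = (x-2)(x^2+4)
roots = np.roots(coeffs)                  # roots: 2, 2i, -2i

real_root = next(r for r in roots if abs(r.imag) < 1e-9)
conj_pair = [r for r in roots if abs(r.imag) >= 1e-9]

# The two "legs" run from the real root to each of the conjugate roots.
leg_a = abs(real_root - conj_pair[0])
leg_b = abs(real_root - conj_pair[1])
print(f"leg lengths: {leg_a:.6f} and {leg_b:.6f}  (equal, hence isosceles)")
```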
In celestial mechanics, the three-body problem has been studied in the special case that the three bodies form an isosceles triangle, because assuming that the bodies are arranged in this way reduces the number of degrees of freedom of the system without reducing it to the solved Lagrangian point case when the bodies form an equilateral triangle. The first instances of the three-body problem shown to have unbounded oscillations were in the isosceles three-body problem. History and fallacies Long before isosceles triangles were studied by the ancient Greek mathematicians, the practitioners of Ancient Egyptian mathematics and Babylonian mathematics knew how to calculate their area. Problems of this type are included in the Moscow Mathematical Papyrus and Rhind Mathematical Papyrus. The theorem that the base angles of an isosceles triangle are equal appears as Proposition I.5 in Euclid. This result has been called the pons asinorum (the bridge of asses) or the isosceles triangle theorem. Rival explanations for this name include the theory that it is because the diagram used by Euclid in his demonstration of the result resembles a bridge, or because this is the first difficult result in Euclid, and acts to separate those who can understand Euclid's geometry from those who cannot. A well-known fallacy is the false proof of the statement that all triangles are isosceles, first published by W. W. Rouse Ball in 1892, and later republished in Lewis Carroll's posthumous Lewis Carroll Picture Book. The fallacy is rooted in Euclid's lack of recognition of the concept of betweenness and the resulting ambiguity of inside versus outside of figures.
Mathematics
Two-dimensional space
null
315598
https://en.wikipedia.org/wiki/Caravel
Caravel
The caravel (Portuguese: , ) is a small sailing ship that may be rigged with just lateen sails, or with a combination of lateen and square sails. It was known for its agility and speed and its capacity for sailing windward (beating). Caravels were used by the Portuguese and Spanish for the voyages of exploration during the 15th and 16th centuries, in the Age of Discovery. The caravel is a poorly understood type of vessel. Though there are now some archaeologically investigated wrecks that are most likely caravels, information on this type is limited. We have a better understanding of the ships of the Greeks and Romans of classical antiquity than we do of the caravel. History The long development of the caravel was probably influenced by various Mediterranean tending or coastal craft. Among these influences might have been the boats known as , that were introduced to the Islamic controlled parts of Iberia Al-Andalus from the Maghreb. The earliest caravels appeared in the thirteenth century along the coasts of Galicia and Portugal as single-masted fishing vessels. They were small, lightly built vessels of up to 20 tons at most, carrying, in one example, a crew of five men. Evidence suggests that these were . They carried a single-masted, triangular lateen sail rig. By the fourteenth century, their size had increased and their use had spread; for instance, there is mention, in 1307, of larger caravels of up to 30 tons in Biscay. Caravels were a common type of vessel in the coastal waters of the Iberian Peninsula in the fifteenth century. The caravel was the preferred vessel of Portuguese explorers like Diogo Cão, Bartolomeu Dias, Gaspar, and Miguel Corte-Real, and was also used by Spanish expeditions like those of Christopher Columbus. They were agile and easier to navigate than the barca and barinel, with a tonnage of 50 to 160 tons and 1 to 3 masts. Being smaller and having a shallow keel, the caravel was suited for sailing shallow coastal waters and up rivers. With the Mediterranean-type lateen sails attached it was highly maneuverable in shallow waters, while with the square Atlantic-type sails attached it was very fast when crossing the open sea. Its economy, speed, and agility made it esteemed as the best sailing vessel of its time. Its main drawback was its limited capacity for cargo and crew but this did not hinder its success. The exploration done with caravels made the spice trade of the Portuguese and the Spanish possible. However, for the trade itself, the caravel was soon replaced by the larger carrack (nau), which could carry larger, more profitable cargoes. The caravel was one of the pinnacle ships in Iberian ship development from 1400 to 1600. Etymology The English name caravel derives from the Portuguese , which in turn may derive from the or the perhaps indicating some continuity of its carvel build through the ages. Design The earliest caravels in the thirteenth century were small and are believed to have been un-decked, carrying one mast with lateen sails, while later types were larger and had two or three masts and decks. Caravels such as the caravela tilhlda of the 15th century had an average length of between , an average capacity of 50 to 60 tons, a high length-to-beam ratio of around 3.5 to 1, and narrow ellipsoidal frame (unlike the circular frame of the nau), making them very fast and maneuverable but with a limited cargo capacity. 
It was in such ships that Christopher Columbus set out on his expedition in 1492: while the Santa María was a small carrack of about 150 tons and served as the flagship, the Pinta and the Niña were caravels of around 15–20 m with a beam of 6 m and a displacement of around 60–75 tons. The Niña was re-rigged by Columbus with square rig to give better performance on the Atlantic crossing, most of which was made with favourable following winds, for which the lateen rig was less suitable. Square-rigged caravel Towards the end of the 15th century, the Portuguese developed a larger version of the caravel, bearing a forecastle and sterncastle – though not as high as those of a carrack, which would have made it unweatherly – but most distinguishable for its square-rigged foremast, and three other masts bearing lateen rig. In this form it was referred to in Portuguese as a "round caravel" () as in Iberian tradition, a bulging square sail is said to be round. It was employed in coast-guard fleets near the Strait of Gibraltar and as an armed escort for merchant ships between Portugal and Brazil and in the Cape Route. Some consider this a forerunner of the fighting galleon and it remained in use until the 17th century.
Technology
Naval transport
null
315729
https://en.wikipedia.org/wiki/Cockatiel
Cockatiel
The cockatiel (; Nymphicus hollandicus), also known as the weero/weiro or quarrion, is a medium-sized parrot that is a member of its own branch of the cockatoo family endemic to Australia. They are prized as exotic household pets and companion parrots throughout the world and are relatively easy to breed compared to other parrots. As a caged bird, cockatiels are second in popularity only to the budgerigar. The cockatiel is the only member of the genus Nymphicus. It was previously unclear whether the cockatiel is a crested parakeet or small cockatoo; however, more recent molecular studies have assigned it to its own subfamily, Nymphicinae. It is, therefore, now classified as the smallest subfamily of the Cacatuidae (cockatoo family). Cockatiels are native to Australia, favouring the Australian wetlands, scrublands, and bushlands. There are many different mutations of this bird. Taxonomy and etymology Originally described by Scottish writer and naturalist Robert Kerr in 1793 as Psittacus hollandicus, the cockatiel (or cockateel) was moved to its own genus, Nymphicus, by Wagler in 1832. Its genus name reflects the experience of one of the earliest groups of Europeans to see the birds in their native habitat; the travellers thought the birds were so beautiful that they named them after mythical nymphs. The specific name hollandicus refers to New Holland, a historical name for Australia. Its biological relationships were for a long time uncertain; it is now placed in a monotypic subfamily Nymphicinae, but was sometimes in the past classified among the Platycercinae, the broad-tailed parrots. This issue was settled with molecular studies. A 1984 study of protein allozymes signalled its closer relationship to cockatoos than to other parrots, and mitochondrial 12S rRNA sequence data places it among the Calyptorhynchinae (dark cockatoos) subfamily. The unique, parakeet (meaning long-tailed parrot) morphological feature is a consequence of the decrease in size and accompanying change of ecological niche. Sequence analysis of intron 7 of the nuclear β-fibrinogen gene, on the other hand, indicates that it may yet be distinct enough as to warrant recognition of the Nymphicinae rather than inclusion of the genus in the Calyptorhynchinae. The cockatiel is now biologically classified as a genuine member of Cacatuidae on account of sharing all of the cockatoo family's biological features, namely, the erectile crest, a gallbladder, powder down, suppressed cloudy-layer (which precludes the display of blue and green structural colours), and facial feathers covering the sides of the beak, all of which are rarely found outside the family Cacatuidae. This biological relation to other cockatoos is further supported by the existence of at least one documented case of a successful hybrid between a cockatiel and a galah, another cockatoo species. Description Appearance The cockatiel's distinctive crest expresses the animal's emotional state. The crest is dramatically vertical when the cockatiel is startled or excited, gently oblique in its neutral or relaxed state, and flattened close to the head when the animal is angry or defensive. The crest is also held flat but protrudes outward in the back when the cockatiel is trying to appear alluring or flirtatious. When the cockatiel is tired, the crest is seen positioned halfway upwards, with the tip of the crest usually curling upward. In contrast to most cockatoos, the cockatiel has long tail feathers roughly making up half of its total length. 
At , the cockatiel is the smallest of the cockatoos, which are generally larger at between . The "normal grey" or "wild-type" cockatiel's plumage is primarily grey with prominent white flashes on the outer edges of each wing. The face of the male is yellow or white, while the face of the female is primarily grey or light grey, and both sexes feature a round orange area on both ears, often referred to as "cheddar cheeks". This orange colouration is generally vibrant in adult males, and often quite muted in females. Visual sexing is often possible with this variant of the bird. Sexual dimorphism Most wild cockatiel chicks and juveniles look female, and are virtually indistinguishable from the time of hatching until their first moulting. They display horizontal yellow stripes or bars on the ventral surface of their tail feathers, yellow spots on the ventral surface of the primary flight feathers of their wings, a grey coloured crest and face, and a dull orange patch on each of their cheeks. However some modern-day mutations are sex linked and the male and female chicks are easily distinguishable as soon as their feathers come in. Adult cockatiels with common coloring (grey body with yellow head) are sexually dimorphic, though to a lesser degree than many other avian species. This is only evident after the first moulting, typically occurring about six to nine months after hatching: the male loses the white or yellow barring and spots on the underside of his tail feathers and wings. The grey feathers on his cheeks and crest are replaced by bright yellow feathers, while the orange cheek patch becomes brighter and more distinct. The face and crest of the female will typically remain mostly grey with a yellowish tint, and a less vibrant orange cheek patch. Additionally, the female commonly retains the horizontal barring on the underside of her tail feathers. The colour in cockatiels is derived from two pigments: melanin (which provides the grey colour in the feathers, eyes, beak, and feet), and psittacofulvins (which provide the yellow colour on the face and tail and the orange colour of the cheek patch). The grey colour of the melanin overrides the yellow and orange of the psittacofulvins when both are present. The melanin content decreases in the face of the males as they mature, allowing the yellow and orange psittacofulvins to be more visible, while an increase in melanin content in the tail causes the disappearance of the horizontal yellow tail bars. In addition to these visible characteristics, the vocalisation of adult males is typically louder and more complex than that of females. But like most things this is not a hard and fast rule. Colour mutations Worldwide there are currently 22 cockatiel colour mutations established in aviculture, of which eight are exclusive to Australia. Mutations in captivity have emerged in various colours, some quite different from those observed in nature. Wild cockatiels are grey with visible differences between males and females. Male grey cockatiels typically have yellow heads while the female has a grey head. Juveniles tend to look like females with pinker beaks. The pied mutation first appeared in California in 1949. This mutation is a blotch of colour on an otherwise solid-coloured bird. For example, this may appear as a grey blotch on a yellow cockatiel. Lutino colouration was first seen in 1958. These birds lack the grey of their wild counterparts and are white to soft yellow. 
This is a popular colour; due to inbreeding, these cockatiels often have a small bald patch behind their crests. The cinnamon mutation, first seen in the 1950s, is very similar in appearance to the grey; however, these birds have a warmer, browner colouring. Pearling was first seen in 1967. This is seen as a feather of one colour with a different coloured edge, such as grey feathers with yellow tips. This distinctive pattern is on a bird's wings or back. The albino colour mutation is a lack of pigment. These birds are white with red eyes. Fallow cockatiels first appeared sometime in the 1970s. This mutation shows as a bird with cinnamon colouring with yellow sections. Other mutations include emerald/olive, dominant and recessive silver, and mutations exclusive to Australia: Australian fallow, faded (west coast silver), dilute/pastel silver (east coast silver), silver spangle (edged dilute), platinum, suffused (Australian olive), and pewter. Other mutations, such as face altering mutations, include whiteface, , dominant yellow cheek, sex-linked yellow cheek, gold cheek, cream face, and the Australian yellow cheek. Cockatiel colour mutations can become even more complex as one bird can have multiple colour mutations. For example, a yellow lutino cockatiel may have pearling – white spots on its back and wings. This is a double mutation. An example of a quadruple mutation would be a cinnamon cockatiel with yellowface colouring, pearling, and pied markings. Breeding and life span Breeding is triggered by seasonal rainfall. Cockatiels nest in tree hollows near a source of fresh water, often choosing eucalyptus/gum trees. The hen lays 4–7 eggs, one every other day, which she incubates for 17–23 days. The chicks fledge after 5 weeks. Cockatiels are the only cockatoo species which may reproduce by the end of their first year. The cockatiel's average life span is 12 to 15 years, though in captivity and under appropriate living conditions, a cockatiel could be expected to live from 16 to 25 years. The oldest living and confirmed specimen of cockatiel was reportedly 36 years old. Distribution and habitat Cockatiels are native to Australia, where they are found largely in arid or semi-arid country but always close to water. Largely nomadic, the species will move to where food and water are available. They are typically seen in pairs or small flocks. Sometimes, hundreds will flock around a single body of water. Wild cockatiels typically eat seeds, particularly Acacia, wheat, sunflower and Sorghum. To many farmers' dismay, they often eat cultivated crops. Cockatiels may be observed in and around western New South Wales and Queensland, Alice Springs, The Kimberley region and the northwestern corner of Western Australia. They are absent from the most fertile southwest and southeast corners of the country, the deepest Western Australian deserts, and Cape York Peninsula. Speech and vocalization Cockatiels can be very vocal and learn many spoken words and phrases by mimicking. Usually, males are faster to learn speech, mimicking or singing; their calls are also more varied. Cockatiels can also be taught to sing specific melodies, to the extent that some cockatiels have been demonstrated to synchronise their melodies with the songs of humans. Without being taught how, both male and female cockatiels repeat household sounds, including alarm clocks, phones, tunes, and the calls of other birds outdoors.
Biology and health sciences
Psittaciformes
Animals
315794
https://en.wikipedia.org/wiki/SD%20card
SD card
Secure Digital, officially abbreviated as SD, is a proprietary, non-volatile, flash memory card format the SD Association (SDA) developed for use in portable devices. Because of their small physical dimensions, SD cards became widely used in many consumer electronic devices, such as digital cameras, camcorders, video game consoles, mobile phones, action cameras such as the GoPro Hero series, and camera drones. The standard was introduced in August 1999 by SanDisk, Panasonic (Matsushita) and Toshiba as an improvement on MultiMediaCards (MMCs). SDs have become an industry standard. The three companies formed SD-3C, LLC, a company that licenses and enforces intellectual property (IP) rights associated with SD memory cards and SD host-and-ancillary products. In January 2000, the companies formed the SD Association (SDA), a non-profit organization to create and promote SD card standards. , the SDA has approximately 1,000 member companies. It uses several SD-3C-owned trademarked logos to enforce compliance with its specifications and denote compatibility. History 1999–2005: Creation and introduction of smaller formats In 1999, SanDisk, Panasonic (Matsushita) and Toshiba agreed to develop and market the Secure Digital (SD) memory card. The card was derived from the MultiMediaCard (MMC) and provided digital rights management (DRM) based on the Secure Digital Music Initiative (SDMI) standard and a high memory density ("data/bits per physical space"), i.e. a large quantity of data could be stored in a small physical space. SD was designed to compete with the Memory Stick, a flash storage format with DRM Sony had released the year before. Toshiba hoped the SD card's DRM would encourage music suppliers concerned about piracy to use SD cards. The trademarked SD logo was originally developed for the Super Density Disc, which was the unsuccessful Toshiba entry in the DVD format war. For this reason, the letter "D" is styled to resemble an optical disc. At the 2000 Consumer Electronics Show (CES), the three companies announced the creation of the SD Association (SDA) to promote SD cards. The SD Association, which was headquartered in San Ramon, California, United States, then had 30 member companies and product manufacturers that made interoperable memory cards and devices. Early samples of the SD card became available in the first quarter of 2000, and production quantities of 32 and 64 megabyte (MB) cards became available three months later. The first 64 MB cards were offered for sale for US$200. SD was envisioned as a single memory card format for several kinds of electronic devices, that could also function as an expansion slot for adding new capabilities for a device. The first 256 MB and 512 MB SD cards were announced in 2001. miniSD At March 2003 CeBIT, SanDisk Corporation introduced, announced and demonstrated the miniSD form factor. The SDA adopted the miniSD card in 2003 as a small-form-factor extension to the SD card standard. While the new cards were designed for mobile phones, they were usually packaged with a miniSD adapter that provided compatibility with a standard SD memory card slot. microSD MicroSD form-factor memory cards were introduced in 2004 by SanDisk at CeBIT and originally called T-Flash, and later TransFlash, commonly abbreviated to "TF". T-Flash was renamed microSD in 2005 when it was adopted by the SDA. TransFlash and microSD cards are functionally identical, allowing either to operate in devices made for the other. 
A passive adapter allows the use of microSD and TransFlash cards in SD card slots. 2006–2008: SDHC and SDIO In September 2006, SanDisk announced the 4 GB miniSDHC. As with SD and SDHC, the miniSDHC card has the same form factor as the older miniSD card, but the HC card requires HC support built into the host device. Devices that support miniSDHC work with miniSD and miniSDHC, but devices without specific support for miniSDHC work only with the older miniSD card. Since 2008, miniSD cards are no longer produced, due to market domination of the even smaller microSD cards. 2009–2019: SDXC The storage density of memory cards increased significantly throughout the 2010s, allowing the earliest devices to support the SDXC standard, such as the Samsung Galaxy S III and Samsung Galaxy Note II mobile phones, to expand their available storage to several hundred gigabytes. In January 2009, the SDA announced the SDXC family, which supports cards up to 2 TB and speeds up to 300 MB/s. SDXC cards are formatted with the exFAT file system by default. SDXC was announced at the Consumer Electronics Show (CES) 2009 (January 7–10). At the same show, SanDisk and Sony also announced a comparable Memory Stick XC variant with the same 2 TB maximum as SDXC, and Panasonic announced plans to produce 64 GB SDXC cards. On March 6, Pretec introduced the first SDXC card, a 32 GB card with a read/write speed of 400 Mbit/s. But only early in 2010 did compatible host devices come onto the market, including Sony's Handycam HDR-CX55V camcorder, Canon's EOS 550D (also known as Rebel T2i) Digital SLR camera, a USB card reader from Panasonic, and an integrated SDXC card reader from JMicron. The earliest laptops to integrate SDXC card readers relied on a USB 2.0 bus, which does not have the bandwidth to support SDXC at full speed. In early 2010, commercial SDXC cards appeared from Toshiba (64 GB), Panasonic (64 GB and 48 GB), and SanDisk (64 GB). In early 2011, Centon Electronics, Inc. (64 GB and 128 GB) and Lexar (128 GB) began shipping SDXC cards rated at Speed Class 10. Pretec offered cards from 8 GB to 128 GB rated at Speed Class 16. In September 2011, SanDisk released a 64 GB microSDXC card. Kingmax released a comparable product in 2011. In April 2012, Panasonic introduced the MicroP2 card format for professional video applications. The cards are essentially full-size SDHC or SDXC UHS-II cards, rated at UHS Speed Class U1. An adapter allows MicroP2 cards to work in current P2 card equipment. Panasonic MicroP2 cards shipped in March 2013 and were the first UHS-II compliant products on the market; the initial offering included a 32 GB SDHC card and a 64 GB SDXC card. Later that year, Lexar released the first 256 GB SDXC card, based on 20 nm NAND flash technology. In February 2014, SanDisk introduced the first 128 GB microSDXC card, which was followed by a 200 GB microSDXC card in March 2015. September 2014 saw SanDisk announce the first 512 GB SDXC card. Samsung announced the world's first EVO Plus 256 GB microSDXC card in May 2016, and in September 2016 Western Digital (SanDisk) announced that a prototype of the first 1 TB SDXC card would be demonstrated at Photokina. In August 2017, SanDisk launched a 400 GB microSDXC card. In January 2018, Integral Memory unveiled its 512 GB microSDXC card. In May 2018, PNY launched a 512 GB microSDXC card. In June 2018 Kingston announced its Canvas series of microSD cards, which were capable of capacities up to 512 GB, in three variations: Select, Go! and React.
In February 2019, Micron and SanDisk unveiled their microSDXC cards of 1 TB capacity.
2019–present: SDUC
The Secure Digital Ultra Capacity (SDUC) format supports cards up to 128 TB and offers speeds up to 985 MB/s. In April 2024, Western Digital (SanDisk) revealed the world's first 4 TB SD card at NAB 2024, which will make use of the SDUC format. It is set to be released in 2025.
Capacity
Secure Digital includes five card families available in three form factors. The five families are the original standard capacity (SDSC), high capacity (SDHC), extended capacity (SDXC), ultra capacity (SDUC) and SDIO, which combines input/output functions with data storage.
SD (SDSC)
The second-generation Secure Digital (SDSC or Secure Digital Standard Capacity) card was developed to improve on the MultiMediaCard (MMC) standard, which continued to evolve, but in a different direction. Secure Digital changed the MMC design in several ways:
Asymmetrical shape of the sides of the SD card prevents inserting it upside down (whereas an MMC goes in most of the way but makes no contact if inverted).
Most standard size SD cards are 2.1 mm thick, with microSD versions being 1.0 mm thick, compared to 1.4 mm for MMCs. The SD specification defines a card called Thin SD with a thickness of 1.4 mm, but they occur only rarely, as the SDA went on to define even smaller form factors.
The card's electrical contacts are recessed beneath the surface of the card, protecting them from contact with a user's fingers.
The SD specification envisioned capacities and transfer rates exceeding those of MMC, and both of these functionalities have grown over time.
While MMC uses a single pin for data transfers, the SD card added a four-wire bus mode for higher data rates.
The SD card added Content Protection for Recordable Media (CPRM) security circuitry for digital rights management (DRM) content-protection.
Addition of a write-protect notch
Full-size SD cards do not fit into the slimmer MMC slots, and other issues also affect the ability to use one format in a host device designed for the other.
SDHC
The Secure Digital High Capacity (SDHC) format, announced in January 2006 and defined in version 2.0 of the SD specification, supports cards with capacities up to 32 GB. The SDHC trademark is licensed to ensure compatibility. SDHC cards are physically and electrically identical to standard-capacity SD cards (SDSC). The major compatibility issues between SDHC and SDSC cards are the redefinition of the Card-Specific Data (CSD) register in version 2.0 (see below), and the fact that SDHC cards are shipped preformatted with the FAT32 file system. Version 2.0 also introduces a high-speed bus mode for both SDSC and SDHC cards, which doubles the original Standard Speed clock to produce 25 MB/s. SDHC host devices are required to accept older SD cards. However, older host devices do not recognize SDHC or SDXC memory cards, although some devices can do so through a firmware upgrade. Older Windows operating systems released before Windows 7 require patches or service packs to support access to SDHC cards.
SDXC
The Secure Digital eXtended Capacity (SDXC) format, announced in January 2009 and defined in version 3.01 of the SD specification, supports cards up to 2 TB, compared to a limit of 32 GB for SDHC cards in the SD 2.0 specification. SDXC adopts Microsoft's exFAT file system as a mandatory feature.
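As a quick illustration of the capacity boundaries quoted in this section, the sketch below maps a card's capacity to the family able to address it. This is Python written for this article, not part of any SD tooling; the 32 GB, 2 TB and 128 TB limits come from the figures above, while the 2 GB SDSC cutoff is an assumption rather than something stated here.

GB = 10**9
TB = 10**12

# (family, upper capacity limit in bytes); the SDSC limit is assumed.
FAMILY_LIMITS = [
    ("SDSC", 2 * GB),
    ("SDHC", 32 * GB),    # SD 2.0
    ("SDXC", 2 * TB),     # SD 3.01
    ("SDUC", 128 * TB),   # SD 7.0
]

def sd_family(capacity_bytes: int) -> str:
    """Return the smallest SD family whose limit covers the given capacity."""
    for name, limit in FAMILY_LIMITS:
        if capacity_bytes <= limit:
            return name
    raise ValueError("capacity exceeds the 128 TB SDUC limit")

for size in (512 * 10**6, 16 * GB, 400 * GB, 4 * TB):
    print(f"{size / GB:g} GB -> {sd_family(size)}")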
Version 3.01 also introduced the Ultra High Speed (UHS) bus for both SDHC and SDXC cards, with interface speeds from 50 MB/s to 104 MB/s for the four-bit UHS-I bus. (This figure has since been exceeded: SanDisk's once-proprietary UHS-I extension reaches 170 MB/s read, and it is no longer exclusive, as Lexar's 1066x series runs at 160 MB/s read and 120 MB/s write over UHS-I, and Kingston's Canvas Go! Plus also reaches 170 MB/s.) Version 4.0, introduced in June 2011, allows speeds of 156 MB/s to 312 MB/s over the four-lane (two differential lanes) UHS-II bus, which requires an additional row of physical pins. Version 5.0 was announced in February 2016 at CP+ 2016, and added "Video Speed Class" ratings for UHS cards to handle higher resolution video formats like 8K. The highest new rating defines a minimum write speed of 90 MB/s. SDXC cards are required to be formatted using exFAT, but many operating systems will also support other file systems. Windows Vista (SP1) and later and OS X (10.6.5 and later) have native support for exFAT. (Windows XP and Server 2003 can support exFAT via an optional update from Microsoft.) For a long time, most BSD and Linux distributions did not have exFAT support for legal reasons; Microsoft later open-sourced the specification and allowed the inclusion of an exFAT driver, which was added in Linux kernel 5.4. Users of older kernels or BSD can manually install third-party implementations of exFAT (as a FUSE module) in order to be able to mount exFAT-formatted volumes. However, SDXC cards can be reformatted to use any file system (such as ext4, UFS, VFAT or NTFS), alleviating the restrictions associated with exFAT availability. The SD Association provides a formatting utility for Windows and Mac OS X that checks and formats SD, SDHC, SDXC and SDUC cards. Except for the change of file system, SDXC cards are mostly backward compatible with SDHC readers, and many SDHC host devices can use SDXC cards if they are first reformatted to the FAT32 file system.
SDUC
The Secure Digital Ultra Capacity (SDUC) format, described in the SD 7.0 specification and announced in June 2018, supports cards up to 128 TB, regardless of form factor (micro or full size) or interface type (UHS-I, UHS-II, UHS-III or SD Express).
Speed
SD card speed is customarily rated by its sequential read or write speed. The sequential performance aspect is the most relevant for storing and retrieving large files (relative to block sizes internal to the flash memory), such as images and multimedia. Small data (such as file names, sizes and timestamps) falls under the much lower speed limit of random access, which can be the limiting factor in some use cases. With early SD cards, a few card manufacturers specified the speed as a "times" ("×") rating, which compared the average speed of reading data to that of the original CD-ROM drive. This was superseded by the Speed Class Rating, which guarantees a minimum rate at which data can be written to the card. The newer families of SD card improve card speed by increasing the bus rate (the frequency of the clock signal that strobes information into and out of the card). Whatever the bus rate, the card can signal to the host that it is "busy" until a read or a write operation is complete. Compliance with a higher speed rating is a guarantee that the card limits its use of the "busy" indication.
Bus
Default Speed
Default Speed SD cards read and write at up to 12.5 MB/s.
High Speed
High-Speed mode (25 MB/s) was introduced in version 1.10 of the specification to support digital cameras.
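The bus figures just quoted, and the UHS figures in the next section, all follow from the same arithmetic: clock frequency times bus width, divided by eight, and doubled for double data rate transfers. A minimal sketch of that calculation (the function is illustrative, not an SD API):

def bus_throughput_mb_s(clock_mhz: float, bus_bits: int = 4, ddr: bool = False) -> float:
    # MB/s = MHz * bits per clock / 8 bits per byte, doubled for DDR.
    return clock_mhz * bus_bits / 8 * (2 if ddr else 1)

print(bus_throughput_mb_s(25))            # Default Speed: 12.5 MB/s
print(bus_throughput_mb_s(50))            # High Speed: 25 MB/s
print(bus_throughput_mb_s(100))           # UHS-I SDR50: 50 MB/s
print(bus_throughput_mb_s(208))           # UHS-I SDR104: 104 MB/s
print(bus_throughput_mb_s(50, ddr=True))  # UHS-I DDR50: 50 MB/s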
UHS (Ultra High Speed) The Ultra High Speed (UHS) bus is available on some SDHC and SDXC cards. Cards that comply with UHS show Roman numerals 'I', 'II' or 'III' next to the SD card logo, and report this capability to the host device. Use of UHS-I requires that the host device command the card to drop from 3.3-volt to 1.8-volt operation over the I/O interface pins and select the four-bit transfer mode, while UHS-II requires 0.4-volt operation. The higher speed rates of UHS-II and III are achieved by using two-lane 0.4 V low-voltage differential signaling (LVDS) on a second row of pins. Each lane is capable of transferring up to 156 MB/s. In full-duplex mode, one lane is used for Transmit while the other is used for Receive. In half-duplex mode both lanes are used for the same direction of data transfer allowing a double data rate at the same clock speed. In addition to enabling higher data rates, the UHS-II interface allows for lower interface power consumption, lower I/O voltage and lower electromagnetic interference (EMI). The following ultra-high speeds are specified: UHS-I Specified in SD version 3.01. Supports a clock frequency of 100 MHz (a quadrupling of the original "Default Speed"), which in four-bit transfer mode could transfer 50 MB/s (SDR50). UHS-I cards declared as UHS104 (SDR104) also support a clock frequency of 208 MHz, which could transfer 104 MB/s. Double data rate operation at 50 MHz (DDR50) is also specified in Version 3.01, and is mandatory for microSDHC and microSDXC cards labeled as UHS-I. In this mode, four bits are transferred when the clock signal rises and another four bits when it falls, transferring an entire byte on each full clock cycle, hence a 50 MB/s operation could be transferred using a 50 MHz clock. There is a proprietary UHS-I extension, called DDR200, originally created by SanDisk that increases transfer speed further to 170 MB/s. Unlike UHS-II, it does not use additional pins. It achieves this by using the 208 MHz frequency of the standard SDR104 mode, but using DDR transfers. This extension has since then been used by Lexar for their 1066x series (160 MB/s), Kingston Canvas Go Plus (170 MB/s) and the MyMemory PRO SD card (180 MB/s). UHS-II Specified in version 4.0, further raises the data transfer rate to a theoretical maximum of 156 MB/s (full-duplex) or 312 MB/s (half-duplex) using an additional row of pins for LVDS signalling (a total of 17 pins for full-size and 16 pins for micro-size cards). While first implementations in compact system cameras were seen three years after specification (2014), it took many more years until UHS-II was implemented on a regular basis. At the beginning of 2025, 100 DSLR and mirrorless cameras support UHS-II. UHS-III Version 6.0, released in February 2017, added two new data rates to the standard. FD312 provides 312 MB/s while FD624 doubles that. Both are full-duplex. The physical interface and pin-layout are the same as with UHS-II, retaining backward compatibility. SD Express The SD Express bus was released in June 2018 with SD specification 7.0. It uses a single PCIe lane to provide full-duplex 985 MB/s transfer speed. Supporting cards must also implement the NVM Express storage access protocol. The Express bus can be implemented by SDHC, SDXC and SDUC cards. For legacy application use, SD Express cards must also support High-Speed bus and UHS-I bus. The Express bus re-uses the pin layout of UHS-II cards and reserves the space for additional two pins that may be introduced in the future. 
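Because each newer bus retains backward compatibility with the older ones, host and card effectively settle on the fastest mode both sides support. The following sketch models that negotiation using the nominal per-bus maxima quoted above; the mode names and function are illustrative and do not reflect the specification's own wording.

# Nominal maximum transfer rates (MB/s) for the bus generations described above.
BUS_SPEEDS = {
    "Default Speed": 12.5,
    "High Speed": 25,
    "UHS-I": 104,
    "UHS-II": 312,      # half-duplex figure
    "UHS-III": 624,
    "SD Express": 985,  # single PCIe lane
}

def negotiated_bus(host_modes: set, card_modes: set) -> str:
    """Pick the fastest bus mode supported by both host and card."""
    common = host_modes & card_modes
    if not common:
        raise ValueError("no common bus mode")
    return max(common, key=BUS_SPEEDS.__getitem__)

# Example: a UHS-II card in a UHS-I-only slot falls back to UHS-I.
host = {"Default Speed", "High Speed", "UHS-I"}
card = {"Default Speed", "High Speed", "UHS-I", "UHS-II"}
print(negotiated_bus(host, card))  # -> UHS-I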
Hosts that implement version 7.0 of the specification allow SD cards to perform direct memory access, which dramatically increases the attack surface of the host in the face of malicious SD cards. Version 8.0 was announced on 19 May 2020, with support for two PCIe lanes with an additional row of contacts and PCIe 4.0 transfer rates, for a maximum bandwidth of 3,938 MB/s. Version 9.0 was released in February 2022. Version 9.1 was announced in October 2023.
microSD Express
In February 2019, the SD Association announced microSD Express. The microSD Express cards offer PCI Express and NVMe interfaces, as the June 2018 SD Express release did, alongside the legacy microSD interface for continued backwards compatibility. The SDA also released visual marks to denote microSD Express memory cards to make matching the card and device easier for optimal device performance.
Class
The SD Association defines standard speed classes for SDHC/SDXC cards indicating minimum performance (minimum serial data writing speed). Both read and write speeds must exceed the specified value. The specification defines these classes in terms of performance curves that translate into minimum read-write performance levels on an empty card and suitability for different applications. The SD Association defines three types of Speed Class ratings: the original Speed Class, UHS Speed Class and Video Speed Class.
Speed Class
Speed Class ratings 2, 4 and 6 assert that the card supports the respective number of megabytes per second as a minimum sustained write speed for a card in a fragmented state. Class 10 asserts that the card supports 10 MB/s as a minimum non-fragmented sequential write speed and uses a High Speed bus mode. The host device can read a card's speed class and warn the user if the card reports a speed class that falls below an application's minimum need. By comparison, the older "×" rating measured maximum speed under ideal conditions, and was vague as to whether this was read speed or write speed. The graphical symbol for the speed class has a number encircled with 'C' (C2, C4, C6 and C10).
"×" rating
The "×" rating, which was used by some card manufacturers and made obsolete by speed classes, is a multiple of the standard CD-ROM drive speed of 150 KB/s (approximately 1.23 Mbit/s). Basic cards transfer data at up to six times (6×) the CD-ROM speed; that is, 900 KB/s or 7.37 Mbit/s. The 2.0 specification defines speeds up to 200×, but is not as specific as Speed Classes are on how to measure speed. Manufacturers may report best-case speeds and may report the card's fastest read speed, which is typically faster than the write speed. Some vendors, including Transcend and Kingston, report their cards' write speed. When a card lists both a speed class and an "×" rating, the latter may be assumed to refer to read speed only.
UHS Speed Class
UHS-I and UHS-II cards can use the UHS Speed Class rating with two possible grades: class 1 for minimum write performance of at least 10 MB/s ('U1' symbol featuring number 1 inside 'U') and class 3 for minimum write performance of 30 MB/s ('U3' symbol featuring 3 inside 'U'), targeted at recording 4K video. Before November 2013, the rating was branded UHS Speed Grade and contained grades 0 (no symbol) and 1 ('U1' symbol). Manufacturers can also display standard speed class symbols (C2, C4, C6 and C10) alongside, or in place of, the UHS speed class. UHS memory cards work best with UHS host devices.
The combination lets the user record HD resolution videos with tapeless camcorders while performing other functions. It is also suitable for real-time broadcasts and capturing large HD videos.
Video Speed Class
Video Speed Class defines a set of requirements for UHS cards to match modern MLC NAND flash memory and supports progressive 4K and 8K video with minimum sequential write speeds of 6–90 MB/s. The graphical symbols use a stylized 'V' followed by a number designating write speed (i.e. V6, V10, V30, V60 and V90).
SD Express Speed Class
Version 9.1 of the SD specification, introduced in October 2023, defines new SD Express speed classes. The graphical symbols use a stylized 'E' followed by a number designating the minimum read/write speed. The specified classes are E150, E300, E450 and E600.
Application Performance Class
Application Performance Class is a standard defined in SD specifications 5.1 and 6.0 that specifies not only sequential write speeds but also a minimum IOPS for reading and writing. Class A1 requires a minimum of 1,500 read and 500 write operations per second using 4 KB blocks, while class A2 requires 4,000 and 2,000 IOPS. A2 class cards require host driver support, as they use command queuing and write caching to achieve their higher speeds. Without such support, they are guaranteed to at least reach A1 speeds. Linux kernel 5.15 and later fully support A2.
Real-world performance
In applications that require sustained write throughput, such as video recording, the device might not perform satisfactorily if the SD card's class rating falls below a particular speed. For example, a high-definition camcorder may require a card of not less than Class 6, suffering dropouts or corrupted video if a slower card is used. Digital cameras with slow cards may take a noticeable time after taking a photograph before being ready for the next, while the camera writes the first picture. The speed class rating does not totally characterize card performance. Different cards of the same class may vary considerably while meeting class specifications. A card's speed depends on many factors, including:
The frequency of soft errors that the card's controller must re-try
Write amplification: The flash controller may need to overwrite more data than requested. This has to do with performing read-modify-write operations on write blocks, freeing up (the much larger) erase blocks, while moving data around to achieve wear leveling.
File fragmentation: where there is not sufficient space for a file to be recorded in a contiguous region, it is split into non-contiguous fragments. This does not cause rotational or head-movement delays as with electromechanical hard drives, but may decrease speed, for instance by requiring additional reads and computation to determine where on the card the file's next fragment is stored.
In addition, speed may vary markedly between writing a large amount of data to a single file (sequential access, as when a digital camera records large photographs or videos) and writing a large number of small files (a random-access use common in smartphones). A study in 2012 found that, in this random-access use, some Class 2 cards achieved a write speed of 1.38 MB/s, while all cards tested of Class 6 or greater (and some of lower Classes; lower Class does not necessarily mean better small-file performance), including those from major manufacturers, were over 100 times slower.
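To put the Application Performance Class figures above into throughput terms: at the 4 KB block size they are defined against, the mandated IOPS correspond to only a few MB/s of random throughput, which helps explain why the random-access measurements discussed here diverge so sharply from sequential ratings. A back-of-the-envelope calculation (treating a block as 4,096 bytes is our assumption):

BLOCK = 4096  # bytes per operation, assumed 4 KiB

APP_CLASSES = {
    "A1": {"read_iops": 1500, "write_iops": 500},
    "A2": {"read_iops": 4000, "write_iops": 2000},
}

for name, spec in APP_CLASSES.items():
    read_mb_s = spec["read_iops"] * BLOCK / 1e6
    write_mb_s = spec["write_iops"] * BLOCK / 1e6
    print(f"{name}: ~{read_mb_s:.1f} MB/s random read, ~{write_mb_s:.1f} MB/s random write")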
In 2014, a blogger measured a 300-fold performance difference on small writes; this time, the best card in this category was a class 4 card. Features Card security Commands to disable writes The host device can command the SD card to become read-only (to reject subsequent commands to write information to it). There are both reversible and irreversible host commands that achieve this. Write-protect notch Most full-size SD cards have a "mechanical write protect switch" allowing the user to advise the host computer that the user wants the device to be treated as read-only. This does not protect the data on the card if the host is compromised: "It is the responsibility of the host to protect the card. The position [i.e., setting] of the write protect switch is unknown to the internal circuitry of the card." Some host devices do not support write protection, which is an optional feature of the SD specification, and drivers and devices that do obey a read-only indication may give the user a way to override it. The switch is a sliding tab that covers a notch in the card. The miniSD and microSD formats do not directly support a write protection notch, but they can be inserted into full-size adapters which do. When looking at the SD card from the top, the right side (the side with the beveled corner) must be notched. On the left side, there may be a write-protection notch. If the notch is omitted, the card can be read and written. If the card is notched, it is read-only. If the card has a notch and a sliding tab which covers the notch, the user can slide the tab upward (toward the contacts) to declare the card read/write, or downward to declare it read-only. The diagram to the right shows an orange sliding write-protect tab in both the unlocked and locked positions. Cards sold with content that must not be altered are permanently marked read-only by having a notch and no sliding tab. Card password A host device can lock an SD card using a password of up to 16 bytes, typically supplied by the user. A locked card interacts normally with the host device except that it rejects commands to read and write data. A locked card can be unlocked only by providing the same password. The host device can, after supplying the old password, specify a new password or disable locking. Without the password (typically, in the case that the user forgets the password), the host device can command the card to erase all the data on the card for future re-use (except card data under DRM), but there is no way to gain access to the existing data. Windows Phone 7 devices use SD cards designed for access only by the phone manufacturer or mobile provider. An SD card inserted into the phone underneath the battery compartment becomes locked "to the phone with an automatically generated key" so that "the SD card cannot be read by another phone, device, or PC". Symbian devices, however, are some of the few that can perform the necessary low-level format operations on locked SD cards. It is therefore possible to use a device such as the Nokia N8 to reformat the card for subsequent use in other devices. smartSD cards A smartSD memory card is a microSD card with an internal "secure element" that allows the transfer of ISO 7816 Application Protocol Data Unit commands to, for example, JavaCard applets running on the internal secure element through the SD bus. 
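As a concrete illustration of the APDU exchange just described, the sketch below uses the pyscard library to send an ISO 7816 SELECT-by-AID command to whatever secure element a PC/SC-compatible reader exposes. It is a generic smart-card example rather than anything SD-bus-specific, and the application identifier shown is a placeholder.

# Hedged sketch: transmit an ISO 7816 SELECT APDU via a PC/SC reader (pyscard).
from smartcard.System import readers
from smartcard.util import toHexString

PLACEHOLDER_AID = [0xA0, 0x00, 0x00, 0x00, 0x01, 0x02, 0x03]  # not a real applet ID

# CLA=00, INS=A4 (SELECT), P1=04 (select by AID), P2=00, Lc, AID bytes
select_apdu = [0x00, 0xA4, 0x04, 0x00, len(PLACEHOLDER_AID)] + PLACEHOLDER_AID

available = readers()
if not available:
    raise SystemExit("no PC/SC reader found")

connection = available[0].createConnection()
connection.connect()
data, sw1, sw2 = connection.transmit(select_apdu)
print("response:", toHexString(data), "status:", f"{sw1:02X}{sw2:02X}")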
Some of the earliest versions of microSD memory cards with secure elements were developed in 2009 by DeviceFidelity, Inc., a pioneer in near-field communication (NFC) and mobile payments, with the introduction of In2Pay and CredenSE products, later commercialized and certified for mobile contactless transactions by Visa in 2010. DeviceFidelity also adapted the In2Pay microSD to work with the Apple iPhone using the iCaisse, and pioneered the first NFC transactions and mobile payments on an Apple device in 2010. Various implementations of smartSD cards have been made for payment applications and secure authentication. In 2012 Good Technology partnered with DeviceFidelity to use microSD cards with secure elements for mobile identity and access control. microSD cards with Secure Elements and NFC (near-field communication) support are used for mobile payments, and have been used in direct-to-consumer mobile wallets and mobile banking solutions, some of which were launched by major banks around the world, including Bank of America, US Bank and Wells Fargo, while others were part of innovative new direct-to-consumer neobank programs such as moneto, first launched in 2012. microSD cards with Secure Elements have also been used for secure voice encryption on mobile devices, which allows for one of the highest levels of security in person-to-person voice communications. Such solutions are heavily used in intelligence and security. In 2011, HID Global partnered with Arizona State University to launch campus access solutions for students using microSD with Secure Element and MiFare technology provided by DeviceFidelity, Inc. This was the first time regular mobile phones could be used to open doors without the need for electronic access keys.
Vendor enhancements
Vendors have sought to differentiate their products in the market through various vendor-specific features:
Integrated Wi-Fi – Several companies produce SD cards with built-in Wi-Fi transceivers supporting static security (WEP 40/104/128, WPA-PSK and WPA2-PSK). The card lets any digital camera with an SD slot transmit captured images over a wireless network, or store the images on the card's memory until it is in range of a wireless network. Examples include: Eye-Fi / SanDisk, Transcend Wi-Fi, Toshiba FlashAir, Trek Flucard, PQI Air Card and LZeal ez Share. Some models geotag their pictures.
Pre-loaded content – In 2006, SanDisk announced Gruvi, a microSD card with extra digital rights management features, which they intended as a medium for publishing content. SanDisk again announced pre-loaded cards in 2008, under the slotMusic name, this time not using any of the DRM capabilities of the SD card. In 2011, SanDisk offered various collections of 1000 songs on a single slotMusic card for about $40, now restricted to compatible devices and without the ability to copy the files.
Integrated USB connector – The SanDisk SD Plus product can be plugged directly into a USB port without needing a USB card reader. Other companies introduced comparable products, such as the Duo SD product of OCZ Technology and the 3 Way (microSDHC, SDHC and USB) product of A-DATA, which was available in 2008 only.
Different colors – SanDisk has used various colors of plastic or adhesive label, including a "gaming" line in translucent plastic colors that indicated the card's capacity. In 2006, Kingmax released the first 256 MB microSD to use color-coded cards, a practice that other brands (e.g., SanDisk, Kioxia) have since adopted and continue to this day.
Integrated display – In 2006, ADATA announced a Super Info SD card with a digital display that provided a two-character label and showed the amount of unused memory on the card. SDIO cards A SDIO (Secure Digital Input Output) card is an extension of the SD specification to cover I/O functions. SDIO cards are only fully functional in host devices designed to support their input-output functions (typically PDAs like the Palm Treo, but occasionally laptops or mobile phones). These devices can use the SD slot to support GPS receivers, modems, barcode readers, FM radio tuners, TV tuners, RFID readers, digital cameras and interfaces to Wi-Fi, Bluetooth, Ethernet and IrDA. Many other SDIO devices have been proposed, but it is now more common for I/O devices to connect using the USB interface. SDIO cards support most of the memory commands of SD cards. SDIO cards can be structured as eight logical cards, although currently, the typical way that an SDIO card uses this capability is to structure itself as one I/O card and one memory card. The SDIO and SD interfaces are mechanically and electrically identical. Host devices built for SDIO cards generally accept SD memory cards without I/O functions. However, the reverse is not true, because host devices need suitable drivers and applications to support the card's I/O functions. For example, an HP SDIO camera usually does not work with PDAs that do not list it as an accessory. Inserting an SDIO card into any SD slot causes no physical damage nor disruption to the host device, but users may be frustrated that the SDIO card does not function fully when inserted into a seemingly compatible slot. (USB and Bluetooth devices exhibit comparable compatibility issues, although to a lesser extent thanks to standardized USB device classes and Bluetooth profiles.) The SDIO family comprises Low-Speed and Full-Speed cards. Both types of SDIO cards support Serial Peripheral Interface (SPI) and one-bit SD bus types. Low-Speed SDIO cards are allowed to also support the four-bit SD bus; Full-Speed SDIO cards are required to support the four-bit SD bus. To use an SDIO card as a "combo card" (for both memory and I/O), the host device must first select four-bit SD bus operation. Two other unique features of Low-Speed SDIO are a maximum clock rate of 400 kHz for all communications, and the use of Pin 8 as "interrupt" to try to initiate dialogue with the host device. Compatibility Host devices that comply with newer versions of the specification provide backward compatibility and accept older SD cards. For example, SDXC host devices accept all previous families of SD memory cards, and SDHC host devices also accept standard SD cards. Older host devices generally do not support newer card formats, and even when they might support the bus interface used by the card, there are several factors that arise: A newer card may offer greater capacity than the host device can handle (over 4 GB for SDHC, over 32 GB for SDXC). A newer card may use a file system the host device cannot navigate (FAT32 for SDHC, exFAT for SDXC) Use of an SDIO card requires the host device be designed for the input/output functions the card provides. The hardware interface of the card was changed starting with the version 2.0 (new high-speed bus clocks, redefinition of storage capacity bits) and SDHC family (ultra-high speed (UHS) bus) UHS-II has physically more pins but is backwards compatible to UHS-I and non-UHS for both slot and card. 
Some vendors produced SDSC cards above 1 GB before the SDA had standardized a method of doing so.
Markets
Due to their compact size, Secure Digital cards are used in many consumer electronic devices, and have become a widespread means of storing several gigabytes of data in a small form factor. Devices in which the user may remove and replace cards often, such as digital cameras, camcorders and video game consoles, tend to use full-sized cards. Devices in which small size is paramount, such as mobile phones, action cameras such as the GoPro Hero series, and camera drones, tend to use microSD cards.
Mobile phones
The microSD card has helped propel the smartphone market by giving both manufacturers and consumers greater flexibility and freedom. While cloud storage depends on a stable internet connection and sufficiently generous data plans, memory cards in mobile devices provide location-independent and private storage expansion with much higher transfer rates and no network delay, enabling applications such as photography and video recording. While data stored internally on bricked devices is inaccessible, data stored on the memory card can be salvaged and accessed externally by the user as a mass storage device. A benefit over USB On-The-Go storage expansion is uncompromised ergonomics. The usage of a memory card also protects the mobile phone's non-replaceable internal storage from wear caused by heavy applications such as excessive camera usage and portable FTP server hosting over Wi-Fi Direct. Due to the technical development of memory cards, users of existing mobile devices are able to expand their storage further, and more cheaply, over time. Recent versions of major operating systems such as Windows Mobile and Android allow applications to run from microSD cards, creating possibilities for new usage models for SD cards in mobile computing markets, as well as freeing up internal storage space. SD cards are not the most economical solution in devices that need only a small amount of non-volatile memory, such as station presets in small radios. They may also not present the best choice for applications that require higher storage capacities or speeds as provided by other flash card standards such as CompactFlash. These limitations may be addressed by evolving memory technologies, such as the SD 7.0 specification, which allows storage capacities of up to 128 TB. Many personal computing devices of all types, including tablets and mobile phones, use SD cards, either through built-in slots or through an active electronic adapter. Adapters exist for the PC Card, ExpressCard, USB, FireWire and the parallel printer port. Active adapters also let SD cards be used in devices designed for other formats, such as CompactFlash. The FlashPath adapter lets SD cards be used in a floppy disk drive. Some devices such as the Samsung Galaxy Fit (2011) and Samsung Galaxy Note 8.0 (2013) have an SD card compartment located externally and accessible by hand, while it is located under the battery cover on other devices. More recent mobile phones use a pin-hole ejection system for the tray which houses both the memory card and SIM card.
Counterfeits
Commonly found on the market are mislabeled or counterfeit Secure Digital cards that report a fake capacity or run slower than labeled. Software tools exist to check and detect counterfeit products, and in some cases it is possible to repair these devices to remove the false capacity information and use their real storage limit.
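One way such checking tools can work is sketched below; the general approach, described in more detail in the following paragraph, is to fill the card's free space with pseudorandom files, read them back, and compare hashes. The mount point, file size and stopping condition are illustrative only, and a real tool would be more careful about error handling and card wear.

import hashlib, os

MOUNT = "/media/sdcard"   # hypothetical mount point of the card under test
CHUNK = 64 * 1024 * 1024  # 64 MiB per test file
MIB = 1024 * 1024

def write_and_hash(path: str) -> str:
    """Write CHUNK bytes of pseudorandom data and return their SHA-256."""
    h = hashlib.sha256()
    with open(path, "wb") as f:
        for _ in range(CHUNK // MIB):
            block = os.urandom(MIB)
            h.update(block)
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    return h.hexdigest()

def read_and_hash(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(MIB), b""):
            h.update(block)
    return h.hexdigest()

expected = {}
i = 0
try:
    while True:  # keep writing until the card reports it is full
        path = os.path.join(MOUNT, f"fill_{i:05d}.bin")
        expected[path] = write_and_hash(path)
        i += 1
except OSError:
    pass  # no space left (or a write error on a fake card)

bad = [p for p, digest in expected.items() if read_and_hash(p) != digest]
print(f"wrote {len(expected)} files; {len(bad)} failed verification")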
Detection of counterfeit cards usually involves copying files with random data to the SD card until the card's capacity is reached, and copying them back. The files that were copied back can be tested either by comparing checksums (e.g. MD5), or by trying to compress them. The latter approach leverages the fact that counterfeit cards let the user read back files, which then consist of easily compressible uniform data (for example, repeating 0xFFs).
Digital cameras
Secure Digital memory cards can be used in Sony XDCAM EX camcorders with an adapter.
Personal computers
Although many personal computers accommodate SD cards as an auxiliary storage device using a built-in slot, or can accommodate SD cards by means of a USB adapter, SD cards cannot be used as the primary hard disk through the onboard ATA controller, because none of the SD card variants support ATA signalling. Primary hard disk use requires a separate SD host controller or an SD-to-CompactFlash converter. However, on computers that support bootstrapping from a USB interface, an SD card in a USB adapter can be the boot disk, provided it contains an operating system that supports USB access once the bootstrap is complete. In laptop and tablet computers, memory cards in an integrated memory card reader offer an ergonomic benefit over USB flash drives, as the latter stick out of the device, and the user needs to be careful not to bump them while transporting the device, which could damage the USB port. Memory cards have a unified shape and do not reserve a USB port when inserted into a computer's dedicated card slot. Since late 2009, newer Apple computers with installed SD card readers have been able to boot in macOS from SD storage devices, when properly formatted to Mac OS Extended file format and the default partition table set to GUID Partition Table. SD cards are increasing in usage and popularity among owners of vintage computers like Atari 8-bit computers. For example, SIO2SD (SIO is an Atari port for connecting external devices) is widely used nowadays. A library of software for an 8-bit Atari can be included on one SD card of less than 4–8 GB (as of 2019).
Embedded systems
In 2008, the SDA specified Embedded SD, "leverag[ing] well-known SD standards" to enable non-removable SD-style devices on printed circuit boards. However, this standard was not adopted by the market, while the MMC standard became the de facto standard for embedded systems. SanDisk provides such embedded memory components under the iNAND brand. While some modern microcontrollers integrate SDIO hardware which uses the faster proprietary four-bit SD bus mode, almost all modern microcontrollers at least have SPI units that can interface to an SD card operating in the slower one-bit SPI bus mode. If not, SPI can also be emulated by bit banging (e.g. an SD card slot soldered to a Linksys WRT54G-TM router and wired to GPIO pins using DD-WRT's Linux kernel achieved only very low throughput).
Music distribution
Prerecorded microSDs have been used to commercialize music under the brands slotMusic and slotRadio by SanDisk and MQS by Astell & Kern.
Technical details
Physical size
The SD card specification defines three physical sizes. The SD and SDHC families are available in all three sizes, but the SDXC and SDUC families are not available in the mini size, and the SDIO family is not available in the micro size. Smaller cards are usable in larger slots through use of a passive adapter.
Standard
SD (SDSC), SDHC, SDXC, SDIO, SDUC; a Thin SD variant (as thin as MMC) is defined but rare.
MiniSD
miniSD, miniSDHC, miniSDIO
microSD
The micro form factor is the smallest SD card format: microSD, microSDHC, microSDXC, microSDUC.
Transfer modes
Cards may support various combinations of the following bus types and transfer modes. The SPI bus mode and one-bit SD bus mode are mandatory for all SD families, as explained in the next section. Once the host device and the SD card negotiate a bus interface mode, the usage of the numbered pins is the same for all card sizes.
SPI bus mode: Serial Peripheral Interface Bus is primarily used by embedded microcontrollers. This bus type supports only a 3.3-volt interface. This is the only bus type that does not require a host license.
One-bit SD bus mode: Separate command and data channels and a proprietary transfer format.
Four-bit SD bus mode: Uses extra pins plus some reassigned pins. This is the same protocol as the one-bit SD bus mode, but it uses one command and four data lines for faster data transfer. All SD cards support this mode. UHS-I and UHS-II require this bus type.
Two differential lines SD UHS-II mode: Uses two low-voltage differential signaling interfaces to transfer commands and data. UHS-II cards include this interface in addition to the SD bus modes.
The physical interface comprises 9 pins, except that the miniSD card adds two unconnected pins in the center and the microSD card omits one of the two VSS (Ground) pins.
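To make the SPI bus mode above concrete, here is a hedged sketch of the very first step of talking to a card over SPI from Linux using the spidev module: clock out dummy bytes, send CMD0 (GO_IDLE_STATE) and look for the R1 "idle" response (0x01). The bus and device numbers are assumptions about the wiring, and a real driver must also manage chip select during the dummy clocks and continue with CMD8/ACMD41 before the card is usable.

import spidev

spi = spidev.SpiDev()
spi.open(0, 0)              # assumed wiring: /dev/spidev0.0
spi.max_speed_hz = 400_000  # initialization is done at a low clock rate
spi.mode = 0

# The card needs at least 74 clock cycles before the first command.
spi.xfer2([0xFF] * 10)

# CMD0 (GO_IDLE_STATE): command 0x40 | 0, 32-bit argument 0, CRC 0x95.
spi.xfer2([0x40, 0x00, 0x00, 0x00, 0x00, 0x95])

# Poll for the R1 response; 0x01 means the card entered the SPI idle state.
r1 = 0xFF
for _ in range(8):
    r1 = spi.xfer2([0xFF])[0]
    if r1 != 0xFF:
        break
print("card answered CMD0" if r1 == 0x01 else f"unexpected response: {r1:#04x}")

spi.close()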
Technology transfer
Technology transfer (TT), also called transfer of technology (TOT), is the process of transferring (disseminating) technology from the person or organization that owns or holds it to another person or organization, in an attempt to transform inventions and scientific outcomes into new products and services that benefit society. Technology transfer is closely related to (and may arguably be considered a subset of) knowledge transfer. A comprehensive definition of technology transfer today includes the notion of collaborative process as it became clear that global challenges could be resolved only through the development of global solutions. Knowledge and technology transfer plays a crucial role in connecting innovation stakeholders and moving inventions from creators to public and private users. Intellectual property (IP) is an important instrument of technology transfer, as it establishes an environment conducive to sharing research results and technologies. Analysis in 2003 showed that the context, or environment, and motives of each organization involved will influence the method of technology transfer employed. The motives behind the technology transfer were not necessarily homogenous across organization levels, especially when commercial and government interests are combined. The protection of IP rights enables all parties, including universities and research institutions to ensure ownership of the scientific outcomes of their intellectual activity, and to control the use of IP in accordance with their mission and core values. IP protection gives academic institutions capacity to market their inventions, attract funding, seek industrial partners and assure dissemination of new technologies through means such as licensing or creation of start-ups for the benefit of society. In practice Technology transfers may occur between universities, businesses (of any size, ranging from small, medium, to large), governments, across geopolitical borders, both formally and informally, and both openly and secretly. Often it occurs by concerted effort to share skills, knowledge, technologies, manufacturing methods, samples, and facilities among the participants. While the Technology Transfer process involves many activities, which can be represented in many ways, in reality, technology transfer is a fluid and dynamic process that rarely follows a linear course. Typical steps include: Knowledge creation Disclosure Assessment and evaluation IP protection Fundraising and technology development Marketing Commercialization Product development, and Impact. Technology transfer aims to ensure that scientific and technological developments are accessible to a wider range of users who can then further develop and exploit the technology into new products, processes, applications, materials, or services. It is closely related to (and may arguably be considered a subset of) knowledge transfer. Horizontal transfer is the movement of technologies from one area to another. Transfer of technology is primarily horizontal. Vertical transfer occurs when technologies are moved from applied research centers to research and development departments. Spin-outs Spin-outs are used where the host organization does not have the necessary will, resources, or skills to develop new technology. Often these approaches are associated with raising of venture capital (VC) as a means of funding the development process, a practice common in the United States and the European Union. 
Research spin-off companies are a popular vehicle of commercialization in Canada, where the rate of licensing of Canadian university research remains far below that of the US. Local venture capital organizations such as the Mid-Atlantic Venture Association (MAVA) also sponsor conferences at which investors assess the potential for commercialization of technology. Technology brokers are people who discovered how to bridge the emergent worlds and apply scientific concepts or processes to new situations or circumstances. A related term, used almost synonymously, especially in Europe, is "technology valorisation". While conceptually the practice has been utilized for many years (in ancient times, Archimedes was notable for applying science to practical problems), the present-day volume of research, combined with high-profile failures at Xerox PARC and elsewhere, has led to a focus on the process itself. Whereas technology transfer can involve the dissemination of highly complex technology from capital-intensive origins to low-capital recipients (and can involve aspects of dependency and fragility of systems), it also can involve appropriate technology, not necessarily high-tech or expensive, that is better disseminated, yielding robustness and independence of systems.
Informal promotion
Technology transfer is also promoted through informal means, such as at conferences organized by various groups, including the Ewing Marion Kauffman Foundation and the Association of University Technology Managers (AUTM), and at "challenge" competitions by organizations such as the Center for Advancing Innovation in Maryland. AUTM represents over 3,100 technology transfer professionals, and more than 800 universities, research centers, hospitals, businesses and government organizations. The most frequently used informal means of technology transfer are education, studies, professional exchange of opinions, movement of people, seminars and workshops. There are numerous professional associations and TTO networks enhancing different forms of collaboration among technology managers in order to facilitate this "informal" transfer of best practices and experiences. In addition to AUTM, other regional and international associations include the Association of European Science and Technology Transfer Professionals (ASTP), the Alliance of Technology Transfer Professionals (ATTP), Licensing Executives Society (LES), Praxis Auril and others. There are also national technology transfer associations and networks, such as the National Association of Technology Transfer Offices in Mexico (Red OTT Mexico), the Brazilian Forum of Innovation and Technology Transfer Managers (FORTEC), the Alliance of TechTransfer Professionals of the Philippines (AToP), the South African Research and Innovation Management Association (SARIMA), and other associations. They promote cooperation in technology transfer and the exchange of best practices and experiences among professionals, as today international technology transfer is considered one of the most effective ways to bring people together to find solutions to global problems such as COVID-19, climate change or cyber-attacks.
IP policies
Universities and research institutions seeking to partner with industry or other organizations can adopt an institutional intellectual property policy for effective intellectual property management and technology transfer.
Such policies provide structure, predictability, and an environment in which commercialization partners (industrial sponsors, consultants, non-profit organizations, SMEs, governments) and research stakeholders (researchers, technicians, students, visiting researchers, etc.) can access and share knowledge, technology and IP. National IP strategies are measures taken by a government to realize its IP policy objectives.
Organizations
A research result may be of scientific and commercial interest, but patents are normally only issued for practical processes, and so someone—not necessarily the researchers—must come up with a specific practical process. Another consideration is commercial value; for example, while there are many ways to accomplish nuclear fusion, the ones of commercial value are those that generate more energy than they require to operate. The process to commercially exploit research varies widely. It can involve licensing agreements or setting up joint ventures and partnerships to share both the risks and rewards of bringing new technologies to market. Other corporate vehicles, e.g. spin-outs, are used where the host organization does not have the necessary will, resources, or skills to develop new technology. Often these approaches are associated with raising of venture capital (VC) as a means of funding the development process. Scholars Jeffrey Stoff and Alex Joske have argued that the Chinese Communist Party's united front "influence apparatus intersects with or directly supports its global technology transfer apparatus."
Technology transfer offices
Many universities, research institutions, and governmental organizations now have an Office of Technology Transfer (TTO, also known as "Tech Transfer" or "TechXfer") dedicated to identifying research that has potential commercial interest and strategies for how to exploit it. Technology Transfer Offices are usually created within a university in order to manage the IP assets of the university and the transfer of knowledge and technology to industry. Sometimes, their mandate includes any interaction or contractual relation with the private sector, or other responsibilities, depending on the mission of the institutions. Common names for such offices differ. Some examples include Technology Licensing Office (TLO), Technology Management Office, Research Contracts and IP Services Office, Technology Transfer Interface, Industry Liaisons Office, IP and Technology Management Office, and Nucleus of Technological Innovation. Technology transfer offices may work on behalf of research institutions, governments, and even large multinationals. Where start-ups and spin-outs are the clients, commercial fees are sometimes waived in lieu of an equity stake in the business. As a result of the potential complexity of the technology transfer process, technology transfer organizations are often multidisciplinary, including economists, engineers, lawyers, marketers and scientists. The dynamics of the technology transfer process have attracted attention in their own right, and there are several dedicated societies and journals.
Technology and Innovation Support Centers
Technology and Innovation Support Centers (TISCs) help innovators access patent information, scientific and technical literature and search tools and databases and make more effective use of these resources to promote innovation, technology transfer, commercialization and utilization of technologies. The WIPO TISCs program currently supports over 80 countries. WIPO supports its member states in establishing and developing TISCs in universities and other institutions in numerous countries around the world. Services offered by TISCs may include: access to online patent and non-patent (scientific and technical) resources and IP-related publications; assistance in searching and retrieving technology information; training in database search; on-demand searches (novelty, state-of-the-art and infringement); monitoring technology and competitors; basic information on industrial property laws, management and strategy, and technology commercialization and marketing.
Science technology parks
Science and technology parks (STP) are territories usually affiliated with a university or a research institution, which accommodate and foster the growth of companies based therein through technology transfer and open innovation.
Technology incubators
Technology business incubators (TBIs) are organizations that help startup companies and individual entrepreneurs develop their businesses by providing a range of services, including training, brokering and financing.
IP marketplaces
Intellectual property marketplaces are Internet-based platforms that allow innovators to connect with potential partners and/or clients. For example, the online platform WIPO GREEN enables collaboration in specific areas of knowledge transfer and facilitates matchmaking between technology providers and technology seekers.
Government and intellectual property support
There has been a marked increase in technology transfer intermediaries specialized in their field since 1980, stimulated in large part by the Bayh–Dole Act and equivalent legislation in other countries, which provided additional incentives for research exploitation. Due to the increasing focus on technology transfer, there are several forms of intermediary institutions at work in this sector, from TTOs to IP 'trolls' that act outside the Bayh–Dole Act provisions. Due to the risk of exploitation, intellectual property policy, training and systems support for technology transfer by governments, research institutes and universities have been provided by international and regionally focused organisations, such as the World Intellectual Property Organisation and the European Union.
Partnership intermediaries
The U.S. government's annual budget funds over $100 billion in research and development activity, which leads to a continuous pipeline of new inventions and technologies from within government laboratories. Through legislation including the Bayh–Dole Act, Congress encourages the private sector to use those technologies with commercial potential through technology transfer mechanisms such as Cooperative Research and Development Agreements, Patent License Agreements, Educational Partnership Agreements, and state/local government partnerships.
The term "partnership intermediary" means an agency of a state or local government—or a nonprofit entity owned, chartered, funded, or operated by or on behalf of a state or local government—that assists, counsels, advises, evaluates, or otherwise cooperates with small business firms; institutions of higher education defined in section 201(a) of the Higher Education Act of 1965 (20 USC § 1141 [a]); or educational institutions within the meaning of section 2194 of Title 10, United States Code, that need or can make demonstrably productive use of technology-related assistance from a federal laboratory, including state programs receiving funds under cooperative agreements entered into under section 5121 of the Omnibus Trade and Competitiveness Act of 1988 (15 USC § 2781). During COVID-19 pandemic Technology transfer had a direct impact on contributing to global public health issues, by enabling global access to COVID-19 vaccines. During 2021, vaccine developers concluded over 200 technology transfer agreements. One example was AstraZeneca concluding the licensing and technology transfer agreements on AstraZeneca with the Serum Institute of India and with Daiichi Sankyo of Japan to supply vaccines for COVID-19, which were developed in collaboration with the University of Oxford. In this process Intellectual Property was part of the solution and an important tool for facilitation of affordable global access to COVID 19 treatments – as it was the case in two licensing agreements between Medicines Patent Pool (MPP) and pharmaceutical companies Merck and Pfizer. Drawbacks Despite incentives to move research into production, the practical aspects are sometimes difficult to perform in practice. Using DoD technology readiness levels as a criterion (for example), research tends to focus on TRL (technology readiness level) 1–3 while readiness for production tends to focus on TRL 6–7 or higher. Bridging TRL-3 to TRL-6 has proven to be difficult in some organizations. Attempting to rush research (prototypes) into production (fully tested under diverse conditions, reliable, maintainable, etc.) tends to be more costly and time-consuming than expected. Power political and realpolitik incentives in technology transfer are cognized to be negative factors in destructive applications. Technology transfer to dictatorial regimes is thought to be disruptive for the scientific purposes.
Private transport
Private transport (as opposed to public transport) is the personal or individual use of transportation which is not available for use by the general public, where in theory the user can decide freely on the time and route of transit ('choice rider' vs. 'captive rider'), using vehicles such as: private car, company car, bicycle, dicycle, self-balancing scooter, motorcycle, scooter, aircraft, boat, snowmobile, carriage, horse, etc., or recreational equipment such as roller skates, inline skates, sailboat, sailplane, skateboard etc.
Definition
Private transport is in contrast to public transport, and commercial non-public transport. While private transportation may be used alongside nearly all modes of public transportation, private railroad cars are rare (e.g. royal train), although heritage railways are not. Unlike many forms of public transportation, which may be government subsidized or operated by privately owned commercial organizations for mass or general public use, the entire cost of private transportation is borne directly or indirectly by the individual user(s). However, some scholars argue that it is inaccurate to say that the costs are covered by the individual user, because a big (and often dominant) part of the cost of private transportation is the cost of infrastructure on which individual trips rely. They therefore also work with a model of quasi-private mobility.
Personal transport
Private transportation includes both non-motorized methods of private transit (pedestrians, cyclists, skaters, etc.) and all forms of self-propelled transport vehicles.
Shared personal transport
Non-public passenger transport in vehicles owned by the driver or passenger or operated by the driver.
Commercial transport
Shared vehicle fleets without driver
Self-driven transport in vehicles not owned by either the passengers or driver.
Shared vehicle fleets with driver
Non-scheduled transit vehicles, such as taxicabs and rickshaws, which are rented or hired on demand in the short term with a driver, belong to the special forms of 'public transport', even if the user can freely decide on the time and route of transit.
Shared individual vehicle journeys
These means of transport are fixed-route and fixed-schedule passenger services, for example excursion riverboats, tourist cable cars and resort ski lifts.
Usage
Private transport is the dominant form of transportation in most of the world. In the United States, for example, 86.2% of passenger miles are by passenger vehicles, motorcycles, and trucks.
Examples of private transport
Motorized: automobile, motorboat, electric bicycle, electric skateboard, hovercraft, moped, motorcycle, motorized wheelchair, private aviation, private jet, motor ship, submarine, electric scooter, electric unicycle, mobility scooter, SUV, pick-up truck, limousine.
Non-motorized: bicycle, horse-drawn vehicle, hot air balloon, ice skates, inline skates, pack animal, roller skates, scooter, skateboard, walking, wheelchair.
Sustainability
Cycling and walking, above all, have been recognized as the most sustainable transport systems. In general, all muscle-driven mobility will have a similar energy efficiency while at the same time being almost emission-free (apart from the carbon dioxide exhaled during breathing). The negative environmental impact of private transport can be alleviated by choosing the optimal modal share for a given environment and transport requirements.
Dedicated infrastructure
Automobile repair shop, controlled-access highway, diner, drive-thru, drive-in theater, filling station, garage (residential), motel, parking lot, rest area, retail park, roadside zoo, safari park, roads, racetrack (cars), car dealership, tollbooth, park and ride.
Dalbergia
Dalbergia is a large genus of small to medium-size trees, shrubs and lianas in the pea family, Fabaceae, subfamily Faboideae. It was recently assigned to the informal monophyletic Dalbergia clade (or tribe): the Dalbergieae. The genus has a wide distribution, native to the tropical regions of Central and South America, Africa, Madagascar and Southern Asia. Fossil record A fossil †Dalbergia phleboptera seed pod has been found in a Chattian deposit, in the municipality of Aix-en-Provence in France. Fossils of †Dalbergia nostratum have been found in rhyodacite tuff of Lower Miocene age in Southern Slovakia near the town of Lučenec. Fossil seed pods of †Dalbergia mecsekense have been found in a Sarmatian deposit in Hungary. †Dalbergia lucida fossils have been described from the Xiaolongtan Formation of late Miocene age in Kaiyuan County, Yunnan Province, China. Uses Many species of Dalbergia are important timber trees, valued for their decorative and often fragrant wood, rich in aromatic oils. The most famous of these are the rosewoods, so-named because of the smell of the timber when cut, but several other valuable woods are yielded by the genus. Species such as Dalbergia nigra known as Rio, Bahia, Brazilian rosewood, palisander de Rio Grande, or jacaranda and Dalbergia latifolia known as (East) Indian Rosewood or Sonokeling have been heavily used in furniture given their colour and grain. Several East Asian species are important materials in traditional Chinese furniture. The (Brazilian) tulipwood (D. decipularis) is cream coloured with red or salmon stripes. It is most often used in crossbanding and other veneers; it should not be confused with the "tulipwood" of the American tulip tree Liriodendron tulipifera, used in inexpensive cabinetwork. The similarly used (but purple with darker stripes), and also Brazilian, kingwood is yielded by D. cearensis. Both are smallish to medium-sized trees, to 10 m. Another notable timber is cocobolo, mainly from D. retusa, a Central American timber with spectacular decorative orange red figure on freshly cut surfaces which slowly fades in air to more subdued tones and hues. Dalbergia sissoo (Indian rosewood) is primarily used for furniture in northern India. Its export is highly regulated due to recent high rates of tree death due to unknown causes. Dalbergia sissoo has historically been the primary rosewood species of northern India. This wood is strong and tough, with color golden to dark brown. It is extremely durable and handsome, and it maintains its shape well. It can be easily seasoned. It is difficult to work, but it takes a fine polish. It is used for high quality furniture, plywoods, bridge piles, sporting goods, and railway sleepers. It is a very good material for decorative work and carvings. Its density is 770 kg/m3. African blackwood (D. melanoxylon) is an intensely black wood in demand for making woodwind musical instruments. Dalbergia species are used as food plants by the larvae of some Lepidoptera species including Bucculatrix mendax which feeds exclusively on Dalbergia sissoo. The Dalbergia species are notorious for causing allergic reactions due to the presence of sensitizing quinones in the wood. Conservation All Dalbergia species are protected under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). All but Dalbergia nigra are listed in Appendix II, with D.nigra listed in Appendix I. Species Dalbergia comprises the following species: Dalbergia abbreviata Craib Dalbergia abrahamii Bosser & R. 
Rabev. Dalbergia acariiantha Harms Dalbergia acuta Benth. Dalbergia acutifoliolata Mendonca & Sousa Dalbergia adami Berhaut Dalbergia afzeliana G. Don Dalbergia ajudana Harms Dalbergia albertisii Prain Dalbergia albiflora Hutch. & Dalziel subsp. albiflora Hutch. & Dalziel subsp. echinocarpa Hepper Dalbergia altissima Baker f. Dalbergia altissima Pittier Dalbergia amazonica (Radlk.) Ducke Dalbergia andapensis Bosser & R. Rabev. Dalbergia antsirananae Phillipson, Crameri & N.Wilding Dalbergia arbutifolia Baker Dalbergia armata E. Mey. — Hluhluwe creeper Dalbergia assamica Benth. Dalbergia aurea Bosser & R. Rabev. Dalbergia bakeri Baker Dalbergia balansae Prain Dalbergia baronii Baker — Madagascar rosewood, Palisander rosewood, Palissandre voamboana Dalbergia bathiei R. Vig. Dalbergia beccarii Prain Dalbergia beddomei Thoth. Dalbergia benthamii Prain Dalbergia bignonae Berhaut Dalbergia bintuluensis Sunarno & Ohashi Dalbergia boehmii Taub. Dalbergia bojeri Drake Dalbergia boniana Gagnep. Dalbergia borneensis Prain Dalbergia brachystachya Bosser & R. Rabev. Dalbergia bracteolata Baker Dalbergia brasiliensis Vogel Dalbergia brownei (Jacq.) Urb. — Coin vine Dalbergia burmanica Prain Dalbergia calderonii Standl. subsp. calderonii Standl. subsp. molinae Rudd Dalbergia calycina Benth. Dalbergia campenonii Drake Dalbergia cana Kurz Dalbergia candenatensis (Dennst.) Prain Dalbergia canescens (Elmer) Merr. Dalbergia capuronii Bosser & R. Rabev. Dalbergia carringtoniana Sousa Dalbergia catingicola Harms Dalbergia caudata G. Don Dalbergia cearensis Ducke — Kingwood Dalbergia chapelieri Baill. Dalbergia chlorocarpa R. Vig. Dalbergia chontalensis Standl. & L.O. Williams Dalbergia clarkei Thoth. Dalbergia cochinchinensis Pierre ex Laness. — Siamese rosewood, Thailand rosewood, Tracwood (synonym Dalbergia cambodiana Pierre) Dalbergia commiphoroides Baker f. Dalbergia confertiflora Benth. Dalbergia congensis Baker f. Dalbergia congesta Wight & Arn. Dalbergia congestiflora Pittier Dalbergia coromandeliana Prain Dalbergia crispa Hepper Dalbergia cubilquitzensis (Donn. Sm.) Pittier Dalbergia cucullata Pittier Dalbergia cuiabensis Benth. Dalbergia cultrata Benth. Dalbergia cumingiana Benth. Dalbergia curtisii Prain Dalbergia cuscatlanica (Standl.) Standl. Dalbergia dalzielii Hutch. & Dalziel Dalbergia darienensis Rudd Dalbergia davidii Bosser & R. Rabev. Dalbergia debilis J.F. Macbr. Dalbergia decipularis Rizzini & A. Mattos — Tulipwood Dalbergia delphinensis Bosser & R. Rabev. Dalbergia densa Benth. Dalbergia densiflora (Benth.) Benth. Dalbergia discolor Blume Dalbergia duarensis Thoth. Dalbergia dyeriana Harms Dalbergia ealaensis De Wild. Dalbergia ecastaphyllum (L.) Taub. — Coin vine Dalbergia elegans A.M. Carvalho Dalbergia emirnensis Benth. Dalbergia enneaphylla Pittier Dalbergia entadoides Prain Dalbergia eremicola Polhill Dalbergia ernest-ulei Hoehne Dalbergia errans Craib Dalbergia erubescens Bosser & R. Rabev. Dalbergia falcata Prain Dalbergia fischeri Taub. Dalbergia floribunda Craib Dalbergia florifera De Wild. Dalbergia foliolosa Benth. Dalbergia foliosa (Benth.) A.M. Carvalho Dalbergia forbesii Prain Dalbergia fouilloyana Pellegr. Dalbergia frutescens (Vell.) Britton — Brazilian tulipwood, Jacarandá rosa, Pau de fuso, Pau rosa, Pinkwood, Tulipwood Dalbergia funera Standl. Dalbergia fusca Pierre Dalbergia gardneriana Benth. Dalbergia gentilii De Wild. Dalbergia gilbertii Cronquist Dalbergia glaberrima Bosser & R. Rabev. Dalbergia glabra (Mill.) Standl. Dalbergia glandulosa Benth. 
Dalbergia glaucescens (Benth.) Benth. Dalbergia glaucocarpa Bosser & R. Rabev. Dalbergia glaziovii Harms Dalbergia glomerata Hemsl. Dalbergia godefroyi Prain Dalbergia gossweileri Baker f. Dalbergia gracilis Benth. Dalbergia granadillo Pittier Dalbergia grandibracteata De Wild. Dalbergia grandistipula A.M. Carvalho Dalbergia greveana Baill. Dalbergia guttembergii A.M. Carvalho Dalbergia hainanensis Merr. & Chun Dalbergia hancei Benth. Dalbergia havilandii Prain Dalbergia henryana Prain Dalbergia heudelotii Stapf Dalbergia hiemalis Malme Dalbergia hildebrandtii Vatke Dalbergia hirticalyx Bosser & R. Rabev. Dalbergia horrida (Dennst.) Mabb. Dalbergia hortensis Heringer & al. Dalbergia hoseana Prain Dalbergia hostilis Benth. Dalbergia hullettii Prain Dalbergia humbertii R. Vig. Dalbergia hupeana Hance Dalbergia hygrophila (Benth.) Hoehne Dalbergia intermedia A.M. Carvalho Dalbergia intibucana Standl. & L.O. Williams Dalbergia inundata Benth. Dalbergia iquitosensis Harms Dalbergia jaherii Burck Dalbergia junghuhnii Benth. Dalbergia kerrii Craib Dalbergia kingiana Prain Dalbergia kisantuensis De Wild. & T. Durand Dalbergia kostermansii Sunarno & Ohashi Dalbergia kunstleri Prain Dalbergia kurzii Prain Dalbergia lacei Thoth. Dalbergia lactea Vatke Dalbergia lakhonensis Gagnep. Dalbergia lanceolaria L. f. – Viet. vảy ốc, bạt ong, trắc múi giáo, Burmese: သစ်ပုပ်, Malayalam: വെള്ളീട്ടി Dalbergia lastoursvillensis Pellegr. Dalbergia lateriflora Benth. Dalbergia latifolia Roxb. — Bombay blackwood, East Indian rosewood, Indian palisandre, Indian rosewood, Irugudujava, Java palisandre, Malabar, Sonokeling, Shisham, Sitsal, Satisal Dalbergia laxiflora Micheli Dalbergia lemurica Bosser & R. Rabev. Dalbergia librevillensis Pellegr. Dalbergia louisii Cronquist Dalbergia louvelii R. Vig. — violet rosewood Dalbergia macrosperma Baker Dalbergia madagascariensis Vatke Dalbergia malabarica Prain Dalbergia malangensis Sousa Dalbergia marcaniana Craib Dalbergia maritima R. Vig. Dalbergia martinii F. White Dalbergia mayumbensis Baker f. Dalbergia melanocardium Pittier Dalbergia melanoxylon Guill. & Perr. — African blackwood, African ebony, African grenadilo, Banbanus, Ebene, Granadilla, Granadille d'Afrique, Mpingo, Pau preto, Poyi, Zebrawood Dalbergia menoeides Prain Dalbergia mexicana Pittier Dalbergia microphylla Chiov. Dalbergia millettii Benth. Dalbergia mimosella (Blanco) Prain Dalbergia mimosoides Franch. Dalbergia miscolobium Benth. Dalbergia mollis Bosser & R. Rabev. Dalbergia monetaria L. f. — Moneybush Dalbergia monophylla G.A. Black Dalbergia monticola Bosser & R. Rabev. Dalbergia multijuga E. Mey. Dalbergia negrensis (Radlk.) Ducke Dalbergia neoperrieri Bosser & R. Rabev. Dalbergia ngounyensis Pellegr. Dalbergia nigra (Vell.) Benth. — Bahia rosewood, Brazilian rosewood, Cabiuna, Caviuna, Jacarandá, Jacarandá de Brasil, Palisander, Palisandre da Brésil, Pianowood, Rio rosewood, Rosewood, Obuina Dalbergia nigrescens Kurz Dalbergia nitida (Benth.) Hoehne Dalbergia nitidula Baker Dalbergia noldeae Harms Dalbergia normandii Bosser & R. Rabev. Dalbergia obcordata N.Wilding, Phillipson & Crameri Dalbergia obovata E. Mey. — Climbing flat bean Dalbergia obtusifolia (Baker) Prain Dalbergia odorifera T.C. Chen — Fragrant rosewood Dalbergia oligophylla Hutch. & Dalziel Dalbergia oliveri Prain (synonyms: Dalbergia bariensis Pierre, Dalbergia dongnaiensis Pierre, D. duperreana Pierre & Dalbergia mammosa Pierre) Dalbergia orientalis Bosser & R. Rabev. Dalbergia ovata Benth. Dalbergia pachycarpa (De Wild. & T. 
Durand) De Wild. Dalbergia palo-escrito Rzed. — Palo escrito Dalbergia parviflora Roxb. Dalbergia paucifoliolata Lundell Dalbergia peguensis Thoth. Dalbergia peishaensis Chun & T. Chen Dalbergia peltieri Bosser & R. Rabev. Dalbergia pervillei Vatke Dalbergia pierreana Prain Dalbergia pinnata (Lour.) Prain Dalbergia pluriflora Baker f. Dalbergia polyadelpha Prain Dalbergia polyphylla Benth. Dalbergia prainii Thoth. Dalbergia pseudo-ovata Thoth. Dalbergia pseudo-sissoo Miq. Dalbergia pseudobaronii R. Vig. Dalbergia purpurascens Baill. Dalbergia reniformis Roxb. Dalbergia reticulata Merr. Dalbergia retusa Hemsl. — Caviuna, Cocobolo, Cocobolo prieto, Funeram, Granadillo, Jacarandáholz, Nambar, Nicaraguan rosewood, Palisander, Palissandro, Palo negro, Pau preto, Rosewood, Urauna Dalbergia revoluta Ducke Dalbergia richardsii Sunarno & Ohashi Dalbergia riedelii (Benth.) Sandwith Dalbergia rimosa Roxb. Dalbergia riparia (Mart.) Benth. Dalbergia rostrata Hassk. Dalbergia rubiginosa Roxb. Dalbergia rufa G. Don Dalbergia rugosa Hepper Dalbergia sacerdotum Prain Dalbergia sambesiaca Schinz Dalbergia sampaioana Kuhlm. & Hoehne Dalbergia sandakanensis Sunarno & Ohashi Dalbergia saxatilis Hook. f. Dalbergia scortechinii (Prain) Prain Dalbergia sericea G. Don Dalbergia setifera Hutch. & Dalziel Dalbergia simpsonii Rudd Dalbergia sissoides Wight & Arn. Dalbergia sissoo DC. — Agara, Agaru, Errasissu, Gette, Hihu, Indian rosewood, Irugudujava, Iruvil, Iti, Khujrap, Padimi, Safedar, Sheesham, Shinshapa, Shisham, Shishma, Shishom, Sinsupa, Sissoo, Sisu, Tali, Tenach, Tukreekung, Yette Dalbergia spinosa Roxb. Dalbergia spruceana (Benth.) Benth. — Amazon rosewood Dalbergia stenophylla Prain Dalbergia stercoracea Prain Dalbergia stevensonii Standl. — Honduras rosewood, Nagaed Dalbergia stipulacea Roxb. Dalbergia suaresensis Baill. Dalbergia subcymosa Ducke Dalbergia succirubra Gagnep. & Craib Dalbergia teijsmannii Sunarno & Ohashi Dalbergia teixeirae Sousa Dalbergia thomsonii Benth. Dalbergia thorelii Gagnep. Dalbergia tilarana N. Zamora Dalbergia tinnevelliensis Thoth. Dalbergia tonkinensis Prain Dalbergia travancorica Thoth. Dalbergia trichocarpa Baker Dalbergia tricolor Drake Dalbergia tsaratananensis Bosser & R. Rabev. Dalbergia tsiandalana R. Vig. Dalbergia tsoi Merr. & Chun Dalbergia tucurensis Donn. Sm. — Guatemalan rosewood Dalbergia uarandensis (Chiov.) Thulin Dalbergia urschii Bosser & R. Rabev. Dalbergia vacciniifolia Vatke Dalbergia velutina Benth. Dalbergia verrucosa Craib Dalbergia viguieri Bosser & R. Rabev. Dalbergia villosa (Benth.) Benth. Dalbergia volubilis Roxb. Dalbergia wattii C.B. Clarke Dalbergia xerophila Bosser & R. Rabev. Dalbergia yunnanensis Franch.
Biology and health sciences
Fabales
Plants
316061
https://en.wikipedia.org/wiki/Sluice
Sluice
A sluice is a water channel containing a sluice gate, a type of lock to manage the water flow and water level. It can also be an open channel which processes material, such as a river sluice used in gold prospecting or fossicking. A mill race, leet, flume, penstock or lade is a sluice channeling water toward a water mill. The terms sluice, sluice gate, knife gate, and slide gate are used interchangeably in the water and wastewater control industry. Operation "Sluice gate" refers to a movable gate allowing water to flow under it. When a sluice is lowered, water may spill over the top, in which case the gate operates as a weir. Usually, a mechanism drives the sluice up or down. This may be a simple hand-operated mechanism such as a pulled or lowered chain, a worm drive or a rack-and-pinion drive, or it may be electrically or hydraulically powered. A flap sluice, however, operates automatically, without external intervention or inputs. Types of sluice gates Flap sluice gate A fully automatic type, controlled by the pressure head across it; operation is similar to that of a check valve. It is a gate hinged at the top. Pressure from one side keeps the gate closed; pressure from the other side opens the sluice once a threshold pressure is exceeded. Vertical rising sluice gate A plate sliding in the vertical direction, which may be controlled by machinery. Radial sluice gate A structure in which a small part of a cylindrical surface serves as the gate, supported by radial arms extending along the cylinder's radius. On occasion, a counterweight is provided. Rising sector sluice gate Also a part of a cylindrical surface, which rests at the bottom of the channel and rises by rotating around its centre. Needle sluice A sluice formed by a number of thin needles held against a solid frame through water pressure, as in a needle dam. Fan gate This type of gate was invented in 1808 by a Dutch hydraulic engineer who was at the time Inspector-General for Waterstaat (water resource management) of the Kingdom of Holland. The fan gate has the special property that it can open in the direction of the high water solely using water pressure. This gate type was primarily used to deliberately inundate certain regions, for instance in the case of the Hollandic Water Line. Nowadays this type of gate can still be found in a few places, for example in Gouda. A fan gate has a separate chamber that can be filled with water and is separated from the high-water-level side of the sluice by a large door. When a tube connecting the separate chamber with the high-water-level side of the sluice is opened, the water level, and with that the water pressure in this chamber, will rise to the same level as that on the high-water-level side. As there is then no height difference across the larger gate, it exerts no force. However, the smaller gate has a higher level on the upstream side, which exerts a force to close the gate. When the tube to the low-water side is opened, the water level in the chamber will fall. Due to the difference in the surface areas of the two doors, there will be a net force opening the gate. Designing the sluice gate Sluice gates are one of the most common hydraulic structures used to control or measure the flow in open channels. Vertical rising sluice gates are the most common in open channels and can operate under two flow regimes: free flow and submerged flow.
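For the free-flow regime just mentioned, textbook treatments usually estimate the discharge under a vertical gate from the upstream depth and the gate opening (two of the quantities listed in the next paragraph) together with an empirical discharge coefficient. The sketch below only illustrates that standard orifice-type relation; the function name, the symbols and the coefficient value of about 0.61 are assumptions for illustration, not values taken from this article.

```python
import math

def free_flow_discharge(b, a, h1, cd=0.61, g=9.81):
    """Rough free-flow discharge estimate under a vertical sluice gate.

    b  : gate (channel) width in metres
    a  : gate opening in metres
    h1 : upstream depth in metres
    cd : discharge coefficient (assumed ~0.61; real values depend on
         a/h1 and on whether the outflow is free or submerged)

    Returns discharge in cubic metres per second, using the standard
    orifice-type relation Q = cd * b * a * sqrt(2 * g * h1).
    """
    return cd * b * a * math.sqrt(2.0 * g * h1)

# Example: a 2 m wide gate opened 0.3 m with 1.5 m of water upstream.
print(round(free_flow_discharge(b=2.0, a=0.3, h1=1.5), 2), "m^3/s")
```

Under submerged flow the same opening passes less water, and the coefficient has to be adjusted accordingly.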
The most important depths in the design of sluice gates are the upstream depth, the opening of the sluice gate, the minimum depth of flow just after the sluice gate, the initial depth of the hydraulic jump, the secondary depth of the hydraulic jump, and the downstream depth. Logging sluices In the mountains of the United States, sluices transported logs from steep hillsides to downslope sawmill ponds or yarding areas. Nineteenth-century logging was traditionally a winter activity for men who spent summers working on farms. Where there were freezing nights, water might be applied to logging sluices every night so a fresh coating of slippery ice would reduce friction of logs placed in the sluice the following morning. Placer mining applications Sluice boxes are often used in the recovery of black sands, gold, and other minerals from placer deposits during placer mining operations. They may be small-scale, as used in prospecting, or much larger, as in commercial operations, where the material is sometimes screened using a trommel, screening plant or sieve. Traditional sluices have transverse riffles over a carpet or rubber matting, which trap the heavy minerals, gemstones, and other valuable minerals. Since the early 2000s, more miners and prospectors have been relying on more modern and effective matting systems. The result is a concentrate which requires additional processing. Types of material Aluminium Most sluices are formed from aluminium using a press brake to form a U shape. Wood Traditionally, wood was the material of choice for sluice gates. Cast iron Cast iron has long been popular for constructing sluice gates; it provides the strength needed to withstand the forces of high water levels. Stainless steel In most cases, stainless steel is lighter than the older cast iron material. Fibre-reinforced plastic (FRP) In modern times, newer materials such as fibre-reinforced plastic are being used to build sluices. These modern materials have many of the attributes of the older ones, while introducing advantages such as corrosion resistance and much lighter weight. Regional names for sluice gates In the Somerset Levels, sluice gates are known as clyse or clyce. Most of the inhabitants of Guyana refer to sluices as kokers. The Sinhala people in Sri Lanka, who had an ancient civilization based on harvested rain water, refer to sluices as Horovuwa.
Technology
Hydraulic infrastructure
null
316083
https://en.wikipedia.org/wiki/Golden%20algae
Golden algae
The Chrysophyceae, usually called chrysophytes, chrysomonads, golden-brown algae or golden algae, are a large group of algae, found mostly in freshwater. "Golden algae" is also commonly used to refer to a single species, Prymnesium parvum, which causes fish kills. The Chrysophyceae should not be confused with the Chrysophyta, which is a more ambiguous taxon. Although "chrysophytes" is the anglicization of "Chrysophyta", it generally refers to the Chrysophyceae. Members Originally they were taken to include all such forms except the diatoms and multicellular brown algae, but since then they have been divided into several different groups (e.g., Haptophyceae, Synurophyceae) based on pigmentation and cell structure. Some heterotrophic flagellates, such as the bicosoecids and choanoflagellates, were sometimes seen as related to golden algae too. They are now usually restricted to a core group of closely related forms, distinguished primarily by the structure of the flagella in motile cells, also treated as the order Chromulinales. It is possible membership will be revised further as more species are studied in detail. The Chrysophyceae have been placed by some in the polyphyletic Chromista. The broader monophyletic group to which the Chrysophyceae belong includes various non-algae, including the bicosoecids (but not the collar flagellates), the opalines, oomycete fungi, proteromonads, actinophryid heliozoa, and other heterotrophic flagellates, and is referred to as the Stramenopiles. Description The "primary" cell of chrysophytes contains two specialized flagella. The active, "feathered" (with mastigonemes) flagellum is oriented toward the direction of movement. The smooth, passive flagellum, oriented in the opposite direction, may be present only in rudimentary form in some species. An important characteristic used to identify members of the class Chrysophyceae is the presence of a siliceous cyst that is formed endogenously. Called a statospore, stomatocyst or statocyst, this structure is usually globose and contains a single pore. The surface of mature cysts may be ornamented with different structural elements, and these ornaments are useful for distinguishing species. Most members are unicellular flagellates, with either two visible flagella, as in Ochromonas, or sometimes one, as in Chromulina. The Chromulinales as first defined by Pascher in 1910 included only the latter type, with the former treated as the order Ochromonadales. However, structural studies have revealed that a short second flagellum, or at least a second basal body, is always present, so this is no longer considered a valid distinction. Most of these have no cell covering. Some have loricae or shells, such as Dinobryon, which grows in branched colonies. Most forms with siliceous scales are now considered a separate group, the synurids, but a few belong among the Chromulinales proper, such as Paraphysomonas. Some members are generally amoeboid, with long branching cell extensions, though they pass through flagellate stages as well. Chrysamoeba and Rhizochrysis are typical of these. There is also one species, Myxochrysis paradoxa, which has a complex life cycle involving a multinucleate plasmodial stage, similar to those found in slime molds. These were originally treated as the order Chrysamoebales. The superficially similar Rhizochromulina was once included here, but is now given its own order based on differences in the structure of the flagellate stage. Other members are non-motile.
Cells may be naked and embedded in mucilage, such as Chrysosaccus, or coccoid and surrounded by a cell wall, as in Chrysosphaera. A few are filamentous or even parenchymatous in organization, such as Phaeoplaca. These were included in various older orders, most of the members of which are now included in separate groups. Hydrurus and its allies, freshwater genera which form branched gelatinous filaments, are often placed in the separate order Hydrurales, but may belong here. Classifications Pascher (1914) Classification of the class Chrysophyceae according to Pascher (1914): Division Chrysophyta Class Chrysophyceae Order Chrysomonadales Order Chrysocapsales Order Chrysosphaerales Order Chrysotrichales Class Heterokontae Class Diatomeae Smith (1938) According to Smith (1938): Class Chrysophyceae Order Chrysomonadales Suborder Cromulinae (e.g., Mallomonas) Suborder Isochrysidineae (e.g., Synura) Suborder Ochromonadineae (e.g., Dinobryon) Order Rhizochrysidales (e.g., Chrysamoeba) Order Chrysocapsales (e.g., Hydrurus) Order Chrysotrichales (e.g., Phaeothamnion) Order Chrysosphaerales (e.g., Epichrysis) Bourrely (1957) According to Bourrely (1957): Class Chrysophyceae Order Phaeoplacales Order Stichogloeales Order Phaeothamniales Order Chrysapionales Order Thallochrysidales Order Chrysosphaerales Order Chrysosaccales Order Rhizochrysidales Order Ochromonadales Order Isochrysidales Order Silicoflagellales Order Craspedomonadales Order Chromulinales Starmach (1985) According to Starmach (1985): Class Chrysophyceae Subclass Heterochrysophycidae Order Chromulinales Order Ochromonadales Subclass Acontochrysophycidae Order Chrysarachniales Order Stylococcales Order Chrysosaccales Order Phaeoplacales Subclass Craspedomonadophycidae Order Monosigales Kristiansen (1986) Classification of the class Chrysophyceae and splinter groups according to Kristiansen (1986): Class Chrysophyceae Order Ochromonadales Order Mallomonadales Order Chrysamoebales Order Chrysocapsales Order Hydrurales Order Chrysosphaerales Order Phaeothamniales Order Sarcinochrysidales Class Pedinellophyceae Order Pedinellales Class Dictyochophyceae Order Dictyochales Margulis et al. (1990) Classification of the phylum Chrysophyta according to Margulis et al. (1990): Phylum Chrysophyta Class Chrysophyceae Class Pedinellophyceae Class Dictyochophyceae (= Silicoflagellata) van den Hoek et al. 
(1995) According to van den Hoek, Mann and Jahns (1995): Class Chrysophyceae Order Ochromonadales (e.g., Ochromonas, Pseudokephyrion, Dinobryon) Order Mallomonadales (= Class Synurophyceae, e.g., Mallomonas, Synura) Order Pedinellales (= Class Pedinellophyceae, e.g., Pedinella) Order Chrysamoebidales (e.g., Rhizochrysis, Chrysarachnion) Order Chrysocapsales (e.g., Chrysocapsa, Hydrurus) Order Chrysosphaerales (e.g., Chrysosphaera) Order Phaeothamniales (e.g., Phaeothamnion, Thallochrysis) Preisig (1995) Classification of the class Chrysophyceae and splinter groups according to Preisig (1995): Class Chrysophyceae Order Bicosoecales Order Chromulinales Order Hibberdiales Order Hydrurales Order Sancinochrysidales Order Chrysomioridales Class Dictyochophyceae Order Pedinellales Order Rhizochromulinales Order Dictyochales Class Synurophyceae Order Synurales Guiry and Guiry (2019) According to Guiry and Guiry (2019): Class Chrysophyceae Order Chromulinales Order Hibberdiales Order Hydrurales Order Rhizochrysidales Order Thallochrysidales Chrysophyceae ordo incertae sedis (11 genera) Ecology Chrysophytes live mostly in freshwater, and are important for studies of food web dynamics in oligotrophic freshwater ecosystems, and for assessment of environmental degradation resulting from eutrophication and acid rain. Evolution Chrysophytes contain the pigment fucoxanthin. Because of this, they were once considered to be a specialized form of cyanobacteria. Because many of these organisms had a silica capsule, they have a relatively complete fossil record, allowing modern biologists to confirm that they are, in fact, not derived from cyanobacteria, but rather an ancestor that did not possess the capability to photosynthesize. Many of the chrysophyta precursor fossils entirely lacked any type of photosynthesis-capable pigment. The most primitive stramenopiles are regarded as heterotrophic, such as the ancestors of the Chrysophyceae were likely heterotrophic flagellates that obtained their ability to photosynthesize from an endosymbiotic relationship with fucoxanthin-containing cyanobacteria.
Biology and health sciences
SAR supergroup
Plants
316157
https://en.wikipedia.org/wiki/Chamois
Chamois
The chamois (; ) (Rupicapra rupicapra) or Alpine chamois is a species of goat-antelope native to the mountains in Southern Europe, from the Pyrenees, the Alps, the Apennines, the Dinarides, the Tatra to the Carpathian Mountains, the Balkan Mountains, the Rila–Rhodope massif, Pindus, the northeastern mountains of Turkey, and the Caucasus. It has also been introduced to the South Island of New Zealand. Some subspecies of chamois are strictly protected in the EU under the European Habitats Directive. Description The chamois is a very small bovid. A fully grown chamois reaches a height of and measures . Males, which weigh , are slightly larger than females, which weigh . Both males and females have short, straightish horns which are hooked backwards near the tip, the horn of the male being thicker. In summer, the fur has a rich brown colour which turns to a light grey in winter. Distinct characteristics are white contrasting marks on the sides of the head with pronounced black stripes below the eyes, a white rump and a black stripe along the back. Biology and behaviour Female chamois and their young live in herds of up to 15 to 30 individuals; adult males tend to live solitarily for most of the year. During the rut (late November/early December in Europe, May in New Zealand), males engage in fierce battles for the attention of unmated females. An impregnated female undergoes a gestation period of 170 days, after which a single kid is usually born in May or early June. On rare occasions, twins may be born. If a mother is killed, other females in the herd may try to raise the young. Kids are weaned at six months of age and are fully grown by one year of age, but do not reach sexual maturity until they are three to four years old, although some females may mate at as early two years old. At sexual maturity, young males are forced out of their mother's herds by dominant males (who sometimes kill them), to wander somewhat nomadically until they can establish themselves as mature breeding specimens at eight to nine years of age. Chamois eat various types of vegetation, including highland grasses and herbs during the summer and conifers, barks and needles from trees in winter. Primarily diurnal in activity, they often rest around mid-day and may actively forage during moonlit nights. Chamois can reach an age of 22 years in captivity, although the average recorded age in the wild ranges from 15 to 17 years. Common causes of mortality can include avalanches, epidemics and predation. In the past, the principal predators were Eurasian lynxes, Persian leopards and Golden Jackal, gray wolves, and possibly brown bears and golden eagles, but humans are now the main predators of chamois. Chamois usually use speed and stealthy evasion to escape predators and can run at and can jump vertically into the air or over a distance of . Distribution and habitat The chamois is native to the Pyrenees, the mountains of south and central Europe, Turkey, and the Caucasus. It lives in precipitous, rugged, rocky terrain at moderately high elevations of up to at least . In Europe, the chamois spends the summer months in alpine meadows above the tree line, but moves to elevations of around to spend the winter in pine-dominated forests. In New Zealand Alpine chamois arrived in New Zealand in 1907 as a gift from the Austrian Emperor, Franz Joseph I in exchange for specimens of living ferns, rare birds and lizards. Albert E. L. 
Bertling, formerly head keeper of the Zoological Society's Gardens, Regent's Park, London, accepted an invitation from the New Zealand Government to deliver a consignment of chamois (two bucks and six does) to the colony. They arrived in Wellington, New Zealand, on 23 January 1907, on board SS Turakina. From Wellington the chamois were transhipped to the Manaroa and conveyed to Lyttelton, then by rail to Fairlie in South Canterbury and a four-day horse trek to Mount Cook. The first surviving releases were made in the Aoraki / Mount Cook region and these animals gradually spread over much of the South Island. In New Zealand, chamois hunting is unrestricted and even encouraged by the Department of Conservation to limit the animal's impact on New Zealand's native alpine flora. New Zealand chamois tend to weigh about 20% less than European individuals of the same age, suggesting that food supplies may be limited. Taxonomy The species R. rupicapra is categorized into seven subspecies: Hunting and wildlife management As their meat is considered tasty, chamois are popular game animals. Chamois have two traits that are exploited by hunters: the first is that they are most active in the morning and evening when they feed; the second is that they tend to look for danger originating from below, which means that a hunter stalking chamois from above is less likely to be observed and more likely to be successful. The tuft of hair from the back of the neck, the gamsbart (chamois "beard"), is traditionally worn as a decoration on hats throughout the alpine countries. Chamois leather Chamois leather, traditionally made from the hide of the chamois, is very smooth and absorbent and is favoured in cleaning, buffing, and polishing because it produces no scratching. Modern chamois leather may still be made from chamois hides, but hides of deer or domestic goats or sheep are much more commonly used. Chamois fabric An artificial fabric known as "chamois" is made variously from cotton flannel, PVA, viscose, and other materials with similar qualities. It is napped to produce a plush surface similar to moleskin or chamois leather.
Biology and health sciences
Bovidae
Animals
316410
https://en.wikipedia.org/wiki/Compass%20rose
Compass rose
A compass rose or compass star, sometimes called a wind rose or rose of the winds, is a polar diagram displaying the orientation of the cardinal directions (north, east, south, and west) and their intermediate points. It is used on compasses (including magnetic ones), maps (such as compass rose networks), or monuments. It is particularly common in navigation systems, including nautical charts, non-directional beacons (NDB), VHF omnidirectional range (VOR) systems, and satellite navigation devices ("GPS"). Types Linguistic anthropological studies have shown that most human communities have four points of cardinal direction. The names given to these directions are usually derived from locally specific geographic features (e.g. "towards the hills", "towards the sea"), from celestial bodies (especially the sun), or from atmospheric features (winds, temperature). Most mobile populations tend to adopt sunrise and sunset for East and West and the directions from which different winds blow to denote North and South. Classical The ancient Greeks originally maintained distinct and separate systems of points and winds. The four Greek cardinal points were based on celestial bodies and used for orientation; the four Greek winds were confined to meteorology. Nonetheless, both systems were gradually conflated, and wind names came eventually to denote cardinal directions as well. In his meteorological studies, Aristotle identified ten distinct winds: two north–south winds and four sets of east–west winds blowing from different latitudes (the Arctic Circle, the summer solstice horizon, the equinox and the winter solstice). Aristotle's system was asymmetric. To restore balance, Timosthenes of Rhodes added two more winds to produce the classical 12-wind rose, and began using the winds to denote geographical direction in navigation. Eratosthenes removed two winds from Aristotle's system to produce the classical eight-wind rose. The Romans (e.g. Seneca, Pliny) adopted the Greek 12-wind system and replaced its names with Latin equivalents. The De architectura of the Roman architect Vitruvius describes 24 winds. According to the chronicler Einhard, the Frankish king Charlemagne himself came up with his own names for the classical 12 winds. During the Migration Period, the Germanic names for the cardinal directions entered the Romance languages, where they replaced the Latin names: borealis with north, australis with south, occidentalis with west and orientalis with east. The classical 12-wind rose corresponds only roughly to the modern compass directions; the correspondence is imprecise, since it is not clear at what angles the classical winds are supposed to be with each other, and some have argued that they should be equally spaced at 30 degrees each (for more details, see the article on Classical compass winds). Sidereal The sidereal compass rose demarcates the compass points by the position of stars ("steering stars"; not to be confused with zenith stars) in the night sky, rather than winds. Arab navigators in the Red Sea and the Indian Ocean, who depended on celestial navigation, were using a 32-point sidereal compass rose before the end of the 10th century. In the northern hemisphere, the steady Pole Star (Polaris) was used for the N–S axis; the less-steady Southern Cross had to do for the southern hemisphere, as the southern pole star, Sigma Octantis, is too dim to be easily seen from Earth with the naked eye.
The other thirty points on the sidereal rose were determined by the rising and setting positions of fifteen bright stars; the eastern half of the rose marked their rising positions, and the western half the same stars in their setting positions. The true positions of these stars only approximate their theoretical equidistant rhumbs on the sidereal compass. Stars with the same declination formed a "linear constellation" that provided direction as the night progressed. A similar sidereal compass was used by Polynesian and Micronesian navigators in the Pacific Ocean, although different stars were used in a number of cases, clustering around the east–west axis. Mariner's In Europe, the Classical 12-wind system continued to be taught in academic settings during the Medieval era, but seafarers in the Mediterranean came up with their own distinct 8-wind system. The mariners used names derived from the Mediterranean lingua franca, composed principally of Ligurian, mixed with Venetian, Sicilian, Provençal, Catalan, Greek and Arabic terms from around the Mediterranean basin. (N) Tramontana (NE) Greco (or Bora) (E) Levante (SE) Scirocco (or Exaloc) (S) Ostro (or Mezzogiorno) (SW) Libeccio (or Garbino) (W) Ponente (NW) Maestro (or Mistral) The exact origin of the mariner's eight-wind rose is obscure. Only two of its point names (Ostro, Libeccio) have Classical etymologies; the rest of the names seem to be autonomously derived. Two Arabic words stand out: Scirocco (SE) from al-Sharq (الشرق – east in Arabic) and the variant Garbino (SW), from al-Gharb (الغرب – west in Arabic). This suggests the mariner's rose was probably acquired by southern Italian seafarers not from their classical Roman ancestors, but rather from Norman Sicily in the 11th to 12th centuries. The coasts of the Maghreb and Mashriq are SW and SE of Sicily respectively; the Greco (a NE wind) reflects the position of Byzantine-held Calabria-Apulia to the northeast of Arab Sicily, while the Maestro (a NW wind) is a reference to the Mistral wind that blows from the southern French coast towards northwest Sicily. The 32-point compass used for navigation in the Mediterranean by the 14th century had increments of 11.25° between points. Only the eight principal winds (N, NE, E, SE, S, SW, W, NW) were given special names. The eight half-winds just combined the names of the two principal winds, e.g. Greco-Tramontana for NNE, Greco-Levante for ENE, and so on. Quarter-winds were more cumbersomely phrased, with the closest principal wind named first and the next-closest principal wind second, e.g. "Quarto di Tramontana verso Greco" (literally, "one quarter wind from North towards Northeast", i.e. North by East), and "Quarto di Greco verso Tramontana" ("one quarter wind from NE towards N", i.e. Northeast by North). Boxing the compass (naming all 32 winds) was expected of all Medieval mariners. Depiction on nautical charts In the earliest medieval portolan charts of the 14th century, compass roses were depicted as mere collections of color-coded compass rhumb lines: black for the eight main winds, green for the eight half-winds and red for the sixteen quarter-winds. The average portolan chart had sixteen such roses (or confluences of lines), spaced out equally around the circumference of a large implicit circle. The cartographer Cresques Abraham of Majorca, in his Catalan Atlas of 1375, was the first to draw an ornate compass rose on a map.
By the end of the 15th century, Portuguese cartographers began drawing multiple ornate compass roses throughout the chart, one upon each of the sixteen circumference roses (unless the illustration conflicted with coastal details). The points on a compass rose were frequently labeled by the initial letters of the mariner's principal winds (T, G, L, S, O, L, P, M). From the outset, the custom also began to distinguish the north from the other points by a specific visual marker. Medieval Italian cartographers typically used a simple arrowhead or circumflex-hatted T (an allusion to the compass needle) to designate the north, while the Majorcan cartographic school typically used a stylized Pole Star for its north mark. The use of the fleur-de-lis as north mark was introduced by Pedro Reinel, and quickly became customary in compass roses (and is still often used today). Old compass roses also often used a Christian cross at Levante (E), indicating the direction of Jerusalem from the point of view of the Mediterranean sea. The twelve Classical winds (or a subset of them) were also sometimes depicted on portolan charts, albeit not on a compass rose, but rather separately on small disks or coins on the edges of the map. The compass rose was also depicted on traverse boards used on board ships to record headings sailed at set time intervals. Modern depictions The contemporary compass rose appears as two rings, one smaller and set inside the other. The outside ring denotes true cardinal directions while the smaller inside ring denotes magnetic cardinal directions. True north refers to the geographical location of the north pole, while magnetic north refers to the direction towards which the north pole of a magnetic object (as found in a compass) will point. The angular difference between true and magnetic north is called variation, which varies depending on location. The angular difference between magnetic heading and compass heading is called deviation, which varies by vessel and its heading. North arrows are often included in contemporary maps as part of the map layout. The modern compass rose has eight principal winds; listed clockwise, these are N, NE, E, SE, S, SW, W and NW. Although modern compasses use the names of the eight principal directions (N, NE, E, SE, etc.), older compasses use the traditional Italianate wind names of Medieval origin (Tramontana, Greco, Levante, etc.). Four-point compass roses use only the four "basic winds" or "cardinal directions" (North, East, South, West), with angles of difference at 90°. Eight-point compass roses use the eight principal winds, that is, the four cardinal directions (N, E, S, W) plus the four "intercardinal" or "ordinal directions" (NE, SE, SW, NW), at angles of difference of 45°. Twelve-point compass roses, with markings 30° apart, are often painted on airport ramps to assist with the adjustment of aircraft magnetic compass compensators. Sixteen-point compass roses are constructed by bisecting the angles of the principal winds to come up with intermediate compass points, known as half-winds, at angles of difference of 22.5°. The names of the half-winds are simply combinations of the principal winds to either side, principal then ordinal, e.g. North-northeast (NNE), East-northeast (ENE), etc. Using gradians, of which there are 400 in a circle, the sixteen-point rose has twenty-five gradians per point. Thirty-two-point compass roses are constructed by bisecting these angles, coming up with quarter-winds at 11.25° angles of difference.
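The arithmetic behind these subdivisions is simple enough to sketch. The short example below generates the 32 point names from the eight principal winds, using the quarter-wind naming convention described in the next paragraph, and pairs each with its bearing in degrees; the helper name and the abbreviation style (NbE, NEbN, and so on) are illustrative choices, not a standard from this article.

```python
CARDINALS = ["N", "E", "S", "W"]

def thirty_two_points():
    """Return the 32 compass points, clockwise from north, with bearings.

    Each point is 360/32 = 11.25 degrees from its neighbours.  Names are
    built from the eight principal winds: half-winds join the two
    neighbouring principal winds (e.g. NNE), and quarter-winds use the
    "X by Y" convention (abbreviated here as NbE, NEbN, etc.).
    """
    points = []
    for i, c1 in enumerate(CARDINALS):
        c2 = CARDINALS[(i + 1) % 4]                      # next cardinal clockwise
        ic = c1 + c2 if c1 in ("N", "S") else c2 + c1    # NE, SE, SW, NW
        points += [c1, c1 + "b" + c2, c1 + ic, ic + "b" + c1,
                   ic, ic + "b" + c2, c2 + ic, c2 + "b" + c1]
    return [(name, i * 11.25) for i, name in enumerate(points)]

for name, bearing in thirty_two_points()[:5]:
    print(f"{name:4s} {bearing:6.2f} deg")   # N 0.00, NbE 11.25, NNE 22.50, ...
```

Boxing the compass then amounts to reciting this list in order.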
Quarter-wind names are constructed as "X by Y", which can be read as "one quarter wind from X toward Y", where X is one of the eight principal winds and Y is one of the two adjacent cardinal directions. For example, North-by-east (NbE) is one quarter wind from North towards East, and Northeast-by-north (NEbN) is one quarter wind from Northeast toward North. Naming all 32 points on the rose is called "boxing the compass". The 32-point rose has 11.25° between points, but these divisions are easily found by successively halving angles, and may have been easier for those not using a 360° circle. Eight points make a right angle, and a point is easy to estimate, allowing bearings to be given such as "two points off the starboard bow". Use as symbol The NATO symbol uses a four-pointed rose. Outward Bound uses the compass rose as the logo for various schools around the world. An 8-point compass rose was the logo of Varig, the largest airline in Brazil for many decades until its bankruptcy in 2006. An 8-point compass rose is a prominent feature in the logo of the Seattle Mariners Major League Baseball club. Hong Kong Correctional Services's crest uses a four-pointed compass rose. The compass rose is used as the symbol of the worldwide Anglican Communion of churches. A 16-point compass rose was IBM's logo for the System/360 product line. A 16-point compass rose is the official logo of the Spanish National University of Distance Education (Universidad Nacional de Educación a Distancia or UNED). A 16-point compass rose is present on the seal and the flag of the Central Intelligence Agency of the federal government of the United States (the CIA). Tattoos of eight-pointed stars are used by the Vor v Zakone to denote rank. In popular culture The Compass Rose is a 1982 collection of short stories by Ursula K. Le Guin.
Technology
Navigation
null
316414
https://en.wikipedia.org/wiki/Volcanic%20rock
Volcanic rock
Volcanic rocks (often shortened to volcanics in scientific contexts) are rocks formed from lava erupted from a volcano. Like all rock types, the concept of volcanic rock is artificial, and in nature volcanic rocks grade into hypabyssal and metamorphic rocks and constitute an important element of some sediments and sedimentary rocks. For these reasons, in geology, volcanics and shallow hypabyssal rocks are not always treated as distinct. In the context of Precambrian shield geology, the term "volcanic" is often applied to what are strictly metavolcanic rocks. Volcanic rocks and sediment that form from magma erupted into the air are called "pyroclastics," and these are also technically sedimentary rocks. Volcanic rocks are among the most common rock types on Earth's surface, particularly in the oceans. On land, they are very common at plate boundaries and in flood basalt provinces. It has been estimated that volcanic rocks cover about 8% of the Earth's current land surface. Characteristics Setting and size Lava Tephra Volcanic bomb Lapilli Volcanic ash Texture Volcanic rocks are usually fine-grained or aphanitic to glassy in texture. They often contain clasts of other rocks and phenocrysts. Phenocrysts are crystals that are larger than the matrix and are identifiable with the unaided eye. Rhomb porphyry is an example, with large rhomb-shaped phenocrysts embedded in a very fine-grained matrix. Volcanic rocks often have a vesicular texture caused by voids left by volatiles trapped in the molten lava. Pumice is a highly vesicular rock produced in explosive volcanic eruptions. Chemistry Most modern petrologists classify igneous rocks, including volcanic rocks, by their chemistry when dealing with their origin. The fact that different mineralogies and textures may be developed from the same initial magmas has led petrologists to rely heavily on chemistry to look at a volcanic rock's origin. The chemical classification of igneous rocks is based first on the total content of silicon and alkali metals (sodium and potassium) expressed as weight fraction of silica and alkali oxides (K2O plus Na2O). These place the rock in one of the fields of the TAS diagram. Ultramafic rock and carbonatites have their own specialized classification, but these rarely occur as volcanic rocks. Some fields of the TAS diagram are further subdivided by the ratio of potassium oxide to sodium oxide. Additional classifications may be made on the basis of other components, such as aluminum or iron content. Volcanic rocks are also broadly divided into subalkaline, alkaline, and peralkaline volcanic rocks. Subalkaline rocks are defined as rocks in which SiO2 < -3.3539 × 10^-4 × A^6 + 1.2030 × 10^-2 × A^5 - 1.5188 × 10^-1 × A^4 + 8.6096 × 10^-1 × A^3 - 2.1111 × A^2 + 3.9492 × A + 39.0, where both the silica and the total alkali oxide content (A) are expressed as molar fractions. Because the TAS diagram uses weight fraction and the boundary between alkaline and subalkaline rock is defined in terms of molar fraction, the position of this curve on the TAS diagram is only approximate. Peralkaline volcanic rocks are defined as rocks having Na2O + K2O > Al2O3, so that some of the alkali oxides must be present as aegirine or sodic amphibole rather than feldspar. The chemistry of volcanic rocks is dependent on two things: the initial composition of the primary magma and the subsequent differentiation. Differentiation of most magmas tends to increase the silica (SiO2) content, mainly by crystal fractionation.
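Because the subalkaline boundary quoted above is just a sixth-degree polynomial in the total alkali content A, it is straightforward to evaluate numerically. The minimal sketch below computes the boundary value of SiO2 for a given A exactly as written above; the function name is an arbitrary choice, and, as noted, the curve is defined in molar fraction, so it cannot be read directly against weight-fraction TAS coordinates.

```python
def subalkaline_boundary(a):
    """Boundary value of SiO2 for a given total alkali oxide content A,
    using the sixth-degree polynomial quoted above (molar fractions).

    The text defines subalkaline rocks by comparing a rock's SiO2
    content against this value.
    """
    coeffs = [-3.3539e-4, 1.2030e-2, -1.5188e-1, 8.6096e-1,
              -2.1111, 3.9492, 39.0]          # coefficients for A^6 ... A^0
    result = 0.0
    for c in coeffs:                          # Horner's scheme
        result = result * a + c
    return result

print(round(subalkaline_boundary(5.0), 2))    # boundary SiO2 value at A = 5
```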
The initial composition of most magmas is basaltic, albeit small differences in initial compositions may result in multiple differentiation series. The most common of these series are the tholeiitic, calc-alkaline, and alkaline. Mineralogy Most volcanic rocks share a number of common minerals. Differentiation of volcanic rocks tends to increase the silica (SiO2) content mainly by fractional crystallization. Thus, more evolved volcanic rocks tend to be richer in minerals with a higher amount of silica such as phyllo and tectosilicates including the feldspars, quartz polymorphs and muscovite. While still dominated by silicates, more primitive volcanic rocks have mineral assemblages with less silica, such as olivine and the pyroxenes. Bowen's reaction series correctly predicts the order of formation of the most common minerals in volcanic rocks. Occasionally, a magma may pick up crystals that crystallized from another magma; these crystals are called xenocrysts. Diamonds found in kimberlites are rare but well-known xenocrysts; the kimberlites do not create the diamonds, but pick them up and transport them to the surface of the Earth. Naming Volcanic rocks are named according to both their chemical composition and texture. Basalt is a very common volcanic rock with low silica content. Rhyolite is a volcanic rock with high silica content. Rhyolite has silica content similar to that of granite while basalt is compositionally equal to gabbro. Intermediate volcanic rocks include andesite, dacite, trachyte, and latite. Pyroclastic rocks are the product of explosive volcanism. They are often felsic (high in silica). Pyroclastic rocks are often the result of volcanic debris, such as ash, bombs and tephra, and other volcanic ejecta. Examples of pyroclastic rocks are tuff and ignimbrite. Shallow intrusions, which possess structure similar to volcanic rather than plutonic rocks, are also considered to be volcanic, shading into subvolcanic. The terms lava stone and lava rock are more used by marketers than geologists, who would likely say "volcanic rock" (because lava is a molten liquid and rock is solid). "Lava stone" may describe anything from a friable silicic pumice to solid mafic flow basalt, and is sometimes used to describe rocks that were never lava, but look as if they were (such as sedimentary limestone with dissolution pitting). To convey anything about the physical or chemical properties of the rock, a more specific term should be used; a good supplier will know what sort of volcanic rock they are selling. Composition of volcanic rocks The sub-family of rocks that form from volcanic lava are called igneous volcanic rocks (to differentiate them from igneous rocks that form from magma below the surface, called igneous plutonic rocks). The lavas of different volcanoes, when cooled and hardened, differ much in their appearance and composition. If a rhyolite lava-stream cools quickly, it can quickly freeze into a black glassy substance called obsidian. When filled with bubbles of gas, the same lava may form the spongy appearing pumice. Allowed to cool slowly, it forms a light-colored, uniformly solid rock called rhyolite. The lavas, having cooled rapidly in contact with the air or water, are mostly finely crystalline or have at least fine-grained ground-mass representing that part of the viscous semi-crystalline lava flow that was still liquid at the moment of eruption. 
At this time they were exposed only to atmospheric pressure, and the steam and other gases, which they contained in great quantity were free to escape; many important modifications arise from this, the most striking being the frequent presence of numerous steam cavities (vesicular structure) often drawn out to elongated shapes subsequently filled up with minerals by infiltration (amygdaloidal structure). As crystallization was going on while the mass was still creeping forward under the surface of the Earth, the latest formed minerals (in the ground-mass) are commonly arranged in subparallel winding lines that follow the direction of movement (fluxion or fluidal structure)—and larger early minerals that previously crystallized may show the same arrangement. Most lavas fall considerably below their original temperatures before emitted. In their behavior, they present a close analogy to hot solutions of salts in water, which, when they approach the saturation temperature, first deposit a crop of large, well-formed crystals (labile stage) and subsequently precipitate clouds of smaller less perfect crystalline particles (metastable stage). In igneous rocks the first generation of crystals generally forms before the lava has emerged to the surface, that is to say, during the ascent from the subterranean depths to the crater of the volcano. It has frequently been verified by observation that freshly emitted lavas contain large crystals borne along in a molten, liquid mass. The large, well-formed, early crystals (phenocrysts) are said to be porphyritic; the smaller crystals of the surrounding matrix or ground-mass belong to the post-effusion stage. More rarely lavas are completely fused at the moment of ejection; they may then cool to form a non-porphyritic, finely crystalline rock, or if more rapidly chilled may in large part be non-crystalline or glassy (vitreous rocks such as obsidian, tachylyte, pitchstone). A common feature of glassy rocks is the presence of rounded bodies (spherulites), consisting of fine divergent fibres radiating from a center; they consist of imperfect crystals of feldspar, mixed with quartz or tridymite; similar bodies are often produced artificially in glasses that are allowed to cool slowly. Rarely these spherulites are hollow or consist of concentric shells with spaces between (lithophysae). Perlitic structure, also common in glasses, consists of the presence of concentric rounded cracks owing to contraction on cooling. The phenocrysts or porphyritic minerals are not only larger than those of the ground-mass; as the matrix was still liquid when they formed they were free to take perfect crystalline shapes, without interference by the pressure of adjacent crystals. They seem to have grown rapidly, as they are often filled with enclosures of glassy or finely crystalline material like that of the ground-mass . Microscopic examination of the phenocrysts often reveals that they have had a complex history. Very frequently they show layers of different composition, indicated by variations in color or other optical properties; thus augite may be green in the center surrounded by various shades of brown; or they may be pale green centrally and darker green with strong pleochroism (aegirine) at the periphery. In the feldspars the center is usually richer in calcium than the surrounding layers, and successive zones may often be noted, each less calcic than those within it. 
Phenocrysts of quartz (and of other minerals), instead of sharp, perfect crystalline faces, may show rounded corroded surfaces, with the points blunted and irregular tongue-like projections of the matrix into the substance of the crystal. It is clear that after the mineral had crystallized it was partly again dissolved or corroded at some period before the matrix solidified. Corroded phenocrysts of biotite and hornblende are very common in some lavas; they are surrounded by black rims of magnetite mixed with pale green augite. The hornblende or biotite substance has proved unstable at a certain stage of consolidation, and has been replaced by a paramorph of augite and magnetite, which may partially or completely substitute for the original crystal but still retains its characteristic outlines. Mechanical behaviour of volcanic rocks The mechanical behaviour of volcanic rocks is complicated by their complex microstructure. For example, attributes such as the partitioning of the void space (pores and microcracks), pore and crystal size and shape, and hydrothermal alteration can all vary widely in volcanic rocks and can all influence the resultant mechanical behaviour (e.g., Young's modulus, compressive and tensile strength, and the pressure at which they transition from brittle to ductile behaviour). As for other crustal rocks, volcanic rocks are brittle and ductile at low and high effective confining pressures, respectively. Brittle behaviour is manifest as faults and fractures, and ductile behaviour can either be distributed (cataclastic pore collapse) or localised (compaction bands). Understanding the mechanical behaviour of volcanic rocks can help us better understand volcanic hazards, such as flank collapse.
Physical sciences
Igneous rocks
Earth science
316528
https://en.wikipedia.org/wiki/Thallus
Thallus
Thallus (plural: thalli), from a Latinized Greek word meaning "a green shoot" or "twig", is the vegetative tissue of some organisms in diverse groups such as algae, fungi, some liverworts, lichens, and the Myxogastria. A thallus usually denotes the entire body of a multicellular non-moving organism in which there is no organization of the tissues into organs. Many of these organisms were previously known as the thallophytes, a polyphyletic group of distantly related organisms. An organism or structure resembling a thallus is called thalloid, thalloidal, thalliform, thalline, or thallose. Even though thalli do not have organized and distinct parts (leaves, roots, and stems) as do the vascular plants, they may have analogous structures that resemble their vascular "equivalents". The analogous structures have a similar function or macroscopic structure, but a different microscopic structure; for example, no thallus has vascular tissue. In exceptional cases such as the Lemnoideae, where the structure of a vascular plant is in fact thallus-like, it is referred to as having a thalloid structure, or sometimes as a thalloid. Although a thallus is largely undifferentiated in terms of its anatomy, there can be visible differences and functional differences. A kelp, for example, may have its thallus divided into three regions. The parts of a kelp thallus include the holdfast (anchor), the stipe (which supports the blades) and the blades (for photosynthesis). The thallus of a fungus is usually called a mycelium. The term thallus is also commonly used to refer to the vegetative body of a lichen. In seaweeds, the thallus is sometimes also called a 'frond'. The gametophyte of some non-thallophyte plants – clubmosses, horsetails, and ferns – is termed the "prothallus".
Biology and health sciences
Fungal morphology and anatomy
Biology
316532
https://en.wikipedia.org/wiki/Spring%20%28season%29
Spring (season)
Spring, also known as springtime, is one of the four temperate seasons, succeeding winter and preceding summer. There are various technical definitions of spring, but local usage of the term varies according to local climate, cultures and customs. When it is spring in the Northern Hemisphere, it is autumn in the Southern Hemisphere and vice versa. At the spring (or vernal) equinox, days and nights are approximately twelve hours long, with daytime length increasing and nighttime length decreasing as the season progresses until the Summer Solstice in June (Northern Hemisphere) and December (Southern Hemisphere). Spring and "springtime" refer to the season, and also to ideas of rebirth, rejuvenation, renewal, resurrection and regrowth. Subtropical and tropical areas have climates better described in terms of other seasons, e.g. dry or wet, monsoonal or cyclonic. Cultures may have local names for seasons which have little equivalence to the terms originating in Europe. Etymology According to the Online Etymological Dictionary, "spring" in the sense of the season comes from phrases such as "springing time" (14th century) and "the spring of the year". This use is from an archaic noun meaning "act or time of springing or appearing; the first appearance; the beginning, birth, rise, or origin". Spring as a word in general appeared via the Middle English springen, via the Old English springan. These were verbs meaning to rise up or to burst forth, (see also the modern German springen 'jump') and are not believed to have originally related to the season. These all originate from Proto-Germanic *sprenganan. Meteorological reckoning Meteorologists generally define four seasons in many climatic areas: spring, summer, autumn (fall), and winter. These are determined by the values of their average temperatures on a monthly basis, with each season lasting three calendar months. The three warmest months are by definition summer, the three coldest months are winter, and the intervening gaps are spring and autumn. Meteorological spring can therefore, start on different dates in different regions. In the United States and United Kingdom, spring months are March, April, and May. In Ireland, following the Irish calendar, spring is often defined as February, March, and April. In Sweden, meteorologists define the beginning of spring as the first occasion on which the average 24 hours temperature exceeds zero degrees Celsius for seven consecutive days, thus the date varies with latitude and elevation (but no earlier than 15 February, and no later than 31 July). In Australia, New Zealand, South Africa and Brazil the spring months are September, October, and November. Astronomical and solar reckoning In the Northern Hemisphere (with countries such as Germany, the United States, Canada, and the UK), solar reckoning was traditionally used with the solstices and equinoxes representing the midpoints of each season, however, the astronomical vernal equinox (varying between 19 and 21 March) can be taken to mark the first day of spring with the summer solstice (around 21 June) marked as first day of summer. By solar reckoning, Spring is held to begin 1 February until the first day of Summer on May Day, with the summer and winter solstices being marked as Midsummer and Midwinter respectively, instead of as the beginning of the season as is the case with astronomical reckoning. In Persian culture the first day of spring is the first day of the first month (called Farvardin) which begins on 20 or 21 March. 
In the traditional Chinese calendar, the "spring" season () consists of the days between Lichun (3–5 February), taking Chunfen (20–22 March) as its midpoint, then ending at Lixia (5–7 May). Similarly, according to the Celtic tradition, which is based solely on daylight and the strength of the noon sun, spring begins in early February (near Imbolc or Candlemas) and continues until early May (Beltane), with Saint Patrick's Day (17 March) being regarded as the middle day of spring. Late Roman Republic scholar Marcus Terentius Varro defined spring as lasting from the seventh day before the Ides of Februarius (7 February) to the eighth day before the Ides of Maius (8 May). The spring season in India is culturally in the months of March and April, with an average temperature of approx 32 °C. Some people in India especially from Karnataka state celebrate their new year in spring, Ugadi. Ecological reckoning The beginning of spring is not always determined by fixed calendar dates. The phenological or ecological definition of spring relates to biological indicators, such as the blossoming of a range of plant species, the activities of animals, and the special smell of soil that has reached the temperature for micro flora to flourish. These indicators, along with the beginning of spring, vary according to the local climate and according to the specific weather of a particular year. In England, Wales and Northern Ireland, the National Trust runs the #BlossomWatch campaign, which encourages people to share images of blossom with one another, as an early indicator of the arrival of the season. Some ecologists divide the year into six seasons. In addition to spring, ecological reckoning identifies an earlier separate prevernal (early or pre-spring) season between the hibernal (winter) and vernal (spring) seasons. This is a time when only the hardiest flowers like the crocus are in bloom, sometimes while there is still some snowcover on the ground. Natural events During early spring, the axis of the Earth is increasing its tilt relative to the Sun, and the length of daylight rapidly increases for the relevant hemisphere. The hemisphere begins to warm significantly, causing new plant growth to "spring forth", giving the season its name. Any snow begins to melt, swelling streams with runoff and any frosts become less severe. In climates that have no snow, and rare frosts, air and ground temperatures increase more rapidly. Many flowering plants bloom at this time of year, in a long succession, sometimes beginning when snow is still on the ground and continuing into early summer. In normally snowless areas, "spring" may begin as early as February (Northern Hemisphere) or August (Southern Hemisphere), heralded by the blooming of deciduous magnolias, cherries, and quince. Many temperate areas have a dry spring, and wet autumn (fall), which brings about flowering in this season, more consistent with the need for water, as well as warmth. Subarctic areas may not experience "spring" at all until May. While spring is a result of the warmth caused by the changing orientation of the Earth's axis relative to the Sun, the weather in many parts of the world is affected by other, less predictable events. The rainfall in spring (or any season) follows trends more related to longer cycles—such as the solar cycle—or events created by ocean currents and ocean temperatures—for example, the El Niño effect and the Southern Oscillation Index. 
Unstable spring weather may occur more often when warm air begins to invade from lower latitudes, while cold air is still pushing from the Polar regions. Flooding is also most common in and near mountainous areas during this time of year, because of snow-melt which is accelerated by warm rains. In North America, Tornado Alley is most active at this time of year, especially since the Rocky Mountains prevent the surging hot and cold air masses from spreading eastward, and instead force them into direct conflict. Besides tornadoes, supercell thunderstorms can also produce dangerously large hail and very high winds, for which a severe thunderstorm warning or tornado warning is usually issued. Even more so than in winter, the jet streams play an important role in unstable and severe Northern Hemisphere weather in springtime. In recent decades, season creep has been observed, which means that many phenological signs of spring are occurring earlier in many regions by around two days per decade. Spring in the Southern Hemisphere is different in several significant ways to that of the Northern Hemisphere for several reasons, including: There is no land bridge between Southern Hemisphere countries and the Antarctic zone capable of bringing in cold air without the temperature-mitigating effects of extensive tracts of water; The vastly greater amount of ocean in the Southern Hemisphere at most latitudes; There is a circumpolar flow of air (the roaring 40s and 50s) uninterrupted by large land masses; No equivalent jet streams; and The peculiarities of the reversing ocean currents in the Pacific. Cultural associations Carnival Carnival is practiced by many Christians around the world in the days before Lent (40 days, without Sundays, before Easter). It is the first spring festival of the new year for many. Easter Easter is the most important religious feast in the Christian liturgical year. Christians believe that Jesus was resurrected from the dead on the "third day" (two days after his crucifixion), and celebrate this resurrection on Easter Day, two days after Good Friday. Since the Last Supper was a Passover Seder, the date of Easter can be calculated as the first Sunday after the start of Passover. This is usually (see Passover below) the first Sunday after the first full moon following the spring equinox. The date of Easter varies between 22 March and 25 April (which corresponds to between 4 April and 8 May in the Gregorian Calendar for the Eastern and Oriental Orthodox Churches using the Julian Calendar). In this celebration, the children do an easter egg hunt. May Day The First of May is the date of many public holidays. In many countries, May Day is synonymous with International Workers' Day, or Labour Day, which celebrates the social and economic achievements of the labour movement. As a day of celebration, the holiday has ancient origins, and it can relate to many customs that have survived into modern times. Many of these customs are due to May Day being a cross-quarter day, meaning that (in the Northern Hemisphere where it is almost exclusively celebrated) it falls approximately halfway between the spring equinox and summer solstice. In the Celtic tradition, this date marked the end of spring and the beginning of summer. Passover The Passover begins on the 15th day of the month of Nisan, which typically falls in March or April of the Gregorian calendar on the night of a full moon after the northern spring equinox. 
However, due to leap months falling after the vernal equinox, Passover sometimes starts on the second full moon after the vernal equinox, as in 2016. Jews celebrate this holiday to commemorate their escape from slavery in Egypt as described in the book of Exodus in the Torah. Foods consumed during Passover seders, such as lamb and barley, are tied to springtime seasonal availability. In this celebration, children recite the Four Questions during the seder and hunt for the afikoman afterwards. Allhallowtide The Western Christian season encompassing the triduum of All Saints' Eve (Halloween), All Saints' Day (All Hallows') and All Souls' Day is observed in the spring in the Southern Hemisphere.
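Returning to the Easter dates quoted above (22 March to 25 April in the Western church): in practice the date is conventionally computed with the so-called Anonymous Gregorian algorithm, a compact form of the Gregorian computus, rather than from Passover directly. The sketch below reproduces that standard algorithm; the sample years are arbitrary.

```python
def gregorian_easter(year):
    """Anonymous Gregorian algorithm: month and day of Western Easter Sunday."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

for y in (2024, 2025, 2038):
    print(y, gregorian_easter(y))  # e.g. 2024 -> (3, 31), i.e. 31 March
```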
Physical sciences
Seasons
null
316577
https://en.wikipedia.org/wiki/Launch%20pad
Launch pad
A launch pad is an above-ground facility from which a rocket-powered missile or space vehicle is vertically launched. The term launch pad can be used to describe just the central launch platform (mobile launcher platform), or the entire complex (launch complex). The entire complex will include a launch mount or launch platform to physically support the vehicle, a service structure with umbilicals, and the infrastructure required to provide propellants, cryogenic fluids, electrical power, communications, telemetry, rocket assembly, payload processing, storage facilities for propellants and gases, equipment, access roads, and drainage. Most launch pads include fixed service structures to provide one or more access platforms to assemble, inspect, and maintain the vehicle and to allow access to the spacecraft, including the loading of crew. The pad may contain a flame deflection structure to prevent the intense heat of the rocket exhaust from damaging the vehicle or pad structures, and a sound suppression system spraying large quantities of water may be employed. The pad may also be protected by lightning arresters. A spaceport typically includes multiple launch complexes and other supporting infrastructure. A launch pad is distinct from a missile launch facility (or missile silo or missile complex), which also launches a missile vertically but is located underground in order to help harden it against enemy attack. The launch complex for liquid fueled rockets often has extensive ground support equipment including propellant tanks and plumbing to fill the rocket before launch. Cryogenic propellants (liquid oxygen oxidizer, and liquid hydrogen or liquid methane fuel) need to be continuously topped off (i.e., boil-off replaced) during the launch sequence (countdown), as the vehicle awaits liftoff. This becomes particularly important as complex sequences may be interrupted by planned or unplanned holds to fix problems. Most rockets need to be supported and held down for a few seconds after ignition while the engines build up to full thrust. The vehicle is commonly held on the pad by hold-down arms or explosive bolts, which are triggered when the vehicle is stable and ready to fly, at which point all umbilical connections with the pad are released. History Precursors to modern rocketry, such as fireworks and rocket launchers, did not generally require dedicated launch pads. This was due in part to their relatively portable size, as well as the sufficiency of their casings in sustaining stresses. One of the first pads for a liquid-fueled rocket, what would later be named the Goddard Rocket Launching Site after Robert H. Goddard's series of launch tests starting in 1926, consisted of a mount situated on an open field in rural Massachusetts. The mount consisted of a frame with a series of gasoline and liquid oxygen lines feeding into the rocket. It wasn't until the 1930s that rockets were increasing enough in size and strength that specialized launch facilities became necessary. The Verein für Raumschiffahrt in Germany was permitted after a request for funding in 1930 to move from farms to the Berlin rocket launching site (), a repurposed ammunition dump. A test stand was built for liquid-propellant rockets in Kummersdorf in 1932, where the early designs from the Aggregat series of ballistic missiles were afterwards developed. This site was also the location of the first casualties in rocket development, when Dr. Wahmke and 2 assistants were killed, and another assistant was injured. 
A propellant tank exploded while they were experimenting with mixing 90% hydrogen peroxide and alcohol prior to combustion. In May 1937, Dornberger and most of his staff moved to the Peenemünde Army Research Center on the island of Usedom on the Baltic coast, which offered much greater space and secrecy. Dr. Thiel and his staff followed in the summer of 1940. Test Stand VI at Peenemünde was an exact replica of Kummersdorf's large test stand. It was this site which saw the development of the V-2 rocket. Test Stand VII was the principal testing facility at the Peenemünde Airfield and was capable of static firing rocket motors with up to 200 tons of thrust. Launch pads increased in complexity over the following decades, both during and after the Space Race. Where large volumes of exhaust gases are expelled during engine testing or vehicle launch, a flame deflector may be installed to direct the exhaust and mitigate damage to the surrounding pad. This is especially important for reusable launch vehicles, where minimizing time spent on refurbishment between launches is a priority. Construction The construction of a launch pad begins with site selection, considering various geographical and logistical factors. It is often advantageous to position the launch pad on the coast, particularly with the ocean to the east, to take advantage of the additional eastward velocity imparted by the Earth's rotation, which reduces the propellant needed to reach orbit. Space programs without this advantage, such as the Soviet and French space programs, may use facilities outside their main territory, such as the Baikonur Cosmodrome or the Guiana Space Centre. This orientation also allows for safe trajectory paths, minimizing risks to populated areas during ascent. Facilities Transport of rockets to the pad Each launch site is unique, but a few broad types can be described by the means by which the space vehicle gets to the pad. Horizontally integrated rockets travel horizontally, tail forward, to the launch site on a transporter erector launcher and are then raised to the vertical position over the flame duct. Examples include all large Soviet rockets, including Soyuz, Proton, N1, and Energia. This method is also used by the SpaceX and Electron launch vehicles. Silo-launched rockets are assembled inside a missile silo. This method is only used by converted ICBMs, due to the difficulty and expense of constructing a silo that can contain the forces of a rocket launch. Vertically integrated rockets can be assembled in a separate hangar on a mobile launcher platform (MLP). The MLP contains the umbilical structure and is carried to the launch site on a large vehicle called a crawler-transporter. Launch Complex 39 at the Kennedy Space Center is an example of a facility using this method. A similar system is used to launch Ariane 5 rockets at ELA-3 at the Guiana Space Centre. Vertically assembled vehicles can also be transported on a mobile launcher platform resting on two parallel standard gauge railroad tracks that run from the integration building to the launch area. This system is in use for the Atlas V and the future Vulcan. At SLC-6 and SLC-37, rockets are assembled on the launch mount. A windowless rail-mounted building encloses the launch pad and gantry to protect the vehicle from the elements and for purposes of military secrecy. Prior to launch, the building is rolled away. This method is also used at Kagoshima for the M-V.
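The eastward-launch advantage noted under Construction can be quantified: the ground's eastward speed due to Earth's rotation scales with the cosine of latitude. The sketch below is a rough illustration; the listed sites and latitudes are approximate.

```python
import math

EQUATORIAL_RADIUS_M = 6_378_137.0   # WGS-84 equatorial radius
SIDEREAL_DAY_S = 86_164.1           # time for one rotation of the Earth

def eastward_surface_speed(latitude_deg):
    """Approximate eastward ground speed (m/s) due to Earth's rotation,
    ignoring the small variation of radius with latitude."""
    circumference = 2 * math.pi * EQUATORIAL_RADIUS_M
    return (circumference / SIDEREAL_DAY_S) * math.cos(math.radians(latitude_deg))

for site, lat in [("Guiana Space Centre", 5.2),
                  ("Kennedy Space Center", 28.5),
                  ("Baikonur Cosmodrome", 45.9)]:
    print(f"{site}: ~{eastward_surface_speed(lat):.0f} m/s of free eastward velocity")
```

Launching from near the Equator maximizes this free velocity, which is what the Sea Launch platform described next exploited.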
The former Sea Launch service used the converted self-propelled oil drilling platform Ocean Odyssey to transport Zenit 3SL rockets horizontally to the Equator, and then to erect and launch them from a floating launch platform into geostationary transfer orbits. Service structure A service structure is a steel framework or tower that is built on a launch pad to facilitate assembly and servicing. An umbilical tower also usually includes an elevator which allows maintenance and crew access. Immediately before ignition of the rocket's motors, all connections between the tower and the craft are severed, and the bridges over which these connections pass often quickly swing away to prevent damage to the structure or vehicle. Flame deflector systems A flame deflector, flame diverter or flame trench is a structure or device designed to redirect or disperse the flame, heat, and exhaust gases produced by rocket engines or other propulsion systems. The amount of thrust generated by a rocket launch, along with the sound it produces during liftoff, can damage the launchpad and service structure, as well as the launch vehicle. The primary goal of the diverter is to prevent the flame from causing damage to equipment, infrastructure, or the surrounding environment. Flame diverters can be found at rocket launch sites and test stands where large volumes of exhaust gases are expelled during engine testing or vehicle launch. Sound suppression systems Sites for launching large rockets are often equipped with a sound suppression system to absorb or deflect acoustic energy generated during a rocket launch. As engine exhaust gasses exceed the speed of sound, they collide with the ambient air and shockwaves are created, with noise levels approaching 200 db. This energy can be reflected by the launch platform and pad surfaces, and could potentially cause damage to the launch vehicle, payload, and crew. For instance, the maximum admissible overall sound power level (OASPL) for payload integrity is approximately 145 db. Sound is dissipated by huge volumes of water distributed across the launch pad and launch platform during liftoff. Water-based acoustic suppression systems are common on launch pads. They aid in reducing acoustic energy by injecting large quantities of water below the launch pad into the exhaust plume and in the area above the pad. Flame deflectors or flame trenches are designed to channel rocket exhaust away from the launch pad but also redirect acoustic energy away. Hydrogen burn-off systems In rockets using liquid hydrogen as their source of propellant, hydrogen burn-off systems (HBOI), also known as radially outward firing igniters (ROFI), can be utilized to prevent the build up of free gaseous hydrogen (GH2) in the aft engine area of the vehicle prior to engine start. Too much excess hydrogen in the aft during engine start can result in an overpressure blast wave that could damage the launch vehicle and surrounding pad structures. Validating engine performance and system readiness The SpaceX launch sequence includes a hold-down feature of the launch pad that allows full engine ignition and systems check before liftoff. After the first-stage engine starts, the launcher is held down and not released for flight until all propulsion and vehicle systems are confirmed to be operating normally. Similar hold-down systems have been used on launch vehicles such as Saturn V and Space Shuttle. An automatic safe shut-down and unloading of propellant occur if any abnormal conditions are detected. 
Prior to the launch date, SpaceX sometimes also completes a test cycle culminating in a three-and-a-half-second static firing of the first-stage engines.
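The hold-down-and-verify sequence described above can be thought of as a simple commit check: ignite, confirm every monitored parameter is nominal while the vehicle is still restrained, and either release or shut down safely. The sketch below is a hypothetical illustration of that logic only; the parameter names and limits are invented and do not represent any operator's actual launch software.

```python
# Hypothetical launch-commit check for a held-down vehicle (illustrative only).
NOMINAL_LIMITS = {
    "chamber_pressure_bar": (95.0, 110.0),     # invented example limits
    "turbopump_speed_rpm":  (33_000, 37_000),
    "thrust_fraction":      (0.98, 1.02),
}

def hold_down_decision(telemetry):
    """Return ('RELEASE', []) if every reading is within limits,
    otherwise ('ABORT_AND_SAFE', list_of_violations)."""
    violations = []
    for name, (low, high) in NOMINAL_LIMITS.items():
        value = telemetry.get(name)
        if value is None or not (low <= value <= high):
            violations.append(name)
    if violations:
        return "ABORT_AND_SAFE", violations   # shut down and offload propellant
    return "RELEASE", []

print(hold_down_decision({"chamber_pressure_bar": 104.2,
                          "turbopump_speed_rpm": 35_500,
                          "thrust_fraction": 1.00}))
```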
Technology
Basics_6
null
316612
https://en.wikipedia.org/wiki/Spring%20%28hydrology%29
Spring (hydrology)
A spring is a natural exit point at which groundwater emerges from an aquifer and flows across the ground surface as surface water. It is a component of the hydrosphere, as well as a part of the water cycle. Springs have long been important for humans as a source of fresh water, especially in arid regions which have relatively little annual rainfall. Springs are driven out onto the surface by various natural forces, such as gravity and hydrostatic pressure. A spring produced by the emergence of geothermally heated groundwater is known as a hot spring. The yield of spring water varies widely, from a volumetric flow rate of nearly zero to very large flows for the biggest springs. Formation Springs are formed when groundwater flows onto the surface. This typically happens when the water table rises above the surface level, or where the terrain drops away sharply. Springs may also be formed as a result of karst topography, aquifers, or volcanic activity. Springs have also been observed on the ocean floor, spewing warmer, low-salinity water directly into the ocean. Springs formed as a result of karst topography create karst springs, in which groundwater travels through a network of cracks and fissures—openings ranging from intergranular spaces to large caves—before emerging at the spring. The forcing of the spring to the surface can be the result of a confined aquifer in which the recharge area of the spring water table rests at a higher elevation than that of the outlet. Spring water forced to the surface by such an elevated source is artesian. This is possible even if the outlet is in the form of a cave. In this case the cave acts like a hose, carrying groundwater from the higher-elevation recharge area out through the lower-elevation opening. Non-artesian springs may simply flow from a higher elevation through the earth to a lower elevation and exit in the form of a spring, using the ground like a drainage pipe. Still other springs are the result of pressure from an underground source in the earth, in the form of volcanic or magma activity. The result can be water at elevated temperature and pressure, i.e. hot springs and geysers. The action of the groundwater continually dissolves permeable bedrock such as limestone and dolomite, creating vast cave systems. Types Depression springs occur along a depression, such as the bottom of alluvial valleys, basins, or valleys made of highly permeable materials. Contact springs, which occur along the side of a hill or mountain, are created when the groundwater is underlain by an impermeable layer of rock or soil known as an aquiclude or aquifuge. Fracture or joint springs occur when groundwater running along an impermeable layer of rock meets a crack (fracture) or joint in the rock. Tubular springs occur when groundwater flows from circular fissures such as those found in caverns (solution tubular springs) or in lava tube caves (lava tubular springs). Artesian springs typically occur at the lowest point in a given area. An artesian spring is created when the pressure of the groundwater becomes greater than atmospheric pressure; in this case the water is pushed straight up out of the ground. Wonky holes are submarine freshwater exit points from old river channels that have become sediment-filled and covered by coral and sediment. Karst springs occur as outflows of groundwater that are part of a karst hydrological system. Thermal springs are heated by geothermal activity; they have a water temperature significantly higher than the mean air temperature of the surrounding area.
Geysers are a type of hot spring in which steam is created underground by trapped superheated groundwater, resulting in recurring eruptions of hot water and steam. Carbonated springs, such as Soda Springs Geyser, are springs that emit naturally occurring carbonated water, due to carbon dioxide dissolved in the water. They are sometimes called boiling springs or bubbling springs. Gushette springs pour from cliff faces. Helocrene springs are diffuse springs that sustain marshlands with groundwater. Flow Spring discharge, or resurgence, is determined by the spring's recharge basin. Factors that affect the recharge include the size of the area in which groundwater is captured, the amount of precipitation, the size of capture points, and the size of the spring outlet. Water may leak into the underground system from many sources including permeable earth, sinkholes, and losing streams. In some cases entire creeks seemingly disappear as the water sinks into the ground via the stream bed. Grand Gulf State Park in Missouri is an example of an entire creek vanishing into the groundwater system. The water emerges some distance away, forming some of the discharge of Mammoth Spring in Arkansas. Human activity may also affect a spring's discharge—withdrawal of groundwater reduces the water pressure in an aquifer, decreasing the volume of flow. Classification Springs fall into three general classifications: perennial (springs that flow constantly during the year); intermittent (temporary springs that are active after rainfall, or during certain seasonal changes); and periodic (as in geysers that vent and erupt at regular or irregular intervals). Springs are often classified by the volume of the water they discharge. The largest springs are called "first-magnitude", defined as springs that discharge water at a rate of at least 2,800 liters (about 100 cubic feet) of water per second. Some locations contain many first-magnitude springs, such as Florida, where at least 27 springs of that size are known; the Missouri and Arkansas Ozarks, which contain 10 known first-magnitude springs; and the Thousand Springs area along the Snake River in Idaho, with 11 more. Springs are graded on a scale of magnitudes according to this discharge, from first magnitude (the largest) down to eighth magnitude (the smallest). Water content Minerals become dissolved in the water as it moves through the underground rocks. This mineral content is measured as total dissolved solids (TDS). It may give the water flavor and even carbon dioxide bubbles, depending on the nature of the geology through which it passes. This is why spring water is often bottled and sold as mineral water, although the term is often the subject of deceptive advertising. Mineral water contains no less than 250 parts per million (ppm) of TDS. Springs that contain significant amounts of minerals are sometimes called 'mineral springs'. (Springs without such mineral content, meanwhile, are sometimes distinguished as 'sweet springs'.) Springs that contain large amounts of dissolved sodium salts, mostly sodium carbonate, are called 'soda springs'. Many resorts have developed around mineral springs and are known as spa towns. Mineral springs are alleged to have healing properties. Soaking in them is said to result in the absorption of the minerals from the water. Some springs contain arsenic levels that exceed the 10 ppb World Health Organization (WHO) standard for drinking water. Where such springs feed rivers they can also raise the arsenic levels in the rivers above WHO limits. Water from springs is usually clear. However, some springs may be colored by the minerals that are dissolved in the water.
For instance, water heavy with iron or tannins will have an orange color. In parts of the United States a stream carrying the outflow of a spring to a nearby primary stream may be called a spring branch, spring creek, or run. Groundwater tends to maintain the relatively stable long-term average temperature of its aquifer, so flow from a spring may be cooler than other sources on a summer day but remain unfrozen in the winter. The cool water of a spring and its branch may harbor species such as certain trout that are otherwise ill-suited to a warmer local climate. Types of mineral springs Sulfur springs contain a high level of dissolved sulfur or hydrogen sulfide in the water. Historically they have been used to alleviate the symptoms of arthritis and other inflammatory diseases. Borax springs Gypsum springs Saline springs Iron springs (chalybeate springs) Radium springs (or radioactive springs) have a detectable level of radiation produced by the natural radioactive decay process. Uses Springs have been used for a variety of human needs, including drinking water, domestic water supply, irrigation, mills, navigation, and electricity generation. Modern uses include recreational activities such as fishing, swimming, and floating; therapy; water for livestock; fish hatcheries; and supply for bottled mineral water or bottled spring water. Springs have taken on a kind of mythic quality, in that some people falsely believe that springs are always healthy sources of drinking water. They may or may not be. A comprehensive water quality test is needed to know how a spring may appropriately be used, whether for a mineral bath or for drinking water. Springs that are managed as spas will already have such a test. Drinking water Springs are often used as sources for bottled water. When purchasing bottled water labeled as spring water, one can often find the water test for that spring on the website of the company selling it. Irrigation Springs have been used as sources of water for gravity-fed irrigation of crops. Indigenous people of the American Southwest built spring-fed acequias that directed water to fields through canals. The Spanish missionaries later used this method. Sacred springs A sacred spring, or holy well, is a small body of water emerging from underground and revered in some religious context: Christian, pagan, or other. The lore and mythology of ancient Greece was replete with sacred and storied springs—notably, the Corycian, Pierian and Castalian springs. In medieval Europe, pagan sacred sites frequently became Christianized as holy wells. The term "holy well" is commonly employed to refer to any water source of limited size (i.e., not a lake or river, but including pools and natural springs and seeps) which has some significance in local folklore. This can take the form of a particular name, an associated legend, the attribution of healing qualities to the water through the numinous presence of its guardian spirit or of a Christian saint, or a ceremony or ritual centered on the well site. Christian legends often recount how the action of a saint caused a spring's water to flow—a familiar theme, especially in the hagiography of Celtic saints. Thermal springs The geothermally heated groundwater that flows from thermal springs is warmer than human body temperature, and it can be considerably hotter. Those springs with water cooler than body temperature but warmer than air temperature are sometimes referred to as warm springs.
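The temperature distinctions just described (thermal, hot, and warm springs) can be captured in a small classification function. The sketch below follows the definitions as stated; the numeric margin used for "significantly higher than the mean air temperature" is an assumed value.

```python
BODY_TEMPERATURE_C = 37.0

def classify_spring(water_temp_c, mean_air_temp_c, thermal_margin_c=5.0):
    """Classify a spring by temperature, following the definitions in the text:
    hot if warmer than the human body, warm if between mean air temperature and
    body temperature, otherwise non-thermal. thermal_margin_c is an assumed
    operationalisation of 'significantly higher than the mean air temperature'."""
    if water_temp_c > BODY_TEMPERATURE_C:
        return "hot spring"
    if water_temp_c > mean_air_temp_c + thermal_margin_c:
        return "warm (thermal) spring"
    if water_temp_c > mean_air_temp_c:
        return "warm spring"
    return "non-thermal spring"

print(classify_spring(52.0, 12.0))   # hot spring
print(classify_spring(24.0, 12.0))   # warm (thermal) spring
print(classify_spring(13.0, 12.0))   # warm spring
print(classify_spring(9.0, 12.0))    # non-thermal spring
```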
Bathing and balneotherapy Hot springs or geothermal springs have been used for balneotherapy, bathing, and relaxation for thousands of years. Because of the folklore surrounding hot springs and their claimed medical value, some have become tourist destinations and locations of physical rehabilitation centers. Geothermal energy Hot springs have been used as a heat source for thousands of years. In the 20th century, they became a renewable resource of geothermal energy for heating homes and buildings. The city of Beppu, Japan contains 2,217 hot spring well heads that provide the city with hot water. Hot springs have also been used as a source of sustainable energy for greenhouse cultivation and the growing of crops and flowers. Terminology Spring boil Spring pool Spring runs also called rheocrene springs Spring vent Cultural representations Springs have been represented in culture through art, mythology, and folklore throughout history. The Fountain of Youth is a mythical spring which was said to restore youth to anyone who drank from it. It has been claimed that the fountain is located in St. Augustine, Florida, and was discovered by Juan Ponce de León in 1513. However, it has not demonstrated the power to restore youth, and most historians dispute the veracity of Ponce de León's discovery. Pythia, also known as the Oracle at Delphi was the high priestess of the Temple of Apollo. She delivered prophesies in a frenzied state of divine possession that were "induced by vapours rising from a chasm in the rock". It is believed that the vapors were emitted from the Kerna spring at Delphi. The Greek myth of Narcissus describes a young man who fell in love with his reflection in the still pool of a spring. Narcissus gazed into "an unmuddied spring, silvery from its glittering waters, which neither shepherds nor she-goats grazing on the mountain nor any other cattle had touched, which neither bird nor beast nor branch fallen from a tree had disturbed." (Ovid) The early 20th century American photographer, James Reuel Smith created a comprehensive series of photographs documenting the historical springs of New York City before they were capped by the city after the advent of the municipal water system. Smith later photographed springs in Europe leading to his book, Springs and Wells in Greek and Roman Literature, Their Legends and Locations (1922). The 19th century Japanese artists Utagawa Hiroshige and Utagawa Toyokuni III created a series of wood-block prints, Two Artists Tour the Seven Hot Springs (Sōhitsu shichitō meguri) in 1854. The Chinese city Jinan is known as "a City of Springs" (Chinese: 泉城), because of its 72 spring attractions and numerous micro spring holes spread over the city centre.
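As a quick arithmetic check on the first-magnitude threshold given under Classification, litres per second convert to cubic feet per second as shown below; the snippet is purely illustrative.

```python
CUBIC_FEET_PER_CUBIC_METRE = 1 / 0.028316846592   # definition of the cubic foot

def litres_per_second_to_cfs(litres_per_second):
    """Convert a discharge from L/s to cubic feet per second."""
    cubic_metres_per_second = litres_per_second / 1000.0
    return cubic_metres_per_second * CUBIC_FEET_PER_CUBIC_METRE

# The first-magnitude threshold of 2,800 L/s is roughly 100 ft^3/s.
print(f"{litres_per_second_to_cfs(2800):.1f} ft^3/s")   # ~98.9
```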
Physical sciences
Hydrology
null
316617
https://en.wikipedia.org/wiki/Spring%20%28device%29
Spring (device)
A spring is a device consisting of an elastic but largely rigid material (typically metal) bent or molded into a form (especially a coil) that returns to its original shape after being compressed or extended. Springs store energy when they are compressed or stretched. In everyday use, the term most often refers to coil springs, but there are many different spring designs. Modern springs are typically manufactured from spring steel. An example of a non-metallic spring is the bow, made traditionally of flexible yew wood, which when drawn stores energy to propel an arrow. When a conventional spring, without stiffness variability features, is compressed or stretched from its resting position, it exerts an opposing force approximately proportional to its change in length (this approximation breaks down for larger deflections). The rate or spring constant of a spring is the change in the force it exerts, divided by the change in deflection of the spring. That is, it is the gradient of the force versus deflection curve. An extension or compression spring's rate is expressed in units of force divided by distance, for example N/m or lbf/in. A torsion spring is a spring that works by twisting; when it is twisted about its axis by an angle, it produces a torque proportional to the angle. A torsion spring's rate is in units of torque divided by angle, such as N·m/rad or ft·lbf/degree. The inverse of spring rate is compliance; that is, if a spring has a rate of 10 N/mm, it has a compliance of 0.1 mm/N. The stiffness (or rate) of springs in parallel is additive, as is the compliance of springs in series. Springs are made from a variety of elastic materials, the most common being spring steel. Small springs can be wound from pre-hardened stock, while larger ones are made from annealed steel and hardened after manufacture. Some non-ferrous metals are also used, including phosphor bronze and titanium for parts requiring corrosion resistance, and low-resistance beryllium copper for springs carrying electric current. History Simple non-coiled springs have been used throughout human history, e.g. the bow (and arrow). In the Bronze Age more sophisticated spring devices were used, as shown by the spread of tweezers in many cultures. Ctesibius of Alexandria developed a method for making springs out of an alloy of bronze with an increased proportion of tin, hardened by hammering after it was cast. Coiled springs appeared early in the 15th century, in door locks. The first spring-powered clocks appeared in that century and evolved into the first large watches by the 16th century. In 1676 British physicist Robert Hooke postulated Hooke's law, which states that the force a spring exerts is proportional to its extension. On March 8, 1850, John Evans, founder of John Evans' Sons, Incorporated, opened his business in New Haven, Connecticut, manufacturing flat springs for carriages and other vehicles, as well as the machinery to manufacture the springs. Evans was a Welsh blacksmith and springmaker who had emigrated to the United States in 1847. John Evans' Sons became "America's oldest springmaker" and continues to operate today. Types Classification Springs can be classified depending on how the load force is applied to them: Tension/extension spring The spring is designed to operate with a tension load, so the spring stretches as the load is applied to it. Compression spring Designed to operate with a compression load, so the spring gets shorter as the load is applied to it.
Torsion spring Unlike the above types in which the load is an axial force, the load applied to a torsion spring is a torque or twisting force, and the end of the spring rotates through an angle as the load is applied. Constant spring Supported load remains the same throughout deflection cycle Variable spring Resistance of the coil to load varies during compression Variable stiffness spring Resistance of the coil to load can be dynamically varied for example by the control system, some types of these springs also vary their length thereby providing actuation capability as well They can also be classified based on their shape: Flat spring Made of a flat spring steel. Machined spring Manufactured by machining bar stock with a lathe and/or milling operation rather than a coiling operation. Since it is machined, the spring may incorporate features in addition to the elastic element. Machined springs can be made in the typical load cases of compression/extension, torsion, etc. Serpentine spring A zig-zag of thick wire, often used in modern upholstery/furniture. Garter spring A coiled steel spring that is connected at each end to create a circular shape. Common types The most common types of spring are: Cantilever spring A flat spring fixed only at one end like a cantilever, while the free-hanging end takes the load. Coil spring Also known as a helical spring. A spring (made by winding a wire around a cylinder) is of two types: Tension or extension springs are designed to become longer under load. Their turns (loops) are normally touching in the unloaded position, and they have a hook, eye or some other means of attachment at each end. Compression springs are designed to become shorter when loaded. Their turns (loops) are not touching in the unloaded position, and they need no attachment points. Hollow tubing springs can be either extension springs or compression springs. Hollow tubing is filled with oil and the means of changing hydrostatic pressure inside the tubing such as a membrane or miniature piston etc. to harden or relax the spring, much like it happens with water pressure inside a garden hose. Alternatively tubing's cross-section is chosen of a shape that it changes its area when tubing is subjected to torsional deformation: change of the cross-section area translates into change of tubing's inside volume and the flow of oil in/out of the spring that can be controlled by valve thereby controlling stiffness. There are many other designs of springs of hollow tubing which can change stiffness with any desired frequency, change stiffness by a multiple or move like a linear actuator in addition to its spring qualities. Arc spring A pre-curved or arc-shaped helical compression spring, which is able to transmit a torque around an axis. Volute spring A compression coil spring in the form of a cone so that under compression the coils are not forced against each other, thus permitting longer travel. Balance spring Also known as a hairspring. A delicate spiral spring used in watches, galvanometers, and places where electricity must be carried to partially rotating devices such as steering wheels without hindering the rotation. Leaf spring A flat spring used in vehicle suspensions, electrical switches, and bows. V-spring Used in antique firearm mechanisms such as the wheellock, flintlock and percussion cap locks. Also door-lock spring, as used in antique door latch mechanisms. 
Other types Other types include: Belleville washer A disc-shaped spring commonly used to apply tension to a bolt (and also in the initiation mechanism of pressure-activated landmines) Constant-force spring A tightly rolled ribbon that exerts a nearly constant force as it is unrolled Gas spring A volume of compressed gas. Ideal spring An idealised perfect spring with no weight, mass, damping losses, or limits, a concept used in physics. The force an ideal spring would exert is exactly proportional to its extension or compression. Mainspring A spiral ribbon-shaped spring used as a power store of clockwork mechanisms: watches, clocks, music boxes, windup toys, and mechanically powered flashlights Negator spring A thin metal band slightly concave in cross-section. When coiled it adopts a flat cross-section but when unrolled it returns to its former curve, thus producing a constant force throughout the displacement and negating any tendency to re-wind. The most common application is the retracting steel tape rule. Progressive rate coil springs A coil spring with a variable rate, usually achieved by having unequal distance between turns so that as the spring is compressed one or more coils rests against its neighbour. Rubber band A tension spring in which energy is stored by stretching the material. Spring washer Used to apply a constant tensile force along the axis of a fastener. Torsion spring Any spring designed to be twisted rather than compressed or extended. Used in torsion-bar vehicle suspension systems. Wave spring Various types of spring made compact by using waves to give a spring effect. Physics Hooke's law An ideal spring acts in accordance with Hooke's law, which states that the force with which the spring pushes back is linearly proportional to the distance from its equilibrium length: F = −kx, where x is the displacement vector – the distance from its equilibrium length; F is the resulting force vector – the magnitude and direction of the restoring force the spring exerts; and k is the rate, spring constant or force constant of the spring, a constant that depends on the spring's material and construction. The negative sign indicates that the force the spring exerts is in the opposite direction from its displacement. Most real springs approximately follow Hooke's law if not stretched or compressed beyond their elastic limit. Coil springs and other common springs typically obey Hooke's law. There are useful springs that don't: springs based on beam bending can for example produce forces that vary nonlinearly with displacement. If made with constant pitch (the spacing between turns), conical springs have a variable rate. However, a conical spring can be made to have a constant rate by creating the spring with a variable pitch. A larger pitch in the larger-diameter coils and a smaller pitch in the smaller-diameter coils forces the spring to collapse or extend all the coils at the same rate when deformed. Simple harmonic motion Since force is equal to mass, m, times acceleration, a, the force equation for a spring obeying Hooke's law looks like F = ma = −kx. The mass of the spring itself is small in comparison to the mass of the attached mass and is ignored. Since acceleration is simply the second derivative of x with respect to time, this becomes m(d²x/dt²) = −kx, a second-order linear differential equation for the displacement x as a function of time. Rearranging gives d²x/dt² + (k/m)x = 0, the solution of which is the sum of a sine and a cosine, x(t) = A sin(ωt) + B cos(ωt) with ω = √(k/m); A and B are arbitrary constants that may be found by considering the initial displacement and velocity of the mass.
With zero initial position and some positive initial velocity (B = 0 and A > 0), the motion is a simple sine curve, x(t) = A sin(ωt). Energy dynamics In simple harmonic motion of a spring-mass system, energy fluctuates between kinetic energy and potential energy, but the total energy of the system remains the same. A spring that obeys Hooke's law with spring constant k has a total system energy E of E = ½kA². Here, A is the amplitude of the wave-like motion that is produced by the oscillating behavior of the spring. The potential energy U of such a system can be determined through the spring constant k and its displacement x: U = ½kx². The kinetic energy K of an object in simple harmonic motion can be found using the mass of the attached object m and the velocity at which the object oscillates, v: K = ½mv². Since there is no energy loss in such a system, energy is always conserved and thus ½kA² = ½kx² + ½mv². Frequency & period The angular frequency ω of an object in simple harmonic motion, given in radians per second, is found using the spring constant k and the mass of the oscillating object m: ω = √(k/m). The period T, the amount of time for the spring-mass system to complete one full cycle of such harmonic motion, is given by T = 2π/ω = 2π√(m/k). The frequency f, the number of oscillations per unit time, of something in simple harmonic motion is found by taking the inverse of the period: f = 1/T = (1/2π)√(k/m). Theory In classical physics, a spring can be seen as a device that stores potential energy, specifically elastic potential energy, by straining the bonds between the atoms of an elastic material. Hooke's law of elasticity states that the extension of an elastic rod (its distended length minus its relaxed length) is linearly proportional to its tension, the force used to stretch it. Similarly, the contraction (negative extension) is proportional to the compression (negative tension). This law actually holds only approximately, and only when the deformation (extension or contraction) is small compared to the rod's overall length. For deformations beyond the elastic limit, atomic bonds get broken or rearranged, and a spring may snap, buckle, or permanently deform. Many materials have no clearly defined elastic limit, and Hooke's law cannot be meaningfully applied to these materials. Moreover, for superelastic materials, the linear relationship between force and displacement is appropriate only in the low-strain region. Hooke's law is a mathematical consequence of the fact that the potential energy of the rod is a minimum when it has its relaxed length. Any smooth function of one variable approximates a quadratic function when examined near enough to its minimum point, as can be seen by examining the Taylor series. Therefore, the force – which is the derivative of energy with respect to displacement – approximates a linear function. Force of fully compressed spring The maximum force a fully compressed spring can exert depends on the following quantities: E – Young's modulus; d – spring wire diameter; L – free length of spring; n – number of active windings; ν – Poisson ratio; and D – spring outer diameter. Zero-length springs Zero-length spring is a term for a specially designed coil spring that would exert zero force if it had zero length. That is, in a line graph of the spring's force versus its length, the line passes through the origin. A real coil spring will not contract to zero length because at some point the coils touch each other. "Length" here is defined as the distance between the axes of the pivots at each end of the spring, regardless of any inelastic portion in-between.
Zero-length springs are made by manufacturing a coil spring with built-in tension (A twist is introduced into the wire as it is coiled during manufacture; this works because a coiled spring unwinds as it stretches), so if it could contract further, the equilibrium point of the spring, the point at which its restoring force is zero, occurs at a length of zero. In practice, the manufacture of springs is typically not accurate enough to produce springs with tension consistent enough for applications that use zero length springs, so they are made by combining a negative length spring, made with even more tension so its equilibrium point would be at a negative length, with a piece of inelastic material of the proper length so the zero force point would occur at zero length. A zero-length spring can be attached to a mass on a hinged boom in such a way that the force on the mass is almost exactly balanced by the vertical component of the force from the spring, whatever the position of the boom. This creates a horizontal pendulum with very long oscillation period. Long-period pendulums enable seismometers to sense the slowest waves from earthquakes. The LaCoste suspension with zero-length springs is also used in gravimeters because it is very sensitive to changes in gravity. Springs for closing doors are often made to have roughly zero length, so that they exert force even when the door is almost closed, so they can hold it closed firmly. Uses Airsoft gun Aerospace Retractable ballpoint pens Buckling spring keyboards Clockwork clocks, watches, and other things Firearms Forward or aft spring, a method of mooring a vessel to a shore fixture Gravimeters Industrial Equipment Jewelry: Clasp mechanisms Most folding knives, and switchblades Lock mechanisms: Key-recognition and for coordinating the movements of various parts of the lock. Spring mattresses Medical Devices Pogo Stick Pop-open devices: CD players, tape recorders, toasters, etc. Spring reverb Toys; the Slinky toy is just a spring Trampoline Upholstery coil springs Vehicle suspension, Leaf springs
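The rate, compliance, and simple-harmonic-motion relations given under Physics lend themselves to a short worked example. The sketch below combines two springs in series and in parallel and finds the natural frequency of a mass on the resulting spring; the numerical values are arbitrary illustrations.

```python
import math

def parallel_rate(*rates):
    """Stiffnesses (N/m) of springs in parallel add directly."""
    return sum(rates)

def series_rate(*rates):
    """Compliances (m/N) of springs in series add, so rates combine reciprocally."""
    return 1.0 / sum(1.0 / k for k in rates)

def natural_frequency(k, m):
    """Angular frequency (rad/s), period (s) and frequency (Hz)
    of a mass m (kg) on a spring of rate k (N/m)."""
    omega = math.sqrt(k / m)
    period = 2 * math.pi / omega
    return omega, period, 1.0 / period

k1, k2 = 10_000.0, 2_500.0                           # example spring rates, N/m
print("parallel:", parallel_rate(k1, k2), "N/m")     # 12500 N/m
print("series:  ", series_rate(k1, k2), "N/m")       # 2000 N/m
print("f for 1 kg on the series pair: %.2f Hz"
      % natural_frequency(series_rate(k1, k2), 1.0)[2])
```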
Technology
Components_2
null
316993
https://en.wikipedia.org/wiki/Emulsion%20polymerization
Emulsion polymerization
In polymer chemistry, emulsion polymerization is a type of radical polymerization that usually starts with an emulsion incorporating water, monomers, and surfactants. The most common type of emulsion polymerization is an oil-in-water emulsion, in which droplets of monomer (the oil) are emulsified (with surfactants) in a continuous phase of water. Water-soluble polymers, such as certain polyvinyl alcohols or hydroxyethyl celluloses, can also be used to act as emulsifiers/stabilizers. The name "emulsion polymerization" is a misnomer that arises from a historical misconception. Rather than occurring in emulsion droplets, polymerization takes place in the latex/colloid particles that form spontaneously in the first few minutes of the process. These latex particles are typically 100 nm in size, and are made of many individual polymer chains. The particles are prevented from coagulating with each other because each particle is surrounded by the surfactant ('soap'); the charge on the surfactant repels other particles electrostatically. When water-soluble polymers are used as stabilizers instead of soap, the repulsion between particles arises because these water-soluble polymers form a 'hairy layer' around a particle that repels other particles, because pushing particles together would involve compressing these chains. Emulsion polymerization is used to make several commercially important polymers. Many of these polymers are used as solid materials and must be isolated from the aqueous dispersion after polymerization. In other cases the dispersion itself is the end product. A dispersion resulting from emulsion polymerization is often called a latex (especially if derived from a synthetic rubber) or an emulsion (even though "emulsion" strictly speaking refers to a dispersion of an immiscible liquid in water). These emulsions find applications in adhesives, paints, paper coating and textile coatings. They are often preferred over solvent-based products in these applications due to the absence of volatile organic compounds (VOCs) in them. Advantages of emulsion polymerization include: High molecular weight polymers can be made at fast polymerization rates. By contrast, in bulk and solution free-radical polymerization, there is a tradeoff between molecular weight and polymerization rate. The continuous water phase is an excellent conductor of heat, enabling fast polymerization rates without loss of temperature control. Since polymer molecules are contained within the particles, the viscosity of the reaction medium remains close to that of water and is not dependent on molecular weight. The final product can be used as is and does not generally need to be altered or processed. Disadvantages of emulsion polymerization include: Surfactants and other polymerization adjuvants remain in the polymer or are difficult to remove For dry (isolated) polymers, water removal is an energy-intensive process Emulsion polymerizations are usually designed to operate at high conversion of monomer to polymer. This can result in significant chain transfer to polymer. Can not be used for condensation, ionic, or Ziegler-Natta polymerization, although some exceptions are known. History The early history of emulsion polymerization is connected with the field of synthetic rubber. The idea of using an emulsified monomer in an aqueous suspension or emulsion was first conceived at Bayer, before World War I, in an attempt to prepare synthetic rubber. 
The impetus for this development was the observation that natural rubber is produced at room temperature in dispersed particles stabilized by colloidal polymers, so the industrial chemists tried to duplicate these conditions. The Bayer workers used naturally occurring polymers such as gelatin, ovalbumin, and starch to stabilize their dispersion. By today's definition these were not true emulsion polymerizations, but suspension polymerizations. The first "true" emulsion polymerizations, which used a surfactant and polymerization initiator, were conducted in the 1920s to polymerize isoprene. Over the next twenty years, through the end of World War II, efficient methods for production of several forms of synthetic rubber by emulsion polymerization were developed, but relatively few publications in the scientific literature appeared: most disclosures were confined to patents or were kept secret due to wartime needs. After World War II, emulsion polymerization was extended to production of plastics. Manufacture of dispersions to be used in latex paints and other products sold as liquid dispersions commenced. Ever more sophisticated processes were devised to prepare products that replaced solvent-based materials. Ironically, synthetic rubber manufacture turned more and more away from emulsion polymerization as new organometallic catalysts were developed that allowed much better control of polymer architecture. Theoretical overview The first successful theory to explain the distinct features of emulsion polymerization was developed by Smith and Ewart, and Harkins in the 1940s, based on their studies of polystyrene. Smith and Ewart arbitrarily divided the mechanism of emulsion polymerization into three stages or intervals. Subsequently, it has been recognized that not all monomers or systems undergo these particular three intervals. Nevertheless, the Smith-Ewart description is a useful starting point to analyze emulsion polymerizations. The Smith-Ewart-Harkins theory for the mechanism of free-radical emulsion polymerization is summarized by the following steps: A monomer is dispersed or emulsified in a solution of surfactant and water, forming relatively large droplets in water. Excess surfactant creates micelles in the water. Small amounts of monomer diffuse through the water to the micelle. A water-soluble initiator is introduced into the water phase where it reacts with monomer in the micelles. (This characteristic differs from suspension polymerization where an oil-soluble initiator dissolves in the monomer, followed by polymer formation in the monomer droplets themselves.) This is considered Smith-Ewart interval 1. The total surface area of the micelles is much greater than the total surface area of the fewer, larger monomer droplets; therefore the initiator typically reacts in the micelle and not the monomer droplet. Monomer in the micelle quickly polymerizes and the growing chain terminates. At this point the monomer-swollen micelle has turned into a polymer particle. When both monomer droplets and polymer particles are present in the system, this is considered Smith-Ewart interval 2. More monomer from the droplets diffuses to the growing particle, where more initiators will eventually react. Eventually the free monomer droplets disappear and all remaining monomer is located in the particles. This is considered Smith-Ewart interval 3. 
Depending on the particular product and monomer, additional monomer and initiator may be continuously and slowly added to maintain their levels in the system as the particles grow. The final product is a dispersion of polymer particles in water. It can also be known as a polymer colloid, a latex, or commonly and inaccurately as an 'emulsion'. Smith-Ewart theory does not predict the specific polymerization behavior when the monomer is somewhat water-soluble, like methyl methacrylate or vinyl acetate. In these cases homogeneous nucleation occurs: particles are formed without the presence or need for surfactant micelles. High molecular weights are developed in emulsion polymerization because the concentration of growing chains within each polymer particle is very low. In conventional radical polymerization, the concentration of growing chains is higher, which leads to termination by coupling and ultimately results in shorter polymer chains. The original Smith-Ewart-Harkins mechanism required each particle to contain either zero or one growing chain. Improved understanding of emulsion polymerization has relaxed that criterion to include more than one growing chain per particle; however, the number of growing chains per particle is still considered to be very low. Because of the complex chemistry that occurs during an emulsion polymerization, including polymerization kinetics and particle formation kinetics, quantitative understanding of the mechanism of emulsion polymerization has required extensive computer simulation. Robert Gilbert has summarized a recent theory. More detailed treatment of Smith-Ewart theory Interval 1 When radicals generated in the aqueous phase encounter the monomer within the micelle, they initiate polymerization. The conversion of monomer to polymer within the micelle lowers the monomer concentration and generates a monomer concentration gradient. Consequently, monomer from the monomer droplets and from uninitiated micelles begins to diffuse to the growing, polymer-containing particles. Those micelles that did not encounter a radical during the earlier stage of conversion begin to disappear, losing their monomer and surfactant to the growing particles. The theory predicts that after the end of this interval, the number of growing polymer particles remains constant. Interval 2 This interval is also known as the steady-state reaction stage. Throughout this stage, monomer droplets act as reservoirs supplying monomer to the growing polymer particles by diffusion through the water. While at steady state, the average number of free radicals per particle can be divided into three cases. When the number of free radicals per particle is less than 1/2, this is called Case 1. When the number of free radicals per particle equals 1/2, this is called Case 2. And when there is greater than 1/2 radical per particle, this is called Case 3. Smith-Ewart theory predicts that Case 2 is the predominant scenario, for the following reasons. A monomer-swollen particle that has been struck by a radical contains one growing chain. Because only one radical (at the end of the growing polymer chain) is present, the chain cannot terminate, and it will continue to grow until a second initiator radical enters the particle. As the rate of termination is much greater than the rate of propagation, and because the polymer particles are extremely small, chain growth is terminated immediately after the entrance of the second initiator radical.
The particle then remains dormant until a third initiator radical enters, initiating the growth of a second chain. Consequently, the polymer particles in this case have either zero radicals (the dormant state) or one radical (the growing state), with only a very short period of two radicals (the terminating state), which can be ignored for the free-radicals-per-particle calculation. At any given time, a micelle contains either one growing chain or no growing chains (assumed to be equally probable). Thus, on average, there is around 1/2 radical per particle, leading to the Case 2 scenario. The polymerization rate in this stage can be expressed by R_p = k_p [M]_p [P·], where k_p is the homogeneous propagation rate constant for polymerization within the particles and [M]_p is the equilibrium monomer concentration within a particle. [P·] represents the overall concentration of polymerizing radicals in the reaction. For Case 2, where the average number of free radicals per micelle is 1/2, [P·] can be calculated from the following expression: [P·] = N/(2 N_A), where N is the number concentration of micelles (number of micelles per unit volume) and N_A is the Avogadro constant (6.022 × 10^23 mol^−1). Consequently, the rate of polymerization is then R_p = k_p [M]_p N/(2 N_A). Interval 3 Separate monomer droplets disappear as the reaction continues. Polymer particles in this stage may be sufficiently large that they contain more than one radical per particle. Process considerations Emulsion polymerizations have been used in batch, semi-batch, and continuous processes. The choice depends on the properties desired in the final polymer or dispersion and on the economics of the product. Modern process control schemes have enabled the development of complex reaction processes, with ingredients such as initiator, monomer, and surfactant added at the beginning, during, or at the end of the reaction. Early styrene-butadiene rubber (SBR) recipes are examples of true batch processes: all ingredients added at the same time to the reactor. Semi-batch recipes usually include a programmed feed of monomer to the reactor. This enables a starve-fed reaction to ensure a good distribution of monomers into the polymer backbone chain. Continuous processes have been used to manufacture various grades of synthetic rubber. Some polymerizations are stopped before all the monomer has reacted. This minimizes chain transfer to polymer. In such cases the monomer must be removed or stripped from the dispersion. Colloidal stability is a factor in the design of an emulsion polymerization process. For dry or isolated products, the polymer dispersion must be isolated, or converted into solid form. This can be accomplished by simple heating of the dispersion until all water evaporates. More commonly, the dispersion is destabilized (sometimes called "broken") by addition of a multivalent cation. Alternatively, acidification will destabilize a dispersion with a carboxylic acid surfactant. These techniques may be employed in combination with application of shear to increase the rate of destabilization. After isolation of the polymer, it is usually washed, dried, and packaged. By contrast, products sold as a dispersion are designed with a high degree of colloidal stability. Colloidal properties such as particle size, particle size distribution, and viscosity are of critical importance to the performance of these dispersions. Living polymerization processes that are carried out via emulsion polymerization, such as iodine-transfer polymerization and RAFT, have been developed. Controlled coagulation techniques can enable better control of the particle size and distribution. 
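As a rough numerical illustration of the Interval 2 (Case 2) rate expression above, the short Python sketch below evaluates R_p = k_p [M]_p N/(2 N_A) for a hypothetical recipe; the values of k_p, [M]_p and the particle number concentration are assumptions chosen only to show the order of magnitude, not data for any particular monomer system.

```python
# Illustrative Smith-Ewart Case 2 rate estimate; all input values are assumed, not measured.
AVOGADRO = 6.022e23  # mol^-1

def case2_rate(kp, monomer_conc, particle_conc):
    """Return R_p = kp * [M]_p * (n * N / N_A) with n = 1/2 (Case 2).

    kp            -- propagation rate constant within the particles, L mol^-1 s^-1
    monomer_conc  -- equilibrium monomer concentration in the particles, mol L^-1
    particle_conc -- number of polymer particles per litre of water
    """
    radical_conc = 0.5 * particle_conc / AVOGADRO  # mol of growing radicals per litre
    return kp * monomer_conc * radical_conc

# Hypothetical order-of-magnitude inputs:
rate = case2_rate(kp=250.0, monomer_conc=5.0, particle_conc=1e17)
print(f"Estimated R_p = {rate:.2e} mol L^-1 s^-1")
```

Because the rate scales with the particle number rather than only with the initiator concentration, anything that raises the number of particles (such as a higher surfactant level producing more micelles) raises the rate, consistent with the surfactant discussion later in the article.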
Components Monomers Typical monomers are those that undergo radical polymerization, are liquid or gaseous at reaction conditions, and are poorly soluble in water. Solid monomers are difficult to disperse in water. If monomer solubility is too high, particle formation may not occur and the reaction kinetics reduce to that of solution polymerization. Ethene and other simple olefins must be polymerized at very high pressures (up to 800 bar). Comonomers Copolymerization is common in emulsion polymerization. The same rules and comonomer pairs that exist in radical polymerization operate in emulsion polymerization. However, copolymerization kinetics are greatly influenced by the aqueous solubility of the monomers. Monomers with greater aqueous solubility will tend to partition in the aqueous phase and not in the polymer particle. They will not get incorporated as readily in the polymer chain as monomers with lower aqueous solubility. This can be avoided by a programmed addition of monomer using a semi-batch process. Ethene and other alkenes are used as minor comonomers in emulsion polymerization, notably in vinyl acetate copolymers. Small amounts of acrylic acid or other ionizable monomers are sometimes used to confer colloidal stability to a dispersion. Initiators Both thermal and redox generation of free radicals have been used in emulsion polymerization. Persulfate salts are commonly used in both initiation modes. The persulfate ion readily breaks up into sulfate radical ions above about 50 °C, providing a thermal source of initiation. Redox initiation takes place when an oxidant such as a persulfate salt, a reducing agent such as glucose, Rongalite, or sulfite, and a redox catalyst such as an iron compound are all included in the polymerization recipe. Redox recipes are not limited by temperature and are used for polymerizations that take place below 50 °C. Although organic peroxides and hydroperoxides are used in emulsion polymerization, initiators are usually water soluble and partition into the water phase. This enables the particle generation behavior described in the theory section. In redox initiation, either the oxidant or the reducing agent (or both) must be water-soluble, but one component can be water-insoluble. Surfactants Selection of the correct surfactant is critical to the development of any emulsion polymerization process. The surfactant must enable a fast rate of polymerization, minimize coagulum or fouling in the reactor and other process equipment, prevent an unacceptably high viscosity during polymerization (which leads to poor heat transfer), and maintain or even improve properties in the final product such as tensile strength, gloss, and water absorption. Anionic, nonionic, and cationic surfactants have been used, although anionic surfactants are by far most prevalent. Surfactants with a low critical micelle concentration (CMC) are favored; the polymerization rate shows a dramatic increase when the surfactant level is above the CMC, and minimization of the surfactant is preferred for economic reasons and the (usually) adverse effect of surfactant on the physical properties of the resulting polymer. Mixtures of surfactants are often used, including mixtures of anionic with nonionic surfactants. Mixtures of cationic and anionic surfactants form insoluble salts and are not useful. Examples of surfactants commonly used in emulsion polymerization include fatty acids, sodium lauryl sulfate, and alpha-olefin sulfonate. 
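Since the Comonomers discussion above notes that the usual radical copolymerization rules still apply, only modified by how the monomers partition between the phases, a minimal sketch of the standard Mayo–Lewis composition equation may help; the reactivity ratios below are illustrative assumptions, not values for any specific comonomer pair.

```python
def copolymer_composition(f1, r1, r2):
    """Mayo-Lewis equation: instantaneous mole fraction F1 of monomer 1 in the
    copolymer, given feed mole fraction f1 and reactivity ratios r1, r2."""
    f2 = 1.0 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2.0 * f1 * f2 + r2 * f2**2)

# Hypothetical reactivity ratios; f1 stands in for the feed actually seen inside the particles.
for f1 in (0.2, 0.5, 0.8):
    print(f1, round(copolymer_composition(f1, r1=0.5, r2=1.5), 3))
```

A comonomer that prefers the aqueous phase effectively sees a lower feed fraction inside the particles, which is one way to picture why it is incorporated less readily, as described above.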
Non-surfactant stabilizers Some grades of polyvinyl alcohol and other water-soluble polymers can promote emulsion polymerization even though they do not typically form micelles and do not act as surfactants (for example, they do not lower surface tension). It is believed that growing polymer chains graft onto these water-soluble polymers, which stabilize the resulting particles. Dispersions prepared with such stabilizers typically exhibit excellent colloidal stability (for example, dry powders may be mixed into the dispersion without causing coagulation). However, they often result in products that are very water sensitive due to the presence of the water-soluble polymer. Other ingredients Other ingredients found in emulsion polymerization include chain transfer agents, buffering agents, and inert salts. Preservatives are added to products sold as liquid dispersions to retard bacterial growth. These are usually added after polymerization, however. Applications Polymers produced by emulsion polymerization can roughly be divided into three categories. Synthetic rubber Some grades of styrene-butadiene (SBR) Some grades of Polybutadiene Polychloroprene (Neoprene) Nitrile rubber Acrylic rubber Fluoroelastomer (FKM) Plastics Some grades of PVC Some grades of polystyrene Some grades of PMMA Acrylonitrile-butadiene-styrene terpolymer (ABS) Polyvinylidene fluoride Polyvinyl fluoride PTFE Dispersions (i.e. polymers sold as aqueous dispersions) polyvinyl acetate polyvinyl acetate copolymers polyacrylates Styrene-butadiene VAE (vinyl acetate – ethylene copolymers)
Physical sciences
Organic reactions
Chemistry
317311
https://en.wikipedia.org/wiki/El%20Ni%C3%B1o%E2%80%93Southern%20Oscillation
El Niño–Southern Oscillation
El Niño–Southern Oscillation (ENSO) is a global climate phenomenon that emerges from variations in winds and sea surface temperatures over the tropical Pacific Ocean. Those variations have an irregular pattern but do have some semblance of cycles. The occurrence of ENSO is not predictable. It affects the climate of much of the tropics and subtropics, and has links (teleconnections) to higher-latitude regions of the world. The warming phase of the sea surface temperature is known as "El Niño" and the cooling phase as "La Niña". The Southern Oscillation is the accompanying atmospheric oscillation, which is coupled with the sea temperature change. El Niño is associated with higher than normal air sea level pressure over Indonesia, Australia and across the Indian Ocean to the Atlantic. La Niña has roughly the reverse pattern: high pressure over the central and eastern Pacific and lower pressure through much of the rest of the tropics and subtropics. The two phenomena last a year or so each and typically occur every two to seven years with varying intensity, with neutral periods of lower intensity interspersed. El Niño events can be more intense but La Niña events may repeat and last longer. A key mechanism of ENSO is the Bjerknes feedback (named after Jacob Bjerknes in 1969) in which the atmospheric changes alter the sea temperatures that in turn alter the atmospheric winds in a positive feedback. Weaker easterly trade winds result in a surge of warm surface waters to the east and reduced ocean upwelling on the equator. In turn, this leads to warmer sea surface temperatures (called El Niño), a weaker Walker circulation (an east-west overturning circulation in the atmosphere) and even weaker trade winds. Ultimately the warm waters in the western tropical Pacific are depleted enough so that conditions return to normal. The exact mechanisms that cause the oscillation are unclear and are being studied. Each country that monitors the ENSO has a different threshold for what constitutes an El Niño or La Niña event, which is tailored to their specific interests. El Niño and La Niña affect the global climate and disrupt normal weather patterns, which as a result can lead to intense storms in some places and droughts in others. El Niño events cause short-term (approximately 1 year in length) spikes in global average surface temperature while La Niña events cause short term surface cooling. Therefore, the relative frequency of El Niño compared to La Niña events can affect global temperature trends on timescales of around ten years. The countries most affected by ENSO are developing countries that are bordering the Pacific Ocean and are dependent on agriculture and fishing. In climate change science, ENSO is known as one of the internal climate variability phenomena. Future trends in ENSO due to climate change are uncertain, although climate change exacerbates the effects of droughts and floods. The IPCC Sixth Assessment Report summarized the scientific knowledge in 2021 for the future of ENSO as follows: "In the long term, it is very likely that the precipitation variance related to El Niño–Southern Oscillation will increase". The scientific consensus is also that "it is very likely that rainfall variability related to changes in the strength and spatial extent of ENSO teleconnections will lead to significant changes at regional scale". Definition and terminology The El Niño–Southern Oscillation is a single climate phenomenon that periodically fluctuates between three phases: Neutral, La Niña or El Niño. 
La Niña and El Niño are opposite phases in the oscillation which are deemed to occur when specific ocean and atmospheric conditions are reached or exceeded. An early recorded mention of the term "El Niño" ("The Boy" in Spanish) to refer to climate occurred in 1892, when Captain Camilo Carrillo told the geographical society congress in Lima that Peruvian sailors named the warm south-flowing current "El Niño" because it was most noticeable around Christmas. Although pre-Columbian societies were certainly aware of the phenomenon, the indigenous names for it have been lost to history. The capitalized term El Niño refers to the Christ Child, Jesus, because periodic warming in the Pacific near South America is usually noticed around Christmas. Originally, the term El Niño applied to an annual weak warm ocean current that ran southwards along the coast of Peru and Ecuador at about Christmas time. However, over time the term has evolved and now refers to the warm phase of the El Niño–Southern Oscillation (ENSO), which corresponds to the negative phase of the Southern Oscillation. The original phrase, El Niño de Navidad, arose centuries ago, when Peruvian fishermen named the weather phenomenon after the newborn Christ. La Niña ("The Girl" in Spanish) is the colder counterpart of El Niño, as part of the broader ENSO climate pattern. In the past, it was also called an anti-El Niño and El Viejo, meaning "the old man." A negative phase exists during El Niño episodes, when atmospheric pressure over Indonesia and the west Pacific is abnormally high and pressure over the east Pacific is abnormally low; a positive phase occurs during La Niña episodes, when the opposite holds, with pressure over Indonesia and the west Pacific low and pressure over the east Pacific high. Fundamentals On average, the temperature of the ocean surface in the tropical East Pacific is roughly cooler than in the tropical West Pacific. The sea surface temperature (SST) of the West Pacific northeast of Australia averages around . SSTs in the East Pacific off the western coast of South America are closer to . Strong trade winds near the equator push water away from the East Pacific and towards the West Pacific. This water is slowly warmed by the Sun as it moves west along the equator. The ocean surface near Indonesia is typically around higher than near Peru because of the buildup of water in the West Pacific. The thermocline, or the transitional zone between the warmer waters near the ocean surface and the cooler waters of the deep ocean, is pushed downwards in the West Pacific due to this water accumulation. The total weight of a column of ocean water is almost the same in the western and eastern Pacific. Because the warmer waters of the upper ocean are slightly less dense than the cooler deep ocean, the thicker layer of warmer water in the western Pacific means the thermocline there must be deeper. The difference in weight must be enough to drive any deep water return flow. Consequently, the thermocline is tilted across the tropical Pacific, rising from an average depth of about in the West Pacific to a depth of about in the East Pacific. Cooler deep ocean water takes the place of the outgoing surface waters in the East Pacific, rising to the ocean surface in a process called upwelling. Along the western coast of South America, water near the ocean surface is pushed westward due to the combination of the trade winds and the Coriolis effect. This process is known as Ekman transport. Colder water from deeper in the ocean rises along the continental margin to replace the near-surface water. 
This process cools the East Pacific because the thermocline is closer to the ocean surface, leaving relatively little separation between the deeper cold water and the ocean surface. Additionally, the northward-flowing Humboldt Current carries colder water from the Southern Ocean to the tropics in the East Pacific. The combination of the Humboldt Current and upwelling maintains an area of cooler ocean waters off the coast of Peru. The West Pacific lacks a cold ocean current and has less upwelling as the trade winds are usually weaker than in the East Pacific, allowing the West Pacific to reach warmer temperatures. These warmer waters provide energy for the upward movement of air. As a result, the warm West Pacific has on average more cloudiness and rainfall than the cool East Pacific. ENSO describes a quasi-periodic change of both oceanic and atmospheric conditions over the tropical Pacific Ocean. These changes affect weather patterns across much of the Earth. The tropical Pacific is said to be in one of three states of ENSO (also called "phases") depending on the atmospheric and oceanic conditions. When the tropical Pacific roughly reflects the average conditions, the state of ENSO is said to be in the neutral phase. However, the tropical Pacific experiences occasional shifts away from these average conditions. If trade winds are weaker than average, the effect of upwelling in the East Pacific and the flow of warmer ocean surface waters towards the West Pacific lessen. This results in a cooler West Pacific and a warmer East Pacific, leading to a shift of cloudiness and rainfall towards the East Pacific. This situation is called El Niño. The opposite occurs if trade winds are stronger than average, leading to a warmer West Pacific and a cooler East Pacific. This situation is called La Niña and is associated with increased cloudiness and rainfall over the West Pacific. Bjerknes feedback The close relationship between ocean temperatures and the strength of the trade winds was first identified by Jacob Bjerknes in 1969. Bjerknes also hypothesized that ENSO was a positive feedback system where the associated changes in one component of the climate system (the ocean or atmosphere) tend to reinforce changes in the other. For example, during El Niño, the reduced contrast in ocean temperatures across the Pacific results in weaker trade winds, further reinforcing the El Niño state. This process is known as Bjerknes feedback. Although these associated changes in the ocean and atmosphere often occur together, the state of the atmosphere may resemble a different ENSO phase than the state of the ocean or vice versa. Because their states are closely linked, the variations of ENSO may arise from changes in both the ocean and atmosphere and not necessarily from an initial change of exclusively one or the other. Conceptual models explaining how ENSO operates generally accept the Bjerknes feedback hypothesis. However, ENSO would perpetually remain in one phase if Bjerknes feedback were the only process occurring. Several theories have been proposed to explain how ENSO can change from one state to the next, despite the positive feedback. These explanations broadly fall under two categories. In one view, the Bjerknes feedback naturally triggers negative feedbacks that end and reverse the abnormal state of the tropical Pacific. This perspective implies that the processes that lead to El Niño and La Niña also eventually bring about their end, making ENSO a self-sustaining process. 
Other theories view the state of ENSO as being changed by irregular and external phenomena such as the Madden–Julian oscillation, tropical instability waves, and westerly wind bursts. Walker circulation The three phases of ENSO relate to the Walker circulation, which was named after Gilbert Walker, who discovered the Southern Oscillation during the early twentieth century. The Walker circulation is an east-west overturning circulation in the vicinity of the equator in the Pacific. Rising air is associated with high sea temperatures, convection and rainfall, while the downward branch occurs over cooler sea surface temperatures in the east. During El Niño, as the sea surface temperatures change, so does the Walker circulation. Warming in the eastern tropical Pacific weakens or reverses the downward branch, while cooler conditions in the west lead to less rain and downward air, so the Walker circulation first weakens and may reverse. Southern Oscillation The Southern Oscillation is the atmospheric component of ENSO. This component is an oscillation in surface air pressure between the tropical eastern and the western Pacific Ocean waters. The strength of the Southern Oscillation is measured by the Southern Oscillation Index (SOI). The SOI is computed from fluctuations in the surface air pressure difference between Tahiti (in the Pacific) and Darwin, Australia (on the Indian Ocean). El Niño episodes have negative SOI, meaning there is lower pressure over Tahiti and higher pressure in Darwin. La Niña episodes, on the other hand, have positive SOI, meaning there is higher pressure in Tahiti and lower in Darwin. Low atmospheric pressure tends to occur over warm water and high pressure occurs over cold water, in part because of deep convection over the warm water. El Niño episodes are defined as sustained warming of the central and eastern tropical Pacific Ocean, thus resulting in a decrease in the strength of the Pacific trade winds, and a reduction in rainfall over eastern and northern Australia. La Niña episodes are defined as sustained cooling of the central and eastern tropical Pacific Ocean, thus resulting in an increase in the strength of the Pacific trade winds, and the opposite effects in Australia when compared to El Niño. Although the Southern Oscillation Index has a long station record going back to the 1800s, its reliability is limited due to the latitudes of both Darwin and Tahiti being well south of the Equator, so that the surface air pressure at both locations is less directly related to ENSO. To overcome this effect, a new index was created, named the Equatorial Southern Oscillation Index (EQSOI). To generate this index, two new regions, centered on the Equator, were defined. The western region is located over Indonesia and the eastern one over the equatorial Pacific, close to the South American coast. However, data on EQSOI goes back only to 1949. Sea surface height (SSH) in the equatorial Pacific changes up or down by several centimeters with the ENSO state: El Niño causes a positive SSH anomaly (raised sea level) because of thermal expansion while La Niña causes a negative SSH anomaly (lowered sea level) via contraction. Three phases of sea surface temperature The El Niño–Southern Oscillation is a single climate phenomenon that quasi-periodically fluctuates between three phases: Neutral, La Niña or El Niño. La Niña and El Niño are opposite phases which require certain changes to take place in both the ocean and the atmosphere before an event is declared. 
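As a concrete illustration of the Southern Oscillation Index described above, the sketch below standardizes the monthly Tahiti-minus-Darwin pressure difference against a climatology; the factor of 10 follows the formulation used by the Australian Bureau of Meteorology, and the input numbers are invented for illustration only.

```python
from statistics import mean, pstdev

def soi(tahiti_mslp, darwin_mslp, climatology_diffs, scale=10.0):
    """Standardized Tahiti-minus-Darwin pressure index for one month.

    tahiti_mslp, darwin_mslp -- monthly mean sea-level pressures in hPa
    climatology_diffs -- historical Tahiti-minus-Darwin differences for the
                         same calendar month (assumed to be available)
    Negative values go with El Niño (lower pressure at Tahiti, higher at Darwin);
    positive values go with La Niña, as described in the text.
    """
    diff = tahiti_mslp - darwin_mslp
    return scale * (diff - mean(climatology_diffs)) / pstdev(climatology_diffs)

# Invented example numbers, for illustration only:
history = [1.8, 2.3, 0.9, 1.5, 2.8, 1.1, 2.0, 1.6]  # hPa
print(round(soi(1012.1, 1011.6, history), 1))        # negative here, i.e. El Niño-like
```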
The cool phase of ENSO is La Niña, with SST in the eastern Pacific below average, and air pressure high in the eastern Pacific and low in the western Pacific. The ENSO cycle, including both El Niño and La Niña, causes global changes in temperature and rainfall. Neutral phase If the temperature variation from climatology is within 0.5 °C (0.9 °F), ENSO conditions are described as neutral. Neutral conditions are the transition between warm and cold phases of ENSO. Sea surface temperatures (by definition), tropical precipitation, and wind patterns are near average conditions during this phase. Close to half of all years are within neutral periods. During the neutral ENSO phase, other climate anomalies/patterns such as the sign of the North Atlantic Oscillation or the Pacific–North American teleconnection pattern exert more influence. El Niño phase El Niño conditions are established when the Walker circulation weakens or reverses and the Hadley circulation strengthens, leading to the development of a band of warm ocean water in the central and east-central equatorial Pacific (approximately between the International Date Line and 120°W), including the area off the west coast of South America, as upwelling of cold water occurs less or not at all offshore. This warming causes a shift in the atmospheric circulation, leading to higher air pressure in the western Pacific and lower in the eastern Pacific, with rainfall reducing over Indonesia, India and northern Australia, while rainfall and tropical cyclone formation increases over the tropical Pacific Ocean. The low-level surface trade winds, which normally blow from east to west along the equator, either weaken or start blowing from the other direction. El Niño phases are known to happen at irregular intervals of two to seven years, and lasts nine months to two years. The average period length is five years. When this warming occurs for seven to nine months, it is classified as El Niño "conditions"; when its duration is longer, it is classified as an El Niño "episode". It is thought that there have been at least 30 El Niño events between 1900 and 2024, with the 1982–83, 1997–98 and 2014–16 events among the strongest on record. Since 2000, El Niño events have been observed in 2002–03, 2004–05, 2006–07, 2009–10, 2014–16, 2018–19, and 2023–24. Major ENSO events were recorded in the years 1790–93, 1828, 1876–78, 1891, 1925–26, 1972–73, 1982–83, 1997–98, 2014–16, and 2023–24. During strong El Niño episodes, a secondary peak in sea surface temperature across the far eastern equatorial Pacific Ocean sometimes follows the initial peak. La Niña phase An especially strong Walker circulation causes La Niña, which is considered to be the cold oceanic and positive atmospheric phase of the broader El Niño–Southern Oscillation (ENSO) weather phenomenon, as well as the opposite of weather pattern, where sea surface temperature across the eastern equatorial part of the central Pacific Ocean will be lower than normal by 3–5 °C (5.4–9 °F). The phenomenon occurs as strong winds blow warm water at the ocean's surface away from South America, across the Pacific Ocean towards Indonesia. As this warm water moves west, cold water from the deep sea rises to the surface near South America. The movement of so much heat across a quarter of the planet, and particularly in the form of temperature at the ocean surface, can have a significant effect on weather across the entire planet. 
Tropical instability waves visible on sea surface temperature maps, showing a tongue of colder water, are often present during neutral or La Niña conditions. La Niña is a complex weather pattern that occurs every few years, often persisting for longer than five months. El Niño and La Niña can be indicators of weather changes across the globe. Atlantic and Pacific hurricanes can have different characteristics due to lower or higher wind shear and cooler or warmer sea surface temperatures. Timelines of La Niña episodes between 1900 and 2023 have been compiled, though each forecast agency has different criteria for what constitutes a La Niña event, tailored to its specific interests. La Niña events have been observed for hundreds of years, occurred on a regular basis during the early parts of both the 17th and 19th centuries, and have continued to recur since the start of the 20th century. Transitional phases Transitional phases at the onset or departure of El Niño or La Niña can also be important influences on global weather by affecting teleconnections. Significant episodes, known as Trans-Niño, are measured by the Trans-Niño index (TNI). Examples of affected short-term climate in North America include precipitation in the Northwest US and intense tornado activity in the contiguous US. Variations ENSO Modoki The first ENSO pattern to be recognised, called Eastern Pacific (EP) ENSO to distinguish it from others, involves temperature anomalies in the eastern Pacific. However, in the 1990s and 2000s, variations of ENSO conditions were observed, in which the usual place of the temperature anomaly (Niño 1 and 2) is not affected, but an anomaly also arises in the central Pacific (Niño 3.4). The phenomenon is called Central Pacific (CP) ENSO, "dateline" ENSO (because the anomaly arises near the dateline), or ENSO "Modoki" (Modoki is Japanese for "similar, but different"). There are variations of ENSO additional to the EP and CP types, and some scientists argue that ENSO exists as a continuum, often with hybrid types. The effects of the CP ENSO are different from those of the EP ENSO. El Niño Modoki is associated with more frequent hurricane landfalls in the Atlantic. La Niña Modoki leads to a rainfall increase over northwestern Australia and the northern Murray–Darling basin, rather than over the eastern portion of the country as in a conventional EP La Niña. Also, La Niña Modoki increases the frequency of cyclonic storms over the Bay of Bengal, but decreases the occurrence of severe storms in the Indian Ocean overall. The first recorded El Niño that originated in the central Pacific and moved toward the east was in 1986. Recent Central Pacific El Niños happened in 1986–87, 1991–92, 1994–95, 2002–03, 2004–05 and 2009–10. Furthermore, there were "Modoki" events in 1957–59, 1963–64, 1965–66, 1968–70, 1977–78 and 1979–80. Some sources say that the El Niños of 2006–07 and 2014–16 were also Central Pacific El Niños. Recent years when La Niña Modoki events occurred include 1973–1974, 1975–1976, 1983–1984, 1988–1989, 1998–1999, 2000–2001, 2008–2009, 2010–2011, and 2016–2017. The recent discovery of ENSO Modoki has some scientists believing it to be linked to global warming. However, comprehensive satellite data go back only to 1979. More research must be done to find the correlation and study past El Niño episodes. More generally, there is no scientific consensus on how, or whether, climate change might affect ENSO. 
There is also a scientific debate on the very existence of this "new" ENSO. A number of studies dispute the reality of this statistical distinction or its increasing occurrence, or both, arguing either that the reliable record is too short to detect such a distinction, that no distinction or trend is found using other statistical approaches, or that other types should be distinguished, such as standard and extreme ENSO. Likewise, reflecting the asymmetric nature of the warm and cold phases of ENSO, some studies could not identify similar variations for La Niña in either observations or climate models, but other sources could identify La Niña variations with cooler waters in the central Pacific and average or warmer water temperatures in both the eastern and western Pacific, with eastern Pacific Ocean currents running in the opposite direction compared to the currents in traditional La Niñas. ENSO Costero Coined by the Peruvian ENFEN committee, ENSO Costero, or ENSO Oriental, is the name given to the phenomenon in which the sea-surface temperature anomalies are mostly focused on the South American coastline, especially off Peru and Ecuador. Studies point to many factors that can lead to its occurrence, sometimes accompanying, or being accompanied by, a larger EP ENSO occurrence, or even displaying conditions opposite to those observed in the other Niño regions when accompanied by Modoki variations. ENSO Costero events usually present more localized effects, with warm phases leading to increased rainfall over the coast of Ecuador, northern Peru and the Amazon rainforest, and increased temperatures over the northern Chilean coast, and cold phases leading to droughts on the Peruvian coast, and increased rainfall and decreased temperatures in its mountainous and jungle regions. Because they don't influence the global climate as much as the other types, these events show weaker correlations with other significant ENSO features, neither always being triggered by Kelvin waves nor always being accompanied by proportional Southern Oscillation responses. According to the Coastal Niño Index (ICEN), strong El Niño Costero events include 1957, 1982–83, 1997–98 and 2015–16, and La Niña Costera ones include 1950, 1954–56, 1962, 1964, 1966, 1967–68, 1970–71, 1975–76 and 2013. Monitoring and declaration of conditions Currently, each country has a different threshold for what constitutes an El Niño event, which is tailored to its specific interests, for example: In the United States, an El Niño is declared when the Climate Prediction Center, which monitors the sea surface temperatures in the Niño 3.4 region and the tropical Pacific, forecasts that the sea surface temperature will be above average for the next several seasons. The Niño 3.4 region stretches from the 120th to the 170th meridian west, astride the equator, five degrees of latitude on either side. It lies to the southeast of Hawaii. The most recent three-month average for the area is computed, and if the region is more than 0.5 °C (0.9 °F) above (or below) normal for that period, then an El Niño (or La Niña) is considered in progress. The Australian Bureau of Meteorology looks at the trade winds, Southern Oscillation Index, weather models and sea surface temperatures in the Niño 3 and 3.4 regions before declaring an ENSO event. 
The Japan Meteorological Agency declares that an ENSO event has started when the five-month running average of the sea surface temperature deviation for the Niño 3 region remains above its threshold for six consecutive months or longer. The Peruvian government declares that an ENSO Costero is under way if the sea surface temperature deviation in the Niño 1+2 region equals or exceeds its threshold for at least three months. The United Kingdom's Met Office also uses a several-month period to determine ENSO state. When this warming or cooling occurs for only seven to nine months, it is classified as El Niño/La Niña "conditions"; when it occurs for more than that period, it is classified as El Niño/La Niña "episodes". Effects of ENSO on global climate In climate change science, ENSO is known as one of the internal climate variability phenomena. The other two main ones are the Pacific decadal oscillation and the Atlantic multidecadal oscillation. La Niña impacts the global climate and disrupts normal weather patterns, which can lead to intense storms in some places and droughts in others. El Niño events cause short-term (approximately 1 year in length) spikes in global average surface temperature, while La Niña events cause short-term cooling. Therefore, the relative frequency of El Niño compared to La Niña events can affect global temperature trends on decadal timescales. Climate change There is no sign that there are actual changes in the ENSO physical phenomenon due to climate change. Climate models do not simulate ENSO well enough to make reliable predictions. Future trends in ENSO are uncertain as different models make different predictions. It may be that the observed phenomenon of more frequent and stronger El Niño events occurs only in the initial phase of global warming, and then (for example, after the lower layers of the ocean warm as well) El Niño will become weaker. It may also be that the stabilizing and destabilizing forces influencing the phenomenon will eventually compensate for each other. The consequences of ENSO in terms of temperature anomalies, precipitation, and weather extremes around the world are clearly increasing and are associated with climate change. For example, recent scholarship (since about 2019) has found that climate change is increasing the frequency of extreme El Niño events. Previously there was no consensus on whether climate change would have any influence on the strength or duration of El Niño events, as research alternately supported El Niño events becoming stronger and weaker, longer and shorter. Over the last several decades, the number of El Niño events increased, and the number of La Niña events decreased, although observation of ENSO for much longer is needed to detect robust changes. Studies of historical data show the recent El Niño variation is most likely linked to global warming. For example, even after subtracting the possible positive influence of decadal variation on the ENSO trend, the amplitude of ENSO variability in the observed data still increases by as much as 60% over the last 50 years. A study published in 2023 by CSIRO researchers found that climate change may have doubled the likelihood of strong El Niño events and increased the likelihood of strong La Niña events ninefold. The study reported a consensus between different models and experiments. 
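Returning to the monitoring thresholds described above, the sketch below classifies a series of monthly Niño 3.4 sea surface temperature anomalies using a centred three-month mean and the ±0.5 °C threshold quoted for the US Climate Prediction Center; the anomaly values are invented, and real declarations also involve persistence requirements and forecaster judgement that are not captured here.

```python
def classify_enso(nino34_anomalies, threshold=0.5):
    """Label each centred three-month mean of Niño 3.4 SST anomalies (°C) as
    'El Niño', 'La Niña', or 'Neutral' using a +/-0.5 °C threshold."""
    labels = []
    for i in range(1, len(nino34_anomalies) - 1):
        season = sum(nino34_anomalies[i - 1:i + 2]) / 3.0
        if season >= threshold:
            labels.append("El Niño")
        elif season <= -threshold:
            labels.append("La Niña")
        else:
            labels.append("Neutral")
    return labels

# Invented monthly anomalies (°C), for illustration only:
print(classify_enso([0.1, 0.3, 0.6, 0.9, 1.1, 0.8, 0.4, -0.2, -0.6, -0.8]))
```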
The IPCC Sixth Assessment Report summarized the state of the art of research in 2021 into the future of ENSO as follows: "In the long term, it is very likely that the precipitation variance related to El Niño–Southern Oscillation will increase" and "It is very likely that rainfall variability related to changes in the strength and spatial extent of ENSO teleconnections will lead to significant changes at regional scale". and "There is medium confidence that both ENSO amplitude and the frequency of high-magnitude events since 1950 are higher than over the period from 1850 and possibly as far back as 1400". Investigations regarding tipping points The ENSO is considered to be a potential tipping element in Earth's climate. Global warming can strengthen the ENSO teleconnection and resulting extreme weather events. For example, an increase in the frequency and magnitude of El Niño events have triggered warmer than usual temperatures over the Indian Ocean, by modulating the Walker circulation. This has resulted in a rapid warming of the Indian Ocean, and consequently a weakening of the Asian Monsoon. Effects of ENSO on weather patterns El Niño affects the global climate and disrupts normal weather patterns, which can lead to intense storms in some places and droughts in others. Tropical cyclones Most tropical cyclones form on the side of the subtropical ridge closer to the equator, then move poleward past the ridge axis before recurving into the main belt of the Westerlies. Areas west of Japan and Korea tend to experience many fewer September–November tropical cyclone impacts during El Niño and neutral years. During El Niño years, the break in the subtropical ridge tends to lie near 130°E, which would favor the Japanese archipelago. Based on modeled and observed accumulated cyclone energy (ACE), El Niño years usually result in less active hurricane seasons in the Atlantic Ocean, but instead favor a shift to tropical cyclone activity in the Pacific Ocean, compared to La Niña years favoring above average hurricane development in the Atlantic and less so in the Pacific basin. Over the Atlantic Ocean, vertical wind shear is increased, which inhibits tropical cyclone genesis and intensification, by causing the westerly winds to be stronger. The atmosphere over the Atlantic Ocean can also be drier and more stable during El Niño events, which can inhibit tropical cyclone genesis and intensification. Within the Eastern Pacific basin: El Niño events contribute to decreased easterly vertical wind shear and favor above-normal hurricane activity. However, the impacts of the ENSO state in this region can vary and are strongly influenced by background climate patterns. The Western Pacific basin experiences a change in the location of where tropical cyclones form during El Niño events, with tropical cyclone formation shifting eastward, without a major change in how many develop each year. As a result of this change, Micronesia is more likely, and China less likely, to be affected by tropical cyclones. A change in the location of where tropical cyclones form also occurs within the Southern Pacific Ocean between 135°E and 120°W, with tropical cyclones more likely to occur within the Southern Pacific basin than the Australian region. As a result of this change tropical cyclones are 50% less likely to make landfall on Queensland, while the risk of a tropical cyclone is elevated for island nations like Niue, French Polynesia, Tonga, Tuvalu, and the Cook Islands. 
Remote influence on tropical Atlantic Ocean A study of climate records has shown that El Niño events in the equatorial Pacific are generally associated with a warm tropical North Atlantic in the following spring and summer. About half of El Niño events persist sufficiently into the spring months for the Western Hemisphere Warm Pool to become unusually large in summer. Occasionally, El Niño's effect on the Atlantic Walker circulation over South America strengthens the easterly trade winds in the western equatorial Atlantic region. As a result, an unusual cooling may occur in the eastern equatorial Atlantic in spring and summer following El Niño peaks in winter. Cases of El Niño-type events in both oceans simultaneously have been linked to severe famines related to the extended failure of monsoon rains. Impacts on humans and ecosystems Economic impacts When El Niño conditions last for many months, extensive ocean warming and the reduction in easterly trade winds limits upwelling of cold nutrient-rich deep water, and its economic effect on local fishing for an international market can be serious. Developing countries that depend on their own agriculture and fishing, particularly those bordering the Pacific Ocean, are usually most affected by El Niño conditions. In this phase of the Oscillation, the pool of warm water in the Pacific near South America is often at its warmest in late December. More generally, El Niño can affect commodity prices and the macroeconomy of different countries. It can constrain the supply of rain-driven agricultural commodities; reduce agricultural output, construction, and services activities; increase food prices; and may trigger social unrest in commodity-dependent poor countries that primarily rely on imported food. A University of Cambridge Working Paper shows that while Australia, Chile, Indonesia, India, Japan, New Zealand and South Africa face a short-lived fall in economic activity in response to an El Niño shock, other countries may actually benefit from an El Niño weather shock (either directly or indirectly through positive spillovers from major trading partners), for instance, Argentina, Canada, Mexico and the United States. Furthermore, most countries experience short-run inflationary pressures following an El Niño shock, while global energy and non-fuel commodity prices increase. The IMF estimates a significant El Niño can boost the GDP of the United States by about 0.5% (due largely to lower heating bills) and reduce the GDP of Indonesia by about 1.0%. Health and social impacts Extreme weather conditions related to the El Niño cycle correlate with changes in the incidence of epidemic diseases. For example, the El Niño cycle is associated with increased risks of some of the diseases transmitted by mosquitoes, such as malaria, dengue fever, and Rift Valley fever. Cycles of malaria in India, Venezuela, Brazil, and Colombia have now been linked to El Niño. Outbreaks of another mosquito-transmitted disease, Australian encephalitis (Murray Valley encephalitis—MVE), occur in temperate south-east Australia after heavy rainfall and flooding, which are associated with La Niña events. A severe outbreak of Rift Valley fever occurred after extreme rainfall in north-eastern Kenya and southern Somalia during the 1997–98 El Niño. ENSO conditions have also been related to Kawasaki disease incidence in Japan and the west coast of the United States, via the linkage to tropospheric winds across the north Pacific Ocean. ENSO may be linked to civil conflicts. 
Scientists at The Earth Institute of Columbia University, having analyzed data from 1950 to 2004, suggest ENSO may have had a role in 21% of all civil conflicts since 1950, with the risk of annual civil conflict doubling from 3% to 6% in countries affected by ENSO during El Niño years relative to La Niña years. Ecological consequences During the 1982–83, 1997–98 and 2015–16 ENSO events, large extents of tropical forest experienced a prolonged dry period that resulted in widespread fires, and drastic changes in forest structure and tree species composition in Amazonian and Bornean forests. Their impacts are not restricted to vegetation: declines in insect populations were observed after the extreme drought and severe fires during the 2015–16 El Niño. Declines in habitat-specialist and disturbance-sensitive bird species and in large frugivorous mammals were also observed in Amazonian burned forests, while temporary extirpation of more than 100 lowland butterfly species occurred at a burned forest site in Borneo. In seasonally dry tropical forests, which are more drought tolerant, researchers found that El Niño-induced drought increased seedling mortality. In a study published in October 2022, researchers who had monitored seasonally dry tropical forests in a national park in Chiang Mai, Thailand, for 7 years observed that El Niño increased seedling mortality even in these forests and may impact entire forests in the long run. Coral bleaching Following the El Niño event of 1997–98, the Pacific Marine Environmental Laboratory attributed the first large-scale coral bleaching event to the warming waters. Most critically, global mass bleaching events were recorded in 1997–98 and 2015–16, when around 75–99% losses of live coral were registered across the world. Considerable attention was also given to the collapse of Peruvian and Chilean anchovy populations that led to a severe fishery crisis following the ENSO events in 1972–73, 1982–83, 1997–98 and, more recently, in 2015–16. In particular, increased surface seawater temperatures in 1982–83 also led to the probable extinction of two hydrocoral species in Panamá, and to a massive mortality of kelp beds along 600 km of coastline in Chile, from which kelps and associated biodiversity slowly recovered in the most affected areas even after 20 years. All these findings underscore the role of ENSO events as a strong climatic force driving ecological changes around the world, particularly in tropical forests and coral reefs. Impacts by region Observations of ENSO events since 1950 show that impacts associated with such events depend on the time of year. While certain events and impacts are expected to occur, it is not certain that they will happen. The impacts that generally do occur during most El Niño events include below-average rainfall over Indonesia and northern South America, and above-average rainfall in southeastern South America, eastern equatorial Africa, and the southern United States. Africa La Niña results in wetter-than-normal conditions in southern Africa from December to February, and drier-than-normal conditions over equatorial east Africa over the same period. The effects of El Niño on rainfall in southern Africa differ between the summer and winter rainfall areas. Winter rainfall areas tend to get higher rainfall than normal and summer rainfall areas tend to get less rain. The effect on the summer rainfall areas is stronger and has led to severe drought in strong El Niño events. 
Sea surface temperatures off the west and south coasts of South Africa are affected by ENSO via changes in surface wind strength. During El Niño the south-easterly winds driving upwelling are weaker which results in warmer coastal waters than normal, while during La Niña the same winds are stronger and cause colder coastal waters. These effects on the winds are part of large scale influences on the tropical Atlantic and the South Atlantic High-pressure system, and changes to the pattern of westerly winds further south. There are other influences not known to be related to ENSO of similar importance. Some ENSO events do not lead to the expected changes. Antarctica Many ENSO linkages exist in the high southern latitudes around Antarctica. Specifically, El Niño conditions result in high-pressure anomalies over the Amundsen and Bellingshausen Seas, causing reduced sea ice and increased poleward heat fluxes in these sectors, as well as the Ross Sea. The Weddell Sea, conversely, tends to become colder with more sea ice during El Niño. The exact opposite heating and atmospheric pressure anomalies occur during La Niña. This pattern of variability is known as the Antarctic dipole mode, although the Antarctic response to ENSO forcing is not ubiquitous. Asia In Western Asia, during the region's November–April rainy season, there is increased precipitation in the El Niño phase and reduced precipitation in the La Niña phase on average. During El Niño years: As warm water spreads from the west Pacific and the Indian Ocean to the east Pacific, it takes the rain with it, causing extensive drought in the western Pacific and rainfall in the normally dry eastern Pacific. Singapore experienced the driest February in 2010 since records began in 1869, with only 6.3 mm of rain falling in the month. The years 1968 and 2005 had the next driest Februaries, when 8.4 mm of rain fell. During La Niña years, the formation of tropical cyclones, along with the subtropical ridge position, shifts westward across the western Pacific Ocean, which increases the landfall threat in China. In March 2008, La Niña caused a drop in sea surface temperatures over Southeast Asia by . It also caused heavy rains over the Philippines, Indonesia, and Malaysia. Australia Across most of the continent, El Niño and La Niña have more impact on climate variability than any other factor. There is a strong correlation between the strength of La Niña and rainfall: the greater the sea surface temperature and Southern Oscillation difference from normal, the larger the rainfall change. During El Niño events, the shift in rainfall away from the Western Pacific may mean that rainfall across Australia is reduced. Over the southern part of the continent, warmer than average temperatures can be recorded as weather systems are more mobile and fewer blocking areas of high pressure occur. The onset of the Indo-Australian Monsoon in tropical Australia is delayed by two to six weeks, which as a consequence means that rainfall is reduced over the northern tropics. The risk of a significant bushfire season in south-eastern Australia is higher following an El Niño event, especially when it is combined with a positive Indian Ocean Dipole event. Europe El Niño's effects on Europe are controversial, complex and difficult to analyze, as it is one of several factors that influence the weather over the continent and other factors can overwhelm the signal. 
North America La Niña causes mostly the opposite effects of El Niño: above-average precipitation across the northern Midwest, the northern Rockies, Northern California, and the Pacific Northwest's southern and eastern regions. Meanwhile, precipitation in the southwestern and southeastern states, as well as southern California, is below average. This also allows for the development of many stronger-than-average hurricanes in the Atlantic and fewer in the Pacific. ENSO is linked to rainfall over Puerto Rico. During an El Niño, snowfall is greater than average across the southern Rockies and Sierra Nevada mountain range, and is well-below normal across the Upper Midwest and Great Lakes states. During a La Niña, snowfall is above normal across the Pacific Northwest and western Great Lakes. In Canada, La Niña will, in general, cause a cooler, snowier winter, such as the near-record-breaking amounts of snow recorded in the La Niña winter of 2007–2008 in eastern Canada. In the spring of 2022, La Niña caused above-average precipitation and below-average temperatures in the state of Oregon. April was one of the wettest months on record, and La Niña effects, while less severe, were expected to continue into the summer. Over North America, the main temperature and precipitation impacts of El Niño generally occur in the six months between October and March. In particular, the majority of Canada generally has milder than normal winters and springs, with the exception of eastern Canada where no significant impacts occur. Within the United States, the impacts generally observed during the six-month period include wetter-than-average conditions along the Gulf Coast between Texas and Florida, while drier conditions are observed in Hawaii, the Ohio Valley, Pacific Northwest and the Rocky Mountains. Study of more recent weather events over California and the southwestern United States indicate that there is a variable relationship between El Niño and above-average precipitation, as it strongly depends on the strength of the El Niño event and other factors. Though it has been historically associated with high rainfall in California, the effects of El Niño depend more strongly on the "flavor" of El Niño than its presence or absence, as only "persistent El Niño" events lead to consistently high rainfall. To the north across Alaska, La Niña events lead to drier than normal conditions, while El Niño events do not have a correlation towards dry or wet conditions. During El Niño events, increased precipitation is expected in California due to a more southerly, zonal, storm track. During La Niña, increased precipitation is diverted into the Pacific Northwest due to a more northerly storm track. During La Niña events, the storm track shifts far enough northward to bring wetter than normal winter conditions (in the form of increased snowfall) to the Midwestern states, as well as hot and dry summers. During the El Niño portion of ENSO, increased precipitation falls along the Gulf coast and Southeast due to a stronger than normal, and more southerly, polar jet stream. Isthmus of Tehuantepec The synoptic condition for the Tehuantepecer, a violent mountain-gap wind in between the mountains of Mexico and Guatemala, is associated with high-pressure system forming in Sierra Madre of Mexico in the wake of an advancing cold front, which causes winds to accelerate through the Isthmus of Tehuantepec. 
Tehuantepecers primarily occur during the cold season months for the region in the wake of cold fronts, between October and February, with a summer maximum in July caused by the westward extension of the Azores-Bermuda high-pressure system. Wind magnitude is greater during El Niño years than during La Niña years, due to the more frequent cold frontal incursions during El Niño winters. Tehuantepec winds reach to , and on rare occasions . The wind's direction is from the north to north-northeast. It leads to a localized acceleration of the trade winds in the region, and can enhance thunderstorm activity when it interacts with the Intertropical Convergence Zone. The effects can last from a few hours to six days. Between 1942 and 1957, La Niña caused isotope changes in the plants of Baja California, which has helped scientists study its impact. Pacific islands During an El Niño event, New Zealand tends to experience stronger or more frequent westerly winds during its summer, which leads to an elevated risk of drier than normal conditions along the east coast. There is more rain than usual, though, on New Zealand's West Coast, because of the barrier effect of the North Island mountain ranges and the Southern Alps. Fiji generally experiences drier than normal conditions during an El Niño, which can lead to drought becoming established over the islands. However, the main impacts on the island nation are felt about a year after the event becomes established. Within the Samoan Islands, below average rainfall and higher than normal temperatures are recorded during El Niño events, which can lead to droughts and forest fires on the islands. Other impacts include a decrease in sea level, the possibility of coral bleaching in the marine environment, and an increased risk of a tropical cyclone affecting Samoa. In the late winter and spring during El Niño events, drier than average conditions can be expected in Hawaii. On Guam during El Niño years, dry season precipitation averages below normal, but the probability of a tropical cyclone is more than triple what is normal, so extreme short-duration rainfall events are possible. On American Samoa during El Niño events, precipitation averages about 10 percent above normal, while La Niña events are associated with precipitation averaging about 10 percent below normal. South America The effects of El Niño in South America are direct and strong, and stronger than in North America. An El Niño is associated with warm and very wet weather in the months of April–October along the coasts of northern Peru and Ecuador, causing major flooding whenever the event is strong or extreme. Because El Niño's warm pool feeds thunderstorms above, it creates increased rainfall across the east-central and eastern Pacific Ocean, including several portions of the South American west coast. The effects during the months of February, March, and April may become critical along the west coast of South America: El Niño reduces the upwelling of cold, nutrient-rich water that sustains large fish populations, which in turn sustain abundant sea birds, whose droppings support the fertilizer industry. The reduction in upwelling leads to fish kills off the shore of Peru. 
The local fishing industry along the affected coastline can suffer during long-lasting El Niño events. Peru's fisheries, previously the world's largest, collapsed during the 1970s due to overfishing that followed the reduction of the Peruvian anchoveta during the 1972 El Niño. During the 1982–83 event, jack mackerel and anchoveta populations were reduced, scallops increased in warmer water, but hake followed cooler water down the continental slope, while shrimp and sardines moved southward, so some catches decreased while others increased. Horse mackerel have increased in the region during warm events. Shifting locations and types of fish due to changing conditions create challenges for the fishing industry. Peruvian sardines have moved to Chilean areas during El Niño events. Other conditions have added further complications, such as the restrictions the government of Chile placed in 1991 on the fishing areas open to self-employed fishermen and industrial fleets. Southern Brazil and northern Argentina also experience wetter than normal conditions during El Niño years, but mainly during the spring and early summer. Central Chile receives a mild winter with large rainfall, and the Peruvian-Bolivian Altiplano is sometimes exposed to unusual winter snowfall events. Drier and hotter weather occurs in parts of the Amazon River Basin, Colombia, and Central America. During a time of La Niña, drought affects the coastal regions of Peru and Chile. From December to February, northern Brazil is wetter than normal. La Niña causes higher than normal rainfall in the central Andes, which in turn causes catastrophic flooding on the Llanos de Mojos of Beni Department, Bolivia. Such flooding is documented from 1853, 1865, 1872, 1873, 1886, 1895, 1896, 1907, 1921, 1928, 1929 and 1931. Galápagos Islands The Galápagos Islands are a chain of volcanic islands in the Eastern Pacific Ocean, nearly 600 miles west of Ecuador, South America. These islands support a wide diversity of terrestrial and marine species. The ecosystem depends on the normal trade winds, which drive the upwelling of cold, nutrient-rich waters around the islands. During an El Niño event the trade winds weaken and sometimes blow from west to east, which causes the Equatorial current to weaken, raising surface water temperatures and decreasing nutrients in the waters surrounding the Galápagos. El Niño causes a trophic cascade which impacts entire ecosystems, starting with primary producers and ending with animals critical to the ecosystem, such as sharks, penguins, and seals. The effects of El Niño can be detrimental to these populations, which often starve and die back during such years. Some animal groups display rapid evolutionary adaptations during El Niño years that mitigate the effects of El Niño conditions. History In geologic timescales Evidence is also strong for El Niño events during the early Holocene epoch 10,000 years ago. Different modes of ENSO-like events have been registered in paleoclimatic archives, showing different triggering methods, feedbacks and environmental responses to the geological, atmospheric and oceanographic characteristics of the time. These paleorecords can be used to provide a qualitative basis for conservation practices. Scientists have also found chemical signatures of warmer sea surface temperatures and increased rainfall caused by El Niño in coral specimens that are around 13,000 years old. 
In a paleoclimate study published in 2024, the authors suggest that El Niños had a strong influence on Earth's hothouse climate during the Permian-Triassic extinction event. The increasing intensity and duration of El Niño events were associated with active volcanism, which resulted in the dieback of vegetation, an increase in the amount of carbon dioxide in the atmosphere, a significant warming and disturbances in the circulation of air masses. During human history ENSO conditions have occurred at two- to seven-year intervals for at least the past 300 years, but most of them have been weak. El Niño may have led to the demise of the Moche and other pre-Columbian Peruvian cultures. A recent study suggests a strong El Niño effect between 1789 and 1793 caused poor crop yields in Europe, which in turn helped touch off the French Revolution. The extreme weather produced by El Niño in 1876–77 gave rise to the most deadly famines of the 19th century. The 1876 famine alone in northern China killed up to 13 million people. The phenomenon had long been of interest because of its effects on the guano industry and other enterprises that depend on biological productivity of the sea. It is recorded that as early as 1822, cartographer Joseph Lartigue, of the French frigate La Clorinde under Baron Mackau, noted the "counter-current" and its usefulness for traveling southward along the Peruvian coast. Charles Todd, in 1888, suggested droughts in India and Australia tended to occur at the same time; Norman Lockyer noted the same in 1904. An El Niño connection with flooding was reported in 1894 by Victor Eguiguren (1852–1919) and in 1895 by Federico Alfonso Pezet (1859–1929). In 1924, Gilbert Walker (for whom the Walker circulation is named) coined the term "Southern Oscillation". He and others (including Norwegian-American meteorologist Jacob Bjerknes) are generally credited with identifying the El Niño effect. The major 1982–83 El Niño led to an upsurge of interest from the scientific community. The period 1990–95 was unusual in that El Niños have rarely occurred in such rapid succession. An especially intense El Niño event in 1998 caused an estimated 16% of the world's reef systems to die. The event temporarily warmed air temperature by 1.5 °C, compared to the usual increase of 0.25 °C associated with El Niño events. Since then, mass coral bleaching has become common worldwide, with all regions having suffered "severe bleaching". Around 1525, when Francisco Pizarro made landfall in Peru, he noted rainfall in the deserts, the first written record of the impacts of El Niño. Related patterns Madden–Julian oscillation Link to the El Niño-Southern oscillation Pacific decadal oscillation Mechanisms Pacific Meridional Mode
Physical sciences
Climatology
null
4069796
https://en.wikipedia.org/wiki/Ball%20joint
Ball joint
In an automobile, ball joints are spherical bearings that connect the control arms to the steering knuckles, and are used on virtually every automobile made. They resemble the ball-and-socket joints found in most tetrapod animals. A ball joint consists of a bearing stud and socket enclosed in a casing; all these parts are made of steel. The bearing stud is tapered and threaded, and fits into a tapered hole in the steering knuckle. A protective encasing prevents dirt from getting into the joint assembly. Usually, this is a rubber-like boot that allows movement and expansion of lubricant. Motion-control ball joints tend to be retained with an internal spring, which helps to prevent vibration problems in the linkage. The "offset" ball joint provides a means of movement in systems where thermal expansion and contraction, shock, seismic motion, and torsional forces are present. Theory A ball joint allows free rotation in two planes at the same time while preventing translation in any direction, including within those planes. Combining two such joints with control arms enables motion in all three planes, allowing the front end of an automobile to be steered and a spring and shock (damper) suspension to make the ride comfortable. A simple kingpin suspension requires that the upper and lower control arms (wishbones) have pivot axes that are parallel and in a strict geometric relationship to the kingpin; otherwise the top and bottom trunnions, which connect the kingpin to the control arms, would be severely stressed and the bearings would suffer severe wear. In practice, many vehicles had elastomeric bearings in the horizontal pivots of the trunnions, which allowed a small amount of flexibility; however, this was insufficient to allow much adjustment of caster, and it also introduced compliance where the suspension designer may not have wanted it in the quest for optimum handling. Camber angle could generally be adjusted by moving both inner pivots of either the upper or lower control arm inwards or outwards by an exactly equal amount, but compliance of the control arm inner pivots, typically due to the use of elastomeric bearings, would again cause the trunnions to be stressed. The suspension designer's freedom was therefore limited: it was necessary to have some compliance where it might not be wanted, and very little was available where more would have been useful in absorbing the fore-and-aft impact loading from bumps. The introduction of ball joints at top and bottom allowed 3-axis articulation and so removed the constraint that the control arm axes be exactly parallel. Caster could then be freely adjusted, typically by asymmetric adjustment of the position of the control arm inner pivots, while camber was adjusted by symmetric adjustment of these same pivots. The arrangements for adjusting the toe angle are not changed by introducing ball joints in the suspension, although the steering linkage itself must use four or more pivots, also usually ball joints, and in almost every vehicle ever made some of these have been adjustable by means of a threaded end and locknut, to enable the toe to be set precisely. This ability to fine-tune a ball-jointed suspension allows manufacturers to make the automobile more stable and easier to steer, compared to the older kingpin-style suspension. 
It may also be quieter and more comfortable, because lateral and fore and aft compliance in the suspension can be introduced in controlled amounts at the control arm inner pivots without compromising the integrity of the steering axis pivots, which are now ball joints instead of a king pin and trunnions. The smoother ride may also increase tire tread life, since the ball-joint suspension allows better control of suspension geometry and so can provide better tire-to-road contact. Purpose On modern vehicles, joints are the pivot between the wheels and the suspension of an automobile. They are today almost universally used in the front suspension, having replaced the kingpin/link pin or kingpin/trunnion arrangement, but can also be found in the rear suspension of a few higher-performance autos. Ball joints play a critical role in the safe operation of an automobile's steering and suspension. Many currently manufactured automobiles worldwide use MacPherson strut suspension, which utilises one ball joint per side, between the lower end of the strut and the control arm, with the necessary small amount of articulation at the top of the strut being usually provided by an elastomeric bearing, within which is a ball bearing to allow free rotation about the steering axis. So, there are commonly only two ball joints in the suspension, however there will be at least four (track rod ends and rack ends) in the steering linkage. In non-MacPherson strut automobile suspension, the two ball joints are called the "upper ball joint" and "lower ball joint". Lower ball joints are sometimes larger and may wear out faster, because the fore and aft loads, primarily due to braking, are higher at the bottom ball joint. (Torque reaction and drag add at the bottom joint, and partly cancel at the top joint.) Also, lateral cornering loads are higher at the bottom joint. Depending on the suspension design, the vertical load from the suspension spring may be handled entirely by the top ball joint, or entirely by the bottom ball joint. The damper load, (which is low in normal conditions, zero when stationary, but in peak bump or rebound rate may be almost as large as the spring load) is usually, but not always, taken on the same ball joint as the spring load. The anti-roll bar loading is often, but not always, taken on the bottom ball joint. It may be taken by the top ball joint, or directly from the steering knuckle by ball-jointed drop links. If one of the ball joints does not carry spring load, it may be fitted with an internal anti-rattle spring to keep the ball preferentially in contact with one seat. This was the case in the BMC Mini of 1959 and its many derivatives, where the lower control arm carried no vertical loading, so the joint needed an anti-rattle spring, while the top joint, comprising identical parts, was always in compression due to spring (rubber cone) and damper loads, and so was not fitted with a spring. Other vehicles of the 1960s era, including some Vauxhalls, had lower ball joints with considerable end float, because the joint was always in tension as the spring and damper loads were applied via the lower control arm and were always non-zero. Another example is the Ford Focus, which uses MacPherson struts, and the anti-roll bar is connected directly to the strut, so the lower ball joint is only carrying fore and aft traction/braking and lateral cornering loads. 
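To illustrate why braking loads concentrate at the lower joint, as described above, here is a minimal two-dimensional statics sketch. It is not taken from any vehicle's specification: the wheel, hub and knuckle are treated as one rigid body with outboard brakes, wheel inertia is ignored, and every dimension and force below is an assumed example value.

def joint_loads(drag_n, tire_radius_m, lower_joint_height_m, upper_joint_height_m):
    """Return (lower_joint_force, upper_joint_force) in newtons.

    Positive values point forward (opposing the rearward drag at the contact
    patch). The brake torque is internal to the wheel/knuckle body, so it
    appears only as a force couple reacted by the two ball joints.
    """
    spacing = upper_joint_height_m - lower_joint_height_m
    # Share of the drag force each joint would carry if the drag acted at
    # wheel-centre height with no brake torque (simple lever rule).
    lower_share = drag_n * (upper_joint_height_m - tire_radius_m) / spacing
    upper_share = drag_n - lower_share
    # Brake torque reacted by the knuckle (outboard brakes) as a force couple.
    couple = drag_n * tire_radius_m / spacing
    return lower_share + couple, upper_share - couple

# Assumed example values: 1000 N of braking drag, 0.30 m tire radius,
# lower joint 0.15 m and upper joint 0.45 m above the ground.
lower, upper = joint_loads(drag_n=1000.0, tire_radius_m=0.30,
                           lower_joint_height_m=0.15, upper_joint_height_m=0.45)
print(f"lower joint: {lower:+.0f} N, upper joint: {upper:+.0f} N")
# -> lower joint: +1500 N, upper joint: -500 N

With these example numbers the lower joint reacts more force than the applied drag itself, while the upper joint carries a smaller load in the opposite direction: the torque-reaction couple and the drag share add at the bottom joint and partly cancel at the top joint, as stated above. 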
Front-wheel drive Unlike a kingpin, which requires an assembly in the center of the wheel in order to pivot, ball joints connect the upper and lower ends of the spindle (steering knuckle) to the control arms. This leaves the center section open to allow the use of front-wheel drive. Older kingpin designs can only be used in a rear-wheel-drive configuration. Lubrication Sealed ball joints do not require lubrication as they are "lubed for life". Formerly, most ball joints had a grease fitting (sometimes called a grease zerk) and were designed for the periodic addition of lubricant; however, almost all modern cars use sealed ball joints to minimise maintenance requirements. The lubricant used was usually of very high viscosity. It is commonly believed that standard ball joints will outlive sealed ones because eventually the seal will break, causing the joint to dry out and rust. Additionally, the act of adding new lubricant pushes out old and dry lubricant, extending the life of the joint. This was supposed to be done at intervals of 1000 to 2000 miles on many vehicles, which is incompatible with the service interval on modern cars, often 12000 miles or more, and in any case was rarely attended to by owners, resulting in severe wear and possible ball joint failure, which can result in serious accidents. For this reason, almost all ball joints on modern European or Far Eastern cars are the sealed-for-life type. New technology, especially applied to the internal bearing design, has allowed ball joints to meet these longer service intervals. The special designs incorporate sintered metal bearings, which replace the OEM sealed polymer/plastic versions, and improved dust boot seals that work much better at retaining the grease. Spherical rolling joint A spherical rolling joint is a high-precision ball joint consisting of a spherical outer and inner race separated by ball bearings. The ball bearings are housed in a spherical retainer and roll along both the inner and outer surfaces. This design allows the joint to have very low friction while maintaining a large range of motion and backlash as low as 1 μm. SRJs are often used in parallel robotics applications like a Stewart platform, where high rigidity and low backlash are essential. Most SRJs are designed with an offset housing, allowing for higher compressive loads in a smaller space. Alternatively, the joint can be assembled backwards for higher tensile load capability but less range of motion. An alternative to the SRJ is the universal joint, which consists of two revolute joints. By using spherical rolling joints instead of universal joints, designers can reduce the number of joints needed to achieve the same result. Using a spherical joint as opposed to a universal joint also eliminates the problematic possibility of a kinematic singularity. Plain spherical bearings can be used in place of SRJs at the cost of increased friction, but offer an opportunity to preload the joint further. Failure While there is no exact lifespan that can be put on sealed ball joints, they can fail as early as in modern vehicles, and much sooner in older vehicles. Signs of a failing ball joint may start with a sudden burst sound as the joint begins to come apart, followed by a clicking, popping or snapping sound when the wheel is turned, which eventually turns into a squeaking sound at the end of a stop, when the gas pedal is used, or when hitting bumps. Another symptom can be 'thud' noises coming from the front suspension when going over bumps. 
Dry ball joints have dramatically increased friction and can cause the steering to stick or be more difficult. If a ball joint fails, the results can be dangerous as the wheel's angle becomes unconstrained, causing loss of control. Because the tire will be at an unintended angle, the vehicle will come to an abrupt halt, damaging the tires. Also, during failure, debris can damage other parts of the vehicle. Other uses While in automotive parlance the term "ball joint" usually refers to the primary ball joint connections at the ends of the control arms, this type of joint is used in other parts as well, including tie rod ends. In these other applications, they are typically called tie rod ends or, when they are an inner tie rod end on a rack-and-pinion steering system, they are called inner socket assemblies. These joints are also used in a number of other non-automotive applications, from the joints of dolls to other mechanical linkages for a variety of devices, or any place where a degree of rotation in movement is desired.
Technology
Mechanisms
null
13215001
https://en.wikipedia.org/wiki/Conveyor%20system
Conveyor system
A conveyor system is a common piece of mechanical handling equipment that moves materials from one location to another. Conveyors are especially useful in applications involving the transport of heavy or bulky materials. Conveyor systems allow quick and efficient transport of a wide variety of materials, which makes them very popular in the material handling and packaging industries. They also have popular consumer applications, as they are often found in supermarkets and airports, constituting the final leg of item/bag delivery to customers. Many kinds of conveying systems are available and are used according to the various needs of different industries. There are chain conveyors (floor and overhead) as well; chain conveyor types include enclosed track, I-beam, towline, power & free, and hand-pushed trolley systems. Industries where used Conveyor systems are used widely across a range of industries due to the numerous benefits they provide. Conveyors are able to safely transport materials from one level to another, a task which, when done by human labor, would be strenuous and expensive. They can be installed almost anywhere, and are much safer than using a forklift or other machine to move materials. They can move loads of all shapes, sizes and weights. Also, many have advanced safety features that help prevent accidents. There are a variety of options available for running conveying systems, including hydraulic, mechanical and fully automated systems, which can be equipped to fit individual needs. Conveyor systems are commonly used in many industries, including the mining, automotive, agricultural, computer, electronic, food processing, aerospace, pharmaceutical, chemical, bottling and canning, print finishing and packaging industries. Although a wide variety of materials can be conveyed, some of the most common include food items such as beans and nuts, bottles and cans, automotive components, scrap metal, pills and powders, wood and furniture, and grain and animal feed. Many factors are important in the accurate selection of a conveyor system, and it is important to know beforehand how the system will be used. Helpful areas to consider include the required conveyor operations, such as transport, accumulation and sorting; the material sizes, weights and shapes; and where the loading and pickup points need to be. Care and maintenance A conveyor system is often the lifeline of a company's ability to move its product effectively and in a timely fashion. The steps that a company can take to ensure that it performs at peak capacity include regular inspections and system audits, close monitoring of motors and reducers, keeping key parts in stock, and proper training of personnel. Increasing the service life of a conveyor system involves choosing the right conveyor type and the right system design, and paying attention to regular maintenance practices. A conveyor system that is designed properly will last a long time with proper maintenance. Overhead conveyor systems have been used in numerous applications, from shop displays and assembly lines to paint finishing plants and more. Impact and wear-resistant materials used in manufacturing Conveyor systems require materials suited to the displacement of heavy loads and the wear resistance to hold up over time without seizing due to deformation. Where static control is a factor, special materials designed to either dissipate or conduct electrical charges are used. 
Examples of conveyor handling materials include UHMW, nylon, Nylatron NSM, HDPE, Tivar, Tivar ESd, and polyurethane. Growth in various industries As far as growth is concerned the material handling and conveyor system makers are getting utmost exposure in the industries like automotive, pharmaceutical, packaging and different production plants. The portable conveyors are likewise growing fast in the construction sector and by the year 2014 the purchase rate for conveyor systems in North America, Europe and Asia is likely to grow even further. The most commonly purchased types of conveyors are line-shaft roller conveyors, chain conveyors and conveyor belts at packaging factories and industrial plants where usually product finishing and monitoring are carried. Commercial and civil sectors are increasingly implementing conveyors at airports, shopping malls, etc. Types Aero-mechanical conveyor Automotive conveyor Belt conveyor Belt-driven live roller conveyor Bucket conveyor Chain conveyor Chain-driven live roller conveyor Drag conveyor Dust-proof conveyor Electric track vehicle system Flexible conveyor Gravity conveyor Gravity skate-wheel conveyor Lineshaft roller conveyor Motorized-drive roller conveyor Overhead I-beam conveyor Overland conveyor Pharmaceutical conveyor Plastic belt conveyor Pneumatic conveyor Screw or auger conveyor Spiral conveyor Tube chain conveyor Tubular Gallery conveyor Vacuum conveyor Vertical conveyor Vibrating conveyor Walking Beam Wire mesh conveyor Pneumatic Every pneumatic system uses pipes or ducts called transport lines that carry a mixture of materials and a stream of air. These materials are free flowing powdery materials like cement and fly ash. Products are moved through tubes by air pressure. Pneumatic conveyors are either carrier systems or dilute-phase systems; carrier systems simply push items from one entry point to one exit point, such as the money-exchanging pneumatic tubes used at a bank drive-through window. Dilute-phase systems use push-pull pressure to guide materials through various entry and exit points. Air compressors or blowers can be used to generate the air flow. Three systems used to generate high-velocity air stream: Suction or vacuum systems, utilizing a vacuum created in the pipeline to draw the material with the surrounding air. The system operated at a low pressure, which is practically 0.4–0.5 atm below atmosphere, and is utilized mainly in conveying light free flowing materials. Pressure-type systems, in which a positive pressure is used to push material from one point to the next. The system is ideal for conveying material from one loading point to a number of unloading points. It operates at a pressure of 6 atm and upwards. Combination systems, in which a suction system is used to convey material from a number of loading points and a pressure system is employed to deliver it to a number of unloading points. Vibrating A vibrating conveyor is a machine with a solid conveying surface which is turned up on the side to form a trough. They are used extensively in food-grade applications to convey dry bulk solids where sanitation, washdown, and low maintenance are essential. Vibrating conveyors are also suitable for harsh, very hot, dirty, or corrosive environments. They can be used to convey newly-cast metal parts which may reach upwards of . Due to the fixed nature of the conveying pans vibrating conveyors can also perform tasks such as sorting, screening, classifying and orienting parts. 
Vibrating conveyors have been built to convey material at angles exceeding 45° from horizontal using special pan shapes. Flat pans will convey most materials at a 5° incline from horizontal line. Flexible The flexible conveyor is based on a conveyor beam in aluminum or stainless steel, with low-friction slide rails guiding a plastic multi-flexing chain. Products to be conveyed travel directly on the conveyor, or on pallets/carriers. These conveyors can be worked around obstacles and keep production lines flowing. They are made at varying levels and can work in multiple environments. They are used in food packaging, case packing, and pharmaceutical industries and also in large retail stores such as Wal-Mart and Kmart. Spiral Like vertical conveyors, spiral conveyors raise and lower materials to different levels of a facility. In contrast, spiral conveyors are able to transport material loads in a continuous flow. A helical spiral or screw rotates within a sealed tube and the speed makes the product in the conveyor rotate with the screw. The tumbling effect provides a homogeneous mix of particles in the conveyor, which is essential when feeding pre-mixed ingredients and maintaining mixing integrity. Industries that require a higher output of materials - food and beverage, retail case packaging, pharmaceuticals - typically incorporate these conveyors into their systems over standard vertical conveyors due to their ability to facilitate high throughput. Most spiral conveyors also have a lower angle of incline or decline (11 degrees or less) to prevent sliding and tumbling during operation. Vertical Vertical conveyors, also commonly referred to as freight lifts and material lifts, are conveyor systems used to raise or lower materials to different levels of a facility during the handling process. Examples of these conveyors applied in the industrial assembly process include transporting materials to different floors. While similar in look to freight elevators, vertical conveyors are not equipped to transport people, only materials. Vertical lift conveyors contain two adjacent, parallel conveyors for simultaneous upward movement of adjacent surfaces of the parallel conveyors. One of the conveyors normally has spaced apart flights (pans) for transporting bulk food items. The dual conveyors rotate in opposite directions, but are operated from one gear box to ensure equal belt speed. One of the conveyors is pivotally hinged to the other conveyor for swinging the attached conveyor away from the remaining conveyor for access to the facing surfaces of the parallel conveyors. Vertical lift conveyors can be manually or automatically loaded and controlled. Almost all vertical conveyors can be systematically integrated with horizontal conveyors, since both of these conveyor systems work in tandem to create a cohesive material handling assembly line. Like spiral conveyors, vertical conveyors that use forks can transport material loads in a continuous flow. With these forks the load can be taken from one horizontal conveyor and put down on another horizontal conveyor on a different level. By adding more forks, more products can be lifted at the same time. Conventional vertical conveyors must have input and output of material loads moving in the same direction. By using forks many combinations of different input- and output- levels in different directions are possible. A vertical conveyor with forks can even be used as a vertical sorter. 
Compared to a spiral conveyor, a vertical conveyor - with or without forks - takes up less space. Vertical reciprocating conveyors (or VRCs) are another type of unit handling system. Typical applications include moving unit loads between floor levels, working with multiple accumulation conveyors, and interfacing with overhead conveyor lines. Common materials to be conveyed include pallets, sacks, custom fixtures, product racks and more. Motorized Drive Roller (MDR) Motorized Drive Roller (MDR) conveyors utilize drive rollers that have a Brushless DC (BLDC) motor embedded within a conveyor roller tube. A single motorized roller tube is then mechanically linked to a small number of non-powered rollers to create a controllable zone of powered conveyor. A linear collection of these individually powered zones is arranged end to end to form a line of contiguous conveyor. The mechanical performance (torque, speed, efficiency, etc.) of drive rollers equipped with BLDC motors is in the range needed for roller conveyor zones that convey general-use carton boxes of the size and weight seen in typical modern warehouse and distribution applications. A typical motorized roller conveyor zone can handle carton items weighing up to approximately 35 kg (75 lbs.). Heavy-duty roller Heavy-duty roller conveyors are used for moving items that weigh at least . This type of conveyor makes the handling of such heavy equipment/products easier and more time-effective. Many of the heavy-duty roller conveyors can move as fast as . Other types of heavy-duty roller conveyors are gravity roller conveyors, chain-driven live roller conveyors, pallet accumulation conveyors, multi-strand chain conveyors, and chain and roller transfers. Gravity roller conveyors are easy to use and are found in many different types of industries such as automotive and retail. Chain-driven live roller conveyors are used for single or bi-directional material handling, and move large, heavy loads. Pallet accumulation conveyors are powered through a mechanical clutch, which is used instead of individually powered and controlled sections of conveyor. Multi-strand chain conveyors are used for double-pitch roller chains; products that cannot be moved on traditional roller conveyors can be moved by a multi-strand chain conveyor. Chain and roller transfers are short runs of two or more strands of double-pitch chain conveyor built into a chain-driven live roller conveyor; these pop up under the load and move the load off of the conveyor. Walking Beam A walking beam conveyor usually consists of two fluid power cylinders, or it can instead use a motor-driven cam. For the cylinder-driven fluid power type, one axis is for vertical motion and the other for horizontal. Both cam and fluid power types require nests at each station to retain the part that is being moved. The beam is raised, lifting the part from its station nest and holding it in a nest on the walking beam; the beam is then moved horizontally, transporting the part to the next nest, and then lowered vertically, placing the part in the next station's nest. The beam is then returned to its home position while in the lowered position, out of the way of the parts. This type of conveying system is useful for parts that need to be accurately physically located, or for relatively heavy parts. All stations are equidistant and require a nest to retain the part.
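The raise, advance, lower and return sequence of the walking beam described above can be summarised in a short schematic sketch. This is a minimal illustration rather than the control program of any real machine; the part names are invented and each station nest is modelled as one list slot.

def walking_beam_cycle(stations):
    """Advance every part one station per beam cycle.

    `stations` is a list of nest contents (None = empty nest); index 0 is the
    load end. The part leaving the last nest is returned as finished work.
    """
    beam = list(stations)          # 1. raise: every part lifted off its station nest
    finished = beam[-1]            # the part in the last nest will leave the line
    beam = [None] + beam[:-1]      # 2. advance: beam moves one station pitch
    stations[:] = beam             # 3. lower: parts set down in the next stations' nests
    return finished                # 4. return: beam retracts home in the lowered position

line = ["part_A", "part_B", None, "part_C"]
done = walking_beam_cycle(line)
print(line, "->", done)            # [None, 'part_A', 'part_B', None] -> part_C

Each call to the function corresponds to one full beam cycle, after which every part has moved exactly one station and the part that occupied the final nest has left the line. 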
Technology
Industry: General
null
13222289
https://en.wikipedia.org/wiki/Roman%20concrete
Roman concrete
Roman concrete, also called , was used in construction in ancient Rome. Like its modern equivalent, Roman concrete was based on a hydraulic-setting cement added to an aggregate. Many buildings and structures still standing today, such as bridges, reservoirs and aqueducts, were built with this material, which attests to both its versatility and its durability. Its strength was sometimes enhanced by the incorporation of pozzolanic ash where available (particularly in the Bay of Naples). The addition of ash prevented cracks from spreading. Recent research has shown that the incorporation of mixtures of different types of lime, forming conglomerate "clasts" allowed the concrete to self-repair cracks. Roman concrete was in widespread use from about 150 BC; some scholars believe it was developed a century before that. It was often used in combination with facings and other supports, and interiors were further decorated by stucco, fresco paintings, or coloured marble. Further innovative developments in the material, part of the so-called concrete revolution, contributed to structurally complicated forms. The most prominent example of these is the Pantheon dome, the world's largest and oldest unreinforced concrete dome. Roman concrete differs from modern concrete in that the aggregates often included larger components; hence, it was laid rather than poured. Roman concretes, like any hydraulic concrete, were usually able to set underwater, which was useful for bridges and other waterside construction. Historic references Vitruvius, writing around 25 BC in his Ten Books on Architecture, distinguished types of materials appropriate for the preparation of lime mortars. For structural mortars, he recommended pozzolana ( in Latin), the volcanic sand from the beds of Pozzuoli, which are brownish-yellow-gray in colour in that area around Naples, and reddish-brown near Rome. Vitruvius specifies a ratio of 1 part lime to 3 parts pozzolana for mortar used in buildings and a 1:2 ratio for underwater work. The Romans first used hydraulic concrete in coastal underwater structures, probably in the harbours around Baiae before the end of the 2nd century BC. The harbour of Caesarea is an example (22-15 BC) of the use of underwater Roman concrete technology on a large scale, for which enormous quantities of pozzolana were imported from Puteoli. For rebuilding Rome after the fire in 64 AD which destroyed large portions of the city, Nero's new building code largely called for brick-faced concrete. This appears to have encouraged the development of the brick and concrete industries. Material properties Roman concrete, like any concrete, consists of an aggregate and hydraulic mortar, a binder mixed with water that hardens over time. The composition of the aggregate varied, and included pieces of rock, ceramic tile, lime clasts, and brick rubble from the remains of previously demolished buildings. In Rome, readily available tuff was often used as an aggregate. Gypsum and quicklime were used as binders. Volcanic dusts, called pozzolana or "pit sand", were favoured where they could be obtained. Pozzolana makes the concrete more resistant to salt water than modern-day concrete. Pozzolanic mortar had a high content of alumina and silica. Research in 2023 found that lime clasts, previously considered a sign of poor aggregation technique, react with water seeping into any cracks. This produces reactive calcium, which allows new calcium carbonate crystals to form and reseal the cracks. 
These lime clasts have a brittle structure that was most likely created in a "hot-mixing" technique with quicklime rather than traditional slaked lime, causing cracks to preferentially move through the lime clasts, thus potentially playing a critical role in the self-healing mechanism. Concrete and, in particular, the hydraulic mortar responsible for its cohesion, was a type of structural ceramic whose utility derived largely from its rheological plasticity in the paste state. The setting and hardening of hydraulic cements derived from hydration of materials and the subsequent chemical and physical interaction of these hydration products. This differed from the setting of slaked lime mortars, the most common cements of the pre-Roman world. Once set, Roman concrete exhibited little plasticity, although it retained some resistance to tensile stresses.The setting of pozzolanic cements has much in common with setting of their modern counterpart, Portland cement. The high silica composition of Roman pozzolana cements is very close to that of modern cement to which blast furnace slag, fly ash, or silica fume have been added. The strength and longevity of Roman 'marine' concrete is understood to benefit from a reaction of seawater with a mixture of volcanic ash and quicklime to create a rare crystal called tobermorite, which may resist fracturing. As seawater percolated within the tiny cracks in the Roman concrete, it reacted with phillipsite naturally found in the volcanic rock and created aluminous tobermorite crystals. The result is a candidate for "the most durable building material in human history". In contrast, modern concrete exposed to saltwater deteriorates within decades. The Roman concrete at the Tomb of Caecilia Metella is another variation higher in potassium that triggered changes that "reinforce interfacial zones and potentially contribute to improved mechanical performance". Seismic technology For an environment as prone to earthquakes as the Italian peninsula, interruptions and internal constructions within walls and domes created discontinuities in the concrete mass. Portions of the building could then shift slightly when there was movement of the earth to accommodate such stresses, enhancing the overall strength of the structure. It was in this sense that bricks and concrete were flexible. It may have been precisely for this reason that, although many buildings sustained serious cracking from a variety of causes, they continue to stand to this day. Another technology used to improve the strength and stability of concrete was its gradation in domes. One example is the Pantheon, where the aggregate of the upper dome region consists of alternating layers of light tuff and pumice, giving the concrete a density of . The foundation of the structure used travertine as an aggregate, having a much higher density of . Modern use Scientific studies of Roman concrete since 2010 have attracted both media and industry attention. Because of its unusual durability, longevity, and lessened environmental footprint, corporations and municipalities are starting to explore the use of Roman-style concrete in North America. This involves replacing the volcanic ash with coal fly ash that has similar properties. Proponents say that concrete made with fly ash can cost up to 60% less, because it requires less cement. It also has a reduced environmental footprint, due to its lower cooking temperature and much longer lifespan. 
Usable examples of Roman concrete exposed to harsh marine environments have been found to be 2000 years old with little or no wear. In 2013, the University of California Berkeley published an article that described for the first time the mechanism by which the suprastable calcium-aluminium-silicate-hydrate compound binds the material together. During its production, less carbon dioxide is released into the atmosphere than any modern concrete production process. It is no coincidence that the walls of Roman buildings are thicker than those of modern buildings. However, Roman concrete was still gaining its strength for several decades after construction had been completed.
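As a back-of-the-envelope illustration of the mortar proportions Vitruvius recommends earlier in this article (1 part lime to 3 parts pozzolana for mortar used in buildings, and 1:2 for underwater work), the following minimal sketch simply splits a requested dry-mix volume by those ratios. The function name and the one-cubic-metre batch size are invented for illustration, and water, bulking and the aggregate added to make concrete are ignored.

def mortar_batch(total_volume, lime_parts, pozzolana_parts):
    """Split a dry-mix volume into lime and pozzolana volumes by parts."""
    parts = lime_parts + pozzolana_parts
    return (total_volume * lime_parts / parts,
            total_volume * pozzolana_parts / parts)

for label, ratio in {"building mortar (1:3)": (1, 3),
                     "underwater mortar (1:2)": (1, 2)}.items():
    lime, pozzolana = mortar_batch(1.0, *ratio)   # 1.0 cubic metre of dry mix, assumed
    print(f"{label}: {lime:.2f} m3 lime, {pozzolana:.2f} m3 pozzolana")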
Technology
Building materials
null
966760
https://en.wikipedia.org/wiki/Pastoralism
Pastoralism
Pastoralism is a form of animal husbandry where domesticated animals (known as "livestock") are released onto large vegetated outdoor lands (pastures) for grazing, historically by nomadic people who moved around with their herds. The animal species involved include cattle, camels, goats, yaks, llamas, reindeer, horses, and sheep. Pastoralism occurs in many variations throughout the world, generally where environmentally effected characteristics such as aridity, poor soils, cold or hot temperatures, and lack of water make crop-growing difficult or impossible. Operating in more extreme environments with more marginal lands means that pastoral communities are very vulnerable to the effects of global warming. Pastoralism remains a way of life in many geographic areas, including Africa, the Tibetan plateau, the Eurasian steppes, the Andes, Patagonia, the Pampas, Australia and many other places. , between 200 million and 500 million people globally practiced pastoralism, and 75% of all countries had pastoral communities. Pastoral communities have different levels of mobility. The enclosure of common lands has led to Sedentary pastoralism becoming more common as the hardening of political borders, land tenures, expansion of crop farming, and construction of fences and dedicated agricultural buildings all reduce the ability to move livestock around freely, leading to the rise of pastoral farming on established grazing-zones (sometimes called "ranches"). Sedentary pastoralists may also raise crops and livestock together in the form of mixed farming, for the purpose of diversifying productivity, obtaining manure for organic farming, and improving pasture conditions for their livestock. Mobile pastoralism includes moving herds locally across short distances in search of fresh forage and water (something that can occur daily or even within a few hours); as well as transhumance, where herders routinely move animals between different seasonal pastures across regions; and nomadism, where nomadic pastoralists and their families move with the animals in search for any available grazing-grounds—without much long-term planning. Grazing in woodlands and forests may be referred to as silvopastoralism. Those who practice pastoralism are called "pastoralists". Pastoralist herds interact with their environment, and mediate human relations with the environment as a way of turning uncultivated plants (like wild grass) into food. In many places, grazing herds on savannas and in woodlands can help maintain the biodiversity of such landscapes and prevent them from evolving into dense shrublands or forests. Grazing and browsing at the appropriate levels often can increase biodiversity in Mediterranean climate regions. Pastoralists shape ecosystems in different ways: some communities use fire to make ecosystems more suitable for grazing and browsing animals. Origins One theory suggests that pastoralism developed from mixed farming. Bates and Lees proposed that the incorporation of irrigation into farming resulted in specialization. Advantages of mixed farming include reducing risk of failure, spreading labour, and re-utilizing resources. The importance of these advantages and disadvantages to different farmers or farming societies differs according to the sociocultural preferences of the farmers and the biophysical conditions as determined by rainfall, radiation, soil type, and disease. The increased productivity of irrigation agriculture led to an increase in population and an added impact on resources. 
Bordering areas of land remained in use for animal breeding. This meant that large distances had to be covered by herds to collect sufficient forage. Specialization occurred as a result of the increasing importance of both intensive agriculture and pastoralism. Both agriculture and pastoralism developed alongside each other, with continuous interactions. A different theory suggests that pastoralism evolved from the hunting and gathering. Hunters of wild goats and sheep were knowledgeable about herd mobility and the needs of the animals. Such hunters were mobile and followed the herds on their seasonal rounds. Undomesticated herds were chosen to become more controllable for the proto-pastoralist nomadic hunter and gatherer groups by taming and domesticating them. Hunter-gatherers' strategies in the past have been very diverse and contingent upon the local environmental conditions, like those of mixed farmers. Foraging strategies have included hunting or trapping big game and smaller animals, fishing, collecting shellfish or insects, and gathering wild-plant foods such as fruits, seeds, and nuts. These diverse strategies for survival amongst the migratory herds could also provide an evolutionary route towards nomadic pastoralism. Resources Pastoralism occurs in uncultivated areas. Wild animals eat the forage from the marginal lands and humans survive from milk, blood, and often meat of the herds and often trade by-products like wool and milk for money and food. Pastoralists do not exist at basic subsistence. Pastoralists often compile wealth and participate in international trade. Pastoralists have trade relations with agriculturalists, horticulturalists, and other groups. Pastoralists are not extensively dependent on milk, blood, and meat of their herd. McCabe noted that when common property institutions are created, in long-lived communities, resource sustainability is much higher, which is evident in the East African grasslands of pastoralist populations. However, the property rights structure is only one of the many different parameters that affect the sustainability of resources, and common or private property per se, does not necessarily lead to sustainability. Some pastoralists supplement herding with hunting and gathering, fishing and/or small-scale farming or pastoral farming. Mobility Mobility allows pastoralists to adapt to the environment, which opens up the possibility for both fertile and infertile regions to support human existence. Important components of pastoralism include low population density, mobility, vitality, and intricate information systems. The system is transformed to fit the environment rather than adjusting the environment to support the "food production system." Mobile pastoralists can often cover a radius of a hundred to five hundred kilometers. Pastoralists and their livestock have impacted the environment. Lands long used for pastoralism have transformed under the forces of grazing livestock and anthropogenic fire. Fire was a method of revitalizing pastureland and preventing forest regrowth. The collective environmental weights of fire and livestock browsing have transformed landscapes in many parts of the world. Fire has permitted pastoralists to tend the land for their livestock. Political boundaries are based on environmental boundaries. The Maquis shrublands of the Mediterranean region are dominated by pyrophytic plants that thrive under conditions of anthropogenic fire and livestock grazing. 
Nomadic pastoralists have a global food-producing strategy depending on the management of herd animals for meat, skin, wool, milk, blood, manure, and transport. Nomadic pastoralism is practiced in different climates and environments with daily movement and seasonal migration. Pastoralists are among the most flexible populations. Pastoralist societies have had field armed men protect their livestock and their people and then to return into a disorganized pattern of foraging. The products of the herd animals are the most important resources, although the use of other resources, including domesticated and wild plants, hunted animals, and goods accessible in a market economy are not excluded. The boundaries between states impact the viability of subsistence and trade relations with cultivators. Pastoralist strategies typify effective adaptation to the environment. Precipitation differences are evaluated by pastoralists. In East Africa, different animals are taken to specific regions throughout the year that corresponds to the seasonal patterns of precipitation. Transhumance is the migration of livestock and pastoralists between seasonal pastures. In the Himalayas, pastoralists have often historically and traditionally depended on rangelands lying across international borders. The Himalayas contain several international borders, such as those between India and China, India and Nepal, Bhutan and China, India and Pakistan, and Pakistan and China. With the growth of nation states in Asia since the mid-twentieth century, mobility across the international borders in these countries have tended to be more and more restricted and regulated. As a consequence, the old, customary arrangements of trans-border pastoralism have generally tended to disintegrate, and trans-border pastoralism has declined. Within these countries, pastoralism is often at conflict these days with new modes of community forestry, such as Van Panchayats (Uttarakhand) and Community Forest User Groups (Nepal), which tend to benefit settled agricultural communities more. Frictions have also tended to arise between pastoralists and development projects such as dam-building and the creation of protected areas. Some pastoralists are constantly moving, which may put them at odds with sedentary people of towns and cities. The resulting conflicts can result in war for disputed lands. These disputes are recorded in ancient times in the Middle East, as well as for East Asia. Other pastoralists are able to remain in the same location which results in longer-standing housing. Different mobility patterns can be observed: Somali pastoralists keep their animals in one of the harshest environments but they have evolved over the centuries. Somalis have well-developed pastoral culture where complete system of life and governance has been refined. Somali poetry depicts humans interactions, pastoral animals, beasts on the prowl, and other natural things such the rain, celestial events and historic events of significance. Wise sage Guled Haji coined a proverb that encapsulates the centrality of water in pastoral life: Mobility was an important strategy for the Ariaal; however with the loss of grazing land impacted by the growth in population, severe drought, the expansion of agriculture, and the expansion of commercial ranches and game parks, mobility was lost. The poorest families were driven out of pastoralism and into towns to take jobs. Few Ariaal families benefited from education, healthcare, and income earning. 
The flexibility of pastoralists to respond to environmental change was reduced by colonization. For example, mobility was limited in the Sahel region of Africa with settlement being encouraged. The population tripled and sanitation and medical treatment were improved. Environment knowledge Pastoralists have mental maps of the value of specific environments at different times of year. Pastoralists have an understanding of ecological processes and the environment. Information sharing is vital for creating knowledge through the networks of linked societies. Pastoralists produce food in the world's harshest environments, and pastoral production supports the livelihoods of rural populations on almost half of the world's land. Several hundred million people are pastoralists, mostly in Africa and Asia. ReliefWeb reported that "Several hundred million people practice pastoralism—the use of extensive grazing on rangelands for livestock production, in over 100 countries worldwide. The African Union estimated that Africa has about 268 million pastoralists—over a quarter of the total population—living on about 43 percent of the continent's total land mass." Pastoralists manage rangelands covering about a third of the Earth's terrestrial surface and are able to produce food where crop production is not possible. Pastoralism has been shown, "based on a review of many studies, to be between 2 and 10 times more productive per unit of land than the capital intensive alternatives that have been put forward". However, many of these benefits go unmeasured and are frequently squandered by policies and investments that seek to replace pastoralism with more capital intensive modes of production. They have traditionally suffered from poor understanding, marginalization and exclusion from dialogue. The Pastoralist Knowledge Hub, managed by the Food and Agriculture Organization of the UN serves as a knowledge repository on technical excellence on pastoralism as well as "a neutral forum for exchange and alliance building among pastoralists and stakeholders working on pastoralist issues". The Afar pastoralists in Ethiopia uses an indigenous communication method called dagu for information. This helps them in getting crucial information about climate and availability of pastures at various locations. Farm animal genetic resource There is a variation in genetic makeup of the farm animals driven mainly by natural and human based selection. For example, pastoralists in large parts of Sub Saharan Africa are preferring livestock breeds which are adapted to their environment and able to tolerate drought and diseases. However, in other animal production systems these breeds are discouraged and more productive exotic ones are favored. This situation could not be left unaddressed due to the changes in market preferences and climate all over the world, which could lead to changes in livestock diseases occurrence and decline forage quality and availability. Hence pastoralists can maintain farm animal genetic resources by conserving local livestock breeds. Generally conserving farm animal genetic resources under pastoralism is advantageous in terms of reliability and associated cost. Tragedy of the commons Hardin's Tragedy of the Commons (1968) described how common property resources, such as the land shared by pastoralists, eventually become overused and ruined. According to Hardin's paper, the pastoralist land use strategy was unstable and a cause of environmental degradation. 
One of Hardin's conditions for a "tragedy of the commons" is that people cannot communicate with each other or make agreements and contracts. Many scholars have pointed out that this is implausible, and yet it is applied in development projects around the globe, motivating the destruction of community and other governance systems that have managed sustainable pastoral systems for thousands of years. The outcomes have often been disastrous. In her book Governing the Commons, Elinor Ostrom showed that communities were not trapped and helpless amid diminishing commons. She argued that a common-pool resource, such as grazing lands used for pastoralism, can be managed more sustainably through community groups and cooperatives than through privatization or total governmental control. Ostrom was awarded a Nobel Memorial Prize in Economic Sciences for her work. Pastoralists in the Sahel zone in Africa were held responsible for the depletion of resources. The depletion of resources was actually triggered by a prior interference and punitive climate conditions. Hardin's paper suggested a solution to the problems, offering a coherent basis for privatization of land, which stimulates the transfer of land from tribal peoples to the state or to individuals. The privatized programs impact the livelihood of the pastoralist societies while weakening the environment. Settlement programs often serve the needs of the state in reducing the autonomy and livelihoods of pastoral people. The violent herder–farmer conflicts in Nigeria, Mali, Sudan, Ethiopia and other countries in the Sahel and Horn of Africa regions have been exacerbated by climate change, land degradation, and population growth. It has also been shown that pastoralism supports human existence in harsh environments and often represents a sustainable approach to land use.
Technology
Agriculture_2
null
967488
https://en.wikipedia.org/wiki/Messier%2077
Messier 77
Messier 77 (M77), also known as NGC 1068 or the Squid Galaxy, is a barred spiral galaxy in the constellation Cetus. It is about away from Earth, and was discovered by Pierre Méchain in 1780, who originally described it as a nebula. Méchain then communicated his discovery to Charles Messier, who subsequently listed the object in his catalog. Both Messier and William Herschel described this galaxy as a star cluster. Today, however, the object is known to be a galaxy. It is one of the brightest Seyfert galaxies visible from Earth and has a D25 isophotal diameter of about . The morphological classification of NGC 1068 in the De Vaucouleurs system is (R)SA(rs)b, where the '(R)' indicates an outer ring-like structure, 'SA' denotes a non-barred spiral, '(rs)' means a transitional inner ring/spiral structure, and 'b' says the spiral arms are moderately wound. Ann et al. (2015) gave it a class of SAa, suggesting tightly wound arms. However, infrared images of the inner part of the galaxy reveal a prominent bar not seen in visual light, and for this reason it is now considered a barred spiral. Messier 77 is an active galaxy with an active galactic nucleus (AGN), which is obscured from view by astronomical dust at visible wavelengths. The diameter of the molecular disk and hot plasma associated with the obscuring material was first measured at radio wavelengths by the VLBA and VLA. The hot dust around the nucleus was subsequently measured in the mid-infrared by the MIDI instrument at the VLTI. It is the brightest and one of the closest and best-studied type 2 Seyfert galaxies, forming a prototype of this class. X-ray source 1H 0244+001 in Cetus has been identified as Messier 77. Only one supernova has been detected in Messier 77. The supernova, named SN 2018ivc, was discovered on 24 November 2018 by the DLT40 Survey. It is a type II supernova, and at discovery it was 15th magnitude and brightening. It has a radio jet consisting of a northeast and a southwest region, caused by interactions with the interstellar medium. In February 2022 astronomers reported a cloud of cosmic dust, detected through infrared interferometry observations, located at the centre of Messier 77 that is hiding a supermassive black hole. In November 2022, the IceCube collaboration announced the detection of a neutrino source emitted by the active galactic nucleus of Messier 77. It is the second detection by IceCube after TXS 0506+056, and only the fourth known source including SN1987A and solar neutrinos.
Physical sciences
Notable galaxies
Astronomy
967508
https://en.wikipedia.org/wiki/Pumpkinseed
Pumpkinseed
The pumpkinseed (Lepomis gibbosus), also referred to as sun perch, pond perch, common sunfish, punkie, sunfish, sunny, and kivver, is a small to medium–sized freshwater fish of the genus Lepomis (true sunfishes), from the sunfish family (Centrarchidae) in the order Centrarchiformes. It is endemic to eastern North America. Distribution and habitat The pumpkinseed's natural range in North America is from New Brunswick down the east coast to South Carolina. It then runs inland to the middle of North America, and extends through Iowa and back through Pennsylvania. Pumpkinseed sunfish have however been introduced throughout most of North America. They can now be found from Washington and Oregon on the Pacific Coast to Georgia on the Atlantic Coast. Yet they are primarily found in the northeastern United States and more rarely in the south-central or southwestern region of the continent. In Europe, the pumpkinseed is considered an invasive species. They were introduced to European waters, and could outcompete native fish. This species is included since 2019 in the list of Invasive Alien Species of Union concern (the Union list). It cannot be imported, bred, transported, commercialized, or intentionally released into the environment in the whole of the European Union. The pumpkinseed has also been introduced to the United Kingdom, having arrived in the country around the same time as the populations in Continental Europe. Its range is believed to be restricted to Southern England and the West Country, with stable populations found in East Sussex, West Sussex and Somerset, though the species may potentially be present in the vicinity of London as well. Description Pumpkinseeds have a body shaped much like a pumpkin seed (thus the common name), typically about but up to in length. They typically weigh less than , with the world record being caught by Robert Warne while fishing Honeoye Lake, Upstate New York in 2016. They are orange, green, yellow or blue in color, with speckles over their sides and back and a yellow-orange breast and belly, and the coloration of the ctenoid scales of the pumpkinseed is one of the most vibrant of any freshwater fish and can range from an olive-green or brown to bright orange and blue. The sides are covered with vertical bars that are a faint green or blue, which are typically more prevalent in female pumpkinseeds. Orange spots may cover the dorsal, anal, and caudal fins and the cheeks have blue lines across them. The pumpkinseed is noted for the orange-red spot on the margin of its black gill cover. The pectoral fins of a pumpkinseed can be amber or clear, while the dorsal spines are black. They have a small mouth with an upper jaw stopping right under the eye. Pumpkinseeds are very similar to the larger bluegill, and are often found in the same habitats. One difference between the two species is their opercular flap, which is black in both species but the pumpkinseed has a crimson spot in the shape of a half moon on the back portion. Pumpkinseeds have seven or eight vertical, irregular bands on their sides that are duller in color compared to the bluegill. Habitat Pumpkinseeds typically live in warm, calm lakes, ponds, and pools of creeks and small rivers with plenty of vegetation. They prefer clear water where they can find shelter to hide. They tend to stay near the shore and can be found in numbers within shallow and protected areas. 
They will feed at all water levels from the surface to the bottom in the daylight, and their heaviest feeding will be in the afternoon. Pumpkinseed sunfish usually travel together in schools that can also include bluegills and other sunfish. Pumpkinseeds are more tolerant of low oxygen levels than bluegills are, but less tolerant of warm water. Groups of young fish school close to shore, but adults tend to travel in groups of two to four in slightly deeper yet still covered waters. Pumpkinseeds are active throughout the day, but they rest at night near the bottom or in shelter areas in rocks or near submerged logs. Dietary habits Pumpkinseeds are carnivorous and feed on a variety of small prey both at the water surface and at the bottom. Among their favorites are insects, small molluscs and crustaceans (such as small crawfish), worms, minnow fry, small frogs or tadpoles, and even cannibalizing other smaller pumpkinseeds. They are effective at destroying mosquito larvae and even occasionally consume small pieces of aquatic vegetation and detritus. They also will readily consume human food scraps, most notably bread which is commonly used for bait. The pumpkinseed sunfish has a terminal mouth, allowing it to open at the anterior end of the snout. Pumpkinseed sunfish that live in waters with larger gastropods have larger mouths and associated jaw muscles to crack the shells. Sport fishing The pumpkinseed sunfish are typically very likely to bite on a worm bait, which makes them easy to catch while angling. Many fishermen consider the pumpkinseed to be a nuisance fish, as it bites so easily and frequently when the fisherman is attempting to catch something else. The pumpkinseeds are very popular with young fishermen due to their willingness to bite, their abundance and close locations to the shore. Although many people consider the meat of a pumpkinseed to be good-tasting, it is typically not a popular sport fish due to its small size. Because pumpkinseeds tend to remain in the shallows and feed all day, pumpkinseeds are relatively easy to catch via bank fishing. They will bite at most bait – including garden worms, insects, leeches, or bits of fish meat. They will also take small lures and can be fished for with a fly rod with wet or dry flies. They will also hit at grubs early in the winter, but are less active from mid- to late winter. They may be easy to catch and popular with the youngest anglers, but pumpkinseeds are often sought by adults as well. The fish do put up an aggressive fight on line, and they have an excellent flavor and are low in fat and high in protein. The IGFA world record for the species stands at , caught near Honeoye, New York, in 2016. Conservation status The pumpkinseed sunfish is very common and is not listed by CITES. It is considered Least Concern (not threatened) by the IUCN. Spawning grounds of the pumpkinseeds can be disturbed by shoreline development and shoreline erosion from heavy lake use. Their susceptibility to silt and pollution makes the pumpkinseed a good indicator of the cleanliness and health of water. Reproduction and life cycle Once water temperatures reach in the late spring or early summer, the male pumpkinseeds will begin to build nests. Nesting sites are typically in shallow water on sand or gravel lake bottoms. The males will use their caudal fins to sweep out shallow, oval-shaped nesting holes that stretch about twice the length of the pumpkinseed itself. The fish will remove debris and large rocks from their nests with their mouths. 
Nests are arranged in colonies consisting of about three to 15 nests each. Often, pumpkinseeds build their nests near bluegill colonies, and the two species interbreed. Male pumpkinseeds are vigorous and aggressive, and defend their nests by spreading their opercula. Because of this aggressive behavior, pumpkinseeds tend to maintain larger territories than bluegills. Females arrive after the nests are completed, coming in from deeper waters. The male then releases milt and the female releases eggs. Females may spawn in more than one nest, and more than one female may use the same nest. Also, more than one female will spawn with a male in one nest simultaneously. Females are able to produce 1,500 to 1,700 eggs, depending on their size and age. Once released, the eggs stick to gravel, sand, or other debris in the nest, and they hatch in as few as three days. Females leave the nest immediately after spawning, but males remain and guard their offspring. The male guards them for about the first 11 days, returning them to the nest in his mouth if they stray from the nesting site. The young fish stay on or near the shallow breeding area and grow to about in their first year. Sexual maturity is usually achieved by age two. Pumpkinseeds have lived to be 12 years old in captivity, but in nature most do not exceed six to eight years old. Adaptations The pumpkinseed sunfish has adapted in many ways to the surroundings where it lives. Its skin provides camouflage suited to its habitat: the pattern that appears on the pumpkinseed resembles the patterns of sunlight reflecting on the shallow water of bays and river beds. The pumpkinseed sunfish has also developed a specific method of protection. Along the dorsal fin are 10 to 11 spines, with three additional spines on the anal fin. These spines are very sharp and aid the fish in defense. The pumpkinseed has the ability to anticipate approaching predators (or prey) via a lateral line system, allowing it to detect changes or movements in the water using different mechanical receptors. The brightly colored gill plates of the pumpkinseed sunfish also serve as a method of protection and dominance. Also known as an eye spot, the dark patch at the posterior of the gill plate provides the illusion that the eye of the fish is larger and positioned further back on the body, thus making the fish seem up to four times larger than it actually is. When a pumpkinseed feels threatened by a predator, it flares its gills to make it seem larger in size, and shows off the flashy red coloration. Males of the species also flare their gills in the spring spawning season in a show of dominance and territoriality. In the southernmost regions of its distribution, the pumpkinseed has developed a larger mouth opening and abnormally large jaw muscles to aid in feeding on its forage of small crustaceans and mollusks. The larger bite radius and enhanced jaw muscles allow the pumpkinseed to crack the shells of its prey to reach the soft flesh within, thus providing one common name, 'shellcracker'. Etymology Lepomis, in Greek, means 'scaled gill cover' and gibbosus means 'humped'. The defining characteristic of a pumpkinseed sunfish is the bright red spot at the tip of the ear flap. The pumpkinseed sunfish is widely recognized by its shape of a pumpkin seed, from which its common name comes.
Biology and health sciences
Acanthomorpha
Animals
969122
https://en.wikipedia.org/wiki/Cookiecutter%20shark
Cookiecutter shark
The cookiecutter shark (Isistius brasiliensis), also called the cigar shark, is a species of small squaliform shark in the family Dalatiidae. This shark lives in warm, oceanic waters worldwide, particularly near islands, and has been recorded as deep as . It migrates vertically up to every day, approaching the surface at dusk and descending with the dawn. Reaching only in length, the cookiecutter shark has a long, cylindrical body with a short, blunt snout, large eyes, two tiny spineless dorsal fins, and a large caudal fin. It is dark brown, with light-emitting photophores covering its underside except for a dark "collar" around its throat and gill slits. The name "cookiecutter shark" refers to its feeding method of gouging round plugs, as if cut out with a cookie cutter, out of larger animals. Marks made by cookiecutter sharks have been found on a wide variety of marine mammals and fishes, and on submarines, undersea cables, and human bodies. It also consumes whole smaller prey, such as squid. Cookiecutter sharks have adaptations for hovering in the water column, and likely rely on stealth and subterfuge to capture more active prey. Its dark collar seems to mimic the silhouette of a small fish, while the rest of its body blends into the downwelling light via its ventral photophores. When a would-be predator approaches the lure, the shark attaches itself using its suctorial lips and specialized pharynx and neatly excises a chunk of the flesh using its bandsaw-like set of lower teeth. This species has been known to travel in schools. Though rarely encountered because of its oceanic habitat, a handful of documented attacks on humans were apparently caused by cookiecutter sharks. Nevertheless, this diminutive shark is not regarded as dangerous to humans. The International Union for Conservation of Nature has listed the cookiecutter shark under least concern, as it is widely distributed, has no commercial value, and is not particularly susceptible to fisheries. Taxonomy French naturalists Jean René Constant Quoy and Joseph Paul Gaimard originally described the cookiecutter shark during the 1817–1820 exploratory voyage of the corvette Uranie under Louis de Freycinet, giving it the name Scymnus brasiliensis because the type specimen was caught off Brazil. In 1824, their account was published as part of Voyage autour du monde...sur les corvettes de S.M. l'Uranie et la Physicienne, Louis de Freycinet's 13 volume report on the voyage. In 1865, American ichthyologist Theodore Nicholas Gill coined the new genus Isistius for this species, after Isis, the Egyptian goddess of light. One of the earliest accounts of the wounds left by the cookiecutter shark on various animals is in ancient Samoan legend, which held that atu (skipjack tuna) entering Palauli Bay would leave behind pieces of their flesh as a sacrifice to Tautunu, the community chief. In later centuries, various other explanations for the wounds were advanced, including lampreys, bacteria, and invertebrate parasites. In 1971, Everet Jones of the U.S. Bureau of Commercial Fisheries (a predecessor of the National Marine Fisheries Service) discovered the cigar shark, as the cookiecutter shark was then generally known, was responsible. Shark expert Stewart Springer thus popularized the name "cookiecutter shark" for this species (though he originally called them "demon whale-biters"). Other common names used for this shark include luminous shark, smalltooth cookiecutter shark, and smooth cookiecutter shark. 
Description The cookiecutter shark has an elongated, cigar-shaped body with a short, bulbously rounded snout. The nostrils have a very short flap of skin in front. The large, oval, green eyes are placed forward on the head, though not so that binocular vision is extensive. Behind the eyes are large spiracles, positioned on the upper surface of the head. The mouth is short, forming a nearly transverse line, and is surrounded by enlarged, fleshy, suctorial lips. The upper jaw has 30–37 rows of teeth, and the lower jaw has 25–31, increasing with body size. The upper and lower teeth are extremely different; the upper teeth are small, narrow, and upright, tapering to a single, smooth-edged cusp. The lower teeth are also smooth-edged, but much larger, broader, and knife-like, with their bases interlocking to form a single saw-like cutting edge. The five pairs of gill slits are small. The pectoral fins are short and roughly trapezoidal in shape. Two spineless dorsal fins are placed far back on the body, the first originating just ahead of the pelvic fins and the second located just behind. The second dorsal fin is slightly larger than the first, and the pelvic fins are larger than either. The anal fin is absent. The caudal fin is broad, with the lower lobe almost as large as the upper, which has a prominent ventral notch. The dermal denticles are squarish and flattened, with a slight central concavity and raised corners. The cookiecutter shark is chocolate brown in color, becoming subtly lighter below, and a dark "collar" wraps around the gill region. The fins have translucent margins, except for the caudal fin, which has a darker margin. Complex, light-producing organs called photophores densely cover the entire underside, except for the collar, and produce a vivid green glow. The maximum recorded length for this species is for males and for females. Distribution and habitat Inhabiting all of the world's major tropical and warm-temperate oceanic basins, the cookiecutter shark is most common between the latitudes of 20°N and 20°S, where the surface water temperature is . In the Atlantic, it has been reported off the Bahamas and southern Brazil in the west, Cape Verde, Guinea to Sierra Leone, southern Angola, and South Africa in the east, and Ascension Island in the south. In the Indo-Pacific region, it has been caught from Mauritius to New Guinea, Australia, and New Zealand, including Tasmania and Lord Howe Island, as well as off Japan. In the central and eastern Pacific, it occurs from Fiji north to the Hawaiian Islands, and east to the Galápagos, Easter, and Guadalupe Islands. Fresh wounds observed on marine mammals suggest this shark may range as far as California in warm years. Based on catch records, the cookiecutter shark appears to conduct a diel vertical migration up to each way. It spends the day at a depth of , and at night it rises into the upper water column, usually remaining below , but on rare occasions venturing to the surface. This species may be more tolerant of low dissolved oxygen levels than sharks in the related genera Euprotomicrus and Squaliolus. It is frequently found near islands, perhaps for reproductive purposes or because they hold congregations of large prey animals. In the northeastern Atlantic, most adults are found between 11°N and 16°N, with the smallest and largest individuals being found in lower and higher latitudes, respectively. There is no evidence of sex segregation. 
Biology and ecology Best known for biting neat round chunks of tissue from marine mammals and large fish, the cookiecutter shark is considered a facultative ectoparasite, as it also wholly ingests smaller prey. It has a wide gape and a very strong bite, by virtue of heavily calcified cranial and labial cartilages. With small fins and weak muscles, this ambush predator spends much of its time hovering in the water column. Its liver, which can comprise some 35% of its weight, is rich in low-density lipids, which enables it to maintain neutral buoyancy. This species has higher skeletal density than Euprotomicrus or Squaliolus, and its body cavity and liver are proportionately much larger, with much higher oil content. Its large caudal fin allows it to make a quick burst of speed to catch larger, faster prey that come in range. The cookiecutter shark regularly replaces its teeth like other sharks, but sheds its lower teeth in entire rows rather than one at a time. A cookiecutter shark has been calculated to have shed 15 sets of lower teeth, totaling 435–465 teeth, from when it was long to when it reached , a significant investment of resources. The shark swallows its old sets of teeth, enabling it to recycle the calcium content. Unlike other sharks, the retina of the cookiecutter shark has ganglion cells concentrated in a concentric area rather than in a horizontal streak across the visual field; this may help to focus on prey in front of the shark. This shark has been known to travel in schools, which may increase the effectiveness of its lure (see below), and discourage attacks by much larger predators. Bioluminescence The intrinsic green luminescence of the cookiecutter shark is the strongest known of any shark, and has been reported to persist for three hours after it has been taken out of water. The ventrally positioned photophores serve to disrupt its silhouette from below by matching the downwelling light, a strategy known as counter-illumination, that is common among bioluminescent organisms of the mesopelagic zone. The individual photophores are set around the denticles and are small enough that they cannot be discerned by the naked eye, suggesting they have evolved to fool animals with high visual acuity and/or at close distances. Set apart from the glowing underside, the darker, nonluminescent collar tapers at both sides of the throat, and has been hypothesized to serve as a lure by mimicking the silhouette of a small fish from below. The appeal of the lure would be multiplied in a school of sharks. If the collar does function in this way, the cookiecutter shark would be the only known case of bioluminescence in which the absence of light attracts prey, while its photophores serve to inhibit detection by predators. As the shark can only match a limited range of light intensities, it has been suggested that its vertical movements might serve to preserve the effectiveness of its disguise across various times of day and weather conditions. 
Feeding Virtually every type of medium- to large-sized oceanic animal sharing the habitat of the cookiecutter shark is open to attack; bite scars have been found on cetaceans (including porpoises, orcas, dolphins, beaked whales, sperm whales and baleen whales), pinnipeds (including fur seals, leopard seals and elephant seals), dugongs, larger sharks (including blue sharks, goblin sharks, basking sharks, great white sharks, megamouth sharks and smalltooth sand tiger sharks), stingrays (including deepwater stingrays, pelagic stingrays and sixgill stingrays), and bony fishes (including billfishes, tunas, dolphinfishes, jacks, escolars, opahs, and pomfrets). The cookiecutter shark also regularly hunts and eats entire squid with a mantle length of , comparable in size to the shark itself, as well as bristlemouths, copepods, and other smaller prey. Parasitic attacks by the cookiecutter shark leave a round "crater wound", averaging across and deep. The prevalence of these attacks can be high: off Hawaii, nearly every adult spinner dolphin bears scars from this species. Diseased or otherwise weakened animals appear to be more susceptible, and in the western Atlantic observations have been made of emaciated beached melon-headed whales with dozens to hundreds of recent and healing cookiecutter shark wounds, while such wounds are rare on non-emaciated beached whales. The impact of parasitism on prey species, in terms of resources diverted from growth or reproduction, is uncertain. The cookiecutter shark exhibits a number of specializations to its mouth and pharynx for its parasitic lifestyle. The shark first secures itself to the body surface of its prey by closing its spiracles and retracting its basihyal (tongue) to create pressure lower than that of the surroundings; its suctorial lips ensure a tight seal. It then bites, using its narrow upper teeth as anchors while its razor sharp lower teeth slice into the prey. Finally, the shark twists and rotates its body to complete a circular cut, quite possibly aided by the initial forward momentum and subsequent struggles of its prey. The action of the lower teeth may also be assisted by back-and-forth vibrations of the jaw, a mechanism akin to that of an electric carving knife. This shark's ability to create strong suction into its mouth probably also helps in capturing smaller prey such as squid. Life history Like other dogfish sharks, the cookiecutter shark is aplacental viviparous, with the developing embryos being sustained by yolk until birth. Females have two functional uteri and give birth to litters of 6 to 12 pups. A case has been recorded of a female carrying 9 embryos long; though they were close to the birth size, they still had well-developed yolk sacs, suggesting a slow rate of yolk absorption and a long gestation period. The embryos had developed brown pigmentation, but not the dark collar or differentiated dentition. Newborn cookiecutter sharks are long. Males attain sexual maturity at a length of , and females at a length of . Human interactions Favoring offshore waters and thus seldom encountered by humans, the cookiecutter shark is not considered dangerous because of its small size. However, it has been implicated in a few attacks on humans; in one case, a school of 30-cm (12 in) long fish with blunt snouts attacked an underwater photographer on an open-ocean dive. Similar reports have come from shipwreck survivors, of suffering small, clean, deep bites during the night. 
In March 2009, Maui resident Mike Spalding was bitten by a cookiecutter shark while swimming across Alenuihaha Channel. Swimmer Eric Schall was bitten by a cookiecutter shark on March 31, 2019 while crossing the Kaiwi Channel, and suffered a large laceration to his stomach. A second cookiecutter attack occurred in the same spot three weeks later: Isaiah Mojica was attempting the channel swim on April 6, 2019 as part of the Oceans Seven challenge when he was bitten on the left shoulder. A third person attempting to complete the swim was bitten in nearly the same area of the channel: Adherbal Treidler de Oliveira was attempting the swim on July 29, 2019, when he was bitten on the stomach and on the left thigh. Two of the three swimmers were using electrical shark deterrents, but they did not deter the sharks. In 2017, a seven-year-old boy, Jack Tolley, was bitten in the leg while wading in Alma Bay in North Queensland with his family. The shark caused a 7.3 cm wound that was nearly down to the bone. On February 9, 2022, a deep-water swimmer off Kailua-Kona, Hawaii was bitten on the right foot and calf. In March 2023, Andy Walberer was attacked by two cookiecutter sharks while swimming the Molokai channel. He was able to grab and throw both sharks before serious injury was inflicted. There are several records of human bodies recovered from the water with post-mortem cookiecutter shark bites. During the 1970s, several U.S. Navy submarines were forced back to base to repair damage caused by cookiecutter shark bites to the neoprene boots of their AN/BQR-19 sonar domes, which caused the sound-transmitting oil inside to leak and impaired navigation. An unknown enemy weapon was initially feared, before this shark was identified as the culprit; the problem was solved by installing fiberglass covers around the domes. In the 1980s, some 30 U.S. Navy submarines were damaged by cookiecutter shark bites, mostly to the rubber-sheathed electric cable leading to the sounding probe used to ensure safety when surfacing in shipping zones. Again, the solution was to apply a fiberglass coating. Oceanographic equipment and telecommunications cables have also been damaged by this species. The harm inflicted by cookiecutter sharks on fishing nets and economically important species may have a minor detrimental effect on commercial fisheries. The shark itself is too small to be of value, and is only infrequently taken, as bycatch, on pelagic longlines and in midwater trawls and plankton nets. The lack of significant population threats, coupled with a worldwide distribution, has led the IUCN to assess the cookiecutter shark as of least concern. In June 2018 the New Zealand Department of Conservation classified the cookiecutter shark as "Not Threatened" with the qualifier "Secure Overseas" under the New Zealand Threat Classification System.
Biology and health sciences
Sharks
Animals
969126
https://en.wikipedia.org/wiki/Protein%20structure
Protein structure
Protein structure is the three-dimensional arrangement of atoms in an amino acid-chain molecule. Proteins are polymers (specifically polypeptides) formed from sequences of amino acids, which are the monomers of the polymer. A single amino acid monomer may also be called a residue, which indicates a repeating unit of a polymer. Proteins form by amino acids undergoing condensation reactions, in which the amino acids lose one water molecule per reaction in order to attach to one another with a peptide bond. By convention, a chain under 30 amino acids is often identified as a peptide, rather than a protein. To be able to perform their biological function, proteins fold into one or more specific spatial conformations driven by a number of non-covalent interactions, such as hydrogen bonding, ionic interactions, van der Waals forces, and hydrophobic packing. To understand the functions of proteins at a molecular level, it is often necessary to determine their three-dimensional structure. This is the topic of the scientific field of structural biology, which employs techniques such as X-ray crystallography, NMR spectroscopy, cryo-electron microscopy (cryo-EM) and dual polarisation interferometry to determine the structure of proteins. Protein structures range in size from tens to several thousand amino acids. By physical size, proteins are classified as nanoparticles, between 1 and 100 nm. Very large protein complexes can be formed from protein subunits. For example, many thousands of actin molecules assemble into a microfilament. A protein usually undergoes reversible structural changes in performing its biological function. The alternative structures of the same protein are referred to as different conformations, and transitions between them are called conformational changes. Levels of protein structure There are four distinct levels of protein structure. Primary structure The primary structure of a protein refers to the sequence of amino acids in the polypeptide chain. The primary structure is held together by peptide bonds that are made during the process of protein biosynthesis. The two ends of the polypeptide chain are referred to as the carboxyl terminus (C-terminus) and the amino terminus (N-terminus) based on the nature of the free group on each extremity. Counting of residues always starts at the N-terminal end (NH2-group), which is the end where the amino group is not involved in a peptide bond. The primary structure of a protein is determined by the gene corresponding to the protein. A specific sequence of nucleotides in DNA is transcribed into mRNA, which is read by the ribosome in a process called translation. The sequence of amino acids in insulin was discovered by Frederick Sanger, establishing that proteins have defining amino acid sequences. The sequence of a protein is unique to that protein, and defines the structure and function of the protein. The sequence of a protein can be determined by methods such as Edman degradation or tandem mass spectrometry. Often, however, it is read directly from the sequence of the gene using the genetic code. The term "amino acid residues" is recommended when discussing proteins because, when a peptide bond is formed, a water molecule is lost; proteins are therefore made up of amino acid residues rather than intact amino acids. Post-translational modifications such as phosphorylations and glycosylations are usually also considered a part of the primary structure, and cannot be read from the gene.
For example, insulin is composed of 51 amino acids in 2 chains: one chain has 30 amino acids, and the other has 21. Secondary structure Secondary structure refers to highly regular local sub-structures on the actual polypeptide backbone chain. Two main types of secondary structure, the α-helix and the β-strand or β-sheet, were suggested in 1951 by Linus Pauling. These secondary structures are defined by patterns of hydrogen bonds between the main-chain peptide groups. They have a regular geometry, being constrained to specific values of the dihedral angles ψ and φ on the Ramachandran plot. Both the α-helix and the β-sheet represent a way of saturating all the hydrogen bond donors and acceptors in the peptide backbone. Some parts of the protein are ordered but do not form any regular structures. They should not be confused with random coil, an unfolded polypeptide chain lacking any fixed three-dimensional structure. Several sequential secondary structures may form a "supersecondary unit". Tertiary structure Tertiary structure refers to the three-dimensional structure created by a single protein molecule (a single polypeptide chain). It may include one or several domains. The α-helices and β-pleated-sheets are folded into a compact globular structure. The folding is driven by the non-specific hydrophobic interactions, the burial of hydrophobic residues from water, but the structure is stable only when the parts of a protein domain are locked into place by specific tertiary interactions, such as salt bridges, hydrogen bonds, and the tight packing of side chains and disulfide bonds. The disulfide bonds are extremely rare in cytosolic proteins, since the cytosol (intracellular fluid) is generally a reducing environment. Quaternary structure Quaternary structure is the three-dimensional structure consisting of the aggregation of two or more individual polypeptide chains (subunits) that operate as a single functional unit (multimer). The resulting multimer is stabilized by the same non-covalent interactions and disulfide bonds as in tertiary structure. There are many possible quaternary structure organisations. Complexes of two or more polypeptides (i.e. multiple subunits) are called multimers. Specifically, a complex is called a dimer if it contains two subunits, a trimer if it contains three, a tetramer if it contains four, a pentamer if it contains five, and so forth. The subunits are frequently related to one another by symmetry operations, such as a 2-fold axis in a dimer. Multimers made up of identical subunits are referred to with a prefix of "homo-" and those made up of different subunits are referred to with a prefix of "hetero-", for example, a heterotetramer, such as the two alpha and two beta chains of hemoglobin. Homomers An assemblage of multiple copies of a particular polypeptide chain can be described as a homomer, multimer or oligomer. Bertolini et al. in 2021 presented evidence that homomer formation may be driven by interaction between nascent polypeptide chains as they are translated from mRNA by adjacent ribosomes. Hundreds of proteins have been identified as being assembled into homomers in human cells. The process of assembly is often initiated by the interaction of the N-terminal region of polypeptide chains. Evidence from intragenic complementation that numerous gene products form homomers (multimers) in a variety of organisms was reviewed in 1965.
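The primary structure section above notes that a protein's amino acid sequence can often be read directly from the gene using the genetic code. The following is a minimal, illustrative Python sketch of that mapping; the codon table is deliberately truncated to a handful of entries and the coding sequence is a made-up example, so this is a didactic sketch rather than a complete translator.

```python
# Minimal, illustrative translation of a DNA coding sequence into a
# one-letter amino acid sequence (primary structure). The codon table
# is truncated for brevity; a real implementation would list all 64 codons.
CODON_TABLE = {
    "ATG": "M",                                      # methionine (start)
    "TTT": "F", "TTC": "F",                          # phenylalanine
    "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",  # glycine
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",  # alanine
    "TGT": "C", "TGC": "C",                          # cysteine
    "AAA": "K", "AAG": "K",                          # lysine
    "TAA": "*", "TAG": "*", "TGA": "*",              # stop codons
}

def translate(coding_sequence: str) -> str:
    """Read codons in frame and return the amino acid residues up to a stop."""
    residues = []
    for i in range(0, len(coding_sequence) - 2, 3):
        aa = CODON_TABLE.get(coding_sequence[i:i + 3].upper(), "X")  # X = unknown codon
        if aa == "*":  # stop codon marks the end of the polypeptide chain
            break
        residues.append(aa)
    return "".join(residues)

if __name__ == "__main__":
    # Toy coding sequence: Met-Gly-Phe-Lys-Cys followed by a stop codon.
    print(translate("ATGGGATTTAAATGTTAA"))  # -> "MGFKC"
```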
Domains, motifs, and folds in protein structure Proteins are frequently described as consisting of several structural units. These units include domains, motifs, and folds. Despite the fact that there are about 100,000 different proteins expressed in eukaryotic systems, there are many fewer different domains, structural motifs and folds. Structural domain A structural domain is an element of the protein's overall structure that is self-stabilizing and often folds independently of the rest of the protein chain. Many domains are not unique to the protein products of one gene or one gene family but instead appear in a variety of proteins. Domains often are named and singled out because they figure prominently in the biological function of the protein they belong to; for example, the "calcium-binding domain of calmodulin". Because they are independently stable, domains can be "swapped" by genetic engineering between one protein and another to make chimera proteins. A conservative combination of several domains that occur in different proteins, such as the protein tyrosine phosphatase domain and C2 domain pair, has been called a "superdomain" that may evolve as a single unit. Structural and sequence motifs The structural and sequence motifs refer to short segments of protein three-dimensional structure or amino acid sequence that are found in a large number of different proteins. Supersecondary structure Tertiary protein structures can have multiple secondary elements on the same polypeptide chain. The supersecondary structure refers to a specific combination of secondary structure elements, such as β-α-β units or a helix-turn-helix motif. Some of them may also be referred to as structural motifs. Protein fold A protein fold refers to the general protein architecture, like a helix bundle, β-barrel, Rossmann fold or different "folds" provided in the Structural Classification of Proteins database. A related concept is protein topology. Protein dynamics and conformational ensembles Proteins are not static objects, but rather populate ensembles of conformational states. Transitions between these states typically occur on nanoscales, and have been linked to functionally relevant phenomena such as allosteric signaling and enzyme catalysis. Protein dynamics and conformational changes allow proteins to function as nanoscale biological machines within cells, often in the form of multi-protein complexes. Examples include motor proteins, such as myosin, which is responsible for muscle contraction, kinesin, which moves cargo inside cells away from the nucleus along microtubules, and dynein, which moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "[I]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines...Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics." Proteins are often thought of as relatively stable tertiary structures that experience conformational changes after being affected by interactions with other proteins or as a part of enzymatic activity. However, proteins may have varying degrees of stability, and some of the less stable variants are intrinsically disordered proteins. These proteins exist and function in a relatively 'disordered' state lacking a stable tertiary structure.
As a result, they are difficult to describe by a single fixed tertiary structure. Conformational ensembles have been devised as a way to provide a more accurate and 'dynamic' representation of the conformational state of intrinsically disordered proteins. Protein ensemble files are a representation of a protein that can be considered to have a flexible structure. Creating these files requires determining which of the various theoretically possible protein conformations actually exist. One approach is to apply computational algorithms to the protein data in order to try to determine the most likely set of conformations for an ensemble file. There are multiple methods for preparing data for the Protein Ensemble Database, which fall into two general methodologies: pool-based and molecular dynamics (MD) approaches. The pool-based approach uses the protein's amino acid sequence to create a massive pool of random conformations. This pool is then subjected to further computational processing that creates a set of theoretical parameters for each conformation based on the structure. Conformational subsets from this pool whose average theoretical parameters closely match known experimental data for this protein are selected. The alternative molecular dynamics approach takes multiple random conformations at a time and subjects all of them to experimental data. Here the experimental data serve as constraints to be placed on the conformations (e.g. known distances between atoms). Only conformations that manage to remain within the limits set by the experimental data are accepted. This approach often applies large amounts of experimental data to the conformations, which is a very computationally demanding task. Conformational ensembles have been generated for a number of highly dynamic and partially unfolded proteins, such as Sic1/Cdc4, p15 PAF, MKK7, beta-synuclein and p27. Protein folding As it is translated, a polypeptide exits the ribosome mostly as a random coil and folds into its native state. The final structure of the protein chain is generally assumed to be determined by its amino acid sequence (Anfinsen's dogma). Protein stability Thermodynamic stability of proteins represents the free energy difference between the folded and unfolded protein states. This free energy difference is very sensitive to temperature, hence a change in temperature may result in unfolding or denaturation. Protein denaturation may result in loss of function, and loss of native state. The free energy of stabilization of soluble globular proteins typically does not exceed 50 kJ/mol. Taking into consideration the large number of hydrogen bonds involved in the stabilization of secondary structures, and the stabilization of the inner core through hydrophobic interactions, the free energy of stabilization emerges as a small difference between large numbers. Protein structure determination Around 90% of the protein structures available in the Protein Data Bank have been determined by X-ray crystallography. This method allows one to measure the three-dimensional (3-D) density distribution of electrons in the protein, in the crystallized state, and thereby determine the 3-D coordinates of all the atoms to a certain resolution. Roughly 7% of the known protein structures have been obtained by nuclear magnetic resonance (NMR) techniques. For larger protein complexes, cryo-electron microscopy can determine protein structures.
The resolution is typically lower than that of X-ray crystallography or NMR, but the maximum resolution is steadily increasing. This technique is particularly valuable for very large protein complexes such as virus coat proteins and amyloid fibers. General secondary structure composition can be determined via circular dichroism. Vibrational spectroscopy can also be used to characterize the conformation of peptides, polypeptides, and proteins. Two-dimensional infrared spectroscopy has become a valuable method to investigate the structures of flexible peptides and proteins that cannot be studied with other methods. A more qualitative picture of protein structure is often obtained by proteolysis, which is also useful to screen for more crystallizable protein samples. Novel implementations of this approach, including fast parallel proteolysis (FASTpp), can probe the structured fraction and its stability without the need for purification. Once a protein's structure has been experimentally determined, further detailed studies can be done computationally, using molecular dynamics simulations of that structure. Protein structure databases A protein structure database is a database that is modeled around the various experimentally determined protein structures. The aim of most protein structure databases is to organize and annotate the protein structures, providing the biological community access to the experimental data in a useful way. Data included in protein structure databases often include 3D coordinates as well as experimental information, such as unit cell dimensions and angles for structures determined by X-ray crystallography. Though most entries, whether whole proteins or specific structure determinations of a protein, also contain sequence information, and some databases even provide means for performing sequence-based queries, the primary attribute of a structure database is structural information; sequence databases, by contrast, focus on sequence information and contain no structural information for the majority of entries. Protein structure databases are critical for many efforts in computational biology, such as structure-based drug design, both in developing the computational methods used and in providing a large experimental dataset used by some methods to provide insights about the function of a protein. Structural classifications of proteins Protein structures can be grouped based on their structural similarity, topological class or a common evolutionary origin. The Structural Classification of Proteins database and CATH database provide two different structural classifications of proteins. When the structural similarity is large, the two proteins have possibly diverged from a common ancestor, and shared structure between proteins is considered evidence of homology. Structure similarity can then be used to group proteins together into protein superfamilies. If shared structure is significant but the fraction shared is small, the fragment shared may be the consequence of a more dramatic evolutionary event such as horizontal gene transfer, and joining proteins sharing these fragments into protein superfamilies is no longer justified. Topology of a protein can be used to classify proteins as well. Knot theory and circuit topology are two topology frameworks developed for classification of protein folds based on chain crossing and intrachain contacts, respectively.
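As a concrete illustration of the protein structure databases described above, the sketch below downloads a legacy PDB-format file from the RCSB Protein Data Bank and counts the residues recorded per chain. It is an assumption-laden sketch rather than an official API example: it assumes the https://files.rcsb.org/download/<ID>.pdb URL pattern and the fixed-column ATOM record layout, the entry 1CRN (crambin) is used only as a small example, and error handling is omitted for brevity.

```python
# Illustrative sketch: fetch a PDB-format entry and summarise its chains.
# Assumes the RCSB download URL pattern and the fixed-column ATOM record
# layout of the legacy PDB format (chain ID in column 22, residue number
# in columns 23-26); no error handling or validation is included.
import urllib.request
from collections import defaultdict

def chain_summary(pdb_id: str) -> dict:
    url = f"https://files.rcsb.org/download/{pdb_id}.pdb"
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8", errors="replace")

    residues_per_chain = defaultdict(set)
    for line in text.splitlines():
        if line.startswith("ATOM"):
            chain_id = line[21]            # column 22 (0-based index 21)
            res_seq = line[22:26].strip()  # residue sequence number
            residues_per_chain[chain_id].add(res_seq)
    return {chain: len(res) for chain, res in residues_per_chain.items()}

if __name__ == "__main__":
    # 1CRN (crambin) is a small, commonly used example entry.
    print(chain_summary("1CRN"))
```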
Computational prediction of protein structure Obtaining a protein's sequence is much easier than determining its structure. However, the structure of a protein gives much more insight into the function of the protein than its sequence. Therefore, a number of methods for the computational prediction of protein structure from its sequence have been developed. Ab initio prediction methods use just the sequence of the protein. Threading and homology modeling methods can build a 3-D model for a protein of unknown structure from experimental structures of evolutionarily related proteins, called a protein family.
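Homology modeling, mentioned above, starts by identifying an evolutionarily related protein of known structure to serve as a template. The sketch below illustrates that first step in a deliberately simplified form: a basic Needleman-Wunsch global alignment with unit match/mismatch/gap scores (rather than a real substitution matrix and profile or HMM search) is used to rank candidate template sequences against a query. All sequence names and sequences here are invented for illustration.

```python
# Simplified global alignment (Needleman-Wunsch) used here only to rank
# hypothetical template sequences by similarity to a query sequence.
# Real homology-modelling pipelines use substitution matrices, affine gap
# penalties and profile/HMM searches; this is a didactic sketch.

def global_alignment_score(a: str, b: str, match=1, mismatch=-1, gap=-2) -> int:
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap          # leading gaps in sequence b
    for j in range(1, cols):
        score[0][j] = j * gap          # leading gaps in sequence a
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]               # score of the best global alignment

if __name__ == "__main__":
    query = "MKTAYIAKQR"                # hypothetical query sequence
    templates = {                       # hypothetical templates of known structure
        "templateA": "MKTAYLAKQR",
        "templateB": "MGTEYIGKPR",
    }
    ranked = sorted(templates,
                    key=lambda t: global_alignment_score(query, templates[t]),
                    reverse=True)
    print(ranked)                       # best-scoring candidate template first
```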
Biology and health sciences
Proteins
Biology
969684
https://en.wikipedia.org/wiki/Herd
Herd
A herd is a social group of certain animals of the same species, either wild or domestic. The form of collective animal behavior associated with this is called herding. These animals are known as gregarious animals. The term herd is generally applied to mammals, and most particularly to the grazing ungulates that classically display this behaviour. Different terms are used for similar groupings in other species; in the case of birds, for example, the word is flocking, but flock may also be used for mammals, particularly sheep or goats. Large groups of carnivores are usually called packs, and in nature a herd is classically subject to predation from pack hunters. Special collective nouns may be used for particular taxa (for example a flock of geese, if not in flight, is sometimes called a gaggle) but for theoretical discussions of behavioural ecology, the generic term herd can be used for all such kinds of assemblage. The word herd, as a noun, can also refer to one who controls, possesses and has care for such groups of animals when they are domesticated. Examples of herds in this sense include shepherds (who tend to sheep), goatherds (who tend to goats), and cowherds (who tend to cattle). The structure and size of herds When an association of animals (or, by extension, people) is described as a herd, the implication is that the group tends to act together (for example, all moving in the same direction at a given time), but that this does not occur as a result of planning or coordination. Rather, each individual chooses behaviour that corresponds with that of most other members, possibly through imitation or possibly because all are responding to the same external circumstances. A herd can be contrasted with a coordinated group where individuals have distinct roles. Many human groupings, such as army detachments or sports teams, show such coordination and differentiation of roles, but so do some animal groupings such as those of eusocial insects, which are coordinated through pheromones and other forms of animal communication. A herd is, by definition, relatively unstructured. However, there may be two or a few animals which tend to be imitated by the bulk of the herd more than others. An animal in this role is called a "control animal", since its behaviour will predict that of the herd as a whole. It cannot be assumed, however, that the control animal is deliberately taking a leadership role; control animals are not necessarily socially dominant in conflict situations, though they often are. Group size is an important characteristic of the social environment of gregarious species. Costs and benefits of animals in groups The reason why animals form herds cannot always be stated easily, since the underlying mechanisms are diverse and complex. Understanding the social behaviour of animals and the formation of groups has been a fundamental goal in the field of sociobiology and behavioural ecology. The theoretical framework focuses on the costs and benefits associated with living in groups, in terms of the fitness of each individual compared to living solitarily. Living in groups evolved independently multiple times in various taxa and can only occur if its benefits outweigh the costs within an evolutionary timescale. Thus, animals form groups whenever this increases their fitness compared to living solitarily. The following outlines some of the major effects determining the trade-offs of living in groups.
Dilution effect Perhaps the most studied effect of herds is the so-called dilution effect. The key argument is that the risk of being preyed upon for any particular individual is smaller within a larger group, simply because a predator that encounters the group must single out one individual to attack. Although the dilution effect is influenced by so-called selfish herding, it is primarily a direct effect of group size rather than of position within a herd. Larger groups are more visible and more readily detected by predators, but this relation is not directly proportional and saturates at some point, while the share of the risk borne by each individual once the group is attacked decreases in inverse proportion to group size. Thus, the net effect for an individual in a group concerning its predation risk is beneficial (a simple numerical sketch of this trade-off is given at the end of this article). Whenever groups, such as shoals of fish, synchronize their movements, it becomes harder for predators to focus on particular individuals. However, animals that are weaker, slower or on the periphery are preferred by predators, so that certain positions within the group are better than others (see selfish herd theory). For fit animals, being in a group with such vulnerable individuals may thus decrease the chance of being preyed upon even further. Collective vigilance The effect of collective vigilance in social groups has been widely studied within the framework of optimal foraging theory and animal decision making. While animals under the risk of predation are feeding or resting, they have to stay vigilant and watch for predators. Many studies (especially of birds) have shown that as group size increases, individual animals become less attentive while overall vigilance suffers little (the many eyes effect). This means food intake and other activities related to fitness are optimized in terms of time allocation when animals stay in groups. However, some details of this concept remain unclear. Being the first to detect predators and react accordingly can be advantageous, implying that individuals may not be able to rely fully on the group. Moreover, competition for food can lead to the misuse of warning calls, as was observed for great tits: if food is scarce or monopolized by dominant birds, other birds (mainly subordinates) use antipredatory warning calls to interrupt feeding and gain access to resources. Another study concerning a flock of geese suggested that the benefits of lower vigilance accrued only to those in central positions, because the possibly more vulnerable individuals at the flock's periphery have a greater need to stay attentive. This implies that the decrease in overall vigilance arises simply because the geese on the edge of the flock comprise a smaller fraction of the flock as groups get large. A special case of collective vigilance in groups is that of sentinels. Individuals take turns keeping guard, while all others participate in other activities. Thus, the strength of social bonds and trust within these groups has to be much higher than in the former cases. Foraging Hunting together enables group-living predators, such as wolves and wild dogs, to catch large prey that they are unable to take when hunting alone. Working together significantly improves foraging efficiency, meaning the net energy gain of each individual is increased when animals are feeding collectively. As an example, a group of spinner dolphins is able to corral fish into a smaller volume, which makes catching them easier, as there is less opportunity for the fish to escape.
Furthermore, large groups are able to monopolize resources and defend them against solitary animals or smaller groups of the same or different species. It has been shown that larger groups of lions tend to be more successful in protecting prey from hyenas than smaller ones. Being able to communicate the location and type of food to other group members may increase the chance for each individual to find profitable food sources, a mechanism known to be used by both bees (via the waggle dance) and several species of birds (using specific vocalisations to indicate food). In terms of optimal foraging theory, animals always try to maximize their net energy gain when feeding, because this is positively correlated with their fitness. If their energy requirement is fixed and additional energy does not increase fitness, they will spend as little time foraging as possible (time minimizers). If, on the other hand, the time allocated to foraging is fixed, an animal's gain in fitness is related to the quantity and quality of the resources it feeds on (energy maximizers). Since foraging may be energetically costly (searching, hunting, handling, etc.) and may bring a risk of predation, animals in groups may have an advantage, since their combined effort in locating and handling food will reduce the time needed to forage sufficiently. Thus, animals in groups may have shorter searching and handling times as well as an increased chance of finding (or monopolizing) highly profitable food, which makes foraging in groups beneficial for time minimizers and energy maximizers alike. The obvious disadvantage of foraging in groups is (scramble or direct) competition with other group members. In general, it is clear that the amount of resources available for each individual decreases with group size. If resource availability is critical, competition within the group may become so intense that animals no longer experience benefits from living in groups. Ultimately, it is the relative importance of within-group and between-group competition that determines the optimal group size and each individual's decision whether or not to stay in the group. Diseases and parasites Since animals in groups stay near each other and interact frequently, infectious diseases and parasites spread much more easily among them than among solitary animals. Studies have shown a positive correlation between herd size and intensity of infections, but the extent to which this sometimes drastic reduction in fitness governs group size and structure is still unclear. However, some animals have evolved countermeasures, such as the use of propolis in beehives or grooming in social animals. Energetic advantages Staying together in groups often brings energetic advantages. Birds flying together in a flock use aerodynamic effects to reduce energetic costs, e.g. by positioning themselves in a V-shaped formation. A similar effect can be observed when fish swim together in fixed formations. Another benefit of group living occurs when the climate is harsh and cold: by staying close together, animals achieve better thermoregulation, because their overall surface-to-volume ratio is reduced. Consequently, maintaining adequate body temperatures becomes less energetically costly. Antipredatory behaviour The collective force of a group mobbing predators can reduce the risk of predation significantly. Flocks of ravens are able to actively defend themselves against eagles, and baboons collectively mob lions; neither would be possible for individuals alone.
This behaviour may be based on reciprocal altruism, meaning animals are more likely to help each other if their conspecifics did so earlier. Mating Animals living in groups are more likely to find mates than those living solitarily, and are also able to compare potential partners in order to optimize the genetic quality of their offspring. Domestic herds Domestic animal herds are assembled by humans for practicality in raising them and controlling them. Their behaviour may be quite different from that of wild herds of the same or related species, since both their composition (in terms of the distribution of age and sex within the herd) and their history (in terms of when and how the individuals joined the herd) are likely to be very different. Human parallels The term herd is also applied metaphorically to human beings in social psychology, with the concept of herd behaviour. However, both the term and the concepts that underlie its use are controversial. The term has acquired a semi-technical usage in behavioral finance to describe the largest group of market investors or market speculators who tend to "move with the market", or "follow the general market trend". This is at least a plausible example of genuine herding, though according to some researchers it results from rational decisions through processes such as information cascades and rational expectations. Other researchers, however, ascribe it to non-rational processes such as mimicry and the contagion of fear and greed. "Contrarians" or contrarian investors are those who deliberately choose to invest or speculate counter to the "herd".
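The dilution effect discussed earlier in this article can be illustrated with a toy numerical model. The sketch below rests on stated assumptions rather than empirical data: each group member is assumed to be spotted independently by a predator with probability d, a detected group is assumed to suffer a single attack, and the attack is assumed to fall on each member with equal probability. Under these assumptions, group detection saturates with group size while per-individual risk falls roughly as 1/N.

```python
# Toy model of the dilution effect: group detection probability saturates
# with group size, while the per-individual risk of being the one attacked
# falls roughly as 1/N. All numbers here are illustrative assumptions.

def group_detection_probability(n: int, d: float = 0.05) -> float:
    """Chance the group is detected if each member is spotted independently."""
    return 1.0 - (1.0 - d) ** n

def per_individual_risk(n: int, d: float = 0.05) -> float:
    """Assume one attack per detected group, shared evenly among n members."""
    return group_detection_probability(n, d) / n

if __name__ == "__main__":
    for n in (1, 2, 5, 10, 50, 100):
        print(f"group size {n:>3}: detection {group_detection_probability(n):.3f}, "
              f"per-individual risk {per_individual_risk(n):.4f}")
```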
Biology and health sciences
Ethology
Biology
969943
https://en.wikipedia.org/wiki/Common%20toad
Common toad
The common toad, European toad, or in Anglophone parts of Europe, simply the toad (Bufo bufo, from Latin bufo "toad"), is a toad found throughout most of Europe (with the exception of Ireland, Iceland, parts of Scandinavia, and some Mediterranean islands), in the western part of North Asia, and in a small portion of Northwest Africa. It is one of a group of closely related animals that are descended from a common ancestral line of toads and which form a species complex. The toad is an inconspicuous animal as it usually lies hidden during the day. It becomes active at dusk and spends the night hunting for the invertebrates on which it feeds. It moves with a slow, ungainly walk or short jumps, and has greyish-brown skin covered with wart-like lumps. Although toads are usually solitary animals, in the breeding season, large numbers of toads converge on certain breeding ponds, where the males compete to mate with the females. Eggs are laid in gelatinous strings in the water and later hatch out into tadpoles. After several months of growth and development, these sprout limbs and undergo metamorphosis into tiny toads. The juveniles emerge from the water and remain largely terrestrial for the rest of their lives. The common toad seems to be in decline in part of its range, but overall is listed as being of "least concern" in the IUCN Red List of Threatened Species. It is threatened by habitat loss, especially by drainage of its breeding sites, and some toads get killed on the roads as they make their annual migrations. It has long been associated in popular culture and literature with witchcraft. Taxonomy The common toad was first given the name Rana bufo by the Swedish biologist Carl Linnaeus in the 10th edition of Systema Naturae in 1758. In this work, he placed all the frogs and toads in the single genus Rana. It later became apparent that this genus should be divided, and in 1768, the Austrian naturalist Josephus Nicolaus Laurenti placed the common toad in the genus Bufo, naming it Bufo bufo. The toads in this genus are included in the family Bufonidae, the true toads. Various subspecies of B. bufo have been recognized over the years. The Caucasian toad is found in the mountainous regions of the Caucasus and was at one time classified as B. b. verrucosissima. It has a larger genome and differs from B. bufo morphologically and is now accepted as Bufo verrucosissimus. The spiny toad was classified as B. b. spinosus. It is found in France, the Iberian Peninsula and the Maghreb and grows to a larger size and has a spinier skin than its more northern counterparts with which it intergrades. It is now accepted as Bufo spinosus. The Gredos toad, B. b. gredosicola, is restricted to the Sierra de Gredos, a mountain range in central Spain. It has exceptionally large paratoid glands and its colour tends to be blotched rather than uniform. It is now considered to be a synonym of Bufo spinosus. B. bufo is part of a species complex, a group of closely related species which cannot be clearly demarcated. Several modern species are believed to form an ancient group of related taxa from preglacial times. These are the spiny toad (B. spinosus), the Caucasian toad (B. verrucosissimus) and the Japanese common toad (B. japonicus). The European common toad (Bufo bufo) seems to have arisen more recently. 
It is believed that the range of the ancestral form extended into Asia but that isolation between the eastern and western species complexes occurred as a result of the development of the Central Asian Deserts during the Middle Miocene. The exact taxonomic relationships between these species remains unclear. A serological investigation into toad populations in Turkey undertaken in 2001 examined the blood serum proteins of Bufo verrucosissimus and Bufo spinosus. It found that the differences between the two were not significant and that therefore the former should be synonymized with the latter. A study published in 2012 examined the phylogenetic relationships between the Eurasian and North African species in the Bufo bufo group and indicated a long evolutionary history for the group. Nine to thirteen million years ago, Bufo eichwaldi, a recently described species from south Azerbaijan and Iran, split from the main lineage. Further divisions occurred with Bufo spinosus splitting off about five million years ago when the Pyrenees were being uplifted, an event which isolated the populations in the Iberian Peninsula from those in the rest of Europe. The remaining European lineage split into Bufo bufo and Bufo verrucosissimus less than three million years ago during the Pleistocene. Very occasionally the common toad hybridizes with the natterjack toad (Bufo calamita) or the European green toad (Bufo viridis). Description The common toad can reach about in length. Females are normally stouter than males and southern specimens tend to be larger than northern ones. The head is broad with a wide mouth below the terminal snout which has two small nostrils. There are no teeth. The bulbous, protruding eyes have yellow or copper coloured irises and horizontal slit-shaped pupils. Just behind the eyes are two bulging regions, the paratoid glands, which are positioned obliquely. They contain a noxious substance, bufotoxin, which is used to deter potential predators. The head joins the body without a noticeable neck and there is no external vocal sac. The body is broad and squat and positioned close to the ground. The fore limbs are short with the toes of the fore feet turning inwards. At breeding time, the male develops nuptial pads on the first three fingers. He uses these to grasp the female when mating. The hind legs are short relative to other frogs' legs and the hind feet have long, unwebbed toes. There is no tail. The skin is dry and covered with small wart-like lumps. The colour is a fairly uniform shade of brown, olive-brown or greyish-brown, sometimes partly blotched or banded with a darker shade. The common toad tends to be sexually dimorphic with the females being browner and the males greyer. The underside is a dirty white speckled with grey and black patches. Other species with which the common toad could be confused include the natterjack toad (Bufo calamita) and the European green toad (Bufo viridis). The former is usually smaller and has a yellow band running down its back while the latter has a distinctive mottled pattern. The paratoid glands of both are parallel rather than slanting as in the common toad. The common frog (Rana temporaria) is also similar in appearance but it has a less rounded snout, damp smooth skin, and usually moves by leaping. Common toads can live for many years and have survived for fifty years in captivity. In the wild, common toads are thought to live for about ten to twelve years. 
Their age can be determined by counting the number of annual growth rings in the bones of their phalanges. Distribution and habitat After the common frog (Rana temporaria), the edible frog (Pelophylax esculentus) and the smooth newt (Lissotriton vulgaris), the common toad is the fourth most common amphibian in Europe. It is found throughout the continent with the exception of Iceland, the cold northern parts of Scandinavia, Ireland and a number of Mediterranean islands. These include Malta, Crete, Corsica, Sardinia and the Balearic Islands. Its easterly range extends to Irkutsk in Siberia and its southerly range includes parts of northwestern Africa in the northern mountain ranges of Morocco, Algeria and Tunisia. A closely related variant lives in eastern Asia including Japan. The common toad is found at altitudes of up to in the southern part of its range. It is largely found in forested areas with coniferous, deciduous and mixed woodland, especially in wet locations. It also inhabits open countryside, fields, copses, parks and gardens, and often occurs in dry areas well away from standing water. Behaviour and lifecycle The common toad usually moves by walking rather slowly or in short shuffling jumps involving all four legs. It spends the day concealed in a lair that it has hollowed out under foliage or beneath a root or a stone where its colouring makes it inconspicuous. It emerges at dusk and may travel some distance in the dark while hunting. It is most active in wet weather. By morning it has returned to its base and may occupy the same place for several months. It is voracious and eats woodlice, slugs, beetles, caterpillars, flies, ants, spiders, earthworms and even small mice. Small, fast moving prey may be caught by a flick of the tongue while larger items are grabbed with the jaws. Having no teeth, it swallows food whole in a series of gulps. It does not recognise its prey as such but will try to consume any small, dark coloured, moving object it encounters at night. A research study showed that it would snap at a moving piece of black paper as if it were prey but would disregard a larger moving piece. Toads seem to use visual cues for feeding and can see their prey at low light intensities where humans are unable to discern anything. Periodically, the common toad sheds its skin. This comes away in tattered pieces and is then consumed. In 2007, researchers using a remotely operated underwater vehicle to survey Loch Ness, Scotland, observed a common toad moving along the bottom of the lake at a depth of . They were surprised to find that an air-breathing animal could survive in such a location. The annual life cycle of the common toad is divided into three periods: the winter sleep, the time of mating and feeding period. Predators and parasites When attacked, the common toad adopts a characteristic stance, inflating its body and standing with its hindquarters raised and its head lowered. Its chief means of defence lies in the foul tasting secretion that is produced by its paratoid glands and other glands on its skin. This contains a toxin called bufagin and is enough to deter many predators although grass snakes seem to be unaffected by it. Other predators of adult toads include hedgehogs, rats, mink, and even domestic cats. Birds that feed on toads include herons, crows and birds of prey. Crows have been observed to puncture the skin with their beak and then peck out the toad's liver, thus avoiding the toxin. 
The tadpoles also exude noxious substances which deter fish from eating them but not the great crested newt. Aquatic invertebrates that feed on toad tadpoles include dragonfly larvae, diving beetles and water boatmen. These usually avoid the noxious secretion by puncturing the tadpole's skin and sucking out its juices. A parasitic fly, Lucilia bufonivora, attacks adult common toads. It lays its eggs on the toad's skin and when these hatch, the larvae crawl into the toad's nostrils and eat its flesh internally with lethal consequences. The European fingernail clam (Sphaerium corneum) is unusual in that it can climb up water plants and move around on its muscular foot. It sometimes clings to the toe of a common toad and this is believed to be one of the means by which it disperses to new locations. Reproduction The common toad emerges from hibernation in spring and there is a mass migration towards the breeding sites. The toads converge on certain ponds that they favour while avoiding other stretches of water that seem eminently suitable. Adults use the same location year after year and over 80% of males marked as juveniles have been found to return to the pond at which they were spawned. They find their way to these by using a suite of orientation cues, including olfactory and magnetic cues, but also visual cues help guide their journeys. Toads experimentally moved elsewhere and fitted with tracking devices have been found to be able to locate their chosen breeding pond when the displacement exceeded three kilometres (two miles). The males arrive first and remain in the location for several weeks while the females only stay long enough to mate and spawn. Rather than fighting for the right to mate with a female, male toads may settle disputes by means of the pitch of their voice. Croaking provides a reliable sign of body size and hence of prowess. Nevertheless, fights occur in some instances. In a study at one pond where males outnumbered females by four or five to one, it was found that 38% of the males won the right to mate by defeating rivals in combat or by displacing other males already mounted on females. Male toads generally outnumber female toads at breeding ponds. A Swedish study found that female mortality was higher than that of males and that 41% of females did not come to the breeding pond in the spring and missed a year before reproducing again. The males mount the females' backs, grasping them with their fore limbs under the armpits in a grip that is known as amplexus. The males are enthusiastic, will try to grasp fish or inanimate objects and often mount the backs of other males. Sometimes several toads form a heap, each male trying to grasp the female at the base. It is a stressful period and mortality is high among breeding toads. A successful male stays in amplexus for several days and, as the female lays a long, double string of small black eggs, he fertilises them with his sperm. As the pair wander piggyback around the shallow edges of the pond, the gelatinous egg strings, which may contain 1,500 to 6,000 eggs and be in length, get tangled in plant stalks. The strings of eggs absorb water and swell in size, and small tadpoles hatch out after 10 days. At first they cling to the remains of the strings and feed on the jelly. They later attach themselves to the underside of the leaves of water weed before becoming free swimming. 
The tadpoles at first look similar to those of the common frog (Rana temporaria) but they are a darker colour, being blackish above and dark grey below. They can be distinguished from the tadpoles of other species by the fact that the mouth is the same width as the space between the eyes, and this is twice as large as the distance between the nostrils. Over the course of a few weeks their legs develop and their tail is gradually reabsorbed. By twelve weeks of age, they are miniature toads measuring about long and are ready to leave the pond. Development and growth The common toad reaches maturity at three to seven years old, but there is great variability between populations. Juveniles are often parasitised by the lung nematode Rhabdias bufonis. This slows growth rates and reduces stamina and fitness. Larger juveniles at metamorphosis always outgrow smaller ones that have been reared in more crowded ponds. Even when they have heavy worm burdens, large juveniles grow faster than smaller individuals with light worm burdens. After several months of heavy worm infection, some juveniles in a study were only half as heavy as control juveniles. The parasite-induced anorexia reduced their food intake, and some died. Another study investigated whether the use of nitrogenous fertilisers affects the development of common toad tadpoles. The tadpoles were kept in dilute solutions of ammonium nitrate of various strengths. It was found that at certain concentrations, which were well above any normally found in the field, growth was increased and metamorphosis accelerated, but at others, there was no significant difference between the experimental tadpoles and controls. Nevertheless, certain unusual swimming patterns and a few deformities were found among the experimental animals. A comparison was made between the growth rate of newly metamorphosed juveniles from different altitudes and latitudes, the specimens studied being from Norway, Germany, Switzerland, the Netherlands and France. At first, the growth rates for males and females were identical. By the time they became mature, their growth rate had slowed to about 21% of the initial rate and they had reached 95% of their expected adult size. Some females that were on a biennial breeding cycle carried on growing rapidly for a longer time. Adjusting for differences in temperature and the length of the growing season, the toads from the four colder localities grew and matured at much the same rate. These juveniles reached maturity after 1.09 years for males and 1.55 years for females. However, the young toads from lowland France grew faster and for longer, reaching a much greater size and taking an average of 1.77 years for males and 2.49 years for females to reach maturity. Winter sleep Common toads winter in various holes in the ground, sometimes in basements, often in droves with other amphibians. Rarely, they spend the winter in flowing water alongside common frogs and green frogs. Sperm senescence The post-meiotic intra-testicular sperm of B. bufo undergoes senescence over time as measured by sperm motility. This type of sperm senescence does not occur at a genetically fixed rate, but rather is influenced by environmental conditions that include availability of mating partners and temperature. Conservation The IUCN Red List of Threatened Species considers the common toad to be of "least concern". This is because it has a wide distribution and is, over most of its range, a common species. 
It is not particularly threatened by habitat loss because it is adaptable and is found in deciduous and coniferous forests, scrubland, meadows, parks and gardens. It prefers damp areas with dense foliage. The major threats it faces include loss of habitat locally, the drainage of wetlands where it breeds, agricultural activities, pollution, and mortality on roads. Chytridiomycosis, an infectious disease of amphibians, has been reported in common toads in Spain and the United Kingdom and may affect some populations. There are parts of its range where the common toad seems to be in decline. In Spain, increased aridity and habitat loss have led to a diminution in numbers and it is regarded as "near threatened". A population in the Sierra de Gredos mountain range is facing predation by otters and increased competition from the frog Pelophylax perezi. Both otter and frog seem to be extending their ranges to higher altitudes. The common toad cannot be legally sold or traded in the United Kingdom but there is a slow decline in toad numbers and it has therefore been declared a Biodiversity Action Plan priority species. In Russia, it is considered to be a "Rare Species" in the Bashkortostan Republic, the Tatarstan Republic, the Yamalo-Nenets Autonomous Okrug, and the Irkutsk Oblast, but during the 1990s, it became more abundant in Moscow Oblast. It has been found that urban populations of common toad occupying small areas and isolated by development show a lower level of genetic diversity and reduced fitness as compared to nearby rural populations. The researchers demonstrated this by genetic analysis and by noting the greater number of physical abnormalities among urban as against rural tadpoles when raised in a controlled environment. It was considered that long term depletion in numbers and habitat fragmentation can reduce population persistence in such urban environments. Roadkill Many toads are killed by traffic while migrating to their breeding grounds. In Europe they have the highest rate of mortality from roadkill among amphibians. Many of the deaths take place on stretches of road where streams flow underneath showing that migration routes often follow water courses. In some places in Germany, Belgium, the Netherlands, Great Britain, Northern Italy and Poland, special tunnels have been constructed so that toads can cross under roads in safety. In other places, local wildlife groups run "toad patrols", carrying the amphibians across roads at busy crossing points in buckets. The toads start moving at dusk and for them to travel far, the temperature needs to remain above . On a warm wet night they may continue moving all night but if it cools down, they may stop earlier. An estimate was made of the significance of roadkill in toad populations in the Netherlands. The number of females killed in the spring migration on a quiet country road (ten vehicles per hour) was compared with the number of strings of eggs laid in nearby fens. A 30% mortality rate was found, with the rate for deaths among males likely to be of a similar order. Bufotoxin The main substance found in the parotoid gland and skin of the common toad is called bufotoxin. It was first isolated by Heinrich Wieland and his colleagues in 1922, and they succeeded in identifying its structure about 20 years later. Meanwhile, other researchers succeeded in isolating the same compound (and its parent steroid, bufotalin) from the Japanese toad, Bufo japonicus. 
By 1986, researchers at Arizona State University had succeeded in synthesizing the toad toxin constituents bufotalin, bufalitoxin and bufotoxin. The chemical formula of bufotoxin is C40H60N4O10. Its physical effects resemble those of digoxin, which, in small doses, increases the strength with which the heart muscle contracts; synthesized from foxglove plants (Digitalis purpurea), digoxin is used in the treatment of congestive heart failure. The skin of the South American cane toad contains enough similar toxin to cause serious symptoms (or even death) in animals, including humans. Clinical effects include severe irritation and pain to eyes, mouth, nose and throat, cardiovascular and respiratory symptoms, paralysis and seizures, increased salivation, vomiting, hyperkalemia, cyanosis and hallucinations. There is no known anti-venom. Treatment consists of supporting respiratory and cardiovascular functions, prevention of absorption and electrocardiography to monitor the condition. Atropine, phenytoin, cholestyramine and lidocaine may prove useful in its management. Cultural significance The toad has long been considered to be an animal of ill omen or a connection to a spirit world. This may have its origins in the fact that it is at home both on land and in the water. It may cause repugnance because of its drab, wart-like skin, its slow movements and the way it emerges from some dark hole. In Europe in the Middle Ages, the toad was associated with the Devil, for whom a coat-of-arms was invented emblazoned with three toads. It was known that the toad could poison people and, as the witch's familiar, it was thought to possess magical powers. Even ordinary people made use of dried toads, their bile, faeces and blood. In some areas, the finding of a toad in a house was considered evidence that a witch was present. In the Basque Country, the familiars were believed to be toads wearing elegant robes. These were herded by children who were being trained as witches. Between 1610 and 1612, the Spanish inquisitor Alonso de Salazar Frías investigated witchcraft in the region and searched the houses of suspected witches for dressed toads. He found none. These witches were reputed to use undomesticated toads as ingredients in their liniments and brews. An English folk tale tells how an old woman, a supposed witch, cursed her landlord and all his possessions when he demanded the unpaid rent for her cottage. Soon afterwards, a large toad fell on his wife and caused her to collapse. The toad was thrown into the fire but escaped with severe burns. Meanwhile, the old witch's cottage had caught fire and she was badly burnt. By next day, both toad and witch had died, and it was found that the woman's burns exactly mirrored those of the toad. The saliva of the toad was considered poisonous and was known as "sweltered venom" and it was believed that it could spit or vomit poisonous fire. Toads were associated with devils and demons and in Paradise Lost, John Milton depicted Satan as a toad when he poured poison into Eve's ear. The First Witch in Shakespeare's Macbeth gave instructions on using a toad in the concoction of spells: It was also believed that there was a jewel inside a toad's head, a "toadstone", that when worn as a necklace or ring would warn the wearer of attempts to poison them. Shakespeare mentioned this in As You Like It: Mr. Toad is one of the main characters in the children's novel The Wind in the Willows, by Kenneth Grahame. This has been dramatized by several authors including A. A. 
Milne, who called his play Toad of Toad Hall. Mr. Toad is a conceited, anthropomorphic toad, and in the book he composes a ditty in his own praise which starts like this: George Orwell, in his essay Some Thoughts on the Common Toad, described the emergence of the common toad from hibernation as one of the most moving signs of spring.
Biology and health sciences
Frogs and toads
Animals
970554
https://en.wikipedia.org/wiki/Centaurus%20A/M83%20Group
Centaurus A/M83 Group
The Centaurus A/M83 Group is a complex group of galaxies in the constellations Hydra, Centaurus, and Virgo. The group may be roughly divided into two subgroups. The Cen A Subgroup, at a distance of 11.9 Mly (3.66 Mpc), is centered on Centaurus A, a nearby radio galaxy. The M83 Subgroup, at a distance of 14.9 Mly (4.56 Mpc), is centered on Messier 83 (M83), a face-on spiral galaxy. The group is sometimes identified as a single group and sometimes as two groups. Hence, some references will refer to two objects named the Centaurus A Group and the M83 Group. However, the galaxies around Centaurus A and the galaxies around M83 are physically close to each other, and the two subgroups do not appear to be moving relative to each other. The Centaurus A/M83 Group is part of the Virgo Supercluster, the local supercluster of which the Local Group is an outlying member. Members Member identification The brightest group members were frequently identified in early galaxy group identification surveys. However, many of the dwarf galaxies in the group were only identified in more intensive studies. One of the first of these identified 145 faint objects on optical images from the UK Schmidt Telescope and followed these up in hydrogen line emission with the Parkes Radio Telescope and in the hydrogen-alpha spectral line with the Siding Spring 2.3 m Telescope. This identified 20 dwarf galaxies as members of the group. The HIPASS survey, which was a blind radio survey for hydrogen spectral line emission, found five uncatalogued galaxies in the group and also identified five previously catalogued galaxies as members. An additional dwarf galaxy was identified as a group member in the HIDEEP survey, which was a more intensive radio survey for hydrogen emission within a smaller region of the sky. Several later optical surveys identified 20 more candidate members of the group. In 2007, the Cen A group membership of NGC 5011C was established. While this galaxy is a well-known stellar system listed with an NGC number, its true identity remained hidden because of coordinate confusion and incorrect redshifts in the literature. From 2015 to 2017, a full optical survey was conducted using the Dark Energy Camera, covering 550 square degrees of sky and doubling the number of known dwarf galaxies in this group. Another deep but spatially limited survey around Centaurus A revealed numerous new dwarfs. The dwarf spheroidal galaxies of the Centaurus A group have been studied and have been found to have old, metal-poor stellar populations similar to those in the Local Group, and to follow a similar metallicity–luminosity relation. One dwarf galaxy, KK98 203 (LEDA 166167), has an extended ring of Hα emission. Member list The table below lists galaxies that have been identified as associated with the Centaurus A/M83 Group by I. D. Karachentsev and collaborators. Note that Karachentsev divides this group into two subgroups centered on Centaurus A and Messier 83. Additionally, ESO 219-010, PGC 39032, and PGC 51659 are listed as possibly being members of the Centaurus A Subgroup, and ESO 381-018, NGC 5408, and PGC 43048 are listed as possibly being members of the M83 Subgroup. Although HIPASS J1337-39 is only listed as a possible member of the M83 Subgroup in the later list published by Karachentsev, later analyses indicate that this galaxy is within the subgroup. 
Saviane and Jerjen found that NGC 5011C has an optical redshift of 647 km/s and is therefore a member of the Cen A group rather than of the distant Centaurus galaxy cluster, as had been believed since 1983.
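The subgroup distances quoted above are given both in millions of light-years and in megaparsecs; the two sets of figures are related by the standard conversion 1 parsec ≈ 3.2616 light-years. A minimal check of the quoted pairs, written in Python purely for illustration:

```python
# Check that the Mly and Mpc distances quoted for the two subgroups are mutually
# consistent, using the conversion 1 parsec ≈ 3.2616 light-years.
LY_PER_PC = 3.2616

def mly_to_mpc(d_mly: float) -> float:
    """Convert a distance in millions of light-years to megaparsecs."""
    return d_mly / LY_PER_PC

for name, d_mly in [("Cen A Subgroup", 11.9), ("M83 Subgroup", 14.9)]:
    print(f"{name}: {d_mly} Mly ≈ {mly_to_mpc(d_mly):.2f} Mpc")
    # Prints ≈ 3.65 Mpc and ≈ 4.57 Mpc, agreeing with the quoted 3.66 Mpc and
    # 4.56 Mpc to within the rounding of the input values.
```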
Physical sciences
Notable galaxy clusters
Astronomy
970755
https://en.wikipedia.org/wiki/LinkedIn
LinkedIn
LinkedIn is a business and employment-focused social media platform that works through websites and mobile apps. It was launched on May 5, 2003, by Reid Hoffman and Eric Ly. Since December 2016, LinkedIn has been a wholly owned subsidiary of Microsoft. The platform is primarily used for professional networking and career development, and allows jobseekers to post their CVs and employers to post jobs. Since 2015, most of the company's revenue has come from selling access to information about its members to recruiters and sales professionals; the company has also introduced its own ad portal, LinkedIn Ads, which lets companies advertise on the platform. LinkedIn has more than 1 billion registered members from over 200 countries and territories. LinkedIn allows members (both employees and employers) to create profiles and connect with each other in an online social network which may represent real-world professional relationships. Members can invite anyone (whether an existing member or not) to become a connection. LinkedIn can also be used to organize offline events, join groups, write articles, publish job postings, post photos and videos, and more. Company overview Founded in Mountain View, California, LinkedIn is currently headquartered in Mountain View, with 36 global offices as of February 11, 2024. In February 2024, the company had around 18,500 employees. LinkedIn's current CEO is Ryan Roslansky. Jeff Weiner, previously CEO of LinkedIn, is now serving as the Executive Chairman. Reid Hoffman, founder of LinkedIn, is chairman of the board. It was funded by Sequoia Capital, Greylock, Bain Capital Ventures, Bessemer Venture Partners and the European Founders Fund. LinkedIn reached profitability in March 2006. As of January 2011, the company had received a total of $103 million of investment. LinkedIn filed for an initial public offering in January 2011 and traded its first shares in May, under the NYSE symbol "LNKD". History Founding from 2002 to 2011 The company was founded in December 2002 by Reid Hoffman and the founding team members from PayPal and Socialnet.com (Allen Blue, Eric Ly, Jean-Luc Vaillant, Lee Hower, Konstantin Guericke, Stephen Beitzel, David Eves, Ian McNish, Yan Pujante, Chris Saccheri). In late 2003, Sequoia Capital led the Series A investment in the company. In August 2004, LinkedIn reached 1 million users. In March 2006, LinkedIn achieved its first month of profitability. In April 2007, LinkedIn reached 10 million users. In February 2008, LinkedIn launched a mobile version of the site. In June 2008, Sequoia Capital, Greylock Partners, and other venture capital firms purchased a 5% stake in the company for $53 million, giving the company a post-money valuation of approximately $1 billion. In November 2009, LinkedIn opened its office in Mumbai and soon thereafter in Sydney, as it started its Asia-Pacific team expansion. In 2010, LinkedIn opened an International Headquarters in Dublin, Ireland, received a $20 million investment from Tiger Global Management LLC at a valuation of approximately $2 billion, announced its first acquisition, Mspoke, and improved its 1% premium subscription ratio. In October of that year, Silicon Valley Insider ranked the company No. 10 on its Top 100 List of most valuable startups. By December, the company was valued at $1.575 billion in private markets. 
LinkedIn started its India operations in 2009, and a major part of the first year was dedicated to understanding professionals in India and educating members to leverage LinkedIn for career development. 2011 to present LinkedIn filed for an initial public offering in January 2011. The company traded its first shares on May 19, 2011, under the NYSE symbol "LNKD", at $45 per share. Shares of LinkedIn rose as much as 171% on their first day of trade on the New York Stock Exchange and closed at $94.25, more than 109% above the IPO price. Shortly after the IPO, the site's underlying infrastructure was revised to allow accelerated revision-release cycles. In 2011, LinkedIn earned $154.6 million in advertising revenue alone, surpassing Twitter, which earned $139.5 million. LinkedIn's fourth-quarter 2011 earnings soared as the company's success in the social media world grew. By this point, LinkedIn had about 2,100 full-time employees compared to the 500 that it had in 2010. In April 2014, LinkedIn announced that it had leased 222 Second Street, a 26-story building under construction in San Francisco's SoMa district, to accommodate up to 2,500 of its employees, with the lease covering 10 years. The goal was to join all San Francisco-based staff (1,250 as of January 2016) in one building, bringing sales and marketing employees together with the research and development team. In March 2016, staff started to move in. In February 2016, following an earnings report, LinkedIn's shares dropped 43.6% within a single day, to $108.38 per share. LinkedIn lost $10 billion of its market capitalization that day. In 2016, access to LinkedIn was blocked by Russian authorities for non-compliance with the 2015 national legislation that requires social media networks to store citizens' personal data on servers located in Russia. In June 2016, Microsoft announced that it would acquire LinkedIn for $196 a share, a total value of $26.2 billion. It was the largest acquisition made by Microsoft until the acquisition of Activision Blizzard in 2022. The acquisition would be an all-cash, debt-financed transaction. Microsoft would allow LinkedIn to "retain its distinct brand, culture and independence", with Weiner to remain as CEO, who would then report to Microsoft CEO Satya Nadella. Analysts believed Microsoft saw the opportunity to integrate LinkedIn with its Office product suite to help better integrate the professional network system with its products. The deal was completed on December 8, 2016. In late 2016, LinkedIn announced a planned increase of 200 new positions in its Dublin office, which would bring the total employee count to 1,200. As of 2017, 94% of B2B marketers used LinkedIn to distribute content. Soon after LinkedIn's acquisition by Microsoft, LinkedIn's new desktop version was introduced. The new version was meant to make the user experience similar across mobile and desktop. Some changes were made according to the feedback received from the previously launched mobile app. Features that were not heavily used were removed. For example, the contact tagging and filtering features are no longer supported. Following the launch of the new user interface (UI), some users complained about missing features that had been present in the older version, as well as slowness and bugs. The issues affected both free and premium users, on both the desktop and mobile versions of the site. 
In 2019, LinkedIn globally launched the Open for Business feature, which enables freelancers to be discovered on the platform. LinkedIn Events was launched in the same year. In June 2020, Jeff Weiner stepped down as CEO and became executive chairman after 11 years in the role. Ryan Roslansky stepped up as CEO from his previous position as the senior vice president of product. In late July 2020, LinkedIn announced it had laid off 960 employees, about 6 percent of the total workforce, from the talent acquisition and global sales teams. In an email to all employees, CEO Ryan Roslansky said the cuts were due to the effects of the global COVID-19 pandemic. In April 2021, CyberNews claimed that data from 500 million LinkedIn accounts had been leaked online. However, LinkedIn stated that "We have investigated an alleged set of LinkedIn data that has been posted for sale and have determined that it is actually an aggregation of data from a number of websites and companies". In June 2021, PrivacySharks claimed that more than 700 million LinkedIn records were on sale on a hacker forum. LinkedIn later stated that this was not a breach but scraped data, which is also a violation of its Terms of Service. Microsoft ended LinkedIn operations in China in October 2021. In 2022, LinkedIn earned $13.8 billion in revenue, compared to $10.3 billion in 2021. In May 2023, LinkedIn cut 716 positions from its workforce of 20,000. The move, according to a letter from the company's CEO Ryan Roslansky, was made to streamline the business's operations. Roslansky further stated that this decision would result in the creation of 250 job opportunities. Additionally, LinkedIn announced the discontinuation of its local job apps in China. In June 2024, Axios reported LinkedIn was testing a new AI assistant for its paid Premium users. In September 2024, LinkedIn suspended its use of UK user data for AI model training after concerns were raised by the Information Commissioner's Office (ICO). The platform had quietly opted users in globally to the use of their data for AI training. However, following ICO feedback, LinkedIn paused this practice for UK users. A company spokesperson stated that LinkedIn has always allowed users to control how their data is used and has now provided UK users with an opt-out option. In November 2024, LinkedIn challenged Australian legislation that sought to ban under-16s from social media platforms, on the grounds that it does 'not have content interesting and appealing to minors.' Acquisitions In July 2012, LinkedIn acquired 15 key Digg patents for $4 million, including a "click a button to vote up a story" patent. Perkins lawsuit In 2013, a class action lawsuit entitled Perkins vs. LinkedIn Corp was filed against the company, accusing it of automatically sending invitations to contacts in a member's email address book without permission. The court agreed with LinkedIn that permission had in fact been given for invitations to be sent, but not for the two further reminder emails. LinkedIn settled the lawsuit in 2015 for $13 million. Many members should have received a notice in their email with the subject line "Legal Notice of Settlement of Class Action". The Case No. is 13-CV-04303-LHK. hiQ Labs v. LinkedIn In May 2017, LinkedIn sent a cease-and-desist letter to hiQ Labs, a Silicon Valley startup that collects data from public profiles and provides analysis of this data to its customers. 
The letter demanded that hiQ immediately cease "scraping" data from LinkedIn's servers, claiming violations of the CFAA (Computer Fraud and Abuse Act) and the DMCA (Digital Millennium Copyright Act). In response, hiQ sued LinkedIn in the Northern District of California in San Francisco, asking the court to prohibit LinkedIn from blocking its access to public profiles while the court considered the merits of its request. The court granted a preliminary injunction against LinkedIn, which was then required to allow hiQ to continue collecting public data. LinkedIn appealed this ruling; in September 2019, the appeals court rejected LinkedIn's arguments and the preliminary injunction was upheld. The dispute is ongoing. Membership In 2015, LinkedIn had more than 400 million members in over 200 countries and territories, which was significantly more than competitor Viadeo (50 million as of 2013). In 2011, its membership grew by approximately two new members every second. In 2020, LinkedIn's membership grew to over 690 million members. As of September 2021, LinkedIn had more than 774 million registered members from over 200 countries and territories. In November 2023, LinkedIn reached a member count of one billion. Platform and features User profile network Basic functionality The basic functionality of LinkedIn allows users to create profiles, which for employees typically consist of a curriculum vitae describing their work experience, education and training, skills, and a personal photo. Employers can list jobs and search for potential candidates. Users can find jobs, people and business opportunities recommended by someone in their contact network. Users can save jobs that they would like to apply for. Users also have the ability to follow different companies. The site also enables members to make "connections" to each other in an online social network which may represent real-world professional relationships. Members can invite anyone to become a connection. Users can obtain introductions to the connections of connections (termed second-degree connections) and connections of second-degree connections (termed third-degree connections). A member's list of connections can be used in a number of ways. For example, users can search for second-degree connections who work at a company they are interested in, and then ask a specific first-degree connection in common for an introduction. The "gated-access approach" (where contact with any professional requires either an existing relationship, or the intervention of a contact of theirs) is intended to build trust among the service's users. LinkedIn participated in the EU's International Safe Harbor Privacy Principles. Users can interact with each other in a variety of ways: connections can interact by choosing to "like" posts and "congratulate" others on updates such as birthdays, anniversaries and new positions, as well as by direct messaging; with the introduction of LinkedIn Video, users can share video with text and filters; and users can write posts and articles within the LinkedIn platform to share with their network. Since September 2012, LinkedIn has enabled users to "endorse" each other's skills. However, there is no way of flagging anything other than positive content. LinkedIn solicits endorsements using algorithms that generate skills members might have. Members cannot opt out of such solicitations, with the result that it sometimes appears that a member is soliciting an endorsement for a non-existent skill. 
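The first-, second-, and third-degree connections described above are simply path lengths in the member graph. The sketch below is not LinkedIn's implementation; it is a minimal breadth-first search over a small hypothetical network, showing how a degree of connection could be computed:

```python
from collections import deque

def connection_degree(graph, start, target, max_degree=3):
    """Return the degree of connection (1 = direct, 2 = second-degree, ...)
    between two members, or None if target lies beyond max_degree.
    `graph` maps each member to the set of members they are directly connected to."""
    if start == target:
        return 0
    visited = {start}
    frontier = deque([(start, 0)])
    while frontier:
        member, degree = frontier.popleft()
        if degree == max_degree:
            continue  # do not search beyond the requested horizon
        for neighbour in graph.get(member, ()):
            if neighbour in visited:
                continue
            if neighbour == target:
                return degree + 1
            visited.add(neighbour)
            frontier.append((neighbour, degree + 1))
    return None

# Toy network of hypothetical members: Ana-Ben, Ben-Cal and Cal-Dee are direct connections.
network = {
    "Ana": {"Ben"},
    "Ben": {"Ana", "Cal"},
    "Cal": {"Ben", "Dee"},
    "Dee": {"Cal"},
}
print(connection_degree(network, "Ana", "Cal"))  # 2 -> a second-degree connection
print(connection_degree(network, "Ana", "Dee"))  # 3 -> a third-degree connection
```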
Applications The term LinkedIn 'applications' often refers to external third-party applications that interact with LinkedIn's developer API. However, in some cases, it could refer to sanctioned applications featured on a user's profile page. External, third party applications In February 2015, LinkedIn released updated terms of use for its developer API. The developer API allows both companies and individuals to interact with LinkedIn's data through the creation of managed third-party applications. Applications must go through a review process and request permission from the user before accessing a user's data. Normal use of the API is outlined in LinkedIn's developer documents and includes: signing into external services using LinkedIn; adding items or attributes to a user profile; and sharing items or articles to a user's timeline (a hedged sketch of such a sign-in flow appears below). Embedded in profile In October 2008, LinkedIn enabled an "applications platform" which allows external online services to be embedded within a member's profile page. Among the initial applications were an Amazon Reading List that allows LinkedIn members to display books they are reading, a connection to Tripit, and a Six Apart, WordPress and TypePad application that allows members to display their latest blog postings within their LinkedIn profile. In November 2010, LinkedIn allowed businesses to list products and services on company profile pages; it also permitted LinkedIn members to "recommend" products and services and write reviews. Shortly after, some of the external services were no longer supported, including Amazon's Reading List. Mobile A mobile version of the site was launched in February 2008 and made available in six languages: Chinese, English, French, German, Japanese and Spanish. In January 2011, LinkedIn acquired CardMunch, a mobile app maker that scans business cards and converts them into contacts. In June 2013, CardMunch was noted as an available LinkedIn app. In October 2013, LinkedIn announced a service for iPhone users called "Intro", which inserts a thumbnail of a person's LinkedIn profile in correspondence with that person when reading mail messages in the native iOS Mail program. This is accomplished by re-routing all emails from and to the iPhone through LinkedIn servers, which security firm Bishop Fox asserts has serious privacy implications, violates many organizations' security policies, and resembles a man-in-the-middle attack. Groups LinkedIn also supports the formation of interest groups. In 2012, there were 1,248,019 such groups whose membership varied from 1 to 744,662. Groups support a limited form of discussion area, moderated by the group owners and managers. Groups may be private, accessible to members only, or open to Internet users in general to read, though they must join in order to post messages. Since groups offer the functionality to reach a wide audience without so easily falling foul of anti-spam solutions, there is a constant stream of spam postings, and there now exists a range of firms that offer a spamming service for this very purpose. LinkedIn has devised a few mechanisms to reduce the volume of spam, but recently decided to remove the ability of group owners to inspect the email address of new members in order to determine if they were spammers. Groups also keep their members informed through emails with updates to the group, including the most talked-about discussions within their professional circles. In December 2011, LinkedIn announced that it was rolling out polls to groups. 
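As noted in the developer API outline above, "Sign into external services using LinkedIn" follows the general shape of an OAuth 2.0 authorization-code flow. The snippet below is a hedged, generic sketch rather than LinkedIn's documented client code: the endpoint URLs, parameter names, scope, and credentials are assumptions for illustration and should be checked against the current developer documents.

```python
# Hedged sketch of an OAuth 2.0 authorization-code flow of the kind used for
# "Sign in with LinkedIn". Endpoint URLs, parameter names, scope and credentials
# below are illustrative assumptions, not values confirmed by LinkedIn's docs.
import secrets
import urllib.parse

import requests

AUTH_URL = "https://www.linkedin.com/oauth/v2/authorization"  # assumed
TOKEN_URL = "https://www.linkedin.com/oauth/v2/accessToken"   # assumed
CLIENT_ID = "your-client-id"                                  # hypothetical
CLIENT_SECRET = "your-client-secret"                          # hypothetical
REDIRECT_URI = "https://example.com/callback"                 # hypothetical

def build_authorization_url() -> str:
    """Step 1: send the user's browser here; they log in and approve access."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "state": secrets.token_urlsafe(16),  # CSRF protection
        "scope": "r_liteprofile",            # assumed scope name
    }
    return f"{AUTH_URL}?{urllib.parse.urlencode(params)}"

def exchange_code_for_token(code: str) -> str:
    """Step 2: the callback receives ?code=...; exchange it for an access token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The point of the flow is that the external service never sees the user's LinkedIn password; it only receives a short-lived code and exchanges it for a token whose scope limits what data can be read.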
In November 2013, LinkedIn announced the addition of Showcase Pages to the platform. In 2014, LinkedIn announced that it was removing Product and Services Pages, paving the way for a greater focus on Showcase Pages. Knowledge graph LinkedIn maintains an internal knowledge graph of entities (people, organizations, groups) that helps it connect everyone working in a field or at an organization or network. This can be used to query the neighborhood around each entity to find updates that might be related to it. This also lets LinkedIn train machine learning models that can infer new properties of an entity, or further information that may apply to it, for both summary views and analytics. Discontinued features In January 2013, LinkedIn dropped support for LinkedIn Answers and cited a new 'focus on development of new and more engaging ways to share and discuss professional topics across LinkedIn' as the reason for the retirement of the feature. The feature had been launched in 2007 and allowed users to post questions to their network and to rank answers. In 2014, LinkedIn retired InMaps, a feature that allowed users to visualize their professional network. The feature had been in use since January 2011. According to the company's website, LinkedIn Referrals would no longer be available after May 2018. In September 2021, LinkedIn discontinued LinkedIn Stories, a feature that was rolled out worldwide in October 2020. Usage Personal branding LinkedIn is particularly well-suited for personal branding, which, according to Sandra Long, entails "actively managing one's image and unique value" to position oneself for career opportunities. LinkedIn has evolved from being a mere platform for job searchers into a social network that gives users a chance to create a personal brand. Career coach Pamela Green describes a personal brand as the "emotional experience you want people to have as a result of interacting with you," and a LinkedIn profile is an aspect of that. A contrasting report suggests that a personal brand is "a public-facing persona, exhibited on LinkedIn, Twitter and other networks, that showcases expertise and fosters new connections." LinkedIn allows professionals to build exposure for their brand within the site itself and on the World Wide Web as a whole. With a tool that LinkedIn dubs a Profile Strength Meter, the site encourages users to offer enough information in their profile to optimize visibility by search engines. It can strengthen a user's LinkedIn presence if they belong to professional groups on the site. The site enables users to add video presentations to their profiles, and some users hire a professional photographer for their profile photo. LinkedIn's capabilities have been expanding so rapidly that a cottage industry of outside consultants has grown up to help users navigate the system. A particular emphasis is helping users with their LinkedIn profiles. In October 2012, LinkedIn launched the LinkedIn Influencers program, which features global thought leaders who share their professional insights with LinkedIn's members. As of May 2016, there were more than 750 Influencers. The program is invite-only and features leaders from a range of industries, including Richard Branson, Narendra Modi, Arianna Huffington, Greg McKeown, Rahm Emanuel, Jamie Dimon, Martha Stewart, Deepak Chopra, Jack Welch, and Bill Gates. Job seeking Job seekers and employers widely use LinkedIn. 
According to Jack Meyer, the site has become the "premier digital platform" for professionals to network online. In Australia, which has approximately twelve million working professionals, ten million of them are on LinkedIn, according to Anastasia Santoreneos, suggesting that the probability was high that one's "future employer is probably on the site." According to one estimate based on worldwide figures, 122 million users got job interviews via LinkedIn and 35 million were hired by a LinkedIn online connection. LinkedIn also allows users to research companies, non-profit organizations, and governments they may be interested in working for. Typing the name of a company or organization in the search box causes pop-up data about the company or organization to appear. Such data may include the ratio of female to male employees, the percentage of the most common titles/positions held within the company, the location of the company's headquarters and offices, and a list of present and former employees. In July 2011, LinkedIn launched a new feature allowing companies to include an "Apply with LinkedIn" button on job listing pages. The new plugin allowed potential employees to apply for positions using their LinkedIn profiles as resumes. LinkedIn can help small businesses connect with customers. In the site's parlance, two users have a "first-degree connection" when one accepts an invitation from another. People connected to each of them are "second-degree connections" and persons connected to the second-degree connections are "third-degree connections." This forms a user's internal LinkedIn network, making the user's profile more likely to appear in searches. LinkedIn's Profinder is a marketplace where freelancers can (for a monthly subscription fee) bid for project proposals submitted by individuals and small businesses. In 2017, it had around 60,000 freelancers in more than 140 service areas, such as headshot photography, bookkeeping or tax filing. The premise for connecting with someone has shifted significantly in recent years. Before the 2017 new interface was launched, LinkedIn encouraged connections between people who'd already worked, studied, done business, or the like. Since 2017, that step has been removed from the connection request process - and users are allowed to connect with up to 30,000 people. This change means LinkedIn is a more proactive networking site for job applicants trying to secure a career move or for salespeople wanting to generate new client leads. Top Companies LinkedIn Top Companies is a series of lists published by LinkedIn, identifying companies in the United States, Australia, Brazil, Canada, China, France, Germany, India, Japan, Mexico, South Korea, Spain, and the United Kingdom that are attracting the most intense interest from job candidates. The 2019 lists identified Google's parent company, Alphabet, as the most sought-after U.S. company, with Facebook ranked second and Amazon ranked third. The lists are based on more than one billion actions by LinkedIn members worldwide. The Top Companies lists were started in 2016 and are published annually. The 2021 top list identified Amazon as the top company, with Alphabet ranked second and JPMorgan & Chase Co. ranked third. Top Voices and other rankings Since 2015, LinkedIn has published annual rankings of Top Voices on the platform, recognizing "members that generated the most engagement and interaction with their posts." 
The 2020 lists included 14 industry categories, ranging from data science to sports, as well as 14 country lists, extending from Australia to Italy. LinkedIn also publishes data-driven annual rankings of the Top Startups in more than a dozen countries, based on "employment growth, job interest from potential candidates, engagement, and attraction of top talent." Advertising and for-pay research In 2008, LinkedIn launched LinkedIn DirectAds as a form of sponsored advertising. In October 2008, LinkedIn revealed plans to open its social network of 30 million professionals globally as a potential sample for business-to-business research. It is testing a potential social network revenue model – research that, to some, appears more promising than advertising. On July 23, 2013, LinkedIn announced its Sponsored Updates ad service. Individuals and companies can now pay a fee to have LinkedIn sponsor their content and spread it across LinkedIn's user base. This is a common way for social media sites such as LinkedIn to generate revenue. LinkedIn launched its carousel ads feature in 2018, which at the time was the newest addition to the platform's advertising options. With carousel ads, businesses can showcase their products or services through a series of swipeable cards, each with its own image, headline, and description. They can be used for various marketing objectives, such as promoting a new product launch, driving website traffic, generating leads, or building brand awareness. Business Manager LinkedIn has announced the creation of Business Manager, a centralized platform designed to make it easier for large companies and agencies to manage people, ad accounts, and business pages. Publishing platform In 2015, LinkedIn added an analytics tool to its publishing platform. The tool allows authors to better track the traffic that their posts receive. This functionality has helped LinkedIn attract users who are interested in monitoring the performance of their posts through these analytics. Future plans Economic graph Inspired by Facebook's "social graph", LinkedIn CEO Jeff Weiner set a goal in 2012 to create an "economic graph" within a decade. The goal was to create a comprehensive digital map of the world economy and the connections within it. The economic graph was to be built on the company's current platform with data nodes including companies, jobs, skills, volunteer opportunities, educational institutions, and content. The company has hoped to include all the job listings in the world, all the skills required to get those jobs, all the professionals who could fill them, and all the companies (nonprofit and for-profit) at which they work. The ultimate goal is to make the world economy and job market more efficient through increased transparency. In June 2014, the company announced its "Galene" search architecture to give users access to the economic graph's data with more thorough filtering, via user searches such as "Engineers with Hadoop experience in Brazil." LinkedIn has published blog posts using economic graph data to research several topics on the job market, including popular destination cities of recent college graduates, areas with high concentrations of technology skills, and common career transitions. LinkedIn provided the City of New York with data from the economic graph showing "in-demand" tech skills for the city's "Tech Talent Pipeline" project. 
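Filtered searches of the kind described above, such as "Engineers with Hadoop experience in Brazil", amount to structured queries over member attributes. The toy filter below is not Galene or any LinkedIn code; it is a minimal sketch over a hypothetical list of member records showing the shape of such a query:

```python
# Toy structured search over hypothetical member records, illustrating the kind
# of filtered query ("Engineers with Hadoop experience in Brazil") described above.
members = [
    {"name": "A. Silva", "title": "Software Engineer", "skills": {"Hadoop", "Java"}, "country": "Brazil"},
    {"name": "B. Costa", "title": "Data Engineer",     "skills": {"Spark"},          "country": "Brazil"},
    {"name": "C. Rossi", "title": "Engineer",          "skills": {"Hadoop"},         "country": "Italy"},
]

def search(records, title_contains=None, required_skill=None, country=None):
    """Return records matching every filter that is supplied (None means 'any')."""
    results = []
    for r in records:
        if title_contains and title_contains.lower() not in r["title"].lower():
            continue
        if required_skill and required_skill not in r["skills"]:
            continue
        if country and r["country"] != country:
            continue
        results.append(r)
    return results

for hit in search(members, title_contains="Engineer", required_skill="Hadoop", country="Brazil"):
    print(hit["name"])  # A. Silva
```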
Role in networking LinkedIn has been described by online trade publication TechRepublic as having "become the de facto tool for professional networking". LinkedIn has also been praised for its usefulness in fostering business relationships. "LinkedIn is, far and away, the most advantageous social networking tool available to job seekers and business professionals today," according to Forbes. LinkedIn has inspired the creation of specialised professional networking opportunities, such as co-founder Eddie Lou's Chicago startup, Shiftgig (released in 2012 as a platform for hourly workers). Criticism and controversies Controversial design choices Endorsement feature The feature that allows LinkedIn members to "endorse" each other's skills and experience has been criticized as meaningless, since the endorsements are not necessarily accurate or given by people who have familiarity with the member's skills. In October 2016, LinkedIn acknowledged that it "really does matter who endorsed you" and began highlighting endorsements from "coworkers and other mutual connections" to address the criticism. Use of members' e-mail accounts for sending spam LinkedIn sends "invite emails" to Outlook contacts from its members' email accounts, without obtaining their consent. The "invitations" give the impression that the e-mail holder has sent the invitation personally. If there is no response, the reminder is repeated several times ("You have not yet answered XY's invitation."). LinkedIn was sued in the United States on charges of hijacking e-mail accounts and spamming. The company invoked the right to freedom of expression and argued that the practice helped the users concerned to build their networks. The sign-up process includes users entering their email password (there is an opt-out feature). LinkedIn then offers to send contact invitations to all members in that address book, or to anyone the user has had email conversations with. When the member's email address book is opened, it is opened with all email addresses selected, and the member is advised invitations will be sent to "selected" email addresses, or to all. LinkedIn was sued for sending two further follow-up invitations from members to contacts who had ignored the initial, authorized invitation. In November 2014, LinkedIn lost a motion to dismiss the lawsuit, in a ruling that the invitations were advertisements not broadly protected by free speech rights that would otherwise permit use of people's names and images without authorization. The lawsuit was eventually settled in 2015 in favor of LinkedIn members. Moving emails to LinkedIn servers At the end of 2013, it was reported that the LinkedIn app intercepted users' emails and quietly routed them through LinkedIn servers, giving the company full access; the technique amounted to a man-in-the-middle attack. Security incidents 2012 hack In June 2012, cryptographic hashes of approximately 6.4 million LinkedIn user passwords were stolen by Yevgeniy Nikulin and other hackers who then published the stolen hashes online. This action is known as the 2012 LinkedIn hack. In response to the incident, LinkedIn asked its users to change their passwords. Security experts criticized LinkedIn for not salting its password file and for using a single iteration of SHA-1. On May 31, 2013, LinkedIn added two-factor authentication, an important security enhancement for preventing hackers from gaining access to accounts. 
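The criticism above, that the stolen 2012 password file used unsalted, single-iteration SHA-1, can be made concrete with a short sketch. The code contrasts that scheme with a salted, iterated key-derivation function; it is a generic illustration of the two approaches, not a description of LinkedIn's current implementation:

```python
import hashlib
import os

password = b"correct horse battery staple"

# What was criticized: a single unsalted SHA-1 hash. Identical passwords always
# produce identical digests, so a leaked file can be attacked with precomputed tables.
weak_digest = hashlib.sha1(password).hexdigest()

# A salted, iterated alternative (PBKDF2-HMAC-SHA256): a random per-user salt makes
# every stored digest unique, and many iterations slow down brute-force guessing.
salt = os.urandom(16)
strong_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print("unsalted SHA-1:", weak_digest)
print("salted PBKDF2: ", salt.hex(), strong_digest.hex())
```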
In May 2016, 117 million LinkedIn usernames and passwords were offered for sale online for the equivalent of $2,200. These account details are believed to have been sourced from the original 2012 LinkedIn hack, in which the number of user IDs stolen had been underestimated. To handle the large volume of emails sent to its users every day with notifications for messages, profile views, important happenings in their network, and other things, LinkedIn uses the Momentum email platform from Message Systems. 2021 breaches A breach disclosed in April 2021 affected 500 million users. A breach disclosed in June 2021 was thought to have affected 92% of users, exposing contact and employment information. LinkedIn asserted that the data was aggregated via web scraping from LinkedIn as well as several other sites, and noted that "only information that people listed publicly in their profiles" was included. Malicious behavior on LinkedIn Phishing In what is known as Operation Socialist, documents released by Edward Snowden in the 2013 global surveillance disclosures revealed that British Government Communications Headquarters (GCHQ) (an intelligence and security organisation) infiltrated the Belgian telecommunications network Belgacom by luring employees to a false LinkedIn page. In 2014, Dell SecureWorks Counter Threat Unit (CTU) discovered that Threat Group-2889, an Iran-based group, had created 25 fake LinkedIn accounts. The accounts were either fully developed personas or supporting personas, and the group used spearphishing and malicious websites against its victims. According to reporting by Le Figaro, France's General Directorate for Internal Security and Directorate-General for External Security believe that Chinese spies have used LinkedIn to target thousands of business and government officials as potential sources of information. In 2017, Germany's Federal Office for the Protection of the Constitution (BfV) published information alleging that Chinese intelligence services had created fake social media profiles on sites such as LinkedIn, using them to gather information on German politicians and government officials. In 2022, the company ranked first in a list of brands most likely to be imitated in phishing attempts. In August 2023, several LinkedIn users were targeted by hackers in an account-hijacking and phishing campaign. Users were locked out of their accounts and threatened with permanent account deletion if they did not pay a ransom. False and misleading information LinkedIn has come under scrutiny for its handling of misinformation and disinformation. The platform has struggled to deal with fake profiles and falsehoods about COVID-19 and the 2020 US presidential election. Policies Privacy policy The German consumer organisation Stiftung Warentest has criticized the balance of rights between users and LinkedIn as disproportionate, arguing that it restricts users' rights excessively while granting the company far-reaching rights. It has also been claimed that LinkedIn does not respond to consumer protection center requests. Research on labor market effects In 2010, Social Science Computer Review published research by economists Ralf Caers and Vanessa Castelyns, who sent an online questionnaire to 398 LinkedIn users and 353 Facebook users in Belgium. They found that both sites had become tools for recruiting job applicants for professional occupations and for gathering additional information about applicants, and that this information was being used by recruiters to decide which applicants would receive interviews.
In May 2017, Research Policy published an analysis of PhD holders' use of LinkedIn. It found that PhD holders who move into industry were more likely to have LinkedIn accounts and larger networks of LinkedIn connections, were more likely to use LinkedIn if they had co-authors abroad, and had wider networks if they moved abroad after obtaining their PhD. Also in 2017, sociologist Ofer Sharone conducted interviews with unemployed workers to research the effects of LinkedIn and Facebook as labor market intermediaries. He found that social networking services (SNS) have had a filtration effect that has little to do with evaluations of merit, and that this filtration effect has exerted new pressures on workers to manage their careers to conform to its logic. In October 2018, Foster School of Business professors Melissa Rhee, Elina Hwang, and Yong Tan performed an empirical analysis of whether the common professional networking tactic of job seekers creating LinkedIn connections with professionals who work at a target company or in a target field actually helps in obtaining referrals. They found instead that job seekers were less likely to be referred by employees at the target company or in the target field, owing to job similarity and self-protection from competition. Rhee, Hwang, and Tan further found that referring employees in higher hierarchical positions than the job candidates were more likely to provide referrals, and that gender homophily did not reduce the competition self-protection effect. In July 2019, sociologists Steve McDonald, Amanda K. Damarin, Jenelle Lawhorne, and Annika Wilcox performed qualitative interviews with 61 human resources recruiters in two metropolitan areas in the Southern United States. They found that recruiters filling low- and general-skilled positions typically posted advertisements on online job boards, while recruiters filling high-skilled or supervisor positions targeted passive candidates on LinkedIn (i.e. employed workers not actively seeking work but possibly willing to change positions). They concluded that this is producing a bifurcated, winner-takes-all job market, with recruiters focusing their efforts on poaching already employed high-skilled workers while active job seekers are relegated to hyper-competitive online job boards. In December 2001, the ACM SIGGROUP Bulletin published a study on the use of mobile phones by blue-collar workers that noted that research about tools for blue-collar workers to find work in the digital age was strangely absent, and expressed concern that the absence of such research could lead to technology design choices that would concentrate greater power in the hands of managers rather than workers. In a September 2019 working paper, economists Laurel Wheeler, Robert Garlick, and RTI International scholars Eric Johnson, Patrick Shaw, and Marissa Gargano ran a randomized evaluation of training job seekers in South Africa to use LinkedIn as part of job readiness programs.
The evaluation found that the training increased job seekers' employment by approximately 10 percent by reducing information frictions between job seekers and prospective employers, and that the effect persisted for approximately 12 months. While the training may also have facilitated referrals, it did not reduce job search costs, and the jobs obtained by the treatment and control groups had equal probabilities of retention, promotion, and conversion to a permanent contract. In 2020, Applied Economics published research by economists Steffen Brenner, Sezen Aksin Sivrikaya, and Joachim Schwalbach using LinkedIn data demonstrating that high-status individuals self-select into professional networking services, rather than workers unsatisfied with their career status adversely selecting into the services to receive networking benefits. International restrictions In February 2011, it was reported that LinkedIn was being blocked in China after calls for a "Jasmine Revolution". It was speculated to have been blocked because it is an easy way for dissidents to access Twitter, which had been blocked previously. After a day of being blocked, LinkedIn access was restored in China. In February 2014, LinkedIn launched its Simplified Chinese-language version, officially extending its service in China. LinkedIn CEO Jeff Weiner acknowledged in a blog post that the company would have to censor some of the content that users post on its website in order to comply with Chinese rules, but he also said the benefits of providing its online service to people in China outweighed those concerns. Since autumn 2017, it has not been possible to post jobs for China from Western countries. In 2016, a Moscow court ruled that LinkedIn must be blocked in Russia for violating a data retention law which requires the user data of Russian citizens to be stored on servers within the country. The relevant law had been in force there since 2014. The ban was upheld on November 10, 2016, and all Russian ISPs began blocking LinkedIn thereafter. LinkedIn's mobile app was also banned from the Google Play Store and iOS App Store in Russia in January 2017. In July 2021 it was also blocked in Kazakhstan. In October 2021, after several academics and journalists reported receiving notifications that their profiles would be blocked in China, Microsoft confirmed that LinkedIn would be shut down in China and replaced with InJobs, a China-exclusive app, citing a difficult operating environment and increasing compliance requirements. In May 2023, LinkedIn announced that it would be phasing out the app by 9 August 2023. Account banning Since 2022, LinkedIn has been removing accounts that do not follow its criteria without giving users any prior notice. Open-source contributions Since 2010, LinkedIn has contributed several internal technologies, tools, and software products to the open source domain. Notable among these projects is Apache Kafka, which was built and open sourced at LinkedIn in 2011. Research using data from the platform Massive amounts of data from LinkedIn allow scientists and machine learning researchers to extract insights and build product features. For example, this data can help to identify patterns of deception in resumes. Findings suggested that people commonly lie about their hobbies rather than their work experience on online resumes.
Technology
Social network and blogging
null
14363037
https://en.wikipedia.org/wiki/Trans-European%20high-speed%20rail%20network
Trans-European high-speed rail network
The Trans-European high-speed rail network (TEN-R), together with the Trans-European conventional rail network, make up the Trans-European Rail network, which in turn is one of a number of the European Union's Trans-European transport networks (TEN-T). It was defined by Council Directive 96/48/EC of 23 July 1996. The European Union council decision 2002/735/EC defines technical standards for interoperability of the system. Description The aim of this EU Directive is to achieve the interoperability of the European high-speed train network at the various stages of its design, construction and operation. The network is defined as a system consisting of a set of infrastructures, fixed installations, logistic equipment and rolling stock. By the definition of the EC decision, a high-speed line must have one of these three infrastructure characteristics: specially built high-speed lines equipped for speeds generally equal to or greater than 250 km/h; specially upgraded high-speed lines equipped for speeds of the order of 200 km/h; or specially upgraded high-speed lines which have special features as a result of topographical, relief or town-planning constraints, on which the speed must be adapted to each case. The rolling stock used on these lines must be compatible with the characteristics of the infrastructure. Along important listed rail routes (TEN-T), the railway shall be of the high-speed type, either when new sections are built or when upgrades are made, which creates a quality requirement for these projects. Corridors Corridor 1 – Berlin–Palermo Corridor 2 – London, Paris, Amsterdam and Cologne to Brussels Corridor 3 – Lisbon–Madrid Corridor 4 – LGV Est Corridor 6 – Lyon–Budapest Corridor 7 – Paris–Bratislava
Technology
Ground transportation networks
null
7108409
https://en.wikipedia.org/wiki/Trojan%20%28celestial%20body%29
Trojan (celestial body)
In astronomy, a trojan is a small celestial body (mostly asteroids) that shares the orbit of a larger body, remaining in a stable orbit approximately 60° ahead of or behind the main body near one of its Lagrangian points, L4 and L5. Trojans can share the orbits of planets or of large moons. Trojans are one type of co-orbital object. In this arrangement, a star and a planet orbit about their common barycenter, which is close to the center of the star because it is usually much more massive than the orbiting planet. In turn, a much smaller mass than both the star and the planet, located at one of the Lagrangian points of the star–planet system, is subject to a combined gravitational force that acts through this barycenter. Hence the smallest object orbits around the barycenter with the same orbital period as the planet, and the arrangement can remain stable over time. In the Solar System, most known trojans share the orbit of Jupiter. They are divided into the Greek camp at L4 (ahead of Jupiter) and the Trojan camp at L5 (trailing Jupiter). More than a million Jupiter trojans larger than one kilometer are thought to exist, of which more than 7,000 are currently catalogued. In other planetary orbits only nine Mars trojans, 31 Neptune trojans, two Uranus trojans, and two Earth trojans have been found to date. A temporary Venus trojan is also known. Numerical orbital dynamics stability simulations indicate that Saturn probably does not have any primordial trojans. The same arrangement can appear when the primary object is a planet and the secondary is one of its moons, whereby much smaller trojan moons can share its orbit. All known trojan moons are part of the Saturn system. Telesto and Calypso are trojans of Tethys, and Helene and Polydeuces of Dione. Trojan minor planets In 1772, the Italian–French mathematician and astronomer Joseph-Louis Lagrange obtained two constant-pattern solutions (collinear and equilateral) of the general three-body problem. In the restricted three-body problem, with one mass negligible (which Lagrange did not consider), the five possible positions of that mass are now termed Lagrange points. The term "trojan" originally referred to the "trojan asteroids" (Jovian trojans) that orbit close to the Lagrangian points of Jupiter. These have long been named for figures from the Trojan War of Greek mythology. By convention, the asteroids orbiting near the L4 point of Jupiter are named for the characters from the Greek side of the war, whereas those orbiting near the L5 point of Jupiter are from the Trojan side. There are two exceptions, named before the convention was adopted: 624 Hektor in the L4 group, and 617 Patroclus in the L5 group. Astronomers estimate that the Jovian trojans are about as numerous as the asteroids of the asteroid belt. Later on, objects were found orbiting near the Lagrangian points of Neptune, Mars, Earth, Uranus, and Venus. Minor planets at the Lagrangian points of planets other than Jupiter may be called Lagrangian minor planets. Four Martian trojans are known, among them 5261 Eureka; one of them is the only trojan body in the leading "cloud" at L4. Three further candidates have been reported but have not yet been accepted by the Minor Planet Center. There are 28 known Neptunian trojans, but the large Neptunian trojans are expected to outnumber the large Jovian trojans by an order of magnitude. The first known Earth trojan was confirmed in 2011. It is located at the L4 Lagrangian point, which lies ahead of the Earth. Another Earth trojan was found in 2021, also at L4.
The first Uranus trojan, identified in 2013, is located at the L4 Lagrangian point. A second one was announced in 2017. A temporary Venusian trojan, the first to be identified, is also known. The large asteroids Ceres and Vesta have temporary trojans. Saturn has one known trojan at the L4 Lagrangian point, 2019 UO14. Trojans by planet Stability Whether or not a system of star, planet, and trojan is stable depends on how large the perturbations are to which it is subject. If, for example, the planet is the mass of Earth, and there is also a Jupiter-mass object orbiting that star, the trojan's orbit would be much less stable than if the second planet had the mass of Pluto. As a rule of thumb, the system is likely to be long-lived if m1 > 100m2 > 10,000m3 (in which m1, m2, and m3 are the masses of the star, planet, and trojan). More formally, in a three-body system with circular orbits, the stability condition is 27(m1m2 + m2m3 + m3m1) < (m1 + m2 + m3)². So for a trojan that is a mote of dust, m3 → 0, this imposes a lower bound on m1/m2 of (25 + √621)/2 ≈ 24.9599. And if the star were hyper-massive, m1 → +∞, then under Newtonian gravity the system is stable whatever the planet and trojan masses. And if m1/m2 = m2/m3, then both ratios must exceed 13 + √168 ≈ 25.9615. However, this all assumes a three-body system; once other bodies are introduced, even if distant and small, stability of the system requires even larger ratios.
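As a quick numerical illustration of the circular-orbit stability condition quoted above, the sketch below checks the inequality 27(m1m2 + m2m3 + m3m1) < (m1 + m2 + m3)² and the mote-of-dust bound on m1/m2; the mass values are approximate figures assumed for the example.

```python
# Illustrative check of the three-body (circular orbit) trojan stability condition.
import math

def is_stable(m1: float, m2: float, m3: float) -> bool:
    """True if 27*(m1*m2 + m2*m3 + m3*m1) < (m1 + m2 + m3)**2."""
    return 27 * (m1 * m2 + m2 * m3 + m3 * m1) < (m1 + m2 + m3) ** 2

# Approximate masses in kilograms (assumed values for illustration).
m_sun, m_jupiter, m_trojan = 1.989e30, 1.898e27, 1e15

print(is_stable(m_sun, m_jupiter, m_trojan))     # True: Jupiter trojans can persist
print(is_stable(m_sun, m_sun / 20, m_trojan))    # False: the companion is far too massive

# For a negligible trojan mass (m3 -> 0) the condition reduces to
# m1/m2 > (25 + sqrt(621))/2, roughly 24.96.
print((25 + math.sqrt(621)) / 2)                 # ~24.9599
```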
Physical sciences
Planetary science
Astronomy
154237
https://en.wikipedia.org/wiki/Ringtail
Ringtail
The ringtail (Bassariscus astutus) is a mammal of the raccoon family native to arid regions of North America. It is widely distributed and well adapted to the areas it inhabits. It has been legally trapped for its fur. Globally, it is listed as Least Concern on the IUCN Red List, but it is a Conservation Strategy Species in Oregon and Fully Protected in California. The species is known by a variety of names, such as ring-tailed cat, miner's cat, civet cat, and cacomistle (or cacomixtle), though the last of these can refer to B. sumichrasti. The ringtail is the state mammal of Arizona. Description The ringtail is black to dark brown in color with pale underparts. The animal has a pointed muzzle with long whiskers, similar to that of a fox (its Latin name means 'clever little fox'), and its body resembles that of a cat. The ringtail's face resembles a mask, as dark brown and black hair surround its eyes. These animals are characterized by a long black and white "ringed" tail with 14–16 stripes, which is about the same length as its body. Ringtails are primarily nocturnal, with large eyes and upright ears that make it easier for them to navigate and forage in the dark. An adept climber, it uses its long tail for balance. The rings on its tail can also act as a distraction for predators: the white rings act as a target, so when the tail rather than the body is caught, the ringtail has a greater chance of escaping. The claws are short, straight, and semi-retractable, well suited for climbing. Smaller than a house cat, it is one of the smallest extant procyonids (only the smallest members of the olingo species group average smaller). Its body alone measures and its tail averages from its base. It typically weighs around . Its dental formula is = 40. The ankle joint is flexible and is able to rotate over 180 degrees, making the animal an agile climber. The long tail provides balance for negotiating narrow ledges and limbs, even allowing individuals to reverse directions by performing a cartwheel. Ringtails also can ascend narrow passages by stemming (pressing all feet on one wall and their back against the other, or pressing both right feet on one wall and both left feet on the other), and wider cracks or openings by ricocheting between the walls. As adults, these mammals lead solitary lives, generally coming together only to mate. A typical call is a very loud, plaintive bark. They produce a variety of sounds, including clicks and chatters reminiscent of raccoons. Ringtails have been reported to exhibit fecal marking behavior as a form of intraspecific communication to define territory boundaries or attract potential mates. It has been suggested that ringtails use feces as a way to mark territory. In 2003, a study in Mexico City found that ringtails tended to defecate in similar areas in a seemingly nonrandom pattern, mimicking that of other carnivores that utilize excretions to mark territories. Ringtails prefer a solitary existence but may share a den or be found mutually grooming one another. They exhibit limited interaction except during the breeding season, which occurs in the early spring. Ringtails can survive for long periods on water derived from food alone, and have urine that is more concentrated than that of any other mammal studied, an adaptation that allows for maximum water retention. Reproduction Ringtails mate in the spring. The gestation period is 45–50 days, during which the male will procure food for the female. A litter typically contains two to four kits.
The kits open their eyes after one month, and will hunt for themselves after four months. They reach sexual maturity at 10 months. The ringtail's lifespan in the wild is about seven years. Range and habitat The ringtail is commonly found in rocky desert habitats, where it nests in the hollows of trees or abandoned wooden structures. It has been found throughout the Great Basin Desert, which stretches over several states (Nevada, Utah, California, Idaho, and Oregon), as well as the Sonoran Desert in Arizona and the Chihuahuan Desert in New Mexico, Texas, and northern Mexico. The ringtail also prefers rocky habitats associated with water, such as riparian canyons, caves, or mine shafts. In areas with a bountiful source of water, as many as 50 ringtails/sq. mile (20/km2) have been found. The territories of male ringtails occasionally overlap those of several females. The ringtail is found in the Southwestern United States in southern Oregon, California, eastern Kansas, Oklahoma, Arizona, New Mexico, Colorado, southern Nevada, Utah, Louisiana and Texas. In Mexico it ranges from the northern desert state of Baja California to Oaxaca. Its distribution overlaps that of B. sumichrasti in the Mexican states of Guerrero, Oaxaca, and Veracruz. Fossils assigned to B. astutus dating back to the early Pliocene epoch have been found as far north as Washington. Diet Small vertebrates such as passerine birds, rats, mice, squirrels, rabbits, snakes, lizards, frogs, and toads are the most important foods during winter. However, the ringtail is omnivorous, as are all procyonids. Berries and insects are important in the diet year-round, and become the primary part of the diet in spring and summer, along with other fruit. As an omnivore the ringtail enjoys a variety of foods in its diet, the majority of which is made up of animal matter. Insects and small mammals such as rabbits, mice, rats and ground squirrels are some examples of the ringtail's carnivorous tendencies. Occasionally the ringtail will also eat fish, lizards, birds, snakes and carrion. The ringtail also eats juniper berries, hackberries, blackberries, persimmon, prickly pear, and fruit in general. They have even been observed partaking from birdseed feeders, hummingbird feeders, sweet nectar or sweetened water. The results of a study of scat from ringtails on Isla San José, Baja California Sur, showed that the ringtail tended to prey on whatever was most abundant during each respective season. During the spring the ringtail's diet consisted largely of insects, which showed up in about 50% of the analyzed feces. Small rodents, snakes, and some lizards were also present. Plant matter was present in large amounts; around 59% of the collected feces contained some type of plant, with fruits of Phaulothamnus, Lycium, and Solanum most common. The large amount of ironwood seeds and leaves demonstrated that these fleshy fruits were an obvious favorite of the ringtail. Ecology Foxes, coyotes, raccoons, bobcats, hawks, and owls opportunistically prey upon ringtails of all ages, though predominantly on younger, more vulnerable specimens. Also occasional prey to coatis, lynxes, and mountain lions, the ringtail is rather adept at avoiding predators. The ringtail's success in deterring potential predators is largely attributed to its ability to excrete musk when startled or threatened. The main predators of the ringtail are the great horned owl and the red-tailed hawk.
Ringtails have occasionally been hunted for their pelts, but the fur is not especially valuable. Fur trapping has slowed down considerably, but current population sizes and growth rates remain unclear. Tameability Ringtails are said to be easily tamed and habituated to humans, and can make affectionate pets and effective mousers. Miners and settlers once kept pet ringtails to keep their cabins free of vermin; hence the common name of "miner's cat".
Biology and health sciences
Procyonidae
Animals
154406
https://en.wikipedia.org/wiki/Procyonidae
Procyonidae
Procyonidae ( ) is a New World family of the order Carnivora. It includes the raccoons, ringtails, cacomistles, coatis, kinkajous, olingos, and olinguitos. Procyonids inhabit a wide range of environments and are generally omnivorous. Characteristics Procyonids are relatively small animals, with generally slender bodies and long tails, though the common raccoon tends to be bulky. Because of their general build, the Procyonidae are often popularly viewed as smaller cousins of the bear family. This is apparent in their German name, Kleinbären (small bears), including the names of the species: a raccoon is called a Waschbär (washing bear, as it "washes" its food before eating), a coati is a Nasenbär (nose-bear), while a kinkajou is a Honigbär (honey-bear). Dutch follows suit, calling the animals wasbeer, neusbeer and rolstaartbeer (curl-tail bear) respectively. However, it is now believed that procyonids are more closely related to mustelids than to bears. Procyonids share common morphological characteristics including a shortened rostrum, absent alisphenoid canals, and a relatively flat mandibular fossa. Kinkajous have unique morphological characteristics consistent with their arboreally adapted locomotion, including a prehensile tail and unique femoral structure. Due to their omnivorous diet, procyonids have lost some of the adaptations for flesh-eating found in their carnivorous relatives. While they do have carnassial teeth, these are poorly developed in most species, especially the raccoons. Apart from the kinkajou, procyonids have the dental formula: for a total of 40 teeth. The kinkajou has one fewer premolar in each row: for a total of 36 teeth. Most members of Procyonidae are solitary; however, some species form groups. Coati females will form bands of 4 to 24 individuals that forage together, while kinkajous have been found to form social groups of two males and one female. Certain procyonids give birth to one offspring like ringtails, olingos, and kinkajous while raccoons and coatis give birth to litters that range in size from 2 to 6 offspring. Evolution Procyonid fossils once believed to belong to the genus Bassariscus, which includes the modern ringtail and cacomistle, have been identified from the Miocene epoch, around 20 million years (Ma) ago. It has been suggested that early procyonids were an offshoot of the canids that adapted to a more omnivorous diet. The recent evolution of procyonids has been centered on Central America (where their diversity is greatest); they entered the formerly isolated South America as part of the Great American Interchange, beginning about 7.3 Ma ago in the late Miocene, with the appearance of Cyonasua. Some fossil procyonids such as Stromeriella were also present in the Old World, before going extinct in the Pliocene. Genetic studies have shown that kinkajous are a sister group to all other extant procyonids; they split off about 22.6 Ma ago. The clades leading to coatis and olingos on one branch, and to ringtails and raccoons on the other, separated about 17.7 Ma ago. The divergence between olingos and coatis is estimated to have occurred about 10.2 Ma ago, at about the same time that ringtails and raccoons parted ways. The separation between coatis and mountain coatis is estimated to have occurred 7.7 Ma ago. Classification There has been considerable historical uncertainty over the correct classification of several members. 
The red panda was previously classified in this family, but it is now classified in its own family, the Ailuridae, based on molecular biology studies. The status of the various olingos was disputed: some regarded them all as subspecies of Bassaricyon gabbii before DNA sequence data demonstrated otherwise. The traditional classification scheme shown below on the left predates the recent revolution in our understanding of procyonid phylogeny based on genetic sequence analysis. This outdated classification groups kinkajous and olingos together on the basis of similarities in morphology that are now known to be an example of parallel evolution; similarly, coatis are shown as being most closely related to raccoons, when in fact they are closest to olingos. Below right is a cladogram showing the results of molecular studies . Genus Nasuella was not included in these studies, but in a separate study was found to nest within Nasua. FAMILY PROCYONIDAE Subfamily Procyoninae (nine species in four genera) Tribe Procyonini Subtribe Procyonina Raccoons, Procyon Crab-eating raccoon, Procyon cancrivorus Cozumel raccoon, Procyon pygmaeus Common raccoon, Procyon lotor Subtribe Nasuina Nasua South American coati or ring-tailed coati, Nasua nasua White-nosed coati, Nasua narica Nasuella Western mountain coati, Nasuella olivacea Eastern mountain coati, Nasuella meridensis Tribe Bassariscini Bassariscus Ringtail, Bassariscus astutus Cacomistle, Bassariscus sumichrasti Subfamily Potosinae (five species in two genera) Potos Kinkajou, Potos flavus Bassaricyon Northern olingo or Gabbi's olingo, Bassaricyon gabbii Eastern lowland olingo, Bassaricyon alleni Western lowland olingo, Bassaricyon medius Olinguito, Bassaricyon neblina Phylogeny Several recent molecular studies have resolved the phylogenetic relationships between the procyonids, as illustrated in the cladogram below. Extinct taxa Below is a list of extinct taxa (many of which are fossil genera and species) compiled in alphabetical order under their respective subfamilies. Procyonidae J.E. Gray, 1825 †Broilianinae Dehm, 1950 †Broiliana Dehm, 1950 †B. dehmi Beaumont & Mein, 1973 †B. nobilis Dehm, 1950 †Stromeriella Dehm, 1950 †S. depressa Morlo, 1996 †S. franconica Dehm, 1950 Potosinae Trouessart, 1904 †Parapotos J.A. Baskin, 2003 †P. tedfordi J.A. Baskin, 2003 Procyoninae J.E. Gray, 1825 †Arctonasua J.A. Baskin, 1982 †A. eurybates J.A. Baskin, 1982 †A. fricki J.A. Baskin, 1982 †A. floridana J.A. Baskin, 1982 †A. gracilis J.A. Baskin, 1982 †A. minima J.A. Baskin, 1982 †Bassaricyonoides J.A. Baskin & Morea, 2003 †B. stewartae J.A. Baskin & Morea, 2003 †B. phyllismillerae J.A. Baskin & Morea, 2003 Bassariscus Coues, 1887 †B. antiquus Matthew & Cook, 1909 †B. casei Hibbard, 1952 †B. minimus J.A. Baskin, 2004 †B. ogallalae Hibbard, 1933 †B. parvus Hall, 1927 †Chapalmalania Ameghino, 1908 †C. altaefrontis Kraglievich & Olazábal, 1959 †C. ortognatha Ameghino, 1908 †Cyonasua Ameghino, 1885 [=Amphinasua Moreno & Mercerat, 1891; Brachynasua Ameghino & Kraglievich 1925; Pachynasua Ameghino, 1904] †C. argentina Ameghino 1885 †C. argentinus (Burmeister, 1891) †C. brevirostris (Moreno & Mercerat, 1891) [=Amphinasua brevirostris Moreno & Mercerat, 1891] †C. clausa (Ameghino, 1904) [=Pachynasua clausa Ameghino, 1904] †C. groeberi Kraglievich & Reig, 1954 [=Amphinasua groeberi Cabrera, 1936] †C. longirostris (Rovereto, 1914) †C. lutaria (Cabrera, 1936) [=Amphinasua lutaria Cabrera, 1936] †C. 
meranii (Ameghino & Kraglievich 1925) [=Brachynasua meranii Ameghino & Kraglievich 1925] †C. pascuali Linares, 1981 [=Amphinasua pascuali Linares, 1981] †C. robusta (Rovereto, 1914) †Edaphocyon Wilson, 1960 †E. lautus J.A. Baskin, 1982 †E. palmeri J.A. Baskin & Morea, 2003 †E. pointblankensis Wilson, 1960 Nasua Storr, 1780 †N. pronarica Dalquest, 1978 †N. mastodonta Emmert & Short, 2018 †N. nicaeensis Holl, 1829 †Parahyaenodon Ameghino, 1904 †P. argentinus Ameghino, 1904 †Paranasua J.A. Baskin, 1982 †P. biradica J.A. Baskin, 1982 †Probassariscus Merriam, 1911 †P. matthewi Merriam, 1911 Procyon Storr, 1780 †P. gipsoni Emmert & Short, 2018 †P. megalokolos Emmert & Short, 2018 †P. rexroadensis Hibbard, 1941 †Protoprocyon Linares, 1981 [=Lichnocyon J.A. Baskin, 1982] †P. savagei Linares, 1981 [=Lichnocyon savagei J.A. Baskin, 1982] †Tetraprothomo Ameghino, 1908 †T. argentinus Ameghino, 1908
Biology and health sciences
Carnivora
null
154456
https://en.wikipedia.org/wiki/Fathom
Fathom
A fathom is a unit of length in the imperial and the U.S. customary systems equal to 6 feet (1.8288 m), used especially for measuring the depth of water. The fathom is neither an international standard (SI) unit, nor an internationally accepted non-SI unit. Historically it was the maritime measure of depth in the English-speaking world but, apart from within the US, charts now use metres. There are two yards (6 feet) in an imperial fathom. Originally the span of a man's outstretched arms, the size of a fathom has varied slightly depending on whether it was defined as a thousandth of an (Admiralty) nautical mile or as a multiple of the imperial yard. Formerly, the term was used for any of several units of length varying around this value. Etymology The term derives (via Middle English fathme) from the Old English fæðm, which is cognate with the Danish word favn (via the Vikings) and means "embracing arms" or "pair of outstretched arms". It may also be cognate with the Old High German word "fadum", which has the same meaning and also means "yarn (originally stretching between the outstretched fingertips)". Forms Ancient fathoms The Ancient Greek measure known as the orguia (orgyiá, "outstretched") is usually translated as "fathom". By the Byzantine period, this unit came in two forms: a "simple orguia" (haplē orguiá) roughly equivalent to the old Greek fathom (6 Byzantine feet) and an "imperial" (basilikē) or "geometric orguia" (geōmetrikē orguiá) that was one-eighth longer (6 feet and a span). International fathom One international fathom is equal to 1.8288 metres exactly (the official international definition of the fathom). British fathom The British Admiralty defined a fathom to be a thousandth of an imperial nautical mile (which was 6080 ft), or 6.08 feet. In practice the "warship fathom" of exactly 6 feet was used in Britain and the United States. No conflict between the definitions existed in practice, since depths on imperial nautical charts were indicated in feet for shallow water and in fathoms for greater depths. Until the 19th century in England, the length of the fathom was more variable, differing between merchant vessels and fishing vessels. Other definitions Other definitions of the fathom include: 1.828804 m (an obsolete measurement of the fathom based on the US survey foot, now used only for historical and legacy applications); 2 yards exactly; and 18 hands. One metre is about 0.5468 fathoms. In the international yard and pound agreement of 1959 the United States, Australia, Canada, New Zealand, South Africa, and the United Kingdom defined the length of the international yard to be exactly 0.9144 metre. In 1959 the United States kept the US survey foot as the basis of its definition of the fathom. In October 2019, the U.S. National Geodetic Survey and the National Institute of Standards and Technology announced their joint intent to retire the U.S. survey foot, with effect from the end of 2022. The fathom in U.S. customary units is thereafter defined based on the international 1959 foot, giving the length of the fathom as exactly 1.8288 metres in the United States as well. Derived units At one time, a quarter meant one-quarter of a fathom. A cable length, based on the length of a ship's cable, has been variously reckoned as equal to 100 or 120 fathoms. Use of the fathom Water depth Most modern nautical charts indicate depth in metres. However, the U.S. Hydrographic Office uses feet and fathoms. A nautical chart will always explicitly indicate the units of depth used.
To measure the depth of shallow waters, boatmen used a sounding line containing fathom points, some marked and others in between, called deeps, unmarked but estimated by the user. Water near the coast and not too deep to be fathomed by a hand sounding line was referred to as in soundings or on soundings. The area offshore beyond the 100 fathom line, too deep to be fathomed by a hand sounding line, was referred to as out of soundings or off soundings. A deep-sea lead, the heaviest of sounding leads, was used in water exceeding 100 fathoms in depth. This technique has been superseded by sonic depth finders for measuring mechanically the depth of water beneath a ship, one version of which is the Fathometer (trademark). The record made by such a device is a fathogram. A fathom line or fathom curve, a usually sinuous line on a nautical chart, joins all points having the same depth of water, thereby indicating the contour of the ocean floor. Some extensive flat areas of the sea bottom with constant depth are known by their fathom number, like the Broad Fourteens or the Long Forties, both in the North Sea. Line length The components of a commercial fisherman's setline were measured in fathoms. The rope called a groundline, used to form the main line of a setline, was usually provided in bundles of 300 fathoms. A single skein of this rope was referred to as a line. Especially in Pacific coast fisheries the setline was composed of units called skates, each consisting of several hundred fathoms of groundline, with gangions and hooks attached. A tuck seine or tuck net about long, and very deep in the middle, was used to take fish from a larger seine. A line attached to a whaling harpoon was about . A forerunner — a piece of cloth tied on a ship's log line some fathoms from the outboard end — marked the limit of drift line. A kite was a drag, towed under water at any depth up to about , which upon striking bottom, was upset and rose to the surface. A shot, one of the forged lengths of chain joined by shackles to form an anchor cable, was usually . A shackle, a length of cable or chain equal to . In 1949, the British navy redefined the shackle to be . The Finnish fathom (syli) is occasionally used: nautical mile or cable length. Burial A burial at sea (where the body is weighted to force it to the bottom) requires a minimum of six fathoms of water. This is the origin of the phrase "to deep six" as meaning to discard, or dispose of. The phrase is echoed in Shakespeare's The Tempest, where Ariel tells Ferdinand, "Full fathom five thy father lies". On land Until early in the 20th century, it was the unit used to measure the depth of mines (mineral extraction) in the United Kingdom. Miners also use it as a unit of area equal to 6 feet square (3.34 m2) in the plane of a vein. In Britain, it can mean the quantity of wood in a pile of any length measuring square in cross section. In Central Europe, the klafter was the corresponding unit of comparable length, as was the toise in France. In Hungary the square fathom ("négyszögöl") is still in use as an unofficial measure of land area, primarily for small lots suitable for construction.
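As a quick reference for the definitions quoted above, the snippet below converts between fathoms and metres using the international fathom (exactly 1.8288 m) and, for comparison, computes the British Admiralty fathom of one-thousandth of a 6080 ft nautical mile; the function names are illustrative only.

```python
# Illustrative unit conversions based on the definitions quoted above.
INTERNATIONAL_FATHOM_M = 1.8288        # exactly 6 international feet (2 yards)
ADMIRALTY_FATHOM_FT = 6080 / 1000      # one-thousandth of a 6080 ft nautical mile
FOOT_M = 0.3048                        # international foot, from the 1959 agreement

def fathoms_to_metres(fathoms: float) -> float:
    return fathoms * INTERNATIONAL_FATHOM_M

def metres_to_fathoms(metres: float) -> float:
    return metres / INTERNATIONAL_FATHOM_M

print(fathoms_to_metres(100))            # 182.88 m (the "100 fathom line")
print(round(metres_to_fathoms(1), 4))    # ~0.5468 fathoms in one metre
print(ADMIRALTY_FATHOM_FT * FOOT_M)      # 1.853184 m for the Admiralty fathom
```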
Physical sciences
English
Basics and measurement
154473
https://en.wikipedia.org/wiki/Fermi%20level
Fermi level
The Fermi level of a solid-state body is the thermodynamic work required to add one electron to the body. It is a thermodynamic quantity usually denoted by μ or EF for brevity. The Fermi level does not include the work required to remove the electron from wherever it came from. A precise understanding of the Fermi level—how it relates to electronic band structure in determining electronic properties; how it relates to the voltage and flow of charge in an electronic circuit—is essential to an understanding of solid-state physics. In band structure theory, used in solid state physics to analyze the energy levels in a solid, the Fermi level can be considered to be a hypothetical energy level of an electron, such that at thermodynamic equilibrium this energy level would have a 50% probability of being occupied at any given time. The position of the Fermi level in relation to the band energy levels is a crucial factor in determining electrical properties. The Fermi level does not necessarily correspond to an actual energy level (in an insulator the Fermi level lies in the band gap), nor does it require the existence of a band structure. Nonetheless, the Fermi level is a precisely defined thermodynamic quantity, and differences in Fermi level can be measured simply with a voltmeter. Voltage measurement Sometimes it is said that electric currents are driven by differences in electrostatic potential (Galvani potential), but this is not exactly true. As a counterexample, multi-material devices such as p–n junctions contain internal electrostatic potential differences at equilibrium, yet without any accompanying net current; if a voltmeter is attached to the junction, one simply measures zero volts. Clearly, the electrostatic potential is not the only factor influencing the flow of charge in a material—Pauli repulsion, carrier concentration gradients, electromagnetic induction, and thermal effects also play an important role. In fact, the quantity called voltage as measured in an electronic circuit has a simple relationship to the chemical potential for electrons (Fermi level). When the leads of a voltmeter are attached to two points in a circuit, the displayed voltage is a measure of the total work transferred when a unit charge is allowed to move from one point to the other. If a simple wire is connected between two points of differing voltage (forming a short circuit), current will flow from positive to negative voltage, converting the available work into heat. The Fermi level of a body expresses the work required to add an electron to it, or equally the work obtained by removing an electron. Therefore, VA − VB, the observed difference in voltage between two points, A and B, in an electronic circuit is exactly related to the corresponding difference in Fermi level, μA − μB, by the formula VA − VB = −(μA − μB)/e, where −e is the electron charge. From the above discussion it can be seen that electrons will move from a body of high μ (low voltage) to low μ (high voltage) if a simple path is provided. This flow of electrons will cause the lower μ to increase (due to charging or other repulsion effects) and likewise cause the higher μ to decrease. Eventually, μ will settle down to the same value in both bodies. This leads to an important fact regarding the equilibrium (off) state of an electronic circuit: at equilibrium, the Fermi level is the same throughout all parts of the circuit that are connected to one another. This also means that the voltage (measured with a voltmeter) between any two points will be zero, at equilibrium.
Note that thermodynamic equilibrium here requires that the circuit be internally connected and not contain any batteries or other power sources, nor any variations in temperature. Band structure of solids In the band theory of solids, electrons occupy a series of bands composed of single-particle energy eigenstates each labelled by ϵ. Although this single particle picture is an approximation, it greatly simplifies the understanding of electronic behaviour and it generally provides correct results when applied correctly. The Fermi–Dirac distribution, f(ϵ), gives the probability that (at thermodynamic equilibrium) a state having energy ϵ is occupied by an electron: f(ϵ) = 1/(exp[(ϵ − μ)/(kBT)] + 1). Here, T is the absolute temperature and kB is the Boltzmann constant. If there is a state at the Fermi level (ϵ = μ), then this state will have a 50% chance of being occupied. The closer f is to 1, the higher the chance this state is occupied. The closer f is to 0, the higher the chance this state is empty. The location of μ within a material's band structure is important in determining the electrical behaviour of the material. In an insulator, μ lies within a large band gap, far away from any states that are able to carry current. In a metal, semimetal or degenerate semiconductor, μ lies within a delocalized band. A large number of states nearby μ are thermally active and readily carry current. In an intrinsic or lightly doped semiconductor, μ is close enough to a band edge that there is a dilute population of thermally excited carriers residing near that band edge. In semiconductors and semimetals the position of μ relative to the band structure can usually be controlled to a significant degree by doping or gating. These controls do not change μ, which is fixed by the electrodes, but rather they cause the entire band structure to shift up and down (sometimes also changing the band structure's shape). For further information about the Fermi levels of semiconductors, see (for example) Sze. Local conduction band referencing, internal chemical potential and the parameter ζ If the symbol ℰ is used to denote an electron energy level measured relative to the energy of the edge of its enclosing band, ϵC, then in general we have ℰ = ϵ − ϵC. We can define a parameter ζ that references the Fermi level with respect to the band edge: ζ = μ − ϵC. It follows that the Fermi–Dirac distribution function can be written as f(ℰ) = 1/(exp[(ℰ − ζ)/(kBT)] + 1). The band theory of metals was initially developed by Sommerfeld, from 1927 onwards, who paid great attention to the underlying thermodynamics and statistical mechanics. Confusingly, in some contexts the band-referenced quantity ζ may be called the Fermi level, chemical potential, or electrochemical potential, leading to ambiguity with the globally-referenced Fermi level. In this article, the terms conduction-band referenced Fermi level or internal chemical potential are used to refer to ζ. ζ is directly related to the number of active charge carriers as well as their typical kinetic energy, and hence it is directly involved in determining the local properties of the material (such as electrical conductivity). For this reason it is common to focus on the value of ζ when concentrating on the properties of electrons in a single, homogeneous conductive material. By analogy to the energy states of a free electron, the ℰ of a state is the kinetic energy of that state and ϵC is its potential energy. With this in mind, the parameter, ζ, could also be labelled the Fermi kinetic energy.
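A minimal numerical sketch of the Fermi–Dirac occupation function defined above; the temperature and energy values are arbitrary example inputs.

```python
# Illustrative evaluation of the Fermi-Dirac distribution f(E) = 1/(exp((E - mu)/(kB*T)) + 1).
import math

K_B_EV = 8.617333262e-5   # Boltzmann constant in eV/K

def fermi_dirac(energy_ev: float, mu_ev: float, temperature_k: float) -> float:
    """Occupation probability of a state at energy_ev for Fermi level mu_ev."""
    x = (energy_ev - mu_ev) / (K_B_EV * temperature_k)
    return 1.0 / (math.exp(x) + 1.0)

mu = 0.0                                  # measure energies relative to the Fermi level
for e in (-0.1, 0.0, 0.1):                # energies in eV (example values)
    print(e, round(fermi_dirac(e, mu, 300.0), 4))
# At E = mu the occupation is exactly 0.5; states ~0.1 eV below the Fermi level are
# nearly full and states ~0.1 eV above are nearly empty at 300 K (kB*T ~ 0.026 eV).
```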
Unlike μ, the parameter ζ is not a constant at equilibrium, but rather varies from location to location in a material due to variations in ϵC, which is determined by factors such as material quality and impurities/dopants. Near the surface of a semiconductor or semimetal, ζ can be strongly controlled by externally applied electric fields, as is done in a field effect transistor. In a multi-band material, ζ may even take on multiple values in a single location. For example, in a piece of aluminum there are two conduction bands crossing the Fermi level (even more bands in other materials); each band has a different edge energy, ϵC, and a different ζ. The value of ζ at zero temperature is widely known as the Fermi energy, sometimes written ζ0. Confusingly (again), the name Fermi energy sometimes is used to refer to ζ at non-zero temperature. Temperature out of equilibrium The Fermi level, μ, and temperature, T, are well defined constants for a solid-state device in a thermodynamic equilibrium situation, such as when it is sitting on the shelf doing nothing. When the device is brought out of equilibrium and put into use, then strictly speaking the Fermi level and temperature are no longer well defined. Fortunately, it is often possible to define a quasi-Fermi level and quasi-temperature for a given location that accurately describe the occupation of states in terms of a thermal distribution. The device is said to be in quasi-equilibrium when and where such a description is possible. The quasi-equilibrium approach allows one to build a simple picture of some non-equilibrium effects, such as the electrical conductivity of a piece of metal (as resulting from a gradient of μ) or its thermal conductivity (as resulting from a gradient in T). The quasi-μ and quasi-T can vary (or not exist at all) in any non-equilibrium situation, such as: if the system contains a chemical imbalance (as in a battery); if the system is exposed to changing electromagnetic fields (as in capacitors, inductors, and transformers); under illumination from a light source with a different temperature, such as the sun (as in solar cells); when the temperature is not constant within the device (as in thermocouples); or when the device has been altered, but has not had enough time to re-equilibrate (as in piezoelectric or pyroelectric substances). In some situations, such as immediately after a material experiences a high-energy laser pulse, the electron distribution cannot be described by any thermal distribution. One cannot define the quasi-Fermi level or quasi-temperature in this case; the electrons are simply said to be non-thermalized. In less dramatic situations, such as in a solar cell under constant illumination, a quasi-equilibrium description may be possible but may require the assignment of distinct values of μ and T to different bands (conduction band vs. valence band). Even then, the values of μ and T may jump discontinuously across a material interface (e.g., p–n junction) when a current is being driven, and be ill-defined at the interface itself.
It is common to see scientists and engineers refer to "controlling", "pinning", or "tuning" the Fermi level inside a conductor, when they are in fact describing changes in ϵC due to doping or the field effect. In fact, thermodynamic equilibrium guarantees that the Fermi level in a conductor is always fixed to be exactly equal to the Fermi level of the electrodes; only the band structure (not the Fermi level) can be changed by doping or the field effect (see also band diagram). A similar ambiguity exists between the terms, chemical potential and electrochemical potential. It is also important to note that Fermi level is not necessarily the same thing as Fermi energy. In the wider context of quantum mechanics, the term Fermi energy usually refers to the maximum kinetic energy of a fermion in an idealized non-interacting, disorder free, zero temperature Fermi gas. This concept is very theoretical (there is no such thing as a non-interacting Fermi gas, and zero temperature is impossible to achieve). However, it finds some use in approximately describing white dwarfs, neutron stars, atomic nuclei, and electrons in a metal. On the other hand, in the fields of semiconductor physics and engineering, Fermi energy often is used to refer to the Fermi level described in this article. Fermi level referencing and the location of zero Fermi level Much like the choice of origin in a coordinate system, the zero point of energy can be defined arbitrarily. Observable phenomena only depend on energy differences. When comparing distinct bodies, however, it is important that they all be consistent in their choice of the location of zero energy, or else nonsensical results will be obtained. It can therefore be helpful to explicitly name a common point to ensure that different components are in agreement. On the other hand, if a reference point is inherently ambiguous (such as "the vacuum", see below) it will instead cause more problems. A practical and well-justified choice of common point is a bulky, physical conductor, such as the electrical ground or earth. Such a conductor can be considered to be in a good thermodynamic equilibrium and so its μ is well defined. It provides a reservoir of charge, so that large numbers of electrons may be added or removed without incurring charging effects. It also has the advantage of being accessible, so that the Fermi level of any other object can be measured simply with a voltmeter. Why it is not advisable to use "the energy in vacuum" as a reference zero In principle, one might consider using the state of a stationary electron in the vacuum as a reference point for energies. This approach is not advisable unless one is careful to define exactly where the vacuum is. The problem is that not all points in the vacuum are equivalent. At thermodynamic equilibrium, it is typical for electrical potential differences of order 1 V to exist in the vacuum (Volta potentials). The source of this vacuum potential variation is the variation in work function between the different conducting materials exposed to vacuum. Just outside a conductor, the electrostatic potential depends sensitively on the material, as well as which surface is selected (its crystal orientation, contamination, and other details). The parameter that gives the best approximation to universality is the Earth-referenced Fermi level suggested above. This also has the advantage that it can be measured with a voltmeter. 
Discrete charging effects in small systems In cases where the "charging effects" due to a single electron are non-negligible, the above definitions should be clarified. For example, consider a capacitor made of two identical parallel plates. If the capacitor is uncharged, the Fermi level is the same on both sides, so one might think that it should take no energy to move an electron from one plate to the other. But when the electron has been moved, the capacitor has become (slightly) charged, so this does take a slight amount of energy. In a normal capacitor, this is negligible, but in a nano-scale capacitor it can be more important. In this case one must be precise about the thermodynamic definition of the chemical potential as well as the state of the device: is it electrically isolated, or is it connected to an electrode? When the body is able to exchange electrons and energy with an electrode (reservoir), it is described by the grand canonical ensemble. The value of the chemical potential can be said to be fixed by the electrode, and the number of electrons on the body may fluctuate. In this case, the chemical potential of a body is the infinitesimal amount of work needed to increase the average number of electrons by an infinitesimal amount (even though the number of electrons at any time is an integer, the average number varies continuously): μ(⟨N⟩, T) = dF(⟨N⟩, T)/d⟨N⟩, where F(⟨N⟩, T) is the free energy function of the grand canonical ensemble. If the number of electrons in the body is fixed (but the body is still thermally connected to a heat bath), then it is in the canonical ensemble. We can define a "chemical potential" in this case literally as the work required to add one electron to a body that already has exactly N electrons, μ′(N, T) = F(N + 1, T) − F(N, T), where F(N, T) is the free energy function of the canonical ensemble; alternatively, μ′(N, T) = F(N, T) − F(N − 1, T). These chemical potentials are not equivalent, μ ≠ μ′, except in the thermodynamic limit. The distinction is important in small systems such as those showing Coulomb blockade. The parameter μ (i.e., the chemical potential defined in the case where the number of electrons is allowed to fluctuate) remains exactly related to the voltmeter voltage, even in small systems. To be precise, then, the Fermi level is defined not by a deterministic charging event by one electron charge, but rather a statistical charging event by an infinitesimal fraction of an electron.
Physical sciences
Basics_2
Physics
154502
https://en.wikipedia.org/wiki/Type%202%20diabetes
Type 2 diabetes
Type 2 diabetes (T2D), formerly known as adult-onset diabetes, is a form of diabetes mellitus that is characterized by high blood sugar, insulin resistance, and relative lack of insulin. Common symptoms include increased thirst, frequent urination, fatigue and unexplained weight loss. Other symptoms include increased hunger, having a sensation of pins and needles, and sores (wounds) that heal slowly. Symptoms often develop slowly. Long-term complications from high blood sugar include heart disease, stroke, diabetic retinopathy, which can result in blindness, kidney failure, and poor blood flow in the lower limbs, which may lead to amputations. The sudden onset of hyperosmolar hyperglycemic state may occur; however, ketoacidosis is uncommon. Type 2 diabetes primarily occurs as a result of obesity and lack of exercise. Some people are genetically more at risk than others. Type 2 diabetes makes up about 90% of cases of diabetes, with the other 10% due primarily to type 1 diabetes and gestational diabetes. In type 1 diabetes, there is a lower total level of insulin to control blood glucose, due to an autoimmune-induced loss of insulin-producing beta cells in the pancreas. Diagnosis of diabetes is by blood tests such as fasting plasma glucose, oral glucose tolerance test, or glycated hemoglobin (A1c). Type 2 diabetes is largely preventable by staying at a normal weight, exercising regularly, and eating a healthy diet (high in fruits and vegetables and low in sugar and saturated fat). Treatment involves exercise and dietary changes. If blood sugar levels are not adequately lowered, the medication metformin is typically recommended. Many people may eventually also require insulin injections. In those on insulin, routinely checking blood sugar levels (such as through a continuous glucose monitor) is advised; however, this may not be needed in those who are not on insulin therapy. Bariatric surgery often improves diabetes in those who are obese. Rates of type 2 diabetes have increased markedly since 1960 in parallel with obesity. As of 2015, there were approximately 392 million people diagnosed with the disease, compared to around 30 million in 1985. Typically, it begins in middle or older age, although rates of type 2 diabetes are increasing in young people. Type 2 diabetes is associated with a ten-year-shorter life expectancy. Diabetes was one of the first diseases ever described, dating back to an Egyptian manuscript from  BCE. Type 1 and type 2 diabetes were identified as separate conditions in 400–500 CE, with type 1 associated with youth and type 2 with being overweight. The importance of insulin in the disease was determined in the 1920s. Signs and symptoms The classic symptoms of diabetes are frequent urination (polyuria), increased thirst (polydipsia), increased hunger (polyphagia), and weight loss. Other symptoms that are commonly present at diagnosis include a history of blurred vision, itchiness, peripheral neuropathy, recurrent vaginal infections, and fatigue. Other symptoms may include loss of taste. Many people, however, have no symptoms during the first few years and are diagnosed on routine testing. A small number of people with type 2 diabetes can develop a hyperosmolar hyperglycemic state (a condition of very high blood sugar associated with a decreased level of consciousness and low blood pressure). Complications Type 2 diabetes is typically a chronic disease associated with a ten-year-shorter life expectancy.
This is partly due to a number of complications with which it is associated, including: two to four times the risk of cardiovascular disease, including ischemic heart disease and stroke; a 20-fold increase in lower limb amputations, and increased rates of hospitalizations. In the developed world, and increasingly elsewhere, type 2 diabetes is the largest cause of nontraumatic blindness and kidney failure. It has also been associated with an increased risk of cognitive dysfunction and dementia through disease processes such as Alzheimer's disease and vascular dementia. Other complications include hyperpigmentation of skin (acanthosis nigricans), sexual dysfunction, diabetic ketoacidosis, and frequent infections. There is also an association between type 2 diabetes and mild hearing loss. Causes The development of type 2 diabetes is caused by a combination of lifestyle and genetic factors. While some of these factors are under personal control, such as diet and obesity, other factors are not, such as increasing age, female sex, and genetics. Generous consumption of alcohol is also a risk factor. Obesity is more common in women than men in many parts of Africa. The nutritional status of a mother during fetal development may also play a role. Lifestyle Lifestyle factors are important to the development of type 2 diabetes, including obesity and being overweight (defined by a body mass index of greater than 25), lack of physical activity, poor diet, psychological stress, and urbanization. Excess body fat is associated with 30% of cases in those of Chinese and Japanese descent, 60–80% of cases in those of European and African descent, and 100% of cases in Pima Indians and Pacific Islanders. Among those who are not obese, a high waist–hip ratio is often present. Smoking appears to increase the risk of type 2 diabetes. Lack of sleep has also been linked to type 2 diabetes. Laboratory studies have linked short-term sleep deprivations to changes in glucose metabolism, nervous system activity, or hormonal factors that may lead to diabetes. Dietary factors also influence the risk of developing type 2 diabetes. Consumption of sugar-sweetened drinks in excess is associated with an increased risk. The type of fats in the diet are important, with saturated fat and trans fatty acids increasing the risk, and polyunsaturated and monounsaturated fat decreasing the risk. Eating a lot of white rice appears to play a role in increasing risk. A lack of exercise is believed to cause 7% of cases. Sedentary lifestyle is another risk factor. Persistent organic pollutants may also play a role. Genetics Most cases of diabetes involve many genes, with each being a small contributor to an increased probability of becoming a type 2 diabetic. The proportion of diabetes that is inherited is estimated at 72%. More than 36 genes and 80 single nucleotide polymorphisms (SNPs) had been found that contribute to the risk of type 2 diabetes. All of these genes together still only account for 10% of the total heritable component of the disease. The TCF7L2 allele, for example, increases the risk of developing diabetes by 1.5 times and is the greatest risk of the common genetic variants. Most of the genes linked to diabetes are involved in pancreatic beta cell functions. There are a number of rare cases of diabetes that arise due to an abnormality in a single gene (known as monogenic forms of diabetes or "other specific types of diabetes"). 
These include maturity onset diabetes of the young (MODY), Donohue syndrome, and Rabson–Mendenhall syndrome, among others. Maturity onset diabetes of the young constitute 1–5% of all cases of diabetes in young people. Epigenetic regulation may have a role in type 2 diabetes. Medical conditions There are a number of medications and other health problems that can predispose to diabetes. Some of the medications include: glucocorticoids, thiazides, beta blockers, atypical antipsychotics, and statins. Those who have previously had gestational diabetes are at a higher risk of developing type 2 diabetes. Other health problems that are associated include: acromegaly, Cushing's syndrome, hyperthyroidism, pheochromocytoma, and certain cancers such as glucagonomas. Individuals with cancer may be at a higher risk of mortality if they also have diabetes. Testosterone deficiency is also associated with type 2 diabetes. Eating disorders may also interact with type 2 diabetes, with bulimia nervosa increasing the risk and anorexia nervosa decreasing it. Pathophysiology Type 2 diabetes is due to insufficient insulin production from beta cells in the setting of insulin resistance. Insulin resistance, which is the inability of cells to respond adequately to normal levels of insulin, occurs primarily within the muscles, liver, and fat tissue. In the liver, insulin normally suppresses glucose release. However, in the setting of insulin resistance, the liver inappropriately releases glucose into the blood. The proportion of insulin resistance versus beta cell dysfunction differs among individuals, with some having primarily insulin resistance and only a minor defect in insulin secretion and others with slight insulin resistance and primarily a lack of insulin secretion. Other potentially important mechanisms associated with type 2 diabetes and insulin resistance include: increased breakdown of lipids within fat cells, resistance to and lack of incretin, high glucagon levels in the blood, increased retention of salt and water by the kidneys, and inappropriate regulation of metabolism by the central nervous system. However, not all people with insulin resistance develop diabetes since an impairment of insulin secretion by pancreatic beta cells is also required. In the early stages of insulin resistance, the mass of beta cells expands, increasing the output of insulin to compensate for the insulin insensitivity, so that the disposition index remains constant. But when type 2 diabetes has become manifest, the person will have lost about half of their beta cells. The causes of the aging-related insulin resistance seen in obesity and in type 2 diabetes are uncertain. Effects of intracellular lipid metabolism and ATP production in liver and muscle cells may contribute to insulin resistance. Diagnosis The World Health Organization definition of diabetes (both type 1 and type 2) is for a single raised glucose reading with symptoms, otherwise raised values on two occasions, of either: fasting plasma glucose ≥ 7.0 mmol/L (126 mg/dL) or glucose tolerance test with two hours after the oral dose a plasma glucose ≥ 11.1 mmol/L (200 mg/dL) A random blood sugar of greater than 11.1 mmol/L (200 mg/dL) in association with typical symptoms or a glycated hemoglobin (HbA1c) of ≥ 48 mmol/mol (≥ 6.5 DCCT %) is another method of diagnosing diabetes. 
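The numeric criteria above can be summarized compactly. The sketch below simply encodes the thresholds quoted in this section; it is an illustration, not a clinical tool, the function name is invented for the example, and, as noted above, a raised value without typical symptoms still requires confirmation on a second occasion.

```python
# Illustrative encoding of the diagnostic thresholds quoted above (not a clinical tool).

def meets_diabetes_criteria(fasting_mmol_l=None, ogtt_2h_mmol_l=None,
                            random_mmol_l=None, symptomatic=False,
                            hba1c_mmol_mol=None):
    """Return True if any single result meets a diagnostic threshold.

    Thresholds from the text: fasting plasma glucose >= 7.0 mmol/L (126 mg/dL),
    2-hour OGTT glucose >= 11.1 mmol/L (200 mg/dL), random glucose >= 11.1 mmol/L
    with typical symptoms, or HbA1c >= 48 mmol/mol (6.5 DCCT %).
    """
    if fasting_mmol_l is not None and fasting_mmol_l >= 7.0:
        return True
    if ogtt_2h_mmol_l is not None and ogtt_2h_mmol_l >= 11.1:
        return True
    if symptomatic and random_mmol_l is not None and random_mmol_l >= 11.1:
        return True
    if hba1c_mmol_mol is not None and hba1c_mmol_mol >= 48:
        return True
    return False

print(meets_diabetes_criteria(fasting_mmol_l=7.8))   # True
print(meets_diabetes_criteria(hba1c_mmol_mol=44))    # False
```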
In 2009, an International Expert Committee that included representatives of the American Diabetes Association (ADA), the International Diabetes Federation (IDF), and the European Association for the Study of Diabetes (EASD) recommended that a HbA1c threshold of ≥ 48 mmol/mol (≥ 6.5 DCCT %) should be used to diagnose diabetes. This recommendation was adopted by the American Diabetes Association in 2010. Positive tests should be repeated unless the person presents with typical symptoms and blood sugar >11.1 mmol/L (>200 mg/dL). Threshold for diagnosis of diabetes is based on the relationship between results of glucose tolerance tests, fasting glucose or HbA1c and complications such as retinal problems. A fasting or random blood sugar is preferred over the glucose tolerance test, as they are more convenient for people. HbA1c has the advantages that fasting is not required and results are more stable but has the disadvantage that the test is more costly than measurement of blood glucose. It is estimated that 20% of people with diabetes in the United States do not realize that they have the disease. Type 2 diabetes is characterized by high blood glucose in the context of insulin resistance and relative insulin deficiency. This is in contrast to type 1 diabetes in which there is an absolute insulin deficiency due to destruction of islet cells in the pancreas and gestational diabetes that is a new onset of high blood sugars associated with pregnancy. Type 1 and type 2 diabetes can typically be distinguished based on the presenting circumstances. If the diagnosis is in doubt antibody testing may be useful to confirm type 1 diabetes and C-peptide levels may be useful to confirm type 2 diabetes, with C-peptide levels normal or high in type 2 diabetes, but low in type 1 diabetes. Screening Universal screening for diabetes in people without risk factors or symptoms is not recommended. The United States Preventive Services Task Force (USPSTF) recommended in 2021 screening for type 2 diabetes in adults aged 35 to 70 years old who are overweight (i.e. BMI over 25) or have obesity. For people of Asian descent, screening is recommended if they have a BMI over 23. Screening at an earlier age may be considered in people with a family history of diabetes; some ethnic groups, including Hispanics, African Americans, and Native Americans; a history of gestational diabetes; polycystic ovary syndrome. Screening can be repeated every 3 years. The American Diabetes Association (ADA) recommended in 2024 screening in all adults from the age of 35 years. ADA also recommends screening in adults of all ages with a BMI over 25 (or over 23 in Asian Americans) with another risk factor: first-degree relative with diabetes, ethnicity at high risk for diabetes, blood pressure ≥130/80 mmHg or on therapy for hypertension, history of cardiovascular disease, physical inactivity, polycystic ovary syndrome or severe obesity. ADA recommends repeat screening every 3 years at minimum. ADA recommends yearly tests in people with prediabetes. People with previous gestational diabetes or pancreatitis are also recommended screening. There is no evidence that screening changes the risk of death and any benefit of screening on adverse effects, incidence of type 2 diabetes, HbA1c or socioeconomic effects are not clear. In the UK, NICE guidelines suggest taking action to prevent diabetes for people with a body mass index (BMI) of 30 or more. 
For people of Black African, African-Caribbean, South Asian and Chinese descent, the recommendation to start prevention begins at a BMI of 27.5. A study based on a large sample of people in England suggests even lower BMIs for certain ethnic groups for the start of prevention, for example 24 in South Asian and 21 in Bangladeshi populations. Prevention Onset of type 2 diabetes can be delayed or prevented through proper nutrition and regular exercise. Intensive lifestyle measures may reduce the risk by over half. The benefit of exercise occurs regardless of the person's initial weight or subsequent weight loss. High levels of physical activity reduce the risk of diabetes by about 28%. Evidence for the benefit of dietary changes alone, however, is limited, with some evidence for a diet high in green leafy vegetables and some for limiting the intake of sugary drinks. There is an association between higher intake of sugar-sweetened fruit juice and diabetes, but no evidence of an association with 100% fruit juice. A 2019 review found evidence of benefit from dietary fiber. A 2017 review found that, long term, lifestyle changes decreased the risk by 28%, while medication does not reduce risk after withdrawal. While low vitamin D levels are associated with an increased risk of diabetes, correcting the levels by supplementing vitamin D3 does not improve that risk. In those with prediabetes, diet in combination with physical activity delays or reduces the risk of type 2 diabetes, according to a 2017 Cochrane review. In those with prediabetes, metformin may delay or reduce the risk of developing type 2 diabetes compared to diet and exercise or a placebo intervention, but not compared to intensive diet and exercise, and there was not enough data on outcomes such as mortality, diabetic complications and health-related quality of life, according to a 2019 Cochrane review. In those with prediabetes, alpha-glucosidase inhibitors such as acarbose may delay or reduce the risk of type 2 diabetes when compared to placebo; however, there was no conclusive evidence that acarbose improved cardiovascular mortality or cardiovascular events, according to a 2018 Cochrane review. In those with prediabetes, pioglitazone may delay or reduce the risk of developing type 2 diabetes compared to placebo or no intervention, but no difference was seen compared to metformin, and data were missing on mortality, complications and quality of life, according to a 2020 Cochrane review. In those with prediabetes, there was insufficient data to draw any conclusions on whether SGLT2 inhibitors may delay or reduce the risk of developing type 2 diabetes, according to a 2016 Cochrane review. Management Management of type 2 diabetes focuses on lifestyle interventions, lowering other cardiovascular risk factors, and maintaining blood glucose levels in the normal range. Self-monitoring of blood glucose for people with newly diagnosed type 2 diabetes may be used in combination with education, although the benefit of self-monitoring in those not using multi-dose insulin is questionable. In those who do not want to measure blood levels, measuring urine levels may be done. Managing other cardiovascular risk factors, such as hypertension, high cholesterol, and microalbuminuria, improves a person's life expectancy. Decreasing the systolic blood pressure to less than 140 mmHg is associated with a lower risk of death and better outcomes. 
Intensive blood pressure management (less than 130/80 mmHg) as opposed to standard blood pressure management (less than 140–160 mmHg systolic to 85–100 mmHg diastolic) results in a slight decrease in stroke risk but no effect on overall risk of death. Intensive blood sugar lowering (HbA1c < 6%) as opposed to standard blood sugar lowering (HbA1c of 7–7.9%) does not appear to change mortality. The goal of treatment is typically an HbA1c of 7 to 8% or a fasting glucose of less than 7.2 mmol/L (130 mg/dL); however, these goals may be changed after professional clinical consultation, taking into account particular risks of hypoglycemia and life expectancy. Hypoglycemia is associated with adverse outcomes in older people with type 2 diabetes. Despite guidelines recommending that intensive blood sugar control be based on balancing immediate harms with long-term benefits, many people (for example, people with a life expectancy of less than nine years who will not benefit) are over-treated. It is recommended that all people with type 2 diabetes get regular eye examinations. There is moderate evidence suggesting that treating gum disease by scaling and root planing results in an improvement in blood sugar levels for people with diabetes. Lifestyle Exercise A proper diet and regular exercise are foundations of diabetic care, with one review indicating that a greater amount of exercise improved outcomes. Regular exercise may improve blood sugar control, decrease body fat content, and decrease blood lipid levels. Diet Calorie restriction to promote weight loss is generally recommended. Around 80 percent of obese people with type 2 diabetes achieve complete remission with no need for medication if they sustain a large enough weight loss, but most patients are not able to achieve or sustain significant weight loss. Even modest weight loss can produce significant improvements in glycemic control and reduce the need for medication. Several diets may be effective, such as the DASH diet, Mediterranean diet, low-fat diet, or monitored carbohydrate diets such as a low carbohydrate diet. Other recommendations include emphasizing intake of fruits, vegetables, reduced saturated fat and low-fat dairy products, with a macronutrient intake tailored to the individual and calories and carbohydrates distributed throughout the day. A 2021 review showed that consumption of tree nuts (walnuts, almonds, and hazelnuts) reduced fasting blood glucose in diabetic people. There is insufficient data to recommend nonnutritive sweeteners, which may help reduce caloric intake. An elevated intake of microbiota-accessible carbohydrates can help reduce the effects of T2D. Viscous fiber supplements may be useful in those with diabetes. Culturally appropriate education may help people with type 2 diabetes control their blood sugar levels for up to 24 months. There is not enough evidence to determine if lifestyle interventions affect mortality in those who already have type 2 diabetes. Stress management Although psychological stress is recognized as a risk factor for type 2 diabetes, the effect of stress management interventions on disease progression is not established. A Cochrane review is under way to assess the effects of mindfulness-based interventions for adults with type 2 diabetes. Medications Blood sugar control There are several classes of diabetes medications available. 
Metformin is generally recommended as a first-line treatment as there is some evidence that it decreases mortality; however, this conclusion is questioned. Metformin should not be used in those with severe kidney or liver problems. The American Diabetes Association and European Association for the Study of Diabetes recommend using a GLP-1 receptor agonist or SGLT2 inhibitor as the first-line treatment in patients who have or are at high risk for atherosclerotic cardiovascular disease, heart failure, or chronic kidney disease. The higher cost of these drugs compared to metformin has limited their use. Other classes of medications include: sulfonylureas, thiazolidinediones, dipeptidyl peptidase-4 inhibitors, SGLT2 inhibitors, and GLP-1 receptor agonists. A 2018 review found that SGLT2 inhibitors and GLP-1 agonists, but not DPP-4 inhibitors, were associated with lower mortality than placebo or no treatment. Rosiglitazone, a thiazolidinedione, has not been found to improve long-term outcomes even though it improves blood sugar levels. Additionally, it is associated with increased rates of heart disease and death. Injections of insulin may either be added to oral medication or used alone. Most people do not initially need insulin. When it is used, a long-acting formulation is typically added at night, with oral medications being continued. Doses are then increased until blood sugar levels are well controlled. When nightly insulin is insufficient, twice-daily insulin may achieve better control. The long-acting insulins glargine and detemir are equally safe and effective, and do not appear much better than NPH insulin, but as they are significantly more expensive, they were not cost-effective as of 2010. In those who are pregnant, insulin is generally the treatment of choice. Blood pressure lowering Many international guidelines recommend blood pressure treatment targets that are lower than 140/90 mmHg for people with diabetes. However, there is only limited evidence regarding what the lower targets should be. A 2016 systematic review found potential harm in treating to targets lower than 140 mmHg, and a subsequent review in 2019 found no evidence of additional benefit from blood pressure lowering to between 130 and 140 mmHg, although there was an increased risk of adverse events. In people with diabetes and hypertension and either albuminuria or chronic kidney disease, an inhibitor of the renin-angiotensin system (such as an ACE inhibitor or angiotensin receptor blocker) is recommended to reduce the risk of progression of kidney disease and to prevent cardiovascular events. There is some evidence that angiotensin converting enzyme inhibitors (ACEIs) are superior to other inhibitors of the renin-angiotensin system, such as angiotensin receptor blockers (ARBs) or aliskiren, in preventing cardiovascular disease, although a 2016 review found similar effects of ACEIs and ARBs on major cardiovascular and renal outcomes. There is no evidence that combining ACEIs and ARBs provides additional benefits. Other The use of statins in diabetes to prevent cardiovascular disease should be considered after evaluating the person's total risk for cardiovascular disease. The use of aspirin (acetylsalicylic acid) to prevent cardiovascular disease in diabetes is controversial. Aspirin is recommended in people with previous cardiovascular disease; however, routine use of aspirin has not been found to improve outcomes in uncomplicated diabetes. 
Aspirin as primary prevention may have greater risk than benefit, but could be considered in people aged 50 to 70 with another significant cardiovascular risk factor and a low risk of bleeding, after discussion of possible risks and benefits as part of shared decision-making. Vitamin D supplementation in people with type 2 diabetes may improve markers of insulin resistance and HbA1c. Giving people with type 2 diabetes access to their electronic health records helps them to reduce their blood sugar levels; it is a way of helping people understand their own health condition and involving them actively in its management. Surgery Weight loss surgery in those who are obese is an effective measure to treat diabetes. Many are able to maintain normal blood sugar levels with little or no medication following surgery, and long-term mortality is decreased. There is, however, a short-term mortality risk of less than 1% from the surgery. The body mass index cutoffs for when surgery is appropriate are not yet clear. It is recommended that this option be considered in those who are unable to get both their weight and blood sugar under control. Epidemiology The International Diabetes Federation estimates nearly 537 million people lived with diabetes worldwide in 2021, 90–95% of whom have type 2 diabetes. Diabetes is common both in the developed and the developing world. Some ethnic groups, such as South Asians, Pacific Islanders, Latinos, and Native Americans, are at particularly high risk of developing type 2 diabetes. Type 2 diabetes in normal weight individuals represents 60 to 80 percent of all cases in some Asian countries. The mechanism causing diabetes in non-obese individuals is poorly understood. Rates of diabetes in 1985 were estimated at 30 million, increasing to 135 million in 1995 and 217 million in 2005. This increase is believed to be primarily due to the global population aging, a decrease in exercise, and increasing rates of obesity. Traditionally considered a disease of adults, type 2 diabetes is increasingly diagnosed in children in parallel with rising obesity rates. The five countries with the greatest number of people with diabetes as of 2000 were India (31.7 million), China (20.8 million), the United States (17.7 million), Indonesia (8.4 million), and Japan (6.8 million). It is recognized as a global epidemic by the World Health Organization. History Diabetes is one of the first diseases described, with an ancient Egyptian manuscript mentioning "too great emptying of the urine." The first described cases are believed to be of type 1 diabetes. Indian physicians around the same time identified the disease and classified it as madhumeha, or honey urine, noting that the urine would attract ants. The term "diabetes", or "to pass through", was first used in 230 BCE by the Greek Apollonius Memphites. The disease was rare during the time of the Roman empire, with Galen commenting that he had only seen two cases during his career. Type 1 and type 2 diabetes were identified as separate conditions for the first time by the Indian physicians Sushruta and Charaka in 400–500 CE, with type 1 associated with youth and type 2 with being overweight. Effective treatment was not developed until the early part of the 20th century, when the Canadians Frederick Banting and Charles Best discovered insulin in 1921 and 1922. This was followed by the development of the longer-acting NPH insulin in the 1940s. In 1916, Elliott Joslin proposed that in people with diabetes, periods of fasting are helpful. 
Subsequent research has supported this, and weight loss is a first-line treatment in type 2 diabetes. Research In 2020, the Diabetes Severity Score (DISSCO) was developed, a tool that may identify whether a person's condition is deteriorating better than HbA1c does. It uses a computer algorithm to analyse data from anonymised electronic patient records and produces a score based on 34 indicators. Stem cells In April 2024, scientists reported the first case of reversion of type 2 diabetes by use of stem cells in a 59-year-old man treated in 2021, who has since remained insulin-free. Replication in more patients and evidence over longer periods would be needed before considering this treatment as a possible cure.
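Because glucose values are quoted above in both mmol/L and mg/dL, and HbA1c in both mmol/mol and DCCT percent, the standard unit conversions may be helpful. The helper below applies the usual approximate factors (glucose: mg/dL ≈ mmol/L × 18; HbA1c: NGSP % ≈ 0.0915 × IFCC mmol/mol + 2.15); the function names are illustrative.

```python
# Standard unit conversions used throughout the article (approximate factors).

def glucose_mmol_to_mgdl(mmol_l):
    """Glucose: multiply by ~18 (the molar mass of glucose is ~180 g/mol)."""
    return mmol_l * 18.0

def hba1c_ifcc_to_ngsp(mmol_mol):
    """HbA1c: IFCC (mmol/mol) to NGSP/DCCT (%), via the usual master equation."""
    return 0.0915 * mmol_mol + 2.15

print(round(glucose_mmol_to_mgdl(7.0)))    # ~126 mg/dL, the fasting threshold
print(round(glucose_mmol_to_mgdl(11.1)))   # ~200 mg/dL, the 2-hour/random threshold
print(round(hba1c_ifcc_to_ngsp(48), 1))    # ~6.5 %, the HbA1c threshold
```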
Biology and health sciences
Specific diseases
Health
154505
https://en.wikipedia.org/wiki/Digital%20signal%20processor
Digital signal processor
A digital signal processor (DSP) is a specialized microprocessor chip, with its architecture optimized for the operational needs of digital signal processing. DSPs are fabricated on metal–oxide–semiconductor (MOS) integrated circuit chips. They are widely used in audio signal processing, telecommunications, digital image processing, radar, sonar and speech recognition systems, and in common consumer electronic devices such as mobile phones, disk drives and high-definition television (HDTV) products. The goal of a DSP is usually to measure, filter or compress continuous real-world analog signals. Most general-purpose microprocessors can also execute digital signal processing algorithms successfully, but may not be able to keep up with such processing continuously in real-time. Also, dedicated DSPs usually have better power efficiency, thus they are more suitable in portable devices such as mobile phones because of power consumption constraints. DSPs often use special memory architectures that are able to fetch multiple data or instructions at the same time. Overview Digital signal processing (DSP) algorithms typically require a large number of mathematical operations to be performed quickly and repeatedly on a series of data samples. Signals (perhaps from audio or video sensors) are constantly converted from analog to digital, manipulated digitally, and then converted back to analog form. Many DSP applications have constraints on latency; that is, for the system to work, the DSP operation must be completed within some fixed time, and deferred (or batch) processing is not viable. Most general-purpose microprocessors and operating systems can execute DSP algorithms successfully, but are not suitable for use in portable devices such as mobile phones and PDAs because of power efficiency constraints. A specialized DSP, however, will tend to provide a lower-cost solution, with better performance, lower latency, and no requirements for specialised cooling or large batteries. Such performance improvements have led to the introduction of digital signal processing in commercial communications satellites where hundreds or even thousands of analog filters, switches, frequency converters and so on are required to receive and process the uplinked signals and ready them for downlinking, and can be replaced with specialised DSPs with significant benefits to the satellites' weight, power consumption, complexity/cost of construction, reliability and flexibility of operation. For example, the SES-12 and SES-14 satellites from operator SES launched in 2018, were both built by Airbus Defence and Space with 25% of capacity using DSP. The architecture of a DSP is optimized specifically for digital signal processing. Most also support some of the features of an applications processor or microcontroller, since signal processing is rarely the only task of a system. Some useful features for optimizing DSP algorithms are outlined below. Architecture Software architecture By the standards of general-purpose processors, DSP instruction sets are often highly irregular; while traditional instruction sets are made up of more general instructions that allow them to perform a wider variety of operations, instruction sets optimized for digital signal processing contain instructions for common mathematical operations that occur frequently in DSP calculations. 
Both traditional and DSP-optimized instruction sets are able to compute any arbitrary operation, but an operation that might require multiple ARM or x86 instructions might require only one instruction in a DSP-optimized instruction set. One implication for software architecture is that hand-optimized assembly-code routines (assembly programs) are commonly packaged into libraries for re-use, instead of relying on advanced compiler technologies to handle essential algorithms. Even with modern compiler optimizations, hand-optimized assembly code is more efficient, and many common algorithms involved in DSP calculations are hand-written in order to take full advantage of the architectural optimizations. Instruction sets Typical features include: multiply–accumulate (MAC, including fused multiply–add, FMA) operations, used extensively in all kinds of matrix operations, convolution for filtering, dot products and polynomial evaluation; fundamental DSP algorithms such as FIR filters and the fast Fourier transform (FFT) depend heavily on multiply–accumulate performance; SIMD and VLIW instructions; and specialized instructions for modulo addressing in ring buffers and a bit-reversed addressing mode for FFT cross-referencing. DSPs sometimes use time-stationary encoding to simplify hardware and increase coding efficiency. Multiple arithmetic units may require memory architectures to support several accesses per instruction cycle, typically supporting reading two data values from two separate data buses and the next instruction (from the instruction cache, or a third program memory) simultaneously. Special loop controls are also common, such as architectural support for executing a few instruction words in a very tight loop without overhead for instruction fetches or exit testing, including zero-overhead looping and hardware loop buffers. Data instructions These include: saturation arithmetic, in which operations that produce overflows accumulate at the maximum (or minimum) value that the register can hold rather than wrapping around (maximum+1 does not overflow to minimum as in many general-purpose CPUs; instead it stays at maximum), sometimes with various sticky-bit operation modes available; fixed-point arithmetic, often used to speed up arithmetic processing; and single-cycle operations, to increase the benefits of pipelining. Program flow Characteristic features are a floating-point unit integrated directly into the datapath, a pipelined architecture, highly parallel multiplier–accumulators (MAC units), and hardware-controlled looping to reduce or eliminate the overhead required for looping operations. Hardware architecture Memory architecture DSPs are usually optimized for streaming data and use special memory architectures that are able to fetch multiple data or instructions at the same time, such as the Harvard architecture or modified von Neumann architecture, which use separate program and data memories (sometimes even concurrent access on multiple data buses). DSPs can sometimes rely on supporting code to know about cache hierarchies and the associated delays; this is a tradeoff that allows for better performance. In addition, extensive use of DMA is employed. Addressing and virtual memory DSPs frequently use multi-tasking operating systems, but have no support for virtual memory or memory protection. Operating systems that use virtual memory require more time for context switching among processes, which increases latency. 
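The multiply–accumulate and saturation behaviour described above can be modelled in a few lines of ordinary code. The following Python sketch is purely illustrative (the names fir_mac and saturate and the 16-bit word width are assumptions for the example); it shows why an N-tap FIR filter amounts to N MAC operations per output sample, and how saturation differs from wrap-around overflow.

```python
# Illustrative model of DSP-style MAC with 16-bit saturation (not vendor code).

Q15_MAX = 2**15 - 1   # 0x7FFF, largest value a 16-bit register can hold
Q15_MIN = -2**15      # 0x8000

def saturate(x, lo=Q15_MIN, hi=Q15_MAX):
    """Saturation arithmetic: clamp at the extremes instead of wrapping around."""
    return max(lo, min(hi, x))

def fir_mac(samples, coeffs):
    """FIR filter expressed as repeated multiply-accumulate (MAC) operations.

    On a DSP the inner loop would typically map to one MAC instruction per tap,
    executed under zero-overhead loop hardware.
    """
    out = []
    n_taps = len(coeffs)
    for n in range(len(samples)):
        acc = 0
        for k in range(n_taps):                  # one MAC per tap
            x = samples[n - k] if n - k >= 0 else 0
            acc = saturate(acc + coeffs[k] * x)
        out.append(acc)
    return out

# Example: 4-tap filter with unit coefficients on a short integer signal.
print(fir_mac([100, 200, 300, 400, 500], [1, 1, 1, 1]))
# -> [100, 300, 600, 1000, 1400]
print(saturate(40000))   # -> 32767: saturates instead of wrapping to a negative value
```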
DSPs commonly provide hardware modulo addressing, which allows circular buffers to be implemented without having to test for wrapping; bit-reversed addressing, a special addressing mode useful for calculating FFTs; exclusion of a memory management unit; and a dedicated address generation unit. History Development In 1976, Richard Wiggins proposed the Speak & Spell concept to Paul Breedlove, Larry Brantingham, and Gene Frantz at Texas Instruments' Dallas research facility. Two years later in 1978, they produced the first Speak & Spell, with the technological centerpiece being the TMS5100, the industry's first digital signal processor. It also set other milestones, being the first chip to use linear predictive coding to perform speech synthesis. The chip was made possible with a 7 μm PMOS fabrication process. In 1978, American Microsystems (AMI) released the S2811. The AMI S2811 "signal processing peripheral", like many later DSPs, has a hardware multiplier that enables it to do a multiply–accumulate operation in a single instruction. The S2811 was the first integrated circuit chip specifically designed as a DSP, and was fabricated using vertical metal oxide semiconductor (VMOS, V-groove MOS), a technology that had previously not been mass-produced. It was designed as a microprocessor peripheral, for the Motorola 6800, and it had to be initialized by the host. The S2811 was not successful in the market. In 1979, Intel released the 2920 as an "analog signal processor". It had an on-chip ADC/DAC with an internal signal processor, but it did not have a hardware multiplier and was not successful in the market. In 1980, the first stand-alone, complete DSPs – Nippon Electric Corporation's NEC μPD7720 based on the modified Harvard architecture and AT&T's DSP1 – were presented at the International Solid-State Circuits Conference '80. Both processors were inspired by the research in public switched telephone network (PSTN) telecommunications. The μPD7720, introduced for voiceband applications, was one of the most commercially successful early DSPs. The Altamira DX-1 was another early DSP, utilizing quad integer pipelines with delayed branches and branch prediction. Another DSP produced by Texas Instruments (TI), the TMS32010 presented in 1983, proved to be an even bigger success. It was based on the Harvard architecture, and so had separate instruction and data memory. It already had a special instruction set, with instructions like load-and-accumulate or multiply-and-accumulate. It could work on 16-bit numbers and needed 390 ns for a multiply–add operation. TI is now the market leader in general-purpose DSPs. About five years later, the second generation of DSPs began to spread. They had three memories for storing two operands simultaneously and included hardware to accelerate tight loops; they also had an addressing unit capable of loop-addressing. Some of them operated on 24-bit variables and a typical model only required about 21 ns for a MAC. Members of this generation were, for example, the AT&T DSP16A and the Motorola 56000. The main improvement in the third generation was the appearance of application-specific units and instructions in the data path, or sometimes as coprocessors. These units allowed direct hardware acceleration of very specific but complex mathematical problems, like the Fourier transform or matrix operations. Some chips, like the Motorola MC68356, even included more than one processor core to work in parallel. Other DSPs from 1995 are the TI TMS320C541 and the TMS320C80. 
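The addressing features listed at the start of this section (hardware modulo addressing for circular buffers and bit-reversed addressing for FFTs) can likewise be modelled in software. The sketch below is an illustrative Python model of what such hardware does implicitly; the class and function names are invented for the example.

```python
# Software model of two DSP addressing modes (the hardware performs these
# without explicit index arithmetic or wrap tests in the inner loop).

class CircularBuffer:
    """Delay line with modulo addressing: the write index wraps automatically."""
    def __init__(self, size):
        self.data = [0] * size
        self.idx = 0

    def push(self, sample):
        self.data[self.idx] = sample
        self.idx = (self.idx + 1) % len(self.data)   # modulo addressing

    def taps(self):
        """Return stored samples from newest to oldest."""
        n = len(self.data)
        return [self.data[(self.idx - 1 - k) % n] for k in range(n)]

def bit_reverse(i, bits):
    """Bit-reversed addressing, used to reorder data for radix-2 FFTs."""
    out = 0
    for _ in range(bits):
        out = (out << 1) | (i & 1)
        i >>= 1
    return out

buf = CircularBuffer(4)
for s in [10, 20, 30, 40, 50]:
    buf.push(s)
print(buf.taps())                               # -> [50, 40, 30, 20]
print([bit_reverse(i, 3) for i in range(8)])    # -> [0, 4, 2, 6, 1, 5, 3, 7]
```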
The fourth generation is best characterized by the changes in the instruction set and the instruction encoding/decoding. SIMD extensions were added, and VLIW and the superscalar architecture appeared. As always, the clock-speeds have increased; a 3 ns MAC now became possible. Modern DSPs Modern signal processors yield greater performance; this is due in part to both technological and architectural advancements like lower design rules, fast-access two-level cache, (E)DMA circuitry, and a wider bus system. Not all DSPs provide the same speed and many kinds of signal processors exist, each one of them being better suited for a specific task, ranging in price from about US$1.50 to US$300. Texas Instruments produces the C6000 series DSPs, which have clock speeds of 1.2 GHz and implement separate instruction and data caches. They also have an 8 MiB 2nd level cache and 64 EDMA channels. The top models are capable of as many as 8000 MIPS (millions of instructions per second), use VLIW (very long instruction word), perform eight operations per clock-cycle and are compatible with a broad range of external peripherals and various buses (PCI/serial/etc). TMS320C6474 chips each have three such DSPs, and the newest generation C6000 chips support floating point as well as fixed point processing. Freescale produces a multi-core DSP family, the MSC81xx. The MSC81xx is based on StarCore Architecture processors and the latest MSC8144 DSP combines four programmable SC3400 StarCore DSP cores. Each SC3400 StarCore DSP core has a clock speed of 1 GHz. XMOS produces a multi-core multi-threaded line of processor well suited to DSP operations, They come in various speeds ranging from 400 to 1600 MIPS. The processors have a multi-threaded architecture that allows up to 8 real-time threads per core, meaning that a 4 core device would support up to 32 real time threads. Threads communicate between each other with buffered channels that are capable of up to 80 Mbit/s. The devices are easily programmable in C and aim at bridging the gap between conventional micro-controllers and FPGAs CEVA, Inc. produces and licenses three distinct families of DSPs. Perhaps the best known and most widely deployed is the CEVA-TeakLite DSP family, a classic memory-based architecture, with 16-bit or 32-bit word-widths and single or dual MACs. The CEVA-X DSP family offers a combination of VLIW and SIMD architectures, with different members of the family offering dual or quad 16-bit MACs. The CEVA-XC DSP family targets Software-defined Radio (SDR) modem designs and leverages a unique combination of VLIW and Vector architectures with 32 16-bit MACs. Analog Devices produce the SHARC-based DSP and range in performance from 66 MHz/198 MFLOPS (million floating-point operations per second) to 400 MHz/2400 MFLOPS. Some models support multiple multipliers and ALUs, SIMD instructions and audio processing-specific components and peripherals. The Blackfin family of embedded digital signal processors combine the features of a DSP with those of a general use processor. As a result, these processors can run simple operating systems like μCLinux, velocity and Nucleus RTOS while operating on real-time data. The SHARC-based ADSP-210xx provides both delayed branches and non-delayed branches. NXP Semiconductors produce DSPs based on TriMedia VLIW technology, optimized for audio and video processing. In some products the DSP core is hidden as a fixed-function block into a SoC, but NXP also provides a range of flexible single core media processors. 
The TriMedia media processors support both fixed-point arithmetic as well as floating-point arithmetic, and have specific instructions to deal with complex filters and entropy coding. CSR produces the Quatro family of SoCs that contain one or more custom Imaging DSPs optimized for processing document image data for scanner and copier applications. Microchip Technology produces the PIC24 based dsPIC line of DSPs. Introduced in 2004, the dsPIC is designed for applications needing a true DSP as well as a true microcontroller, such as motor control and in power supplies. The dsPIC runs at up to 40MIPS, and has support for 16 bit fixed point MAC, bit reverse and modulo addressing, as well as DMA. Most DSPs use fixed-point arithmetic, because in real world signal processing the additional range provided by floating point is not needed, and there is a large speed benefit and cost benefit due to reduced hardware complexity. Floating point DSPs may be invaluable in applications where a wide dynamic range is required. Product developers might also use floating point DSPs to reduce the cost and complexity of software development in exchange for more expensive hardware, since it is generally easier to implement algorithms in floating point. Generally, DSPs are dedicated integrated circuits; however DSP functionality can also be produced by using field-programmable gate array chips (FPGAs). Embedded general-purpose RISC processors are becoming increasingly DSP like in functionality. For example, the OMAP3 processors include an ARM Cortex-A8 and C6000 DSP. In Communications a new breed of DSPs offering the fusion of both DSP functions and H/W acceleration function is making its way into the mainstream. Such Modem processors include ASOCS ModemX and CEVA's XC4000. In May 2018, Huarui-2 designed by Nanjing Research Institute of Electronics Technology of China Electronics Technology Group passed acceptance. With a processing speed of 0.4 TFLOPS, the chip can achieve better performance than current mainstream DSP chips. The design team has begun to create Huarui-3, which has a processing speed in TFLOPS level and a support for artificial intelligence.
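As a rough illustration of the fixed-point versus floating-point tradeoff discussed above, the following sketch quantizes values into the Q15 format commonly used on 16-bit fixed-point DSPs. It is a simplified model for illustration only; real toolchains handle rounding, saturation and overflow in device-specific ways.

```python
# Q15 fixed point: 1 sign bit + 15 fractional bits, representing [-1.0, 1.0).

SCALE = 2**15

def float_to_q15(x):
    """Quantize a float in [-1.0, 1.0) to a 16-bit Q15 integer, with saturation."""
    q = int(round(x * SCALE))
    return max(-SCALE, min(SCALE - 1, q))

def q15_to_float(q):
    return q / SCALE

def q15_mul(a, b):
    """Q15 multiply: the 30-bit fractional product is shifted back down to Q15."""
    return (a * b) >> 15

a = float_to_q15(0.5)                  # 16384
b = float_to_q15(-0.25)                # -8192
print(q15_to_float(q15_mul(a, b)))     # -0.125, exact in this case
print(q15_to_float(float_to_q15(0.1)) - 0.1)   # small error from 15-bit resolution
```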
Technology
Computer hardware
null
154584
https://en.wikipedia.org/wiki/Hilbert%27s%20problems
Hilbert's problems
Hilbert's problems are 23 problems in mathematics published by German mathematician David Hilbert in 1900. They were all unsolved at the time, and several proved to be very influential for 20th-century mathematics. Hilbert presented ten of the problems (1, 2, 6, 7, 8, 13, 16, 19, 21, and 22) at the Paris conference of the International Congress of Mathematicians, speaking on August 8 at the Sorbonne. The complete list of 23 problems was published later, in English translation in 1902 by Mary Frances Winston Newson in the Bulletin of the American Mathematical Society. Earlier publications (in the original German) appeared in Archiv der Mathematik und Physik. List of Hilbert's Problems The following are the headers for Hilbert's 23 problems as they appeared in the 1902 translation in the Bulletin of the American Mathematical Society. 1. Cantor's problem of the cardinal number of the continuum. 2. The compatibility of the arithmetical axioms. 3. The equality of the volumes of two tetrahedra of equal bases and equal altitudes. 4. Problem of the straight line as the shortest distance between two points. 5. Lie's concept of a continuous group of transformations without the assumption of the differentiability of the functions defining the group. 6. Mathematical treatment of the axioms of physics. 7. Irrationality and transcendence of certain numbers. 8. Problems of prime numbers (The "Riemann Hypothesis"). 9. Proof of the most general law of reciprocity in any number field. 10. Determination of the solvability of a Diophantine equation. 11. Quadratic forms with any algebraic numerical coefficients 12. Extensions of Kronecker's theorem on Abelian fields to any algebraic realm of rationality 13. Impossibility of the solution of the general equation of 7th degree by means of functions of only two arguments. 14. Proof of the finiteness of certain complete systems of functions. 15. Rigorous foundation of Schubert's enumerative calculus. 16. Problem of the topology of algebraic curves and surfaces. 17. Expression of definite forms by squares. 18. Building up of space from congruent polyhedra. 19. Are the solutions of regular problems in the calculus of variations always necessarily analytic? 20. The general problem of boundary values (Boundary value problems in PD) 21. Proof of the existence of linear differential equations having a prescribed monodromy group. 22. Uniformization of analytic relations by means of automorphic functions. 23. Further development of the methods of the calculus of variations. Nature and influence of the problems Hilbert's problems ranged greatly in topic and precision. Some of them, like the 3rd problem, which was the first to be solved, or the 8th problem (the Riemann hypothesis), which still remains unresolved, were presented precisely enough to enable a clear affirmative or negative answer. For other problems, such as the 5th, experts have traditionally agreed on a single interpretation, and a solution to the accepted interpretation has been given, but closely related unsolved problems exist. Some of Hilbert's statements were not precise enough to specify a particular problem, but were suggestive enough that certain problems of contemporary nature seem to apply; for example, most modern number theorists would probably see the 9th problem as referring to the conjectural Langlands correspondence on representations of the absolute Galois group of a number field. 
Still other problems, such as the 11th and the 16th, concern what are now flourishing mathematical subdisciplines, like the theories of quadratic forms and real algebraic curves. There are two problems that are not only unresolved but may in fact be unresolvable by modern standards. The 6th problem concerns the axiomatization of physics, a goal that 20th-century developments seem to render both more remote and less important than in Hilbert's time. Also, the 4th problem concerns the foundations of geometry, in a manner that is now generally judged to be too vague to enable a definitive answer. The 23rd problem was purposefully set as a general indication by Hilbert to highlight the calculus of variations as an underappreciated and understudied field. In the lecture introducing these problems, Hilbert made the following introductory remark to the 23rd problem: The other 21 problems have all received significant attention, and late into the 20th century work on these problems was still considered to be of the greatest importance. Paul Cohen received the Fields Medal in 1966 for his work on the first problem, and the negative solution of the tenth problem in 1970 by Yuri Matiyasevich (completing work by Julia Robinson, Hilary Putnam, and Martin Davis) generated similar acclaim. Aspects of these problems are still of great interest today. Knowability Following Gottlob Frege and Bertrand Russell, Hilbert sought to define mathematics logically using the method of formal systems, i.e., finitistic proofs from an agreed-upon set of axioms. One of the main goals of Hilbert's program was a finitistic proof of the consistency of the axioms of arithmetic: that is his second problem. However, Gödel's second incompleteness theorem gives a precise sense in which such a finitistic proof of the consistency of arithmetic is provably impossible. Hilbert lived for 12 years after Kurt Gödel published his theorem, but does not seem to have written any formal response to Gödel's work. Hilbert's tenth problem does not ask whether there exists an algorithm for deciding the solvability of Diophantine equations, but rather asks for the construction of such an algorithm: "to devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers". That this problem was solved by showing that there cannot be any such algorithm contradicted Hilbert's philosophy of mathematics. In discussing his opinion that every mathematical problem should have a solution, Hilbert allows for the possibility that the solution could be a proof that the original problem is impossible. He stated that the point is to know one way or the other what the solution is, and he believed that we always can know this, that in mathematics there is not any "ignorabimus" (statement whose truth can never be known). It seems unclear whether he would have regarded the solution of the tenth problem as an instance of ignorabimus. On the other hand, the status of the first and second problems is even more complicated: there is no clear mathematical consensus as to whether the results of Gödel (in the case of the second problem), or Gödel and Cohen (in the case of the first problem) give definitive negative solutions or not, since these solutions apply to a certain formalization of the problems, which is not necessarily the only possible one. The 24th problem Hilbert originally included 24 problems on his list, but decided against including one of them in the published list. 
The "24th problem" (in proof theory, on a criterion for simplicity and general methods) was rediscovered in Hilbert's original manuscript notes by German historian Rüdiger Thiele in 2000. Follow-ups Since 1900, mathematicians and mathematical organizations have announced problem lists but, with few exceptions, these have not had nearly as much influence nor generated as much work as Hilbert's problems. One exception consists of three conjectures made by André Weil in the late 1940s (the Weil conjectures). In the fields of algebraic geometry, number theory and the links between the two, the Weil conjectures were very important. The first of these was proved by Bernard Dwork; a completely different proof of the first two, via ℓ-adic cohomology, was given by Alexander Grothendieck. The last and deepest of the Weil conjectures (an analogue of the Riemann hypothesis) was proved by Pierre Deligne. Both Grothendieck and Deligne were awarded the Fields medal. However, the Weil conjectures were, in their scope, more like a single Hilbert problem, and Weil never intended them as a programme for all mathematics. This is somewhat ironic, since arguably Weil was the mathematician of the 1940s and 1950s who best played the Hilbert role, being conversant with nearly all areas of (theoretical) mathematics and having figured importantly in the development of many of them. Paul Erdős posed hundreds, if not thousands, of mathematical problems, many of them profound. Erdős often offered monetary rewards; the size of the reward depended on the perceived difficulty of the problem. The end of the millennium, which was also the centennial of Hilbert's announcement of his problems, provided a natural occasion to propose "a new set of Hilbert problems". Several mathematicians accepted the challenge, notably Fields Medalist Steve Smale, who responded to a request by Vladimir Arnold to propose a list of 18 problems (Smale's problems). At least in the mainstream media, the de facto 21st century analogue of Hilbert's problems is the list of seven Millennium Prize Problems chosen during 2000 by the Clay Mathematics Institute. Unlike the Hilbert problems, where the primary award was the admiration of Hilbert in particular and mathematicians in general, each prize problem includes a million-dollar bounty. As with the Hilbert problems, one of the prize problems (the Poincaré conjecture) was solved relatively soon after the problems were announced. The Riemann hypothesis is noteworthy for its appearance on the list of Hilbert problems, Smale's list, the list of Millennium Prize Problems, and even the Weil conjectures, in its geometric guise. Although it has been attacked by major mathematicians of our day, many experts believe that it will still be part of unsolved problems lists for many centuries. Hilbert himself declared: "If I were to awaken after having slept for a thousand years, my first question would be: Has the Riemann hypothesis been proved?" In 2008, DARPA announced its own list of 23 problems that it hoped could lead to major mathematical breakthroughs, "thereby strengthening the scientific and technological capabilities of the DoD". The DARPA list also includes a few problems from Hilbert's list, e.g. the Riemann hypothesis. Summary Of the cleanly formulated Hilbert problems, numbers 3, 7, 10, 14, 17, 18, 19, and 20 have resolutions that are accepted by consensus of the mathematical community. 
Problems 1, 2, 5, 6, 9, 11, 12, 15, 21, and 22 have solutions that have partial acceptance, but there exists some controversy as to whether they resolve the problems. That leaves 8 (the Riemann hypothesis), 13 and 16 unresolved, and 4 and 23 as too vague to ever be described as solved. The withdrawn 24 would also be in this class. Table of problems Hilbert's 23 problems are (for details on the solutions and references, see the articles that are linked to in the first column):
Mathematics
Basics
null
154616
https://en.wikipedia.org/wiki/Negative%20number
Negative number
In mathematics, a negative number is the opposite of a positive real number. Equivalently, a negative number is a real number that is less than zero. Negative numbers are often used to represent the magnitude of a loss or deficiency. A debt that is owed may be thought of as a negative asset. If a quantity, such as the charge on an electron, may have either of two opposite senses, then one may choose to distinguish between those senses—perhaps arbitrarily—as positive and negative. Negative numbers are used to describe values on a scale that goes below zero, such as the Celsius and Fahrenheit scales for temperature. The laws of arithmetic for negative numbers ensure that the common-sense idea of an opposite is reflected in arithmetic. For example, −(−3) = 3 because the opposite of an opposite is the original value. Negative numbers are usually written with a minus sign in front. For example, −3 represents a negative quantity with a magnitude of three, and is pronounced "minus three" or "negative three". Conversely, a number that is greater than zero is called positive; zero is usually (but not always) thought of as neither positive nor negative. The positivity of a number may be emphasized by placing a plus sign before it, e.g. +3. In general, the negativity or positivity of a number is referred to as its sign. Every real number other than zero is either positive or negative. The non-negative whole numbers are referred to as natural numbers (i.e., 0, 1, 2, 3...), while the positive and negative whole numbers (together with zero) are referred to as integers. (Some definitions of the natural numbers exclude zero.) In bookkeeping, amounts owed are often represented by red numbers, or a number in parentheses, as an alternative notation to represent negative numbers. Negative numbers were used in the Nine Chapters on the Mathematical Art, which in its present form dates from the period of the Chinese Han dynasty (202 BC – AD 220), but may well contain much older material. Liu Hui (c. 3rd century) established rules for adding and subtracting negative numbers. By the 7th century, Indian mathematicians such as Brahmagupta were describing the use of negative numbers. Islamic mathematicians further developed the rules of subtracting and multiplying negative numbers and solved problems with negative coefficients. Prior to the concept of negative numbers, mathematicians such as Diophantus considered negative solutions to problems "false" and equations requiring negative solutions were described as absurd. Western mathematicians like Leibniz held that negative numbers were invalid, but still used them in calculations. Introduction The number line The relationship between negative numbers, positive numbers, and zero is often expressed in the form of a number line. Numbers appearing farther to the right on this line are greater, while numbers appearing farther to the left are lesser. Thus zero appears in the middle, with the positive numbers to the right and the negative numbers to the left. Note that a negative number with greater magnitude is considered less. For example, even though (positive) 8 is greater than (positive) 5, written 8 > 5, negative 8 is considered to be less than negative 5: −8 < −5. Signed numbers In the context of negative numbers, a number that is greater than zero is referred to as positive. Thus every real number other than zero is either positive or negative, while zero itself is not considered to have a sign. Positive numbers are sometimes written with a plus sign in front, e.g. 
+3 denotes a positive three. Because zero is neither positive nor negative, the term nonnegative is sometimes used to refer to a number that is either positive or zero, while nonpositive is used to refer to a number that is either negative or zero. Zero is a neutral number. As the result of subtraction Negative numbers can be thought of as resulting from the subtraction of a larger number from a smaller. For example, negative three is the result of subtracting three from zero: 0 − 3 = −3. In general, the subtraction of a larger number from a smaller yields a negative result, with the magnitude of the result being the difference between the two numbers. For example, 5 − 8 = −3 since 8 − 5 = 3. Everyday uses of negative numbers Sport Goal difference in association football and hockey; points difference in rugby football; net run rate in cricket; golf scores relative to par. Plus-minus differential in ice hockey: the difference in total goals scored for the team (+) and against the team (−) when a particular player is on the ice is the player's +/− rating. Players can have a negative (+/−) rating. Run differential in baseball: the run differential is negative if the team allows more runs than they scored. Clubs may be deducted points for breaches of the laws, and thus have a negative points total until they have earned at least that many points that season. Lap (or sector) times in Formula 1 may be given as the difference compared to a previous lap (or sector) (such as the previous record, or the lap just completed by a driver in front), and will be positive if slower and negative if faster. In some athletics events, such as sprint races, the hurdles, the triple jump and the long jump, the wind assistance is measured and recorded, and is positive for a tailwind and negative for a headwind. Science Temperatures which are colder than 0 °C or 0 °F. Latitudes south of the equator and longitudes west of the prime meridian. Topographical features of the earth's surface are given a height above sea level, which can be negative (e.g. the surface elevation of the Dead Sea or Death Valley, or the elevation of the Thames Tideway Tunnel). Electrical circuits. When a battery is connected in reverse polarity, the voltage applied is said to be the opposite of its rated voltage. For example, a 6-volt battery connected in reverse applies a voltage of −6 volts. Ions have a positive or negative electrical charge. Impedance of an AM broadcast tower used in multi-tower directional antenna arrays, which can be positive or negative. Finance Financial statements can include negative balances, indicated either by a minus sign or by enclosing the balance in parentheses. Examples include bank account overdrafts and business losses (negative earnings). The annual percentage growth in a country's GDP might be negative, which is one indicator of being in a recession. Occasionally, a rate of inflation may be negative (deflation), indicating a fall in average prices. The daily change in a share price or stock market index, such as the FTSE 100 or the Dow Jones. A negative number in financing is synonymous with "debt" and "deficit", which are also known as "being in the red". Interest rates can be negative when the lender is charged to deposit their money. Other The numbering of stories in a building below the ground floor. 
When playing an audio file on a portable media player, such as an iPod, the screen display may show the time remaining as a negative number, which increases up to zero time remaining at the same rate as the time already played increases from zero. Television game shows: Participants on QI often finish with a negative points score. Teams on University Challenge have a negative score if their first answers are incorrect and interrupt the question. Jeopardy! has a negative money score – contestants play for an amount of money and any incorrect answer that costs them more than what they have now can result in a negative score. In The Price Is Rights pricing game Buy or Sell, if an amount of money is lost that is more than the amount currently in the bank, it incurs a negative score. The change in support for a political party between elections, known as swing. A politician's approval rating. In video games, a negative number indicates loss of life, damage, a score penalty, or consumption of a resource, depending on the genre of the simulation. Employees with flexible working hours may have a negative balance on their timesheet if they have worked fewer total hours than contracted to that point. Employees may be able to take more than their annual holiday allowance in a year, and carry forward a negative balance to the next year. Transposing notes on an electronic keyboard are shown on the display with positive numbers for increases and negative numbers for decreases, e.g. "−1" for one semitone down. Arithmetic involving negative numbers The minus sign "−" signifies the operator for both the binary (two-operand) operation of subtraction (as in ) and the unary (one-operand) operation of negation (as in , or twice in ). A special case of unary negation occurs when it operates on a positive number, in which case the result is a negative number (as in ). The ambiguity of the "−" symbol does not generally lead to ambiguity in arithmetical expressions, because the order of operations makes only one interpretation or the other possible for each "−". However, it can lead to confusion and be difficult for a person to understand an expression when operator symbols appear adjacent to one another. A solution can be to parenthesize the unary "−" along with its operand. For example, the expression may be clearer if written (even though they mean exactly the same thing formally). The subtraction expression is a different expression that doesn't represent the same operations, but it evaluates to the same result. Sometimes in elementary schools a number may be prefixed by a superscript minus sign or plus sign to explicitly distinguish negative and positive numbers as in Addition Addition of two negative numbers is very similar to addition of two positive numbers. For example, The idea is that two debts can be combined into a single debt of greater magnitude. When adding together a mixture of positive and negative numbers, one can think of the negative numbers as positive quantities being subtracted. For example: In the first example, a credit of is combined with a debt of , which yields a total credit of . If the negative number has greater magnitude, then the result is negative: Here the credit is less than the debt, so the net result is a debt. Subtraction As discussed above, it is possible for the subtraction of two non-negative numbers to yield a negative answer: In general, subtraction of a positive number yields the same result as the addition of a negative number of equal magnitude. 
Thus and On the other hand, subtracting a negative number yields the same result as the addition a positive number of equal magnitude. (The idea is that losing a debt is the same thing as gaining a credit.) Thus and Multiplication When multiplying numbers, the magnitude of the product is always just the product of the two magnitudes. The sign of the product is determined by the following rules: The product of one positive number and one negative number is negative. The product of two negative numbers is positive. Thus and The reason behind the first example is simple: adding three 's together yields : The reasoning behind the second example is more complicated. The idea again is that losing a debt is the same thing as gaining a credit. In this case, losing two debts of three each is the same as gaining a credit of six: The convention that a product of two negative numbers is positive is also necessary for multiplication to follow the distributive law. In this case, we know that Since , the product must equal . These rules lead to another (equivalent) rule—the sign of any product a × b depends on the sign of a as follows: if a is positive, then the sign of a × b is the same as the sign of b, and if a is negative, then the sign of a × b is the opposite of the sign of b. The justification for why the product of two negative numbers is a positive number can be observed in the analysis of complex numbers. Division The sign rules for division are the same as for multiplication. For example, and If dividend and divisor have the same sign, the result is positive, if they have different signs the result is negative. Negation The negative version of a positive number is referred to as its negation. For example, is the negation of the positive number . The sum of a number and its negation is equal to zero: That is, the negation of a positive number is the additive inverse of the number. Using algebra, we may write this principle as an algebraic identity: This identity holds for any positive number . It can be made to hold for all real numbers by extending the definition of negation to include zero and negative numbers. Specifically: The negation of 0 is 0, and The negation of a negative number is the corresponding positive number. For example, the negation of is . In general, The absolute value of a number is the non-negative number with the same magnitude. For example, the absolute value of and the absolute value of are both equal to , and the absolute value of is . Formal construction of negative integers In a similar manner to rational numbers, we can extend the natural numbers N to the integers Z by defining integers as an ordered pair of natural numbers (a, b). We can extend addition and multiplication to these pairs with the following rules: We define an equivalence relation ~ upon these pairs with the following rule: This equivalence relation is compatible with the addition and multiplication defined above, and we may define Z to be the quotient set N²/~, i.e. we identify two pairs (a, b) and (c, d) if they are equivalent in the above sense. Note that Z, equipped with these operations of addition and multiplication, is a ring, and is in fact, the prototypical example of a ring. We can also define a total order on Z''' by writing This will lead to an additive zero of the form (a, a), an additive inverse of (a, b) of the form (b, a), a multiplicative unit of the form (a + 1, a), and a definition of subtraction This construction is a special case of the Grothendieck construction. 
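The pair construction just described is easy to make concrete. The sketch below is a minimal Python illustration, not a reference implementation: it represents an integer by an ordered pair (a, b) of natural numbers read as a − b, and implements the addition, multiplication, equivalence and negation rules stated above, together with the usual order convention (a, b) ≤ (c, d) when a + d ≤ b + c. The helper names (add, mul, same_integer, less_or_equal, negate) are chosen for this sketch only.

# Integers modeled as pairs (a, b) of naturals, read as "a - b".
# Helper names are hypothetical, chosen only for this sketch.

def add(x, y):
    # (a, b) + (c, d) = (a + c, b + d)
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    # (a, b) * (c, d) = (a*c + b*d, a*d + b*c)
    return (x[0] * y[0] + x[1] * y[1], x[0] * y[1] + x[1] * y[0])

def same_integer(x, y):
    # (a, b) ~ (c, d)  iff  a + d = b + c
    return x[0] + y[1] == x[1] + y[0]

def less_or_equal(x, y):
    # (a, b) <= (c, d)  iff  a + d <= b + c (under the a - b reading)
    return x[0] + y[1] <= x[1] + y[0]

def negate(x):
    # additive inverse of (a, b) is (b, a)
    return (x[1], x[0])

# "negative three" can be represented by (0, 3); (5, 3) is equivalent to "positive two".
minus_three = (0, 3)
two = (5, 3)

assert same_integer(add(minus_three, negate(minus_three)), (0, 0))   # x + (-x) ~ 0
assert same_integer(mul(minus_three, minus_three), (9, 0))           # (-3) * (-3) ~ +9
assert less_or_equal(minus_three, two)                               # -3 <= 2

Running the assertions confirms, in this toy model, the sign rules discussed earlier: the product of two negatives is equivalent to a positive pair, and the additive inverse is obtained simply by swapping components.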
Uniqueness The additive inverse of a number is unique, as is shown by the following proof. As mentioned above, an additive inverse of a number is defined as a value which when added to the number yields zero. Let x be a number and let y be its additive inverse. Suppose y′ is another additive inverse of x. By definition, And so, x + y′ = x + y. Using the law of cancellation for addition, it is seen that y′ = y. Thus y is equal to any other additive inverse of x. That is, y is the unique additive inverse of x. History For a long time, understanding of negative numbers was delayed by the impossibility of having a negative-number amount of a physical object, for example "minus-three apples", and negative solutions to problems were considered "false". In Hellenistic Egypt, the Greek mathematician Diophantus in the 3rd century AD referred to an equation that was equivalent to (which has a negative solution) in Arithmetica, saying that the equation was absurd. For this reason Greek geometers were able to solve geometrically all forms of the quadratic equation which give positive roots, while they could take no account of others. Negative numbers appear for the first time in history in the Nine Chapters on the Mathematical Art (九章算術, Jiǔ zhāng suàn-shù), which in its present form dates from the Han period, but may well contain much older material. The mathematician Liu Hui (c. 3rd century) established rules for the addition and subtraction of negative numbers. The historian Jean-Claude Martzloff theorized that the importance of duality in Chinese natural philosophy made it easier for the Chinese to accept the idea of negative numbers. The Chinese were able to solve simultaneous equations involving negative numbers. The Nine Chapters used red counting rods to denote positive coefficients and black rods for negative. This system is the exact opposite of contemporary printing of positive and negative numbers in the fields of banking, accounting, and commerce, wherein red numbers denote negative values and black numbers signify positive values. Liu Hui writes: The ancient Indian Bakhshali Manuscript carried out calculations with negative numbers, using "+" as a negative sign. The date of the manuscript is uncertain. LV Gurjar dates it no later than the 4th century, Hoernle dates it between the third and fourth centuries, Ayyangar and Pingree dates it to the 8th or 9th centuries, and George Gheverghese Joseph dates it to about AD 400 and no later than the early 7th century, During the 7th century AD, negative numbers were used in India to represent debts. The Indian mathematician Brahmagupta, in Brahma-Sphuta-Siddhanta (written c. AD 630), discussed the use of negative numbers to produce a general form quadratic formula similar to the one in use today. In the 9th century, Islamic mathematicians were familiar with negative numbers from the works of Indian mathematicians, but the recognition and use of negative numbers during this period remained timid. Al-Khwarizmi in his Al-jabr wa'l-muqabala (from which the word "algebra" derives) did not use negative numbers or negative coefficients. But within fifty years, Abu Kamil illustrated the rules of signs for expanding the multiplication , and al-Karaji wrote in his al-Fakhrī that "negative quantities must be counted as terms". In the 10th century, Abū al-Wafā' al-Būzjānī considered debts as negative numbers in A Book on What Is Necessary from the Science of Arithmetic for Scribes and Businessmen. 
By the 12th century, al-Karaji's successors were to state the general rules of signs and use them to solve polynomial divisions. As al-Samaw'al writes: the product of a negative number—al-nāqiṣ (loss)—by a positive number—al-zāʾid (gain)—is negative, and by a negative number is positive. If we subtract a negative number from a higher negative number, the remainder is their negative difference. The difference remains positive if we subtract a negative number from a lower negative number. If we subtract a negative number from a positive number, the remainder is their positive sum. If we subtract a positive number from an empty power (martaba khāliyya), the remainder is the same negative, and if we subtract a negative number from an empty power, the remainder is the same positive number. In the 12th century in India, Bhāskara II gave negative roots for quadratic equations but rejected them because they were inappropriate in the context of the problem. He stated that a negative value is "in this case not to be taken, for it is inadequate; people do not approve of negative roots." Fibonacci allowed negative solutions in financial problems where they could be interpreted as debits (chapter 13 of Liber Abaci, 1202) and later as losses (in Flos, 1225). In the 15th century, Nicolas Chuquet, a Frenchman, used negative numbers as exponents but referred to them as "absurd numbers". Michael Stifel dealt with negative numbers in his 1544 AD Arithmetica Integra, where he also called them numeri absurdi (absurd numbers). In 1545, Gerolamo Cardano, in his Ars Magna, provided the first satisfactory treatment of negative numbers in Europe. He did not allow negative numbers in his consideration of cubic equations, so he had to treat, for example, separately from (with in both cases). In all, Cardano was driven to the study of thirteen types of cubic equations, each with all negative terms moved to the other side of the = sign to make them positive. (Cardano also dealt with complex numbers, but understandably liked them even less.)
Mathematics
Basics
null
154664
https://en.wikipedia.org/wiki/Turbulence
Turbulence
In fluid dynamics, turbulence or turbulent flow is fluid motion characterized by chaotic changes in pressure and flow velocity. It is in contrast to laminar flow, which occurs when a fluid flows in parallel layers with no disruption between those layers. Turbulence is commonly observed in everyday phenomena such as surf, fast flowing rivers, billowing storm clouds, or smoke from a chimney, and most fluid flows occurring in nature or created in engineering applications are turbulent. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason, turbulence is commonly realized in low viscosity fluids. In general terms, in turbulent flow, unsteady vortices appear of many sizes which interact with each other, consequently drag due to friction effects increases. The onset of turbulence can be predicted by the dimensionless Reynolds number, the ratio of kinetic energy to viscous damping in a fluid flow. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex phenomenon. Physicist Richard Feynman described turbulence as the most important unsolved problem in classical physics. The turbulence intensity affects many fields, for examples fish ecology, air pollution, precipitation, and climate change. Examples of turbulence Smoke rising from a cigarette. For the first few centimeters, the smoke is laminar. The smoke plume becomes turbulent as its Reynolds number increases with increases in flow velocity and characteristic length scale. Flow over a golf ball. (This can be best understood by considering the golf ball to be stationary, with air flowing over it.) If the golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions. However, the boundary layer would separate early, as the pressure gradient switched from favorable (pressure decreasing in the flow direction) to unfavorable (pressure increasing in the flow direction), creating a large region of low pressure behind the ball that creates high form drag. To prevent this, the surface is dimpled to perturb the boundary layer and promote turbulence. This results in higher skin friction, but it moves the point of boundary layer separation further along, resulting in lower drag. Clear-air turbulence experienced during airplane flight, as well as poor astronomical seeing (the blurring of images seen through the atmosphere). Most of the terrestrial atmospheric circulation. The oceanic and atmospheric mixed layers and intense oceanic currents. The flow conditions in many industrial equipment (such as pipes, ducts, precipitators, gas scrubbers, dynamic scraped surface heat exchangers, etc.) and machines (for instance, internal combustion engines and gas turbines). The external flow over all kinds of vehicles such as cars, airplanes, ships, and submarines. The motions of matter in stellar atmospheres. A jet exhausting from a nozzle into a quiescent fluid. As the flow emerges into this external fluid, shear layers originating at the lips of the nozzle are created. These layers separate the fast moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence. Biologically generated turbulence resulting from swimming animals affects ocean mixing. Snow fences work by inducing turbulence in the wind, forcing it to drop much of its snow load near the fence. 
Bridge supports (piers) in water. When river flow is slow, water flows smoothly around the support legs. When the flow is faster, a higher Reynolds number is associated with the flow. The flow may start off laminar but is quickly separated from the leg and becomes turbulent. In many geophysical flows (rivers, atmospheric boundary layer), the flow turbulence is dominated by the coherent structures and turbulent events. A turbulent event is a series of turbulent fluctuations that contain more energy than the average flow turbulence. The turbulent events are associated with coherent flow structures such as eddies and turbulent bursting, and they play a critical role in terms of sediment scour, accretion and transport in rivers as well as contaminant mixing and dispersion in rivers and estuaries, and in the atmosphere. In the medical field of cardiology, a stethoscope is used to detect heart sounds and bruits, which are due to turbulent blood flow. In normal individuals, heart sounds are a product of turbulent flow as heart valves close. However, in some conditions turbulent flow can be audible due to other reasons, some of them pathological. For example, in advanced atherosclerosis, bruits (and therefore turbulent flow) can be heard in some vessels that have been narrowed by the disease process. Recently, turbulence in porous media became a highly debated subject. Strategies used by animals for olfactory navigation, and their success, are heavily influenced by turbulence affecting the odor plume. Features Turbulence is characterized by the following features: Irregularity Turbulent flows are always highly irregular. For this reason, turbulence problems are normally treated statistically rather than deterministically. Turbulent flow is chaotic. However, not all chaotic flows are turbulent. Diffusivity The readily available supply of energy in turbulent flows tends to accelerate the homogenization (mixing) of fluid mixtures. The characteristic which is responsible for the enhanced mixing and increased rates of mass, momentum and energy transports in a flow is called "diffusivity". Turbulent diffusion is usually described by a turbulent diffusion coefficient. This turbulent diffusion coefficient is defined in a phenomenological sense, by analogy with the molecular diffusivities, but it does not have a true physical meaning, being dependent on the flow conditions, and not a property of the fluid itself. In addition, the turbulent diffusivity concept assumes a constitutive relation between a turbulent flux and the gradient of a mean variable similar to the relation between flux and gradient that exists for molecular transport. In the best case, this assumption is only an approximation. Nevertheless, the turbulent diffusivity is the simplest approach for quantitative analysis of turbulent flows, and many models have been postulated to calculate it. For instance, in large bodies of water like oceans this coefficient can be found using Richardson's four-third power law and is governed by the random walk principle. In rivers and large ocean currents, the diffusion coefficient is given by variations of Elder's formula. Rotationality Turbulent flows have non-zero vorticity and are characterized by a strong three-dimensional vortex generation mechanism known as vortex stretching. In fluid dynamics, they are essentially vortices subjected to stretching associated with a corresponding increase of the component of vorticity in the stretching direction—due to the conservation of angular momentum. 
On the other hand, vortex stretching is the core mechanism on which the turbulence energy cascade relies to establish and maintain identifiable structure function. In general, the stretching mechanism implies thinning of the vortices in the direction perpendicular to the stretching direction due to volume conservation of fluid elements. As a result, the radial length scale of the vortices decreases and the larger flow structures break down into smaller structures. The process continues until the small scale structures are small enough that their kinetic energy can be transformed by the fluid's molecular viscosity into heat. Turbulent flow is always rotational and three dimensional. For example, atmospheric cyclones are rotational but their substantially two-dimensional shapes do not allow vortex generation and so are not turbulent. On the other hand, oceanic flows are dispersive but essentially non rotational and therefore are not turbulent. Dissipation To sustain turbulent flow, a persistent source of energy supply is required because turbulence dissipates rapidly as the kinetic energy is converted into internal energy by viscous shear stress. Turbulence causes the formation of eddies of many different length scales. Most of the kinetic energy of the turbulent motion is contained in the large-scale structures. The energy "cascades" from these large-scale structures to smaller scale structures by an inertial and essentially inviscid mechanism. This process continues, creating smaller and smaller structures which produces a hierarchy of eddies. Eventually this process creates structures that are small enough that molecular diffusion becomes important and viscous dissipation of energy finally takes place. The scale at which this happens is the Kolmogorov length scale. Via this energy cascade, turbulent flow can be realized as a superposition of a spectrum of flow velocity fluctuations and eddies upon a mean flow. The eddies are loosely defined as coherent patterns of flow velocity, vorticity and pressure. Turbulent flows may be viewed as made of an entire hierarchy of eddies over a wide range of length scales and the hierarchy can be described by the energy spectrum that measures the energy in flow velocity fluctuations for each length scale (wavenumber). The scales in the energy cascade are generally uncontrollable and highly non-symmetric. Nevertheless, based on these length scales these eddies can be divided into three categories. Integral time scale The integral time scale for a Lagrangian flow can be defined as: where u′ is the velocity fluctuation, and is the time lag between measurements. Integral length scales Large eddies obtain energy from the mean flow and also from each other. Thus, these are the energy production eddies which contain most of the energy. They have the large flow velocity fluctuation and are low in frequency. Integral scales are highly anisotropic and are defined in terms of the normalized two-point flow velocity correlations. The maximum length of these scales is constrained by the characteristic length of the apparatus. For example, the largest integral length scale of pipe flow is equal to the pipe diameter. In the case of atmospheric turbulence, this length can reach up to the order of several hundreds kilometers.: The integral length scale can be defined as where r is the distance between two measurement locations, and u′ is the velocity fluctuation in that same direction. 
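In practice these integral scales are estimated from measured velocity records. The sketch below assumes the common working definition of the integral time scale as the time integral of the normalized autocorrelation of the fluctuation u′, truncated at its first zero crossing, applied to a uniformly sampled single-point record; the synthetic signal, its parameters and the function name are purely illustrative.

# Estimate an integral time scale from a sampled velocity record, taken
# (as is conventional) as the integral of the normalized autocorrelation
# of the fluctuation u' = u - <u>, truncated at the first zero crossing.
import math, random

def integral_time_scale(u, dt):
    n = len(u)
    mean_u = sum(u) / n
    up = [v - mean_u for v in u]                  # velocity fluctuation u'
    var = sum(v * v for v in up) / n
    scale = 0.5 * dt                              # half-weight for the zero-lag point, rho(0) = 1
    for lag in range(1, n):
        # normalized autocorrelation at this time lag
        rho = sum(up[i] * up[i + lag] for i in range(n - lag)) / ((n - lag) * var)
        if rho <= 0.0:
            break                                 # truncate at the first zero crossing
        scale += rho * dt
    return scale

# Synthetic example: an exponentially correlated signal should give a
# time scale close to its decorrelation time tau.
random.seed(0)
dt, tau = 0.01, 0.5
a = math.exp(-dt / tau)
u, x = [], 0.0
for _ in range(20000):
    # first-order autoregressive process with decorrelation time tau
    x = a * x + random.gauss(0.0, 1.0) * math.sqrt(1.0 - a * a)
    u.append(3.0 + x)                             # mean flow of 3 plus fluctuation
print(integral_time_scale(u, dt))                 # roughly tau = 0.5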
Kolmogorov length scales Smallest scales in the spectrum that form the viscous sub-layer range. In this range, the energy input from nonlinear interactions and the energy drain from viscous dissipation are in exact balance. The small scales have high frequency, causing turbulence to be locally isotropic and homogeneous. Taylor microscales The intermediate scales between the largest and the smallest scales which make the inertial subrange. Taylor microscales are not dissipative scales, but pass down the energy from the largest to the smallest without dissipation. Some literatures do not consider Taylor microscales as a characteristic length scale and consider the energy cascade to contain only the largest and smallest scales; while the latter accommodate both the inertial subrange and the viscous sublayer. Nevertheless, Taylor microscales are often used in describing the term "turbulence" more conveniently as these Taylor microscales play a dominant role in energy and momentum transfer in the wavenumber space. Although it is possible to find some particular solutions of the Navier–Stokes equations governing fluid motion, all such solutions are unstable to finite perturbations at large Reynolds numbers. Sensitive dependence on the initial and boundary conditions makes fluid flow irregular both in time and in space so that a statistical description is needed. The Russian mathematician Andrey Kolmogorov proposed the first statistical theory of turbulence, based on the aforementioned notion of the energy cascade (an idea originally introduced by Richardson) and the concept of self-similarity. As a result, the Kolmogorov microscales were named after him. It is now known that the self-similarity is broken so the statistical description is presently modified. A complete description of turbulence is one of the unsolved problems in physics. According to an apocryphal story, Werner Heisenberg was asked what he would ask God, given the opportunity. His reply was: "When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first." A similar witticism has been attributed to Horace Lamb in a speech to the British Association for the Advancement of Science: "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather more optimistic." Onset of turbulence The onset of turbulence can be, to some extent, predicted by the Reynolds number, which is the ratio of inertial forces to viscous forces within a fluid which is subject to relative internal movement due to different fluid velocities, in what is known as a boundary layer in the case of a bounding surface such as the interior of a pipe. A similar effect is created by the introduction of a stream of higher velocity fluid, such as the hot gases from a flame in air. This relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which as it increases, progressively inhibits turbulence, as more kinetic energy is absorbed by a more viscous fluid. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions, and is a guide to when turbulent flow will occur in a particular situation. 
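As a concrete illustration of the Reynolds number used as such a guide, the sketch below computes Re = ρuL/μ for water flowing through a pipe and classifies the flow using the commonly quoted pipe-flow thresholds (also noted below); the fluid properties, pipe diameter and thresholds are assumed, indicative values rather than authoritative ones, and the classification applies to pipe flow only.

def reynolds_number(density, velocity, length, dynamic_viscosity):
    # Re = rho * u * L / mu, all quantities in SI units
    return density * velocity * length / dynamic_viscosity

def pipe_flow_regime(re):
    # Indicative thresholds for pipe flow only; the transition is gradual
    # and depends on disturbances, wall roughness and inlet conditions.
    if re < 2000:
        return "laminar"
    if re < 4000:
        return "transitional"
    return "likely turbulent"

# Water at room temperature in a 25 mm pipe at 1 m/s (illustrative values).
rho = 998.0        # kg/m^3
mu = 1.0e-3        # Pa*s
d = 0.025          # m, characteristic length taken as the pipe diameter
u = 1.0            # m/s

re = reynolds_number(rho, u, d, mu)
print(f"Re = {re:.0f} -> {pipe_flow_regime(re)}")   # Re is about 25000, likely turbulent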
This ability to predict the onset of turbulent flow is an important design tool for equipment such as piping systems or aircraft wings, but the Reynolds number is also used in scaling of fluid dynamics problems, and is used to determine dynamic similitude between two different cases of fluid flow, such as between a model aircraft, and its full size version. Such scaling is not always linear and the application of Reynolds numbers to both situations allows scaling factors to be developed. A flow situation in which the kinetic energy is significantly absorbed due to the action of fluid molecular viscosity gives rise to a laminar flow regime. For this the dimensionless quantity the Reynolds number () is used as a guide. With respect to laminar and turbulent flow regimes: laminar flow occurs at low Reynolds numbers, where viscous forces are dominant, and is characterized by smooth, constant fluid motion; turbulent flow occurs at high Reynolds numbers and is dominated by inertial forces, which tend to produce chaotic eddies, vortices and other flow instabilities. The Reynolds number is defined as where: is the density of the fluid (SI units: kg/m3) is a characteristic velocity of the fluid with respect to the object (m/s) is a characteristic linear dimension (m) is the dynamic viscosity of the fluid (Pa·s or N·s/m2 or kg/(m·s)). While there is no theorem directly relating the non-dimensional Reynolds number to turbulence, flows at Reynolds numbers larger than 5000 are typically (but not necessarily) turbulent, while those at low Reynolds numbers usually remain laminar. In Poiseuille flow, for example, turbulence can first be sustained if the Reynolds number is larger than a critical value of about 2040; moreover, the turbulence is generally interspersed with laminar flow until a larger Reynolds number of about 4000. The transition occurs if the size of the object is gradually increased, or the viscosity of the fluid is decreased, or if the density of the fluid is increased. Heat and momentum transfer When flow is turbulent, particles exhibit additional transverse motion which enhances the rate of energy and momentum exchange between them thus increasing the heat transfer and the friction coefficient. Assume for a two-dimensional turbulent flow that one was able to locate a specific point in the fluid and measure the actual flow velocity of every particle that passed through that point at any given time. Then one would find the actual flow velocity fluctuating about a mean value: and similarly for temperature () and pressure (), where the primed quantities denote fluctuations superposed to the mean. This decomposition of a flow variable into a mean value and a turbulent fluctuation was originally proposed by Osborne Reynolds in 1895, and is considered to be the beginning of the systematic mathematical analysis of turbulent flow, as a sub-field of fluid dynamics. While the mean values are taken as predictable variables determined by dynamics laws, the turbulent fluctuations are regarded as stochastic variables. The heat flux and momentum transfer (represented by the shear stress ) in the direction normal to the flow for a given time are where is the heat capacity at constant pressure, is the density of the fluid, is the coefficient of turbulent viscosity and is the turbulent thermal conductivity. Kolmogorov's theory of 1941 Richardson's notion of turbulence was that a turbulent flow is composed by "eddies" of different sizes. 
The sizes define a characteristic length scale for the eddies, which are also characterized by flow velocity scales and time scales (turnover time) dependent on the length scale. The large eddies are unstable and eventually break up originating smaller eddies, and the kinetic energy of the initial large eddy is divided into the smaller eddies that stemmed from it. These smaller eddies undergo the same process, giving rise to even smaller eddies which inherit the energy of their predecessor eddy, and so on. In this way, the energy is passed down from the large scales of the motion to smaller scales until reaching a sufficiently small length scale such that the viscosity of the fluid can effectively dissipate the kinetic energy into internal energy. In his original theory of 1941, Kolmogorov postulated that for very high Reynolds numbers, the small-scale turbulent motions are statistically isotropic (i.e. no preferential spatial direction could be discerned). In general, the large scales of a flow are not isotropic, since they are determined by the particular geometrical features of the boundaries (the size characterizing the large scales will be denoted as ). Kolmogorov's idea was that in the Richardson's energy cascade this geometrical and directional information is lost, while the scale is reduced, so that the statistics of the small scales has a universal character: they are the same for all turbulent flows when the Reynolds number is sufficiently high. Thus, Kolmogorov introduced a second hypothesis: for very high Reynolds numbers the statistics of small scales are universally and uniquely determined by the kinematic viscosity and the rate of energy dissipation . With only these two parameters, the unique length that can be formed by dimensional analysis is This is today known as the Kolmogorov length scale (see Kolmogorov microscales). A turbulent flow is characterized by a hierarchy of scales through which the energy cascade takes place. Dissipation of kinetic energy takes place at scales of the order of Kolmogorov length , while the input of energy into the cascade comes from the decay of the large scales, of order . These two scales at the extremes of the cascade can differ by several orders of magnitude at high Reynolds numbers. In between there is a range of scales (each one with its own characteristic length ) that has formed at the expense of the energy of the large ones. These scales are very large compared with the Kolmogorov length, but still very small compared with the large scale of the flow (i.e. ). Since eddies in this range are much larger than the dissipative eddies that exist at Kolmogorov scales, kinetic energy is essentially not dissipated in this range, and it is merely transferred to smaller scales until viscous effects become important as the order of the Kolmogorov scale is approached. Within this range inertial effects are still much larger than viscous effects, and it is possible to assume that viscosity does not play a role in their internal dynamics (for this reason this range is called "inertial range"). Hence, a third hypothesis of Kolmogorov was that at very high Reynolds number the statistics of scales in the range are universally and uniquely determined by the scale and the rate of energy dissipation . The way in which the kinetic energy is distributed over the multiplicity of scales is a fundamental characterization of a turbulent flow. 
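For orientation, the Kolmogorov length produced by the dimensional argument above, η = (ν³/ε)^(1/4), together with the corresponding time and velocity microscales, can be evaluated directly. The short sketch below uses assumed, order-of-magnitude values of the viscosity, dissipation rate and outer scale only to show the wide separation of scales just described.

# Kolmogorov microscales from the kinematic viscosity nu and the mean
# dissipation rate epsilon (the dimensional-analysis results quoted above).

def kolmogorov_scales(nu, epsilon):
    eta = (nu**3 / epsilon) ** 0.25        # length scale
    tau = (nu / epsilon) ** 0.5            # time scale
    v = (nu * epsilon) ** 0.25             # velocity scale
    return eta, tau, v

# Illustrative values: air-like viscosity, moderate dissipation, 1 m outer scale.
nu = 1.5e-5        # m^2/s
epsilon = 1.0      # m^2/s^3, assumed order of magnitude only
L = 1.0            # outer (energy-containing) length scale, assumed

eta, tau, v = kolmogorov_scales(nu, epsilon)
print(f"eta = {eta:.2e} m, tau = {tau:.2e} s, v = {v:.2e} m/s")
print(f"scale separation L/eta = {L/eta:.0f}")   # several thousand for these values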
For homogeneous turbulence (i.e., statistically invariant under translations of the reference frame) this is usually done by means of the energy spectrum function , where is the modulus of the wavevector corresponding to some harmonics in a Fourier representation of the flow velocity field : where is the Fourier transform of the flow velocity field. Thus, represents the contribution to the kinetic energy from all the Fourier modes with , and therefore, where is the mean turbulent kinetic energy of the flow. The wavenumber corresponding to length scale is . Therefore, by dimensional analysis, the only possible form for the energy spectrum function according with the third Kolmogorov's hypothesis is where would be a universal constant. This is one of the most famous results of Kolmogorov 1941 theory, describing transport of energy through scale space without any loss or gain. The Kolmogorov five-thirds law was first observed in a tidal channel, and considerable experimental evidence has since accumulated that supports it. Outside of the inertial area, one can find the formula below : In spite of this success, Kolmogorov theory is at present under revision. This theory implicitly assumes that the turbulence is statistically self-similar at different scales. This essentially means that the statistics are scale-invariant and non-intermittent in the inertial range. A usual way of studying turbulent flow velocity fields is by means of flow velocity increments: that is, the difference in flow velocity between points separated by a vector (since the turbulence is assumed isotropic, the flow velocity increment depends only on the modulus of ). Flow velocity increments are useful because they emphasize the effects of scales of the order of the separation when statistics are computed. The statistical scale-invariance without intermittency implies that the scaling of flow velocity increments should occur with a unique scaling exponent , so that when is scaled by a factor , should have the same statistical distribution as with independent of the scale . From this fact, and other results of Kolmogorov 1941 theory, it follows that the statistical moments of the flow velocity increments (known as structure functions in turbulence) should scale as where the brackets denote the statistical average, and the would be universal constants. There is considerable evidence that turbulent flows deviate from this behavior. The scaling exponents deviate from the value predicted by the theory, becoming a non-linear function of the order of the structure function. The universality of the constants have also been questioned. For low orders the discrepancy with the Kolmogorov value is very small, which explain the success of Kolmogorov theory in regards to low order statistical moments. In particular, it can be shown that when the energy spectrum follows a power law with , the second order structure function has also a power law, with the form Since the experimental values obtained for the second order structure function only deviate slightly from the value predicted by Kolmogorov theory, the value for is very near to (differences are about 2%). Thus the "Kolmogorov − spectrum" is generally observed in turbulence. However, for high order structure functions, the difference with the Kolmogorov scaling is significant, and the breakdown of the statistical self-similarity is clear. 
This behavior, and the lack of universality of the constants, are related to the phenomenon of intermittency in turbulence and can be connected to the non-trivial scaling behavior of the dissipation rate averaged over the separation scale. This remains an important area of research, and a major goal of the modern theory of turbulence is to understand what is truly universal in the inertial range and how to deduce intermittency properties from the Navier–Stokes equations, i.e. from first principles.
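The quantitative statements of the 1941 theory discussed above can be illustrated numerically. The sketch below evaluates an inertial-range spectrum of the form E(k) = C ε^(2/3) k^(−5/3) and the self-similar structure-function exponents ζ_p = p/3, confirming the −5/3 slope on a log-log scale; the Kolmogorov constant C and the dissipation rate ε are assumed, illustrative values, and the code deliberately does not model the intermittency corrections just described.

import math

# Kolmogorov 1941 predictions, evaluated numerically.
C = 1.5            # Kolmogorov constant, assumed illustrative value
epsilon = 0.1      # mean dissipation rate in m^2/s^3, assumed

def energy_spectrum(k):
    # E(k) = C * epsilon^(2/3) * k^(-5/3) in the inertial range
    return C * epsilon ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

def k41_structure_exponent(p):
    # self-similar K41 scaling: zeta_p = p / 3 for the p-th order structure function
    return p / 3.0

# Check the spectral slope on a log-log scale over an assumed inertial range.
k1, k2 = 10.0, 1000.0
slope = (math.log(energy_spectrum(k2)) - math.log(energy_spectrum(k1))) / (math.log(k2) - math.log(k1))
print(f"log-log slope of E(k): {slope:.3f}")        # -5/3, i.e. about -1.667

# K41 exponents are linear in p; measured exponents fall below this line
# for large p, which is the intermittency effect discussed above.
for p in (2, 4, 6, 8):
    print(p, round(k41_structure_exponent(p), 3))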
Physical sciences
Fluid mechanics
null
154725
https://en.wikipedia.org/wiki/Probability%20mass%20function
Probability mass function
In probability and statistics, a probability mass function (sometimes called probability function or frequency function) is a function that gives the probability that a discrete random variable is exactly equal to some value. Sometimes it is also known as the discrete probability density function. The probability mass function is often the primary means of defining a discrete probability distribution, and such functions exist for either scalar or multivariate random variables whose domain is discrete. A probability mass function differs from a probability density function (PDF) in that the latter is associated with continuous rather than discrete random variables. A PDF must be integrated over an interval to yield a probability. The value of the random variable having the largest probability mass is called the mode. Formal definition Probability mass function is the probability distribution of a discrete random variable, and provides the possible values and their associated probabilities. It is the function defined by for , where is a probability measure. can also be simplified as . The probabilities associated with all (hypothetical) values must be non-negative and sum up to 1, and Thinking of probability as mass helps to avoid mistakes since the physical mass is conserved as is the total probability for all hypothetical outcomes . Measure theoretic formulation A probability mass function of a discrete random variable can be seen as a special case of two more general measure theoretic constructions: the distribution of and the probability density function of with respect to the counting measure. We make this more precise below. Suppose that is a probability space and that is a measurable space whose underlying σ-algebra is discrete, so in particular contains singleton sets of . In this setting, a random variable is discrete provided its image is countable. The pushforward measure —called the distribution of in this context—is a probability measure on whose restriction to singleton sets induces the probability mass function (as mentioned in the previous section) since for each . Now suppose that is a measure space equipped with the counting measure . The probability density function of with respect to the counting measure, if it exists, is the Radon–Nikodym derivative of the pushforward measure of (with respect to the counting measure), so and is a function from to the non-negative reals. As a consequence, for any we have demonstrating that is in fact a probability mass function. When there is a natural order among the potential outcomes , it may be convenient to assign numerical values to them (or n-tuples in case of a discrete multivariate random variable) and to consider also values not in the image of . That is, may be defined for all real numbers and for all as shown in the figure. The image of has a countable subset on which the probability mass function is one. Consequently, the probability mass function is zero for all but a countable number of values of . The discontinuity of probability mass functions is related to the fact that the cumulative distribution function of a discrete random variable is also discontinuous. If is a discrete random variable, then means that the casual event is certain (it is true in 100% of the occurrences); on the contrary, means that the casual event is always impossible. This statement isn't true for a continuous random variable , for which for any possible . Discretization is the process of converting a continuous random variable into a discrete one. 
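The defining requirements just stated (non-negative probabilities that sum to one), together with the definition of the mode as the value carrying the largest probability mass, can be checked mechanically for any concrete probability mass function. The minimal Python sketch below uses a hypothetical discrete random variable, the total of two fair dice, represented as a dictionary from values to probabilities; the function names are chosen for this sketch only.

from math import isclose

def is_valid_pmf(pmf, tol=1e-9):
    # the two defining requirements: p(x) >= 0 for all x, and sum_x p(x) = 1
    return all(p >= 0 for p in pmf.values()) and isclose(sum(pmf.values()), 1.0, abs_tol=tol)

def mode(pmf):
    # value(s) carrying the largest probability mass
    best = max(pmf.values())
    return [x for x, p in pmf.items() if p == best]

# Hypothetical discrete random variable: the total shown by two fair dice.
two_dice = {s: (6 - abs(s - 7)) / 36 for s in range(2, 13)}

print(is_valid_pmf(two_dice))   # True
print(mode(two_dice))           # [7]
print(two_dice[7])              # 6/36, about 0.167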
Examples Finite Three major distributions are commonly encountered: the Bernoulli distribution, the binomial distribution and the geometric distribution. The Bernoulli distribution, Ber(p), is used to model an experiment with only two possible outcomes. The two outcomes are often encoded as 1 and 0. An example of the Bernoulli distribution is tossing a coin. Suppose that the sample space consists of all outcomes of a single toss of a fair coin, and that the random variable defined on it assigns 0 to the category "tails" and 1 to the category "heads". Since the coin is fair, the probability mass function is f(0) = f(1) = 1/2. The binomial distribution models the number of successes when someone draws n times with replacement. Each draw or experiment is independent, with two possible outcomes. The associated probability mass function is f(k) = C(n, k) p^k (1 − p)^(n − k), where p is the probability of success in a single draw and C(n, k) is the binomial coefficient. An example of the binomial distribution is the probability of getting exactly one 6 when someone rolls a fair die three times. The geometric distribution describes the number of trials needed to get one success. Its probability mass function is f(k) = (1 − p)^(k − 1) p. An example is tossing a coin until the first "heads" appears; p denotes the probability of the outcome "heads", and k denotes the number of necessary coin tosses. Other distributions that can be modeled using a probability mass function are the categorical distribution (also known as the generalized Bernoulli distribution) and the multinomial distribution. If the discrete distribution has two or more categories, one of which may occur in a single trial (draw), and these categories may or may not have a natural ordering, the result is a categorical distribution. An example of a multivariate discrete distribution, and of its probability mass function, is provided by the multinomial distribution. Here the multiple random variables are the numbers of successes in each of the categories after a given number of trials, and each non-zero probability mass gives the probability of a certain combination of numbers of successes in the various categories. Infinite The following exponentially declining distribution is an example of a distribution with an infinite number of possible outcomes—all the positive integers: f(i) = 1/2^i for i = 1, 2, 3, .... Despite the infinite number of possible outcomes, the total probability mass is 1/2 + 1/4 + 1/8 + ⋯ = 1, satisfying the unit total probability requirement for a probability distribution. Multivariate case Two or more discrete random variables have a joint probability mass function, which gives the probability of each possible combination of realizations for the random variables.
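The finite examples above can be computed directly from the corresponding mass functions. The following sketch, using only the Python standard library, evaluates the binomial probability of getting exactly one 6 in three rolls of a fair die and the geometric probability that a fair coin first lands heads on a particular toss; the chosen toss number is illustrative.

from math import comb

def binomial_pmf(k, n, p):
    # P(X = k): probability of exactly k successes in n independent trials
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def geometric_pmf(k, p):
    # P(X = k): probability that the first success occurs on trial k, k = 1, 2, ...
    return (1 - p) ** (k - 1) * p

# Exactly one 6 in three rolls of a fair die.
print(binomial_pmf(1, 3, 1/6))          # 3 * (1/6) * (5/6)^2, about 0.347

# First "heads" of a fair coin on the third toss (illustrative choice of k).
print(geometric_pmf(3, 0.5))            # (1/2)^2 * (1/2) = 0.125

# The binomial masses over k = 0..n sum to 1, as any PMF must.
print(sum(binomial_pmf(k, 3, 1/6) for k in range(4)))   # 1.0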
Mathematics
Probability
null
154735
https://en.wikipedia.org/wiki/Pentlandite
Pentlandite
Pentlandite is an iron–nickel sulfide with the chemical formula . Pentlandite has a narrow variation range in nickel to iron ratios (Ni:Fe), but it is usually described as 1:1. In some cases, this ratio is skewed by the presence of pyrrhotite inclusions. It also contains minor cobalt, usually at low levels as a fraction of weight. Pentlandite forms isometric crystals, but it is normally found in massive granular aggregates. It is brittle with a hardness of 3.5–4 and specific gravity of 4.6–5.0 and is non-magnetic. It has a yellowish bronze color and a metallic luster. Pentlandite is found in abundance within ultramafic rocks, making it one of the most important sources of mined nickel. It also occasionally occurs within mantle xenoliths and "black smoker" hydrothermal vents. Etymology It is named after Irish scientist Joseph Barclay Pentland (1797–1873), who first noted the mineral at Sudbury, Ontario. Identification Physical and optical properties In the field, pentlandite is often confused with other sulfide minerals, as they are all brassy yellowish in color and have a metallic luster. For this reason, the best way to discern pentlandite is by its paler color, lack of magnetism, and light brownish bronze streak. In contrast, pyrite, pyrrhotite and chalcopyrite will all display much darker streaks: brownish black, greyish black, greenish black respectively. When looked at using reflected light ore microscopy, it possesses key diagnostic properties such as octahedral cleavage, and its alteration to bravoite, a pinkish to brownish violet sulfide mineral that occurs in euhedral to octahedral crystals. Pentlandite usually develops as granular inclusions within other sulfide minerals (mainly pyrrhotite), often taking the shape of thin veins or "flames". Although pentlandite is an opaque mineral, it exhibits a strong light creamy reflectance. Mineral associations Pentlandite occurs alongside sulfide minerals such as bravoite, chalcopyrite, cubanite, millerite, pyrrhotite, valleriite, as well as other minerals like chromite, ilmenite, magnetite, and sperrylite. It is chemically similar to mackinawite, godlevskite and horomanite. Pentlandite is synonymous with folgerite, horbachite, lillhammerite, and nicopyrite. Pentlandite group The pentlandite group is a subdivision of rare minerals that share similar chemical and structural properties with pentlandite, hence the name. Their chemical formula can be written as XY8(S, Se)8 in which X is usually replaced by silver, manganese, cadmium, and lead, while copper takes the place of Y. Iron, nickel, and cobalt have the ability to occupy both X or Y positions. These minerals are: Argentopentlandite Ag(Fe,Ni)8S8 Cobalt pentlandite Co9S8 Geffroyite (Ag,Cu,Fe)9(Se,S)8 Manganese-shadlunite (Mn,Pb)(Cu,Fe)8S8 Shadlunite (Pb,Cd)(Fe,Cu)8S8 Oberthürite Rh3Ni32S32 Sugakiite Cu(Fe,Ni)8S8 Paragenesis Pentlandite is the most common terrestrial nickel sulfide. It typically forms during cooling of a sulfide melt. These sulfide melts, in turn, are typically formed during the evolution of a silicate melt. Because nickel is a chalcophile element, it has preference for (i.e. it "partitions into") sulfide phases. In sulfide undersaturated melts, nickel substitutes for other transition metals within ferromagnesian minerals, the most common being olivine, as well as nickeliferous varieties of amphibole, biotite, pyroxene and spinel. Nickel substitutes most readily for Fe2+ and Co2+ because or their similarity in size and charge. 
In sulfide saturated melts, nickel behaves as a chalcophile element and partitions strongly into the sulfide phase. Because most nickel behaves as a compatible element in igneous differentiation processes, the formation of nickel-bearing sulfides is essentially restricted to sulfide saturated mafic and ultramafic melts. Minor amounts of nickel sulfides are found in mantle peridotites. The behaviour of sulfide melts is complex and is affected by copper, nickel, iron, and sulfur ratios. Typically, above 1100 °C, only one sulfide melt exists. Upon cooling to 1000 °C, a solid containing mostly Fe and minor amounts of Ni and Cu is formed. This phase is called monosulfide solid solution (MSS), and is unstable at low temperatures decomposing to mixtures of pentlandite and pyrrhotite, and (rarely) pyrite. It is only upon cooling past ~ (dependent on composition) that the MSS undergoes exsolution. A separate phase, usually a copper-rich sulfide liquid may also form, giving rise to chalcopyrite upon cooling. These phases typically form aphanitic equigranular massive sulfides, or are present as disseminated sulfides within rocks composed mostly of silicates. Pristine magmatic massive sulfide are rarely preserved as most deposits of nickeliferous sulfide have been metamorphosed. Metamorphism at a grade equal to, or higher than, greenschist facies will cause solid massive sulfides to deform in a ductile fashion and to travel some distance into the country rock and along structures. Upon cessation of metamorphism, the sulfides may inherit a foliated or sheared texture, and typically develop bright, equigranular to globular aggregates of porphyroblastic pentlandite crystals known colloquially as "fish scales". Metamorphism may also alter the concentration of nickel and the Ni:Fe ratio and Ni:S ratio of the sulfides. In this case, pentlandite may be replaced by millerite, and rarely heazlewoodite. Metamorphism may also be associated with metasomatism, and it is particularly common for arsenic to react with pre-existing sulfides, producing nickeline, gersdorffite and other Ni–Co arsenides. Occurrence Pentlandite is found within the lower margins of mineralized layered intrusions, the best examples being the Bushveld igneous complex, South Africa, the Voisey's Bay troctolite intrusive complex in Canada, the Duluth gabbro, in North America, and various other localities throughout the world. In these locations, pentlandite is considered an important nickel ore. Pentlandite is also the dominant ore mineral occurring in Kambalda type komatiitic nickel ore deposits, the prime example of which can be found in the Yilgarn Craton of Western Australia. Similar deposits exist at Nkomati, Namibia, in the Thompson Belt, Canada, and a few examples from Brazil. Pentlandite, but primarily chalcopyrite and PGEs, are also obtained from the supergiant Norilsk nickel deposit, in trans-Siberian Russia. The Sudbury Basin in Ontario, Canada, is associated with a large meteorite impact crater. The pentlandite-chalcopyrite-pyrrhotite ore around the Sudbury Structure formed from sulfide melts that segregated from the melt sheet produced by the impact. Gallery
Physical sciences
Minerals
Earth science
154738
https://en.wikipedia.org/wiki/Hydrogen%20sulfide
Hydrogen sulfide
Hydrogen sulfide is a chemical compound with the formula . It is a colorless chalcogen-hydride gas, and is poisonous, corrosive, and flammable, with trace amounts in ambient atmosphere having a characteristic foul odor of rotten eggs. Swedish chemist Carl Wilhelm Scheele is credited with having discovered the chemical composition of purified hydrogen sulfide in 1777. Hydrogen sulfide is toxic to humans and most other animals by inhibiting cellular respiration in a manner similar to hydrogen cyanide. When it is inhaled or its salts are ingested in high amounts, damage to organs occurs rapidly with symptoms ranging from breathing difficulties to convulsions and death. Despite this, the human body produces small amounts of this sulfide and its mineral salts, and uses it as a signalling molecule. Hydrogen sulfide is often produced from the microbial breakdown of organic matter in the absence of oxygen, such as in swamps and sewers; this process is commonly known as anaerobic digestion, which is done by sulfate-reducing microorganisms. It also occurs in volcanic gases, natural gas deposits, and sometimes in well-drawn water. Properties Hydrogen sulfide is slightly denser than air. A mixture of and air can be explosive. Oxidation In general, hydrogen sulfide acts as a reducing agent, as indicated by its ability to reduce sulfur dioxide in the Claus process. Hydrogen sulfide burns in oxygen with a blue flame to form sulfur dioxide () and water: If an excess of oxygen is present, sulfur trioxide () is formed, which quickly hydrates to sulfuric acid: Acid-base properties It is slightly soluble in water and acts as a weak acid (pKa = 6.9 in 0.01–0.1 mol/litre solutions at 18 °C), giving the hydrosulfide ion . Hydrogen sulfide and its solutions are colorless. When exposed to air, it slowly oxidizes to form elemental sulfur, which is not soluble in water. The sulfide anion is not formed in aqueous solution. Extreme temperatures and pressures At pressures above 90 GPa (gigapascal), hydrogen sulfide becomes a metallic conductor of electricity. When cooled below a critical temperature this high-pressure phase exhibits superconductivity. The critical temperature increases with pressure, ranging from 23 K at 100 GPa to 150 K at 200 GPa. If hydrogen sulfide is pressurized at higher temperatures, then cooled, the critical temperature reaches , the highest accepted superconducting critical temperature as of 2015. By substituting a small part of sulfur with phosphorus and using even higher pressures, it has been predicted that it may be possible to raise the critical temperature to above and achieve room-temperature superconductivity. Hydrogen sulfide decomposes without a presence of a catalyst under atmospheric pressure around 1200 °C into hydrogen and sulfur. Tarnishing Hydrogen sulfide reacts with metal ions to form metal sulfides, which are insoluble, often dark colored solids. Lead(II) acetate paper is used to detect hydrogen sulfide because it readily converts to lead(II) sulfide, which is black. Treating metal sulfides with strong acid or electrolysis often liberates hydrogen sulfide. Hydrogen sulfide is also responsible for tarnishing on various metals including copper and silver; the chemical responsible for black toning found on silver coins is silver sulfide (), which is produced when the silver on the surface of the coin reacts with atmospheric hydrogen sulfide. 
Coins that have been subject to toning by hydrogen sulfide and other sulfur-containing compounds may have the toning add to the numismatic value of a coin based on aesthetics, as the toning may produce thin-film interference, resulting in the coin taking on an attractive coloration. Coins can also be intentionally treated with hydrogen sulfide to induce toning, though artificial toning can be distinguished from natural toning, and is generally criticised among collectors. Production Hydrogen sulfide is most commonly obtained by its separation from sour gas, which is natural gas with a high content of . It can also be produced by treating hydrogen with molten elemental sulfur at about 450 °C. Hydrocarbons can serve as a source of hydrogen in this process. The very favorable thermodynamics for the hydrogenation of sulfur implies that the dehydrogenation (or cracking) of hydrogen sulfide would require very high temperatures. A standard lab preparation is to treat ferrous sulfide with a strong acid in a Kipp generator: For use in qualitative inorganic analysis, thioacetamide is used to generate : Many metal and nonmetal sulfides, e.g. aluminium sulfide, phosphorus pentasulfide, silicon disulfide liberate hydrogen sulfide upon exposure to water: This gas is also produced by heating sulfur with solid organic compounds and by reducing sulfurated organic compounds with hydrogen. It can also be produced by mixing ammonium thiocyanate to concentrated sulphuric acid and adding water to it. Biosynthesis Hydrogen sulfide can be generated in cells via enzymatic or non-enzymatic pathways. Three enzymes catalyze formation of : cystathionine γ-lyase (CSE), cystathionine β-synthetase (CBS), and 3-mercaptopyruvate sulfurtransferase (3-MST). CBS and CSE are the main proponents of biogenesis, which follows the trans-sulfuration pathway. These enzymes have been identified in a breadth of biological cells and tissues, and their activity is induced by a number of disease states. These enzymes are characterized by the transfer of a sulfur atom from methionine to serine to form a cysteine molecule. 3-MST also contributes to hydrogen sulfide production by way of the cysteine catabolic pathway. Dietary amino acids, such as methionine and cysteine serve as the primary substrates for the transulfuration pathways and in the production of hydrogen sulfide. Hydrogen sulfide can also be derived from proteins such as ferredoxins and Rieske proteins. Sulfate-reducing (resp. sulfur-reducing) bacteria generate usable energy under low-oxygen conditions by using sulfates (resp. elemental sulfur) to oxidize organic compounds or hydrogen; this produces hydrogen sulfide as a waste product. Water heaters can aid the conversion of sulfate in water to hydrogen sulfide gas. This is due to providing a warm environment sustainable for sulfur bacteria and maintaining the reaction which interacts between sulfate in the water and the water heater anode, which is usually made from magnesium metal. Signalling role in the body acts as a gaseous signaling molecule with implications for health and in diseases. Hydrogen sulfide is involved in vasodilation in animals, as well as in increasing seed germination and stress responses in plants. Hydrogen sulfide signaling is moderated by reactive oxygen species (ROS) and reactive nitrogen species (RNS). 
H2S has been shown to interact with the NO pathway, resulting in several different cellular effects, including the inhibition of cGMP phosphodiesterases, as well as the formation of another signal called nitrosothiol. Hydrogen sulfide is also known to increase the levels of glutathione, which acts to reduce or disrupt ROS levels in cells. The field of H2S biology has advanced from environmental toxicology to investigating the roles of endogenously produced H2S in physiological conditions and in various pathophysiological states. H2S has been implicated in cancer, in Down syndrome and in vascular disease. At lower concentrations, it stimulates mitochondrial function via multiple mechanisms, including direct electron donation. However, at higher concentrations, it inhibits Complex IV of the mitochondrial electron transport chain, which effectively reduces ATP generation and biochemical activity within cells. Uses Production of sulfur Hydrogen sulfide is mainly consumed as a precursor to elemental sulfur. This conversion, called the Claus process, involves partial oxidation to sulfur dioxide. The latter reacts with hydrogen sulfide to give elemental sulfur. The conversion is catalyzed by alumina. Production of thioorganic compounds Many fundamental organosulfur compounds are produced using hydrogen sulfide. These include methanethiol, ethanethiol, and thioglycolic acid. Hydrosulfides can be used in the production of thiophenol. Production of metal sulfides Upon combining with alkali metal bases, hydrogen sulfide converts to alkali hydrosulfides such as sodium hydrosulfide and sodium sulfide. Sodium sulfides are used in the paper-making industry. Specifically, salts of HS− break bonds between the lignin and cellulose components of pulp in the Kraft process. As indicated above, many metal ions react with hydrogen sulfide to give the corresponding metal sulfides. Oxidic ores are sometimes treated with hydrogen sulfide to give the corresponding metal sulfides, which are more readily purified by flotation. Metal parts are sometimes passivated with hydrogen sulfide. Catalysts used in hydrodesulfurization are routinely activated with hydrogen sulfide. Hydrogen sulfide was a reagent in the qualitative inorganic analysis of metal ions. In these analyses, heavy metal (and nonmetal) ions (e.g., Pb(II), Cu(II), Hg(II), As(III)) are precipitated from solution upon exposure to H2S. The components of the resulting solid are then identified by their reactivity. Miscellaneous applications Hydrogen sulfide is used to separate deuterium oxide, or heavy water, from normal water via the Girdler sulfide process. A suspended animation-like state has been induced in rodents with the use of hydrogen sulfide, resulting in hypothermia with a concomitant reduction in metabolic rate. Oxygen demand was also reduced, thereby protecting against hypoxia. In addition, hydrogen sulfide has been shown to reduce inflammation in various situations. Occurrence Volcanoes and some hot springs (as well as cold springs) emit some H2S. Hydrogen sulfide can be present naturally in well water, often as a result of the action of sulfate-reducing bacteria. Hydrogen sulfide is produced by the human body in small quantities through the bacterial breakdown of sulfur-containing proteins in the intestinal tract; it therefore contributes to the characteristic odor of flatulence. It is also produced in the mouth (halitosis). A portion of global H2S emissions is due to human activity.
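Because the Claus process mentioned above is essentially a mass-balance exercise (roughly one third of the H2S is burned to SO2, which then reacts with the remaining two thirds to give elemental sulfur and water), the theoretical sulfur yield can be sketched in a few lines. This is a simplified illustration with an assumed overall recovery fraction, not a model of a real plant, which runs multiple catalytic stages and tail-gas treatment.

```python
# Simplified Claus-process mass balance (illustrative, not a plant model):
#   step 1: 2 H2S + 3 O2 -> 2 SO2 + 2 H2O   (one third of the feed is burned)
#   step 2: 2 H2S + SO2  -> 3 S   + 2 H2O   (Claus reaction over alumina)
# Overall, each mole of H2S can yield at most one mole of elemental sulfur.

M_H2S = 34.08   # g/mol
M_S = 32.06     # g/mol

def sulfur_yield(feed_h2s_tonnes: float, recovery: float = 0.97) -> float:
    """Tonnes of elemental sulfur from a given H2S feed.

    `recovery` is an assumed overall recovery fraction used only for this
    sketch; modern plants with tail-gas treatment can do better.
    """
    theoretical = feed_h2s_tonnes * (M_S / M_H2S)   # ~0.94 t S per t H2S
    return theoretical * recovery

print(f"100 t of H2S -> about {sulfur_yield(100.0):.0f} t of sulfur")
```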
By far the largest industrial source of H2S is petroleum refineries: the hydrodesulfurization process liberates sulfur from petroleum by the action of hydrogen. The resulting H2S is converted to elemental sulfur by partial combustion via the Claus process, which is a major source of elemental sulfur. Other anthropogenic sources of hydrogen sulfide include coke ovens, paper mills (using the Kraft process), tanneries and sewerage. H2S arises virtually anywhere that elemental sulfur comes into contact with organic material, especially at high temperatures. Depending on environmental conditions, it is responsible for the deterioration of materials through the action of some sulfur-oxidizing microorganisms; this is called biogenic sulfide corrosion. In 2011 it was reported that increased concentrations of H2S were observed in Bakken formation crude, possibly due to oil field practices, and that they presented challenges such as "health and environmental risks, corrosion of wellbore, added expense with regard to materials handling and pipeline equipment, and additional refinement requirements". Besides living near gas and oil drilling operations, ordinary citizens can be exposed to hydrogen sulfide by being near waste water treatment facilities, landfills and farms with manure storage. Exposure occurs through breathing contaminated air or drinking contaminated water. In municipal waste landfill sites, the burial of organic material rapidly leads to anaerobic digestion within the waste mass and, with the humid atmosphere and relatively high temperature that accompany biodegradation, biogas is produced as soon as the air within the waste mass has been depleted. If there is a source of sulfate-bearing material, such as plasterboard or natural gypsum (calcium sulfate dihydrate), sulfate-reducing bacteria convert this to hydrogen sulfide under anaerobic conditions. These bacteria cannot survive in air, but the moist, warm, anaerobic conditions of buried waste that contains a rich source of carbon – in inert landfills, the paper and glue used in the fabrication of products such as plasterboard can provide that carbon – are an excellent environment for the formation of hydrogen sulfide. In industrial anaerobic digestion processes, such as waste water treatment or the digestion of organic waste from agriculture, hydrogen sulfide can be formed from the reduction of sulfate and the degradation of amino acids and proteins within organic compounds. Sulfates are relatively non-inhibitory to methane-forming bacteria but can be reduced to H2S by sulfate-reducing bacteria, of which there are several genera. Removal from water A number of processes have been designed to remove hydrogen sulfide from drinking water. Continuous chlorination For levels up to 75 mg/L, chlorine is used in the purification process as an oxidizing chemical to react with hydrogen sulfide. This reaction yields insoluble solid sulfur. Usually the chlorine used is in the form of sodium hypochlorite. Aeration For concentrations of hydrogen sulfide less than 2 mg/L, aeration is an ideal treatment process. Oxygen is added to the water, and the oxygen reacts with the hydrogen sulfide to produce odorless sulfate. Nitrate addition Calcium nitrate can be used to prevent hydrogen sulfide formation in wastewater streams. Removal from fuel gases Hydrogen sulfide is commonly found in raw natural gas and biogas. It is typically removed by amine gas treating technologies.
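The drinking-water treatment options above are essentially chosen by concentration band, so the decision logic can be summarized in a few lines of code. The thresholds are the ones quoted in the text (aeration below about 2 mg/L, chlorination up to about 75 mg/L); the function name and the handling of higher concentrations are illustrative assumptions, not guidance from any standard.

```python
def suggest_h2s_treatment(concentration_mg_per_l: float) -> str:
    """Illustrative mapping from dissolved H2S concentration to the
    treatment options described in the text. Not engineering advice."""
    if concentration_mg_per_l < 2.0:
        return "aeration (oxygen converts H2S to odorless sulfate)"
    if concentration_mg_per_l <= 75.0:
        return "continuous chlorination, e.g. with sodium hypochlorite"
    return "above the quoted ranges; specialist treatment would be needed"

for level in (0.5, 10.0, 120.0):   # example concentrations in mg/L
    print(f"{level:6.1f} mg/L -> {suggest_h2s_treatment(level)}")
```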
In such processes, the hydrogen sulfide is first converted to an ammonium salt, whereas the natural gas is unaffected. The bisulfide anion is subsequently regenerated by heating of the amine sulfide solution. Hydrogen sulfide generated in this process is typically converted to elemental sulfur using the Claus process. Safety The underground mine gas term for foul-smelling hydrogen sulfide-rich gas mixtures is stinkdamp. Hydrogen sulfide is a highly toxic and flammable gas (flammable range: 4.3–46%). It can poison several systems in the body, although the nervous system is most affected. The toxicity of H2S is comparable with that of carbon monoxide. It binds with iron in the mitochondrial cytochrome enzymes, thus preventing cellular respiration. Its toxic properties were described in detail in 1843 by Justus von Liebig. Even before hydrogen sulfide was discovered, Italian physician Bernardino Ramazzini hypothesized in his 1713 book De Morbis Artificum Diatriba that the occupational diseases of sewer workers and the blackening of coins in their clothes might be caused by an unknown, invisible, volatile acid (moreover, in the late 18th century, toxic gas emanating from the Paris sewers became a problem for citizens and authorities). Although very pungent at first (it smells like rotten eggs), it quickly deadens the sense of smell, creating temporary anosmia, so victims may be unaware of its presence until it is too late. Safe handling procedures are provided by its safety data sheet (SDS). Low-level exposure Since hydrogen sulfide occurs naturally in the body, the environment, and the gut, enzymes exist to metabolize it. At some threshold level, believed to average around 300–350 ppm, the oxidative enzymes become overwhelmed. Many personal safety gas detectors, such as those used by utility, sewage and petrochemical workers, are set to alarm at as low as 5 to 10 ppm and to go into high alarm at 15 ppm. Metabolism causes oxidation to sulfate, which is harmless. Hence, low levels of hydrogen sulfide may be tolerated indefinitely. Exposure to lower concentrations can result in eye irritation, a sore throat and cough, nausea, shortness of breath, and fluid in the lungs. These effects are believed to be due to hydrogen sulfide combining with alkali present in moist surface tissues to form sodium sulfide, a caustic. These symptoms usually subside in a few weeks. Long-term, low-level exposure may result in fatigue, loss of appetite, headaches, irritability, poor memory, and dizziness. Chronic exposure to low levels of H2S (around 2 ppm) has been implicated in increased miscarriage and reproductive health issues among Russian and Finnish wood pulp workers, but the reports had not (as of 1995) been replicated. High-level exposure Short-term, high-level exposure can induce immediate collapse, with loss of breathing and a high probability of death. If death does not occur, high exposure to hydrogen sulfide can lead to cortical pseudolaminar necrosis, degeneration of the basal ganglia and cerebral edema. Although respiratory paralysis may be immediate, it can also be delayed up to 72 hours. Inhalation of H2S resulted in about 7 workplace deaths per year in the U.S. (2011–2017 data), second only to carbon monoxide (17 deaths per year) for workplace chemical inhalation deaths. Exposure thresholds Exposure limits stipulated by the United States government: 10 ppm REL-Ceiling (NIOSH): recommended permissible exposure ceiling (the recommended level that must not be exceeded, except once for 10 min.
in an 8-hour shift, if no other measurable exposure occurs) 20 ppm PEL-Ceiling (OSHA): permissible exposure ceiling (the level that must not be exceeded, except once for 10 min. in an 8-hour shift, if no other measurable exposure occurs) 50 ppm PEL-Peak (OSHA): peak permissible exposure (the level that must never be exceeded) 100 ppm IDLH (NIOSH): immediately dangerous to life and health (the level that interferes with the ability to escape) 0.00047 ppm or 0.47 ppb is the odor threshold, the point at which 50% of a human panel can detect the presence of an odor without being able to identify it. 10–20 ppm is the borderline concentration for eye irritation. 50–100 ppm leads to eye damage. At 100–150 ppm the olfactory nerve is paralyzed after a few inhalations, and the sense of smell disappears, often together with awareness of danger. 320–530 ppm leads to pulmonary edema with the possibility of death. 530–1000 ppm causes strong stimulation of the central nervous system and rapid breathing, leading to loss of breathing. 800 ppm is the lethal concentration for 50% of humans for 5 minutes' exposure (LC50). Concentrations over 1000 ppm cause immediate collapse with loss of breathing, even after inhalation of a single breath. Treatment Treatment involves immediate inhalation of amyl nitrite, injections of sodium nitrite, or administration of 4-dimethylaminophenol in combination with inhalation of pure oxygen, administration of bronchodilators to overcome eventual bronchospasm, and in some cases hyperbaric oxygen therapy (HBOT). HBOT has clinical and anecdotal support. Incidents Hydrogen sulfide was used by the British Army as a chemical weapon during World War I. It was not considered to be an ideal war gas, partially due to its flammability and because the distinctive smell could be detected from even a small leak, alerting the enemy to the presence of the gas. It was nevertheless used on two occasions in 1916 when other gases were in short supply. On September 2, 2005, a leak in the propeller room of a Royal Caribbean Cruise Liner docked in Los Angeles resulted in the deaths of 3 crewmen due to a sewage line leak. As a result, all such compartments are now required to have a ventilation system. A dump of toxic waste containing hydrogen sulfide is believed to have caused 17 deaths and thousands of illnesses in Abidjan, on the West African coast, in the 2006 Côte d'Ivoire toxic waste dump. In September 2008, three workers were killed and two suffered serious injury, including long term brain damage, at a mushroom growing company in Langley, British Columbia. A valve to a pipe that carried chicken manure, straw and gypsum to the compost fuel for the mushroom growing operation became clogged, and as workers unclogged the valve in a confined space without proper ventilation the hydrogen sulfide that had built up due to anaerobic decomposition of the material was released, poisoning the workers in the surrounding area. An investigator said there could have been more fatalities if the pipe had been fully cleared and/or if the wind had changed directions. In 2014, levels of hydrogen sulfide as high as 83 ppm were detected at a recently built mall in Thailand called Siam Square One at the Siam Square area. Shop tenants at the mall reported health complications such as sinus inflammation, breathing difficulties and eye irritation. After investigation it was determined that the large amount of gas originated from imperfect treatment and disposal of waste water in the building. 
In 2014, hydrogen sulfide gas killed workers at the Promenade shopping center in North Scottsdale, Arizona, USA after climbing into 15 ft deep chamber without wearing personal protective gear. "Arriving crews recorded high levels of hydrogen cyanide and hydrogen sulfide coming out of the sewer." In November 2014, a substantial amount of hydrogen sulfide gas shrouded the central, eastern and southeastern parts of Moscow. Residents living in the area were urged to stay indoors by the emergencies ministry. Although the exact source of the gas was not known, blame had been placed on a Moscow oil refinery. In June 2016, a mother and her daughter were found dead in their still-running 2006 Porsche Cayenne SUV against a guardrail on Florida's Turnpike, initially thought to be victims of carbon monoxide poisoning. Their deaths remained unexplained as the medical examiner waited for results of toxicology tests on the victims, until urine tests revealed that hydrogen sulfide was the cause of death. A report from the Orange-Osceola Medical Examiner's Office indicated that toxic fumes came from the Porsche's starter battery, located under the front passenger seat. In January 2017, three utility workers in Key Largo, Florida, died one by one within seconds of descending into a narrow space beneath a manhole cover to check a section of paved street. In an attempt to save the men, a firefighter who entered the hole without his air tank (because he could not fit through the hole with it) collapsed within seconds and had to be rescued by a colleague. The firefighter was airlifted to Jackson Memorial Hospital and later recovered. A Monroe County Sheriff officer initially determined that the space contained hydrogen sulfide and methane gas produced by decomposing vegetation. On May 24, 2018, two workers were killed, another seriously injured, and 14 others hospitalized by hydrogen sulfide inhalation at a Norske Skog paper mill in Albury, New South Wales. An investigation by SafeWork NSW found that the gas was released from a tank used to hold process water. The workers were exposed at the end of a 3-day maintenance period. Hydrogen sulfide had built up in an upstream tank, which had been left stagnant and untreated with biocide during the maintenance period. These conditions allowed sulfate-reducing bacteria to grow in the upstream tank, as the water contained small quantities of wood pulp and fiber. The high rate of pumping from this tank into the tank involved in the incident caused hydrogen sulfide gas to escape from various openings around its top when pumping was resumed at the end of the maintenance period. The area above it was sufficiently enclosed for the gas to pool there, despite not being identified as a confined space by Norske Skog. One of the workers who was killed was exposed while investigating an apparent fluid leak in the tank, while the other who was killed and the worker who was badly injured were attempting to rescue the first after he collapsed on top of it. In a resulting criminal case, Norske Skog was accused of failing to ensure the health and safety of its workforce at the plant to a reasonably practicable extent. It pleaded guilty, and was fined AU$1,012,500 and ordered to fund the production of an anonymized educational video about the incident. In October 2019, an Odessa, Texas employee of Aghorn Operating Inc. and his wife were killed due to a water pump failure. Produced water with a high concentration of hydrogen sulfide was released by the pump. 
The worker died while responding to an automated phone call he had received alerting him to a mechanical failure in the pump, while his wife died after driving to the facility to check on him. A CSB investigation cited lax safety practices at the facility, such as an informal lockout-tagout procedure and a nonfunctioning hydrogen sulfide alert system. Suicides The gas, produced by mixing certain household ingredients, was used in a suicide wave in 2008 in Japan. The wave prompted staff at Tokyo's suicide prevention center to set up a special hotline during "Golden Week", as they received an increase in calls from people wanting to kill themselves during the annual May holiday. As of 2010, this phenomenon has occurred in a number of US cities, prompting warnings to those arriving at the site of the suicide. These first responders, such as emergency services workers or family members are at risk of death or injury from inhaling the gas, or by fire. Local governments have also initiated campaigns to prevent such suicides. In 2020, ingestion was used as a suicide method by Japanese pro wrestler Hana Kimura. In 2024, Lucy-Bleu Knight, stepdaughter of famed musician Slash, also used ingestion to commit suicide. Hydrogen sulfide in the natural environment Microbial: The sulfur cycle Hydrogen sulfide is a central participant in the sulfur cycle, the biogeochemical cycle of sulfur on Earth. In the absence of oxygen, sulfur-reducing and sulfate-reducing bacteria derive energy from oxidizing hydrogen or organic molecules by reducing elemental sulfur or sulfate to hydrogen sulfide. Other bacteria liberate hydrogen sulfide from sulfur-containing amino acids; this gives rise to the odor of rotten eggs and contributes to the odor of flatulence. As organic matter decays under low-oxygen (or hypoxic) conditions (such as in swamps, eutrophic lakes or dead zones of oceans), sulfate-reducing bacteria will use the sulfates present in the water to oxidize the organic matter, producing hydrogen sulfide as waste. Some of the hydrogen sulfide will react with metal ions in the water to produce metal sulfides, which are not water-soluble. These metal sulfides, such as ferrous sulfide FeS, are often black or brown, leading to the dark color of sludge. Several groups of bacteria can use hydrogen sulfide as fuel, oxidizing it to elemental sulfur or to sulfate by using dissolved oxygen, metal oxides (e.g., iron oxyhydroxides and manganese oxides), or nitrate as electron acceptors. The purple sulfur bacteria and the green sulfur bacteria use hydrogen sulfide as an electron donor in photosynthesis, thereby producing elemental sulfur. This mode of photosynthesis is older than the mode of cyanobacteria, algae, and plants, which uses water as electron donor and liberates oxygen. The biochemistry of hydrogen sulfide is a key part of the chemistry of the iron-sulfur world. In this model of the origin of life on Earth, geologically produced hydrogen sulfide is postulated as an electron donor driving the reduction of carbon dioxide. Animals Hydrogen sulfide is lethal to most animals, but a few highly specialized species (extremophiles) do thrive in habitats that are rich in this compound. In the deep sea, hydrothermal vents and cold seeps with high levels of hydrogen sulfide are home to a number of extremely specialized lifeforms, ranging from bacteria to fish. Because of the absence of sunlight at these depths, these ecosystems rely on chemosynthesis rather than photosynthesis. 
Freshwater springs rich in hydrogen sulfide are mainly home to invertebrates, but also include a small number of fish: Cyprinodon bobmilleri (a pupfish from Mexico), Limia sulphurophila (a poeciliid from the Dominican Republic), Gambusia eurystoma (a poeciliid from Mexico), and a few Poecilia (poeciliids from Mexico). Invertebrates and microorganisms in some cave systems, such as Movile Cave, are adapted to high levels of hydrogen sulfide. Interstellar and planetary occurrence Hydrogen sulfide has often been detected in the interstellar medium. It also occurs in the clouds of planets in our solar system. Mass extinctions Hydrogen sulfide has been implicated in several mass extinctions that have occurred in the Earth's past. In particular, a buildup of hydrogen sulfide in the atmosphere may have caused, or at least contributed to, the Permian–Triassic extinction event 252 million years ago. Organic residues from these extinction boundaries indicate that the oceans were anoxic (oxygen-depleted) and had species of shallow plankton that metabolized H2S. The formation of H2S may have been initiated by massive volcanic eruptions, which emitted carbon dioxide and methane into the atmosphere; these gases warmed the oceans, lowering their capacity to absorb oxygen that would otherwise oxidize H2S. The increased levels of hydrogen sulfide could have killed oxygen-generating plants as well as depleted the ozone layer, causing further stress. Small H2S blooms have been detected in modern times in the Dead Sea and in the Atlantic Ocean off the coast of Namibia.
Physical sciences
Hydrogen compounds
Chemistry
154742
https://en.wikipedia.org/wiki/Dubai%20International%20Airport
Dubai International Airport
Dubai International Airport is the primary international airport serving Dubai, United Arab Emirates, and is the world's busiest airport by international passenger traffic as of 2023. It is also the busiest airport in the Middle East as of 2023, the second-busiest airport in the world by passenger traffic as of 2023, the busiest airport for Airbus A380 and Boeing 777 movements, and the airport with the highest average number of passengers per flight. In 2023, the airport handled 87 million passengers and 1.81 million tonnes of cargo and registered 416,405 aircraft movements. Dubai International Airport is situated in the Al Garhoud district, east of the city center of Dubai, and is spread over a large area of land. Terminal 3 is the third-largest building in the world by floor space and the largest airport terminal in the world. In July 2019, Dubai International Airport installed the largest solar energy system among the region's airports as part of Dubai's goal of reducing the city's energy consumption by 30 per cent by 2030. Emirates Airline has its hub at Dubai International (DXB) and has its own Terminal 3, with three concourses that it shares with flydubai. The Emirates hub is the largest airline hub in the Middle East; Emirates handles 51% of all passenger traffic and accounts for approximately 42% of all aircraft movements at the airport. Dubai Airport is also the base for the low-cost carrier flydubai, which handles 13% of passenger traffic and 25% of aircraft movements at DXB. The airport has a total capacity of 90 million passengers annually. As of January 2024, over 8,000 weekly flights are operated by 100 airlines to over 262 destinations across all inhabited continents. Over 63% of travelers using the airport in 2018 were connecting passengers. In 2014, Dubai International indirectly supported over 400,000 jobs and contributed over US$26.7 billion to the economy, representing around 27% of Dubai's GDP and 21% of the employment in Dubai. Following the expansion of Al Maktoum Airport announced on 28 April 2024, Dubai International Airport will be shut down once that expansion is completed. History The history of civil aviation in Dubai started in July 1937, when an air agreement was signed for a flying boat base for aircraft of Imperial Airways, with the rental of the base at about 440 rupees per month – this included the guards' wages. The Empire Flying Boats started operating once a week, flying eastbound from the UK to Karachi and westbound to Southampton, England. By February 1938, there were four flying boats a week. In the 1940s, flying from Dubai was by flying boats operated by British Overseas Airways Corporation (BOAC), operating the Horseshoe route from Southern Africa via the Persian Gulf to Sydney. Construction Construction of the airport was ordered by the ruler of Dubai, Sheikh Rashid bin Saeed Al Maktoum, in 1959. It officially opened on 30 September 1960, at which time it was able to handle aircraft the size of a Douglas DC-3 on a runway made of compacted sand. Three turning areas, an apron and a small terminal completed the airport, which was constructed by Costain. In May 1963, construction of an asphalt runway started. This new runway, alongside the original sand runway and taxiway, opened in May 1965, together with several new extensions to the terminal building; hangars were erected, and airport and navigational aids were installed. The installation of the lighting system continued after the official opening and was completed in August 1965.
During the second half of the 1960s, several extensions, equipment upgrades such as a VHF omnidirectional range (VOR) and an instrument landing system (ILS), as well as new buildings, were constructed. By 1969, the airport was served by nine airlines flying to some 20 destinations. The inauguration on 15 May 1966 was marked by the visits of the first big jets, De Havilland Comets of Middle East Airlines and Kuwait Airways. The advent of wide-body aircraft required further airport development in the 1970s, and plans for a new terminal, runways, and taxiways capable of coping with international flights were drawn up. A new terminal building was constructed, consisting of a long three-story structure with a large enclosed floor area. A new control tower was also constructed. Expansion continued in the early 1970s, including ILS Category II equipment, lengthening of the existing runway, installation of a non-directional beacon (NDB), diesel generators, taxiways, and other works. This work made handling the Boeing 747 and Concorde possible. Several runway and apron extensions were carried out through the decade to meet growing demand. The new precision category 2 approach and runway lighting system was commissioned in 1971. The construction of the Airport Fire Station and the installation of the generators were completed in December 1971 and were fully operational in March 1972. The ruler of Dubai also commissioned and inaugurated the long-range surveillance system on 19 June 1973. With the expansion of the Airport Fire Services, it became necessary to find more suitable hangars. A hangar-style building was made available for use at the end of 1976. This building was strategically located midway between the runway ends to facilitate efficient operations. Additionally, a new building was constructed to house the Airport Maintenance Engineer, Electronics Engineering section, and Stores unit. Expansion and refurbishment of the Airport Restaurant and Transit Lounge, including a new kitchen, were completed in December 1978. The next phase of development included the construction of a new runway, which was completed three months ahead of schedule and opened in April 1984. This runway, located 360 metres north of and parallel to the existing runway, was equipped with the latest meteorological, airfield lighting, and instrument landing systems, giving the airport a Category II classification. Several extensions and upgrades were also made to the terminal facilities and supporting systems. On 23 December 1980, the airport became an ordinary member of Airports Council International (ACI). The decline of Karachi Airport is often attributed to the traffic Dubai diverted from it. During the 1980s, Dubai was a stopping point for airlines such as Air India, Cathay Pacific, Singapore Airlines, Malaysia Airlines, and others traveling between Asia and Europe that needed a refueling point in the Persian Gulf. This role was later made redundant by the opening of Russian airspace after the breakup of the Soviet Union and by the advent of longer-range aircraft introduced in the late 1980s and early 1990s, such as the Airbus A340, the Boeing 747-400 and the Boeing 777 series, which could fly between Europe and Southeast Asia nonstop. British Airways flights from Islamabad to Manchester also stopped there briefly during the 1980s for refueling and supplies. Expansion The opening of Terminal 2 in 1998 marked the first step of phase 1 of the new development master plan launched in 1997.
In the second stage, Concourse 1, named Sheikh Rashid Terminal opened in April 2000. The concourse is in length connects to the check-in area via a tunnel containing moving walkways (conveyor belt/travelators). It also contains a hotel, business center, health club, exchanges, dining and entertainment facilities, internet services, a medical center, a post office, and a prayer room. The next step was runway reconfiguration, already part of phase 2, and aprons and taxiways were expanded and strengthened in 2003–2004. In addition, the Dubai Flower Centre opened in 2005 as part of the development. The airport saw the need for this as the city is a hub for the import and export of flowers and the airport required a specialist facility since flowers need special conditions. Construction of Terminal 3 began in 2004 as the next stage of phase 2 of the development, with an estimated cost of around $4.55  billion. Completion was originally planned for 2006 but was delayed by two years. On 30 May 2008, a topping-out ceremony was conducted. The terminal became operational on 14 October 2008, with Emirates Airline (EK2926) from Jeddah, Saudi Arabia, being the first flight to arrive at the new terminal and EK843 to Doha, Qatar being the first departing flight. The terminal increased the airport's maximum annual passenger capacity by 47  million, bringing the total annual capacity to 75 million passengers. On 29 October 2010, the airport marked its 50th anniversary. The airport has seen over 402 million passengers at an average annual growth rate of 15.5% and handled over 3.87 million aircraft at an average annual growth rate of 12.4%. With the arrival of the Airbus A380, the airport made modifications costing $230 million. These included the building of 29 gates capable of handling large aircraft, five of which are in Terminal 3 and two are in Terminal 1. Other important projects at the airport include the next stage of phase 2 development, which includes the construction of Concourse 3. This will be a smaller version of Concourse 2, connected to Terminal 3. Also as part of the expansion, the airport now handles at least 75 million (an increase of 19 million) passengers per annum with the opening of Concourse 3, part of Terminal 3. However, recent communications predict a further increase to 80 million passengers with additional reassessments of existing capacities. In 2009, Terminal 2 expanded its facilities to handle 5 million (an increase of 2 million) passengers annually, taking the airport's total capacity to 62 million passengers. Terminal 2 capacity was planned to bring the total capacity of the airport from the initial 75 million passengers to 80 million passenger capacity by 2012. The Cargo Mega Terminal, which will have the capacity to handle 3 million tonnes of cargo a year, is a major development; it will be built in the long term. The completion of the mega terminal will be no later than 2018. Terminal 2 will be completely redeveloped to match the status of the other two terminals. With all of these projects completed by 2013, the airport expects to handle at least 75–80 million passengers and over 5 million tonnes of cargo. The airport's landside facilities were modified to allow the construction of two stations for the Red Line of Dubai Metro. One station was built at Terminal 1 and the other at Terminal 3. The line began service on 9 September 2009 and opened in phases over the next year. 
The second Metro line, the Green Line, runs near the Airport Free Zone and has served the airport's north-eastern area, including Terminal 2, since September 2011. With phase 2 of DXB's expansion plan complete, the airport now has three terminals and three concourses, two cargo mega terminals, an airport free zone, an expo center with three large exhibition halls, a major aircraft maintenance hub and a flower center to handle perishable goods. A phase 3, which has been included in the master plan, involves the construction of a new Concourse 4. The airport revealed its future plans in May 2011, which involve the construction of a new Concourse D for all airlines currently operating from Concourse C. Concourse D is expected to bring the total capacity of the airport to over 90 million passengers and will open in early 2016. The plan also involves Emirates solely operating from Concourse C, along with Concourses A and B. In September 2012, Dubai Airports changed the names of the concourses to make it easier for passengers to navigate the airport. Concourse 1, in which over 100 international airlines operate, became Concourse C (C1-C50). Concourse 2 became Concourse B (B1-B32) and Concourse 3 became Concourse A (A1-A24). The gates in Terminal 2 were changed and are now numbered F1 to F12. The remaining alpha-numeric sequences are being reserved for future airport facilities that are part of Dubai Airports' $7.8 billion expansion programme, including Concourse D. In December 2024, CEO Paul Griffiths declared that Dubai International Airport is rapidly expanding, with plans to enhance passenger experiences through advanced technologies like facial recognition and a focus on reducing wait times while maximizing shopping opportunities. He highlighted a $35 billion expansion of Dubai World Central, aiming to create smaller, more intimate airport experiences within a vast complex, ultimately positioning it to become the world's largest airport. Dubai's government announced the construction of a new airport in Jebel Ali, named Dubai World Central – Al Maktoum International Airport. It is expected to be the second-largest airport in the world by physical size, though not by passenger metrics. It opened on 27 June 2010; however, construction is not expected to finish until 2027. The airport is expected to be able to accommodate up to 160 million passengers. There has been an official plan to build the Dubai Metro Purple Line to connect Al Maktoum International Airport to Dubai International Airport; construction was set to begin in 2012. Concourse D opened on 24 February 2016 for all the international airlines operating out of Terminal 1. Emirates now operates from Concourses A, B, and C, all under Terminal 3, while flydubai operates from Terminal 2 (Concourse F). On 20 December 2018, the airport celebrated its one billionth passenger. In April 2024, the airport was flooded and suffered extensive damage. Air traffic Main airlines based at DXB Emirates is the largest airline operating at the airport, with an all-wide-body fleet of over 200 Airbus and Boeing aircraft based at Dubai, providing scheduled services to the Middle East, Africa, Asia, Europe, North America, South America, Australia and New Zealand. It operates out of Terminal 3, Concourses A, B and C. Emirates SkyCargo, a subsidiary of Emirates, operates scheduled all-cargo services between Dubai and the rest of the world.
Flydubai is a low-cost airline planning to operate over 100 aircraft on scheduled passenger services from Dubai to the Middle East, Africa, Europe and South Asia. It operates from Terminal 2 and, since December 2018, also from Terminal 3 for selected destinations. Recreational flying to Dubai is catered for by the Dubai Aviation Club, which undertakes flying training for private pilots and provides facilities for private owners. The Government of Dubai provides short- and long-range search and rescue services, police support, medical evacuation, and general-purpose flights for the airport and all VIP flights to the airport. Statistics Infrastructure Dubai International Airport was conceptualized to function as Dubai's primary airport and the region's busiest for the foreseeable future, without the need for relocation or the building of another airport as passenger figures increased. The site was chosen close to the city of Dubai in order to attract passengers who would otherwise travel to the busier Sharjah International Airport. The planned location originally was Jebel Ali. The original master plan for the existing airport initially involved a dual-terminal, single-runway configuration over two phases, with provisions for another two passenger terminals in the near future. Phase 1 included the construction of the first passenger terminal, the first runway, 70 aircraft parking bays, and support facilities and structures, including a large maintenance hangar, the first fire station, workshops and administrative offices, an airfreight complex, two cargo agents' buildings, in-flight catering kitchens and a control tower. Construction for the second phase would commence immediately after the completion of Phase 1 and include the second runway, 50 new aircraft parking bays in addition to the existing 70 bays, a second fire station, and a third cargo agent building. The third phase included the construction of a new terminal (now parts of Terminal 1's main building and Concourse C) and an additional 60 parking bays, as well as a new aircraft maintenance facility. Then, in the early 2000s, a new master plan was introduced, which began the development of the current concourses and terminal infrastructure. Paul Griffiths, Dubai Airports' CEO, in an interview with Vision magazine cited plans to build infrastructure to support the expansion of Emirates and the budget airline flydubai and to ascend the ranks of global aviation hubs. Control tower The airport traffic control tower (ATCT) was constructed as part of phase two of the then-current development plan. Terminals Dubai International Airport has three terminals. Terminal 1 has one concourse (Concourse D), Terminal 2 is set apart from the other two main buildings, and Terminal 3 is divided into Concourses A, B, and C. The cargo terminal is capable of handling 3 million tonnes of cargo annually, and a general aviation terminal (GAT) is close by. Passenger terminals Dubai Airport has three passenger terminals. Terminals 1 and 3 are directly connected with a common transit area, with airside passengers being able to move freely between the terminals without going through immigration, while Terminal 2 is on the opposite side of the airport. For transiting passengers, a shuttle service runs between the terminals, with a journey time of around 20 minutes from Terminal 2 to Terminal 1 and 30 minutes to Terminal 3. Passengers in Terminal 3 who need to transfer between Concourse A and the rest of the terminal have to travel via an automated people mover.
Also after early 2016 when the construction of Concourse D was done, there is now an automated people mover between concourse D and Terminal 1. Situated beside Terminal 2 is the Executive Flights Terminal, which has its own check-in facilities for premium passengers and where transportation to aircraft in any of the other terminals is by personal buggy. The three passenger terminals have a total handling capacity of around 80 million passengers a year. Terminals 1 and 3 cater to international passengers, whilst Terminal 2 is for budget passengers and passengers flying to the subcontinent and Persian Gulf region; Terminals 1 and 3 handle 85% of the passenger traffic, and the Executive Flights terminal is for the higher-end travelers and important guests. Terminal 1 Terminal 1 has a capacity of 45 million passengers. It is used by over 100 airlines and is connected to Concourse D by an automated people mover. It is spread over an area of and offers 221 check-in counters. The Terminal was originally built within the airport's old building to handle 18 million passengers; however, with extreme congestion at the terminal, the airport was forced to expand the terminal to accommodate the opening of 28 remote gates. Over the years, more mobile gates were added to the airport bringing the total as of 2010 to 28. In 2013, Dubai Airports announced a major renovation for Terminal 1 and Concourse C. The renovations include upgraded baggage systems, replacement of check-in desks and a more spacious departure hall. Arrivals will also see improvements to help reduce waiting times. The renovation was completed by the middle of 2015. Concourse D Planning began for further expansion of Dubai Airport, with the construction of Terminal 4, it was revealed on the day Emirates completed its phased operations at the new Terminal 3, on 14 November 2008. According to Dubai Airport officials, plans for Terminal 4 had begun and extensions would be made to Terminal 3. These are required to bring the capacity of the airport to 80–90 million passengers a year by 2015. In May 2011, Paul Griffiths, chief executive of Dubai Airports revealed the Dubai Airport masterplan. It involves the construction of Concourse D (previously Terminal 4). With a capacity of 15 million, it would bring the total capacity of the airport to 90 million passengers by 2018—an increase of 15 million. It also will see Emirates take over the operation at Concourse C, along with Concourse A and B which it will already be operating. All remaining airlines will shift to Concourse D, or move to Al Maktoum International Airport. The airport projects that international passenger and cargo traffic will increase at an average annual growth rate of 7.2% and 6.7%, respectively, and that by 2020 passenger numbers at Dubai International Airport will reach 98.5 million and cargo volumes will top 4.1 million tonnes. Concourse D will have a capacity of 15 million passengers, include 17 gates and will be connected to Terminal 1 via an automated people mover. On 6 February 2016, members of the public were invited to trial the concourse in preparation for its opening. On Wednesday, 24 February 2016, Concourse D officially opened with the first British Airways flight arriving at gate D8. Concourse D and Terminal 1 reopened on 24 June 2021 following a year's closure due to the COVID-19 pandemic. 
Terminal 2 Terminal 2, built in 1998, has an area of and has a capacity of 10 million as of 2013, after several, decent reconstructions and a major expansion in 2012 which saw capacity double. It is used by over 50 airlines, mainly operating in the Persian Gulf region. Most flights operate to India, Saudi Arabia, Iran, Afghanistan and Pakistan. In June 2009, Terminal 2 became the hub of Air India Express and flydubai, and the terminal houses the airline's corporate head office. Terminal 2 has undergone a major refurbishment recently, extending check-in and boarding facilities, changing the interior and exterior décor, and offering more dining choices to passengers. Capacity was increased to allow for 10 million passengers, an increase of 5 million. The terminal has now increased the number of facilities available to passengers. Check-in counters have increased to 37. The boarding area is more spacious, with more natural light. Also, the new open boarding gates allow several flights to board simultaneously, improving passenger and aircraft movements. There are a total of 43 remote stands at the terminal. However, passengers cannot move between Terminal 2 to 1 or from 2 to 3 and vice versa inside the airport. They have to make use of Taxi services or public transport available outside. The Dubai duty-free shopping area covers in departures and in arrivals. The extension included a larger arrivals hall as well. Terminal 2 has no jetbridges and so passengers are bussed to the aircraft at gates F1-F12. Terminal 3 The partly underground Terminal 3 was built at a cost of US$4.5 billion, exclusively for Emirates, and has a capacity of 65 million passengers. The terminal has 20 Airbus A380 gates at Concourse A and 5 at Concourse B and 2 at Concourse C. It was announced on 6 September 2012 that Terminal 3 would no longer be Emirates-exclusive, as Emirates and Qantas had set up an extensive code-sharing agreement. Qantas would be the second and only one of two airlines to fly in and out of Terminal 3. This deal also allows Qantas to use the A380 dedicated concourse. Qantas services to and from Dubai ceased in 2018 in favour of a Singapore stopover instead. flydubai, Emirates' low cost subsidiary also currently operates certain selected routes, including most European destinations, to and from Terminal 3. In March 2023, United began services from Newark to Dubai, operating out of Terminal 3, becoming the only airline other than Emirates and flydubai to currently operate out of the terminal. Upon completion, Terminal 3 was the largest building in the world by floor space, with over of space, capable of handling 60 million passengers in a year. A large part is located under the taxiway area and is directly connected to Concourse B: the departure and arrival halls in the new structure are beneath the airport's apron. Concourse A is connected to the terminal via a Terminal 3 APM. It has been operational since 14 October 2008, and opened in four phases to avoid collapse of baggage handling and other IT systems. The building includes a multi-level underground structure, first and business class lounges, restaurants, 180 check-in counters, and 2,600 car-parking spaces. The terminal offers more than double the previous retail area of Concourse C, by adding about and Concourse B's of shopping facilities. In arrivals, the terminal contains 72 immigration counters and 14 baggage carousels. 
The baggage handling system—the largest system and also the deepest in the world—has a capacity to handle 8,000 bags per hour. The system includes 21 screening injection points, 49 make-up carousels, conveyor belts capable of handling 15,000 items per hour, and 4,500 early baggage storage positions. Concourse A Concourse A, part of Terminal 3, opened on 2 January 2013 and was built at a cost of US$3.3 billion. It has a capacity of 19 million passengers and is connected to the two major public levels of Terminal 3 via the Terminal 3 APM, in addition to the vehicular and baggage-handling-system utility tunnels for further transfers. The building follows the characteristic shape of Concourse B, rising from the apron level, and accommodates 20 air bridge gates, all of which are capable of handling the Airbus A380-800. There are also 6 remote lounges for passengers departing on flights parked at 13 remote stands. The gates in Concourse A are labeled A1–A24. Gates A6, A7, A18, and A19 are not equipped with jetbridges, so passengers departing from these gates are bussed to the aircraft. The concourse includes one 4-star hotel and one 5-star hotel, first- and business-class lounges, and duty-free areas. The concourse allows for multi-level boarding and boasts the largest first and business class lounges in the world. Each lounge has its own dedicated floor offering direct aircraft access from the lounge. There is also a total of 14 cafes and restaurants in the concourse. Concourse B Concourse B is directly connected to Terminal 3 and is dedicated exclusively to Emirates. The terminal has 10 floors (4 basement levels, a ground floor, and 5 floors above ground). The building currently includes a multi-level structure for departures and arrivals and includes 32 gates, labeled B1–B32. The concourse has 26 air bridge gates (gates B7-B32) and 5 boarding lounges (B1-B6) for 14 remote stands that are for Airbus A340 and Boeing 777 aircraft only. For transit passengers, the concourse has 3 transfer areas and 62 transfer desks. The concourse also includes the Emirates first and business class lounges, and the Marhaba lounge. The first class lounge has a capacity of 1,800 passengers, and the business class lounge has a capacity of 3,000 passengers. The Marhaba lounge, the smallest lounge at the concourse, has a capacity of 300 passengers at a time. The retail area at the concourse includes 18 restaurants within the food court. There are also hotels in the concourse: a 5-star hotel and a 4-star hotel. There is a direct connection through passenger walkways to the Sheikh Rashid Terminal (Concourse C), located at the control tower structure. There is also a 300-room hotel and health club, including both five- and four-star rooms. Concourse B includes five aerobridges that are capable of handling the new Airbus A380. Emirates Airline continues to maintain a presence in Concourse C, operating 12 gates at the concourse as well as the Emirates First Class and Business Class Lounges. Concourse C Concourse C, a part of Terminal 3, was opened in 2000 and used to be the largest concourse at Dubai International Airport before Concourse B in Terminal 3 opened.
It incorporates 50 gates, including 28 air bridges at gates (C1-C23, except for C12a, C15, and C15a) and 22 remote gates located at a lower level of the terminal at gates C29-C50. The gates are labelled C1–C50. The concourse includes over 17 food and beverage cafes and restaurants, with the food court being located on the Departures Level. Also located in the concourse is a 5-star hotel and a duty-free shopping facility. Other facilities include prayer rooms and a medical center. Concourse C became part of Terminal 3 in 2016 after Concourse D opened. Al Majlis VIP Pavilion and Dubai Executive Flight Terminal The Al Majlis VIP pavilion was exclusively built for the Dubai Royal Air Wing and opened on 1 July 2008. The entire facility is a terminal and includes a Royal Majlis and an antenna farm. It also includes eight aircraft hangars with a total built up area of and maintenance hangars for Boeing 747s and Airbus A380s, and a gatehouse for VIP service. In 2010 there were 47,213 customers, 13,162 movements and in 2009, there were a total of 43,968 customers and 14,896 movements. Executive Flight Services (EFS) caters to those passengers of high class or special importance who travel through Dubai International Airport. It is the largest dedicated business aviation terminal of its kind in the Middle East. It is located at the Dubai Airport Free Zone close to Dubai International's Terminal 2. It only caters to private flights exclusive to the terminal. Airlines operating from the terminal are expected to maintain a lounge. In 2010, EFS handled 7,889 aircraft movements and 25,177 passengers. The center itself is located close to Terminal 2 and includes a two-story main building, a hangar, a ramp area for aircraft parking and a special VIP car park for long term parking. The center also has its own immigration and customs sections, its own Dubai Duty-Free outlet, a fully equipped business and conference center, eight luxury private lounges, and a limousine service between aircraft and the terminal. The ramp area of the terminal can accommodate up to 22 small-sized private jets, between 8 and 12 medium-sized jets, or up to four large-sized jets such as a Boeing Business Jet (BBJ), the Boeing 727 or the Airbus A319. The facility makes EFC the largest dedicated business aviation terminal in the Middle East. Cargo Mega Terminal The cargo village at Dubai International Airport is one of the world's largest and most central cargo hubs, with most of the cargo for Asia and Africa coming through the facility. Forecasts in 2004 for cargo growth predicted that additional major cargo handling facilities were needed to satisfy demands. Plans were put in place to construct the first stage of the cargo mega terminal, which by 2018 will have the ability to handle three million tons of freight. Phase 1 of the cargo mega terminal was completed by 2004 and the next phase of expansion was scheduled for completion in late 2007. Presently the airport has a cargo capacity of 2.5 million tonnes, and will be expanded to handle 3 million. Flower centre Dubai Airport has constructed a flower center to handle flower imports and exports, as Dubai is a major hub for the import and export of flowers, and the airport requires a specialist facility since these products need special conditions. The flower center's first phase was completed in 2004 at a cost of $50 million. The center when completed and functioning will have a floor area of approximately including different export chambers and offices. 
The handling capacity of the center is expected to be more than 300,000 tonnes of product throughput per annum. The entire facility (with the exception of the offices) will be maintained at an ambient temperature of just . Runways Dubai Airport has two closely spaced parallel runways, 12R/30L is , 12L/30R is . The gap between the centrelines of the two runways is . The runways are equipped with four sets of ILS to guide landing aircraft safely under very poor weather conditions. The runways were expanded to accommodate the Airbus A380 which came into service in 2007. In 2009, it was announced that the airport installed a Category III landing system, allowing planes to land in low-visibility conditions, such as fog. This system was the first of its kind in the United Arab Emirates. In 2013 Dubai Airports announced an 80-day runway refurbishment program which started on 1 May 2014 and was completed on 21 July 2014. The northern runway was resurfaced while lighting upgrades and additional taxiways were built on the southern runway to help boost its capacity. The southern runway was closed from 1 to 31 May 2014, while the northern runway was closed from 31 May to 20 July 2014. Due to extra congestion on one runway, all freighter, charter and general aviation flights were diverted to Al Maktoum International Airport. Flights at DXB were reduced by 26% and 14 airlines moved to Al Maktoum International Airport whilst the runways works were being done. Emirates cut 5,000 flights and grounded over 20 aircraft during the period. Dubai Airport plans to close the southern runway (12R/30L) for complete resurfacing and replacement of the airfield lighting and supporting infrastructure. This will be done during a 45-day period from 16 April 2019 to 30 May 2019. This upgrade will boost safety, service and capacity levels at DXB. Airlines will be required to reduce flight operations at DXB due to single runway operations. Accommodating the Airbus A380 With Dubai-based Emirates being one of the launch customers for the Airbus A380 and also the largest customer, Dubai Airport needed to expand its existing facilities to accommodate the very large aircraft. The Department of Civil Aviation spent $120 million in upgrading both of its terminals and airport infrastructure, including enlarged gate holdrooms, new finger piers, an enlarged runway, new airbridges and extended baggage belt carousels from the normal . Dubai Airport also invested $3.5 billion into a new Concourse A, exclusively for handling Emirates A380s. With the changes made, the airport does not expect embarking and disembarking passengers and baggage from the A380 to take longer than it does for Boeing 747-400s, which carry fewer passengers. On 16 July 2008, Dubai Airport unveiled the first of two specially-built gates capable of handling the aircraft. Costing $10 million, the gates will enable passengers to get on the upper cabin of the new 555-seater aircraft directly from the gate hold rooms. The hold rooms themselves have been enlarged to cater for the larger number of passengers flying the A380s. In addition to the two gates at Terminal 1, five more A380-capable gates were opened at concourse B on 14 October 2008. Concourse A opened on 2 January 2013. Labor controversy Workers building a new terminal at Dubai International Airport went on a sympathy strike in March 2006. Another strike took place in October 2007. Four thousand strikers were arrested. 
Most of them were released some days later, and those who were not local were then deported from Dubai. Airlines and destinations Passenger Numerous airlines offer regular scheduled and charter passenger services to and from Dubai International.
Technology
Asia
null
154744
https://en.wikipedia.org/wiki/Ilmenite
Ilmenite
Ilmenite is a titanium-iron oxide mineral with the idealized formula FeTiO3. It is a weakly magnetic black or steel-gray solid. Ilmenite is the most important ore of titanium and the main source of titanium dioxide (TiO2), which is used in paints, printing inks, fabrics, plastics, paper, sunscreen, food and cosmetics. Structure and properties Ilmenite is a heavy (specific gravity 4.7), moderately hard (Mohs hardness 5.6 to 6), opaque black mineral with a submetallic luster. It is almost always massive, with thick tabular crystals being quite rare. It shows no discernible cleavage, breaking instead with a conchoidal to uneven fracture. Ilmenite crystallizes in the trigonal system with space group R3̄. The ilmenite crystal structure is an ordered derivative of the corundum structure; in corundum all cations are identical, but in ilmenite Fe2+ and Ti4+ ions occupy alternating layers perpendicular to the trigonal c axis. Pure ilmenite is paramagnetic (showing only very weak attraction to a magnet), but ilmenite forms solid solutions with hematite that are weakly ferromagnetic and so are noticeably attracted to a magnet. Natural deposits of ilmenite usually contain intergrown or exsolved magnetite that also contributes to its ferromagnetism. Ilmenite is distinguished from hematite by its less intensely black color, duller appearance and black streak, and from magnetite by its weaker magnetism. Discovery In 1791 William Gregor discovered a deposit of black sand in a stream that runs through the valley just south of the village of Manaccan (Cornwall), and identified for the first time titanium as one of the constituents of the main mineral in the sand. Gregor named this mineral manaccanite. The same mineral was later found in the Ilmensky Mountains, near Miass, Russia, and named ilmenite. Mineral chemistry Pure ilmenite has the composition FeTiO3. However, ilmenite most often contains appreciable quantities of magnesium and manganese, and up to 6 wt% of hematite (Fe2O3) in solid solution in the crystal structure; the full chemical formula can thus be expressed as (Fe,Mg,Mn)TiO3. Ilmenite forms a solid solution with geikielite (MgTiO3) and pyrophanite (MnTiO3), which are the magnesian and manganiferous end-members of the solid solution series. Although ilmenite is typically close to the ideal composition, with minor mole percentages of Mn and Mg, the ilmenites of kimberlites usually contain substantial amounts of the geikielite component, and in some highly differentiated felsic rocks ilmenites may contain significant amounts of the pyrophanite component. At temperatures above , there is a complete solid solution between ilmenite and hematite. There is a miscibility gap at lower temperatures, resulting in a coexistence of these two minerals in rocks but no solid solution. This coexistence may result in exsolution lamellae in cooled ilmenites when more iron is present in the system than can be homogeneously accommodated in the crystal lattice. Ilmenite containing 6 to 13 percent hematite is sometimes described as ferrian ilmenite. Ilmenite alters or weathers to form the pseudo-mineral leucoxene, a fine-grained yellowish to grayish or brownish material enriched to 70% or more TiO2. Leucoxene is an important source of titanium in heavy mineral sands ore deposits. Paragenesis Ilmenite is a common accessory mineral in metamorphic and igneous rocks. It is found in large concentrations in layered intrusions, where it forms as part of a cumulate layer within the intrusion. 
Ilmenite generally occurs in these cumulates together with orthopyroxene or in combination with plagioclase and apatite (nelsonite). Magnesian ilmenite is formed in kimberlites as part of the MARID association of minerals (mica-amphibole-rutile-ilmenite-diopside) assemblage of glimmerite xenoliths. Manganiferous ilmenite is found in granitic rocks and also in carbonatite intrusions where it may also contain anomalously high amounts of niobium. Many mafic igneous rocks contain grains of intergrown magnetite and ilmenite, formed by the oxidation of ulvospinel. Processing and consumption Most ilmenite is mined for titanium dioxide production. Ilmenite and titanium dioxide are used in the production of titanium metal. Titanium dioxide is most used as a white pigment and the major consuming industries for TiO2 pigments are paints and surface coatings, plastics, and paper and paperboard. Per capita consumption of TiO2 in China is about 1.1 kilograms per year, compared with 2.7 kilograms for Western Europe and the United States. Titanium is the ninth most abundant element on Earth and represents about 0.6 percent of the Earth's crust. Ilmenite is commonly processed to obtain a titanium concentrate, which is called "synthetic rutile" if it contains more than 90 percent TiO2, or more generally "titaniferous slags" if it has a lower TiO2 content. More than 80 percent of the estimated global production of titanium concentrate is obtained from the processing of ilmenite, while 13 percent is obtained from titaniferous slags and 5 percent from rutile. Ilmenite can be converted into pigment grade titanium dioxide via either the sulfate process or the chloride process. Ilmenite can also be improved and purified to titanium dioxide in the form of rutile using the Becher process. Ilmenite ores can also be converted to liquid iron and a titanium-rich slag using a smelting process. Ilmenite ore is used as a flux by steelmakers to line blast furnace hearth refractory. Ilmenite can be used to produce ferrotitanium via an aluminothermic reduction. Feedstock production Most ilmenite is recovered from heavy mineral sands ore deposits, where the mineral is concentrated as a placer deposit and weathering reduces its iron content, increasing the percentage of titanium. However, ilmenite can also be recovered from "hard rock" titanium ore sources, such as ultramafic to mafic layered intrusions or anorthosite massifs. The ilmenite in layered intrusions is sometimes abundant, but it contains considerable intergrowths of magnetite that reduce its ore grade. Ilmenite from anorthosite massifs often contain large amounts of calcium or magnesium that render it unsuitable for the chloride process. The proven reserves of ilmenite and rutile ore are estimated at between 423 and 600 million tonnes titanium dioxide. The largest ilmenite deposits are in South Africa, India, the United States, Canada, Norway, Australia, Ukraine, Russia and Kazakhstan. Additional deposits are found in Bangladesh, Chile, Mexico and New Zealand. Australia was the world's largest ilmenite ore producer in 2011, with about 1.3 million tonnes of production, followed by South Africa, Canada, Mozambique, India, China, Vietnam, Ukraine, Norway, Madagascar and United States. The top four ilmenite and rutile feedstock producers in 2010 were Rio Tinto Group, Iluka Resources, Exxaro and Kenmare Resources, which collectively accounted for more than 60% of world's supplies. 
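As a rough arithmetic aside on the TiO2 grades quoted in this section: the theoretical TiO2 content of stoichiometrically pure ilmenite follows directly from standard atomic masses, which is why even a clean ilmenite concentrate falls well short of the more than 90 percent TiO2 required for synthetic rutile. The short Python sketch below is an illustration added here for clarity and is not part of the original article.

```python
# Illustrative only: theoretical TiO2 mass fraction of pure ilmenite (FeTiO3),
# computed from standard atomic masses in g/mol.
ATOMIC_MASS = {"Fe": 55.845, "Ti": 47.867, "O": 15.999}

def molar_mass(formula):
    """Sum atomic masses weighted by the number of atoms of each element."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

m_ilmenite = molar_mass({"Fe": 1, "Ti": 1, "O": 3})  # FeTiO3
m_tio2 = molar_mass({"Ti": 1, "O": 2})               # TiO2

# Each mole of FeTiO3 carries exactly one mole of Ti, i.e. one TiO2 equivalent.
tio2_fraction = m_tio2 / m_ilmenite
print(f"Theoretical TiO2 content of pure ilmenite: {tio2_fraction:.1%}")
# Prints roughly 52.6%, far below the >90% of synthetic rutile, which is why
# upgrading steps such as smelting, the Becher process or leaching are used.
```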
The world's two largest open cast ilmenite mines are: The Tellnes mine located in Sokndal, Norway, and run by Titania AS (owned by Kronos Worldwide Inc.) with 0.55 Mtpa capacity and 57 Mt contained reserves. The Rio Tinto Group's Lac Tio mine located near Havre Saint-Pierre, Quebec in Canada with a 3 Mtpa capacity and 52 Mt reserves. Major mineral sands based ilmenite mining operations include: Richards Bay Minerals in South Africa, majority-owned by the Rio Tinto Group. Kenmare Resources' Moma mine in Mozambique. Iluka Resources' mining operations in Australia including Murray Basin, Eneabba and Capel. The Kerala Minerals & Metals Ltd (KMML), Indian Rare Earths (IRE), VV Mineral mines in India. TiZir Ltd.'s Grande Cote mine in Senegal QIT Madagascar Minerals mine, majority-owned by the Rio Tinto Group, which began production in 2009 and is expected to produce 0.75 Mtpa of ilmenite, potentially expanding to 2 Mtpa in future phases. Attractive major potential ilmenite deposits include: The Karhujupukka magnetite-ilmenite deposit in Kolari, northern Finland with around 5 Mt reserves and ore containing about 6.2% titanium. The Balla Balla magnetite-iron-titanium-vanadium ore deposit in the Pilbara of Western Australia, which contains 456 million tonnes of cumulate ore horizon grading 45% , 13.7% and 0.64% , one of the richest magnetite-ilmenite ore bodies in Australia The Coburn, WIM 50, Douglas, Pooncarie mineral sands deposits in Australia. The Magpie titano-magnetite (iron-titanium-vanadium-chrome) deposits in eastern Quebec of Canada with about 1 billion tonnes containing about 43% Fe, 12% TiO2, 0.4% V2O5, and 2.2% Cr2O3. The Longnose deposit in Northeast Minnesota is considered to be "the largest and richest ilmenite deposit in North America." In 2020, China has by far the highest titanium mining activity. About 35 percent of the world’s ilmenite is mined in China, representing 33 percent of total titanium mineral mining (including ilmenite and rutile). South Africa and Mozambique are also important contributors, representing 13 percent and 12 percent of worldwide ilmenite mining, respectively. Australia represents 6 percent of the total ilmenite mining and 31 percent of rutile mining. Sierra Leone and Ukraine are also big contributors to rutile mining. China is the biggest producer of titanium dioxide, followed by the United States and Germany. China is also the leader in the production of titanium metal, but Japan, the Russian Federation and Kazakhstan have emerged as important contributors to this field. Patenting activities Patenting activity related to titanium dioxide production from ilmenite is rapidly increasing. Between 2002 and 2022, there have been 459 patent families that describe the production of titanium dioxide from ilmenite, and this number is growing rapidly. The majority of these patents describe pre-treatment processes, such as using smelting and magnetic separation to increase titanium concentration in low-grade ores, leading to titanium concentrates or slags. Other patents describe processes to obtain titanium dioxide, either by a direct hydrometallurgical process or through two industrially exploited processes, the sulfate process and the chloride process. Acid leaching might be used either as a pre-treatment or as part of a hydrometallurgical process to directly obtain titanium dioxide or synthetic rutile (>90 percent titanium dioxide, TiO2). 
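The overall chemistry behind the two industrial routes referred to above is often summarized with idealized equations such as the following; these are conventional textbook simplifications added for illustration, not formulas taken from the article.

```latex
% Sulfate route (simplified): digestion of ilmenite in concentrated sulfuric acid,
% followed by hydrolysis and calcination of the hydrated oxide to pigment TiO2.
\mathrm{FeTiO_3 + 2\,H_2SO_4 \rightarrow TiOSO_4 + FeSO_4 + 2\,H_2O}
\mathrm{TiOSO_4 + (n+1)\,H_2O \rightarrow TiO_2\cdot n\,H_2O + H_2SO_4}

% Chloride route (simplified): carbochlorination to titanium tetrachloride,
% then oxidation of TiCl4 back to TiO2 with recycling of the chlorine.
\mathrm{2\,FeTiO_3 + 7\,Cl_2 + 6\,C \rightarrow 2\,TiCl_4 + 2\,FeCl_3 + 6\,CO}
\mathrm{TiCl_4 + O_2 \rightarrow TiO_2 + 2\,Cl_2}
```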
The sulfate process represents 40 percent of the world's titanium dioxide production and is protected in 23 percent of patent families. The chloride process is mentioned in only 8 percent of patent families, although it provides 60 percent of the worldwide industrial production of titanium dioxide. Key contributors to patents on the production of titanium dioxide are companies from China, Australia and the United States, reflecting the major contribution of these countries to industrial production. The Chinese companies Pangang and Lomon Billions Groups are the main contributors and hold diversified patent portfolios covering both pre-treatment and the processes leading to a final product. In comparison, patenting activity related to titanium metal production from ilmenite has remained stable. Between 2002 and 2022, there were 92 patent families describing the production of titanium metal from ilmenite, and this number has remained quite steady. These patents describe the production of titanium metal starting from mineral ores, such as ilmenite, and from titanium dioxide (TiO2) and titanium tetrachloride (TiCl4), a chemical obtained as an intermediate in the chloride process. The starting materials are purified if needed, and then converted to titanium metal by chemical reduction. Processes mainly differ in the reducing agent used to transform the starting material into titanium metal: magnesium is the most frequently cited reducing agent and the most widely exploited in industrial production. Key players in the field are Japanese companies, in particular Toho Titanium and Osaka Titanium Technologies, both focusing on reduction using magnesium. Pangang also contributes to titanium metal production and holds patents describing reduction by molten salt electrolysis. Lunar ilmenite Ilmenite has been found in lunar samples, particularly in the high-Ti lunar mare basalts common at the Apollo 11 and Apollo 17 sites, and on average constitutes up to 5% of lunar meteorites. Ilmenite has been targeted for water and oxygen extraction by in-situ resource utilization (ISRU) because it can be reduced by relatively simple reactions with CO or H2.
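The reduction referred to above for lunar resource utilization is usually written as hydrogen (or carbon monoxide) reduction of ilmenite, with the product water electrolysed to recover oxygen; the idealized reactions below are added for illustration and are not taken from the article.

```latex
% Hydrogen reduction of ilmenite (proposed lunar ISRU route):
\mathrm{FeTiO_3 + H_2 \rightarrow Fe + TiO_2 + H_2O}
% The product water is then electrolysed, recycling the hydrogen and freeing oxygen:
\mathrm{2\,H_2O \rightarrow 2\,H_2 + O_2}
% An analogous route uses carbon monoxide as the reducing agent:
\mathrm{FeTiO_3 + CO \rightarrow Fe + TiO_2 + CO_2}
```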
Physical sciences
Minerals
Earth science
154750
https://en.wikipedia.org/wiki/Zirconium%20dioxide
Zirconium dioxide
Zirconium dioxide (ZrO2), sometimes known as zirconia (not to be confused with zirconium silicate or zircon), is a white crystalline oxide of zirconium. Its most common naturally occurring form, with a monoclinic crystalline structure, is the mineral baddeleyite. A dopant-stabilized cubic form, cubic zirconia, is synthesized in various colours for use as a gemstone and a diamond simulant. Production, chemical properties, occurrence Zirconia is produced by calcining zirconium compounds, exploiting its high thermostability. Structure Three phases are known: monoclinic below 1170 °C, tetragonal between 1170 °C and 2370 °C, and cubic above 2370 °C. The trend is toward higher symmetry at higher temperatures, as is usually the case. Adding a small percentage of calcium or yttrium oxide stabilizes the cubic phase. The very rare mineral tazheranite is cubic. Unlike titanium dioxide (TiO2), which features six-coordinated titanium in all phases, monoclinic zirconia consists of seven-coordinated zirconium centres. This difference is attributed to the larger size of the zirconium atom relative to the titanium atom. Chemical reactions Zirconia is chemically unreactive. It is slowly attacked by concentrated hydrofluoric acid and sulfuric acid. When heated with carbon, it converts to zirconium carbide. When heated with carbon in the presence of chlorine, it converts to zirconium(IV) chloride. This conversion is the basis for the purification of zirconium metal and is analogous to the Kroll process. Engineering properties Zirconium dioxide is one of the most studied ceramic materials. It adopts a monoclinic crystal structure at room temperature and transitions to tetragonal and cubic structures at higher temperatures. The volume change caused by the transitions between the tetragonal, monoclinic and cubic phases induces large stresses, causing pure zirconia to crack upon cooling from high temperatures. When zirconia is blended with some other oxides, the tetragonal and/or cubic phases are stabilized. Effective dopants include magnesium oxide (MgO), yttrium oxide (Y2O3, yttria), calcium oxide (CaO), and cerium(III) oxide (Ce2O3). Zirconia is often more useful in its phase-stabilized state. Upon heating, zirconia undergoes disruptive phase changes; by adding small percentages of yttria, these phase changes are eliminated, and the resulting material has superior thermal, mechanical, and electrical properties. In some cases, the tetragonal phase can be metastable. If sufficient quantities of the metastable tetragonal phase are present, then an applied stress, magnified by the stress concentration at a crack tip, can cause the tetragonal phase to convert to monoclinic, with the associated volume expansion. This phase transformation can then put the crack into compression, retarding its growth and enhancing the fracture toughness. This mechanism, known as transformation toughening, significantly extends the reliability and lifetime of products made with stabilized zirconia. The band gap depends on the phase (cubic, tetragonal, monoclinic, or amorphous) and on the preparation method, with typical estimates of 5–7 eV. A special case of zirconia is tetragonal zirconia polycrystal (TZP), which denotes polycrystalline zirconia composed only of the metastable tetragonal phase. Uses The main use of zirconia is in the production of hard ceramics, such as in dentistry, with other uses including as a protective coating on particles of titanium dioxide pigments, as a refractory material, and in insulation, abrasives, and enamels. 
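To put the 5–7 eV band gap quoted under engineering properties into perspective, the corresponding optical absorption edge can be estimated from the photon-energy relation λ = hc/E. The Python sketch below is an illustrative calculation added here, not material from the article; it shows that the edge lies deep in the ultraviolet, consistent with undoped zirconia absorbing little visible light.

```python
# Illustrative only: convert a band-gap energy in eV to the photon wavelength
# at the corresponding absorption edge, using lambda = h*c / E.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def absorption_edge_nm(band_gap_ev):
    """Wavelength in nm of a photon whose energy equals the band gap."""
    return HC_EV_NM / band_gap_ev

for gap_ev in (5.0, 6.0, 7.0):  # typical reported range for zirconia
    print(f"Eg = {gap_ev:.1f} eV -> absorption edge ~ {absorption_edge_nm(gap_ev):.0f} nm")
# Roughly 248 nm down to 177 nm, i.e. well into the UV.
```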
Stabilized zirconia is used in oxygen sensors and fuel cell membranes because it allows oxygen ions to move freely through the crystal structure at high temperatures. This high ionic conductivity (and low electronic conductivity) makes it one of the most useful electroceramics. Zirconium dioxide is also used as the solid electrolyte in electrochromic devices. Zirconia is a precursor to the electroceramic lead zirconate titanate (PZT), a high-κ dielectric found in myriad components. Niche uses The very low thermal conductivity of the cubic phase of zirconia has also led to its use as a thermal barrier coating (TBC) in jet and diesel engines, allowing operation at higher temperatures. Thermodynamically, the higher the operating temperature of an engine, the greater the possible efficiency. Another low-thermal-conductivity use is as ceramic fiber insulation for crystal growth furnaces, fuel-cell stacks, and infrared heating systems. The material is also used in dentistry in the manufacture of subframes for dental restorations such as crowns and bridges, which are then veneered with a conventional feldspathic porcelain for aesthetic reasons, or of strong, extremely durable dental prostheses constructed entirely from monolithic zirconia, with limited but constantly improving aesthetics. Zirconia stabilized with yttria (yttrium oxide), known as yttria-stabilized zirconia, can be used as a strong base material in some full ceramic crown restorations. Transformation-toughened zirconia is used to make ceramic knives; because of its hardness, ceramic-edged cutlery stays sharp longer than steel-edged products. Due to its infusibility and brilliant luminosity when incandescent, it was used as an ingredient of sticks for limelight. Zirconia has been proposed for electrolyzing the carbon dioxide atmosphere of Mars into carbon monoxide and oxygen, providing both a fuel and an oxidizer that could be used as a store of chemical energy for surface transportation on Mars. Carbon monoxide/oxygen engines have been suggested for early surface transportation use, as both carbon monoxide and oxygen can be straightforwardly produced by zirconia electrolysis without requiring use of any of the Martian water resources to obtain hydrogen, which would be needed for the production of methane or any hydrogen-based fuels. Zirconia can be used as a photocatalyst, since its high band gap (~5 eV) allows the generation of high-energy electrons and holes. Some studies have demonstrated the activity of doped zirconia (doped in order to increase visible light absorption) in degrading organic compounds and reducing Cr(VI) in wastewater. Zirconia is also a candidate high-κ dielectric material with potential applications as an insulator in transistors. Zirconia is also employed in the deposition of optical coatings; it is a high-index material usable from the near-UV to the mid-IR, due to its low absorption in this spectral region. In such applications, it is typically deposited by physical vapour deposition (PVD). In jewelry making, some watch cases are advertised as being "black zirconium oxide". In 2015 Omega released a fully ceramic watch named "The Dark Side of The Moon", with a ceramic case, bezel, pushers, and clasp, advertising it as four times harder than stainless steel and therefore much more resistant to scratches in everyday use. In gas tungsten arc welding, tungsten electrodes containing 1% zirconium oxide (zirconia) instead of 2% thorium have good arc starting and current capacity, and are not radioactive. 
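As a sketch of how the oxygen-ion conductivity described above is exploited in sensing: a stabilized-zirconia oxygen sensor behaves as an electrochemical concentration cell whose open-circuit voltage follows the Nernst equation with four electrons transferred per O2 molecule. The snippet below is a minimal illustration under that standard assumption; the temperature and partial pressures are hypothetical values, not figures from the article.

```python
import math

# Illustrative only: open-circuit EMF of a stabilized-zirconia oxygen sensor,
# modelled as a Nernst concentration cell with n = 4 electrons per O2 molecule.
R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def sensor_emf_volts(temperature_k, p_o2_reference, p_o2_sample):
    """Nernst voltage between the reference gas and the sample gas (same pressure units)."""
    return (R * temperature_k) / (4 * F) * math.log(p_o2_reference / p_o2_sample)

# Hypothetical operating point: 1000 K, air as the reference (~0.209 atm O2),
# and an oxygen-poor sample gas at 1e-3 atm O2.
emf = sensor_emf_volts(1000.0, 0.209, 1e-3)
print(f"Sensor EMF ~ {emf * 1000:.0f} mV")  # about 115 mV for these values
```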
Diamond simulant Single crystals of the cubic phase of zirconia are commonly used as diamond simulant in jewellery. Like diamond, cubic zirconia has a cubic crystal structure and a high index of refraction. Visually discerning a good quality cubic zirconia gem from a diamond is difficult, and most jewellers will have a thermal conductivity tester to identify cubic zirconia by its low thermal conductivity (diamond is a very good thermal conductor). This state of zirconia is commonly called cubic zirconia, CZ, or zircon by jewellers, but the last name is not chemically accurate. Zircon is actually the mineral name for naturally occurring zirconium(IV) silicate ().
Physical sciences
Oxide salts
Chemistry
155056
https://en.wikipedia.org/wiki/Dehydration
Dehydration
In physiology, dehydration is a lack of total body water that disrupts metabolic processes. It occurs when free water loss exceeds free water intake, usually because of excessive sweating, disease, or a lack of access to water. Mild dehydration can also be caused by immersion diuresis, which may increase the risk of decompression sickness in divers. Most people can tolerate a 3-4% decrease in total body water without difficulty or adverse health effects. A 5-8% decrease can cause fatigue and dizziness. Loss of over 10% of total body water can cause physical and mental deterioration, accompanied by severe thirst. Death occurs with a loss of between 15 and 25% of body water. Mild dehydration usually resolves with oral rehydration, but severe cases may need intravenous fluids. Dehydration can cause hypernatremia (high levels of sodium ions in the blood), and is distinct from hypovolemia (loss of blood volume, particularly blood plasma). Chronic dehydration can cause kidney stones as well as the development of chronic kidney disease. Signs and symptoms The hallmarks of dehydration include thirst and neurological changes such as headaches, general discomfort, loss of appetite, nausea, decreased urine volume (unless polyuria is the cause of dehydration), confusion, unexplained tiredness, purple fingernails, and seizures. The symptoms of dehydration become increasingly severe with greater total body water loss. A body water loss of 1-2%, considered mild dehydration, has been shown to impair cognitive performance. Although the body's thirst sensation diminishes with age in people over 50, one study found no difference in fluid intake between young and old people. Many older people have symptoms of dehydration, the most common being fatigue. Dehydration contributes to morbidity in the elderly population, especially during conditions that promote insensible free water losses, such as hot weather. Cause Risk factors for dehydration include, but are not limited to, exertion in hot and humid weather, habitation at high altitude and endurance athletics; vulnerable groups include elderly adults, infants, children and people living with chronic illnesses. Dehydration can also be a side effect of many different types of drugs and medications. In the elderly, a blunted response to thirst or an inadequate ability to access free water in the face of excess free water losses (especially hyperglycemia-related losses) appear to be the main causes of dehydration. Excess free water or hypotonic water can leave the body in two ways: sensible loss, such as osmotic diuresis, sweating, vomiting and diarrhea; and insensible water loss, occurring mainly through the skin and respiratory tract. In humans, dehydration can be caused by a wide range of diseases and states that impair water homeostasis in the body. These occur primarily through either impaired thirst or water access, or through sodium excess. Mechanism Water makes up approximately 60% of the human body by mass. Within the body, water is classified as intracellular fluid or extracellular fluid. Intracellular fluid is the water contained within the cells, and accounts for roughly two-thirds of total body water (about 40% of body mass). Fluid inside the cells has high concentrations of potassium, magnesium, phosphate, and proteins. Extracellular fluid is all fluid outside of the cells, including blood plasma and interstitial fluid, and makes up the remaining one-third of total body water (about 20% of body mass). The most common ions in extracellular fluid include sodium, chloride, and bicarbonate. 
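To make the percentages above concrete, the sketch below converts the quoted body-water fraction and tolerance thresholds into litres for a hypothetical 70 kg adult; it is an illustrative calculation added here, not data from the article.

```python
# Illustrative only: body water and dehydration thresholds for a hypothetical
# 70 kg adult, using the fractions quoted in the text (1 kg of water ~ 1 L).
body_mass_kg = 70.0
total_body_water_l = 0.60 * body_mass_kg  # water is roughly 60% of body mass

print(f"Total body water ~ {total_body_water_l:.0f} L")

# Approximate water deficits corresponding to the thresholds given above.
thresholds = [
    (4,  "upper end of the 3-4% range most people tolerate without ill effects"),
    (8,  "upper end of the 5-8% range associated with fatigue and dizziness"),
    (10, "level above which physical and mental deterioration appear"),
    (15, "lower bound of the 15-25% range at which death occurs"),
]
for percent, note in thresholds:
    deficit_l = percent / 100 * total_body_water_l
    print(f"{percent:>2}% of total body water ~ {deficit_l:.1f} L lost - {note}")
```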
The concentration of dissolved molecules and ions in the fluid is described as Osmolarity and is measured in osmoles per liter (Osm/L). When the body experiences a free water deficit, the concentration of solutes is increased. This leads to a higher serum osmolarity. When serum osmolarity is elevated, this is detected by osmoreceptors in the hypothalamus. These receptors trigger the release of antidiuretic hormone (ADH). ADH resists dehydration by increasing water absorption in the kidneys and constricting blood vessels. It acts on the V2 receptors in the cells of the collecting tubule of the nephron to increase expression of aquaporin. In more extreme cases of low blood pressure, the hypothalamus releases higher amounts of ADH which also acts on V1 receptors. These receptors cause contractions in the peripheral vascular smooth muscle. This increases systemic vascular resistance and raises blood pressure. Diagnosis Definition Dehydration occurs when water intake does not replace free water lost due to normal physiologic processes, including breathing, urination, perspiration, or other causes, including diarrhea, and vomiting. Dehydration can be life-threatening when severe and lead to seizures or respiratory arrest, and also carries the risk of osmotic cerebral edema if rehydration is overly rapid. The term "dehydration" has sometimes been used incorrectly as a proxy for the separate, related condition of hypovolemia, which specifically refers to a decrease in volume of blood plasma. The two are regulated through independent mechanisms in humans; the distinction is important in guiding treatment. Physical examination Common exam findings of dehydration include dry mucous membranes, dry axillae, increased capillary refill time, sunken eyes, and poor skin turgor. More extreme cases of dehydration can lead to orthostatic hypotension, dizziness, weakness, and altered mental status. Depending on the underlying cause of dehydration, other symptoms may be present as well. Excessive sweating from exercise may be associated with muscle cramps. Patients with gastrointestinal water loss from vomiting or diarrhea may also have fever or other systemic signs of infection. The skin turgor test can be used to support the diagnosis of dehydration. The skin turgor test is conducted by pinching skin on the patient's body, in a location such as the forearm or the back of the hand, and watching to see how quickly it returns to its normal position. The skin turgor test can be unreliable in patients who have reduced skin elasticity, such as the elderly. Laboratory tests While there is no single gold standard test to diagnose dehydration, evidence can be seen in multiple laboratory tests involving blood and urine. Serum osmolarity above 295 mOsm/kg is typically seen in dehydration due to free water loss. A urinalysis, which is a test that performs chemical and microscopic analysis of urine, may find darker color or foul odor with severe dehydration. Urinary sodium also provides information about the type of dehydration. For hyponatremic dehydration, such as from vomiting or diarrhea, urinary sodium will be less than 10mmol/L due to increased sodium retention by the kidneys in an effort to conserve water. In dehydrated patients with sodium loss due to diuretics or renal dysfunction, urinary sodium may be elevated above 20 mmol/L. Patients may also have elevated serum levels of blood urea nitrogen (BUN) and creatinine. 
Both of these molecules are normally excreted by the kidney, but when the circulating blood volume is low the kidney can become injured, decreasing kidney function and resulting in elevated BUN and creatinine in the serum. Prevention For routine activities, thirst is normally an adequate guide to maintaining proper hydration. Minimum water intake varies individually depending on weight, energy expenditure, age, sex, physical activity, environment, diet, and genetics. With exercise, exposure to hot environments, or a decreased thirst response, additional water may be required. In competitive athletes, drinking to thirst optimizes performance and safety, despite weight loss, and as of 2010 there was no scientific study showing that it is beneficial to stay ahead of thirst and maintain weight during exercise. In warm or humid weather, or during heavy exertion, water loss can increase markedly, because humans have a large and widely variable capacity for sweating. Whole-body sweat losses in men can exceed 2 L/h during competitive sport, with rates of 3–4 L/h observed during short-duration, high-intensity exercise in the heat. When such large amounts of water are lost through perspiration, electrolytes, especially sodium, are also lost. In most athletes exercising and sweating for 4–5 hours with a sweat sodium concentration of less than 50 mmol/L, the total sodium lost is less than 10% of total body stores (total stores are approximately 2,500 mmol, or 58 g, for a 70-kg person). These losses appear to be well tolerated by most people. The inclusion of sodium in fluid replacement drinks has some theoretical benefits and poses little or no risk, so long as these fluids are hypotonic (since the mainstay of dehydration prevention is the replacement of free water losses). Treatment The most effective treatment for minor dehydration is widely considered to be drinking water and reducing fluid loss. Plain water restores only the volume of the blood plasma, inhibiting the thirst mechanism before solute levels can be replenished. Consumption of solid foods can also contribute to hydration; it is estimated that approximately 22% of American water intake comes from food. Urine concentration and frequency return to normal as dehydration resolves. In some cases, correction of a dehydrated state is accomplished by the replenishment of necessary water and electrolytes, through oral rehydration therapy or fluid replacement by intravenous therapy. As oral rehydration is less painful, non-invasive, inexpensive, and easier to provide, it is the treatment of choice for mild dehydration. Solutions used for intravenous rehydration may be isotonic, hypertonic, or hypotonic, depending on the cause of dehydration and the sodium concentration in the blood. Pure water injected into the veins will cause the breakdown (lysis) of red blood cells (erythrocytes). When fresh water is unavailable (e.g. at sea or in a desert), seawater or drinks with a significant alcohol concentration will worsen dehydration. Urine contains a lower solute concentration than seawater; the kidneys must therefore produce more urine to remove the excess salt than the volume of seawater consumed, causing more water to be lost than was gained. If a person is dehydrated and taken to a medical facility, intravenous fluids can also be used. For severe cases of dehydration where fainting, unconsciousness, or other severely inhibiting symptoms are present (the patient is incapable of standing upright or thinking clearly), emergency attention is required. 
Fluids containing a proper balance of replacement electrolytes are given orally or intravenously with continuing assessment of electrolyte status; complete resolution is normal in all but the most extreme cases. Prognosis The prognosis for dehydration depends on the cause and extent of dehydration. Mild dehydration normally resolves with oral hydration. Chronic dehydration, such as from physically demanding jobs or decreased thirst, can lead to chronic kidney disease. Elderly people with dehydration are at higher risk of confusion, urinary tract infections, falls, and even delayed wound healing. In children with mild to moderate dehydration, oral hydration is adequate for a full recovery.
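Two conventional bedside approximations tie together the laboratory values mentioned in the diagnosis section (serum sodium, glucose and BUN) and the free water deficit that guides rehydration. They are standard clinical rules of thumb rather than formulas given in the article, so the sketch below should be read as an illustration only.

```python
# Illustrative only: two standard clinical approximations related to dehydration.
# Neither formula is taken from the article; both are conventional rules of thumb.

def estimated_serum_osmolality(sodium_mmol_l, glucose_mg_dl, bun_mg_dl):
    """Estimated serum osmolality in mOsm/kg: 2*[Na] + glucose/18 + BUN/2.8."""
    return 2 * sodium_mmol_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8

def free_water_deficit_l(weight_kg, sodium_mmol_l, tbw_fraction=0.6):
    """Approximate free water deficit in litres for hypernatremic dehydration:
    total body water * (serum sodium / 140 - 1)."""
    return tbw_fraction * weight_kg * (sodium_mmol_l / 140 - 1)

# Hypothetical patient: 70 kg, Na 154 mmol/L, glucose 90 mg/dL, BUN 28 mg/dL.
print(f"Estimated osmolality ~ {estimated_serum_osmolality(154, 90, 28):.0f} mOsm/kg")  # ~323
print(f"Free water deficit  ~ {free_water_deficit_l(70, 154):.1f} L")                   # ~4.2
```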
Biology and health sciences
Specific diseases
Health
155117
https://en.wikipedia.org/wiki/Pubic%20hair
Pubic hair
Pubic hair (or pubes , ) is terminal body hair that is found in the genital area and pubic region of adolescent and adult humans. The hair is located on and around the sex organs, and sometimes at the top of the inside of the thighs, even extending down the perineum, and to the anal region. Pubic hair is also found on the scrotum and base of the penile shaft (in males) and on the vulva (in females). Around the pubis bone and the mons pubis that covers it, it is known as a pubic patch, which can be styled. Although fine vellus hair is present in the area during childhood, pubic hair is considered to be the heavier, longer, coarser hair that develops during puberty as an effect of rising levels of hormones: androgens in males and estrogens in females. Pubic hair differs from other hair on the body, and is a secondary sex characteristic. Many cultures regard pubic hair as erotic, and most cultures associate it with the genitals, which people are expected to keep covered at all times. In some cultures, it is the norm for pubic hair to be removed, especially of females; the practice is regarded as part of personal hygiene. In some cultures, the exposure of pubic hair (for example, when wearing a swimsuit) may be regarded as unaesthetic or embarrassing, and is therefore trimmed (or otherwise styled) to avoid it being visible. Development Pubic hair forms in response to the increasing levels of testosterone in both girls and boys. Those hair follicles that are located and stimulated in androgen sensitive areas develop pubic hair. The Tanner scale describes and quantifies the development of pubic hair. Before the onset of puberty, the genital area of both boys and girls has very fine vellus hair (stage 1). At the onset of puberty, the body produces rising levels of the sex hormones, and in response, the skin of the genital area begins to produce thicker and rougher, often curlier, hair with a faster growth rate. The onset of pubic hair development is termed pubarche. In females, pubarche is usually the second sign of puberty after thelarche, though sometimes it happens before thelarche. In males, the first pubic hair appears as a few sparse hairs that are usually thin on the scrotum or at the upper base of the penis (stage 2). Within a year, hairs around the base of the penis are abundant (stage 3). Within 3 to 4 years, hair fills the pubic area (stage 4) and becomes much thicker and darker, and by 5 years extends to the near thighs and upwards on the abdomen toward the umbilicus (stage 5). Other areas of the skin are similarly, though slightly less, sensitive to androgens and androgenic hair typically appears somewhat later. In rough sequence of sensitivity to androgens and appearance of androgenic hair are the armpits (axillae), perianal area, upper lip, preauricular areas (sideburns), periareolar areas (nipples), middle of the chest, neck under the chin, remainder of chest and beard area, limbs and shoulders, back, and buttocks. Although generally considered part of the process of puberty, pubarche is distinct and independent of the process of maturation of the gonads that leads to sexual maturation and fertility. Pubic hair can develop from adrenal androgens alone and can develop even when the ovaries or testes are defective and nonfunctional. There is little, if any, difference in the capacity of male and female bodies to grow hair in response to androgens. Pubic hair and underarm hair can vary in color considerably from the hair of the scalp. 
In most people, it is darker, although it can also be lighter. In most cases it is most similar in color to a person's eyebrows. Hair texture varies from tightly curled to entirely straight, not necessarily correlating to the texture of the scalp hair. People of East Asian heritage tend to have black, wavy pubic hair. Pubic hair patterns can vary by race and ethnicity. Patterns of pubic hair, known as the escutcheon, vary between both the sexes and individuals. On most females, the pubic patch is triangular and lies over the vulva and mons pubis. On many males, the pubic patch tapers upwards to a line of hair pointing towards the navel (see abdominal hair), roughly a more upward-pointing triangle. As with axillary (armpit) hair, pubic hair is associated with a concentration of sebaceous glands in the area. Clinical significance Pubic lice Pubic hair can become infested with pubic lice (also known as crab lice). Adult pubic lice are in length. The pubic hair can usually host up to a dozen on average. Pubic lice are usually found attached to hair in the pubic area but sometimes are found on coarse hair elsewhere on the body (for example, eyebrows, eyelashes, beard, mustache, chest, armpits, etc.). Crab lice attach to pubic hair that is thicker than other body hair because their claws are adapted to the specific diameter of pubic hair. Pubic lice infestations (pthiriasis) are usually spread through sexual contact. The crab louse can travel up to 10 inches on the body. Pubic lice infestation is found worldwide and occurs in all races and ethnic groups and in all economic levels. Pubic lice are usually spread through sexual contact and are most common in adults. Occasionally pubic lice may be spread by close personal contact or contact with articles such as clothing, bed linens, and towels that have been used by an infested person. Pubic lice found on the head or eyelashes of children may be an indication of sexual exposure or abuse. Pubic lice do not transmit disease; however, secondary bacterial infection can occur from scratching of the skin. They are much broader in comparison to head and body lice. Adults are found only on the human host and require human blood to survive. If adults are forced off the host, they will die within 48 hours without a blood feeding. Symptoms of a crab louse infection in the pubic area is intense itching, redness and inflammation. These symptoms cause increased circulation to the skin of the pubic region creating a blood-rich environment for the crab louse. Pubic lice infestation can also be diagnosed by identifying the presence of nits or eggs on the pubic hair. In December 2016 NPR reported that "Frequent removal of pubic hair is associated with an increased risk for herpes, syphilis and human papillomavirus". However, the medical community has also seen a recent increase in folliculitis, or infection around the hair follicle, in women who wax or shave their bikini areas. Some of these infections can develop into more serious abscesses that require incision with a scalpel, drainage of the abscess, and antibiotics. Staphylococcus aureus is the most common cause of folliculitis. Burns can result when depilatory wax is used, even according to manufacturer instructions. Grooming risks Pubic hair grooming has been associated with injury and infection. It is estimated that about one quarter of the people who groom their pubic hair have had at least one lifetime injury because of the practice. 
Grooming has also been associated with cutaneous sexually transmitted infections, such as genital warts, syphilis, and herpes. Society and culture According to John Ruskin's biographer Mary Lutyens, the notable author, artist, and art critic was apparently accustomed only to the hairless nudes portrayed unrealistically in art, never having seen a naked woman before his wedding night. He was allegedly so shocked by his discovery of his wife Effie's pubic hair that he rejected her, and the marriage was later legally annulled. He is supposed to have thought his wife was freakish and deformed. Later writers have often followed Lutyens and repeated this version of events. For example, Gene Weingarten, writing in his book I'm with Stupid (2004) states that "Ruskin had [the marriage] annulled because he was horrified to behold upon his bride a thatch of hair, rough and wild, similar to a man's. He thought her a monster." However, there is no proof for this, and some disagree. Peter Fuller in his book Theoria: Art and the Absence of Grace writes, "It has been said that he was frightened on the wedding night by the sight of his wife's pubic hair; more probably, he was perturbed by her menstrual blood." Ruskin's biographers Tim Hilton and John Batchelor also believe that menstruation is the more likely explanation. At puberty, many girls find the sudden sprouting of pubic hair disturbing, and sometimes as unclean, because in many cases young girls have been screened by their family and by society from the sight of pubic hair. Young boys, on the other hand, tend not to be similarly disturbed by the development of their pubic hair, usually having seen body hair on their fathers. With the reintroduction of public beaches and pools, bathing in Western Europe and the Mediterranean early in the 20th century, exposure of both sexes' areas near their pubic hair became more common, and after the progressive reduction in the size of female and male swimsuits, especially since the coming into fashion and growth in popularity of the bikini after the 1940s, the practice of shaving or bikini waxing of pubic hair off the hem lines also came into vogue. Grooming practices In some Middle Eastern societies, removal of male and female body hair has been considered proper hygiene, mandated by local customs, for many centuries. Muslim teaching (applicable to males and females) includes Islamic hygienical jurisprudence in which pubic and armpit hair must be pulled out or shaven to be considered as Sunnah. Trimming is taught to be considered acceptable. Women working in pornography typically remove their pubic hair by shaving, a practice that became fashionable in the late 20th century. Shaving is used rather than bikini waxing because it can be performed daily, whereas waxing requires several days of growth before it can be repeated. According to feminist writer Caitlin Moran, the reason for the removal of pubic hair from women in pornography was a matter of "technical considerations of cinematography". Hair removal progressed to full removal. Because of the popularity of pornography, pubic hair shaving was mimicked by women, and it is among women outside the pornography industry that waxing became common in the late 20th and 21st century. The presentation is regarded by some as being erotic and aesthetic, while others consider the style as unnatural. Some people remove pubic hairs for erotic and sexual reasons or because they or their sex partner enjoy the feel of a hairless crotch. 
According to one academic study, as of 2016, approximately 50% of men in the United States practice regular pubic hair grooming, which can include trimming, shaving and removal. The study found that the prevalence of grooming decreases with age. Of males who groom pubic hair, 87% groom the hair above the penis, 66% groom the scrotum and 57% groom the penile shaft. Methods All hair can be removed with wax formulated for that purpose. Some individuals may remove part or all of their pubic hair, axillary hair and facial hair. Pubic hair removal using wax is called bikini waxing. The method of removing hair is called depilation (when removing only the hair above the skin) or epilation (when removing the entire hair). Beauty salons often offer various waxing services. It is sometimes referred to as "pubic topiary". Sugaring, an alternative to waxing, uses a sugar-based paste, which may include lemon, rather than wax. Sugaring removes fewer skin cells than waxing. Other methods of hair removal include laser hair removal and electrolysis. Some women modify their pubic hair, either to fit in with societal trends or as an expression of their own style or lifestyle. Styles of pubic hair modification include: Triangle or American wax (pubic hair is shortened from the sides to form a triangle so that pubic hair is hidden while wearing swimwear. The triangle can range from the very edge of the "bikini line" to up to an inch reduction on either side. Remaining hair length can be from an inch and a half to half an inch); Landing strip/French wax (pubic hair removed except for a strip of hair extending from the abdomen to the vulva); Partial Brazilian wax (pubic hair fully removed except for a small triangular strip); Full Brazilian wax or "sphinx" (complete removal of pubic hair); and Freestyle. There are variations of the Brazilian wax in which a design is formed out of the pubic hair. Stencils for several shapes are available commercially. A controversial Gucci commercial included female pubic hair shaved into a 'G'. Sexual attraction A woman or man's decision to grow or shave their pubic hair can play a role in attracting a partner. A Cosmopolitan study found that a plurality of respondents, both male and female, preferred partners who shave or at least trim their pubic hair. A smaller percentage of 6% of men and 10% of women preferred their partners to go natural and not shave or trim their pubic hair. In art In ancient Egyptian art, female pubic hair is indicated in the form of painted triangles. In medieval and classical European art, pubic hair was very rarely depicted, and male pubic hair was often, but not always, omitted. Sometimes it was portrayed in stylized form, as was the case with Greek graphic art. In 16th century southern Europe, Michelangelo showed the male David with stylized pubic hair, but female bodies were depicted hairless below the head. Nevertheless, Michelangelo's male nudes on the Sistine Chapel ceiling display no pubic hair. In the late 18th century, female pubic hair was openly portrayed in Japanese shunga (erotica), especially in the ukiyo-e tradition. Hokusai's picture The Dream of the Fisherman's Wife (1814), which depicts a woman having an erotic fantasy, is a well-known example. In Japanese drawings, such as hentai, pubic hair is often omitted, since for a long time the display of pubic hair was not legal. The interpretation of the law has since changed. 
(1866), by the French artist Gustave Courbet, was controversial for its realism, which pushed the limits of what was considered presentable at the time. In contrast to academic painting, which favored smooth, idealized nudes, this painting showed a close-up view of the vulva, with full pubic hair, of a woman lying on a bed with legs spread. In fashion In 1985, four weeks before his death, Rudi Gernreich unveiled the pubikini, a topless bathing suit that exposed the wearer's mons pubis and pubic hair. It was a thin, V-shaped, thong-style bottom that in the front featured a tiny strip of fabric that exposed the wearer's pubic hair. The pubikini was described as a pièce de résistance totally freeing the human body. In history Evidence of pubic hair removal in ancient India is thought to date back to 4000 to 3000 BC. According to ethnologist F. Fawcett, writing in 1901, he had observed the removal of body hair, including pubic hair about the vulva, as a custom of women from the Hindu Nair caste. In Western societies, after the spread of Christianity, public exposure of a woman's bare skin between the ankle and waist started to be disapproved of culturally. Upper body exposure due to the use of the popular vest bodices used in Western Europe from the 15th century to early 20th century, as the widespread dirndls used even in more traditionally conservative mountain areas and the more or less loose shirts under these, enabled a permissive view of the shoulders, décolletage and arms allowing a free exposure of upper body hair in women of all classes with less rejection or discrimination than body hair on the sex organs, obviously to conceal by implication. Many people came to consider public exposure of pubic hair to be embarrassing. It may be regarded as immodest and sometimes as obscene. However, it never came to have a full hold in Western culture in wide tracts of Central Europe, until the encroaching of Protestantism during the 16th century on formerly more tolerant customs. In the 1450s, British prostitutes shaved their pubic hair for purposes of personal hygiene and the combatting of pubic lice and would don merkins (or pubic wigs) when their line of work required it. Among the British upper classes during the Georgian era, pubic hair from one's lover was frequently collected as a souvenir. The curls were, for instance, worn like cockades in men's hats as potency talismans or exchanged among lovers as tokens of affection. The museum of St. Andrews University in Scotland has in its collection a snuffbox full of pubic hair of one of King George IV's mistresses (possibly Elizabeth Conyngham), which the notoriously licentious monarch donated to the Fife sex club, The Beggar's Benison. In literature In the erotic novel My Secret Life the narrator "Walter", an evident connoisseur of female pubic hair, talks with clear delight of a fine bush of a Scotswoman's thick red pubic hair: The bush was long and thick, twisting and curling in masses half-way up to her navel, and it spread about up her buttocks, gradually getting shorter there. In another part of his autobiography Walter remarks that he has seen those "bare of hair, those with but hairy stubble, those with bushes six inches long, covering them from bum bone to navel." And he adds reflectively – "there is not much that I have not seen, felt or tried, with respect to this supreme female article." 
In like vein, in The Memoirs of Dolly Morton, an American erotic classic, the attributes of Miss Dean are noted with some surprise – her spot was covered with a "thick forest of glossy dark brown hair," with locks nearly two inches long. One man remarked: But Gosh! I've never seen such a fleece between a woman's legs in my life. Darn me if she wouldn't have to be sheared before man could get into her.
Biology and health sciences
Integumentary system
Biology
155131
https://en.wikipedia.org/wiki/Bruxism
Bruxism
Bruxism is excessive teeth grinding or jaw clenching. It is an oral parafunctional activity; i.e., it is unrelated to normal function such as eating or talking. Bruxism is a common behavior; the global prevalence of bruxism (both sleep and awake) is about 22%. Several symptoms are commonly associated with bruxism, including aching jaw muscles, headaches, hypersensitive teeth, tooth wear, and damage to dental restorations (e.g. crowns and fillings). Symptoms may be minimal, without patient awareness of the condition. If left untreated, sustained grinding can progressively and severely wear down the affected teeth. There are two main types of bruxism: one occurs during sleep (nocturnal or sleep bruxism) and one during wakefulness (awake bruxism). Dental damage may be similar in both types, but the symptoms of sleep bruxism tend to be worse on waking and improve during the course of the day, whereas the symptoms of awake bruxism may not be present at all on waking and then worsen over the day. The causes of bruxism are not completely understood, but probably involve multiple factors. Awake bruxism is more common in women, whereas men and women are affected in equal proportions by sleep bruxism, and awake bruxism is thought to have different causes from sleep bruxism. Several treatments are in use, although there is little evidence of robust efficacy for any particular treatment. Epidemiology There is wide variation in reported epidemiologic data for bruxism, largely due to differences in the definition, diagnosis and research methodologies of these studies. For example, several studies use self-reported bruxism as a measure of bruxism, and since many people with bruxism are not aware of their habit, self-reported tooth grinding and clenching habits may be a poor measure of the true prevalence. The ICSD-R states that 85–90% of the general population grind their teeth to a degree at some point during their life, although only 5% will develop a clinical condition. Some studies have reported that awake bruxism affects females more commonly than males, while in sleep bruxism males and females are affected equally. Children are reported to brux as commonly as adults. It is possible for sleep bruxism to occur as early as the first year of life, after the first teeth (deciduous incisors) erupt into the mouth, and the overall prevalence in children is about 14–20%. The ICSD-R states that sleep bruxism may occur in over 50% of normal infants. Sleep bruxism often develops during adolescence, and the prevalence in 18- to 29-year-olds is about 13%. The overall prevalence in adults is reported to be 8%, and people over the age of 60 are less likely to be affected, with the prevalence dropping to about 3% in this group. According to a meta-analysis conducted in 2024, the global prevalence of bruxism (both sleep and awake) is 22.22%. The global prevalence of sleep bruxism is 21%, while the prevalence of awake bruxism is 23%. The occurrence of sleep bruxism, based on polysomnography, was estimated at 43%. The highest prevalence of sleep bruxism was observed in North America at 31%, followed by South America at 23%, Europe at 21%, and Asia at 19%. The prevalence of awake bruxism was highest in South America at 30%, followed by Asia at 25% and Europe at 18%. The review also concluded that, overall, bruxism affects males and females equally and affects elderly people less commonly. 
Signs and symptoms Most people who brux are unaware of the problem, either because there are no symptoms, or because the symptoms are not understood to be associated with a clenching and grinding problem. The symptoms of sleep bruxism are usually most intense immediately after waking, and then slowly abate, and the symptoms of a grinding habit which occurs mainly while awake tend to worsen through the day, and may not be present on waking. Bruxism may cause a variety of signs and symptoms, including: A grinding or tapping noise during sleep, sometimes detected by a partner or a parent. This noise can be surprisingly loud and unpleasant, and can wake a sleeping partner. Noises are rarely associated with awake bruxism. Other parafunctional activity which may occur together with bruxism: cheek biting (which may manifest as morsicatio buccarum or linea alba), or lip biting. A burning sensation on the tongue (see: glossodynia), possibly related to a coexistent "tongue thrusting" parafunctional activity. Indentations of the teeth in the tongue ("crenated tongue" or "scalloped tongue"). Hypertrophy of the muscles of mastication (increase in the size of the muscles that move the jaw), particularly the masseter muscle. Tenderness, pain or fatigue of the muscles of mastication, which may get worse during chewing or other jaw movement. Trismus (restricted mouth opening). Pain or tenderness of the temporomandibular joints, which may manifest as preauricular pain (in front of the ear), or pain referred to the ear (otalgia). Clicking of the temporomandibular joints. Headaches, particularly pain in the temples, caused by muscle pain associated with the temporalis muscle. Excessive tooth wear, particularly attrition, which flattens the occlusal (biting) surface, but also possibly other types of tooth wear such as abfraction, where notches form around the neck of the teeth at the gumline. Tooth fractures, and repeated failure of dental restorations (fillings, crowns, etc.). Hypersensitive teeth, (e.g. dental pain when drinking a cold liquid) caused by wearing away of the thickness of insulating layers of dentin and enamel around the dental pulp. Inflammation of the periodontal ligament of teeth, which may make them sore to bite on, and possibly also a degree of loosening of the teeth. Bruxism is usually detected because of the effects of the process (most commonly tooth wear and pain), rather than the process itself. The large forces that can be generated during bruxism can have detrimental effects on the components of masticatory system, namely the teeth, the periodontium and the articulation of the mandible with the skull (the temporomandibular joints). The muscles of mastication that act to move the jaw can also be affected since they are being utilized over and above of normal function. Pain Most people with bruxism will experience no pain. The presence or degree of pain does not necessarily correlate with the severity of grinding or clenching. The pain in the muscles of mastication caused by bruxism can be likened to muscle pain after exercise. The pain may be felt over the angle of the jaw (masseter) or in the temple (temporalis), and may be described as a headache or an aching jaw. Most (but not all) bruxism includes clenching force provided by masseter and temporalis muscle groups; but some bruxers clench and grind front teeth only, which involves minimal action of the masseter and temporalis muscles. 
The temporomandibular joints themselves may also become painful, which is usually felt just in front of the ear, or inside the ear itself. Clicking of the jaw joint may also develop. The forces exerted on the teeth are more than the periodontal ligament is biologically designed to handle, and so inflammation may result. A tooth may become sore to bite on, and further, tooth wear may reduce the insulating width of enamel and dentin that protects the pulp of the tooth and result in hypersensitivity, e.g. to cold stimuli. The relationship of bruxism with temporomandibular joint dysfunction (TMD, or temporomandibular pain dysfunction syndrome) is debated. Many suggest that sleep bruxism can be a causative or contributory factor to pain symptoms in TMD. Indeed, the symptoms of TMD overlap with those of bruxism. Others suggest that there is no strong association between TMD and bruxism. A systematic review investigating the possible relationship concluded that when self-reported bruxism is used to diagnose bruxism, there is a positive association with TMD pain, and when stricter diagnostic criteria for bruxism are used, the association with TMD symptoms is much lower. In severe, chronic cases, bruxism can lead to myofascial pain and arthritis of the temporomandibular joints.
Tooth wear
Many publications list tooth wear as a consequence of bruxism, but some report a lack of a positive relationship between tooth wear and bruxism. Tooth wear caused by tooth-to-tooth contact is termed attrition. This is the most usual type of tooth wear that occurs in bruxism, and affects the occlusal surface (the biting surface) of the teeth. The exact location and pattern of attrition depends on how the bruxism occurs; e.g., when the canines and incisors of the opposing arches are moved against each other laterally, by the action of the lateral pterygoid muscles, this can lead to the wearing down of the incisal edges of the teeth. To grind the front teeth, most people need to posture their mandible forwards, unless there is an existing edge-to-edge or class III incisal relationship. People with bruxism may also grind their posterior teeth (back teeth), which wears down the cusps of the occlusal surface. Once tooth wear progresses through the enamel layer, the exposed dentin layer is softer and more vulnerable to wear and tooth decay. If enough of the tooth is worn away or decayed, the tooth will effectively be weakened, and may fracture under the increased forces that occur in bruxism. Abfraction is another type of tooth wear that is postulated to occur with bruxism, although some still argue whether this type of tooth wear is a reality. Abfraction cavities are said to occur usually on the facial aspect of teeth, in the cervical region, as V-shaped defects caused by flexing of the tooth under occlusal forces. It is argued that similar lesions can be caused by long-term forceful toothbrushing. However, the V-shape of the cavities does not necessarily indicate toothbrush abrasion, and the observation that some abfraction cavities occur below the level of the gumline, i.e., in an area shielded from toothbrush abrasion, supports the validity of this mechanism of tooth wear. According to some sources, erosion may also act synergistically with attrition to contribute to tooth wear in some bruxists.
Tooth mobility
The view that occlusal trauma (as may occur during bruxism) is a causative factor in gingivitis and periodontitis is not widely accepted.
It is thought that the periodontal ligament may respond to increased occlusal (biting) forces by resorbing some of the bone of the alveolar crest, which may result in increased tooth mobility; however, these changes are reversible if the occlusal force is reduced. Tooth movement that occurs during occlusal loading is sometimes termed fremitus. It is generally accepted that increased occlusal forces are able to increase the rate of progression of pre-existing periodontal disease (gum disease); however, the mainstay of treatment is plaque control rather than elaborate occlusal adjustments. It is also generally accepted that periodontal disease is a far more common cause of tooth mobility and pathological tooth migration than any influence of bruxism, although bruxism may much less commonly be involved in both.
Causes
The muscles of mastication (the temporalis muscle, masseter muscle, medial pterygoid muscle and lateral pterygoid muscle) are paired on either side and work together to move the mandible, which hinges and slides around its dual articulation with the skull at the temporomandibular joints. Some of the muscles work to elevate the mandible (close the mouth), and others are also involved in lateral (side-to-side), protrusive or retractive movements. Mastication (chewing) is a complex neuromuscular activity that can be controlled either by subconscious processes or by conscious processes. In individuals without bruxism or other parafunctional activities, during wakefulness the jaw is generally at rest and the teeth are not in contact, except while speaking, swallowing or chewing. It is estimated that the teeth are in contact for less than 20 minutes per day, mostly during chewing and swallowing. Normally during sleep, the voluntary muscles are inactive due to physiologic motor paralysis, and the jaw is usually open. Ankyloglossia has been suspected as a cause of bruxism. Some bruxism activity is rhythmic, with bite force pulses of tenths of a second (like chewing), and some has longer bite force pulses of 1 to 30 seconds (clenching). Some individuals clench without significant lateral movements. Bruxism can also be regarded as a disorder of repetitive, unconscious contraction of muscles. This typically involves the masseter muscle and the anterior portion of the temporalis (the large outer muscles that clench), and the lateral pterygoids, relatively small bilateral muscles that act together to perform sideways grinding.
Multiple causes
The cause of bruxism is largely unknown, but it is generally accepted to have multiple possible causes. Bruxism is a parafunctional activity, but it is debated whether this represents a subconscious habit or is entirely involuntary. The relative importance of the various identified possible causative factors is also debated. Awake bruxism is thought to be usually semivoluntary, and often associated with stress caused by family responsibilities or work pressures. Some suggest that in children, bruxism may occasionally represent a response to earache or teething. Awake bruxism usually involves clenching (sometimes the term "awake clenching" is used instead of awake bruxism), but also possibly grinding, and is often associated with other semivoluntary oral habits such as cheek biting, nail biting, absent-mindedly chewing on a pen or pencil, or tongue thrusting (where the tongue is pushed against the front teeth forcefully).
There is evidence that sleep bruxism is caused by mechanisms related to the central nervous system, involving sleep arousal and neurotransmitter abnormalities. Underlying these factors may be psychosocial factors, including daytime stress which disrupts peaceful sleep. Sleep bruxism is mainly characterized by "rhythmic masticatory muscle activity" (RMMA) at a frequency of about once per second, and also with occasional tooth grinding. It has been shown that the majority (86%) of sleep bruxism episodes occur during periods of sleep arousal. One study reported that sleep arousals which were experimentally induced with sensory stimulation in sleeping bruxists triggered episodes of sleep bruxism. Sleep arousals are a sudden change in the depth of the sleep stage, and may also be accompanied by increased heart rate, respiratory changes and muscular activity, such as leg movements. Initial reports have suggested that episodes of sleep bruxism may be accompanied by gastroesophageal reflux, decreased esophageal pH (increased acidity), swallowing, and decreased salivary flow. Another report suggested a link between episodes of sleep bruxism and a supine sleeping position (lying face up). Disturbance of the dopaminergic system in the central nervous system has also been suggested to be involved in the etiology of bruxism. Evidence for this comes from observations of the modifying effect on bruxing activity of medications which alter dopamine release, such as levodopa, amphetamines or nicotine. Nicotine stimulates release of dopamine, which is postulated to explain why bruxism is twice as common in smokers compared to non-smokers.
Historical focus
Historically, many believed that problems with the bite were the sole cause of bruxism. It was often claimed that a person would grind at the interfering area in a subconscious, instinctive attempt to wear this down and "self-equilibrate" their occlusion. However, occlusal interferences are extremely common and usually do not cause any problems. It is unclear whether people with bruxism tend to notice problems with the bite because of their clenching and grinding habit, or whether these act as a causative factor in the development of the condition. In sleep bruxism especially, there is no evidence that removal of occlusal interferences has any impact on the condition. People with no teeth at all who wear dentures can still have bruxism, although dentures also often change the original bite. Most modern sources state that there is no relationship, or at most a minimal relationship, between bruxism and occlusal factors. The findings of one study, which used self-reported tooth grinding rather than clinical examination to detect bruxism, suggested that there may be more of a relationship between occlusal factors and bruxism in children. However, the role of occlusal factors in bruxism cannot be completely discounted due to insufficient evidence and problems with the design of studies. A minority of researchers continue to claim that various adjustments to the mechanics of the bite are capable of curing bruxism (see Occlusal adjustment/reorganization).
Psychosocial factors
Many studies have reported significant psychosocial risk factors for bruxism, particularly a stressful lifestyle, and this evidence is growing, but is still not conclusive. Some consider emotional stress and anxiety to be the main triggering factors. It has been reported that persons with bruxism respond differently to depression, hostility and stress compared to people without bruxism.
Stress has a stronger relationship to awake bruxism, but the role of stress in sleep bruxism is less clear, with some stating that there is no evidence for a relationship with sleep bruxism. However, children with sleep bruxism have been shown to have greater levels of anxiety than other children. People aged 50 with bruxism are more likely to be single and have a high level of education. Work-related stress and irregular work shifts may also be involved. Personality traits are also commonly discussed in publications concerning the causes of bruxism, e.g. aggressive, competitive or hyperactive personality types. Some suggest that suppressed anger or frustration can contribute to bruxism. Stressful periods such as examinations, family bereavement, marriage, divorce, or relocation have been suggested to intensify bruxism. Awake bruxism often occurs during periods of concentration such as while working at a computer, driving or reading. Animal studies have also suggested a link between bruxism and psychosocial factors. Rosales et al. electrically shocked lab rats, and then observed high levels of bruxism-like muscular activity in rats that were allowed to watch this treatment compared to rats that did not see it. They proposed that the rats who witnessed the electrical shocking of other rats were under emotional stress which may have caused the bruxism-like behavior. Genetic factors Some research suggests that there may be a degree of inherited susceptibility to develop sleep bruxism. 21–50% of people with sleep bruxism have a direct family member who had sleep bruxism during childhood, suggesting that there are genetic factors involved, although no genetic markers have yet been identified. Offspring of people who have sleep bruxism are more likely to also have sleep bruxism than children of people who do not have bruxism, or people with awake bruxism rather than sleep bruxism. Medications Certain stimulant drugs, including both prescribed and recreational drugs, are thought by some to cause the development of bruxism. However, others argue that there is insufficient evidence to draw such a conclusion. Examples may include dopamine agonists, dopamine antagonists, tricyclic antidepressants, selective serotonin reuptake inhibitors, alcohol, cocaine, and amphetamines (including those taken for medical reasons). In some reported cases where bruxism is thought to have been initiated by selective serotonin reuptake inhibitors, decreasing the dose resolved the side effect. Other sources state that reports of selective serotonin reuptake inhibitors causing bruxism are rare, or only occur with long-term use. Specific examples include levodopa (when used in the long term, as in Parkinson's disease), fluoxetine, metoclopramide, lithium, cocaine, venlafaxine, citalopram, fluvoxamine, methylenedioxyamphetamine (MDA), methylphenidate (used in attention deficit hyperactive disorder), and gamma-hydroxybutyric acid (GHB) and similar gamma-aminobutyric acid-inducing analogues such as phenibut. Bruxism can also be exacerbated by excessive consumption of caffeine, as in coffee, tea or chocolate. Bruxism has also been reported to occur commonly comorbid with drug addiction. Methylenedioxymethamphetamine (MDMA, ecstasy) has been reported to be associated with bruxism, which occurs immediately after taking the drug and for several days afterwards. Tooth wear in people who take ecstasy is also frequently much more severe than in people with bruxism not associated with ecstasy. 
Occlusal factors
Occlusion is defined most simply as "contacts between teeth", and is the meeting of teeth during biting and chewing. The term does not imply any disease. Malocclusion is a medical term referring to less than ideal positioning of the upper teeth relative to the lower teeth, which can occur either when the upper jaw is ideally proportioned to the lower jaw or when there is a discrepancy between the size of the upper jaw and that of the lower jaw. Malocclusion of some sort is so common that the concept of an "ideal occlusion" is called into question, and it can be considered "normal to be abnormal". An occlusal interference may refer to a problem which interferes with the normal path of the bite, and is usually used to describe a localized problem with the position or shape of a single tooth or group of teeth. A premature contact is one part of the bite meeting sooner than other parts, meaning that the rest of the teeth meet later or are held open; e.g., a new dental restoration on a tooth (e.g., a crown) which has a slightly different shape or position to the original tooth may contact too soon in the bite. A deflective contact/interference is an interference with the bite that changes the normal path of the bite. A common example of a deflective interference is an over-erupted upper wisdom tooth, often because the lower wisdom tooth has been removed or is impacted. In this example, when the jaws are brought together, the lower back teeth contact the prominent upper wisdom tooth before the other teeth, and the lower jaw has to move forward to allow the rest of the teeth to meet. The difference between a premature contact and a deflective interference is that the latter implies a dynamic abnormality in the bite.
Possible associations
Associations between bruxism and other conditions, usually neurological or psychiatric disorders, have occasionally been reported, with varying degrees of evidence (often in the form of case reports). Examples include:
Acrodynia
Atypical facial pain
Autism
Cerebral palsy
Disturbed sleep patterns and other sleep disorders, such as obstructive sleep apnea, snoring, moderate daytime sleepiness, and insomnia
Down syndrome
Dyskinesias
Epilepsy
Eustachian tube dysfunction
Infarction in the basal ganglia
Intellectual disability, particularly in children
Leigh disease
Meningococcal septicaemia
Multiple system atrophy
Oromandibular dystonia
Parkinson's disease (possibly due to long-term therapy with levodopa causing dopaminergic dysfunction)
Rett syndrome
Torus mandibularis and buccal exostosis
Trauma, e.g. brain injury or coma
Diagnosis
Early diagnosis of bruxism is advantageous, but difficult. Early diagnosis can help to prevent damage that may otherwise be incurred and the detrimental effect on quality of life. A diagnosis of bruxism is usually made clinically, and is mainly based on the person's history (e.g. reports of grinding noises) and the presence of typical signs and symptoms, including tooth mobility, tooth wear, masseteric hypertrophy, indentations on the tongue, hypersensitive teeth (which may be misdiagnosed as reversible pulpitis), pain in the muscles of mastication, and clicking or locking of the temporomandibular joints. Questionnaires can be used to screen for bruxism in both the clinical and research settings. For tooth grinders who live in a household with other people, diagnosis of grinding is straightforward: housemates or family members can alert a bruxer to recurrent grinding.
Grinders who live alone can instead use a sound-activated tape recorder. To confirm the condition of clenching, on the other hand, bruxers may rely on such devices as the Bruxchecker, Bruxcore, or a beeswax-bearing biteplate. The Individual (personal) Tooth-Wear Index was developed to objectively quantify the degree of tooth wear in an individual, without being affected by the number of missing teeth. Bruxism is not the only cause of tooth wear. Another possible cause of tooth wear is acid erosion, which may occur in people who drink a lot of acidic liquids such as concentrated fruit juice, or in people who frequently vomit or regurgitate stomach acid, which itself can occur for various reasons. People also demonstrate a normal level of tooth wear, associated with normal function. The presence of tooth wear only indicates that it occurred at some point in the past, and does not necessarily indicate that the loss of tooth substance is ongoing. People who clench and perform minimal grinding will also not show much tooth wear. Occlusal splints are usually employed as a treatment for bruxism, but they can also be of diagnostic use, e.g. to observe the presence or absence of wear on the splint after a certain period of wearing it at night. The most usual trigger in sleep bruxism that leads a person to seek medical or dental advice is being informed by a sleeping partner of unpleasant grinding noises during sleep. The diagnosis of sleep bruxism is usually straightforward, and involves the exclusion of dental diseases, temporomandibular disorders, and the rhythmic jaw movements that occur with seizure disorders (e.g. epilepsy). This usually involves a dental examination, and possibly electroencephalography if a seizure disorder is suspected. Polysomnography shows increased masseter and temporalis muscular activity during sleep. Polysomnography may involve electroencephalography, electromyography, electrocardiography, air flow monitoring and audio–video recording. It may be useful to help exclude other sleep disorders; however, due to the expense of the use of a sleep lab, polysomnography is mostly of relevance to research rather than routine clinical diagnosis of bruxism. Tooth wear may be brought to the person's attention during routine dental examination. With awake bruxism, most people will initially deny clenching and grinding because they are unaware of the habit. Often, the person may re-attend soon after the first visit and report that they have now become aware of such a habit. Several devices have been developed that aim to objectively measure bruxism activity, either in terms of muscular activity or bite forces. They have been criticized for introducing a possible change in the bruxing habit, whether increasing or decreasing it, and are therefore poorly representative of natural bruxing activity. These are mostly of relevance to research, and are rarely used in the routine clinical diagnosis of bruxism. Examples include the "Bruxcore Bruxism-Monitoring Device" (BBMD, "Bruxcore Plate"), the "intra-splint force detector" (ISFD), and electromyographic devices to measure masseter or temporalis muscle activity (e.g. the "BiteStrip" and the "Grindcare").
ICSD-R diagnostic criteria
The ICSD-R listed diagnostic criteria for sleep bruxism. The minimal criteria include both of the following:
A. A symptom of tooth-grinding or tooth-clenching during sleep, and
B.
One or more of the following:
Abnormal tooth wear
Grinding sounds
Discomfort of the jaw muscles
With the following criteria supporting the diagnosis:
C. Polysomnography shows both:
Activity of jaw muscles during sleep
No associated epileptic activity
D. No other medical or mental disorders (e.g., sleep-related epilepsy, which may cause abnormal movement during sleep).
E. Other sleep disorders (e.g., obstructive sleep apnea syndrome) may be present concurrently.
Definition examples
Bruxism is derived from the Greek word (brykein) "to bite, or to gnash, grind the teeth". People with bruxism are called bruxists or bruxers, and the verb itself is "to brux". There is no single widely accepted definition of bruxism, and published definitions vary.
Classification by temporal pattern
Bruxism can be subdivided into two types based upon when the parafunctional activity occurs – during sleep ("sleep bruxism"), or while awake ("awake bruxism"). This is the most widely used classification, since sleep bruxism generally has different causes from awake bruxism, although the effects of the condition on the teeth may be the same. The treatment is also often dependent upon whether the bruxism happens during sleep or while awake; e.g., an occlusal splint worn during sleep in a person who only bruxes when awake will probably have no benefit. Some have even suggested that sleep bruxism is an entirely different disorder and is not associated with awake bruxism. Awake bruxism is sometimes abbreviated to AB, and is also termed "diurnal bruxism", DB, or "daytime bruxing". Sleep bruxism is sometimes abbreviated to SB, and is also termed "sleep-related bruxism", "nocturnal bruxism", or "nocturnal tooth grinding". According to the International Classification of Sleep Disorders revised edition (ICSD-R), the term "sleep bruxism" is the most appropriate since this type occurs during sleep specifically rather than being associated with a particular time of day; i.e., if a person with sleep bruxism were to sleep during the day and stay awake at night then the condition would not occur during the night but during the day. The ICSD-R defined sleep bruxism as "a stereotyped movement disorder characterized by grinding or clenching of the teeth during sleep", classifying it as a parasomnia. The second edition (ICSD-2), however, reclassified bruxism as a "sleep related movement disorder" rather than a parasomnia.
Classification by cause
Alternatively, bruxism can be divided into primary bruxism (also termed "idiopathic bruxism"), where the disorder is not related to any other medical condition, and secondary bruxism, where the disorder is associated with other medical conditions. Secondary bruxism includes iatrogenic causes, such as the side effects of prescribed medications. Another source divides the causes of bruxism into three groups, namely central or pathophysiological factors, psychosocial factors and peripheral factors. The World Health Organization's International Classification of Diseases 10th revision does not have an entry called bruxism, instead listing "tooth grinding" under somatoform disorders. To describe bruxism as a purely somatoform disorder does not reflect the mainstream, modern view of this condition (see causes).
Classification by severity The ICSD-R described three different severities of sleep bruxism, defining mild as occurring less than nightly, with no damage to teeth or psychosocial impairment; moderate as occurring nightly, with mild impairment of psychosocial functioning; and severe as occurring nightly, and with damage to the teeth, temporomandibular disorders and other physical injuries, and severe psychosocial impairment. Classification by duration The ICSD-R also described three different types of sleep bruxism according to the duration the condition is present, namely acute, which lasts for less than one week; subacute, which lasts for more than a week and less than one month; and chronic which lasts for over a month. Management Treatment for bruxism revolves around repairing the damage to teeth that has already occurred, and also often, via one or more of several available methods, attempting to prevent further damage and manage symptoms, but there is no widely accepted, best treatment. Since bruxism is not life-threatening, and there is little evidence of the efficacy of any treatment, it has been recommended that only conservative treatment which is reversible and that carries low risk of morbidity should be used. The main treatments that have been described in awake and sleep bruxism are described below. Psychosocial interventions Given the strong association between awake bruxism and psychosocial factors (the relationship between sleep bruxism and psychosocial factors being unclear), the role of psychosocial interventions could be argued to be central to the management. The most simple form of treatment is therefore reassurance that the condition does not represent a serious disease, which may act to alleviate contributing stress. Sleep hygiene education should be provided by the clinician, as well as a clear and short explanation of bruxism (definition, causes and treatment options). Relaxation and tension-reduction have not been found to reduce bruxism symptoms, but have given patients a sense of well-being. One study has reported less grinding and reduction of EMG activity after hypnotherapy. Other interventions include relaxation techniques, stress management, behavioural modification, habit reversal and hypnosis (self hypnosis or with a hypnotherapist). Cognitive behavioral therapy has been recommended by some for treatment of bruxism. In many cases awake bruxism can be reduced by using reminder techniques. Combined with a protocol sheet this can also help to evaluate in which situations bruxism is most prevalent. Medication Many different medications have been used to treat bruxism, including benzodiazepines, anticonvulsants, beta blockers, dopamine agents, antidepressants, muscle relaxants, and others. However, there is little, if any, evidence for their respective and comparative efficacies with each other and when compared to a placebo. A multiyear systematic review to investigate the evidence for drug treatments in sleep bruxism published in 2014 (Pharmacotherapy for Sleep Bruxism. Macedo, et al.) found "insufficient evidence on the effectiveness of pharmacotherapy for the treatment of sleep bruxism." 
Specific drugs that have been studied in sleep bruxism are clonazepam, levodopa, amitriptyline, bromocriptine, pergolide, clonidine, propranolol, and l-tryptophan, with some showing no effect and others appearing to have promising initial results; however, it has been suggested that further safety testing is required before any evidence-based clinical recommendations can be made. When bruxism is related to the use of selective serotonin reuptake inhibitors in depression, adding buspirone has been reported to resolve the side effect. Tricyclic antidepressants have also been suggested to be preferable to selective serotonin reuptake inhibitors in people with bruxism, and may help with the pain.
Prevention of dental damage
Bruxism can cause significant tooth wear if it is severe, and dental restorations (crowns, fillings, etc.) are sometimes damaged or lost, sometimes repeatedly. Most dentists therefore prefer to keep dental treatment in people with bruxism very simple and only carry it out when essential, since any dental work is likely to fail in the long term. Dental implants, dental ceramics such as Emax crowns, and complex bridgework, for example, are relatively contraindicated in bruxists. In the case of crowns, the strength of the restoration becomes more important, sometimes at the cost of aesthetic considerations. For example, a full-coverage gold crown, which has a degree of flexibility and also involves less removal (and therefore less weakening) of the underlying natural tooth, may be more appropriate than other types of crown which are primarily designed for esthetics rather than durability. Porcelain veneers on the incisors are particularly vulnerable to damage, and sometimes a crown can be perforated by occlusal wear. Occlusal splints (also termed dental guards) are commonly prescribed, mainly by dentists and dental specialists, as a treatment for bruxism. Proponents of their use claim many benefits; however, when the evidence is critically examined in systematic reviews of the topic, it is reported that there is insufficient evidence to show that occlusal splints are effective for sleep bruxism or for bruxism overall. Furthermore, occlusal splints are probably ineffective for awake bruxism, since they tend to be worn only during sleep. However, occlusal splints may be of some benefit in reducing the tooth wear that may accompany bruxism, but by mechanically protecting the teeth rather than reducing the bruxing activity itself. In a minority of cases, sleep bruxism may be made worse by an occlusal splint. Some patients will periodically return with splints with holes worn through them, either because the bruxism is aggravated or unaffected by the presence of the splint. When tooth-to-tooth contact is possible through the holes in a splint, it offers no protection against tooth wear and needs to be replaced. Occlusal splints are divided into partial or full-coverage splints according to whether they fit over some or all of the teeth. They are typically made of plastic (e.g. acrylic) and can be hard or soft. A lower appliance can be worn alone, or in combination with an upper appliance. Usually lower splints are better tolerated in people with a sensitive gag reflex. Another problem with wearing a splint can be stimulation of salivary flow, and for this reason some advise starting to wear the splint about 30 minutes before going to bed so that this does not lead to difficulty falling asleep. As an added measure for hypersensitive teeth in bruxism, desensitizing toothpastes (e.g.
containing strontium chloride) can be applied initially inside the splint so that the material is in contact with the teeth all night. This can be continued until there is only a normal level of sensitivity from the teeth, although it should be remembered that sensitivity to thermal stimuli is also a symptom of pulpitis, and may indicate the presence of tooth decay rather than merely hypersensitive teeth. Splints may also reduce muscle strain by allowing the upper and lower jaw to move easily with respect to each other. Treatment goals include constraining the bruxing pattern to avoid damage to the temporomandibular joints; stabilizing the occlusion by minimizing gradual changes to the positions of the teeth; preventing tooth damage; and revealing the extent and patterns of bruxism through examination of the markings on the splint's surface. A dental guard is typically worn during every night's sleep on a long-term basis. However, a meta-analysis of occlusal splints (dental guards) used for this purpose concluded "There is not enough evidence to state that the occlusal splint is effective for treating sleep bruxism." A repositioning splint is designed to change the patient's occlusion, or bite. The efficacy of such devices is debated. Some writers propose that irreversible complications can result from the long-term use of mouthguards and repositioning splints. Randomized controlled trials with these types of devices generally show no benefit over other therapies. Another partial splint is the nociceptive trigeminal inhibition tension suppression system (NTI-TSS) dental guard. This splint snaps onto the front teeth only. It is theorized to prevent tissue damage primarily by reducing the bite force, translating attempts to close the jaw normally into a forward twisting of the lower front teeth. The intent is for the brain to interpret the nerve sensations as undesirable, automatically and subconsciously reducing clenching force. However, there may be potential for the NTI-TSS device to act as a Dahl appliance, holding the posterior teeth out of occlusion and leading to their over-eruption, deranging the occlusion (i.e. it may cause the teeth to move position). This is far more likely if the appliance is worn for excessive periods of time, which is why NTI-type appliances are designed for night-time use only, and ongoing follow-ups are recommended. A mandibular advancement device (normally used for treatment of obstructive sleep apnea) may reduce sleep bruxism, although its use may be associated with discomfort.
Botulinum toxin
Botulinum neurotoxin (BoNT) is used as a treatment for bruxism. A 2020 overview of systematic reviews found that botulinum toxin type A (BTX-A) showed a significant reduction in pain and in sleep bruxism frequency when compared to placebo or conventional treatment (behavioral therapy, occlusal splints, and drugs), after 6 and 12 months. Botulinum toxin causes muscle paralysis/atrophy by inhibition of acetylcholine release at neuromuscular junctions. BoNT injections are used in bruxism on the theory that a dilute solution of the toxin will partially paralyze the muscles and lessen their ability to forcefully clench and grind the jaw, while aiming to retain enough muscular function to enable normal activities such as talking and eating. This treatment typically involves five or six injections into the masseter and temporalis muscles, and less often into the lateral pterygoids (given the possible risk of decreasing the ability to swallow), and takes a few minutes per side.
The effects may be noticeable by the next day, and they may last for about three months. Occasionally, adverse effects such as bruising may occur, but this is quite rare. The dose of toxin used depends upon the person, and a higher dose may be needed in people with stronger muscles of mastication. With the temporary and partial muscle paralysis, atrophy of disuse may occur, meaning that the future required dose may be smaller or the length of time the effects last may be increased.
Biofeedback
Biofeedback is a process or device that allows an individual to become aware of, and alter, physiological activity with the aim of improving health. Although biofeedback has not been evaluated for awake bruxism, there is recent evidence for its efficacy in the management of nocturnal bruxism in small control groups. Electromyographic devices that monitor the associated muscle groups, tied to automatic alerting during periods of clenching and grinding, have been prescribed for awake bruxism. Dental appliances with capsules that break and release a taste stimulus when enough force is applied have also been described for sleep bruxism; these would wake the person from sleep in an attempt to prevent bruxism episodes. Large-scale, double-blind experiments confirming the effectiveness of this approach have yet to be carried out.
Occlusal adjustment/reorganization
As an alternative to simply reactively repairing the damage to teeth and conforming to the existing occlusal scheme, occasionally some dentists will attempt to reorganize the occlusion in the belief that this may redistribute the forces and reduce the amount of damage inflicted on the dentition. Sometimes termed "occlusal rehabilitation" or "occlusal equilibration", this can be a complex procedure, and there is much disagreement between proponents of these techniques on most of the aspects involved, including the indications and the goals. It may involve orthodontics, restorative dentistry or even orthognathic surgery. Some have criticized these occlusal reorganizations as having no evidence base, and as irreversibly damaging the dentition on top of the damage already caused by bruxism.
History
Two thousand years ago, Shuowen Jiezi by Xu Shen documented the definition of the Chinese character "齘" (bruxism) as "the clenching of teeth" (齒相切也). In 610, Zhubing yuanhou lun by Chao Yuanfang documented the definition of bruxism (齘齒) as "the clenching of teeth during sleep" and explained that it was caused by Qi deficiency and blood stasis. In 978, Taiping Shenghuifang by Wang Huaiyin gave a similar explanation and three prescriptions for treatment. "La bruxomanie" (a French term translating to "bruxomania") was suggested by Marie Pietkiewicz in 1907. In 1931, Frohman first coined the term bruxism. Occasionally, recent medical publications use the word bruxomania alongside bruxism to denote specifically bruxism that occurs while awake; however, this term can be considered historical and the modern equivalent would be awake bruxism or diurnal bruxism. It has been shown that the type of research into bruxism has changed over time. Overall between 1966 and 2007, most of the research published was focused on occlusal adjustments and oral splints. Behavioral approaches in research declined from over 60% of publications in the period 1966–86 to about 10% in the period 1997–2007. In the 1960s, a periodontist named Sigurd Peder Ramfjord championed the theory that occlusal factors were responsible for bruxism.
Generations of dentists were educated in this ideology by the prominent textbook on occlusion of the time; however, therapy centered on the removal of occlusal interferences remained unsatisfactory. The belief among dentists that occlusion and bruxism are strongly related is still widespread; however, the majority of researchers now disfavor malocclusion as the main etiologic factor in favor of a more multifactorial, biopsychosocial model of bruxism.
Society and culture
Clenching the teeth is a common display of anger, hostility or frustration in humans and other animals. It is thought that in humans, clenching the teeth may be an evolutionary instinct to display the teeth as weapons, thereby threatening a rival or a predator. The phrase "to grit one's teeth" refers to grinding or clenching the teeth in anger, or to accepting a difficult or unpleasant situation and dealing with it in a determined way. In the Bible there are several references to "gnashing of teeth" in both the Old Testament and the New Testament, where the phrase "weeping and gnashing of teeth" appears no fewer than seven times in Matthew alone. A Chinese proverb has linked bruxism with psychosocial factors: "If a boy clenches, he hates his family for not being prosperous; if a girl clenches, she hates her mother for not being dead." (男孩咬牙,恨家不起;女孩咬牙,恨妈不死。) In David Lynch's 1977 film Eraserhead, Henry Spencer's partner ("Mary X") is shown tossing and turning in her sleep, and snapping her jaws together violently and noisily, depicting sleep bruxism. In Stephen King's 1988 novel The Tommyknockers, the sister of central character Bobbi Anderson also has bruxism. In the 2000 film Requiem for a Dream, the character of Sara Goldfarb (Ellen Burstyn) begins taking an amphetamine-based diet pill and develops bruxism. In the 2005 film Beowulf & Grendel, a modern reworking of the Anglo-Saxon poem Beowulf, Selma the witch tells Beowulf that the troll's name Grendel means "grinder of teeth", stating that "he has bad dreams", a possible allusion to Grendel traumatically witnessing the death of his father as a child at the hands of King Hrothgar. The Geats (the warriors who hunt the troll) alternatively translate the name as "grinder of men's bones" to demonize their prey. In George R. R. Martin's A Song of Ice and Fire series, King Stannis Baratheon grinds his teeth regularly, so loudly it can be heard "half a castle away". In rave culture, recreational use of ecstasy is often reported to cause bruxism. Among people who have taken ecstasy, it is common to use pacifiers, lollipops or chewing gum while dancing in an attempt to reduce the damage to the teeth and to prevent jaw pain. Bruxism is thought to be one of the contributing factors in "meth mouth", a condition potentially associated with long-term methamphetamine use.
Biology and health sciences
Mental disorders
Health
155140
https://en.wikipedia.org/wiki/Hair%20removal
Hair removal
Hair removal is the deliberate removal of body hair or head hair. This process is also known as epilation or depilation. Hair is a common feature of the human body, exhibiting considerable variation in thickness and length across different populations. Hair becomes more visible during and after puberty. Additionally, men typically exhibit thicker and more conspicuous body hair than women. Both males and females have visible body hair on the head, eyebrows, eyelashes, armpits, genital area, arms, and legs. Males and some females may also have thicker hair growth on their face, abdomen, back, buttocks, anus, areola, chest, nostrils, and ears. Hair does not generally grow on the lips, the back of the ear, the underside of the hands or feet, or on certain areas of the genitalia. Hair removal may be practiced for cultural, aesthetic, hygienic, sexual, medical, or religious reasons. Forms of hair removal have been practiced in almost all human cultures since at least the Neolithic era. The methods used to remove hair have varied in different times and regions. The term "depilation" is derived from the Medieval Latin "depilatio," which in turn is derived from the Latin "depilare," a word formed from the prefix "de-" and the root "pilus," meaning "hair."
History
Hair removal has long shaped gender roles, served to signify social status, and defined notions of femininity and the ideal "body image". In early periods, hair was mostly removed as a way of keeping the body clean, using flint, seashells, beeswax and various other depilatory utensils and exfoliating substances, some of them highly questionable and highly caustic. Ancient Rome also associated hair removal with status: a person with smooth skin was associated with purity and superiority, and body hair was removed by both men and women. Psilothrum (or psilotrum) and dropax were depilatories used in ancient Greece and Rome. In Ancient Egypt, besides being a fashion statement for affluent Egyptians of all genders, hair removal served as a treatment for louse infestation, which was a prevalent issue in the region. Very often, they would replace the removed head hair with a Nubian wig, which was seen as easier to maintain and also fashionable. Ancient Egyptian priests also shaved or depilated all over daily, so as to present a "pure" body before the images of the gods. In ancient times, one highly abrasive depilatory paste consisted of an admixture of slaked lime, water, wood-ash and yellow orpiment (arsenic trisulfide); in rural India and Iran, where this mixture is called vajibt, it is still commonly used to remove pubic hair. In other cultures, oil extracted from unripe olives (which had not reached one-third of their natural stage of ripeness) was used to remove body hair. During the medieval period, Catholic women were expected to let their hair grow long as a display of femininity, whilst keeping the hair concealed by wearing a wimple headdress in public places. The face was the only area where hair growth was considered unsightly; 14th-century ladies would also pluck hair from their foreheads to raise the hairline and give their faces a more oval form. In the mid-16th century, when Queen Elizabeth I came to power, she is said to have made eyebrow removal fashionable. In the 18th century, body hair removal was still considered unnecessary by European and American women.
But in 1760, when the first safety straight razor appeared, allowing men to safely shave their beards without inadvertently cutting themselves, some women allegedly used this razor too. It was invented in Paris by a French master cutler, the author of La pogonotomie, ou L'art d'apprendre à se raser soi-même (Pogonotomy, or The Art of Learning to Shave). It was not until the late 19th century that women in Europe and America started to make hair removal a component of their personal care regime. According to Rebecca Herzig, the modern-day notion of body hair being unwomanly can be traced back to Charles Darwin's book "The Descent of Man and Selection in Relation to Sex", first published in 1871. Darwin's theory of natural selection associated body hair with "primitive ancestry and an atavistic return to earlier less developed forms", writes Herzig, a professor of gender and sexuality studies at Bates College in Maine. Darwin also suggested that having less body hair was an indication of being more evolved and sexually attractive. As Darwin's ideas spread, other 19th-century medical and scientific experts started to link hairiness to "sexual inversion, disease pathology, lunacy, and criminal violence". Those connotations were mostly applied to women's and not men's body hair. By the early 20th century, upper- and middle-class white Americans increasingly saw smooth skin as a marker of femininity, and female body hair as repulsive, with hair removal giving "a way to separate oneself from cruder people, lower class and immigrant". Harper's Bazaar, in 1915, was the first women's fashion magazine to run a campaign devoted to the removal of underarm hair as "a necessity". Shortly after, Gillette launched the first safety razor marketed specifically for women, the "Milady Décolleté Gillette", advertised as one that solves "an embarrassing personal problem" and keeps the underarm "white and smooth".
Cultural and sexual aspects
Body hair characteristics such as thickness and length vary across human populations: some people have less pronounced body hair and others have more conspicuous body hair characteristics. Each human culture has developed social norms relating to the presence or absence of body hair, and these have changed from one time to another. Different standards of human physical appearance and physical attractiveness can apply to females and males. People whose hair falls outside a culture's aesthetic body image standards may experience real or perceived social acceptance problems, psychological distress and social pressure. For example, for women in several societies, exposure in public of body hair other than head hair, eyelashes and eyebrows is generally considered to be unaesthetic, unattractive and embarrassing. With the increased popularity in many countries of women wearing fashion clothing, sportswear and swimsuits during the 20th century, and the consequential exposure of parts of the body on which hair is commonly found, it has become popular for women to remove visible body hair, such as that on the legs and underarms, as well as hair resulting from hirsutism or hypertrichosis. In most of the Western world, for example, the vast majority of women regularly shave their legs and armpits, while roughly half also shave hair around the pelvic area that may become exposed (often termed the "bikini line").
In Western and Asian cultures, in contrast to most Middle Eastern cultures, a majority of men are accustomed to shaving their facial hair, so only a minority of men wear a beard, even though fast-growing facial hair must be shaved daily to achieve a clean-shaven or beardless appearance. Some men shave because they cannot genetically grow a "full" beard (generally defined as an even density from cheeks to neck), because their beard color is genetically different from their scalp hair color, or because their facial hair grows in many directions, making a groomed or contoured appearance difficult to achieve. Some men shave because their beard growth is excessive, unpleasant, or coarse, causing skin irritation. Some men grow a beard or moustache from time to time to change their appearance or visual style. Some men tonsure or shave their heads, either as a religious practice, a fashion statement, or because they find a shaved head preferable to the appearance of male pattern baldness, or in order to attain enhanced cooling of the skull – particularly for people suffering from hyperhidrosis. A much smaller number of Western women also shave their heads, often as a fashion or political statement. Some women also shave their heads for cultural or social reasons. In India, tradition required widows in some sections of society to shave their heads as part of being ostracized. The custom, now outlawed, is still infrequently encountered, mostly in rural areas. Society at large and the government are working to end the practice of ostracizing widows. In addition, it continues to be common practice for men to shave their heads prior to embarking on a pilgrimage. The unibrow is considered a sign of beauty and attractiveness for women in Oman and for both genders in Tajikistan, and is often emphasized with kohl. In Middle Eastern societies, regular trimming or removal of female and male underarm hair and pubic hair has been considered proper personal hygiene, necessitated by local customs, for many centuries. Young girls and unmarried women, however, are expected to retain their body hair until shortly before marriage, when the whole body is depilated from the neck down. In China, body hair has long been regarded as normal, and even today women are confronted with far less social pressure to remove body hair. The same attitude exists in other countries in Asia. While hair removal has become routine for many of the continent's younger women, trimming or removing pubic hair, for instance, is not as common or popular as in the Western world, where both women and men may trim or remove all their pubic hair for aesthetic or sexual reasons. This custom can be motivated by perceived gains in personal cleanliness or hygiene, heightened sensitivity during sexual activity, the desire for a more exposed appearance or visual appeal, or a wish to boost self-esteem when affected by excessive hair. In Korea, pubic hair has long been considered a sign of fertility and sexual health, and it was reported in the mid-2010s that some Korean women were undergoing pubic hair transplants to add extra hair, especially when affected by the condition of pubic atrichosis (or hypotrichosis), which is thought to affect a small percentage of Korean women. Unwanted or excessive hair is often removed in preparatory situations by both sexes, in order to avoid any perceived social stigma or prejudice.
For example, unwanted or excessive hair may be removed in preparation for an intimate encounter, or before visiting a public beach or swimming pool. Though traditionally in Western culture women remove body hair and men do not, some women choose not to remove hair from their bodies, either because they consider it unnecessary or as an act of rejection of social stigma, while some men remove or trim their body hair, a practice that is referred to in modern society as part of "manscaping" (a portmanteau expression for male-specific grooming).
Fashions
The term "glabrousness" has also been applied to human fashions, wherein some people participate in culturally motivated hair removal by depilation (removal of hair at the surface, as by shaving or dissolving) or epilation (removal of the entire hair, as by waxing or plucking). Although the appearance of secondary hair on parts of the human body commonly occurs during puberty, and is therefore often seen as a symbol of adulthood, removal of this and other hair may become fashionable in some cultures and subcultures. In many modern Western cultures, men are encouraged to shave their beards, and women are encouraged to remove hair growth in various areas. Commonly depilated areas for women are the underarms, the legs, and the pubic area. Some individuals depilate the forearms. In recent years, bodily depilation in men has increased in popularity among some subcultures of Western males. For men, the practice of depilating the pubic area is common, especially for aesthetic reasons. Most men use a razor to shave this area; as best practice, however, it is recommended to use a body trimmer to shorten the hair before shaving it off completely.
Cultural and other influences
In ancient Egypt, depilation was commonly practiced, with pumice and razors used to shave. In both Ancient Greece and Ancient Rome, the removal of body and pubic hair may have been practiced among both men and women. It is represented in some artistic depictions of male and female nudity, examples of which may be seen in red figure pottery and sculptures like the kouroi of Ancient Greece, in which both men and women were depicted without body or pubic hair. Emperor Augustus was said, by Suetonius, to have applied "hot nutshells" to his legs as a form of depilation. In the clothes free movement, the term "smoothie" refers to an individual who has removed their body hair. In the past, such practices were frowned upon and in some cases forbidden: violators could face exclusion from the club. Enthusiasts grouped together and formed societies of their own that catered to that fashion, and smoothies came to make up a significant proportion of visitors at some nudist venues. The first Smoothie club (TSC) was founded by a British couple in 1991. A Dutch branch was founded in 1993 in order to give the idea of a hairless body greater publicity in the Netherlands. Being a smoothie is described by supporters as exceptionally comfortable and liberating. The Smoothy-Club is also a branch of the World of the Nudest Nudist (WNN) and organizes nudist ship cruises and regular nudist events.
Other reasons
Religion
Head-shaving (tonsure) is a part of some Buddhist, Christian, Muslim, Jain and Hindu traditions. Buddhist and Christian monks generally undergo some form of tonsure during their induction into monastic life.
Within Amish society, tradition ordains that men stop shaving part of their facial hair upon marriage and grow a Shenandoah-style beard, which serves a significance similar to that of wearing a wedding ring; moustaches are rejected as they are regarded as martial (traditionally associated with the military). In Judaism (see Shaving in Judaism), there is no obligation for women to remove body hair or facial hair, though they may do so if they wish. However, in preparation for a woman's immersion in a ritual bath after concluding her days of purification (following her menstrual cycle), the custom of Jewish women is to shave off their pubic hair. During a mourning ritual, Jewish men are restricted by the Torah and Halakha to using scissors and are prohibited from using a razor blade to shave their beards or sideburns, and, by custom, neither men nor women may cut or shave their hair during the shiva period. The Baháʼí Faith recommends against complete and long-term head-shaving outside of medical purposes. This is not currently applied as a law, contingent upon a future decision by the Universal House of Justice, its highest governing body. Sikhs take an even stronger stance, opposing all forms of hair removal. One of the "Five Ks" of Sikhism is Kesh, meaning "hair". Baptized Sikhs are specifically instructed to keep their Kesh unshorn (the hair on the head and, for men, the beard) as a major tenet of the Sikh faith. To Sikhs, the maintenance and management of long hair is a manifestation of one's piety. The majority of Muslims believe that adult removal of pubic and axillary hair, as a hygienic measure, is religiously beneficial. Under Muslim law (Sharia), it is recommended to keep the beard. A Muslim may trim or cut hair on the head. In the 9th century, the use of chemical depilatories for women was introduced by Ziryab in Al-Andalus.
Medical
The body hair of surgical patients is often removed beforehand from the skin surrounding surgical sites. Shaving was the primary form of hair removal until reports in 1983 showed that it may lead to an increased risk of infection. Clippers are now the recommended pre-surgical hair removal method. A 2021 systematic review brought together evidence on different techniques for hair removal before surgery. This involved 25 studies with a total of 8919 participants. Using a razor probably increases the chance of developing a surgical site infection compared to using clippers or hair removal cream or not removing hair before surgery. Removing hair on the day of surgery rather than the day before may also slightly reduce the number of infections. Some people with trichiasis find it medically necessary to remove ingrown eyelashes. The shaving of hair has sometimes been used in attempts to eradicate lice or to minimize body odor due to the accumulation of odor-causing micro-organisms in hair. In extreme situations, people may need to remove all body hair to prevent or combat infestation by lice, fleas and other parasites. Such a practice was used, for example, in Ancient Egypt. It has been suggested that an increasing percentage of humans removing their pubic hair has led to reduced crab louse populations in some parts of the world.
In the military
A buzz cut or completely shaven haircut is common in military organizations where, among other reasons, it is considered to promote uniformity and neatness.
Most militaries have occupational safety and health policies that govern the hair length and hairstyles permitted; in the field and living in close-quarter environments where bathing and sanitation can be difficult, soldiers can be susceptible to parasite infestation such as head lice, that are more easily propagated with long and unkempt hair. It also requires less maintenance in the field and in adverse weather it dries more quickly. Short hair is also less likely to cause severe burns from flash flame exposure (as a result of flash fires from explosions) which can easily set hair alight. Short hair can also minimize interference with safety equipment and fittings attached to the head, such as combat helmets and NBC suits. Militaries may also require men to maintain clean-shaven faces as facial hair can prevent an air-tight seal between the face and military gas masks or other respiratory equipment, such as a pilot's oxygen mask, or full-face diving mask. The process of testing whether a mask adequately fits the face is known as a "respirator fit test". In many militaries, head-shaving (known as the induction cut) is mandatory for men when beginning their recruit training. However, even after the initial recruitment phase, when head-shaving is no longer required, many soldiers maintain a completely or partially shaven hairstyle (such as a "high and tight", "flattop" or "buzz cut") for personal convenience or neatness. Head-shaving is not required and is often not permitted for women in military service, although they must have their hair cut or tied to regulation length. For example, the shortest hair a female soldier can have in the U.S. Army is 1/4 inch from the scalp. In sport It is a common practice for professional footballers (soccer players) and road cyclists to remove leg hair for a number of reasons. In the case of a crash or tackle, the absence of the leg hair means the injuries (usually road rash or scarring) can be cleaned up more efficiently, and treatment is not impeded. Professional cyclists, as well as professional footballers, also receive regular leg massages, and the absence of hair reduces the friction and increases their comfort and effectiveness. Football players are also required to wear shin guards, and in case of a skin rash, the affected area can be treated more efficiently. It is also common for competitive swimmers to shave the hair off their legs, arms, and torsos (and even their whole bodies from the neckline down), to reduce drag and provide a heightened "feel" for the water by removing the exterior layer of skin along with the body hair. As punishment In some situations, people's hair is shaved as a punishment or a form of humiliation. After World War II, head-shaving was a common punishment in France, the Netherlands, and Norway for women who had collaborated with the Nazis during the occupation, and, in particular, for women who had sexual relations with an occupying soldier. In the United States, during the Vietnam War, conservative students would sometimes attack student radicals or "hippies" by shaving beards or cutting long hair. One notorious incident occurred at Stanford University, when unruly fraternity members grabbed Resistance founder (and student-body president) David Harris, cut off his long hair, and shaved his beard. During European witch-hunts of the Medieval and Early Modern periods, alleged witches were stripped naked and their entire body shaved to discover the so-called witches' marks. 
The discovery of witches' marks was then used as evidence in trials. Inmates have their heads shaved upon entry at certain prisons. Forms of hair removal and methods Depilation is the removal of the part of the hair above the surface of the skin. The most common form of depilation is shaving or trimming. Another option is the use of chemical depilatories, which work by breaking the disulfide bonds that link the protein chains that give hair its strength. Epilation is the removal of the entire hair, including the part below the skin. Methods include waxing, sugaring, epilators, lasers, threading, intense pulsed light or electrology. Hair is also sometimes removed by plucking with tweezers. Depilation methods "Depilation", or temporary removal of hair to the level of the skin, lasts several hours to several days and can be achieved by Shaving or trimming (manually or with electric shavers which can be used on pubic hair or body hair) Depilatories (creams or "shaving powders" which chemically dissolve hair) Friction (rough surfaces used to buff away hair) Epilation methods "Epilation", or removal of the entire hair from the root, lasts several days to several weeks and may be achieved by Tweezing (hairs are tweezed, or pulled out, with tweezers or with fingers) Waxing (a hot or cold layer is applied and then removed with porous strips) Sugaring (hair is removed by applying a sticky paste to the skin in the direction of hair growth and then peeling off with a porous strip) Threading (also called fatlah or khite in Arabic, or band in Persian) in which a twisted thread catches hairs as it is rolled across the skin Epilators (mechanical devices that rapidly grasp hairs and pull them out). Drugs that directly attack hair growth or inhibit the development of new hair cells. Hair growth will become less and less until it finally stops; normal depilation/epilation will be performed until that time. Hair growth will return to normal if use of product is discontinued. Products include the following: The pharmaceutical drug eflornithine hydrochloride (with the trade names Vaniqa and Follinil) inhibits the enzyme ornithine decarboxylase, preventing new hair cells from producing putrescine for stabilizing their DNA. Antiandrogens, including spironolactone, cyproterone acetate, flutamide, bicalutamide, and finasteride, can be used to reduce or eliminate unwanted body hair, such as in the treatment of hirsutism. Although effective for reducing body hair, antiandrogens have little effect on facial hair. However, slight effectiveness may be observed, such as some reduction in density/coverage and slower growth. Antiandrogens will also prevent further development of facial hair, despite only minimally affecting that which is already there. With the exception of 5α-reductase inhibitors such as finasteride and dutasteride, antiandrogens are contraindicated in men due to the risk of feminizing side effects such as gynecomastia as well as other adverse reactions (e.g., infertility), and are generally only used in women for cosmetic/hair-reduction purposes. Permanent hair removal Electrology has been practiced in the United States since 1875. It is approved by the FDA. This technique permanently destroys germ cells responsible for hair growth by way of the insertion of a fine probe into the hair follicle and the application of a current adjusted to each hair type and treatment area. Electrology is the only permanent hair removal method recognized by the FDA. 
Permanent hair reduction Laser hair removal (lasers and laser diodes): Laser hair removal technology became widespread in the US and many other countries from the 1990s onwards. It has been approved in the United States by the FDA since 1997. With this technology, light is directed at the hair and is absorbed by dark pigment, resulting in the destruction of the hair follicle. This hair removal method sometimes becomes permanent after several sessions. The number of sessions needed depends upon the amount and type of hair being removed. Intense pulsed light (IPL) This technology is becoming more common for at-home devices, many of which are advertised as "laser hair removal" but actually use IPL technology. Diode epilation (high energy LEDs but not laser diodes) Clinical comparisons of effectiveness A 2006 review article in the journal "Lasers in Medical Science" compared intense pulsed light (IPL) and both alexandrite and diode lasers. The review found no statistical difference in effectiveness, but a higher incidence of side effects with diode laser-based treatment. Hair reduction after 6 months was reported as 68.75% for alexandrite lasers, 71.71% for diode lasers, and 66.96% for IPL. Side effects were reported as 9.5% for alexandrite lasers, 28.9% for diode lasers, and 15.3% for IPL. All side effects were found to be temporary and even pigmentation changes returned to normal within 6 months. A 2006 meta-analysis of randomized controlled trials found that alexandrite and diode lasers caused 50% hair reduction for up to 6 months, while there was no evidence of hair reduction from intense pulsed light, neodymium-YAG or ruby lasers. Experimental or banned methods Photodynamic therapy for hair removal (experimental) X-ray hair removal is an efficient, and usually permanent, hair removal method, but also causes severe health problems, occasional disfigurement, and even death. It is illegal in the United States. Doubtful methods Many methods have been proposed or sold over the years without published clinical proof they can work as claimed. Electric tweezers Transdermal electrolysis Transcutaneous hair removal Microwave hair removal Foods and dietary supplements Non-prescription topical preparations (also called "hair inhibitors", "hair retardants", or "hair growth inhibitors") Advantages and disadvantages There are several disadvantages to many of these hair removal methods. Hair removal can cause issues: skin inflammation, minor burns, lesions, scarring, ingrown hairs, bumps, and infected hair follicles (folliculitis). Some removal methods are not permanent, can cause medical problems and permanent damage, or have very high costs. Some of these methods are still in the testing phase and have not been clinically proven. One issue that can be considered an advantage or a disadvantage depending upon an individual's viewpoint, is that removing hair has the effect of removing information about the individual's hair growth patterns due to genetic predisposition, illness, androgen levels (such as from pubertal hormonal imbalances or drug side effects), and/or gender status. In the hair follicle, stem cells reside in a discrete microenvironment called the bulge, located at the base of the part of the follicle that is established during morphogenesis but does not degenerate during the hair cycle. The bulge contains multipotent stem cells that can be recruited during wound healing to help repair the epidermis.
Biology and health sciences
Hygiene and grooming: General
Health
155154
https://en.wikipedia.org/wiki/Shaving
Shaving
Shaving is the removal of hair by using a razor or another kind of bladed implement to slice it down to the level of the skin or otherwise. Shaving is most commonly practiced by men to remove their facial hair and by women to remove their leg and underarm hair. A man is called clean-shaven if he has had his beard entirely removed. Both men and women sometimes shave their chest hair, abdominal hair, leg hair, underarm hair, pubic hair, or any other body hair. Head shaving is much more common among men. It is often associated with religious practice, the armed forces, and some competitive sports such as swimming, bodybuilding, and extreme sports. Historically, head shaving has also been used to humiliate or punish, for purification, or to show submission to an authority. In more recent history, head shaving has been used in fund-raising efforts, particularly for cancer research organizations and charitable organizations which serve cancer patients. Head hair is also sometimes shaved by cancer patients whose treatment may result in hair loss, and by people experiencing male pattern baldness. History Before the advent of razors, hair was sometimes removed using two shells to pull the hair out or using water and a sharp tool. Around 3000 BC, when copper tools were developed, copper razors were invented. The idea of an aesthetic approach to personal hygiene may have begun at this time, though Egyptian priests may have practiced something similar even earlier. Alexander the Great strongly promoted shaving of the beard for Macedonian soldiers before battle because he feared the enemy would grab the soldiers by their beards. In some Native American tribes, at the time of contact with British colonists, it was customary for men and women to remove all body hair. Straight razors have been manufactured in Sheffield, England, since the 18th century. In the United States, getting a straight razor shave in a barbershop and self-shaving with a straight razor were still common in the early 1900s. The popularisation of self-shaving changed this. According to an estimate by New York City barber Charles de Zemler, the share of barbers' revenue derived from shaving dropped from about 50 percent around the time of the Spanish–American War to 10 percent in 1939 due to the invention of the safety razor and electric razor. Safety razors have existed since at least 1876, when the single-edge Star safety razor was patented by brothers Frederick and Otto Kampfe. The razor was essentially a small piece of a straight razor attached to a handle using a clamp mechanism. Before each shave the blade had to be attached to a special holder, stropped with a leather belt, and placed back into the razor. After a time, the blade needed to be honed by a cutler. In 1895, King Camp Gillette invented the double-edged safety razor, with cheap disposable blades sharpened from two sides. It took him until 1901 to build a working, patentable model, and commercial production began in 1903. The razor gained popularity during World War I, when the U.S. military started issuing Gillette shaving kits to its servicemen: in 1918, the Gillette Safety Razor Company sold 3.5 million razors and 32 million blades. After the First World War, the company changed the pricing of its razor from a premium $5 to a more affordable $1, leading to another big surge in popularity. The Second World War led to a similar increase in users when Gillette was ordered to dedicate its entire razor production and most blade production to the U.S. military. 
During the war, 12.5 million razors and 1.5 billion blades were provided to servicemen. In 1970, Wilkinson Sword introduced the 'bonded blade' razor, which consisted of a single blade housed in a plastic cartridge. Gillette followed in 1971 with its Trac II cartridge razor that utilised two blades. Gillette built on this twin blade design for a time, introducing new razors with added features such as a pivoting head, lubricating strip, and spring-mounted blades until their 1998 launch of the triple-bladed Mach3 razor. Schick launched a four-blade Quattro razor later the same year, and in 2006 Gillette launched the five-blade Fusion. Since then, razors with six and seven blades have been introduced. Wholly disposable razors gained popularity in the 1970s after Bic brought the first disposable razor to market in 1974. Other manufacturers, Gillette included, soon introduced their own disposable razors, and by 1980 disposables made up more than 27 percent of worldwide unit sales for razors. Shaving methods Shaving can be done with a straight razor or safety razor (called 'manual shaving' or 'wet shaving') or an electric razor (called 'dry shaving') or beard trimmer. The removal of a full beard often requires the use of scissors or an electric (or beard) trimmer to reduce the mass of hair, simplifying the process. Wet shaving There are two types of manual razors: straight razor and safety razors. Safety razors are further subdivided into double-edged razors, single edge, injector razors, cartridge razors and disposable razors. Double-edge razors are named so because the blade that they use has two sharp edges on opposite sides of the blade. Current multi-bladed cartridge manufacturers attempt to differentiate themselves by having more or fewer blades than their competitors, each arguing that their product gives a greater shave quality at a more affordable price. Before wet shaving, the area to be shaved is usually doused in warm to hot water by showering or bathing or covered for several minutes with a hot wet towel to soften the skin and hair. Dry hair is difficult to cut, and the required cutting force is reduced significantly once the hair is hydrated. Fully hydrated hair requires about 65% less force to cut, and hair is almost fully hydrated after two minutes of contact with room temperature water. The time required for hydration is reduced when using higher temperature water. A lathering or lubricating agent such as cream, shaving soap, gel, foam or oil is normally applied after this. Lubricating and moisturizing the skin to be shaved helps prevent irritation and damage known as razor burn. Many razor cartridges include a lubricating strip, made of polyethylene glycol, to function instead of or in supplement to extrinsic agents. It also lifts and softens the hairs, causing them to swell. This enhances the cutting action and sometimes permits cutting the hairs slightly below the surface of the skin. Additionally, during shaving, the lather indicates areas that have not been addressed. When soap is used, it is generally applied with a shaving brush, which has long, soft bristles. It is worked up into a usable lather by the brush, either against the face, in a shaving mug, bowl, scuttle, or palm of the hand. Since cuts are more likely when using safety razors and straight razors, wet shaving is generally done in more than one pass with the blade. The goal is to reduce the amount of hair with each pass, instead of trying to eliminate all of it in a single pass. 
This also reduces the risks of cuts, soreness, and ingrown hairs. Alum blocks and styptic pencils are used to close cuts resulting from the shave. Aftershave An aftershave lotion or balm is sometimes used after finishing shaving. It may contain an antiseptic agent such as isopropyl alcohol, both to prevent infection from cuts and to act as an astringent to reduce skin irritation, a perfume, and a moisturizer to soften the facial skin. Electric shaving The electric shaver (electric razor) consists of a set of oscillating or rotating blades, which are held behind a perforated metal screen which prevents them from coming into contact with the skin and behaves much like the second blade in a pair of scissors. When the razor is held against the skin, the whiskers poke through the holes in the screen and are sliced by the moving blades. In some designs the blades are a rotating cylinder. In others they are one or more rotating disks or a set of oscillating blades. Each design has an optimum motion over the skin for the best shave and manufacturers provide guidance on this. Generally, circular or cylindrical blades (rotary-type shaver) move in a circular motion and oscillating blades (foil-type shaver) move left and right. Hitachi has produced foil-type shavers with a rotary blade that operates similarly to the blade assembly of a reel-type lawn mower. The first electric razor was built by Jacob Schick in 1928. The main disadvantages of electric shaving are that it may not cut the whiskers as closely as razor shaving does and it requires a source of electricity, usually a rechargeable battery. The advantages include fewer cuts to the skin, quicker shaving, and no need for water and lather sources (a wet shave). The initial cost of electric shaving is higher, due to the cost of the shaver itself, but the long-term cost can be significantly lower, since the cutting parts do not need replacement for several months and a lathering product is not required. Some people also find they do not experience ingrown hairs (pseudofolliculitis barbae, also called razor bumps), when using an electric shaver. In contrast to wet shaving, electric shave lotions are intended to stiffen the whiskers. Stiffening is achieved by dehydrating the follicles using solutions of alcohols and a degreaser such as isopropyl myristate. Lotions are also sold to reduce skin irritation, but electric shaving does not usually require the application of any lubrication. This is called Dry Shaving. Mechanical shavers powered by a spring motor have been manufactured, although in the late 20th century they became rare. Such shavers can operate for up to two minutes each time the spring is wound and do not require an electrical outlet or batteries. Such type of shaver, the "Monaco" brand, was used on American space flights in the 1960s and 1970s, during the Apollo missions. Trimmer A trimmer has two adjacent blades, each with teeth on its cutting edge. One blade oscillates alongside a stationary blade so that the teeth cut any hair that falls between them. The main advantage of a trimmer, unlike shaving tools, is that longer beards can be trimmed to a short length efficiently and effectively, including as preparation for shaving. Effects of shaving Aberrations Shaving can have numerous side effects, including cuts, abrasions, and irritation. Many side effects can be minimized by using a fresh blade, applying plenty of lubrication, shaving in the direction of hair growth, and avoiding pressing the razor into the skin. 
A shaving brush can also help to lift the hair and spread the lubrication. The cosmetic market in some consumer economies offers many products to reduce these effects; they commonly dry the affected area, and some also help to lift out the trapped hair(s). Some people who shave choose to use only single-blade or wire-wrapped blades that shave farther away from the skin. Others have skin that cannot tolerate razor shaving at all; they use depilatory shaving powders to dissolve hair above the skin's surface, or grow a beard. Some anatomical parts, such as the scrotum, require extra care and more advanced equipment due to the uneven surface of the skin when the testicles shrivel during coldness, or its imbalance when the testicles hang low due to being warmer. Cuts Cuts from shaving can bleed for about fifteen minutes. Shaving cuts can be caused by blade movement perpendicular to the blade's cutting axis or by regular / orthogonal shaving over prominent bumps on the skin (which the blade incises). As such, the presence of acne can make shaving cuts more likely, and extra care must be exercised. The use of a fresh, sharp blade as well as proper cleaning and lubrication of skin can help prevent cuts. Some razor blade manufacturers include disposal containers or receptacles to avoid injuries to anyone handling the garbage. Razor burn Razor burn is an irritation of the skin caused by using a blunt blade or not using proper technique. It appears as a mild rash 2–4 minutes after shaving (once hair starts to grow through sealed skin) and usually disappears after a few hours to a few days, depending on severity. In severe cases, razor burn can also be accompanied by razor bumps, where the area around shaved hairs get raised red welts or infected pustules. A rash at the time of shaving is usually a sign of lack of lubrication. Razor burn is a common problem, especially among those who shave coarse hairs on areas with sensitive skin like the bikini line, pubic hair, underarms, chest, and beard. The condition can be caused by shaving too closely, shaving with a blunt blade, dry shaving, applying too much pressure when shaving, shaving too quickly or roughly, or shaving against the grain. Ways to prevent razor burn include keeping the skin moist, using a shaving brush and lather, using a moisturizing shaving gel, shaving in the direction of the hair growth, resisting the urge to shave too closely, applying minimal pressure, avoiding scratching or irritation after shaving, avoiding irritating products on the shaved area (colognes, perfumes, etc.) and using an aftershave cream with aloe vera or other emollients. Putting a warm, wet cloth on one's skin helps as well, by softening hairs. This can also be done by using pre-shave oil before the application of shaving cream. Essential oils such as coconut oil, tea-tree oil, peppermint oil, and lavender oil help to soothe skin after shaving. They have anti-inflammatory, antiseptic, and antibacterial properties. In some cases multi-bladed razors can cause skin irritation by shaving too close to the skin. Switching to a single- or double-bladed razor and not stretching the skin while shaving can mitigate this. One other technique involves exfoliating the skin before and after shaving, using various exfoliating products, included but not limited to, brushes, mitts, and loofah. 
This process removes dead skin cells, reducing the potential for ingrown hairs and allowing the razor to glide across the skin smoothly, decreasing the risk of the razor snagging or grabbing and causing razor burn. Pseudofolliculitis barbae is the medical term for persistent inflammation caused by shaving; it is also known by the initials PFB or by colloquial terms such as razor bumps. Myths Shaving does not cause terminal hair to grow back thicker, coarser or darker. This belief arose because hair that has never been cut has a naturally tapered end as it emerges from the skin's hair follicle, whereas, after cutting, there is no taper. The cut hair may thus appear to be thicker, and feel coarser as a result of the sharp edges on each cut strand. The fact that shorter hairs are "harder" (less flexible) than longer hairs also contributes to this effect. Hair can also appear darker after it grows back because hair that has never been cut is often lighter from sun exposure. In addition, as humans grow older, hair tends to grow coarser and in more places on the face and body. For example, teenagers may start shaving their face or legs at around 16, but as they age, hair grows more abundantly and thicker, leading some to believe that this was caused by the shaving, when in reality it is simply part of the maturation process. Shaving in religion Hinduism, Buddhism, Jainism and Christianity Hindu, Jain and Buddhist (usually only monks or nuns) temples have ceremonies of shaving the hair from the scalp of priests, nuns, and certain followers, as a symbol of their renunciation of worldly fashion and esteem. Amish men and some other plain peoples shave their beard until they are married, after which they allow it to grow but continue to shave their mustaches. Tonsure is the practice of some Christian churches. In Hinduism, in certain communities, a child's birth hair is shaved off as part of a set of religious rites (samskaras). Islam Sunni The leading classical Islāmic jurist and theologian Abdullāh b. Abī Zayd says in his 'Risalah': "and the Prophet ordered that the beard be left alone and allowed to grow abundantly and that it not be trimmed. Malik said: 'And there is no objection in trimming from its length when it becomes very long.' And what Malik said, more than one of the Companions and the Successors also said." Muslim jurists have unanimously agreed that, during pilgrimage, shaving the entire head is preferable, with cutting the hair short accepted to a lesser degree. It is reported that Muhammad shaved his entire head, and he prayed for those who shaved their heads or cut their hair. Islām also teaches followers to shave or pluck body hair such as pubic and armpit hair on a regular basis (at least every forty days). Shī'a and Sunnī narrations from the Prophet state: "God's Prophet (May God bless him) said: 'Anyone who believes in God and the Hereafter should not postpone shaving the pubic hair for more than forty days.'" Shia According to Shia scholars, the length of the beard should not exceed the width of a fist. Trimming of facial hair is allowed; however, shaving it is Haram (forbidden). Judaism Observant Jewish men are subject to restrictions on the shaving of their beards, as Leviticus 19:27 forbids the shaving of the corners of the head and prohibits the marring of the corners of the beard. The Hebrew word used in this verse refers specifically to shaving with a blade against the skin; rabbis at different times and places have interpreted it in many ways. 
Tools like scissors and electric razors, which cut the hair between two blades instead of between blade and skin, are permitted. Sikhism Observant Sikhs also follow the practice of keeping their hair uncut.
Biology and health sciences
Health and fitness
null
155176
https://en.wikipedia.org/wiki/Shrew
Shrew
Shrews (family Soricidae) are small mole-like mammals classified in the order Eulipotyphla. True shrews are not to be confused with treeshrews, otter shrews, elephant shrews, West Indies shrews, or marsupial shrews, which belong to different families or orders. Although its external appearance is generally that of a long-nosed mouse, a shrew is not a rodent, as mice are. It is, in fact, a much closer relative of hedgehogs and moles; shrews are related to rodents only in that both belong to the Boreoeutheria magnorder. Shrews have sharp, spike-like teeth, whereas rodents have gnawing front incisor teeth. Shrews are distributed almost worldwide. Among the major tropical and temperate land masses, only New Guinea, Australia, New Zealand, and South America have no native shrews. However, as a result of the Great American Interchange, South America does have a relatively recently naturalised population, present only in the northern Andes. The shrew family has 385 known species, making it the fourth-most species-diverse mammal family. The only mammal families with more species are the muroid rodent families (Muridae and Cricetidae) and the bat family Vespertilionidae. Characteristics All shrews are tiny, most no larger than a mouse. The largest species is the Asian house shrew (Suncus murinus) of tropical Asia, while the Etruscan shrew (Suncus etruscus) is the smallest known living terrestrial mammal. In general, shrews are terrestrial creatures that forage for seeds, insects, nuts, worms, and a variety of other foods in leaf litter and dense vegetation such as grass, but some specialise in climbing trees, living underground, living under snow, or even hunting in water. They have small eyes and generally poor vision, but have excellent senses of hearing and smell. They are very active animals, with voracious appetites. Shrews have unusually high metabolic rates, above that expected in comparable small mammals. For this reason, they need to eat almost constantly, much like moles. Shrews in captivity can eat up to 2 times their own body weight in food daily. They do not hibernate, but some species are capable of entering torpor. In winter, many species undergo morphological changes that drastically reduce their body weight. Shrews can lose between 30% and 50% of their body weight, shrinking the size of bones, skull, and internal organs. Whereas rodents have gnawing incisors that grow throughout life, the teeth of shrews wear down throughout life, a problem made more extreme because they lose their milk teeth before birth and so have only one set of teeth throughout their lifetimes. In some species, exposed areas of the teeth are dark red due to the presence of iron in the tooth enamel. The iron reinforces the surfaces that are exposed to the most stress, which helps prolong the life of the teeth. This adaptation is not found in species with lower metabolism, which do not have to eat as much and therefore do not wear down the enamel to the same degree. The only other mammalian teeth with pigmented enamel are the incisors of rodents. Apart from the first pair of incisors, which are long and sharp, and the chewing molars at the back of the mouth, the teeth of shrews are small and peg-like, and may be reduced in number. Shrews are fiercely territorial, driving off rivals, and coming together only to mate. Many species dig burrows for catching food and hiding from predators, although this is not universal. 
Female shrews can have up to 10 litters a year; in the tropics, they breed all year round; in temperate zones, they cease breeding only in the winter. Shrews have gestation periods of 17–32 days. The female often becomes pregnant within a day or so of giving birth, and lactates during her pregnancy, weaning one litter as the next is born. Shrews live 12 to 30 months. A characteristic behaviour observed in many species of shrew is known as "caravanning". This is when a litter of young shrews form a line behind the mother, each gripping the shrew in front by the fur at the base of the tail. Shrews are unusual among mammals in a number of respects. Unlike most mammals, some species of shrews are venomous. Shrew venom is not conducted into the wound by fangs, but by grooves in the teeth. The venom contains various compounds, and the contents of the venom glands of the American short-tailed shrew are sufficient to kill 200 mice by intravenous injection. One chemical extracted from shrew venom may be potentially useful in the treatment of high blood pressure, while another compound may be useful in the treatment of some neuromuscular diseases and migraines. The saliva of the northern short-tailed shrew (Blarina brevicauda) contains soricidin, a peptide which has been studied for use in treating ovarian cancer. Also, along with the bats and toothed whales, some species of shrews use echolocation. Unlike most other mammals, shrews lack zygomatic bones (also called the jugals), so have incomplete zygomatic arches. Echolocation The only terrestrial mammals known to echolocate are two genera (Sorex and Blarina) of shrews, the tenrecs of Madagascar, bats, and the solenodons. These include the Eurasian or common shrew (Sorex araneus) and the American vagrant shrew (Sorex vagrans) and northern short-tailed shrew (Blarina brevicauda). These shrews emit series of ultrasonic squeaks. By nature the shrew sounds, unlike those of bats, are low-amplitude, broadband, multiharmonic, and frequency modulated. They contain no "echolocation clicks" with reverberations and would seem to be used for simple, close-range spatial orientation. In contrast to bats, shrews use echolocation only to investigate their habitats rather than additionally to pinpoint food. Except for large and thus strongly reflecting objects, such as a big stone or tree trunk, they probably are not able to disentangle echo scenes, but rather derive information on habitat type from the overall call reverberations. This might be comparable to human hearing whether one calls into a beech forest or into a reverberant wine cellar. Classification The 385 shrew species are placed in 26 genera, which are grouped into three living subfamilies: Crocidurinae (white-toothed shrews), Myosoricinae (African shrews), and Soricinae (red-toothed shrews). In addition, the family contains the extinct subfamilies Limnoecinae, Crocidosoricinae, Allosoricinae, and Heterosoricinae (although Heterosoricinae is also commonly considered a separate family). Family Soricidae Subfamily Crocidurinae Crocidura Diplomesodon Feroculus Palawanosorex Paracrocidura Ruwenzorisorex Scutisorex Solisorex Suncus Sylvisorex Subfamily Myosoricinae Congosorex Myosorex Surdisorex Subfamily Soricinae Tribe Anourosoricini Anourosorex Tribe Blarinellini Blarinella Tribe Blarinini Blarina Cryptotis Tribe Nectogalini Chimarrogale Chodsigoa Episoriculus Nectogale Neomys †Asoriculus †Nesiotites Soriculus Tribe Notiosoricini Megasorex Notiosorex Tribe Soricini Sorex
Biology and health sciences
Soricomorpha
null
155202
https://en.wikipedia.org/wiki/Female%20ejaculation
Female ejaculation
Female ejaculation is characterized as an expulsion of fluid from the Skene's gland at the lower end of the urethra during or before an orgasm. It is also known colloquially as squirting or gushing, although research indicates that female ejaculation and squirting are different phenomena, squirting being attributed to a sudden expulsion of liquid that partly comes from the bladder and contains urine. Female ejaculation is physiologically distinct from coital incontinence, with which it is sometimes confused. There have been few studies on female ejaculation. A failure to adopt common definitions and research methodology by the scientific community has been the primary contributor to this lack of experimental data. Research has suffered from highly selected participants, narrow case studies, or very small sample sizes, and consequently has yet to produce significant results. Much of the research into the composition of the fluid focuses on determining whether it is, or contains, urine. It is common for any secretion that exits the vagina, and for fluid that exits the urethra, during sexual activity to be referred to as female ejaculate, which has led to significant confusion in the literature. Whether the fluid is secreted by the Skene's gland through and around the urethra has also been a topic of discussion; while the exact source and nature of the fluid remains controversial among medical professionals, and are related to doubts over the existence of the G-spot, there is substantial evidence that the Skene's gland is the source of female ejaculation. The function of female ejaculation, however, remains unclear. Reports In questionnaire surveys, 35–50% of women report that they have at some time experienced the gushing of fluid during orgasm. Other studies find anywhere from 10 to 69%, depending on the definitions and methods used. For instance Kratochvíl (1994) surveyed 200 women and found that 6% reported ejaculating, an additional 13% had some experience and about 60% reported release of fluid without actual gushing. Reports on the volume of fluid expelled vary considerably, starting from amounts that would be imperceptible to a woman, to mean values of 1–5 ml. The suggestion that women can expel fluid from their genital area as part of sexual arousal has been described by women's health writer Rebecca Chalker as "one of the most hotly debated questions in modern sexology". Female ejaculation has been discussed in anatomical, medical, and biological literature throughout recorded history. The reasons for the interest in female ejaculation have been questioned by feminist writers. Western literature 16th to 18th century In the 16th century, the Dutch physician Laevinius Lemnius, referred to how a woman "draws forth the man's seed and casts her own with it". In the 17th century, François Mauriceau described glands at the female urethral meatus that "pour out great quantities of saline liquor during coition, which increases the heat and enjoyment of women". This century saw an increasing understanding of female sexual anatomy and function, in particular the work of the Bartholin family in Denmark. De Graaf In the 17th century, the Dutch anatomist Reinier de Graaf wrote an influential treatise on the reproductive organs Concerning the Generative Organs of Women which is much cited in the literature on this topic. De Graaf discussed the original controversy but supported the Aristotelian view. He identified the source as the glandular structures and ducts surrounding the urethra. 
He identified [XIII:212] the various controversies regarding the ejaculate and its origin, but stated he believed that this fluid "which rushes out with such impetus during venereal combat or libidinous imagining" was derived from a number of sources, including the vagina, urinary tract, cervix and uterus. He appears to identify Skene's ducts, when he writes [XIII:213] "those [ducts] which are visible around the orifice of the neck of the vagina and the outlet of the urinary passage receive their fluid from the female 'parastatae', or rather the thick membranous body around the urinary passage." However he appears not to distinguish between the lubrication of the perineum during arousal and an orgasmic ejaculate when he refers to liquid "which in libidinous women often rushes out at the mere sight of a handsome man." Further on [XIII:214] he refers to "liquid as usually comes from the pudenda in one gush." However, his prime purpose was to distinguish between generative fluid and pleasurable fluid, in his stand on the Aristotelian semen controversy. 19th century Krafft-Ebing's study of sexual perversion, Psychopathia Sexualis (1886), describes female ejaculation under the heading "Congenital Sexual Inversion in Women" as a perversion related to neurasthenia and homosexuality. It is also described by Freud in pathological terms in his study of Dora (1905), where he relates it to hysteria. However, women's writing of that time portrayed this in more positive terms. Thus we find Almeda Sperry writing to Emma Goldman in 1918, about the "rhythmic spurt of your love juices" (Falk, Love, Anarchy and Emma Goldman, Holt Rinehart, 1984, at 175; cited in Nestle, A Restricted Country, Cleis, 2003, at 163). Anatomical knowledge was also advanced by Alexander Skene's description of para-urethral or periurethral glands (glands around the urethra) in 1880, which have been variously claimed to be one source of the fluids in the ejaculate, and which are now commonly referred to as the Skene's glands. 20th century Early 20th-century understanding Female ejaculation is mentioned as normal in early 20th century 'marriage manuals', such as TH Van de Velde's Ideal Marriage: Its Physiology and Technique (1926). Certainly van de Velde was well aware of the varied experiences of women. In 1948, Huffman, an American gynaecologist, published his studies of the prostatic tissue in women together with a historical account and detailed drawings. These clearly showed the difference between the original glands identified by Skene at the urinary meatus, and the more proximal collections of glandular tissue emptying directly into the urethra. Most of the interest had focused on the substance and structure rather than the function of the glands. A more definitive contemporary account of ejaculation appeared shortly after, in 1950, with the publication of an essay by Gräfenberg based on his observations of women during orgasm. However this paper made little impact, and the phenomenon was dismissed in the major sexological writings of that time, such as Kinsey (1953) and Masters and Johnson (1966), which equated this "erroneous belief" with urinary stress incontinence. Kinsey was clearly familiar with the phenomenon, commenting on it (p. 612), as were Masters and Johnson ten years later, who observed it (pp. 79–80) yet dismissed it (p. 135): "female ejaculation is an erroneous but widespread concept". Even twenty years later, in 1982, they repeated the statement that it was erroneous (pp. 69–70) and the result of "urinary stress incontinence". Late 20th-century awareness The topic did not receive serious attention again until a review by Josephine Lowndes Sevely and JW Bennett appeared in 1978. This latter paper, which traces the history of the controversies to that point, and a series of three papers in 1981 by Beverly Whipple and colleagues in the Journal of Sex Research, became the focal point of the current debate. Whipple became aware of the phenomenon when studying urinary incontinence, with which it is often confused. As Sevely and Bennett point out, this is "not new knowledge, but a rediscovery of lost awareness that should contribute towards reshaping our view of female sexuality". Nevertheless, the theory advanced by these authors was immediately dismissed by many other authors, such as physiologist Joseph Bohlen, for not being based on rigorous scientific procedures, and by psychiatrist Helen Singer Kaplan (1983). Some radical feminist writers, such as Sheila Jeffreys (1985), were also dismissive, claiming it to be a figment of male fantasy. It required the detailed anatomical work of Helen O'Connell from 1998 onwards to more properly elucidate the relationships between the different anatomical structures involved. As she observes, the female perineal urethra is embedded in the anterior vaginal wall and is surrounded by erectile tissue in all directions except posteriorly, where it relates to the vaginal wall. "The distal vagina, clitoris, and urethra form an integrated entity covered superficially by the vulval skin and its epithelial features. These parts have a shared vasculature and nerve supply and during sexual stimulation respond as a unit". Anthropological accounts Female ejaculation appears in 20th-century anthropological works, such as Malinowski's Melanesian study, The Sexual Life of Savages (1929), and Gladwin and Sarason's "Truk: Man in Paradise" (1956). Malinowski states that in the language of the Trobriand Island people, a single word is used to describe ejaculation in both male and female. In describing sexual relations amongst the Chuukese Micronesians, Gladwin and Sarason state that "Female orgasm is commonly signalled by urination" (p. 205). The literature provides a number of examples from other cultures, including the Ugandan Batoro, Mohave Indians, Mangaians, and Ponapese.
Biology and health sciences
Human anatomy
Health
155282
https://en.wikipedia.org/wiki/Car%20bomb
Car bomb
A car bomb, bus bomb, van bomb, lorry bomb, or truck bomb, also known as a vehicle-borne improvised explosive device (VBIED), is an improvised explosive device designed to be detonated in an automobile or other vehicles. Car bombs can be roughly divided into two main categories: those used primarily to kill the occupants of the vehicle (often as an assassination) and those used as a means to kill, injure or damage people and buildings outside the vehicle. The latter type may be parked (the vehicle disguising the bomb and allowing the bomber to get away), or the vehicle might be used to deliver the bomb (often as part of a suicide bombing). It is commonly used as a weapon of terrorism or guerrilla warfare to kill people near the blast site or to damage buildings or other property. Car bombs act as their own delivery mechanisms and can carry a relatively large amount of explosives without attracting suspicion. In larger vehicles and trucks, very large quantities of explosive have been used, for example, in the Oklahoma City bombing. Car bombs are activated in a variety of ways, including opening the vehicle's doors, starting the engine, remote detonation, depressing the accelerator or brake pedals, or simply lighting a fuse or setting a timing device. The gasoline in the vehicle's fuel tank may make the explosion of the bomb more powerful by dispersing and igniting the fuel. History Mario Buda's improvised wagon used in the 1920 Wall Street bombing is considered a prototype of the car bomb. The first non-suicide car bombing "fully conceptualized as a weapon of urban warfare" came on January 12, 1947, when Lehi (also known as the Stern Gang), a Zionist paramilitary organization, bombed the Haifa police station. In the fall of 2005, there were 140 car bombings per month. Car bombs were preceded by the 16th-century hellburners, explosive-laden ships which were used to deadly effect by the besieged Dutch forces in Antwerp against the besieging Spanish. Though using a less refined technology, the basic principle of the hellburner is similar to that of the car bomb. Precursors of the car bomb were carts drawn by animals such as horses and cows; the concept eventually migrated to the automobile. The first reported suicide car bombing (and possibly the first suicide bombing) was the Bath School bombings of 1927, in which 45 people, including the bomber, were killed and half of a school was destroyed. Mass-casualty suicide car bombings are predominantly associated with the Middle East, particularly in recent decades. A notable suicide car bombing was the 1983 Beirut barracks bombing, when two simultaneous attacks killed 241 U.S. and 58 French peacekeepers. The perpetrator of these attacks has never been positively confirmed. In the Lebanese Civil War, an estimated 3,641 car bombs were detonated. The tactic was adopted by Palestinian militant groups such as Hamas, Fatah and Islamic Jihad, especially during the Second Intifada (2000–2005). While not an adaptation of a people-carrying vehicle, the WW2 German Goliath remote-control mine shares many parallels with a vehicle-based IED. It approached a target (often a tank or another armoured vehicle) at some speed, and then exploded, destroying itself and the target. It was armoured so that it could not be destroyed en route. However, it was not driven by a person; instead, it was operated by remote control from a safe distance. 
Prior to the 20th century, bombs planted in horse carts had been used in assassination plots, notably in the unsuccessful "machine infernale" attempt to kill Napoleon on 24 December 1800. The first car bomb may have been the one used for the assassination attempt on Ottoman Sultan Abdul Hamid II in 1905 in Istanbul by Armenian separatists in the command of Papken Siuni belonging to the Armenian Revolutionary Federation. Car bombing was a significant part of the Provisional Irish Republican Army (PIRA) campaign during The Troubles in Northern Ireland. Dáithí Ó Conaill is credited with introducing the car bomb to Northern Ireland. Car bombs were also used by Ulster loyalist groups (for example, by the UVF during the Dublin and Monaghan bombings). PIRA Chief of Staff Seán Mac Stíofáin defines the car bomb as both a tactical and a strategic guerrilla warfare weapon. Strategically, it disrupts the ability of the enemy government to administer the country, and hits simultaneously at the core of its economic structure by means of massive destruction. From a tactical point of view, it ties down a large number of security forces and troops around the main urban areas of the region in conflict. As a delivery system Car bombs are effective weapons as they are an easy way to transport a large number of explosives to the intended target. A car bomb also produces copious shrapnel, or flying debris, and secondary damage to bystanders and buildings. In recent years, car bombs have become widely used by suicide bombers. Countermeasures Defending against a car bomb involves keeping vehicles at a distance from vulnerable targets by using roadblocks and checkpoints, Jersey barriers, concrete blocks or bollards, metal barriers, or by hardening buildings to withstand an explosion. The entrance to Downing Street in London has been closed since 1991 in reaction to the Provisional Irish Republican Army campaign, preventing the general public from getting near Number 10. Where major public roads pass near buildings, road closures may be the only option (thus, for instance, in Washington, D.C. the portion of Pennsylvania Avenue immediately in front of the White House is closed to traffic). Historically these tactics have encouraged potential bombers to target "soft" or unprotected targets, such as markets. Suicide usage In the Iraqi and Syrian Civil War, the car bomb concept was modified so that it could be driven and detonated by a driver but armoured to withstand incoming fire. The vehicle would be driven to its target area, in a similar fashion to a kamikaze plane of WW2. These were known by the acronym SVBIED (from Suicide Vehicle Borne Improvised Explosive Device) or VBIEDs. This saw generally civilian cars with armour plating added, that would protect the car for as long as possible, so that it could reach its intended target. Cars were sometimes driven into enemy troop areas, or into incoming enemy columns. Most often, the SVBIEDs were used by ISIL against Government forces, but also used by Syrian rebels (FSA and allied militias, especially the Al-Nusra Front) against government troops. The vehicles have become more sophisticated, with armour plating on the vehicle, protected vision slits, armour plating over the wheels so they would withstand being shot at, and also in some cases, additional metal grating over the front of the vehicle designed to crush or destroy shaped charges such as those used on rocket propelled grenades. In some cases, trucks were also used as well as cars. 
They were sometimes used to start an assault. Generally, the vehicles had a large space that would contain very heavy explosives. In some cases, animal-drawn carts with improvised explosive devices have been used, generally drawn by either mules or horses. Tactically, a single vehicle may be used, or an initial "breakthrough" vehicle may be followed by another vehicle. While many car bombs are disguised as ordinary vehicles, some that are used against military forces have improvised vehicle armour attached to prevent the driver from being shot when attacking a fortified outpost. Operation Car bombs and detonators function in a variety of ways, and there are numerous variables in the operation and placement of the bomb within the vehicle. Earlier and less advanced car bombs were often wired to the car's ignition system, but this practice is now considered more laborious and less effective than other more recent methods, as it requires a greater amount of work for a system that can often be quite easily defused. While it is more common nowadays for car bombs to be fixed magnetically to the underside of the car, underneath the passenger or driver's seat, or inside the mudguard, detonators triggered by the opening of the vehicle door or by pressure applied to the brake or accelerator pedals are also used. Bombs fixed to the underside of the car more often than not make use of a device called a tilt fuse. A small tube made of glass or plastic, the tilt fuse is similar in operation to a mercury switch or medical tablet tube. One end of the fuse is filled with mercury, while the other open end is wired with the ends of an open circuit to an electrical firing system. When the tilt fuse moves or is jerked, the mercury flows to the top of the tube and closes the circuit. Thus, as the vehicle goes through the regular bumping and dipping that comes with driving over uneven terrain, the circuit is completed and the explosive is detonated. Car bombs are effective as booby traps because they also leave very little evidence: when an explosion happens, it is difficult for forensic investigators to find evidence because materials either disintegrate or become charred. As a safety mechanism to protect the bomber, the placer of the bomb may rig a timing device into the circuit so that it can only be activated after a certain time period, thereby ensuring the bomber will not accidentally trigger the bomb before getting clear of the blast radius. Although car bombs are currently covert weapons that cause a good deal of damage, it is feared that they could become bigger, more lethal weapons, such as trailer-sized devices producing far larger explosions. Examples 20th century 1920: The Wall Street bombing — It is suspected that Italian anarchist Mario Buda (a member of the "Galleanists") parked a horse-drawn wagon filled with explosives and shrapnel in the Financial District of New York City. The blast killed 38 and wounded 143. 1927: The Bath School disaster — Andrew Kehoe used a detonator to ignite dynamite and hundreds of pounds of pyrotol which he had secretly planted inside a school. As rescuers started gathering at the school, Kehoe drove up, stopped, and detonated a bomb inside his shrapnel-filled vehicle, killing himself and the school superintendent, and killing and injuring several others. 
In total, Kehoe killed 44 people and injured 58 making the Bath School bombing the deadliest act of mass murder in a school in U.S. history. It is possibly the first suicide car bombing in history. Militant group Lehi were the first group to use car bombs in the British Mandate for Palestine during the 1940s. The Viet Cong guerrillas used them throughout the Vietnam War in the 1960s and 1970s. The OAS used them at the end of the French rule in Algeria in 1961 and 1962. The Sicilian Mafia used them to assassinate independent magistrates starting in the 1960s and up to the early 1990s. The IRA used them frequently during its 1960s to 1990s campaign during the Troubles in Northern Ireland and England. The 1998 Omagh bombing by the Real IRA, an IRA splinter group, caused the most casualties in the Troubles from a single car bomb. Loyalist organisations in Northern Ireland of the 1960s and 1970s such as the Ulster Volunteer Force (UVF) and Ulster Defence Association used car bombs against civilians in both Northern Ireland and the Republic of Ireland. The 1974 UVF bombs in Dublin and Monaghan caused the most casualties in a single day during the Troubles. Palestinian writer Ghassan Kanafani was assassinated by a car bomb on 8 July 1972 with his 17-year-old niece Lamees Najim in Beirut by the Israeli Mossad. Former Chilean General Carlos Prats was killed by a car bomb on September 30, 1974, along with his wife. Freelance terrorist Carlos the Jackal claimed responsibility for three car bomb attacks on French newspapers accused of pro-Israeli bias during the 1970s. Cleveland mobster Danny Greene frequently used car bombs against his enemies, beginning in 1968. Afterwards, they also began to be used against Greene and his associates. The use of car bombs in Cleveland peaked in 1976, when 36 bombs exploded in the city, most of them car bombs, causing it to be nicknamed "Bomb City." Several people, including innocent bystanders, were killed or wounded. Greene himself was finally killed in a car bomb explosion himself, on October 6, 1977. Agents of the Chilean intelligence agency DINA were convicted of using car bombs to assassinate Orlando Letelier in 1976 and Carlos Prats in 1974, who were exiled opponents of dictator Augusto Pinochet. Letelier was killed in Sheridan Circle, in the heart of Embassy Row in Washington, D.C. The Tamil Tigers of Sri Lanka frequently made use of car bombs during that country's civil war in a campaign which lasted from 1976 until the group's defeat in 2009. From 1979 to early 1983, under the guise of the Front for the Liberation of Lebanon from Foreigners, Israel Defense Forces commanders Rafael Eitan, Avigdor Ben-Gal and Meir Dagan launched a campaign of bombings, including car, bicycle, and even donkey bombs. Initially conducted as a response to the killing of Israeli civilians at Nahariya. Largely indiscriminate in its targeting of those associated with the Palestine Liberation Organization in south, Lebanon, the FLLF attacks killed hundreds of Palestinians and Lebanese, mainly in Tyre, Lebanon, Sidon and the surrounding PLO run refugee camps. After 1981, as part of Ariel Sharon's policy of goading the PLO into committing more acts of terror, justifying a military response, FLLF attacks escalated in intensity and scope, spreading to Beirut and northern Lebanon by September. The FLLF even took credit for fictional attacks on the IDF to maintain its cover as a Lebanese organisation. 
Its most prominent attack, on October 1, 1981, in West Beirut, killed at least 50 and injured over 250 people. Seven other similar bombs were found and defused before they could explode. The German Red Army Faction occasionally used car bombs, such as in an unsuccessful attempt to attack a NATO school for officers in 1984. The Basque separatist group ETA attempted its first car bomb assassination in September 1985 and carried out at least 80 massive car bomb attacks in Spain during the last decade before putting its activities on hold in 2011. Constable Angela Taylor, who died on her way to collect lunch, was the sole fatality of the Russell Street bombing in Melbourne, Australia on 27 March 1986; 22 others were injured. On 23 November 1986, two members of the Armenian Revolutionary Federation carried out the Melbourne Turkish consulate bombing using a car bomb, which resulted in the death of one of the attackers. Suicide car bombs were used regularly against Israel during the conflict in Lebanon that began with the 1982 Lebanon War and lasted until Israel's withdrawal in 2000. The bombing campaign was waged by several groups, most prominently Hezbollah. In the 1980s, the Colombian drug lord Pablo Escobar used vehicle bombs extensively against government forces and population centers in Colombia and Latin America. The most notable car bombing attack was the 1989 DAS Building bombing, which killed 63 and injured about 1,000. Also, on July 4, 1989, a car bomb killed the governor of Antioquia, Antonio Roldán Betancur, and five others; a prominent member of Escobar's Medellin Cartel later confessed to the crime. During the Soviet–Afghan War of the 1980s, at a variety of training camps in the tribal areas of Pakistan, the Pakistani Inter-Services Intelligence (ISI), with the aid of the United States' Central Intelligence Agency (CIA) and Britain's MI6, trained mujahideen in the preparation of car bombs. Car bombs became a regular occurrence during the war, the Afghan civil conflicts which followed, and then during the U.S. invasion of Afghanistan from 2001 and the war in Afghanistan that ended in 2021. On 26 February 1993, Islamist terrorists led by Ramzi Yousef detonated a Ryder van filled with explosives in the parking garage of the World Trade Center in New York City. Yousef's plan had been to cause one of the towers to collapse into the other, destroying both and killing thousands of people. Although this was not achieved, six people were killed, 1,402 others injured, and extensive damage was caused. On 18 April 1993, a tanker containing 500 kilograms of explosives exploded near the mosque in Vitez, destroying the offices of the Bosnian War Presidency, killing at least six people and injuring 50 others. The ICTY accepted that this action was a piece of pure terrorism committed by elements within the Croat forces, as an attack on the Bosniak population of Stari Vitez, the Vitez old town. HVO members had tied a Bosniak male civilian from a concentration camp to the steering wheel and set the truck in motion towards the old town. On 20 October 1994, a Hamas-led bus bombing in Tel Aviv, Israel led to the deaths of 22 civilians and injured 50 others. At that time, it was the deadliest suicide bombing in Israeli history, and the first successful attack of its kind in Tel Aviv. The Quebec Biker War that lasted from 1994 to 2002 involved the use of car bombings, including one that killed a drug dealer and an 11-year-old boy on 9 August 1995. 
On 19 April 1995, Timothy McVeigh detonated a Ryder box truck filled with an explosive mixture of ammonium nitrate fertilizer and fuel oil (ANFO) in front of the Alfred P. Murrah Federal Building in Oklahoma City during the Oklahoma City bombing, killing 168 people, including 19 children who were in the building's daycare. On 25 June 1996, a truck bomb destroyed the Khobar Towers military complex in Saudi Arabia, killing 19 United States Air Force (USAF) personnel and injuring 372 persons of all nationalities. In the late 1990s and early 2000s, vehicular explosives were used by Chechen nationalists against targets in Russia. On 20 April 1999, Eric Harris and Dylan Klebold planned to use two car bombs as the last act of the Columbine High School massacre, apparently to murder first responders. Both failed to explode. 21st century On 2 December 2001, a Hamas assailant boarded a bus in Haifa, Israel, and detonated an explosive device he was carrying, killing 15 civilians. The Southeast Asia-based militant Islamist group Jemaah Islamiyah utilized car bombs in its campaigns during the early 2000s, the most prominent being the 2002 Bali bombings, which killed 202 people. Former Lebanese Prime Minister Rafic Hariri was assassinated by a car bomb on Valentine's Day 2005; 21 others were also killed. A car bomb which had misfired was discovered in Times Square, New York City on May 1, 2010. The bomb had been planted by Faisal Shahzad. Evidence suggests that the bombing was planned by the Pakistani Taliban. On 11 December 2010, a car bomb exploded in central Stockholm in Sweden, slightly injuring two bystanders. Twelve minutes later, an Iraqi-born Swedish citizen accidentally detonated six pipe bombs he was carrying, but only one exploded. The bomber was killed but there were no other casualties. It is believed that the attacks were the work of homegrown terrorists who were protesting Sweden's involvement in the war in Afghanistan and the publication in Sweden of cartoons depicting Muhammad. On 22 July 2011, in the Norway massacre, far-right extremist Anders Behring Breivik detonated a car bomb within the executive government quarter of Oslo, Norway, killing 8 people. In 2013, Afghan security forces intercepted a truck bomb deployed by the Haqqanis. It was reported to be the largest truck bomb ever built, with some 61,500 lbs of explosives, and it was ultimately defused. The bomb was over 10 times the size of the car bomb used on the Murrah Building in Oklahoma City. While the bomb was not detonated, it caused security changes throughout the region and the closure of the US Army base FOB Goode near Gardez. In June 2015, in Ramadi, Iraq, a vehicle-borne IED caused the collapse of an 8-story building during a battle between the Iraqi military and Daesh (ISIS). The Daesh truck bomb was detonated when it was fired upon with a rocket-propelled grenade. On 30 August 2016, Asia Ramazan Antar, a female Kurdish soldier of the YPJ, was killed during the Manbij offensive when ISIS suicide bombers drove cars filled with explosives towards the Kurdish front. On 16 October 2017, Maltese journalist and blogger Daphne Caruana Galizia died in a car bomb attack. On 25 December 2020, a car bomb was detonated in downtown Nashville, Tennessee, injuring at least 8 people and killing the perpetrator, Anthony Quinn Warner. On 14 November 2021, a car bomb exploded outside a women's hospital in Liverpool after a man detonated an IED suicide vest inside a taxi, killing him and severely injuring the driver. 
During the Russian invasion of Ukraine, Ukrainian partisans have made extensive use of vehicular bomb attacks on Russian and collaborationist officials in occupied areas, such as in the 2022 Crimean Bridge explosion. On 20 August 2022, Aleksandr Dugin's daughter, Darya Dugina, was killed in Bolshiye Vyazyomy, Moscow Oblast by a bomb placed on Dugin's car. In late February 2023, it was reported that the Russian Army had attempted to use an MT-LB filled with OFAB-100-120 aerial bombs and mine-clearing charges from the UR-77 vehicle against Ukrainian positions. On 18 June 2023, the Russian Army was documented using a T-55 tank filled with approximately 6 tons of high explosives against entrenched Ukrainian forces near Marinka, Donetsk Oblast, with the intent of clearing the trenches. On 10 September 2023, it was reported that Ukraine's 128th Mountain Assault Brigade had converted a captured T-62 tank into a VBIED filled with 1.5 tons of explosives and driven it against Russian positions in the Zaporizhzhia region. The tank hit a mine and exploded before it could reach the enemy positions. On 13 July 2024, Thomas Matthew Crooks attempted to assassinate Donald Trump, the former president of the United States, at a campaign rally near Butler, Pennsylvania. Crooks' attempt was unsuccessful and he was killed in the process. Following his death, investigators found explosive devices in the trunk of his car, suggesting he planned to set off an explosion remotely as a possible distraction. Groups that use car bombs West Asia Hezbollah member Imad Mughniyah was assassinated by a car bomb in Syria in 2008, allegedly by Mossad. Various Palestinian militant groups, such as Hamas, Islamic Jihad and Fatah, and various opposition Islamist groups have used car bombs against both military and civilian Israeli targets. Al-Qaeda has used them in attacks around the world since the 1990s, most notably the 1998 United States embassy bombings. During the U.S.-led war in Afghanistan, the Taliban often employed vehicular explosives against enemy targets, including not only cars and trucks but even bicycle bombs. Besides Mossad, other organizations associated with the Israeli government, including the Israel Defense Forces (IDF), have been known to use car bombs against rivals of the Israeli state. The Iraqi insurgency: an estimated 578 car bombs were detonated in Iraq between June 2003 and June 2006. The Islamic State has employed armored explosive-laden crossovers, full-sized pickup trucks, and SUVs as suicidal tactical units to breach enemy defensive fronts in Syria and Iraq; the use of armored tractors and haul trucks was also recorded over the course of the war. Americas Although it has never been officially acknowledged, the American CIA has occasionally been accused of being behind car bombings. One such attack was the failed assassination attempt on Grand Ayatollah Mohammad Hussein Fadlallah in the Beirut car bombing on 8 March 1985. Although there has been widespread speculation of CIA involvement, this has never been proven conclusively. The Juárez Cartel's armed wing, La Línea, used a car bomb to attack police officers in Ciudad Juárez, Mexico on 15 July 2010. The Sinaloa Cartel and the Gulf Cartel were blamed for using car bombs in Nuevo Laredo, Mexico on 24 April 2011 to "heat up" the turf of Los Zetas. Europe Dissident republicans in Northern Ireland have used car bombs over the last two decades, the deadliest attack being the Omagh bombing of 1998. The Security Service of Ukraine used a car bomb in the 2022 Crimean Bridge explosion. 
The Sicilian Mafia used a car bomb in the Via d'Amelio bombing. South Asia Militants and criminals in India occasionally utilize car bombs in attacks. These include Muslim, Sikh, Kashmiri and Naxalite militants, as well as rival politicians within the government and organized crime groups. A notable example was the 25 August 2003 Mumbai bombings, in which two car bombs killed 54 people. The attack was claimed by the Pakistani-backed Kashmiri separatist group Lashkar-e-Taiba. The Pakistani Taliban have occasionally used car bombs in their ongoing conflict with the government of Pakistan.
Technology
Explosive weapons
null
155350
https://en.wikipedia.org/wiki/Munsell%20color%20system
Munsell color system
In colorimetry, the Munsell color system is a color space that specifies colors based on three properties of color: hue (basic color), value (lightness), and chroma (color intensity). It was created by Albert H. Munsell in the first decade of the 20th century and adopted by the United States Department of Agriculture (USDA) as the official color system for soil research in the 1930s. Several earlier color order systems had placed colors into a three-dimensional color solid of one form or another, but Munsell was the first to separate hue, value, and chroma into perceptually uniform and independent dimensions, and he was the first to illustrate the colors systematically in three-dimensional space. Munsell's system, particularly the later renotations, is based on rigorous measurements of human subjects' visual responses to color, putting it on a firm experimental scientific basis. Because of this basis in human visual perception, Munsell's system has outlasted its contemporary color models, and though it has been superseded for some uses by models such as CIELAB (L*a*b*) and CIECAM02, it is still in wide use today. Explanation The system consists of three independent properties of color which can be represented cylindrically in three dimensions as an irregular color solid: hue, measured by degrees around horizontal circles; chroma, measured radially outward from the neutral (gray) vertical axis; and value, measured vertically on the core cylinder from 0 (black) to 10 (white). Munsell determined the spacing of colors along these dimensions by taking measurements of human visual responses. In each dimension, Munsell colors are as close to perceptually uniform as he could make them, which makes the resulting shape quite irregular. Hue Each horizontal circle Munsell divided into five principal hues: Red, Yellow, Green, Blue, and Purple, along with 5 intermediate hues (e.g., YR) halfway between adjacent principal hues. Each of these 10 steps, with the named hue given number 5, is then broken into 10 sub-steps, so that 100 hues are given integer values. In practice, color charts conventionally specify 40 hues, in increments of 2.5, progressing as for example 10R to 2.5YR. Two colors of equal value and chroma, on opposite sides of a hue circle, are complementary colors, and mix additively to the neutral gray of the same value. The diagram below shows 40 evenly spaced Munsell hues, with complements vertically aligned. Value Value, or lightness, varies vertically along the color solid, from black (value 0) at the bottom, to white (value 10) at the top. Neutral grays lie along the vertical axis between black and white. Several color solids before Munsell's plotted luminosity from black on the bottom to white on the top, with a gray gradient between them, but these systems neglected to keep perceptual lightness constant across horizontal slices. Instead, they plotted fully saturated yellow (light), and fully saturated blue and purple (dark) along the equator. Chroma Chroma, measured radially from the center of each slice, represents the "purity" of a color (related to saturation), with lower chroma being less pure (more washed out, as in pastels). Note that there is no intrinsic upper limit to chroma. Different areas of the color space have different maximal chroma coordinates. For instance, light yellow colors have considerably more potential chroma than light purples, due to the nature of the eye and the physics of color stimuli. 
This led to a wide range of possible chroma levels—up to the high 30s for some hue–value combinations (though it is difficult or impossible to make physical objects in colors of such high chromas, and they cannot be reproduced on current computer displays). Vivid solid colors are in the range of approximately 8. Specifying a color A color is fully specified by listing the three numbers for hue, value, and chroma in that order. For instance, a fairly saturated purple of medium lightness would be 5P 5/10, with 5P meaning the color in the middle of the purple hue band, 5/ meaning medium value (lightness), and /10 meaning a chroma of 10. An achromatic color is specified by the syntax "N V/", where V is the value. For example, a medium grey is specified by "N 5/". In computer processing, the Munsell colors are converted to a set of "HVC" numbers. The V and C are the same as the normal value and chroma. The H (hue) number is obtained by mapping the hue rings onto numbers between 0 and 100, where both 0 and 100 correspond to 10RP. As the Munsell books, including the 1943 renotation, only contain colors for some points in the Munsell space, it is non-trivial to specify an arbitrary color in Munsell space. Interpolation must be used to assign meanings to non-book colors such as "2.8Y 6.95/2.3", followed by an inversion of the fitted Munsell-to-xyY transform. The ASTM defined a method in 2008, but the method of Centore 2012 is known to work better. History and influence The idea of using a three-dimensional color solid to represent all colors was developed during the 18th and 19th centuries. Several different shapes for such a solid were proposed, including: a double triangular pyramid by Tobias Mayer in 1758, a single triangular pyramid by Johann Heinrich Lambert in 1772, a sphere by Philipp Otto Runge in 1810, a hemisphere by Michel Eugène Chevreul in 1839, a cone by Hermann von Helmholtz in 1860, a tilted cube by William Benson in 1868, and a slanted double cone by August Kirschmann in 1895. These systems became progressively more sophisticated, with Kirschmann's even recognizing the difference in value between bright colors of different hues. But all of them remained either purely theoretical or encountered practical problems in accommodating all colors. Furthermore, none was based on any rigorous scientific measurement of human vision; before Munsell, the relationship between hue, value, and chroma was not understood. Albert Munsell, an artist and professor of art at the Massachusetts Normal Art School (now Massachusetts College of Art and Design, or MassArt), wanted to create a "rational way to describe color" that would use decimal notation instead of color names (which he felt were "foolish" and "misleading"), and which he could use to teach his students about color. He first started work on the system in 1898 and published it in full form in A Color Notation in 1905. The original embodiment of the system (the 1905 Atlas) had some deficiencies as a physical representation of the theoretical system. These were improved significantly in the 1929 Munsell Book of Color and through an extensive series of experiments carried out by the Optical Society of America in the 1940s, resulting in the notations (sample definitions) for the modern Munsell Book of Color. 
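As a concrete illustration of the notation and of the HVC mapping described above under Specifying a color, the following is a minimal Python sketch. It is not part of any standard library: the function name and the assumption that the hue families are ordered R, YR, Y, GY, G, BG, B, PB, P, RP (so that 10RP maps to 100, treated as 0) are choices made only for this example.

HUE_FAMILIES = ["R", "YR", "Y", "GY", "G", "BG", "B", "PB", "P", "RP"]

def munsell_to_hvc(notation):
    """Return (H, V, C) for a chromatic color, or (None, V, 0.0) for a neutral."""
    hue_part, vc_part = notation.split()
    value_str, _, chroma_str = vc_part.partition("/")
    value = float(value_str)
    chroma = float(chroma_str) if chroma_str else 0.0
    if hue_part == "N":                       # achromatic notation such as "N 5/"
        return None, value, 0.0
    i = 0                                     # split "2.5YR" into step 2.5 and family "YR"
    while hue_part[i].isdigit() or hue_part[i] == ".":
        i += 1
    step = float(hue_part[:i])
    family = hue_part[i:]
    h = step + 10 * HUE_FAMILIES.index(family)
    return h % 100, value, chroma             # both 0 and 100 denote 10RP

print(munsell_to_hvc("5P 5/10"))    # (85.0, 5.0, 10.0)
print(munsell_to_hvc("2.5YR 6/8"))  # (12.5, 6.0, 8.0)
print(munsell_to_hvc("N 5/"))       # (None, 5.0, 0.0)

Converting in the opposite direction, or assigning coordinates to non-book colors, requires the interpolation against the renotation data mentioned above and is deliberately left out of this sketch.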
Though several replacements for the Munsell system have been invented, building on Munsell's foundational ideas—including the Optical Society of America's Uniform Color Scales and the International Commission on Illumination's CIELAB (L*a*b*) and CIECAM02 color models—the Munsell system is still widely used: among others, by ANSI to define skin and hair color for forensic pathology, by the USGS for matching soil color, in prosthodontics for selecting tooth color for dental restorations, and by breweries for matching beer color. The original Munsell color chart remains useful for comparing computer models of human color vision.
Physical sciences
Basics
Physics
155383
https://en.wikipedia.org/wiki/Liriodendron
Liriodendron
Liriodendron () is a genus of two species of characteristically large trees, deciduous over most of their populations, in the magnolia family (Magnoliaceae). These trees are widely known by the common name tulip tree or tuliptree for their large flowers superficially resembling tulips. It is sometimes referred to as tulip poplar or yellow poplar, and the wood simply as "poplar", although not closely related to the true poplars. Other common names include canoewood, saddle-leaf tree, and white wood. The two extant species are Liriodendron tulipifera, native to eastern North America, and Liriodendron chinense, native to China and Vietnam. Both species often grow to great size; the North American species may reach as much as in height. The North American species is commonly used horticulturally, the Chinese species is increasing in cultivation, and hybrids have been produced between these two allopatrically distributed species. Various extinct species of Liriodendron have been described from the fossil record. Description Liriodendron trees are easily recognized by their leaves, which are distinctive, having four lobes in most cases and a cross-cut notched or straight apex. Leaf size varies from 8–22 cm long and 6–25 cm wide. They are deciduous in the vast majority of cases for both species; however, each species has a semi-deciduous variety at the southern limit of its range in Florida and Yunnan respectively. The tulip tree is often a large tree, 18–60 m high and 60–120 cm in diameter. The stoutest well-authenticated Tulip tree was the Liberty Tree in Maryland which was in circumference. It died in 1999. The tree is known to reach the height of , in groves where they compete for sunlight, somewhat less if growing in an open field. Its trunk is usually columnar, with a long, branch-free bole forming a compact, rather than open, conical crown of slender branches. It has deep roots that spread widely. Leaves are slightly larger in L. chinense, compared to L. tulipifera, but with considerable overlap between the species; the petiole is 4–18 cm long. Leaves on young trees tend to be more deeply lobed and larger in size than those on mature trees. In autumn, the leaves turn yellow, or brown and yellow. Both species grow rapidly in rich, moist soils of temperate climates. They hybridize easily, producing L. x sinoamericanum cultivars. Flowers are 3–10 cm in diameter and have nine tepals — three green outer sepals and six inner petals which are yellow-green, with an orange flare at the base in L. tulipifera and L. x sinoamericanum. They start forming after around 15 years and are superficially similar to a tulip in shape, hence the tree's name. Flowers of L. tulipifera have a faint cucumber odor. The stamens and pistils are arranged spirally around a central spike or gynaecium; the stamens fall off, and the pistils become the samaras. The fruit is a cone-like aggregate of samaras 4–9 cm long, each of which has a roughly tetrahedral seed with one edge attached to the central conical spike and the other edge attached to the wing. Distribution Liriodendron trees are also easily recognized by their general shape, with the higher branches sweeping together in one direction, and they are also recognizable by their height, as the taller ones usually protrude above the canopy of oaks, maples, and other trees—more markedly with the American species. Appalachian cove forests often contain several tulip trees of height and girth not seen in other species of eastern hardwoods. 
In the Appalachian cove forests, trees 150 to 165 ft in height are common, and trees from 166 to nearly 180 ft are also found. More Liriodendron over 170 ft in height have been measured by the Eastern Native Tree Society than for any other eastern species. The tallest tulip tree currently on record has reached 191.9 ft, making it the tallest native angiosperm tree known in North America. The tulip tree is rivaled in eastern forests only by white pine, loblolly pine, and eastern hemlock. Reports of tulip trees over 200 ft have been made, but none of the measurements has been confirmed by the Eastern Native Tree Society. Most reflect measurement errors attributable to not accurately locating the highest crown point relative to the base of the tree—a common error made by users employing only clinometers or hypsometers when measuring height. Maximum circumferences for the species are between 24 and 30 ft at breast height, although a few historical specimens may have been slightly larger. The Great Smoky Mountains National Park has the greatest population of tulip trees 20 ft and over in circumference. The largest-volume tulip tree known anywhere is the Sag Branch Giant, which has a trunk and limb volume approaching . Paleo history Liriodendrons have been reported as fossils from the Late Cretaceous and early Tertiary of North America and central Asia. They are known widely as Tertiary-age fossils in Europe and well outside their present range in Asia and North America, showing a once-circumpolar northern distribution. Like many "Arcto-Tertiary" genera, Liriodendron apparently became extinct in Europe because the east-west orientation of the continent's mountain ranges blocked southward migration during the large-scale glaciation and arid climate of glacial phases. The genus name should not be confused with Lepidodendron, an extinct genus known only through fossils, which comprises an important group of long-extinct pteridophytes in the phylum Lycopodiophyta that are well-known Paleozoic coal-age fossils. Cultivation and use Liriodendron trees prefer a temperate climate, sun or part shade, and deep, fertile, well-drained and slightly acidic soil. Propagation is by seed or grafting. Plants grown from seed may take more than eight years to flower. Grafted plants flower depending on the age of the scion plant. The wood of the North American species (called poplar or tulipwood) is fine grained and stable. It is easy to work and commonly used for cabinet and furniture framing, i.e. internal structural members and subsurfaces for veneering. Additionally, much inexpensive furniture, described for sales purposes simply as "hardwood", is in fact primarily stained poplar. In the literature of American furniture manufacturers from the first half of the 20th century, it is often referred to as "gum wood". The wood is only moderately rot-resistant and is not commonly used in shipbuilding, but has found some recent use in light-craft construction. The wood is readily available, and when air dried, has a density around . The name canoewood probably refers to the tree's use for the construction of dugout canoes by eastern Native Americans, for which its fine grain and large trunk size are eminently suited. Tulip tree leaves are eaten by the caterpillars of some Lepidoptera, for example the eastern tiger swallowtail (Papilio glaucus). 
Species and cultivars
Liriodendron chinense
Liriodendron tulipifera
'Ardis' is a small-leaf, compact cultivar
'Aureomarginatum' is variegated with yellow-margined leaves
'Fastigiatum' grows with an erect or columnar habit (fastigiate)
'Florida' strain — a fast-growing early bloomer; leaves have round lobes
'Glen Gold' bears yellow-gold colored leaves
'Mediopictum' is a variegated cultivar with gold-centered leaves
'Chapel Hill' and 'Doc Deforce's Delight' are hybrids of the above two species
Biology and health sciences
Magnoliales
Plants
155414
https://en.wikipedia.org/wiki/Computability%20theory
Computability theory
Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory. Basic questions addressed by computability theory include: What does it mean for a function on the natural numbers to be computable? How can noncomputable functions be classified into a hierarchy based on their level of noncomputability? Although there is considerable overlap in terms of knowledge and methods, mathematical computability theorists study the theory of relative computability, reducibility notions, and degree structures; those in the computer science field focus on the theory of subrecursive hierarchies, formal methods, and formal languages. The study of which mathematical constructions can be effectively performed is sometimes called recursive mathematics. Introduction Computability theory originated in the 1930s, with the work of Kurt Gödel, Alonzo Church, Rózsa Péter, Alan Turing, Stephen Kleene, and Emil Post. The fundamental results the researchers obtained established Turing computability as the correct formalization of the informal idea of effective calculation. In 1952, these results led Kleene to coin the two names "Church's thesis" and "Turing's thesis". Nowadays these are often considered as a single hypothesis, the Church–Turing thesis, which states that any function that is computable by an algorithm is a computable function. Although initially skeptical, by 1946 Gödel argued in favor of this thesis. With a definition of effective calculation came the first proofs that there are problems in mathematics that cannot be effectively decided. In 1936, Church and Turing, each inspired by techniques Gödel had used in 1931 to prove his incompleteness theorems, independently demonstrated that the Entscheidungsproblem is not effectively decidable. This result showed that there is no algorithmic procedure that can correctly decide whether arbitrary mathematical propositions are true or false. Many problems in mathematics have been shown to be undecidable after these initial examples were established. In 1947, Markov and Post published independent papers showing that the word problem for semigroups cannot be effectively decided. Extending this result, Pyotr Novikov and William Boone showed independently in the 1950s that the word problem for groups is not effectively solvable: there is no effective procedure that, given a word in a finitely presented group, will decide whether the element represented by the word is the identity element of the group. In 1970, Yuri Matiyasevich proved (using results of Julia Robinson) Matiyasevich's theorem, which implies that Hilbert's tenth problem has no effective solution; this problem asked whether there is an effective procedure to decide whether a Diophantine equation over the integers has a solution in the integers. Turing computability The main form of computability studied in the field was introduced by Turing in 1936. A set of natural numbers is said to be a computable set (also called a decidable, recursive, or Turing computable set) if there is a Turing machine that, given a number n, halts with output 1 if n is in the set and halts with output 0 if n is not in the set. 
A function f from natural numbers to natural numbers is a (Turing) computable, or recursive, function if there is a Turing machine that, on input n, halts and returns output f(n). The use of Turing machines here is not necessary; there are many other models of computation that have the same computing power as Turing machines, for example the μ-recursive functions obtained from primitive recursion and the μ operator. The terminology for computable functions and sets is not completely standardized. The definition in terms of μ-recursive functions, as well as a different definition of functions by Gödel, led to the traditional name recursive for sets and functions computable by a Turing machine. The word decidable stems from the German word Entscheidungsproblem, which was used in the original papers of Turing and others. In contemporary use, the term "computable function" has various definitions: according to Nigel J. Cutland, it is a partial recursive function (which can be undefined for some inputs), while according to Robert I. Soare it is a total recursive (equivalently, general recursive) function. This article follows the second of these conventions. In 1996, Soare gave additional comments about the terminology. Not every set of natural numbers is computable. The halting problem, which is the set of (descriptions of) Turing machines that halt on input 0, is a well-known example of a noncomputable set. The existence of many noncomputable sets follows from the fact that there are only countably many Turing machines, and thus only countably many computable sets, while by Cantor's theorem there are uncountably many sets of natural numbers. Although the halting problem is not computable, it is possible to simulate program execution and produce an infinite list of the programs that do halt. Thus the halting problem is an example of a computably enumerable (c.e.) set, which is a set that can be enumerated by a Turing machine (other terms for computably enumerable include recursively enumerable and semidecidable). Equivalently, a set is c.e. if and only if it is the range of some computable function. The c.e. sets, although not decidable in general, have been studied in detail in computability theory. Areas of research Beginning with the theory of computable sets and functions described above, the field of computability theory has grown to include the study of many closely related topics. These are not independent areas of research: each of these areas draws ideas and results from the others, and most computability theorists are familiar with the majority of them. Relative computability and the Turing degrees Computability theory in mathematical logic has traditionally focused on relative computability, a generalization of Turing computability defined using oracle Turing machines, introduced by Turing in 1939. An oracle Turing machine is a hypothetical device which, in addition to performing the actions of a regular Turing machine, is able to ask questions of an oracle, which is a particular set of natural numbers. The oracle machine may only ask questions of the form "Is n in the oracle set?". Each question will be immediately answered correctly, even if the oracle set is not computable. Thus an oracle machine with a noncomputable oracle will be able to compute sets that a Turing machine without an oracle cannot. 
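To make the idea of enumerating a computably enumerable set more concrete, the following is a minimal Python sketch of the interleaved simulation mentioned above (often called dovetailing): many programs are simulated a step at a time, so that every program that halts is eventually listed, without any step ever requiring a decision of the halting problem. Modelling "programs" as Python generators, and the function names, are simplifications chosen only for this example.

def dovetail(programs, stages):
    """Yield the indices of programs that halt within the given number of stages.
    At stage n, each program with index at most n is run for one more step
    (a program is started the first time its index comes up)."""
    runs = {}        # index -> running generator
    halted = set()
    for n in range(stages):
        for i in range(min(n + 1, len(programs))):
            if i in halted:
                continue
            if i not in runs:
                runs[i] = programs[i]()
            try:
                next(runs[i])              # simulate one more step of program i
            except StopIteration:          # program i has halted
                halted.add(i)
                yield i

# toy programs: p0 and p2 eventually halt, p1 runs forever
def p0():
    for _ in range(3):
        yield

def p1():
    while True:
        yield

def p2():
    for _ in range(7):
        yield

print(list(dovetail([p0, p1, p2], stages=50)))   # [0, 2]

Running more stages can only add further indices to the list; the index of a program that never halts, such as p1 above, is simply never produced, which is exactly the sense in which such a set is enumerable but not decidable.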
Informally, a set of natural numbers A is Turing reducible to a set B if there is an oracle machine that correctly tells whether numbers are in A when run with B as the oracle set (in this case, the set A is also said to be (relatively) computable from B and recursive in B). If a set A is Turing reducible to a set B and B is Turing reducible to A then the sets are said to have the same Turing degree (also called degree of unsolvability). The Turing degree of a set gives a precise measure of how uncomputable the set is. The natural examples of sets that are not computable, including many different sets that encode variants of the halting problem, have two properties in common: They are computably enumerable, and Each can be translated into any other via a many-one reduction. That is, given such sets A and B, there is a total computable function f such that A = {x : f(x) ∈ B}. These sets are said to be many-one equivalent (or m-equivalent). Many-one reductions are "stronger" than Turing reductions: if a set A is many-one reducible to a set B, then A is Turing reducible to B, but the converse does not always hold. Although the natural examples of noncomputable sets are all many-one equivalent, it is possible to construct computably enumerable sets A and B such that A is Turing reducible to B but not many-one reducible to B. It can be shown that every computably enumerable set is many-one reducible to the halting problem, and thus the halting problem is the most complicated computably enumerable set with respect to many-one reducibility and with respect to Turing reducibility. In 1944, Post asked whether every computably enumerable set is either computable or Turing equivalent to the halting problem, that is, whether there is no computably enumerable set with a Turing degree intermediate between those two. As intermediate results, Post defined natural types of computably enumerable sets like the simple, hypersimple and hyperhypersimple sets. Post showed that these sets are strictly between the computable sets and the halting problem with respect to many-one reducibility. Post also showed that some of them are strictly intermediate under other reducibility notions stronger than Turing reducibility. But Post left open the main problem of the existence of computably enumerable sets of intermediate Turing degree; this problem became known as Post's problem. After ten years, Kleene and Post showed in 1954 that there are intermediate Turing degrees between those of the computable sets and the halting problem, but they failed to show that any of these degrees contains a computably enumerable set. Very soon after this, Friedberg and Muchnik independently solved Post's problem by establishing the existence of computably enumerable sets of intermediate degree. This groundbreaking result opened a wide study of the Turing degrees of the computably enumerable sets which turned out to possess a very complicated and non-trivial structure. There are uncountably many sets that are not computably enumerable, and the investigation of the Turing degrees of all sets is as central in computability theory as the investigation of the computably enumerable Turing degrees. 
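The many-one reduction described above can be illustrated with a small Python sketch. The function below maps a pair (program, input) to a new argument-free program, so that the pair belongs to the halting-on-a-given-input problem exactly when the constructed program belongs to the halting-with-no-input problem; representing programs as Python callables, and the names HALT and HALT0, are assumptions made only for this example.

def reduce_halt_pair_to_halt0(program, x):
    """Total computable function f: (program, x) -> q such that
    (program, x) is in HALT  if and only if  q is in HALT0."""
    def q():
        return program(x)    # q ignores everything else and simply runs program on x
    return q

# The reduction itself is easy to compute; deciding membership in HALT or HALT0 is not.
double = lambda n: 2 * n
q = reduce_halt_pair_to_halt0(double, 21)
print(q())   # 42, confirming that this particular constructed program halts

Note that the reduction is a total computable function even though both problems it connects are undecidable; that is precisely what many-one reducibility requires.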
Many degrees with special properties were constructed: hyperimmune-free degrees where every function computable relative to that degree is majorized by a (unrelativized) computable function; high degrees relative to which one can compute a function f which dominates every computable function g in the sense that there is a constant c depending on g such that g(x) < f(x) for all x > c; random degrees containing algorithmically random sets; 1-generic degrees of 1-generic sets; and the degrees below the halting problem of limit-computable sets. The study of arbitrary (not necessarily computably enumerable) Turing degrees involves the study of the Turing jump. Given a set A, the Turing jump of A is a set of natural numbers encoding a solution to the halting problem for oracle Turing machines running with oracle A. The Turing jump of any set is always of higher Turing degree than the original set, and a theorem of Friedburg shows that any set that computes the Halting problem can be obtained as the Turing jump of another set. Post's theorem establishes a close relationship between the Turing jump operation and the arithmetical hierarchy, which is a classification of certain subsets of the natural numbers based on their definability in arithmetic. Much recent research on Turing degrees has focused on the overall structure of the set of Turing degrees and the set of Turing degrees containing computably enumerable sets. A deep theorem of Shore and Slaman states that the function mapping a degree x to the degree of its Turing jump is definable in the partial order of the Turing degrees. A survey by Ambos-Spies and Fejer gives an overview of this research and its historical progression. Other reducibilities An ongoing area of research in computability theory studies reducibility relations other than Turing reducibility. Post introduced several strong reducibilities, so named because they imply truth-table reducibility. A Turing machine implementing a strong reducibility will compute a total function regardless of which oracle it is presented with. Weak reducibilities are those where a reduction process may not terminate for all oracles; Turing reducibility is one example. The strong reducibilities include: One-one reducibility: A is one-one reducible (or 1-reducible) to B if there is a total computable injective function f such that each n is in A if and only if f(n) is in B. Many-one reducibility: This is essentially one-one reducibility without the constraint that f be injective. A is many-one reducible (or m-reducible) to B if there is a total computable function f such that each n is in A if and only if f(n) is in B. Truth-table reducibility: A is truth-table reducible to B if A is Turing reducible to B via an oracle Turing machine that computes a total function regardless of the oracle it is given. Because of compactness of Cantor space, this is equivalent to saying that the reduction presents a single list of questions (depending only on the input) to the oracle simultaneously, and then having seen their answers is able to produce an output without asking additional questions regardless of the oracle's answer to the initial queries. Many variants of truth-table reducibility have also been studied. Further reducibilities (positive, disjunctive, conjunctive, linear and their weak and bounded versions) are discussed in the article Reduction (computability theory). 
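As a small illustration of the difference between a truth-table reduction and a general Turing reduction, the sketch below fixes the entire list of oracle queries in advance, before any answers are seen, and then applies a fixed boolean function to the answers. The particular target set A = {n : n and n+1 are both in B}, and the function names, are assumptions made only for this example.

def tt_reduction(n):
    """Return the fixed query list and the evaluator ("truth table") for
    deciding membership of n in A = {n : n in B and n + 1 in B}."""
    queries = [n, n + 1]                       # asked regardless of the oracle's answers
    evaluate = lambda answers: all(answers)    # total function of the answers
    return queries, evaluate

def decide_with_oracle(n, oracle):
    queries, evaluate = tt_reduction(n)
    return evaluate([oracle(q) for q in queries])

# any oracle can be plugged in; here B is the set {3, 4, 5, ...}
B = lambda k: k >= 3
print(decide_with_oracle(5, B))    # True: both 5 and 6 are in B
print(decide_with_oracle(2, B))    # False: 2 is not in B

Because the queries and the evaluator are produced before the oracle is consulted, the procedure is total for every oracle, which is the defining feature of the strong reducibilities discussed above.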
The major research on strong reducibilities has been to compare their theories, both for the class of all computably enumerable sets and for the class of all subsets of the natural numbers. Furthermore, the relations between the reducibilities have been studied. For example, it is known that every Turing degree is either a truth-table degree or is the union of infinitely many truth-table degrees. Reducibilities weaker than Turing reducibility (that is, reducibilities that are implied by Turing reducibility) have also been studied. The most well known are arithmetical reducibility and hyperarithmetical reducibility. These reducibilities are closely connected to definability over the standard model of arithmetic. Rice's theorem and the arithmetical hierarchy Rice showed that for every nontrivial class C (which contains some but not all c.e. sets) the index set E = {e: the eth c.e. set We is in C} has the property that either the halting problem or its complement is many-one reducible to E, that is, can be mapped using a many-one reduction to E (see Rice's theorem for more detail). But many of these index sets are even more complicated than the halting problem. These types of sets can be classified using the arithmetical hierarchy. For example, the index set FIN of the class of all finite sets is on the level Σ2, the index set REC of the class of all recursive sets is on the level Σ3, the index set COFIN of all cofinite sets is also on the level Σ3, and the index set COMP of the class of all Turing-complete sets is on the level Σ4. These hierarchy levels are defined inductively: Σn+1 contains just the sets which are computably enumerable relative to Σn, and Σ1 contains the computably enumerable sets. The index sets given here are even complete for their levels, that is, all the sets in these levels can be many-one reduced to the given index sets. Reverse mathematics The program of reverse mathematics asks which set-existence axioms are necessary to prove particular theorems of mathematics in subsystems of second-order arithmetic. This study was initiated by Harvey Friedman and was studied in detail by Stephen Simpson and others; in 1999, Simpson gave a detailed discussion of the program. The set-existence axioms in question correspond informally to axioms saying that the powerset of the natural numbers is closed under various reducibility notions. The weakest such axiom studied in reverse mathematics is recursive comprehension, which states that the powerset of the naturals is closed under Turing reducibility. Numberings A numbering is an enumeration of functions; it has two parameters, e and x, and outputs the value of the e-th function in the numbering on the input x. Numberings can be partial-computable, although some of their members are total computable functions. Admissible numberings are those into which all others can be translated. A Friedberg numbering (named after its discoverer) is a one-one numbering of all partial-computable functions; it is necessarily not an admissible numbering. Later research also dealt with numberings of other classes, such as classes of computably enumerable sets. Goncharov discovered, for example, a class of computably enumerable sets for which the numberings fall into exactly two classes with respect to computable isomorphisms. The priority method Post's problem was solved with a method called the priority method; a proof using this method is called a priority argument. This method is primarily used to construct computably enumerable sets with particular properties. 
To use this method, the desired properties of the set to be constructed are broken up into an infinite list of goals, known as requirements, so that satisfying all the requirements will cause the set constructed to have the desired properties. Each requirement is assigned a natural number representing its priority, with 0 representing the most important priority, 1 the second most important, and so on. The set is then constructed in stages, each stage attempting to satisfy one or more of the requirements by either adding numbers to the set or banning numbers from the set so that the final set will satisfy the requirement. It may happen that satisfying one requirement will cause another to become unsatisfied; the priority order is used to decide what to do in such an event. Priority arguments have been employed to solve many problems in computability theory, and have been classified into a hierarchy based on their complexity. Because complex priority arguments can be technical and difficult to follow, it has traditionally been considered desirable to prove results without priority arguments, or to see if results proved with priority arguments can also be proved without them. For example, Kummer published a paper on a proof for the existence of Friedberg numberings without using the priority method. The lattice of computably enumerable sets When Post defined the notion of a simple set as a c.e. set with an infinite complement not containing any infinite c.e. set, he started to study the structure of the computably enumerable sets under inclusion. This lattice became a well-studied structure. Computable sets can be defined in this structure by the basic result that a set is computable if and only if the set and its complement are both computably enumerable. Infinite c.e. sets always have infinite computable subsets; on the other hand, simple sets exist but do not always have a coinfinite computable superset. Post had already introduced hypersimple and hyperhypersimple sets; later, maximal sets were constructed, which are c.e. sets such that every c.e. superset is either a finite variant of the given maximal set or is co-finite. Post's original motivation in the study of this lattice was to find a structural notion such that every set which satisfies this property is neither in the Turing degree of the computable sets nor in the Turing degree of the halting problem. Post did not find such a property, and the solution to his problem applied priority methods instead; in 1991, Harrington and Soare eventually found such a property. Automorphism problems Another important question is the existence of automorphisms in computability-theoretic structures. One of these structures is that of the computably enumerable sets under inclusion modulo finite difference; in this structure, A is below B if and only if the set difference B − A is finite. Maximal sets (as defined in the previous paragraph) have the property that they cannot be automorphic to non-maximal sets, that is, if there is an automorphism of the computably enumerable sets under the structure just mentioned, then every maximal set is mapped to another maximal set. In 1974, Soare showed that the converse also holds: any two maximal sets are automorphic. So the maximal sets form an orbit, that is, every automorphism preserves maximality and any two maximal sets are transformed into each other by some automorphism. 
Harrington gave a further example of an automorphic property: that of the creative sets, the sets which are many-one equivalent to the halting problem. Besides the lattice of computably enumerable sets, automorphisms are also studied for the structure of the Turing degrees of all sets as well as for the structure of the Turing degrees of c.e. sets. In both cases, Cooper claims to have constructed nontrivial automorphisms which map some degrees to other degrees; this construction has, however, not been verified, and some colleagues believe that the construction contains errors and that the question of whether there is a nontrivial automorphism of the Turing degrees remains one of the main unsolved questions in this area. Kolmogorov complexity The field of Kolmogorov complexity and algorithmic randomness was developed during the 1960s and 1970s by Chaitin, Kolmogorov, Levin, Martin-Löf and Solomonoff (the names are given here in alphabetical order; much of the research was independent, and the unity of the concept of randomness was not understood at the time). The main idea is to consider a universal Turing machine U and to measure the complexity of a number (or string) x as the length of the shortest input p such that U(p) outputs x. This approach revolutionized earlier ways to determine when an infinite sequence (equivalently, the characteristic function of a subset of the natural numbers) is random or not by invoking a notion of randomness for finite objects. Kolmogorov complexity became not only a subject of independent study but is also applied to other subjects as a tool for obtaining proofs. There are still many open problems in this area. Frequency computation This branch of computability theory analyzed the following question: for fixed m and n with 0 < m < n, for which functions A is it possible to compute, for any n different inputs x1, x2, ..., xn, a tuple of n numbers y1, y2, ..., yn such that at least m of the equations A(xk) = yk are true? Such sets are known as (m, n)-recursive sets. The first major result in this branch of computability theory is Trakhtenbrot's result that a set is computable if it is (m, n)-recursive for some m, n with 2m > n. On the other hand, Jockusch's semirecursive sets (which were already known informally before Jockusch introduced them in 1968) are examples of sets that are (m, n)-recursive if and only if 2m < n + 1. There are uncountably many of these sets, and there are also some computably enumerable but noncomputable sets of this type. Later, Degtev established a hierarchy of computably enumerable sets that are (1, n + 1)-recursive but not (1, n)-recursive. After a long phase of research by Russian scientists, this subject became repopularized in the West by Beigel's thesis on bounded queries, which linked frequency computation to the above-mentioned bounded reducibilities and other related notions. One of the major results was Kummer's cardinality theorem, which states that a set A is computable if and only if there is an n such that some algorithm enumerates, for each tuple of n different numbers, up to n many possible choices of the cardinality of this set of n numbers intersected with A; these choices must contain the true cardinality but leave out at least one false one. Inductive inference This is the computability-theoretic branch of learning theory. It is based on E. Mark Gold's 1967 model of learning in the limit and has since developed more and more models of learning. 
The general scenario is the following: given a class S of computable functions, is there a learner (that is, a computable functional) which, for any input of the form (f(0), f(1), ..., f(n)), outputs a hypothesis? A learner M learns a function f if almost all of its hypotheses are the same index e of f with respect to a previously agreed-on acceptable numbering of all computable functions; M learns S if M learns every f in S. Basic results are that all computably enumerable classes of functions are learnable, while the class REC of all computable functions is not learnable. Many related models have been considered, and the learning of classes of computably enumerable sets from positive data has also been a topic of study from Gold's pioneering 1967 paper onwards. Generalizations of Turing computability Computability theory includes the study of generalized notions of this field such as arithmetic reducibility, hyperarithmetical reducibility and α-recursion theory, as described by Sacks in 1990. These generalized notions include reducibilities that cannot be executed by Turing machines but are nevertheless natural generalizations of Turing reducibility. These studies include approaches to investigate the analytical hierarchy, which differs from the arithmetical hierarchy by permitting quantification over sets of natural numbers in addition to quantification over individual numbers. These areas are linked to the theories of well-orderings and trees; for example, the set of all indices of computable (nonbinary) trees without infinite branches is complete for the level Π11 of the analytical hierarchy. Both Turing reducibility and hyperarithmetical reducibility are important in the field of effective descriptive set theory. The even more general notion of degrees of constructibility is studied in set theory. Continuous computability theory Computability theory for digital computation is well developed. Computability theory is less well developed for analog computation that occurs in analog computers, analog signal processing, analog electronics, artificial neural networks and continuous-time control theory, modelled by differential equations and continuous dynamical systems. For example, models of computation such as the Blum–Shub–Smale machine model have formalized computation on the reals. Relationships between definability, proof and computability There are close relationships between the Turing degree of a set of natural numbers and the difficulty (in terms of the arithmetical hierarchy) of defining that set using a first-order formula. One such relationship is made precise by Post's theorem. A weaker relationship was demonstrated by Kurt Gödel in the proofs of his completeness theorem and incompleteness theorems. Gödel's proofs show that the set of logical consequences of an effective first-order theory is a computably enumerable set, and that if the theory is strong enough this set will be uncomputable. Similarly, Tarski's undefinability theorem can be interpreted both in terms of definability and in terms of computability. Computability theory is also linked to second-order arithmetic, a formal theory of natural numbers and sets of natural numbers. The fact that certain sets are computable or relatively computable often implies that these sets can be defined in weak subsystems of second-order arithmetic. The program of reverse mathematics uses these subsystems to measure the non-computability inherent in well-known mathematical theorems. 
In 1999, Simpson discussed many aspects of second-order arithmetic and reverse mathematics. The field of proof theory includes the study of second-order arithmetic and Peano arithmetic, as well as formal theories of the natural numbers weaker than Peano arithmetic. One method of classifying the strength of these weak systems is by characterizing which computable functions the system can prove to be total. For example, in primitive recursive arithmetic any computable function that is provably total is actually primitive recursive, while Peano arithmetic proves that functions like the Ackermann function, which are not primitive recursive, are total. Not every total computable function is provably total in Peano arithmetic, however; an example of such a function is provided by Goodstein's theorem. Name The field of mathematical logic dealing with computability and its generalizations has been called "recursion theory" since its early days. Robert I. Soare, a prominent researcher in the field, has proposed that the field should be called "computability theory" instead. He argues that Turing's terminology using the word "computable" is more natural and more widely understood than the terminology using the word "recursive" introduced by Kleene. Many contemporary researchers have begun to use this alternate terminology. These researchers also use terminology such as partial computable function and computably enumerable (c.e.) set instead of partial recursive function and recursively enumerable (r.e.) set. Not all researchers have been convinced, however, as explained by Fortnow and Simpson. Some commentators argue that both the names recursion theory and computability theory fail to convey the fact that most of the objects studied in computability theory are not computable. In 1967, Rogers has suggested that a key property of computability theory is that its results and structures should be invariant under computable bijections on the natural numbers (this suggestion draws on the ideas of the Erlangen program in geometry). The idea is that a computable bijection merely renames numbers in a set, rather than indicating any structure in the set, much as a rotation of the Euclidean plane does not change any geometric aspect of lines drawn on it. Since any two infinite computable sets are linked by a computable bijection, this proposal identifies all the infinite computable sets (the finite computable sets are viewed as trivial). According to Rogers, the sets of interest in computability theory are the noncomputable sets, partitioned into equivalence classes by computable bijections of the natural numbers. Professional organizations The main professional organization for computability theory is the Association for Symbolic Logic, which holds several research conferences each year. The interdisciplinary research Association Computability in Europe (CiE) also organizes a series of annual conferences.
Mathematics
Discrete mathematics
null
155443
https://en.wikipedia.org/wiki/Corrosion
Corrosion
Corrosion is a natural process that converts a refined metal into a more chemically stable oxide. It is the gradual deterioration of materials (usually a metal) by chemical or electrochemical reaction with their environment. Corrosion engineering is the field dedicated to controlling and preventing corrosion. In the most common use of the word, this means electrochemical oxidation of metal in reaction with an oxidant such as oxygen, hydrogen, or hydroxide. Rusting, the formation of red-orange iron oxides, is a well-known example of electrochemical corrosion. This type of corrosion typically produces oxides or salts of the original metal and results in a distinctive coloration. Corrosion can also occur in materials other than metals, such as ceramics or polymers, although in this context the term "degradation" is more common. Corrosion degrades the useful properties of materials and structures, including mechanical strength, appearance, and permeability to liquids and gases. Corrosive is distinguished from caustic: the former implies mechanical degradation, the latter chemical. Many structural alloys corrode merely from exposure to moisture in air, but the process can be strongly affected by exposure to certain substances. Corrosion can be concentrated locally to form a pit or crack, or it can extend across a wide area, more or less uniformly corroding the surface. Because corrosion is a diffusion-controlled process, it occurs on exposed surfaces. As a result, methods to reduce the activity of the exposed surface, such as passivation and chromate conversion, can increase a material's corrosion resistance. However, some corrosion mechanisms are less visible and less predictable. The chemistry of corrosion is complex; it can be considered an electrochemical phenomenon. During corrosion at a particular spot on the surface of an object made of iron, oxidation takes place and that spot behaves as an anode. The electrons released at this anodic spot move through the metal to another spot on the object and reduce oxygen at that spot in the presence of H+, which is believed to be available from carbonic acid (H2CO3) formed by the dissolution of carbon dioxide from the air into water under moist atmospheric conditions; hydrogen ions in water may also be available due to the dissolution of other acidic oxides from the atmosphere. This second spot behaves as a cathode. Galvanic corrosion Galvanic corrosion occurs when two different metals have physical or electrical contact with each other and are immersed in a common electrolyte, or when the same metal is exposed to an electrolyte with different concentrations. In a galvanic couple, the more active metal (the anode) corrodes at an accelerated rate and the more noble metal (the cathode) corrodes at a slower rate. When immersed separately, each metal corrodes at its own rate. What type of metal(s) to use is readily determined by following the galvanic series. For example, zinc is often used as a sacrificial anode for steel structures. Galvanic corrosion is of major interest to the marine industry and also anywhere water (containing salts) contacts pipes or metal structures. Factors such as the relative size of the anode, the types of metal, and the operating conditions (temperature, humidity, salinity, etc.) affect galvanic corrosion. The surface area ratio of the anode and cathode directly affects the corrosion rates of the materials. Galvanic corrosion is often prevented by the use of sacrificial anodes. 
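As a rough numerical illustration of how a galvanic couple can be assessed, the sketch below compares approximate standard reduction potentials (in volts against the standard hydrogen electrode) to predict which metal of a pair acts as the anode. In practice, engineers consult a galvanic series measured in the actual service environment, such as the aerated seawater series discussed below, so both the values and the helper function here are only an illustrative approximation.

# Approximate standard reduction potentials, volts vs. the standard hydrogen electrode
STANDARD_POTENTIALS = {
    "Mg": -2.37, "Al": -1.66, "Zn": -0.76, "Fe": -0.44,
    "Ni": -0.26, "Cu": +0.34, "Ag": +0.80,
}

def galvanic_couple(metal_a, metal_b):
    """Return (anode, cathode, nominal potential difference in volts) for a couple.
    The metal with the more negative potential is the more active one and acts as the anode."""
    ea, eb = STANDARD_POTENTIALS[metal_a], STANDARD_POTENTIALS[metal_b]
    anode, cathode = (metal_a, metal_b) if ea < eb else (metal_b, metal_a)
    return anode, cathode, round(abs(ea - eb), 2)

print(galvanic_couple("Zn", "Fe"))   # ('Zn', 'Fe', 0.32): zinc corrodes preferentially, protecting steel
print(galvanic_couple("Fe", "Cu"))   # ('Fe', 'Cu', 0.78): iron is the anode when coupled to copper

A larger potential difference generally indicates a stronger driving force for galvanic corrosion, which is why the zinc and steel couple is deliberately exploited for sacrificial protection, while couples such as steel and copper are avoided or electrically isolated.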
Galvanic series In any given environment (one standard medium is aerated, room-temperature seawater), one metal will be either more noble or more active than others, based on how strongly its ions are bound to the surface. Two metals in electrical contact share the same electrons, so that the "tug-of-war" at each surface is analogous to competition for free electrons between the two materials. Using the electrolyte as a host for the flow of ions in the same direction, the noble metal will take electrons from the active one. The resulting mass flow or electric current can be measured to establish a hierarchy of materials in the medium of interest. This hierarchy is called a galvanic series and is useful in predicting and understanding corrosion. Corrosion removal Often, it is possible to chemically remove the products of corrosion. For example, phosphoric acid in the form of naval jelly is often applied to ferrous tools or surfaces to remove rust. Corrosion removal should not be confused with electropolishing, which removes some layers of the underlying metal to make a smooth surface. For example, phosphoric acid may also be used to electropolish copper but it does this by removing copper, not the products of copper corrosion. Resistance to corrosion Some metals are more intrinsically resistant to corrosion than others (for some examples, see galvanic series). There are various ways of protecting metals from corrosion (oxidation) including painting, hot-dip galvanization, cathodic protection, and combinations of these. Intrinsic chemistry The materials most resistant to corrosion are those for which corrosion is thermodynamically unfavorable. Any corrosion products of gold or platinum tend to decompose spontaneously into pure metal, which is why these elements can be found in metallic form on Earth and have long been valued. More common "base" metals can only be protected by more temporary means. Some metals have naturally slow reaction kinetics, even though their corrosion is thermodynamically favorable. These include such metals as zinc, magnesium, and cadmium. While corrosion of these metals is continuous and ongoing, it happens at an acceptably slow rate. An extreme example is graphite, which releases large amounts of energy upon oxidation, but has such slow kinetics that it is effectively immune to electrochemical corrosion under normal conditions. Passivation Passivation refers to the spontaneous formation of an ultrathin film of corrosion products, known as a passive film, on the metal's surface that act as a barrier to further oxidation. The chemical composition and microstructure of a passive film are different from the underlying metal. Typical passive film thickness on aluminium, stainless steels, and alloys is within 10 nanometers. The passive film is different from oxide layers that are formed upon heating and are in the micrometer thickness range – the passive film recovers if removed or damaged whereas the oxide layer does not. Passivation in natural environments such as air, water and soil at moderate pH is seen in such materials as aluminium, stainless steel, titanium, and silicon. Passivation is primarily determined by metallurgical and environmental factors. The effect of pH is summarized using Pourbaix diagrams, but many other factors are influential. 
Some conditions that inhibit passivation include high pH for aluminium and zinc, low pH or the presence of chloride ions for stainless steel, high temperature for titanium (in which case the oxide dissolves into the metal, rather than the electrolyte) and fluoride ions for silicon. On the other hand, unusual conditions may result in passivation of materials that are normally unprotected, as the alkaline environment of concrete does for steel rebar. Exposure to a liquid metal such as mercury or hot solder can often circumvent passivation mechanisms. It has been shown using electrochemical scanning tunneling microscopy that during iron passivation, an n-type semiconductor Fe(III) oxide grows at the interface with the metal that leads to the buildup of an electronic barrier opposing electron flow and an electronic depletion region that prevents further oxidation reactions. These results indicate a mechanism of "electronic passivation". The electronic properties of this semiconducting oxide film also provide a mechanistic explanation of corrosion mediated by chloride, which creates surface states at the oxide surface that lead to electronic breakthrough, restoration of anodic currents, and disruption of the electronic passivation mechanism. Corrosion in passivated materials Passivation is extremely useful in mitigating corrosion damage, however even a high-quality alloy will corrode if its ability to form a passivating film is hindered. Proper selection of the right grade of material for the specific environment is important for the long-lasting performance of this group of materials. If breakdown occurs in the passive film due to chemical or mechanical factors, the resulting major modes of corrosion may include pitting corrosion, crevice corrosion, and stress corrosion cracking. Pitting corrosion Certain conditions, such as low concentrations of oxygen or high concentrations of species such as chloride which compete as anions, can interfere with a given alloy's ability to re-form a passivating film. In the worst case, almost all of the surface will remain protected, but tiny local fluctuations will degrade the oxide film in a few critical points. Corrosion at these points will be greatly amplified, and can cause corrosion pits of several types, depending upon conditions. While the corrosion pits only nucleate under fairly extreme circumstances, they can continue to grow even when conditions return to normal, since the interior of a pit is naturally deprived of oxygen and locally the pH decreases to very low values and the corrosion rate increases due to an autocatalytic process. In extreme cases, the sharp tips of extremely long and narrow corrosion pits can cause stress concentration to the point that otherwise tough alloys can shatter; a thin film pierced by an invisibly small hole can hide a thumb sized pit from view. These problems are especially dangerous because they are difficult to detect before a part or structure fails. Pitting remains among the most common and damaging forms of corrosion in passivated alloys, but it can be prevented by control of the alloy's environment. Pitting results when a small hole, or cavity, forms in the metal, usually as a result of de-passivation of a small area. This area becomes anodic, while part of the remaining metal becomes cathodic, producing a localized galvanic reaction. The deterioration of this small area penetrates the metal and can lead to failure. 
This form of corrosion is often difficult to detect due to the fact that it is usually relatively small and may be covered and hidden by corrosion-produced compounds. Weld decay and knifeline attack Stainless steel can pose special corrosion challenges, since its passivating behavior relies on the presence of a major alloying component (chromium, at least 11.5%). Because of the elevated temperatures of welding and heat treatment, chromium carbides can form in the grain boundaries of stainless alloys. This chemical reaction robs the material of chromium in the zone near the grain boundary, making those areas much less resistant to corrosion. This creates a galvanic couple with the well-protected alloy nearby, which leads to "weld decay" (corrosion of the grain boundaries in the heat affected zones) in highly corrosive environments. This process can seriously reduce the mechanical strength of welded joints over time. A stainless steel is said to be "sensitized" if chromium carbides are formed in the microstructure. A typical microstructure of a normalized type 304 stainless steel shows no signs of sensitization, while a heavily sensitized steel shows the presence of grain boundary precipitates. The dark lines in the sensitized microstructure are networks of chromium carbides formed along the grain boundaries. Special alloys, either with low carbon content or with added carbon "getters" such as titanium and niobium (in types 321 and 347, respectively), can prevent this effect, but the latter require special heat treatment after welding to prevent the similar phenomenon of "knifeline attack". As its name implies, corrosion is limited to a very narrow zone adjacent to the weld, often only a few micrometers across, making it even less noticeable. Crevice corrosion Crevice corrosion is a localized form of corrosion occurring in confined spaces (crevices), to which the access of the working fluid from the environment is limited. Formation of a differential aeration cell leads to corrosion inside the crevices. Examples of crevices are gaps and contact areas between parts, under gaskets or seals, inside cracks and seams, spaces filled with deposits, and under sludge piles. Crevice corrosion is influenced by the crevice type (metal-metal, metal-non-metal), crevice geometry (size, surface finish), and metallurgical and environmental factors. The susceptibility to crevice corrosion can be evaluated with ASTM standard procedures. A critical crevice corrosion temperature is commonly used to rank a material's resistance to crevice corrosion. Hydrogen grooving In the chemical industry, hydrogen grooving is the corrosion of piping at grooves created by the interaction of a corrosive agent, corroded pipe constituents, and hydrogen gas bubbles. For example, when sulfuric acid () flows through steel pipes, the iron in the steel reacts with the acid to form a passivation coating of iron sulfate () and hydrogen gas (). The iron sulfate coating will protect the steel from further reaction; however, if hydrogen bubbles contact this coating, it will be removed. Thus, a groove can be formed by a travelling bubble, exposing more steel to the acid, causing a vicious cycle. The grooving is exacerbated by the tendency of subsequent bubbles to follow the same path. High-temperature corrosion High-temperature corrosion is chemical deterioration of a material (typically a metal) as a result of heating. 
This non-galvanic form of corrosion can occur when a metal is subjected to a hot atmosphere containing oxygen, sulfur ("sulfidation"), or other compounds capable of oxidizing (or assisting the oxidation of) the material concerned. For example, materials used in aerospace, power generation, and even in car engines must resist sustained periods at high temperature, during which they may be exposed to an atmosphere containing the potentially highly-corrosive products of combustion. Some products of high-temperature corrosion can potentially be turned to the advantage of the engineer. The formation of oxides on stainless steels, for example, can provide a protective layer preventing further atmospheric attack, allowing for a material to be used for sustained periods at both room and high temperatures in hostile conditions. Such high-temperature corrosion products, in the form of compacted oxide layer glazes, prevent or reduce wear during high-temperature sliding contact of metallic (or metallic and ceramic) surfaces. Thermal oxidation is also commonly used to produce controlled oxide nanostructures, including nanowires and thin films. Microbial corrosion Microbial corrosion, or commonly known as microbiologically influenced corrosion (MIC), is a corrosion caused or promoted by microorganisms, usually chemoautotrophs. It can apply to both metallic and non-metallic materials, in the presence or absence of oxygen. Sulfate-reducing bacteria are active in the absence of oxygen (anaerobic); they produce hydrogen sulfide, causing sulfide stress cracking. In the presence of oxygen (aerobic), some bacteria may directly oxidize iron to iron oxides and hydroxides, other bacteria oxidize sulfur and produce sulfuric acid causing biogenic sulfide corrosion. Concentration cells can form in the deposits of corrosion products, leading to localized corrosion. Accelerated low-water corrosion (ALWC) is a particularly aggressive form of MIC that affects steel piles in seawater near the low water tide mark. It is characterized by an orange sludge, which smells of hydrogen sulfide when treated with acid. Corrosion rates can be very high and design corrosion allowances can soon be exceeded leading to premature failure of the steel pile. Piles that have been coated and have cathodic protection installed at the time of construction are not susceptible to ALWC. For unprotected piles, sacrificial anodes can be installed locally to the affected areas to inhibit the corrosion or a complete retrofitted sacrificial anode system can be installed. Affected areas can also be treated using cathodic protection, using either sacrificial anodes or applying current to an inert anode to produce a calcareous deposit, which will help shield the metal from further attack. Metal dusting Metal dusting is a catastrophic form of corrosion that occurs when susceptible materials are exposed to environments with high carbon activities, such as synthesis gas and other high-CO environments. The corrosion manifests itself as a break-up of bulk metal to metal powder. The suspected mechanism is firstly the deposition of a graphite layer on the surface of the metal, usually from carbon monoxide (CO) in the vapor phase. This graphite layer is then thought to form metastable M3C species (where M is the metal), which migrate away from the metal surface. However, in some regimes, no M3C species is observed indicating a direct transfer of metal atoms into the graphite layer. 
Protection from corrosion Various treatments are used to slow corrosion damage to metallic objects which are exposed to the weather, salt water, acids, or other hostile environments. Some unprotected metallic alloys are extremely vulnerable to corrosion, such as those used in neodymium magnets, which can spall or crumble into powder even in dry, temperature-stable indoor environments unless properly treated. Surface treatments When surface treatments are used to reduce corrosion, great care must be taken to ensure complete coverage, without gaps, cracks, or pinhole defects. Small defects can act as an "Achilles' heel", allowing corrosion to penetrate the interior and causing extensive damage even while the outer protective layer remains apparently intact for a period of time. Applied coatings Plating, painting, and the application of enamel are the most common anti-corrosion treatments. They work by providing a barrier of corrosion-resistant material between the damaging environment and the structural material. Aside from cosmetic and manufacturing issues, there may be tradeoffs in mechanical flexibility versus resistance to abrasion and high temperature. Platings usually fail only in small sections, but if the plating is more noble than the substrate (for example, chromium on steel), a galvanic couple will cause any exposed area to corrode much more rapidly than an unplated surface would. For this reason, it is often wise to plate with active metal such as zinc or cadmium. If the zinc coating is not thick enough the surface soon becomes unsightly with rusting obvious. The design life is directly related to the metal coating thickness. Painting either by roller or brush is more desirable for tight spaces; spray would be better for larger coating areas such as steel decks and waterfront applications. Flexible polyurethane coatings, like Durabak-M26 for example, can provide an anti-corrosive seal with a highly durable slip resistant membrane. Painted coatings are relatively easy to apply and have fast drying times although temperature and humidity may cause dry times to vary. Reactive coatings If the environment is controlled (especially in recirculating systems), corrosion inhibitors can often be added to it. These chemicals form an electrically insulating or chemically impermeable coating on exposed metal surfaces, to suppress electrochemical reactions. Such methods make the system less sensitive to scratches or defects in the coating, since extra inhibitors can be made available wherever metal becomes exposed. Chemicals that inhibit corrosion include some of the salts in hard water (Roman water systems are known for their mineral deposits), chromates, phosphates, polyaniline, other conducting polymers, and a wide range of specially designed chemicals that resemble surfactants (i.e., long-chain organic molecules with ionic end groups). Anodization Aluminium alloys often undergo a surface treatment. Electrochemical conditions in the bath are carefully adjusted so that uniform pores, several nanometers wide, appear in the metal's oxide film. These pores allow the oxide to grow much thicker than passivating conditions would allow. At the end of the treatment, the pores are allowed to seal, forming a harder-than-usual surface layer. If this coating is scratched, normal passivation processes take over to protect the damaged area. 
Anodizing is very resilient to weathering and corrosion, so it is commonly used for building facades and other areas where the surface will come into regular contact with the elements. While being resilient, it must be cleaned frequently. If left without cleaning, panel edge staining will naturally occur. In anodizing, the part to be treated is made the anode of an electrolytic cell, which deliberately thickens and hardens its protective oxide layer.
Biofilm coatings
A new form of protection has been developed by applying certain species of bacterial films to the surface of metals in highly corrosive environments. This process increases the corrosion resistance substantially. Alternatively, antimicrobial-producing biofilms can be used to inhibit mild steel corrosion from sulfate-reducing bacteria.
Controlled permeability formwork
Controlled permeability formwork (CPF) is a method of preventing the corrosion of reinforcement by naturally enhancing the durability of the cover during concrete placement. CPF has been used in environments to combat the effects of carbonation, chlorides, frost, and abrasion.
Cathodic protection
Cathodic protection (CP) is a technique to control the corrosion of a metal surface by making it the cathode of an electrochemical cell. Cathodic protection systems are most commonly used to protect steel pipelines and tanks, steel pier piles, ships, and offshore oil platforms.
Sacrificial anode protection
For effective CP, the potential of the steel surface is polarized (pushed) more negative until the metal surface has a uniform potential. With a uniform potential, the driving force for the corrosion reaction is halted. For galvanic CP systems, the anode material corrodes under the influence of the steel, and eventually it must be replaced. The polarization is caused by the current flow from the anode to the cathode, driven by the difference in electrode potential between the anode and the cathode. The most common sacrificial anode materials are aluminum, zinc, magnesium and related alloys. Aluminum has the highest capacity, and magnesium has the highest driving voltage and is thus used where resistance is higher. Zinc is general purpose and the basis for galvanizing. A number of problems are associated with sacrificial anodes. Among these, from an environmental perspective, is the release of zinc, magnesium, aluminum and heavy metals such as cadmium into the environment, including seawater. From a working perspective, sacrificial anode systems are considered to be less precise than modern cathodic protection systems such as Impressed Current Cathodic Protection (ICCP) systems. Their ability to provide requisite protection has to be checked regularly by means of underwater inspection by divers. Furthermore, as they have a finite lifespan, sacrificial anodes need to be replaced regularly over time.
Impressed current cathodic protection
For larger structures, galvanic anodes cannot economically deliver enough current to provide complete protection. Impressed current cathodic protection (ICCP) systems use anodes connected to a DC power source (such as a cathodic protection rectifier). Anodes for ICCP systems are tubular or solid rod shapes of various specialized materials. These include high-silicon cast iron, graphite, mixed metal oxide, and platinum-coated titanium or niobium rods and wires.
Anodic protection
Anodic protection impresses anodic current on the structure to be protected (the opposite of cathodic protection). It is appropriate for metals that exhibit passivity (e.g.
stainless steel) and a suitably small passive current over a wide range of potentials. It is used in aggressive environments, such as solutions of sulfuric acid. Anodic protection is an electrochemical method of corrosion protection that works by keeping the metal in a passive state.
Rate of corrosion
The formation of an oxide layer is described by the Deal–Grove model, which is used to predict and control oxide layer formation in diverse situations. A simple test for measuring corrosion is the weight loss method. The method involves exposing a clean weighed piece of the metal or alloy to the corrosive environment for a specified time, followed by cleaning to remove corrosion products and weighing the piece to determine the loss of weight. The rate of corrosion (R) is calculated as R = kW/(ρAt), where k is a constant, W is the weight loss of the metal in time t, A is the surface area of the metal exposed, and ρ is the density of the metal (in g/cm3). Other common expressions for the corrosion rate are penetration depth and change of mechanical properties.
Economic impact
In 2002, the US Federal Highway Administration released a study titled "Corrosion Costs and Preventive Strategies in the United States" on the direct costs associated with metallic corrosion in US industry. In 1998, the total annual direct cost of corrosion in the US was roughly $276 billion (or 3.2% of the US gross domestic product at the time). Broken down into five specific industries, the economic losses are $22.6 billion in infrastructure, $17.6 billion in production and manufacturing, $29.7 billion in transportation, $20.1 billion in government, and $47.9 billion in utilities. Rust is one of the most common causes of bridge accidents. As rust displaces a much higher volume than the originating mass of iron, its build-up can also cause failure by forcing apart adjacent components. It was the cause of the collapse of the Mianus River Bridge in 1983, when support bearings rusted internally and pushed one corner of the road slab off its support. Three drivers on the roadway at the time died as the slab fell into the river below. The following NTSB investigation showed that a drain in the road had been blocked for road re-surfacing, and had not been unblocked; as a result, runoff water penetrated the support hangers. Rust was also an important factor in the Silver Bridge disaster of 1967 in West Virginia, when a steel suspension bridge collapsed within a minute, killing 46 drivers and passengers who were on the bridge at the time. Similarly, corrosion of concrete-covered steel and iron can cause the concrete to spall, creating severe structural problems. It is one of the most common failure modes of reinforced concrete bridges. Measuring instruments based on the half-cell potential can detect potential corrosion spots before total failure of the concrete structure is reached. Until 20–30 years ago, galvanized steel pipe was used extensively in potable water systems for single-family and multi-family residences as well as commercial and public construction. Today, these systems have long since consumed their protective zinc and are corroding internally, resulting in poor water quality and pipe failures. The economic impact on homeowners, condo dwellers, and the public infrastructure is estimated at $22 billion as the insurance industry braces for a wave of claims due to pipe failures.
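As a quick illustration of the weight-loss calculation in the "Rate of corrosion" section above, the sketch below evaluates R = kW/(ρAt). The unit convention (W in mg, A in cm2, t in hours, ρ in g/cm3, with k = 87.6 giving mm/year) is one commonly used choice and is assumed here for illustration, as are the coupon numbers.

# Sketch of the weight-loss corrosion-rate calculation described above:
# rate = K * W / (rho * A * t). With W in mg, A in cm^2, t in hours and rho in
# g/cm^3, K = 87.6 gives the rate in mm/year (a commonly used unit convention).
def corrosion_rate_mm_per_year(weight_loss_mg, area_cm2, hours, density_g_cm3, K=87.6):
    return K * weight_loss_mg / (density_g_cm3 * area_cm2 * hours)

# Example: a 20 cm^2 mild-steel coupon (rho ~ 7.86 g/cm^3) losing 45 mg in 720 h.
print(round(corrosion_rate_mm_per_year(45, 20, 720, 7.86), 3), "mm/year")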
Corrosion in nonmetals
Most ceramic materials are almost entirely immune to corrosion. The strong chemical bonds that hold them together leave very little free chemical energy in the structure; they can be thought of as already corroded. When corrosion does occur, it is almost always a simple dissolution of the material or a chemical reaction, rather than an electrochemical process. A common example of corrosion protection in ceramics is the lime added to soda–lime glass to reduce its solubility in water; though it is not nearly as soluble as pure sodium silicate, normal glass does form sub-microscopic flaws when exposed to moisture. Due to its brittleness, such flaws cause a dramatic reduction in the strength of a glass object during its first few hours at room temperature.
Corrosion of polymers
Polymer degradation involves several complex and often poorly understood physicochemical processes. These are strikingly different from the other processes discussed here, and so the term "corrosion" is only applied to them in a loose sense of the word. Because of their large molecular weight, very little entropy can be gained by mixing a given mass of polymer with another substance, making polymers generally quite difficult to dissolve. While dissolution is a problem in some polymer applications, it is relatively simple to design against. A more common and related problem is "swelling", where small molecules infiltrate the structure, reducing strength and stiffness and causing a volume change. Conversely, many polymers (notably flexible vinyl) are intentionally swelled with plasticizers, which can be leached out of the structure, causing brittleness or other undesirable changes. The most common form of degradation, however, is a decrease in polymer chain length. Mechanisms which break polymer chains are familiar to biologists because of their effect on DNA: ionizing radiation (most commonly ultraviolet light), free radicals, and oxidizers such as oxygen, ozone, and chlorine. Ozone cracking is a well-known problem affecting natural rubber, for example. Plastic additives can slow these processes very effectively, and can be as simple as a UV-absorbing pigment (e.g., titanium dioxide or carbon black). Plastic shopping bags often do not include these additives so that they break down more easily into ultrafine particles of litter.
Corrosion of glass
Glass is characterized by a high degree of corrosion resistance. Because of its high water resistance, it is often used as a primary packaging material in the pharmaceutical industry, since most medicines are preserved in a watery solution. Besides its water resistance, glass is also robust when exposed to certain chemically-aggressive liquids or gases. Glass disease is the corrosion of silicate glasses in aqueous solutions. It is governed by two mechanisms: diffusion-controlled leaching (ion exchange) and hydrolytic dissolution of the glass network. Both mechanisms strongly depend on the pH of the contacting solution: the rate of ion exchange decreases with pH as 10^(−0.5·pH), whereas the rate of hydrolytic dissolution increases with pH as 10^(0.5·pH). Mathematically, corrosion rates of glasses are characterized by normalized corrosion rates of elements NRi (g/(cm2·d)), which are determined as the ratio of the total amount of a species released into the water, Mi (g), to the water-contacting surface area S (cm2), the time of contact t (days), and the weight fraction content of the element in the glass, fi: NRi = Mi/(S·t·fi). The overall corrosion rate is the sum of contributions from both mechanisms (leaching + dissolution): NRi = NRxi + NRh.
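A minimal numerical sketch of the normalized release rate NRi = Mi/(S·t·fi) defined above; the variable names mirror that definition, and the input values are invented purely for illustration.

# Normalized elemental release rate for glass corrosion: NR_i = M_i / (S * t * f_i),
# with M_i the released mass of element i (g), S the wetted surface area (cm^2),
# t the contact time (days) and f_i the element's weight fraction in the glass.
def normalized_rate(released_g, area_cm2, days, weight_fraction):
    return released_g / (area_cm2 * days * weight_fraction)

# Illustrative numbers only (not measured data).
nr_boron = normalized_rate(released_g=2.0e-6, area_cm2=10.0, days=30.0, weight_fraction=0.04)
print(f"NR(B) = {nr_boron:.2e} g/(cm2*d)")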
Diffusion-controlled leaching (ion exchange) is characteristic of the initial phase of corrosion and involves replacement of alkali ions in the glass by hydronium (H3O+) ions from the solution. It causes an ion-selective depletion of the near-surface layers of the glass and gives an inverse-square-root dependence of the corrosion rate on exposure time. The diffusion-controlled normalized leaching rate of cations from glasses, NRxi (g/(cm2·d)), is given by NRxi = ρ·(Di/(π·t))^(1/2), where t is time, Di is the i-th cation effective diffusion coefficient (cm2/d), which depends on the pH of the contacting water as Di ∝ 10^(−pH), and ρ is the density of the glass (g/cm3). Glass network dissolution is characteristic of the later phases of corrosion and causes a congruent release of ions into the water solution at a time-independent rate in dilute solutions, NRh (g/(cm2·d)): NRh = ρ·rh, where rh is the stationary hydrolysis (dissolution) rate of the glass (cm/d). In closed systems, the consumption of protons from the aqueous phase increases the pH and causes a fast transition to hydrolysis. However, further saturation of the solution with silica impedes hydrolysis and causes the glass to return to an ion-exchange, i.e., diffusion-controlled, regime of corrosion. In typical natural conditions, normalized corrosion rates of silicate glasses are very low, of the order of 10^(−7) to 10^(−5) g/(cm2·d). The very high durability of silicate glasses in water makes them suitable for hazardous and nuclear waste immobilisation.
Glass corrosion tests
There exist numerous standardized procedures for measuring the corrosion (also called chemical durability) of glasses in neutral, basic, and acidic environments, under simulated environmental conditions, in simulated body fluid, at high temperature and pressure, and under other conditions. The standard procedure ISO 719 describes a test of the extraction of water-soluble basic compounds under neutral conditions: 2 g of glass, particle size 300–500 μm, is kept for 60 min in 50 mL of de-ionized water of grade 2 at 98 °C; 25 mL of the obtained solution is titrated against 0.01 mol/L HCl solution. The volume of HCl required for neutralization determines the hydrolytic class. The standardized test ISO 719 is not suitable for glasses with few or no extractable alkaline components which are nevertheless still attacked by water, e.g., quartz glass, B2O3 glass or P2O5 glass. Usual glasses are differentiated into the following classes:
Hydrolytic class 1 (Type I): This class, which is also called neutral glass, includes borosilicate glasses (e.g., Duran, Pyrex, Fiolax). Glass of this class contains essential quantities of boron oxides, aluminium oxides and alkaline earth oxides. Through its composition, neutral glass has a high resistance against temperature shocks and the highest hydrolytic resistance. It shows high chemical resistance against acid and neutral solutions; because of its low alkali content, its resistance against alkaline solutions is more limited.
Hydrolytic class 2 (Type II): This class usually contains sodium silicate glasses with a high hydrolytic resistance achieved through surface finishing. Sodium silicate glass is a silicate glass which contains alkali and alkaline earth oxides, primarily sodium oxide and calcium oxide.
Hydrolytic class 3 (Type III): Glass of the 3rd hydrolytic class usually contains sodium silicate glasses and has a mean hydrolytic resistance, roughly two times lower than that of type 1 glasses.
Acid class DIN 12116 and alkali class DIN 52322 (ISO 695) are to be distinguished from the hydrolytic class DIN 12111 (ISO 719).
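To make the two-mechanism rate model described above concrete, the sketch below adds the leaching term (falling as 1/√t and as 10^(−0.5·pH)) to the constant hydrolysis term (growing as 10^(0.5·pH)). The prefactors D0 and R0 and the density are arbitrary illustrative values, not measured data.

import math

# Sketch of the two-mechanism glass corrosion model summarized above: a leaching
# term that falls off as 10^(-0.5*pH) and 1/sqrt(t), plus a time-independent
# hydrolysis term that grows as 10^(0.5*pH). D0, R0 and rho are illustrative only.
def total_rate(t_days, pH, rho=2.5, D0=1e-9, R0=1e-9):
    Di = D0 * 10 ** (-pH)                                # effective diffusion coefficient, cm^2/d
    nr_leach = rho * math.sqrt(Di / (math.pi * t_days))  # ion exchange, g/(cm^2*d)
    nr_hydro = rho * R0 * 10 ** (0.5 * pH)               # network dissolution, g/(cm^2*d)
    return nr_leach + nr_hydro

for pH in (4, 7, 10):
    print(f"pH {pH}: {total_rate(t_days=30, pH=pH):.2e} g/(cm2*d)")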
Physical sciences
Chemical reactions
null
155624
https://en.wikipedia.org/wiki/Heritability
Heritability
Heritability is a statistic used in the fields of breeding and genetics that estimates the degree of variation in a phenotypic trait in a population that is due to genetic variation between individuals in that population. The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance?" Other causes of measured variation in a trait are characterized as environmental factors, including observational error. In human studies of heritability these are often apportioned into factors from "shared environment" and "non-shared environment" based on whether they tend to result in persons brought up in the same household being more or less similar to persons who were not. Heritability is estimated by comparing individual phenotypic variation among related individuals in a population, by examining the association between individual phenotype and genotype data, or even by modeling summary-level data from genome-wide association studies (GWAS). Heritability is an important concept in quantitative genetics, particularly in selective breeding and behavior genetics (for instance, twin studies). It is the source of much confusion due to the fact that its technical definition is different from its commonly-understood folk definition. Therefore, its use conveys the incorrect impression that behavioral traits are "inherited" or specifically passed down through the genes. Behavioral geneticists also conduct heritability analyses based on the assumption that genes and environments contribute in a separate, additive manner to behavioral traits. Overview Heritability measures the fraction of phenotype variability that can be attributed to genetic variation. This is not the same as saying that this fraction of an individual phenotype is caused by genetics. For example, it is incorrect to say that since the heritability of personality traits is about 0.6, that means that 60% of your personality is inherited from your parents and 40% comes from the environment. In addition, heritability can change without any genetic change occurring, such as when the environment starts contributing to more variation. As a case in point, consider that both genes and environment have the potential to influence intelligence. Heritability could increase if genetic variation increases, causing individuals to show more phenotypic variation, like showing different levels of intelligence. On the other hand, heritability might also increase if the environmental variation decreases, causing individuals to show less phenotypic variation, like showing more similar levels of intelligence. Heritability increases when genetics are contributing more variation or because non-genetic factors are contributing less variation; what matters is the relative contribution. Heritability is specific to a particular population in a particular environment. High heritability of a trait, consequently, does not necessarily mean that the trait is not very susceptible to environmental influences. Heritability can also change as a result of changes in the environment, migration, inbreeding, or how heritability itself is measured in the population under study. The heritability of a trait should not be interpreted as a measure of the extent to which said trait is genetically determined in an individual. The extent of dependence of phenotype on environment can also be a function of the genes involved. 
Matters of heritability are complicated because genes may canalize a phenotype, making its expression almost inevitable in all occurring environments. Individuals with the same genotype can also exhibit different phenotypes through a mechanism called phenotypic plasticity, which makes heritability difficult to measure in some cases. Recent insights in molecular biology have identified changes in the transcriptional activity of individual genes associated with environmental changes. However, there are a large number of genes whose transcription is not affected by the environment. Estimates of heritability use statistical analyses to help to identify the causes of differences between individuals. Since heritability is concerned with variance, it is necessarily an account of the differences between individuals in a population. Heritability can be univariate – examining a single trait – or multivariate – examining the genetic and environmental associations between multiple traits at once. This allows a test of the genetic overlap between different phenotypes: for instance hair color and eye color. Environment and genetics may also interact, and heritability analyses can test for and examine these interactions (GxE models). A prerequisite for heritability analyses is that there is some population variation to account for. This last point highlights the fact that heritability cannot take into account the effect of factors which are invariant in the population. Factors may be invariant if they are absent from the population, such as no one having access to a particular antibiotic, or because they are omnipresent, as when everyone is drinking coffee. In practice, all human behavioral traits vary and almost all traits show some heritability.
Definition
Any particular phenotype can be modeled as the sum of genetic and environmental effects: Phenotype (P) = Genotype (G) + Environment (E). Likewise the phenotypic variance in the trait – Var(P) – is the sum of effects as follows: Var(P) = Var(G) + Var(E) + 2 Cov(G,E). In a planned experiment Cov(G,E) can be controlled and held at 0. In this case, heritability, H2, is defined as H2 = Var(G)/Var(P). H2 is the broad-sense heritability. This reflects all the genetic contributions to a population's phenotypic variance, including additive, dominance, and epistatic (multi-genic interaction) effects, as well as maternal and paternal effects, where individuals are directly affected by their parents' phenotype, such as with milk production in mammals. A particularly important component of the genetic variance is the additive variance, Var(A), which is the variance due to the average effects (additive effects) of the alleles. Since each parent passes a single allele per locus to each offspring, parent-offspring resemblance depends upon the average effect of single alleles. Additive variance represents, therefore, the genetic component of variance responsible for parent-offspring resemblance. The additive genetic portion of the phenotypic variance is known as narrow-sense heritability and is defined as h2 = Var(A)/Var(P). An upper case H2 is used to denote broad sense, and lower case h2 for narrow sense. For traits which are not continuous but dichotomous, such as an additional toe or certain diseases, the contribution of the various alleles can be considered to be a sum which, past a threshold, manifests itself as the trait, giving the liability threshold model in which heritability can be estimated and selection modeled.
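A small numerical sketch of these definitions, using made-up variance components, shows how broad-sense and narrow-sense heritability differ when non-additive genetic variance is present.

# Sketch of the variance-component definitions above, with invented numbers:
# broad-sense H^2 = Var(G)/Var(P) and narrow-sense h^2 = Var(A)/Var(P),
# assuming Cov(G,E) = 0 so that Var(P) = Var(G) + Var(E).
var_A, var_D, var_I = 30.0, 8.0, 2.0       # additive, dominance, epistatic variance
var_G = var_A + var_D + var_I
var_E = 60.0                                # environmental variance
var_P = var_G + var_E

H2 = var_G / var_P    # broad-sense heritability
h2 = var_A / var_P    # narrow-sense heritability
print(f"H^2 = {H2:.2f}, h^2 = {h2:.2f}")    # 0.40 and 0.30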
Additive variance is important for selection. If a selective pressure such as improving livestock is exerted, the response of the trait is directly related to narrow-sense heritability. The mean of the trait will increase in the next generation as a function of how much the mean of the selected parents differs from the mean of the population from which the selected parents were chosen. The observed response to selection leads to an estimate of the narrow-sense heritability (called realized heritability). This is the principle underlying artificial selection or breeding.
Example
The simplest genetic model involves a single locus with two alleles (b and B) affecting one quantitative phenotype. The number of B alleles can be 0, 1, or 2. For any genotype, (Bi,Bj), where Bi and Bj are either 0 or 1, the expected phenotype can then be written as the sum of the overall mean, a linear effect, and a dominance deviation (one can think of the dominance term as an interaction between Bi and Bj): P(Bi,Bj) = μ + αi + αj + dij. The additive genetic variance at this locus is the weighted average of the squares of the additive effects: Var(A) = Σ f(Bi,Bj)·(αi + αj)^2, where the weights f(Bi,Bj) are the genotype frequencies and the additive effects are scaled so that Σ f(Bi,Bj)·(αi + αj) = 0. There is a similar relationship for the variance of dominance deviations: Var(D) = Σ f(Bi,Bj)·dij^2, where Σ f(Bi,Bj)·dij = 0. The linear regression of phenotype on genotype is shown in Figure 1.
Assumptions
Estimates of the total heritability of human traits assume the absence of epistasis, which has been called the "assumption of additivity". Although some researchers have cited such estimates in support of the existence of "missing heritability" unaccounted for by known genetic loci, the assumption of additivity may render these estimates invalid. There is also some empirical evidence that the additivity assumption is frequently violated in behavior genetic studies of adolescent intelligence and academic achievement.
Estimating heritability
Since only P can be observed or measured directly, heritability must be estimated from the similarities observed in subjects varying in their level of genetic or environmental similarity. The statistical analyses required to estimate the genetic and environmental components of variance depend on the sample characteristics. Briefly, better estimates are obtained using data from individuals with widely varying levels of genetic relationship – such as twins, siblings, parents and offspring – rather than from more distantly related (and therefore less similar) subjects. The standard error for heritability estimates is improved with large sample sizes. In non-human populations it is often possible to collect information in a controlled way. For example, among farm animals it is easy to arrange for a bull to produce offspring from a large number of cows and to control the environments. Such experimental control is generally not possible when gathering human data, which instead rely on naturally occurring relationships and environments. In classical quantitative genetics, there were two schools of thought regarding estimation of heritability. One school of thought was developed by Sewall Wright at The University of Chicago, and further popularized by C. C. Li (University of Chicago) and J. L. Lush (Iowa State University). It is based on the analysis of correlations and, by extension, regression. Path analysis was developed by Sewall Wright as a way of estimating heritability. The second was originally developed by R. A. Fisher and expanded at The University of Edinburgh, Iowa State University, and North Carolina State University, as well as other schools. It is based on the analysis of variance of breeding studies, using the intraclass correlation of relatives.
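Returning to the single-locus example given earlier in this passage, the sketch below evaluates the additive and dominance variance under Hardy–Weinberg genotype frequencies, using the textbook closed forms Var(A) = 2pq·α^2 with α = a + d(q − p) and Var(D) = (2pqd)^2, which are a special case of the weighted averages above; the genotypic values a and d and the allele frequency are illustrative assumptions.

# Additive and dominance variance at a biallelic locus under Hardy-Weinberg
# proportions, using the textbook formulas Var(A) = 2*p*q*alpha^2 with
# alpha = a + d*(q - p), and Var(D) = (2*p*q*d)^2.
# Genotypic values: bb = -a, Bb = d, BB = +a (numbers below are illustrative).
def single_locus_variances(p, a, d):
    q = 1.0 - p
    alpha = a + d * (q - p)        # average effect of an allele substitution
    var_A = 2 * p * q * alpha ** 2
    var_D = (2 * p * q * d) ** 2
    return var_A, var_D

var_A, var_D = single_locus_variances(p=0.3, a=1.0, d=0.25)
print(f"Var(A) = {var_A:.3f}, Var(D) = {var_D:.3f}")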
Various methods of estimating components of variance (and, hence, heritability) from ANOVA are used in these analyses. Today, heritability can be estimated from general pedigrees using linear mixed models and from genomic relatedness estimated from genetic markers. Studies of human heritability often utilize adoption study designs, often with identical twins who have been separated early in life and raised in different environments. Such individuals have identical genotypes and can be used to separate the effects of genotype and environment. A limit of this design is the common prenatal environment and the relatively low numbers of twins reared apart. A second and more common design is the twin study in which the similarity of identical and fraternal twins is used to estimate heritability. These studies can be limited by the fact that identical twins are not completely genetically identical, potentially resulting in an underestimation of heritability. In observational studies, or because of evocative effects (where a genome evokes environments by its effect on them), G and E may covary: gene environment correlation. Depending on the methods used to estimate heritability, correlations between genetic factors and shared or non-shared environments may or may not be confounded with heritability. Regression/correlation methods of estimation The first school of estimation uses regression and correlation to estimate heritability. Comparison of close relatives In the comparison of relatives, we find that in general, where r can be thought of as the coefficient of relatedness, b is the coefficient of regression and t is the coefficient of correlation. Parent-offspring regression Heritability may be estimated by comparing parent and offspring traits (as in Fig. 2). The slope of the line (0.57) approximates the heritability of the trait when offspring values are regressed against the average trait in the parents. If only one parent's value is used then heritability is twice the slope. (This is the source of the term "regression," since the offspring values always tend to regress to the mean value for the population, i.e., the slope is always less than one). This regression effect also underlies the DeFries–Fulker method for analyzing twins selected for one member being affected. Sibling comparison A basic approach to heritability can be taken using full-Sib designs: comparing similarity between siblings who share both a biological mother and a father. When there is only additive gene action, this sibling phenotypic correlation is an index of familiarity – the sum of half the additive genetic variance plus full effect of the common environment. It thus places an upper limit on additive heritability of twice the full-Sib phenotypic correlation. Half-Sib designs compare phenotypic traits of siblings that share one parent with other sibling groups. Twin studies Heritability for traits in humans is most frequently estimated by comparing resemblances between twins. "The advantage of twin studies, is that the total variance can be split up into genetic, shared or common environmental, and unique environmental components, enabling an accurate estimation of heritability". Fraternal or dizygotic (DZ) twins on average share half their genes (assuming there is no assortative mating for the trait), and so identical or monozygotic (MZ) twins on average are twice as genetically similar as DZ twins. 
A crude estimate of heritability, then, is approximately twice the difference in correlation between MZ and DZ twins, i.e. Falconer's formula H2=2(r(MZ)-r(DZ)). The effect of shared environment, c2, contributes to similarity between siblings due to the commonality of the environment they are raised in. Shared environment is approximated by the DZ correlation minus half heritability, which is the degree to which DZ twins share the same genes, c2=DZ-1/2h2. Unique environmental variance, e2, reflects the degree to which identical twins raised together are dissimilar, e2=1-r(MZ). Analysis of variance methods of estimation The second set of methods of estimation of heritability involves ANOVA and estimation of variance components. Basic model We use the basic discussion of Kempthorne. Considering only the most basic of genetic models, we can look at the quantitative contribution of a single locus with genotype Gi as where is the effect of genotype Gi and is the environmental effect. Consider an experiment with a group of sires and their progeny from random dams. Since the progeny get half of their genes from the father and half from their (random) mother, the progeny equation is Intraclass correlations Consider the experiment above. We have two groups of progeny we can compare. The first is comparing the various progeny for an individual sire (called within sire group). The variance will include terms for genetic variance (since they did not all get the same genotype) and environmental variance. This is thought of as an error term. The second group of progeny are comparisons of means of half sibs with each other (called among sire group). In addition to the error term as in the within sire groups, we have an addition term due to the differences among different means of half sibs. The intraclass correlation is , since environmental effects are independent of each other. The ANOVA In an experiment with sires and progeny per sire, we can calculate the following ANOVA, using as the genetic variance and as the environmental variance: The term is the intraclass correlation between half sibs. We can easily calculate . The expected mean square is calculated from the relationship of the individuals (progeny within a sire are all half-sibs, for example), and an understanding of intraclass correlations. The use of ANOVA to calculate heritability often fails to account for the presence of gene–-environment interactions, because ANOVA has a much lower statistical power for testing for interaction effects than for direct effects. Model with additive and dominance terms For a model with additive and dominance terms, but not others, the equation for a single locus is where is the additive effect of the ith allele, is the additive effect of the jth allele, is the dominance deviation for the ijth genotype, and is the environment. Experiments can be run with a similar setup to the one given in Table 1. Using different relationship groups, we can evaluate different intraclass correlations. Using as the additive genetic variance and as the dominance deviation variance, intraclass correlations become linear functions of these parameters. In general, Intraclass correlation where and are found as P[ alleles drawn at random from the relationship pair are identical by descent], and P[ genotypes drawn at random from the relationship pair are identical by descent]. Some common relationships and their coefficients are given in Table 2. 
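The twin-based decomposition described above reduces to a few lines of arithmetic; the sketch below applies Falconer's formula and the accompanying shared- and unique-environment estimates to made-up MZ and DZ correlations.

# Twin-study ACE decomposition (Falconer's formula): h^2 = 2*(rMZ - rDZ),
# shared environment c^2 = rDZ - h^2/2 (equivalently 2*rDZ - rMZ),
# unique environment e^2 = 1 - rMZ. Correlations below are illustrative.
def ace_from_twin_correlations(r_mz, r_dz):
    h2 = 2 * (r_mz - r_dz)
    c2 = r_dz - h2 / 2
    e2 = 1 - r_mz
    return h2, c2, e2

h2, c2, e2 = ace_from_twin_correlations(r_mz=0.75, r_dz=0.50)
print(f"h^2 = {h2:.2f}, c^2 = {c2:.2f}, e^2 = {e2:.2f}")   # 0.50, 0.25, 0.25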
Linear mixed models A wide variety of approaches using linear mixed models have been reported in literature. Via these methods, phenotypic variance is partitioned into genetic, environmental and experimental design variances to estimate heritability. Environmental variance can be explicitly modeled by studying individuals across a broad range of environments, although inference of genetic variance from phenotypic and environmental variance may lead to underestimation of heritability due to the challenge of capturing the full range of environmental influence affecting a trait. Other methods for calculating heritability use data from genome-wide association studies to estimate the influence on a trait by genetic factors, which is reflected by the rate and influence of putatively associated genetic loci (usually single-nucleotide polymorphisms) on the trait. This can lead to underestimation of heritability, however. This discrepancy is referred to as "missing heritability" and reflects the challenge of accurately modeling both genetic and environmental variance in heritability models. When a large, complex pedigree or another aforementioned type of data is available, heritability and other quantitative genetic parameters can be estimated by restricted maximum likelihood (REML) or Bayesian methods. The raw data will usually have three or more data points for each individual: a code for the sire, a code for the dam and one or several trait values. Different trait values may be for different traits or for different time points of measurement. The currently popular methodology relies on high degrees of certainty over the identities of the sire and dam; it is not common to treat the sire identity probabilistically. This is not usually a problem, since the methodology is rarely applied to wild populations (although it has been used for several wild ungulate and bird populations), and sires are invariably known with a very high degree of certainty in breeding programmes. There are also algorithms that account for uncertain paternity. The pedigrees can be viewed using programs such as Pedigree Viewer , and analyzed with programs such as ASReml, VCE , WOMBAT , MCMCglmm within the R environment or the BLUPF90 family of programs . Pedigree models are helpful for untangling confounds such as reverse causality, maternal effects such as the prenatal environment, and confounding of genetic dominance, shared environment, and maternal gene effects. Genomic heritability When genome-wide genotype data and phenotypes from large population samples are available, one can estimate the relationships between individuals based on their genotypes and use a linear mixed model to estimate the variance explained by the genetic markers. This gives a genomic heritability estimate based on the variance captured by common genetic variants. There are multiple methods that make different adjustments for allele frequency and linkage disequilibrium. Particularly, the method called High-Definition Likelihood (HDL) can estimate genomic heritability using only GWAS summary statistics, making it easier to incorporate large sample size available in various GWAS meta-analysis. 
Response to selection
In selective breeding of plants and animals, the expected response to selection of a trait with known narrow-sense heritability can be estimated using the breeder's equation: R = h2·S. In this equation, the response to selection (R) is defined as the realized average difference between the parent generation and the next generation, and the selection differential (S) is defined as the average difference between the parent generation and the selected parents. For example, imagine that a plant breeder is involved in a selective breeding project with the aim of increasing the number of kernels per ear of corn. For the sake of argument, let us assume that the average ear of corn in the parent generation has 100 kernels. Let us also assume that the selected parents produce corn with an average of 120 kernels per ear. If h2 equals 0.5, then the next generation will produce corn with an average of 0.5(120 − 100) = 10 additional kernels per ear. Therefore, the total number of kernels per ear of corn will equal, on average, 110. Observing the response to selection in an artificial selection experiment will allow calculation of realized heritability as in Fig. 4. Heritability in the above equation is equal to the ratio Var(A)/Var(P) only if the genotype and the environmental noise follow Gaussian distributions.
Controversies
Prominent critics of heritability estimates, such as Steven Rose, Jay Joseph, and Richard Bentall, focus largely on heritability estimates in the behavioral and social sciences. Bentall has claimed that such heritability scores are typically calculated counterintuitively to derive numerically high scores, that heritability is misinterpreted as genetic determination, and that this alleged bias distracts from other factors that researchers have found more causally important, such as childhood abuse causing later psychosis. Heritability estimates are also inherently limited because they do not convey any information regarding whether genes or environment play a larger role in the development of the trait under study. For this reason, David Moore and David Shenk describe the term "heritability" in the context of behavior genetics as "...one of the most misleading in the history of science" and argue that it has no value except in very rare cases. When studying complex human traits, it is impossible to use heritability analysis to determine the relative contributions of genes and environment, as such traits result from multiple causes interacting. In particular, Feldman and Lewontin emphasize that heritability is itself a function of environmental variation. However, some researchers argue that it is possible to disentangle the two. The controversy over heritability estimates stems largely from their basis in twin studies. The limited success of molecular-genetic studies in corroborating such population-genetic studies' conclusions is known as the missing heritability problem. Eric Turkheimer has argued that newer molecular methods have vindicated the conventional interpretation of twin studies, although it remains mostly unclear how to explain the relations between genes and behaviors. According to Turkheimer, both genes and environment are heritable, genetic contribution varies by environment, and a focus on heritability distracts from other important factors. Overall, however, heritability remains a widely applicable concept.
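Returning to the breeder's equation R = h2·S introduced at the start of the "Response to selection" section, the sketch below reproduces the corn arithmetic; the function name is an illustrative choice.

# Breeder's equation R = h^2 * S, using the corn example above:
# parental mean 100 kernels, selected-parent mean 120, narrow-sense h^2 = 0.5.
def response_to_selection(h2, selected_mean, population_mean):
    S = selected_mean - population_mean   # selection differential
    return h2 * S                         # expected response

R = response_to_selection(h2=0.5, selected_mean=120, population_mean=100)
print(f"expected gain = {R} kernels; next-generation mean ~ {100 + R}")  # 10.0, 110.0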
Biology and health sciences
Genetics
Biology
155625
https://en.wikipedia.org/wiki/Penetrance
Penetrance
Penetrance in genetics is the proportion of individuals carrying a particular variant (or allele) of a gene (genotype) that also expresses an associated trait (phenotype). In medical genetics, the penetrance of a disease-causing mutation is the proportion of individuals with the mutation that exhibit clinical symptoms among all individuals with such mutation. For example: If a mutation in the gene responsible for a particular autosomal dominant disorder has 95% penetrance, then 95% of those with the mutation will go on to develop the disease, showing its phenotype, whereas 5% will not.   Penetrance only refers to whether an individual with a specific genotype exhibits any phenotypic signs or symptoms, and is not to be confused with variable expressivity which is to what extent or degree the symptoms for said disease are shown (the expression of the phenotypic trait). Meaning that, even if the same disease-causing mutation affects separate individuals, the expressivity will vary. Degrees of penetrance Complete penetrance If 100% of individuals carrying a particular genotype express the associated trait, the genotype is said to show complete penetrance. Neurofibromatosis type 1 (NF1), is an autosomal dominant condition which shows complete penetrance, consequently everyone who inherits the disease-causing variant of this gene will develop some degree of symptoms for NF1. Reduced penetrance The penetrance is said to be reduced if less than 100% of individuals carrying a particular genotype express associated traits, and is likely to be caused by a combination of genetic, environmental and lifestyle factors. BRCA1 is an example of a genotype with reduced penetrance. By age 70, the mutation is estimated to have a breast cancer penetrance of around 65% in women. Meaning that about 65% of women carrying the gene will develop breast cancer by the time they turn 70. Non-penetrance: Within the category of reduced penetrance, individuals carrying the mutation without displaying any signs or symptoms, are said to have a genotype that is non-penetrant. For the BRCA1 example above, the remaining 35% which never develop breast cancer, are therefore carrying the mutation, but it is non-penetrant. This can lead to healthy, unaffected parents carrying the mutation on to future generations that might be affected. Factors affecting penetrance Many factors such as age, sex, environment, epigenetic modifiers, and modifier genes are linked to penetrance. These factors can help explain why certain individuals with a specific genotype exhibit symptoms or signs of disease, whilst others do not. Age-dependent penetrance If clinical signs associated with a specific genotype appear more frequently with increasing age, the penetrance is said to be age dependent. Some diseases are non-penetrant up until a certain age and then the penetrance starts to increase drastically, whilst others exhibit low penetrance at an early age and continue to increase with time. For this reason, many diseases have a different estimated penetrance dependent on the age. A specific hexanucleotide repeat expansion within the C9orf72 gene said to be a major cause for developing amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) is an example of a genotype with age dependent penetrance. The genotype is said to be non-penetrant until the age of 35, 50% penetrant by the age of 60, and almost completely penetrant by age 80. 
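Penetrance, as defined above, is simply the fraction of carriers who express the phenotype, and age-dependent penetrance is the same fraction evaluated within age strata. The sketch below illustrates both; the carrier counts and the age table are invented for illustration (the age figures only echo the approximate C9orf72 values quoted above and are not clinical estimates).

# Penetrance as the proportion of variant carriers who show the phenotype.
def penetrance(affected_carriers, total_carriers):
    return affected_carriers / total_carriers

print(f"{penetrance(19, 20):.0%} penetrant")   # e.g. 19 of 20 carriers affected -> 95%

# Age-dependent penetrance as a lookup by age, loosely echoing the C9orf72
# figures quoted above (illustrative values only).
age_penetrance = {35: 0.0, 60: 0.5, 80: 0.97}
for age in sorted(age_penetrance):
    print(f"penetrance by age {age}: {age_penetrance[age]:.0%}")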
Gender-related penetrance For some mutations, the phenotype is more frequently present in one sex, and in rare cases mutations appear completely non-penetrant in a particular gender. This is called gender-related penetrance or sex-dependent penetrance and may be the result of allelic variation, disorders in which the expression of the disease is limited to organs found in only one sex such as the testes or ovaries, or sex steroid-responsive genes. Breast cancer caused by the BRCA2 mutation is an example of a disease with gender-related penetrance. The penetrance has been determined to be much higher in women than in men: by age 70, around 86% of females, in contrast to 6% of males with the same mutation, are estimated to develop breast cancer. In cases where clinical symptoms or the phenotype related to a genetic mutation are present only in one sex, the disorder is said to be sex-limited. Familial male-limited precocious puberty (FMPP), caused by a mutation in the LHCGR gene, is an example of a genotype only penetrant in males: males with this particular genotype exhibit symptoms of the disease, whilst the same genotype is non-penetrant in females. Genetic modifiers Genetic modifiers are genetic variants or mutations able to modify a primary disease-causing variant's phenotypic outcome without being disease-causing themselves. For instance, in single gene disorders there is one gene primarily responsible for development of the disease, but modifier genes inherited separately can affect the phenotype. This means that a mutation located at a locus different from that of the disease-causing mutation may either hinder manifestation of the phenotype or alter the mutation's effects, thereby influencing the penetrance. Environmental modifiers Environmental and lifestyle exposures such as chemicals, diet, alcohol intake, drugs and stress are some of the factors that might influence disease penetrance. For example, several studies of BRCA1 and BRCA2 mutations, associated with an elevated risk of breast and ovarian cancer in women, have examined associations with environmental and behavioral modifiers such as pregnancies, history of breast feeding, smoking, diet, and so forth. Epigenetic regulation Sometimes, genetic alterations which can cause genetic disease and phenotypic traits arise not from changes related directly to the DNA sequence, but from epigenetic alterations such as DNA methylation or histone modifications. Epigenetic differences may therefore be one of the factors contributing to reduced penetrance. A study done on a pair of genetically identical monozygotic twins, where one twin was diagnosed with leukemia and later with thyroid carcinoma whilst the other had no registered illnesses, showed that the affected twin had increased methylation levels of the BRCA1 gene. The research concluded that the family had no known DNA-repair syndrome or any other hereditary diseases in the last four generations, and no genetic differences between the studied pair of monozygotic twins were detected in the BRCA1 regulatory region. This indicates that epigenetic changes caused by environmental or behavioral factors played a key role in the promoter hypermethylation of the BRCA1 gene in the affected twin, which caused the cancer. Determining penetrance It can be challenging to estimate the penetrance of a specific genotype due to all the influencing factors.
In addition to the factors mentioned above, there are several other considerations that must be taken into account when penetrance is determined. Ascertainment bias Penetrance estimates can be affected by ascertainment bias if the sampling is not systematic. Traditionally, a phenotype-driven approach focusing on individuals with a given condition and their family members has been used to determine penetrance. However, it may be difficult to transfer these estimates to the general population, because family members may share other genetic and/or environmental factors that could influence manifestation of the disease, leading to ascertainment bias and an overestimation of the penetrance. Large-scale population-based studies, which use both genetic sequencing and phenotype data from large groups of people, are a different method for determining penetrance. This method introduces less upward bias than family-based studies and becomes more accurate as the sample population grows, although such studies may contain a healthy-participant bias, which can lead to lower penetrance estimates. Phenocopies A genotype with complete penetrance will always display the clinical phenotypic traits related to its mutation (taking expressivity into consideration), but the signs or symptoms displayed by a specific affected individual can often resemble other, unrelated phenotypic traits. Given the effect that environmental or behavioral modifiers can have, and how they can mimic the consequences of a mutation or epigenetic alteration, different paths can lead to the same phenotypic display. When similar phenotypes are observed but arise from different causes, they are called phenocopies: a phenocopy occurs when environmental and/or behavioral modifiers cause an illness which mimics the phenotype of an inherited genetic disease. Because of phenocopies, determining the degree of penetrance for a genetic disease requires full knowledge of the individuals participating in the studies and of the factors that may or may not have caused their illness. For example, new research on hypertrophic cardiomyopathy (HCM), based on a technique called cardiac magnetic resonance (CMR) imaging, describes how various genetic illnesses that show the same phenotypic traits as HCM are actually phenocopies. Previously these phenocopies were all diagnosed and treated as though they arose from the same cause, but thanks to new diagnostic methods they can now be distinguished and treated more effectively. Subjects not yet covered: allelic heterogeneity, polygenic inheritance, locus heterogeneity.
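As a rough illustration of how the sampling strategy discussed above feeds into a penetrance estimate, the sketch below computes a naive point estimate with a Wilson score confidence interval from carrier counts. All numbers are invented for illustration; the comparison of a "family-ascertained" versus a "population-based" cohort is only meant to mirror the upward-bias point made above, not to reproduce any real study.

```python
from math import sqrt

def penetrance_with_ci(affected: int, carriers: int, z: float = 1.96):
    """Point estimate of penetrance plus an approximate 95% Wilson score interval."""
    p = affected / carriers
    denom = 1 + z**2 / carriers
    centre = (p + z**2 / (2 * carriers)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / carriers + z**2 / (4 * carriers**2))
    return p, (centre - half, centre + half)

# Invented counts: family-based ascertainment tends to enrich for affected carriers,
# so its raw estimate sits higher than the population-based one.
for label, affected, carriers in [("family-ascertained", 45, 60),
                                  ("population-based", 180, 400)]:
    p, (lo, hi) = penetrance_with_ci(affected, carriers)
    print(f"{label}: {p:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```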
Biology and health sciences
Genetics
Biology
155627
https://en.wikipedia.org/wiki/Ibuprofen
Ibuprofen
Ibuprofen is a nonsteroidal anti-inflammatory drug (NSAID) that is used to relieve pain, fever, and inflammation. This includes painful menstrual periods, migraines, and rheumatoid arthritis. It may also be used to close a patent ductus arteriosus in a premature baby. It can be taken orally (by mouth) or intravenously. It typically begins working within an hour. Common side effects include heartburn, nausea, indigestion, and abdominal pain. As with other NSAIDs, potential side effects include gastrointestinal bleeding. Long-term use has been associated with kidney failure, and rarely liver failure, and it can exacerbate the condition of patients with heart failure. At low doses, it does not appear to increase the risk of heart attack; however, at higher doses it may. Ibuprofen can also worsen asthma. While its safety in early pregnancy is unclear, it appears to be harmful in later pregnancy, so it is not recommended during that period. Like other NSAIDs, it works by inhibiting the production of prostaglandins by decreasing the activity of the enzyme cyclooxygenase (COX). Ibuprofen is a weaker anti-inflammatory agent than other NSAIDs. Ibuprofen was discovered in 1961 by Stewart Adams and John Nicholson while working at Boots UK Limited and initially marketed as Brufen. It is available under a number of brand names including Advil, Motrin, and Nurofen. Ibuprofen was first marketed in 1969 in the United Kingdom and in 1974 in the United States. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 33rd most commonly prescribed medication in the United States, with more than 17million prescriptions. Medical uses Ibuprofen is used primarily to treat fever (including postvaccination fever), mild to moderate pain (including pain relief after surgery), painful menstruation, osteoarthritis, dental pain, headaches, and pain from kidney stones. About 60% of people respond to any NSAID; those who do not respond well to a particular one may respond to another. A Cochrane medical review of 51 trials of NSAIDs for the treatment of lower back pain found that "NSAIDs are effective for short-term symptomatic relief in patients with acute low back pain". It is used for inflammatory diseases such as juvenile idiopathic arthritis and rheumatoid arthritis. It is also used for pericarditis and patent ductus arteriosus. Ibuprofen lysine In some countries, ibuprofen lysine (the lysine salt of ibuprofen, sometimes called "ibuprofen lysinate") is licensed for treatment of the same conditions as ibuprofen; the lysine salt is used because it is more water-soluble. In 2006, ibuprofen lysine was approved in the United States by the Food and Drug Administration (FDA) for closure of patent ductus arteriosus in premature infants weighing between , who are no more than 32 weeks gestational age when usual medical management (such as fluid restriction, diuretics, and respiratory support) is not effective. Adverse effects Adverse effects include nausea, heartburn, indigestion, diarrhea, constipation, gastrointestinal ulceration, headache, dizziness, rash, salt and fluid retention, and high blood pressure. Infrequent adverse effects include esophageal ulceration, heart failure, high blood levels of potassium, kidney impairment, confusion, and bronchospasm. Ibuprofen can exacerbate asthma, sometimes fatally. Allergic reactions, including anaphylaxis, may occur. 
Ibuprofen may be quantified in blood, plasma, or serum to demonstrate the presence of the drug in a person having experienced an anaphylactic reaction, confirm a diagnosis of poisoning in people who are hospitalized, or assist in a medicolegal death investigation. A monograph relating ibuprofen plasma concentration, time since ingestion, and risk of developing renal toxicity in people who have overdosed has been published. In October 2020, the U.S. FDA required the drug label to be updated for all NSAID medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. Cardiovascular risk Along with several other NSAIDs, chronic ibuprofen use is correlated with the risk of progression to hypertension in women, though less than for paracetamol (acetaminophen), and myocardial infarction (heart attack), particularly among those chronically using higher doses. On 9 July 2015, the U.S. FDA toughened warnings of increased heart attack and stroke risk associated with ibuprofen and related NSAIDs; the NSAID aspirin is not included in this warning. The European Medicines Agency (EMA) issued similar warnings in 2015. Skin Along with other NSAIDs, ibuprofen has been associated with the onset of bullous pemphigoid or pemphigoid-like blistering. As with other NSAIDs, ibuprofen has been reported to be a photosensitizing agent, but it is considered a weak photosensitizing agent compared to other members of the 2-arylpropionic acid class. Like other NSAIDs, ibuprofen is an extremely rare cause of the autoimmune diseases Stevens–Johnson syndrome (SJS) and toxic epidermal necrolysis. Interactions Alcohol Drinking alcohol when taking ibuprofen may increase the risk of stomach bleeding. Aspirin According to the FDA, "ibuprofen can interfere with the antiplatelet effect of low-dose aspirin, potentially rendering aspirin less effective when used for cardioprotection and stroke prevention". Allowing sufficient time between doses of ibuprofen and immediate-release (IR) aspirin can avoid this problem. The recommended elapsed time between a dose of ibuprofen and a dose of aspirin depends on which is taken first. It would be 30 minutes or more for ibuprofen taken after IR aspirin, and 8 hours or more for ibuprofen taken before IR aspirin. However, this timing cannot be recommended for enteric-coated aspirin. If ibuprofen is taken only occasionally without the recommended timing, though, the reduction of the cardioprotection and stroke prevention of a daily aspirin regimen is minimal. Paracetamol (acetaminophen) Ibuprofen combined with paracetamol is considered generally safe in children for short-term usage. Overdose Ibuprofen overdose has become common since it was licensed for over-the-counter (OTC) use. Many overdose experiences are reported in the medical literature, although the frequency of life-threatening complications from ibuprofen overdose is low. Human responses in cases of overdose range from an absence of symptoms to a fatal outcome despite intensive-care treatment. Most symptoms are an excess of the pharmacological action of ibuprofen and include abdominal pain, nausea, vomiting, drowsiness, dizziness, headache, ear ringing, and nystagmus. Rarely, more severe symptoms such as gastrointestinal bleeding, seizures, metabolic acidosis, hyperkalemia, low blood pressure, slow heart rate, fast heart rate, atrial fibrillation, coma, liver dysfunction, acute kidney failure, cyanosis, respiratory depression, and cardiac arrest have been reported. 
The severity of symptoms varies with the ingested dose and the time elapsed; however, individual sensitivity also plays an important role. Generally, the symptoms observed with an overdose of ibuprofen are similar to the symptoms caused by overdoses of other NSAIDs. The correlation between the severity of symptoms and measured ibuprofen plasma levels is weak. Toxic effects are unlikely at doses below 100mg/kg, but can be severe above 400mg/kg (around 150 tablets of 200mg units for an average adult male); however, large doses do not indicate the clinical course is likely to be lethal. A precise lethal dose is difficult to determine, as it may vary with age, weight, and concomitant conditions of the person. Treatment to address an ibuprofen overdose is based on how the symptoms present. In cases presenting early, decontamination of the stomach is recommended. This is achieved using activated charcoal; charcoal absorbs the drug before it can enter the bloodstream. Gastric lavage is now rarely used, but can be considered if the amount ingested is potentially life-threatening, and it can be performed within 60 minutes of ingestion. Purposeful vomiting is not recommended. Most ibuprofen ingestions produce only mild effects, and the management of overdose is straightforward. Standard measures to maintain normal urine output should be instituted and kidney function monitored. Since ibuprofen has acidic properties and is also excreted in the urine, forced alkaline diuresis is theoretically beneficial. However, because ibuprofen is highly protein-bound in the blood, the kidneys' excretion of the unchanged drug is minimal. Forced alkaline diuresis is, therefore, of limited benefit. Miscarriage A Canadian study of pregnant women suggests that those taking any type or amount of NSAIDs (including ibuprofen, diclofenac, and naproxen) were 2.4 times more likely to miscarry than those not taking the medications. However, an Israeli study found no increased risk of miscarriage in the group of mothers using NSAIDs. Pharmacology Ibuprofen works by inhibiting cyclooxygenase (COX) enzymes, which convert arachidonic acid to prostaglandin H2 (PGH2). PGH2, in turn, is converted by other enzymes into various prostaglandins (which mediate pain, inflammation, and fever) and thromboxane A2 (which stimulates platelet aggregation and promotes blood clot formation). Like aspirin and indomethacin, ibuprofen is a nonselective COX inhibitor, in that it inhibits two isoforms of cyclooxygenase, COX-1 and COX-2. The analgesic, antipyretic, and anti-inflammatory activity of NSAIDs appears to operate mainly through inhibition of COX-2, which decreases the synthesis of prostaglandins involved in mediating inflammation, pain, fever, and swelling. Antipyretic effects may be due to action on the hypothalamus, resulting in an increased peripheral blood flow, vasodilation, and subsequent heat dissipation. Inhibition of COX-1 instead would be responsible for unwanted effects on the gastrointestinal tract. However, the role of the individual COX isoforms in the analgesic, anti-inflammatory, and gastric damage effects of NSAIDs is uncertain, and different compounds cause different degrees of analgesia and gastric damage. Ibuprofen is administered as a racemic mixture. The R-enantiomer undergoes extensive interconversion to the S-enantiomer in vivo. The S-enantiomer is believed to be the more pharmacologically active enantiomer. The R-enantiomer is converted through a series of three main enzymes. 
These enzymes include acyl-CoA-synthetase, which converts the R-enantiomer to (−)-R-ibuprofen I-CoA; 2-arylpropionyl-CoA epimerase, which converts (−)-R-ibuprofen I-CoA to (+)-S-ibuprofen I-CoA; and hydrolase, which converts (+)-S-ibuprofen I-CoA to the S-enantiomer. In addition to the conversion of ibuprofen to the S-enantiomer, the body can metabolize ibuprofen to several other compounds, including numerous hydroxyl, carboxyl and glucuronyl metabolites. Virtually all of these have no pharmacological effects. Unlike most other NSAIDs, ibuprofen also acts as an inhibitor of Rho kinase and may be useful in recovery from spinal cord injury. Another unusual activity is inhibition of the sweet taste receptor. Pharmacokinetics After oral administration, peak serum concentration is reached after 1–2 hours, and up to 99% of the drug is bound to plasma proteins. The majority of ibuprofen is metabolized and eliminated within 24 hours in the urine; however, 1% of the unchanged drug is removed through biliary excretion. Chemistry Ibuprofen is practically insoluble in water, but very soluble in most organic solvents like ethanol (66.18 g/100 mL at 40 °C for 90% EtOH), methanol, acetone and dichloromethane. The original synthesis of ibuprofen by the Boots Group started with the compound isobutylbenzene. The synthesis took six steps. A modern, greener technique with fewer waste byproducts for the synthesis involves only three steps and was developed in the 1980s by the Celanese Chemical Company. The synthesis is initiated with the acylation of isobutylbenzene using the recyclable Lewis acid catalyst hydrogen fluoride. The following catalytic hydrogenation of isobutylacetophenone is performed with either Raney nickel or palladium on carbon to lead into the key step, the carbonylation of 1-(4-isobutylphenyl)ethanol. This is achieved by a PdCl2(PPh3)2 catalyst, at around 50 bar of CO pressure, in the presence of HCl (10%). The reaction presumably proceeds through the intermediacy of the styrene derivative (acidic elimination of the alcohol) and the (1-chloroethyl)benzene derivative (Markovnikov addition of HCl to the double bond). Stereochemistry Ibuprofen, like other 2-arylpropionate derivatives such as ketoprofen, flurbiprofen and naproxen, contains a stereocenter in the α-position of the propionate moiety. The product sold in pharmacies is a racemic mixture of the S- and R-isomers. The S (dextrorotatory) isomer is the more biologically active; this isomer has been isolated and used medically (see dexibuprofen for details). The isomerase enzyme, alpha-methylacyl-CoA racemase, converts (R)-ibuprofen into the (S)-enantiomer. (S)-ibuprofen, the eutomer, harbors the desired therapeutic activity. The inactive (R)-enantiomer, the distomer, undergoes a unidirectional chiral inversion to give the active (S)-enantiomer. That is, when ibuprofen is administered as a racemate, the distomer is converted in vivo into the eutomer while the latter is unaffected. History Ibuprofen was derived from propionic acid by the research arm of Boots Group during the 1960s. The name is derived from its three functional groups: isobutyl (ibu), propionic acid (pro), and phenyl (fen). Its discovery was the result of research during the 1950s and 1960s to find a safer alternative to aspirin. The molecule was discovered and synthesized by a team led by Stewart Adams, with a patent application filed in 1961. Adams initially tested the drug as a treatment for his hangover.
In 1985, Boots' worldwide patent for ibuprofen expired and generic products were launched. The medication was launched as a treatment for rheumatoid arthritis in the United Kingdom in 1969, and in the United States in 1974. Later, in 1983 and 1984, it became the first NSAID (other than aspirin) to be available over-the-counter (OTC) in these two countries. Boots was awarded the Queen's Award for Technical Achievement in 1985 for the development of the drug. In November 2013, work on ibuprofen was recognized by the erection of a Royal Society of Chemistry blue plaque at Boots' Beeston Factory site in Nottingham, and another at BioCity Nottingham, the site of the original laboratory. Availability and administration Ibuprofen was made available by prescription in the United Kingdom in 1969 and in the United States in 1974. Ibuprofen is the International Nonproprietary Name (INN), British Approved Name (BAN), Australian Approved Name (AAN) and United States Adopted Name (USAN). In the United States, it has been sold under the brand names Motrin and Advil since 1974 and 1984, respectively. Ibuprofen is commonly available in the United States over the counter up to the dose limit set by the FDA in 1984; higher doses, available by prescription, are rarely used. In 2009, the first injectable formulation of ibuprofen was approved in the United States, under the brand name Caldolor. Ibuprofen can be taken orally (by mouth), as a tablet, a capsule, or a suspension, and intravenously. Research Ibuprofen is sometimes used for the treatment of acne because of its anti-inflammatory properties, and has been sold in Japan in topical form for adult acne. As with other NSAIDs, ibuprofen may be useful in the treatment of severe orthostatic hypotension (low blood pressure when standing up). NSAIDs are of unclear utility in the prevention and treatment of Alzheimer's disease. Ibuprofen has been associated with a lower risk of Parkinson's disease and may delay or prevent it. Aspirin, other NSAIDs, and paracetamol (acetaminophen) had no effect on the risk for Parkinson's. In March 2011, researchers at Harvard Medical School announced in Neurology that ibuprofen had a neuroprotective effect against the risk of developing Parkinson's disease. People regularly consuming ibuprofen were reported to have a 38% lower risk of developing Parkinson's disease, but no such effect was found for other pain relievers, such as aspirin and paracetamol. Use of ibuprofen to lower the risk of Parkinson's disease in the general population would not be problem-free, given the possibility of adverse effects on the urinary and digestive systems. Some dietary supplements might be dangerous to take along with ibuprofen and other NSAIDs, but more research needs to be conducted to be certain. These supplements include those that can prevent platelet aggregation, including ginkgo, garlic, ginger, bilberry, dong quai, feverfew, ginseng, turmeric, meadowsweet (Filipendula ulmaria), and willow (Salix spp.); those that contain coumarin, including chamomile, horse chestnut, fenugreek and red clover; and those that increase the risk of bleeding, like tamarind. Ibuprofen lysine is sold for rapid pain relief; given in the form of its lysine salt, absorption is much quicker (35 minutes for the salt compared to 90–120 minutes for ibuprofen). However, a clinical trial with 351 participants in 2020, funded by Sanofi, found no significant difference between ibuprofen and ibuprofen lysine concerning the eventual onset of action or analgesic efficacy.
Biology and health sciences
Pain treatments
Health
155650
https://en.wikipedia.org/wiki/Fluoride
Fluoride
Fluoride is an inorganic, monatomic anion of fluorine, with the chemical formula F−, whose salts are typically white or colorless. Fluoride salts typically have distinctive bitter tastes, and are odorless. Its salts and minerals are important chemical reagents and industrial chemicals, mainly used in the production of hydrogen fluoride for fluorocarbons. Fluoride is classified as a weak base since it only partially associates in solution, but concentrated fluoride is corrosive and can attack the skin. Fluoride is the simplest fluorine anion. In terms of charge and size, the fluoride ion resembles the hydroxide ion. Fluoride ions occur on Earth in several minerals, particularly fluorite, but are present only in trace quantities in bodies of water in nature. Nomenclature Fluorides include compounds that contain ionic fluoride and those in which fluoride does not dissociate. The nomenclature does not distinguish these situations. For example, sulfur hexafluoride and carbon tetrafluoride are not sources of fluoride ions under ordinary conditions. The systematic name fluoride, the valid IUPAC name, is determined according to the additive nomenclature. However, the name fluoride is also used in compositional IUPAC nomenclature which does not take the nature of bonding involved into account. Fluoride is also used non-systematically, to describe compounds which release fluoride upon dissolving. Hydrogen fluoride is itself an example of a non-systematic name of this nature. However, it is also a trivial name, and the preferred IUPAC name for fluorane. Occurrence Fluorine is estimated to be the 13th-most abundant element in Earth's crust and is widely dispersed in nature, entirely in the form of fluorides. The vast majority is held in mineral deposits, the most commercially important of which is fluorite (CaF2). Natural weathering of some kinds of rocks, as well as human activities, releases fluorides into the biosphere through what is sometimes called the fluorine cycle. In water Fluoride is naturally present in groundwater, fresh and saltwater sources, as well as in rainwater, particularly in urban areas. Seawater fluoride levels are usually in the range of 0.86 to 1.4 mg/L, and average 1.1 mg/L (milligrams per litre). For comparison, chloride concentration in seawater is about 19 g/L. The low concentration of fluoride reflects the insolubility of the alkaline earth fluorides, e.g., CaF2. Concentrations in fresh water vary more significantly. Surface water such as rivers or lakes generally contains between 0.01 and 0.3 mg/L. Groundwater (well water) concentrations vary even more, depending on the presence of local fluoride-containing minerals. For example, natural levels of under 0.05 mg/L have been detected in parts of Canada but up to 8 mg/L in parts of China; in general, levels rarely exceed 10 mg/L. In parts of Asia the groundwater can contain dangerously high levels of fluoride, leading to serious health problems. Worldwide, 50 million people receive water from water supplies that naturally have close to the "optimal level". In other locations the level of fluoride is very low, sometimes leading to fluoridation of public water supplies to bring the level to around 0.7–1.2 ppm. Mining can increase local fluoride levels. Fluoride can be present in rain, with its concentration increasing significantly upon exposure to volcanic activity or atmospheric pollution derived from burning fossil fuels or other sorts of industry, particularly aluminium smelters.
In plants All vegetation contains some fluoride, which is absorbed from soil and water. Some plants concentrate fluoride from their environment more than others. All tea leaves contain fluoride; however, mature leaves contain as much as 10 to 20 times the fluoride levels of young leaves from the same plant. Chemical properties Basicity Fluoride can act as a base. It can combine with a proton (): This neutralization reaction forms hydrogen fluoride (HF), the conjugate acid of fluoride. In aqueous solution, fluoride has a pKb value of 10.8. It is therefore a weak base, and tends to remain as the fluoride ion rather than generating a substantial amount of hydrogen fluoride. That is, the following equilibrium favours the left-hand side in water: However, upon prolonged contact with moisture, soluble fluoride salts will decompose to their respective hydroxides or oxides, as the hydrogen fluoride escapes. Fluoride is distinct in this regard among the halides. The identity of the solvent can have a dramatic effect on the equilibrium shifting it to the right-hand side, greatly increasing the rate of decomposition. Structure of fluoride salts Salts containing fluoride are numerous and adopt myriad structures. Typically the fluoride anion is surrounded by four or six cations, as is typical for other halides. Sodium fluoride and sodium chloride adopt the same structure. For compounds containing more than one fluoride per cation, the structures often deviate from those of the chlorides, as illustrated by the main fluoride mineral fluorite (CaF2) where the Ca2+ ions are surrounded by eight F− centers. In CaCl2, each Ca2+ ion is surrounded by six Cl− centers. The difluorides of the transition metals often adopt the rutile structure whereas the dichlorides have cadmium chloride structures. Inorganic chemistry Upon treatment with a standard acid, fluoride salts convert to hydrogen fluoride and metal salts. With strong acids, it can be doubly protonated to give . Oxidation of fluoride gives fluorine. Solutions of inorganic fluorides in water contain F− and bifluoride . Few inorganic fluorides are soluble in water without undergoing significant hydrolysis. In terms of its reactivity, fluoride differs significantly from chloride and other halides, and is more strongly solvated in protic solvents due to its smaller radius/charge ratio. Its closest chemical relative is hydroxide, since both have similar geometries. Naked fluoride Most fluoride salts dissolve to give the bifluoride () anion. Sources of true F− anions are rare because the highly basic fluoride anion abstracts protons from many, even adventitious, sources. Relative unsolvated fluoride, which does exist in aprotic solvents, is called "naked". Naked fluoride is a strong Lewis base, and a powerful nucleophile. Some quaternary ammonium salts of naked fluoride include tetramethylammonium fluoride and tetrabutylammonium fluoride. Cobaltocenium fluoride is another example. However, they all lack structural characterization in aprotic solvents. Because of their high basicity, many so-called naked fluoride sources are in fact bifluoride salts. In late 2016 imidazolium fluoride was synthesized that is the closest approximation of a thermodynamically stable and structurally characterized example of a "naked" fluoride source in an aprotic solvent (acetonitrile). The sterically demanding imidazolium cation stabilizes the discrete anions and protects them from polymerization. 
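The weak basicity described in the section above can be made concrete with a quick Henderson–Hasselbalch estimate. The sketch below is illustrative only: it takes the pKa of hydrogen fluoride as 14 − pKb ≈ 3.2, using the pKb of 10.8 quoted above and assuming water at 25 °C, and computes the fraction of total fluoride present as F− at a given pH.

```python
def fraction_fluoride(ph: float, pka_hf: float = 14.0 - 10.8) -> float:
    """Fraction of total fluoride present as F- (vs. HF) at a given pH,
    from the Henderson-Hasselbalch relation: [F-]/[HF] = 10**(pH - pKa)."""
    ratio = 10 ** (ph - pka_hf)
    return ratio / (1 + ratio)

for ph in (2.0, 3.2, 7.4):
    print(f"pH {ph}: {fraction_fluoride(ph):.2%} of total fluoride is F-")
# At pH 7.4 essentially all fluoride is ionised, consistent with the statement
# in the Biochemistry section below that HF is fully ionised at physiological pH;
# at pH 2 (roughly gastric conditions) a large share exists as HF instead.
```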
Biochemistry At physiological pHs, hydrogen fluoride is usually fully ionised to fluoride. In biochemistry, fluoride and hydrogen fluoride are equivalent. Fluorine, in the form of fluoride, is considered to be a micronutrient for human health, necessary to prevent dental cavities, and to promote healthy bone growth. The tea plant (Camellia sinensis L.) is a known accumulator of fluorine compounds, released upon forming infusions such as the common beverage. The fluorine compounds decompose into products including fluoride ions. Fluoride is the most bioavailable form of fluorine, and as such, tea is potentially a vehicle for fluoride dosing. Approximately 50% of absorbed fluoride is excreted renally within a twenty-four-hour period. The remainder can be retained in the oral cavity and lower digestive tract. Fasting dramatically increases the rate of fluoride absorption to near 100%, from 60% to 80% when taken with food. A 2013 study found that consumption of one litre of tea a day can potentially supply the recommended daily intake of 4 mg. Some lower-quality brands can supply up to 120% of this amount. Fasting can increase this to 150%. The study indicates that tea-drinking communities are at an increased risk of dental and skeletal fluorosis where water fluoridation is in effect. Fluoride ion in low doses in the mouth reduces tooth decay. For this reason, it is used in toothpaste and water fluoridation. At much higher doses and frequent exposure, fluoride causes health complications and can be toxic. Applications Fluoride salts and hydrofluoric acid are the main fluorides of industrial value. Organofluorine chemistry Organofluorine compounds are pervasive. Many drugs, many polymers, refrigerants, and many inorganic compounds are made from fluoride-containing reagents. Often fluorides are converted to hydrogen fluoride, which is a major reagent and precursor to reagents. Hydrofluoric acid and its anhydrous form, hydrogen fluoride, are particularly important. Production of metals and their compounds The main uses of fluoride, in terms of volume, are in the production of cryolite, Na3AlF6. It is used in aluminium smelting. Formerly, it was mined, but now it is derived from hydrogen fluoride. Fluorite is used on a large scale to separate slag in steel-making. Mined fluorite (CaF2) is a commodity chemical used in steel-making. Uranium hexafluoride is employed in the purification of uranium isotopes. Cavity prevention Fluoride-containing compounds, such as sodium fluoride or sodium monofluorophosphate, are used in topical and systemic fluoride therapy for preventing tooth decay. They are used for water fluoridation and in many products associated with oral hygiene. Originally, sodium fluoride was used to fluoridate water; hexafluorosilicic acid (H2SiF6) and its salt sodium hexafluorosilicate (Na2SiF6) are more commonly used additives, especially in the United States. The fluoridation of water is known to prevent tooth decay and is considered by the U.S. Centers for Disease Control and Prevention to be "one of 10 great public health achievements of the 20th century". In some countries where large, centralized water systems are uncommon, fluoride is delivered to the populace by fluoridating table salt. For the method of action for cavity prevention, see Fluoride therapy. Fluoridation of water has its critics. Fluoridated toothpaste is in common use. Meta-analyses show the efficacy of 500 ppm fluoride in toothpastes.
However, no beneficial effect can be detected when more than one fluoride source is used for daily oral care. Laboratory reagent Fluoride salts are commonly used in biological assay processing to inhibit the activity of phosphatases, such as serine/threonine phosphatases. Fluoride mimics the nucleophilic hydroxide ion in these enzymes' active sites. Beryllium fluoride and aluminium fluoride are also used as phosphatase inhibitors, since these compounds are structural mimics of the phosphate group and can act as analogues of the transition state of the reaction. Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for some minerals in 1997. Where there was not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) was used instead. AIs are typically matched to actual average consumption, with the assumption that there appears to be a need, and that need is met by what people consume. The current AI for women 19 years and older is 3.0 mg/day (includes pregnancy and lactation). The AI for men is 4.0 mg/day. The AI for children ages 1–18 increases from 0.7 to 3.0 mg/day. The major known risk of fluoride deficiency appears to be an increased risk of bacteria-caused tooth cavities. As for safety, the IOM sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of fluoride the UL is 10 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women ages 18 and older the AI is set at 2.9 mg/day (including pregnancy and lactation). For men, the value is 3.4 mg/day. For children ages 1–17 years, the AIs increase with age from 0.6 to 3.2 mg/day. These AIs are comparable to the U.S. AIs. The EFSA reviewed safety evidence and set an adult UL at 7.0 mg/day (lower for children). For U.S. food and dietary supplement labeling purposes, the amount of a vitamin or mineral in a serving is expressed as a percent of Daily Value (%DV). Although there is information to set Adequate Intake, fluoride does not have a Daily Value and is not required to be shown on food labels. Estimated daily intake Daily intakes of fluoride can vary significantly according to the various sources of exposure. Values ranging from 0.46 to 3.6–5.4 mg/day have been reported in several studies (IPCS, 1984). In areas where water is fluoridated this can be expected to be a significant source of fluoride, however fluoride is also naturally present in virtually all foods and beverages at a wide range of concentrations. The maximum safe daily consumption of fluoride is 10 mg/day for an adult (U.S.) or 7 mg/day (European Union). The upper limit of fluoride intake from all sources (fluoridated water, food, beverages, fluoride dental products and dietary fluoride supplements) is set at 0.10 mg/kg/day for infants, toddlers, and children through to 8 years old. For older children and adults, who are no longer at risk for dental fluorosis, the upper limit of fluoride is set at 10 mg/day regardless of weight. Safety Ingestion According to the U.S. 
Department of Agriculture, the Dietary Reference Intakes, which is the "highest level of daily nutrient intake that is likely to pose no risk of adverse health effects" specify 10 mg/day for most people, corresponding to 10 L of fluoridated water with no risk. For young children the values are smaller, ranging from 0.7 mg/d to 2.2 mg/d for infants. Water and food sources of fluoride include community water fluoridation, seafood, tea, and gelatin. Soluble fluoride salts, of which sodium fluoride is the most common, are toxic, and have resulted in both accidental and self-inflicted deaths from acute poisoning. The lethal dose for most adult humans is estimated at 5 to 10 g (which is equivalent to 32 to 64 mg elemental fluoride per kg body weight). A case of a fatal poisoning of an adult with 4 grams of sodium fluoride is documented, and a dose of 120 g sodium fluoride has been survived. For sodium fluorosilicate (Na2SiF6), the median lethal dose (LD50) orally in rats is 125 mg/kg, corresponding to 12.5 g for a 100 kg adult. Treatment may involve oral administration of dilute calcium hydroxide or calcium chloride to prevent further absorption, and injection of calcium gluconate to increase the calcium levels in the blood. Hydrogen fluoride is more dangerous than salts such as NaF because it is corrosive and volatile, and can result in fatal exposure through inhalation or upon contact with the skin; calcium gluconate gel is the usual antidote. In the higher doses used to treat osteoporosis, sodium fluoride can cause pain in the legs and incomplete stress fractures when the doses are too high; it also irritates the stomach, sometimes so severely as to cause ulcers. Slow-release and enteric-coated versions of sodium fluoride do not have gastric side effects in any significant way, and have milder and less frequent complications in the bones. In the lower doses used for water fluoridation, the only clear adverse effect is dental fluorosis, which can alter the appearance of children's teeth during tooth development; this is mostly mild and is unlikely to represent any real effect on aesthetic appearance or on public health. Fluoride was known to enhance bone mineral density at the lumbar spine, but it was not effective for vertebral fractures and provoked more nonvertebral fractures. In areas that have naturally occurring high levels of fluoride in groundwater which is used for drinking water, both dental and skeletal fluorosis can be prevalent and severe. Hazard maps for fluoride in groundwater Around one-third of the human population drinks water from groundwater resources. Of this, about 10%, approximately 300 million people, obtain water from groundwater resources that are heavily contaminated with arsenic or fluoride. These trace elements derive mainly from minerals. Maps locating potential problematic wells are available. Topical Concentrated fluoride solutions are corrosive. Gloves made of nitrile rubber are worn when handling fluoride compounds. The hazards of solutions of fluoride salts depend on the concentration. In the presence of strong acids, fluoride salts release hydrogen fluoride, which is corrosive, especially toward glass. Other derivatives Organic and inorganic anions are produced from fluoride, including: Bifluoride, used as an etchant for glass Tetrafluoroberyllate Hexafluoroplatinate Tetrafluoroborate used in organometallic synthesis Hexafluorophosphate used as an electrolyte in commercial secondary batteries. Trifluoromethanesulfonate
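The lethal-dose figures quoted above can be cross-checked with a short unit-conversion sketch. It is illustrative only: the 70 kg body weight is an assumption chosen to show how 5–10 g of sodium fluoride maps onto roughly 32–64 mg of elemental fluoride per kilogram of body weight, and the molar masses are standard values rather than figures from this article.

```python
# Molar masses (g/mol), standard values.
M_NA, M_F = 22.99, 19.00
FLUORIDE_MASS_FRACTION_NAF = M_F / (M_NA + M_F)   # ~0.45 of NaF is fluoride by mass

def elemental_fluoride_mg_per_kg(naf_grams: float, body_weight_kg: float = 70.0) -> float:
    """Convert an ingested mass of NaF into mg of elemental fluoride per kg body weight."""
    fluoride_mg = naf_grams * FLUORIDE_MASS_FRACTION_NAF * 1000
    return fluoride_mg / body_weight_kg

for grams in (5, 10):
    print(f"{grams} g NaF is about {elemental_fluoride_mg_per_kg(grams):.0f} mg F per kg (70 kg adult)")
# Prints roughly 32 and 65 mg/kg, matching the range quoted above. The same kind of
# scaling applied to the Na2SiF6 LD50 of 125 mg/kg gives 12.5 g for a 100 kg adult.
```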
Physical sciences
Halide salts
Chemistry
155664
https://en.wikipedia.org/wiki/Railway%20platform
Railway platform
A railway platform is an area in a train station alongside a railway track providing convenient access to trains. Almost all stations have some form of platform, with larger stations having multiple platforms. Grand Central Terminal in Midtown Manhattan hosts 44 platforms, more than any other rail station in the world. The world's longest station platform is at Hubballi Junction in India at . The Appalachian Trail station or Benson station in the United States, at the other extreme, has a platform which is only long enough for a single bench. Among some American train conductors, the word "platform" has entered usage as a verb meaning "to berth at a station", as in the announcement: "The last two cars of this train will not platform at East Rockaway". Height relative to trains The most basic form of platform consists of an area at the same level as the track, usually resulting in a fairly large height difference between the platform and the train floor. This would often not be considered a true platform. The more traditional platform is elevated relative to the track but often lower than the train floor, although ideally they should be at the same level. Occasionally the platform is higher than the train floor, where a train with a low floor serves a station built for trains with a high floor, for example at the Dutch stations of the DB Regionalbahn Westfalen (see ). On the London Underground some stations are served by both District line and Piccadilly line trains, and the Piccadilly trains have lower floors. A tram stop is often in the middle of the street; usually it has as a platform a refuge area of a similar height to that of the sidewalk, e.g. , and sometimes has no platform. The latter requires extra care by passengers and other traffic to avoid accidents. Both types of tram stops can be seen in the tram networks of Melbourne and Toronto. Sometimes a tram stop is served by ordinary trams with rather low floors and metro-like light rail vehicles with higher floors, and the tram stop has a dual-height platform. A railway station may be served by heavy-rail and light-rail vehicles with lower floors and have a dual- height platform, as on the RijnGouweLijn in the Netherlands. In all cases the platform must accommodate the loading gauge and conform to the structure gauge of the system. Types of platform Platform types include the bay platform, side platform (also called through platform), split platform and island platform. A bay platform is one at which the track terminates, i.e. a dead-end or siding. Trains serving a bay platform must reverse in or out. A side platform is the more usual type, alongside tracks where the train arrives from one end and leaves towards the other. An island platform has through platforms on both sides; it may be indented on one or both ends, with bay platforms. To reach an island platform there may be a bridge, a tunnel, or a level crossing. A variant on the side platform is the spanish solution which has platforms on both sides of a single through track. Modern station platforms can be constructed from a variety of materials such as glass-reinforced polymer, pre-cast concrete or expanded polystrene, depending on the underlying substructure. Identification Most stations have their platforms numbered consecutively from 1; a few stations, including , , King's Cross, , and (in the UK); and Lidcombe, Sydney (Australia), start from 0. 
At , platforms 3 through to 12 are split along their length, with odd-numbered platforms facing north and east and even-numbered platforms facing south and west, with a small signal halfway along the platform. Some, such as , use letters instead of numbers (this is to distinguish the platforms from numbered ones in the adjoining Waterloo main-line station for staff who work at both stations); some, such as Paris-Gare de Lyon, use letters for one group of platforms but numbers for the other. The actual meaning of the word platform depends on country and language. In some countries such as the United States, the word platform refers to the physical structure, while the place where a train can arrive is referred to as a "track" (e.g. "The train is arriving on Track 5"). In other countries, such as the UK, Ireland and most Commonwealth countries, platform refers specifically to the place where the train stops, which means that in such a case island platforms are allocated two separate numbers, one for each side. Some countries are in the process of switching from platform to track numbers, i.e. the Czech Republic and Poland. In locations where track numbers are used, an island platform would be described as one platform with two tracks. Many stations also have numbered tracks or platforms which are used only for through traffic and do not have physical platform access. Facilities Some of the station facilities are often located on the platforms. Where the platforms are not adjacent to a station building, often some form of shelter or waiting room is provided, and employee cabins may also be present. The weather protection offered varies greatly, from little more than a roof with open sides, to a closed room with heating or air-conditioning. There may be benches, lighting, ticket counters, drinking fountains, shops, trash boxes, and static timetables or dynamic displays with information about the next train. There are often loudspeakers as part of a public address (PA) system. The PA system is often used where dynamic timetables or electronic displays are not present. A variety of information is presented, including destinations and times (for all trains, or only the more important long-distance trains), delays, cancellations, platform changes, changes in routes and destinations, the number of carriages in the train and the location of first class or luggage compartments, and supplementary fee or reservation requirements. Safety Some metro stations have platform screen doors between the platforms and the tracks. They provide more safety, and they allow the heating or air conditioning in the station to be separated from the ventilation in the tunnel, thus being more efficient and effective. They have been installed in most stations of the Singapore MRT and the Hong Kong MTR, and stations on the Jubilee Line Extension in London. Platforms should be sloped upwards slightly towards the platform edge to prevent wheeled objects such as trolleys, prams and wheelchairs from rolling away and into the path of the train. Many platforms have a cavity underneath an overhanging edge so that people who may fall off the platform can seek shelter from incoming trains. For security against theft or to secure stowaways, some countries have special security officers stationed at stations, just like the police but specifically for railways. For example, in Indonesia and Poland, there are special railway security officers.
High-speed rail In high-speed rail, passing trains are a significant safety problem as the safe distance from the platform edge increases with the speed of the passing train. A study done by the United States Department of Transportation in 1999 found that trains passing station platforms at speeds of can pose safety concerns to passengers on the platforms who are away from the edge due to the aerodynamic effects created by pressure and induced airflow with speeds of to depending on the train body aerodynamic designs. Additionally, the airflow can cause debris to be blown out to the waiting passengers. If the passengers stand closer at , the risk increases with airflow that can reach speeds of to . In United Kingdom, a guideline for platform safety specifies that for the platforms with train passing speeds between and , there should be a yellow-line buffer zone of and other warning signs. If trains can pass at speeds higher than , the platforms should be inaccessible to passengers unless there are waiting rooms or screened areas to provide protection. The European Union has a regulation for platforms that are close to tracks with train passing speeds of or more should not be accessible to passengers unless there is a lower speed limit for trains that intend to stop at the station or there are barriers to limit access. Markings Platforms usually have some form of warnings or measures to keep passengers away from the tracks. The simplest measure is markings near the edge of the platform to demarcate the distance back that passengers should remain. Often a special tiled surface is used as well as a painted line, to help blind people using a walking aid, and help in preventing wheelchairs from rolling too near the platform edge. In the US, Americans with Disabilities Act of 1990 regulations require a detectable warning strip wide, consisting of truncated dome bumps in a visually-contrasting color, for the full length of the platform. Curvature Ideally platforms should be straight or slightly convex, so that the guard (if any) can see the whole train when preparing to close the doors. Platforms that have great curvature have blind spots that create a safety hazard. Mirrors or closed-circuit cameras may be used in these cases to view the whole platform. Also passenger carriages are straight, so doors will not always open directly onto a curved platform – often a platform gap is present. Usually such platforms will have warning signs, possibly auditory, such as London Underground's famous phrase "Mind the gap". There may be moveable gap filler sections within the platform, extending once the train has stopped and retracting after the doors have closed. The New York City Subway employs these at 14th Street–Union Square on the IRT Lexington Avenue Line and at Times Square on the 42nd Street Shuttle, and formerly at the South Ferry outer loop station on the IRT Broadway–Seventh Avenue Line. 
Notable examples
Longest railway platforms
Hubballi Junction, Karnataka, India:
, Uttar Pradesh, India:
, Kerala, India:
, West Bengal, India:
State Street Subway, Chicago, Illinois, US,
, Uttar Pradesh, India
Auto Club Speedway station, Fontana, California, US:
, Chhattisgarh, India:
Cheriton Shuttle Terminal, Kent, United Kingdom: (longest in Europe)
, Bern, Switzerland:
Jhansi, Uttar Pradesh, India:
Dearborn Street subway, Chicago, Illinois, US
, Sonepur, Bihar, India:
, Nadia district, West Bengal, India
Flinders Street railway station, Melbourne, Victoria, Australia:
Port Pirie (Mary Elie Street) railway station, South Australia:
Sittard railway station, Netherlands:
's-Hertogenbosch railway station, Netherlands:
, Netherlands:
Greatest number of platforms
Grand Central Terminal, New York City, US: 44
Gare du Nord, France: 35 (31 above ground level + 4 underground)
Munich Central Station, Germany: 34 (32 above ground level + 2 underground)
Chicago Union Station, US: 30
, China: 30
Central railway station, Sydney, Australia: 26 (22 above ground level + 4 underground)
Zürich Hauptbahnhof, Switzerland: 26 (16 above ground level + 10 underground)
London Waterloo station, United Kingdom: 24 mainline (plus 8 at Waterloo tube station)
Krung Thep Aphiwat Central Terminal (Bang Sue Grand Central), Bangkok, Thailand: 24 (+2 MRT Blue Line, 12 platforms are currently out of use)
, India: 23 (+4 for Howrah metro station)
, India: 21 (+4 for Sealdah metro station)
Leipzig, Germany: 24
Technology
Concepts of ground transport
null
155725
https://en.wikipedia.org/wiki/Sodium%20bicarbonate
Sodium bicarbonate
Sodium bicarbonate (IUPAC name: sodium hydrogencarbonate), commonly known as baking soda or bicarbonate of soda, is a chemical compound with the formula NaHCO3. It is a salt composed of a sodium cation (Na+) and a bicarbonate anion (HCO3−). Sodium bicarbonate is a white solid that is crystalline but often appears as a fine powder. It has a slightly salty, alkaline taste resembling that of washing soda (sodium carbonate). The natural mineral form is nahcolite, although it is more commonly found as a component of the mineral trona. As it has long been known and widely used, the salt has many different names such as baking soda, bread soda, cooking soda, brewing soda and bicarbonate of soda and can often be found near baking powder in stores. The term baking soda is more common in the United States, while bicarbonate of soda is more common in Australia, the United Kingdom, and New Zealand. Abbreviated colloquial forms such as sodium bicarb, bicarb soda, bicarbonate, and bicarb are common. The prefix bi- in "bicarbonate" comes from an outdated naming system predating molecular knowledge. It is based on the observation that there is twice as much carbonate (CO3−2) per sodium in sodium bicarbonate (NaHCO3) as there is in sodium carbonate (Na2CO3). The modern chemical formulas of these compounds now express their precise chemical compositions which were unknown when the name bi-carbonate of potash was coined (see also: bicarbonate). Uses Cooking In cooking, baking soda is primarily used in baking as a leavening agent. When it reacts with acid or is heated, carbon dioxide is released, which causes expansion of the batter and forms the characteristic texture and grain in cakes, quick breads, soda bread, and other baked and fried foods. When an acid is used, the acid–base reaction can be generically represented as follows: NaHCO3 + H+ → Na+ + CO2 + H2O Acidic materials that induce this reaction include hydrogen phosphates, cream of tartar, lemon juice, yogurt, buttermilk, cocoa, and vinegar. Baking soda may be used together with sourdough, which is acidic, making a lighter product with a less acidic taste. Since the reaction occurs slowly at room temperature, mixtures (cake batter, etc.) can be allowed to stand without rising until they are heated in the oven. Heat can also by itself cause sodium bicarbonate to act as a raising agent in baking because of thermal decomposition, releasing carbon dioxide at temperatures above , as follows: 2 NaHCO3 → Na2CO3 + H2O + CO2 When used this way on its own, without the presence of an acidic component (whether in the batter or by the use of a baking powder containing acid), only half the available CO2 is released (one CO2 molecule is formed for every two equivalents of NaHCO3). Additionally, in the absence of acid, thermal decomposition of sodium bicarbonate also produces sodium carbonate, which is strongly alkaline and gives the baked product a bitter, soapy taste and a yellow color. Baking powder Baking powder, also sold for cooking, contains around 30% of bicarbonate, and various acidic ingredients that are activated by the addition of water, without the need for additional acids in the cooking medium. Many forms of baking powder contain sodium bicarbonate combined with calcium acid phosphate, sodium aluminium phosphate, or cream of tartar. Baking soda is alkaline; the acid used in baking powder avoids a metallic taste when the chemical change during baking creates sodium carbonate. 
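The stoichiometry described above (one CO2 released per NaHCO3 when an acid is present, but only one CO2 per two NaHCO3 on heating alone) can be checked with a short calculation. The sketch below is illustrative; the 5 g figure is an arbitrary example amount, and the molar masses are standard values rather than figures from this article.

```python
M_NAHCO3 = 84.01   # g/mol, sodium bicarbonate
M_CO2 = 44.01      # g/mol, carbon dioxide

def co2_from_baking_soda(grams: float, acid_present: bool) -> float:
    """Grams of CO2 released from a given mass of NaHCO3.

    With an acid:  NaHCO3 + H+ -> Na+ + CO2 + H2O   (1 CO2 per NaHCO3)
    Heat alone:    2 NaHCO3 -> Na2CO3 + H2O + CO2   (1 CO2 per 2 NaHCO3)
    """
    moles = grams / M_NAHCO3
    co2_moles = moles if acid_present else moles / 2
    return co2_moles * M_CO2

grams = 5.0  # arbitrary example amount of baking soda
print(f"With acid: {co2_from_baking_soda(grams, True):.2f} g CO2")   # ~2.62 g
print(f"Heat only: {co2_from_baking_soda(grams, False):.2f} g CO2")  # ~1.31 g
```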
Food additive It is often used in conjunction with other bottled water food additives to add taste. Its European Union E number is E500. Pyrotechnics Sodium bicarbonate is one of the main components of the common "black snake" firework. The effect is caused by the thermal decomposition, which produces carbon dioxide gas to produce a long snake-like ash as a combustion product of the other main component, sucrose. Sodium bicarbonate also delays combustion reactions through the release of carbon dioxide and water, both of which are flame retardants, when heated. Mild disinfectant It has weak disinfectant properties and it may be an effective fungicide against some organisms. As baking soda will absorb musty smells, it has become a reliable method for used booksellers when making books less malodorous. Fire extinguisher Sodium bicarbonate can be used to extinguish small grease or electrical fires by being thrown over the fire, as heating of sodium bicarbonate releases carbon dioxide. However, it should not be applied to fires in deep fryers; the sudden release of gas may cause the grease to splatter. Sodium bicarbonate is used in BC dry chemical fire extinguishers as an alternative to the more corrosive monoammonium phosphate in ABC extinguishers. The alkaline nature of sodium bicarbonate makes it the only dry chemical agent, besides Purple-K, that was used in large-scale fire suppression systems installed in commercial kitchens. Sodium bicarbonate has several fire-extinguishing mechanisms that act simultaneously. It decomposes into water and carbon dioxide when heated, an endothermic reaction that deprives the fire of heat. In addition, it forms intermediates that can scavenge the free radicals which are responsible for the propagation of fire. With grease fires specifically, it also has a mild saponification effect, producing a soapy foam that can help smother the fire. Neutralization of acids Sodium bicarbonate reacts spontaneously with acids, releasing CO2 gas as a reaction product. It is commonly used to neutralize unwanted acid solutions or acid spills in chemical laboratories. It is not appropriate to use sodium bicarbonate to neutralize base even though it is amphoteric, reacting with both acids and bases. Sports supplement Sodium bicarbonate is taken as a sports supplement to improve muscular endurance. Studies conducted mostly in males have shown that sodium bicarbonate is most effective in enhancing performance in short-term, high-intensity activities. Agriculture Sodium bicarbonate can prevent the growth of fungi when applied on leaves, although it will not kill the fungus. Excessive amounts of sodium bicarbonate can cause discolouration of fruits (two percent solution) and chlorosis (one percent solution). Sodium bicarbonate is also commonly used as a free choice dietary supplement in sheep to help prevent bloat. Medical uses and health Sodium bicarbonate mixed with water can be used as an antacid to treat acid indigestion and heartburn. Its reaction with stomach acid produces salt, water, and carbon dioxide: NaHCO3 + HCl → NaCl + H2O + CO2(g) A mixture of sodium bicarbonate and polyethylene glycol such as PegLyte, dissolved in water and taken orally, is an effective gastrointestinal lavage preparation and laxative prior to gastrointestinal surgery, gastroscopy, etc. Intravenous sodium bicarbonate in an aqueous solution is sometimes used for cases of acidosis, or when insufficient sodium or bicarbonate ions are in the blood. 
In cases of respiratory acidosis, the infused bicarbonate ion drives the carbonic acid/bicarbonate buffer of plasma to the left, and thus raises the pH. For this reason, sodium bicarbonate is used in medically supervised cardiopulmonary resuscitation. Infusion of bicarbonate is indicated only when the blood pH is markedly low (< 7.1–7.0). HCO3− is used for treatment of hyperkalemia, as it will drive K+ back into cells during periods of acidosis. Since sodium bicarbonate can cause alkalosis, it is sometimes used to treat aspirin overdoses. Aspirin requires an acidic environment for proper absorption, and a basic environment will diminish aspirin absorption in cases of overdose. Sodium bicarbonate has also been used in the treatment of tricyclic antidepressant overdose. It can also be applied topically as a paste, with three parts baking soda to one part water, to relieve some kinds of insect bites and stings (as well as accompanying swelling). Some alternative practitioners, such as Tullio Simoncini, have promoted baking soda as a cancer cure, which the American Cancer Society has warned against due to both its unproven effectiveness and potential danger in use. Edzard Ernst has called the promotion of sodium bicarbonate as a cancer cure "one of the more sickening alternative cancer scams I have seen for a long time". Sodium bicarbonate can be added to local anaesthetics, to speed up the onset of their effects and make their injection less painful. It is also a component of Moffett's solution, used in nasal surgery. It has been proposed that acidic diets weaken bones. One systematic meta-analysis of the research shows no such effect. Another also finds that there is no evidence that alkaline diets improve bone health, but suggests that there "may be some value" to alkaline diets for other reasons. Antacid (such as baking soda) solutions have been prepared and used by protesters to alleviate the effects of exposure to tear gas during protests. Similarly to its use in baking, sodium bicarbonate is used together with a mild acid such as tartaric acid as the excipient in effervescent tablets: when such a tablet is dropped in a glass of water, the carbonate leaves the reaction medium as carbon dioxide gas (HCO3− + H+ → H2O + CO2↑ or, more precisely, HCO3− + H3O+ → 2 H2O + CO2↑). This makes the tablet disintegrate, leaving the medication suspended and/or dissolved in the water together with the resulting salt (in this example, sodium tartrate). Personal hygiene Sodium bicarbonate is also used as an ingredient in some mouthwashes. It has anticaries and abrasive properties. It works as a mechanical cleanser on the teeth and gums, neutralizes the production of acid in the mouth, and also acts as an antiseptic to help prevent infections. Sodium bicarbonate in combination with other ingredients can be used to make a dry or wet deodorant. Sodium bicarbonate may be used as a buffering agent, combined with table salt, when creating a solution for nasal irrigation. It is used in eye hygiene to treat blepharitis. This is done by adding a teaspoon of sodium bicarbonate to cool water that was recently boiled followed by gentle scrubbing of the eyelash base with a cotton swab dipped in the solution. Veterinary uses Sodium bicarbonate is used as a cattle feed supplement, in particular as a buffering agent for the rumen. Cleaning agent Sodium bicarbonate is used in a process to remove paint and corrosion called sodablasting. 
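The direction of the pH change described at the start of this passage (infused bicarbonate raising plasma pH) follows from the standard Henderson-Hasselbalch relation for the carbonic acid/bicarbonate buffer. The constants below (apparent pKa of about 6.1 and a CO2 solubility factor of 0.03 mmol/L per mmHg) are textbook values rather than figures from this article; the sketch only illustrates the direction of the effect.

```python
import math

# Henderson-Hasselbalch estimate for the plasma bicarbonate buffer (textbook constants).
PKA = 6.1              # apparent pKa of carbonic acid in plasma
CO2_SOLUBILITY = 0.03  # mmol/L of dissolved CO2 per mmHg of pCO2

def plasma_ph(bicarbonate_mmol_l: float, pco2_mmhg: float) -> float:
    """pH = pKa + log10([HCO3-] / (0.03 * pCO2))."""
    return PKA + math.log10(bicarbonate_mmol_l / (CO2_SOLUBILITY * pco2_mmhg))

# Raising bicarbonate at a fixed pCO2 raises the calculated pH:
print(round(plasma_ph(24, 40), 2))  # ~7.40 (typical normal values)
print(round(plasma_ph(30, 40), 2))  # ~7.50
```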
As a blasting medium, sodium bicarbonate is used to remove surface contamination from softer and less resilient substrates such as aluminium, copper, or timber that could be damaged by silica sand abrasive media. A manufacturer recommends a paste made from baking soda with minimal water as a gentle scouring powder. Such a paste can be useful in removing surface rust, because the rust forms a water-soluble compound in a concentrated alkaline solution. Cold water should be used, since hot-water solutions can corrode steel. Sodium bicarbonate attacks the thin protective oxide layer that forms on aluminium, making it unsuitable for cleaning this metal. A solution in warm water will remove the tarnish from silver when the silver is in contact with a piece of aluminium foil. Baking soda is commonly added to washing machines as a replacement for water softener and to remove odors from clothes. When diluted with warm water, it is also almost as effective as sodium hydroxide in removing heavy tea and coffee stains from cups. During the Manhattan Project to develop the nuclear bomb in the early 1940s, the chemical toxicity of uranium was an issue. Uranium oxides were found to stick very well to cotton cloth and did not wash out with soap or laundry detergent. However, the uranium would wash out with a 2% solution of sodium bicarbonate. Clothing can become contaminated with toxic dust of depleted uranium (DU), which is very dense and hence used for counterweights in a civilian context and in armour-piercing projectiles. DU is not removed by normal laundering; washing with about of baking soda in 2 gallons (7.5 L) of water will help wash it out. Odor control It is often claimed that baking soda is an effective odor remover, and it is often recommended that an open box be kept in the refrigerator to absorb odor. This idea was promoted by the leading U.S. brand of baking soda, Arm & Hammer, in an advertising campaign starting in 1972. Though this campaign is considered a classic of marketing, leading within a year to more than half of American refrigerators containing a box of baking soda, there is little evidence that it is effective in this application. Education An educational science experiment known as the "Baking Soda and Vinegar Volcano" uses the acid-base reaction of sodium bicarbonate with the acetic acid in vinegar to mimic a volcanic eruption. The rapid production of CO2 causes the liquid to foam up and overflow its container. Other ingredients such as dish soap and food coloring can be added to enhance the visual effect. If this reaction is performed inside a closed vessel (such as a bottle) with no way for gas to escape, it can cause an explosion if the pressure is high enough. Chemistry Sodium bicarbonate is an amphoteric compound. Aqueous solutions are mildly alkaline due to the formation of carbonic acid and hydroxide ion: HCO3− + H2O → H2CO3 + OH− Sodium bicarbonate can sometimes be used as a mild neutralization agent and a safer alternative to strong bases like sodium hydroxide.
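For the "volcano" demonstration described above, the gas volume can be estimated from the 1:1 reaction of sodium bicarbonate with acetic acid. The sketch below is an idealized calculation: complete reaction, ideal-gas behaviour at about 1 bar, and roughly 5% w/v acetic acid in household vinegar are all assumptions rather than figures from the article.

```python
# Idealized CO2 volume from the baking soda + vinegar demonstration.
M_NAHCO3 = 84.01  # g/mol
M_ACETIC = 60.05  # g/mol
R = 0.08314       # L*bar/(mol*K)

def co2_litres(baking_soda_g: float, vinegar_ml: float,
               acid_frac: float = 0.05, temp_c: float = 25.0) -> float:
    """Litres of CO2 at ~1 bar from the limiting reagent (1:1 stoichiometry).
    Assumes vinegar density ~1 g/mL and 5% w/v acetic acid (assumptions)."""
    mol_bicarb = baking_soda_g / M_NAHCO3
    mol_acid = vinegar_ml * acid_frac / M_ACETIC
    mol_co2 = min(mol_bicarb, mol_acid)  # whichever reagent runs out first
    return mol_co2 * R * (temp_c + 273.15) / 1.0

print(f"{co2_litres(10, 100):.2f} L of CO2")  # vinegar is limiting here (~2.1 L)
```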
Reaction of sodium bicarbonate and an acid produces a salt and carbonic acid, which readily decomposes to carbon dioxide and water: NaHCO3 + HCl → NaCl + H2O+CO2 H2CO3 → H2O + CO2(g) Sodium bicarbonate reacts with acetic acid (found in vinegar), producing sodium acetate, water, and carbon dioxide: NaHCO3 + CH3COOH → CH3COONa + H2O + CO2(g) Sodium bicarbonate reacts with bases such as sodium hydroxide to form carbonates: NaHCO3 + NaOH → Na2CO3 + H2O Thermal decomposition At temperatures from , sodium bicarbonate gradually decomposes into sodium carbonate, water, and carbon dioxide. The conversion is faster at : 2 NaHCO3 → Na2CO3 + H2O + CO2 Most bicarbonates undergo this dehydration reaction. Further heating converts the carbonate into the oxide (above ): Na2CO3 → Na2O + CO2 The generation of carbon dioxide and water partially explain the fire-extinguishing properties of NaHCO3, although other factors like heat absorption and radical scavenging are more significant. Natural occurrence In nature, sodium bicarbonate occurs almost exclusively as either nahcolite or trona. Trona is more common, as nahcolite is more soluble in water and the chemical equilibrium between the two minerals favors trona. Significant nahcolite deposits are in the United States, Botswana and Kenya, Uganda, Turkey, and Mexico. The biggest trona deposits are in the Green River basin in Wyoming. Nahcolite is sometimes found as a component of oil shale. Stability and shelf life If kept cool (room temperature) and dry (an airtight container is recommended to keep out moist air), sodium bicarbonate can be kept without a significant amount of decomposition for at least two or three years. History The word natron has been in use in many languages throughout modern times (in the forms of anatron, natrum and natron) and originated (like Spanish, French and English natron as well as 'sodium') via Arabic naṭrūn (or anatrūn; cf. the Lower Egyptian “Natrontal” Wadi El Natrun, where a mixture of sodium carbonate and sodium hydrogen carbonate for the dehydration of mummies was used ) from Greek nítron (νίτρον) (Herodotus; Attic lítron (λίτρον)), which can be traced back to ancient Egyptian ntr. The Greek nítron (soda, saltpeter) was also used in Latin (sal) nitrum and in German Salniter (the source of Nitrogen, Nitrat etc.). The word saleratus, from Latin sal æratus (meaning "aerated salt"), was widely used in the 19th century for both sodium bicarbonate and potassium bicarbonate. In 1791, French chemist Nicolas Leblanc produced sodium carbonate (also known as soda ash). Pharmacist Valentin Rose the Younger is credited with the discovery of sodium bicarbonate in 1801 in Berlin. In 1846, two American bakers, John Dwight and Austin Church, established the first factory in the United States to produce baking soda from sodium carbonate and carbon dioxide. Saleratus, potassium or sodium bicarbonate, is mentioned in the novel Captains Courageous by Rudyard Kipling as being used extensively in the 1800s in commercial fishing to prevent freshly caught fish from spoiling. In 1919, US Senator Lee Overman declared that bicarbonate of soda could cure the Spanish flu. In the midst of the debate on 26 January 1919, he interrupted the discussion to announce the discovery of a cure. "I want to say, for the benefit of those who are making this investigation," he reported, "that I was told by a judge of a superior court in the mountain country of North Carolina they have discovered a remedy for this disease." 
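Returning to the thermal decomposition given near the start of this passage: the same stoichiometry fixes the mass yield when baking soda is deliberately converted to sodium carbonate in an oven (a use noted later in this document). The sketch below uses standard molar masses, which are not quoted in the text, and assumes complete conversion.

```python
# Theoretical mass yield of Na2CO3 from heating NaHCO3.
# 2 NaHCO3 -> Na2CO3 + H2O + CO2
M_NAHCO3 = 84.01   # g/mol
M_NA2CO3 = 105.99  # g/mol

def carbonate_yield(bicarbonate_g: float) -> float:
    """Grams of sodium carbonate from complete thermal decomposition."""
    return bicarbonate_g / (2 * M_NAHCO3) * M_NA2CO3

print(f"{carbonate_yield(100):.1f} g Na2CO3 from 100 g NaHCO3")  # ~63.1 g, i.e. ~63% of the mass remains
```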
The purported cure implied a critique of modern science and an appreciation for the simple wisdom of simple people. "They say that common baking soda will cure the disease," he continued, "that they have cured it with it, that they have no deaths up there at all; they use common baking soda, which cures the disease." Production Sodium bicarbonate is produced industrially from sodium carbonate: Na2CO3 + CO2 + H2O → 2 NaHCO3 It is produced on the scale of about 100,000 tonnes/year (as of 2001) with a worldwide production capacity of 2.4 million tonnes per year (as of 2002). Commercial quantities of baking soda are also produced by a similar method: soda ash, mined in the form of the ore trona, is dissolved in water and treated with carbon dioxide. Sodium bicarbonate precipitates as a solid from this solution. Regarding the Solvay process, sodium bicarbonate is an intermediate in the reaction of sodium chloride, ammonia, and carbon dioxide. The product however shows low purity (75pc). NaCl + CO2 + NH3 + H2O → NaHCO3 + NH4Cl Although of no practical value, NaHCO3 may be obtained by the reaction of carbon dioxide with an aqueous solution of sodium hydroxide: CO2 + NaOH → NaHCO3 Mining Naturally occurring deposits of nahcolite (NaHCO3) are found in the Eocene-age (55.8–33.9 Mya) Green River Formation, Piceance Basin in Colorado. Nahcolite was deposited as beds during periods of high evaporation in the basin. It is commercially mined using common underground mining techniques such as bore, drum, and longwall mining in a fashion very similar to coal mining. It is also produced by solution mining, pumping heated water through nahcolite beds and crystallizing the dissolved nahcolite through a cooling crystallization process. Since nahcolite is sometimes found in shale, it can be produced as a co-product of shale oil extraction, where it is recovered by solution mining. In popular culture Sodium bicarbonate, as "bicarbonate of soda", was a frequent source of punch lines for Groucho Marx in Marx Brothers movies. In Duck Soup, Marx plays the leader of a nation at war. In one scene, he receives a message from the battlefield that his general is reporting a gas attack, and Groucho tells his aide: "Tell him to take a teaspoonful of bicarbonate of soda and a half a glass of water." In A Night at the Opera, Groucho's character addresses the opening night crowd at an opera by saying of the lead tenor: "Signor Lassparri comes from a very famous family. His mother was a well-known bass singer. His father was the first man to stuff spaghetti with bicarbonate of soda, thus causing and curing indigestion at the same time." In the Joseph L. Mankewicz classic All About Eve, the Max Fabian character (Gregory Ratoff) has an extended scene with Margo Channing (Bette Davis) in which, suffering from heartburn, he requests and then drinks bicarbonate of soda, eliciting a prominent burp. Channing promises to always keep a box of bicarb with Max's name on it.
Physical sciences
Salts
null
155726
https://en.wikipedia.org/wiki/Sodium%20carbonate
Sodium carbonate
Sodium carbonate (also known as washing soda, soda ash and soda crystals) is the inorganic compound with the formula Na2CO3 and its various hydrates. All forms are white, odourless, water-soluble salts that yield alkaline solutions in water. Historically, it was extracted from the ashes of plants grown in sodium-rich soils, and because the ashes of these sodium-rich plants were noticeably different from ashes of wood (once used to produce potash), sodium carbonate became known as "soda ash". It is produced in large quantities from sodium chloride and limestone by the Solvay process, as well as by carbonating sodium hydroxide which is made using the chloralkali process. Hydrates Sodium carbonate is obtained as three hydrates and as the anhydrous salt: sodium carbonate decahydrate (natron), Na2CO3·10H2O, which readily effloresces to form the monohydrate; sodium carbonate heptahydrate (not known in mineral form), Na2CO3·7H2O; sodium carbonate monohydrate (thermonatrite), Na2CO3·H2O, also known as crystal carbonate; and anhydrous sodium carbonate (natrite), also known as calcined soda, which is formed by heating the hydrates. It is also formed when sodium hydrogencarbonate is heated (calcined), e.g. in the final step of the Solvay process. The decahydrate is formed from water solutions crystallizing in the temperature range −2.1 to +32.0 °C, the heptahydrate in the narrow range 32.0 to 35.4 °C, and above this temperature the monohydrate forms. In dry air the decahydrate and heptahydrate lose water to give the monohydrate. Other hydrates have been reported, e.g. with 2.5 units of water per sodium carbonate unit ("pentahemihydrate"). Washing soda Sodium carbonate decahydrate (Na2CO3·10H2O), also known as washing soda, is the most common hydrate of sodium carbonate, containing 10 molecules of water of crystallization. Soda ash is dissolved in water and crystallized to get washing soda. It is one of the few metal carbonates that is soluble in water. Applications Some common applications of sodium carbonate include: As a cleansing agent for domestic purposes like washing clothes. Sodium carbonate is a component of many dry soap powders. It has detergent properties through the process of saponification, which converts fats and grease to water-soluble salts (specifically, soaps). It is used for lowering the hardness of water (see below). It is used in the manufacture of glass, soap, and paper (see below). It is used in the manufacture of sodium compounds like borax (sodium borate). Glass manufacture Sodium carbonate serves as a flux for silica (SiO2, melting point 1,713 °C), lowering the melting point of the mixture to something achievable without special materials. This "soda glass" is mildly water-soluble, so some calcium carbonate is added to the melt mixture to make the glass insoluble. Bottle and window glass ("soda–lime glass" with transition temperature ~570 °C) is made by melting such mixtures of sodium carbonate, calcium carbonate, and silica sand (silicon dioxide (SiO2)). When these materials are heated, the carbonates release carbon dioxide. In this way, sodium carbonate is a source of sodium oxide. Soda–lime glass has been the most common form of glass for centuries. It is also a key input for tableware glass manufacturing. Water softening Hard water usually contains calcium or magnesium ions. Sodium carbonate is used for removing these ions and replacing them with sodium ions. Sodium carbonate is a water-soluble source of carbonate.
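The crystallization ranges quoted above for the hydrates map onto a simple lookup. The sketch below encodes only the temperature boundaries stated in the text (−2.1 °C, 32.0 °C and 35.4 °C); it is a convenience for the reader, not a phase-diagram model.

```python
# Which sodium carbonate hydrate crystallizes from aqueous solution at a given
# temperature, using only the ranges quoted in the text above.
def expected_hydrate(temp_c: float) -> str:
    if temp_c < -2.1:
        return "below the stated crystallization range"
    if temp_c <= 32.0:
        return "decahydrate (Na2CO3.10H2O)"
    if temp_c <= 35.4:
        return "heptahydrate (Na2CO3.7H2O)"
    return "monohydrate (Na2CO3.H2O)"

for t in (10, 33, 40):
    print(t, "->", expected_hydrate(t))
```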
The calcium and magnesium ions form insoluble carbonate precipitates upon treatment with carbonate ions (for calcium, Ca2+ + CO32− → CaCO3). The water is softened because it no longer contains dissolved calcium ions and magnesium ions. Food additive and cooking Sodium carbonate has several uses in cuisine, largely because it is a stronger base than baking soda (sodium bicarbonate) but weaker than lye (which may refer to sodium hydroxide or, less commonly, potassium hydroxide). Alkalinity affects gluten production in kneaded doughs, and also improves browning by reducing the temperature at which the Maillard reaction occurs. To take advantage of the former effect, sodium carbonate is therefore one of the components of kansui, a solution of alkaline salts used to give Japanese ramen noodles their characteristic flavour and chewy texture; a similar solution is used in Chinese cuisine to make lamian, for similar reasons. Cantonese bakers similarly use sodium carbonate as a substitute for lye-water to give moon cakes their characteristic texture and improve browning. In German cuisine (and Central European cuisine more broadly), breads such as pretzels and lye rolls traditionally treated with lye to improve browning can be treated instead with sodium carbonate; sodium carbonate does not produce quite as strong a browning as lye, but is much safer and easier to work with. Sodium carbonate is used in the production of sherbet powder. The cooling and fizzing sensation results from the endothermic reaction between sodium carbonate and a weak acid, commonly citric acid, releasing carbon dioxide gas, which occurs when the sherbet is moistened by saliva. Sodium carbonate also finds use in the food industry as a food additive (European Food Safety Authority number E500), acting as an acidity regulator, anticaking agent, raising agent, and stabilizer. It is also used in the production of snus to stabilize the pH of the final product. While it is less likely to cause chemical burns than lye, care must still be taken when working with sodium carbonate in the kitchen, as it is corrosive to aluminum cookware, utensils, and foil. Other applications Sodium carbonate is also used as a relatively strong base in various fields. As a common alkali, it is preferred in many chemical processes because it is cheaper than sodium hydroxide and far safer to handle. Its mildness especially recommends its use in domestic applications. For example, it is used as a pH regulator to maintain stable alkaline conditions necessary for the action of the majority of photographic film developing agents. It is also a common additive in swimming pools and aquarium water to maintain a desired pH and carbonate hardness (KH). In dyeing with fiber-reactive dyes, sodium carbonate (often under a name such as soda ash fixative or soda ash activator) is used as a mordant to ensure proper chemical bonding of the dye with cellulose (plant) fiber. It is also used in the froth flotation process to maintain a favourable pH as a float conditioner besides CaO and other mildly basic compounds. Precursor to other compounds Sodium bicarbonate (NaHCO3), or baking soda, also a component in fire extinguishers, is often generated from sodium carbonate. Although NaHCO3 is itself an intermediate product of the Solvay process, the heating needed to remove the ammonia that contaminates it decomposes some NaHCO3, making it more economical to react finished Na2CO3 with CO2 (Na2CO3 + CO2 + H2O → 2 NaHCO3). In a related reaction, sodium carbonate is used to make sodium bisulfite (NaHSO3), which is used for the "sulfite" method of separating lignin from cellulose.
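The softening reaction described at the start of this passage removes hardness mole-for-mole, so a stoichiometric soda ash dose is easy to estimate. The sketch below assumes hardness is reported as mg/L of CaCO3 equivalent (a common convention, not stated in the article) and ignores the excess dose normally used in practice.

```python
# Stoichiometric soda ash dose for precipitating hardness ions.
# Ca2+ + CO3^2- -> CaCO3(s); one mole of Na2CO3 per mole of hardness.
M_NA2CO3 = 105.99  # g/mol
M_CACO3 = 100.09   # g/mol (hardness conventionally expressed as CaCO3 equivalent)

def soda_ash_dose_mg_per_l(hardness_mg_per_l_as_caco3: float) -> float:
    """Milligrams of Na2CO3 per litre for complete stoichiometric removal."""
    return hardness_mg_per_l_as_caco3 / M_CACO3 * M_NA2CO3

print(f"{soda_ash_dose_mg_per_l(200):.0f} mg/L")  # ~212 mg/L for 200 mg/L hardness
```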
This reaction is exploited for removing sulfur dioxide from flue gases in power stations; this application has become more common, especially where stations have to meet stringent emission controls. Sodium carbonate is used by the cotton industry to neutralize the sulfuric acid needed for acid delinting of fuzzy cottonseed. It is also used to form carbonates of other metals by ion exchange, often with the other metals' sulphates. Miscellaneous Sodium carbonate is used by the brick industry as a wetting agent to reduce the amount of water needed to extrude the clay. In casting, it is referred to as "bonding agent" and is used to allow wet alginate to adhere to gelled alginate. Sodium carbonate is used in toothpastes, where it acts as a foaming agent and an abrasive, and to temporarily increase mouth pH. Sodium carbonate is also used in the processing and tanning of animal hides. Physical properties The integral enthalpy of solution of sodium carbonate is −28.1 kJ/mol for a 10% w/w aqueous solution. The Mohs hardness of sodium carbonate monohydrate is 1.3. Occurrence as natural mineral Sodium carbonate is soluble in water, and can occur naturally in arid regions, especially in mineral deposits (evaporites) formed when seasonal lakes evaporate. Deposits of the mineral natron have been mined from dry lake bottoms in Egypt since ancient times, when natron was used in the preparation of mummies and in the early manufacture of glass. The anhydrous mineral form of sodium carbonate is quite rare and is called natrite. Sodium carbonate also erupts from Ol Doinyo Lengai, Tanzania's unique volcano, and it is presumed to have erupted from other volcanoes in the past, but, owing to these minerals' instability at the Earth's surface, such deposits are likely to have been eroded. All three mineralogical forms of sodium carbonate, as well as trona (trisodium hydrogendicarbonate dihydrate), are also known from ultra-alkaline pegmatitic rocks that occur, for example, in the Kola Peninsula in Russia. Extraterrestrially, known sodium carbonate is rare. Deposits have been identified as the source of bright spots on Ceres, interior material that has been brought to the surface. There are carbonates on Mars, and these are expected to include sodium carbonate, but deposits have yet to be confirmed; some explain this absence as being due to a global dominance of low pH in previously aqueous Martian soil. Production The initial large-scale chemical procedure was established in England in 1823 to manufacture soda ash. Mining Trona, also known as trisodium hydrogendicarbonate dihydrate (Na3HCO3CO3·2H2O), is mined in several areas of the US and provides nearly all the US consumption of sodium carbonate. Large natural deposits found in 1938, such as the one near Green River, Wyoming, have made mining more economical than industrial production in North America. There are important reserves of trona in Turkey; two million tons of soda ash have been extracted from the reserves near Ankara. Barilla and kelp Several "halophyte" (salt-tolerant) plant species and seaweed species can be processed to yield an impure form of sodium carbonate, and these sources predominated in Europe and elsewhere until the early 19th century. The land plants (typically glassworts or saltworts) or the seaweed (typically Fucus species) were harvested, dried, and burned. The ashes were then "lixiviated" (washed with water) to form an alkali solution.
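The integral enthalpy of solution quoted under Physical properties above (−28.1 kJ/mol for a 10% w/w solution) implies a noticeable warming on dissolution. The sketch below converts it into a rough adiabatic temperature rise, assuming the solution has approximately the heat capacity of water; that assumption is mine, not the article's.

```python
# Rough adiabatic temperature rise when dissolving soda ash to a 10% w/w solution.
M_NA2CO3 = 105.99               # g/mol
DH_SOLUTION_KJ_PER_MOL = -28.1  # from the text, for a 10% w/w solution
CP_WATER = 4.18                 # J/(g*K), applied to the whole solution (approximation)

def delta_t(mass_fraction: float = 0.10, total_mass_g: float = 1000.0) -> float:
    """Approximate temperature rise in kelvin for an insulated mixture."""
    moles = total_mass_g * mass_fraction / M_NA2CO3
    heat_j = -DH_SOLUTION_KJ_PER_MOL * 1000 * moles  # exothermic, so heat is released
    return heat_j / (total_mass_g * CP_WATER)

print(f"~{delta_t():.1f} K rise for 100 g Na2CO3 dissolved in 900 g water")  # ~6 K
```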
This solution was boiled dry to create the final product, which was termed "soda ash"; this very old name derives from the Arabic word soda, in turn applied to Salsola soda, one of the many species of seashore plants harvested for production. "Barilla" is a commercial term applied to an impure form of potash obtained from coastal plants or kelp. The sodium carbonate concentration in soda ash varied very widely, from 2–3 percent for the seaweed-derived form ("kelp"), to 30 percent for the best barilla produced from saltwort plants in Spain. Plant and seaweed sources for soda ash, and also for the related alkali "potash", became increasingly inadequate by the end of the 18th century, and the search for commercially viable routes to synthesizing soda ash from salt and other chemicals intensified. Leblanc process In 1792, the French chemist Nicolas Leblanc patented a process for producing sodium carbonate from salt, sulfuric acid, limestone, and coal. In the first step, sodium chloride is treated with sulfuric acid in the Mannheim process. This reaction produces sodium sulfate (salt cake) and hydrogen chloride: The salt cake and crushed limestone (calcium carbonate) was reduced by heating with coal. This conversion entails two parts. First is the carbothermic reaction whereby the coal, a source of carbon, reduces the sulfate to sulfide: The second stage is the reaction to produce sodium carbonate and calcium sulfide: This mixture is called black ash. The soda ash is extracted from the black ash with water. Evaporation of this extract yields solid sodium carbonate. This extraction process was termed lixiviating. The hydrochloric acid produced by the Leblanc process was a major source of air pollution, and the calcium sulfide byproduct also presented waste disposal issues. However, it remained the major production method for sodium carbonate until the late 1880s. Solvay process In 1861, the Belgian industrial chemist Ernest Solvay developed a method for making sodium carbonate by first reacting sodium chloride, ammonia, water, and carbon dioxide to generate sodium bicarbonate and ammonium chloride: The resulting sodium bicarbonate was then converted to sodium carbonate by heating it, releasing water and carbon dioxide: Meanwhile, the ammonia was regenerated from the ammonium chloride byproduct by treating it with the lime (calcium oxide) left over from carbon dioxide generation: The Solvay process recycles its ammonia. It consumes only brine and limestone, and calcium chloride is its only waste product. The process is substantially more economical than the Leblanc process, which generates two waste products, calcium sulfide and hydrogen chloride. The Solvay process quickly came to dominate sodium carbonate production worldwide. By 1900, 90% of sodium carbonate was produced by the Solvay process, and the last Leblanc process plant closed in the early 1920s. The second step of the Solvay process, heating sodium bicarbonate, is used on a small scale by home cooks and in restaurants to make sodium carbonate for culinary purposes (including pretzels and alkali noodles). The method is appealing to such users because sodium bicarbonate is widely sold as baking soda, and the temperatures required ( to ) to convert baking soda to sodium carbonate are readily achieved in conventional kitchen ovens. Hou's process This process was developed by Chinese chemist Hou Debang in the 1930s. 
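For reference, the individual Solvay steps described in prose above correspond to the following standard equations. They are supplied here from general chemistry because the original formulas did not survive extraction; they should be read as a summary consistent with the description, not as text from the source.

```latex
\begin{align*}
\text{Carbonation:}\quad & \mathrm{NaCl + NH_3 + CO_2 + H_2O \rightarrow NaHCO_3\downarrow + NH_4Cl}\\
\text{Calcination:}\quad & \mathrm{2\,NaHCO_3 \xrightarrow{\Delta} Na_2CO_3 + H_2O + CO_2}\\
\text{Ammonia recovery:}\quad & \mathrm{2\,NH_4Cl + CaO \rightarrow 2\,NH_3 + CaCl_2 + H_2O}\\
\text{Net:}\quad & \mathrm{2\,NaCl + CaCO_3 \rightarrow Na_2CO_3 + CaCl_2}
\end{align*}
```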
Carbon dioxide, a by-product of the earlier steam-reforming step, was pumped through a saturated solution of sodium chloride and ammonia to produce sodium bicarbonate by the same carbonation reaction used in the Solvay process (NaCl + CO2 + NH3 + H2O → NaHCO3 + NH4Cl). The sodium bicarbonate was collected as a precipitate due to its low solubility and then heated to yield pure sodium carbonate, similar to the last step of the Solvay process. More sodium chloride is added to the remaining solution of ammonium and sodium chlorides; also, more ammonia is pumped at 30–40 °C into this solution. The solution temperature is then lowered to below 10 °C. The solubility of ammonium chloride is higher than that of sodium chloride at 30 °C and lower at 10 °C. Due to this temperature-dependent solubility difference and the common-ion effect, ammonium chloride is precipitated from the sodium chloride solution. The Chinese name of Hou's process, lianhe zhijian fa (联合制碱法), means "coupled manufacturing alkali method": Hou's process is coupled to the Haber process and offers better atom economy by eliminating the production of calcium chloride, since the ammonia it consumes is supplied by the coupled process rather than being regenerated from ammonium chloride with lime. The by-product ammonium chloride can be sold as a fertilizer.
Physical sciences
Salts
null
155739
https://en.wikipedia.org/wiki/Biological%20pest%20control
Biological pest control
Biological control or biocontrol is a method of controlling pests, whether pest animals such as insects and mites, weeds, or pathogens affecting animals or plants by using other organisms. It relies on predation, parasitism, herbivory, or other natural mechanisms, but typically also involves an active human management role. It can be an important component of integrated pest management (IPM) programs. There are three basic strategies for biological control: classical (importation), where a natural enemy of a pest is introduced in the hope of achieving control; inductive (augmentation), in which a large population of natural enemies are administered for quick pest control; and inoculative (conservation), in which measures are taken to maintain natural enemies through regular reestablishment. Natural enemies of insects play an important part in limiting the densities of potential pests. Biological control agents such as these include predators, parasitoids, pathogens, and competitors. Biological control agents of plant diseases are most often referred to as antagonists. Biological control agents of weeds include seed predators, herbivores, and plant pathogens. Biological control can have side-effects on biodiversity through attacks on non-target species by any of the above mechanisms, especially when a species is introduced without a thorough understanding of the possible consequences. History The term "biological control" was first used by Harry Scott Smith at the 1919 meeting of the Pacific Slope Branch of the American Association of Economic Entomologists, in Riverside, California. It was brought into more widespread use by the entomologist Paul H. DeBach (1914–1993) who worked on citrus crop pests throughout his life. However, the practice has previously been used for centuries. The first report of the use of an insect species to control an insect pest comes from "Nanfang Caomu Zhuang" (南方草木狀 Plants of the Southern Regions) (), attributed to Western Jin dynasty botanist Ji Han (嵇含, 263–307), in which it is mentioned that "Jiaozhi people sell ants and their nests attached to twigs looking like thin cotton envelopes, the reddish-yellow ant being larger than normal. Without such ants, southern citrus fruits will be severely insect-damaged". The ants used are known as huang gan (huang = yellow, gan = citrus) ants (Oecophylla smaragdina). The practice was later reported by Ling Biao Lu Yi (late Tang dynasty or Early Five Dynasties), in Ji Le Pian by Zhuang Jisu (Southern Song dynasty), in the Book of Tree Planting by Yu Zhen Mu (Ming dynasty), in the book Guangdong Xing Yu (17th century), Lingnan by Wu Zhen Fang (Qing dynasty), in Nanyue Miscellanies by Li Diao Yuan, and others. Biological control techniques as we know them today started to emerge in the 1870s. During this decade, in the US, the Missouri State Entomologist C. V. Riley and the Illinois State Entomologist W. LeBaron began within-state redistribution of parasitoids to control crop pests. The first international shipment of an insect as a biological control agent was made by Charles V. Riley in 1873, shipping to France the predatory mites Tyroglyphus phylloxera to help fight the grapevine phylloxera (Daktulosphaira vitifoliae) that was destroying grapevines in France. The United States Department of Agriculture (USDA) initiated research in classical biological control following the establishment of the Division of Entomology in 1881, with C. V. Riley as Chief. 
The first importation of a parasitoidal wasp into the United States was that of the braconid Cotesia glomerata in 1883–1884, imported from Europe to control the invasive cabbage white butterfly, Pieris rapae. In 1888–1889 the vedalia beetle, Novius cardinalis, a lady beetle, was introduced from Australia to California to control the cottony cushion scale, Icerya purchasi. This had become a major problem for the newly developed citrus industry in California, but by the end of 1889, the cottony cushion scale population had already declined. This great success led to further introductions of beneficial insects into the US. In 1905 the USDA initiated its first large-scale biological control program, sending entomologists to Europe and Japan to look for natural enemies of the spongy moth, Lymantria dispar dispar, and the brown-tail moth, Euproctis chrysorrhoea, invasive pests of trees and shrubs. As a result, nine parasitoids (solitary wasps) of the spongy moth, seven of the brown-tail moth, and two predators of both moths became established in the US. Although the spongy moth was not fully controlled by these natural enemies, the frequency, duration, and severity of its outbreaks were reduced and the program was regarded as successful. This program also led to the development of many concepts, principles, and procedures for the implementation of biological control programs. Prickly pear cacti were introduced into Queensland, Australia as ornamental plants, starting in 1788. They quickly spread to cover over 25 million hectares of Australia by 1920, increasing by 1 million hectares per year. Digging, burning, and crushing all proved ineffective. Two control agents were introduced to help control the spread of the plant, the cactus moth Cactoblastis cactorum, and the scale insect Dactylopius. Between 1926 and 1931, tens of millions of cactus moth eggs were distributed around Queensland with great success, and by 1932, most areas of prickly pear had been destroyed. The first reported case of a classical biological control attempt in Canada involves the parasitoidal wasp Trichogramma minutum. Individuals were caught in New York State and released in Ontario gardens in 1882 by William Saunders, a trained chemist and first Director of the Dominion Experimental Farms, for controlling the invasive currantworm Nematus ribesii. Between 1884 and 1908, the first Dominion Entomologist, James Fletcher, continued introductions of other parasitoids and pathogens for the control of pests in Canada. Types of biological pest control There are three basic biological pest control strategies: importation (classical biological control), augmentation and conservation. Importation Importation or classical biological control involves the introduction of a pest's natural enemies to a new locale where they do not occur naturally. Early instances were often unofficial and not based on research, and some introduced species became serious pests themselves. To be most effective at controlling a pest, a biological control agent requires a colonizing ability which allows it to keep pace with changes to the habitat in space and time. Control is greatest if the agent has temporal persistence so that it can maintain its population even in the temporary absence of the target species, and if it is an opportunistic forager, enabling it to rapidly exploit a pest population. 
One of the earliest successes was in controlling Icerya purchasi (cottony cushion scale) in Australia, using a predatory insect, Rodolia cardinalis (the vedalia beetle). This success was repeated in California using the beetle and a parasitoidal fly, Cryptochaetum iceryae. Other successful cases include the control of Antonina graminis in Texas by Neodusmetia sangwani in the 1960s. Damage from Hypera postica, the alfalfa weevil, a serious introduced pest of forage, was substantially reduced by the introduction of natural enemies. Twenty years after their introduction, weevil populations in the areas of the Northeastern United States treated for alfalfa weevil remained 75 percent lower. Alligator weed was introduced to the United States from South America. It takes root in shallow water, interfering with navigation, irrigation, and flood control. The alligator weed flea beetle and two other biological controls were released in Florida, greatly reducing the amount of land covered by the plant. Another aquatic weed, the giant salvinia (Salvinia molesta), is a serious pest, covering waterways, reducing water flow and harming native species. Control with the salvinia weevil (Cyrtobagous salviniae) and the salvinia stem-borer moth (Samea multiplicalis) is effective in warm climates, and in Zimbabwe, a 99% control of the weed was obtained over a two-year period. Small, commercially reared parasitoidal wasps, Trichogramma ostriniae, provide limited and erratic control of the European corn borer (Ostrinia nubilalis), a serious pest. Careful formulations of the bacterium Bacillus thuringiensis are more effective. Integrated control of O. nubilalis, releasing Trichogramma brassicae (an egg parasitoid) followed by Bacillus thuringiensis subsp. kurstaki (which kills the larvae), reduces pest damage more than insecticide treatments. The population of Levuana iridescens, the Levuana moth, a serious coconut pest in Fiji, was brought under control by a classical biological control program in the 1920s. Augmentation Augmentation involves the supplemental release of natural enemies that occur in a particular area, boosting the naturally occurring populations there. In inoculative release, small numbers of the control agents are released at intervals to allow them to reproduce, in the hope of setting up longer-term control and thus keeping the pest down to a low level, constituting prevention rather than cure. In inundative release, in contrast, large numbers are released in the hope of rapidly reducing a damaging pest population, correcting a problem that has already arisen. Augmentation can be effective, but is not guaranteed to work, and depends on the precise details of the interactions between each pest and control agent. An example of inoculative release occurs in the horticultural production of several crops in greenhouses. Periodic releases of the parasitoidal wasp, Encarsia formosa, are used to control greenhouse whitefly, while the predatory mite Phytoseiulus persimilis is used for control of the two-spotted spider mite. The egg parasite Trichogramma is frequently released inundatively to control harmful moths. New methods for inundative releases, such as the use of drones, have been introduced. Egg parasitoids are able to find the eggs of the target host by means of several cues; kairomones, for example, have been found on moth scales. Similarly, Bacillus thuringiensis and other microbial insecticides are used in large enough quantities for a rapid effect.
Recommended release rates for Trichogramma in vegetable or field crops range from 5,000 to 200,000 per acre (1 to 50 per square metre) per week according to the level of pest infestation. Similarly, nematodes that kill insects (that are entomopathogenic) are released at rates of millions and even billions per acre for control of certain soil-dwelling insect pests. Conservation The conservation of existing natural enemies in an environment is the third method of biological pest control. Natural enemies are already adapted to the habitat and to the target pest, and their conservation can be simple and cost-effective, as when nectar-producing crop plants are grown in the borders of rice fields. These provide nectar to support parasitoids and predators of planthopper pests and have been demonstrated to be so effective (reducing pest densities by 10- or even 100-fold) that farmers sprayed 70% less insecticides and enjoyed yields boosted by 5%. Predators of aphids were similarly found to be present in tussock grasses by field boundary hedges in England, but they spread too slowly to reach the centers of fields. Control was improved by planting a meter-wide strip of tussock grasses in field centers, enabling aphid predators to overwinter there. Cropping systems can be modified to favor natural enemies, a practice sometimes referred to as habitat manipulation. Providing a suitable habitat, such as a shelterbelt, hedgerow, or beetle bank where beneficial insects such as parasitoidal wasps can live and reproduce, can help ensure the survival of populations of natural enemies. Things as simple as leaving a layer of fallen leaves or mulch in place provides a suitable food source for worms and provides a shelter for insects, in turn being a food source for such beneficial mammals as hedgehogs and shrews. Compost piles and stacks of wood can provide shelter for invertebrates and small mammals. Long grass and ponds support amphibians. Not removing dead annuals and non-hardy plants in the autumn allow insects to make use of their hollow stems during winter. In California, prune trees are sometimes planted in grape vineyards to provide an improved overwintering habitat or refuge for a key grape pest parasitoid. The providing of artificial shelters in the form of wooden caskets, boxes or flowerpots is also sometimes undertaken, particularly in gardens, to make a cropped area more attractive to natural enemies. For example, earwigs are natural predators that can be encouraged in gardens by hanging upside-down flowerpots filled with straw or wood wool. Green lacewings can be encouraged by using plastic bottles with an open bottom and a roll of cardboard inside. Birdhouses enable insectivorous birds to nest; the most useful birds can be attracted by choosing an opening just large enough for the desired species. In cotton production, the replacement of broad-spectrum insecticides with selective control measures such as Bt cotton can create a more favorable environment for natural enemies of cotton pests due to reduced insecticide exposure risk. Such predators or parasitoids can control pests not affected by the Bt protein. Reduced prey quality and abundance associated with increased control from Bt cotton can also indirectly decrease natural enemy populations in some cases, but the percentage of pests eaten or parasitized in Bt and non-Bt cotton are often similar. 
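The release rates quoted above can be cross-checked with a simple unit conversion (1 acre ≈ 4046.86 m², a standard factor not given in the text):

```python
# Convert Trichogramma release rates from per-acre to per-square-metre.
SQ_M_PER_ACRE = 4046.86  # standard conversion factor

def per_sq_metre(per_acre: float) -> float:
    return per_acre / SQ_M_PER_ACRE

for rate in (5_000, 200_000):
    print(f"{rate:>7} per acre ~ {per_sq_metre(rate):.0f} per m^2")
# 5,000/acre is about 1 per m^2 and 200,000/acre about 49 per m^2,
# consistent with the 1-50 per square metre range stated in the text.
```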
Biological control agents Predators Predators are mainly free-living species that directly consume a large number of prey during their whole lifetime. Given that many major crop pests are insects, many of the predators used in biological control are insectivorous species. Lady beetles, and in particular their larvae which are active between May and July in the northern hemisphere, are voracious predators of aphids, and also consume mites, scale insects and small caterpillars. The spotted lady beetle (Coleomegilla maculata) is also able to feed on the eggs and larvae of the Colorado potato beetle (Leptinotarsa decemlineata). The larvae of many hoverfly species principally feed upon aphids, one larva devouring up to 400 in its lifetime. Their effectiveness in commercial crops has not been studied. The running crab spider Philodromus cespitum also prey heavily on aphids, and act as a biological control agent in European fruit orchards. Several species of entomopathogenic nematode are important predators of insect and other invertebrate pests. Entomopathogenic nematodes form a stress–resistant stage known as the infective juvenile. These spread in the soil and infect suitable insect hosts. Upon entering the insect they move to the hemolymph where they recover from their stagnated state of development and release their bacterial symbionts. The bacterial symbionts reproduce and release toxins, which then kill the host insect. Phasmarhabditis hermaphrodita is a microscopic nematode that kills slugs. Its complex life cycle includes a free-living, infective stage in the soil where it becomes associated with a pathogenic bacteria such as Moraxella osloensis. The nematode enters the slug through the posterior mantle region, thereafter feeding and reproducing inside, but it is the bacteria that kill the slug. The nematode is available commercially in Europe and is applied by watering onto moist soil. Entomopathogenic nematodes have a limited shelf life because of their limited resistance to high temperature and dry conditions. The type of soil they are applied to may also limit their effectiveness. Species used to control spider mites include the predatory mites Phytoseiulus persimilis, Neoseilus californicus, and Amblyseius cucumeris, the predatory midge Feltiella acarisuga, and a ladybird Stethorus punctillum. The bug Orius insidiosus has been successfully used against the two-spotted spider mite and the western flower thrips (Frankliniella occidentalis). Predators including Cactoblastis cactorum (mentioned above) can also be used to destroy invasive plant species. As another example, the poison hemlock moth (Agonopterix alstroemeriana) can be used to control poison hemlock (Conium maculatum). During its larval stage, the moth strictly consumes its host plant, poison hemlock, and can exist at hundreds of larvae per individual host plant, destroying large swathes of the hemlock. For rodent pests, cats are effective biological control when used in conjunction with reduction of "harborage"/hiding locations. While cats are effective at preventing rodent "population explosions", they are not effective for eliminating pre-existing severe infestations. Barn owls are also sometimes used as biological rodent control. Although there are no quantitative studies of the effectiveness of barn owls for this purpose, they are known rodent predators that can be used in addition to or instead of cats; they can be encouraged into an area with nest boxes. 
In Honduras, where the mosquito Aedes aegypti was transmitting dengue fever and other infectious diseases, biological control was attempted by a community action plan; copepods, baby turtles, and juvenile tilapia were added to the wells and tanks where the mosquito breeds and the mosquito larvae were eliminated. Even amongst arthropods usually thought of as obligate predators of animals (especially other arthropods), floral food sources (nectar and to a lesser degree pollen) are often useful adjunct sources. It had been noticed in one study that adult Adalia bipunctata (predator and common biocontrol of Ephestia kuehniella) could survive on flowers but never completed its life cycle, so a meta-analysis was done to find such an overall trend in previously published data, if it existed. In some cases floral resources are outright necessary. Overall, floral resources (and an imitation, i.e. sugar water) increase longevity and fecundity, meaning even predatory population numbers can depend on non-prey food abundance. Thus biocontrol population maintenance – and success – may depend on nearby flowers. Parasitoids Parasitoids lay their eggs on or in the body of an insect host, which is then used as a food for developing larvae. The host is ultimately killed. Most insect parasitoids are wasps or flies, and many have a very narrow host range. The most important groups are the ichneumonid wasps, which mainly use caterpillars as hosts; braconid wasps, which attack caterpillars and a wide range of other insects including aphids; chalcidoid wasps, which parasitize eggs and larvae of many insect species; and tachinid flies, which parasitize a wide range of insects including caterpillars, beetle adults and larvae, and true bugs. Parasitoids are most effective at reducing pest populations when their host organisms have limited refuges to hide from them. Parasitoids are among the most widely used biological control agents. Commercially, there are two types of rearing systems: short-term daily output with high production of parasitoids per day, and long-term, low daily output systems. In most instances, production will need to be matched with the appropriate release dates when susceptible host species at a suitable phase of development will be available. Larger production facilities produce on a yearlong basis, whereas some facilities produce only seasonally. Rearing facilities are usually a significant distance from where the agents are to be used in the field, and transporting the parasitoids from the point of production to the point of use can pose problems. Shipping conditions can be too hot, and even vibrations from planes or trucks can adversely affect parasitoids. Encarsia formosa is a small parasitoid wasp attacking whiteflies, sap-feeding insects which can cause wilting and black sooty moulds in glasshouse vegetable and ornamental crops. It is most effective when dealing with low level infestations, giving protection over a long period of time. The wasp lays its eggs in young whitefly 'scales', turning them black as the parasite larvae pupate. Gonatocerus ashmeadi (Hymenoptera: Mymaridae) has been introduced to control the glassy-winged sharpshooter Homalodisca vitripennis (Hemiptera: Cicadellidae) in French Polynesia and has successfully controlled ~95% of the pest density. The eastern spruce budworm is an example of a destructive insect in fir and spruce forests. 
Birds are a natural form of biological control, but the Trichogramma minutum, a species of parasitic wasp, has been investigated as an alternative to more controversial chemical controls. There are a number of recent studies pursuing sustainable methods for controlling urban cockroaches using parasitic wasps. Since most cockroaches remain in the sewer system and sheltered areas which are inaccessible to insecticides, employing active-hunter wasps is a strategy to try and reduce their populations. Pathogens Pathogenic micro-organisms include bacteria, fungi, and viruses. They kill or debilitate their host and are relatively host-specific. Various microbial insect diseases occur naturally, but may also be used as biological pesticides. When naturally occurring, these outbreaks are density-dependent in that they generally only occur as insect populations become denser. The use of pathogens against aquatic weeds was unknown until a groundbreaking 1972 proposal by Zettler and Freeman. Up to that point biocontrol of any kind had not been used against any water weeds. In their review of the possibilities, they noted the lack of interest and information thus far, and listed what was known of pests-of-pests – whether pathogens or not. They proposed that this should be relatively straightfoward to apply in the same way as other biocontrols. And indeed in the decades since, the same biocontrol methods that are routine on land have become common in the water. Bacteria Bacteria used for biological control infect insects via their digestive tracts, so they offer only limited options for controlling insects with sucking mouth parts such as aphids and scale insects. Bacillus thuringiensis, a soil-dwelling bacterium, is the most widely applied species of bacteria used for biological control, with at least four sub-species used against Lepidopteran (moth, butterfly), Coleopteran (beetle) and Dipteran (true fly) insect pests. The bacterium is available to organic farmers in sachets of dried spores which are mixed with water and sprayed onto vulnerable plants such as brassicas and fruit trees. Genes from B. thuringiensis have also been incorporated into transgenic crops, making the plants express some of the bacterium's toxins, which are proteins. These confer resistance to insect pests and thus reduce the necessity for pesticide use. If pests develop resistance to the toxins in these crops, B. thuringiensis will become useless in organic farming also. The bacterium Paenibacillus popilliae which causes milky spore disease has been found useful in the control of Japanese beetle, killing the larvae. It is very specific to its host species and is harmless to vertebrates and other invertebrates. Bacillus spp., fluorescent Pseudomonads, and Streptomycetes are controls of various fungal pathogens. Colombia mosquito control The largest-ever deployment of Wolbachia-infected A. aegypti mosquitoes reduced dengue incidence by 94–97% in the Colombian cities of Bello, Medellín, and Itagüí. The project was executed by non-profit World Mosquito Program (WMP). Wolbachia prevents mosquitos from transmitting viruses such as dengue and zika. The insects pass the bacteria on to their offspring. The project covered a combined area of , home to 3.3 million people. Most of the project area reached the target of infecting 60% of local mosquitoes. The technique is not endorsed by WHO. Fungi Entomopathogenic fungi, which cause disease in insects, include at least 14 species that attack aphids. 
Beauveria bassiana is mass-produced and used to manage a wide variety of insect pests including whiteflies, thrips, aphids and weevils. Lecanicillium spp. are deployed against white flies, thrips and aphids. Metarhizium spp. are used against pests including beetles, locusts and other grasshoppers, Hemiptera, and spider mites. Paecilomyces fumosoroseus is effective against white flies, thrips and aphids; Purpureocillium lilacinus is used against root-knot nematodes, and 89 Trichoderma species against certain plant pathogens. Trichoderma viride has been used against Dutch elm disease, and has shown some effect in suppressing silver leaf, a disease of stone fruits caused by the pathogenic fungus Chondrostereum purpureum. Pathogenic fungi may be controlled by other fungi, or bacteria or yeasts, such as: Gliocladium spp., mycoparasitic Pythium spp., binucleate types of Rhizoctonia spp., and Laetisaria spp. The fungi Cordyceps and Metacordyceps are deployed against a wide spectrum of arthropods. Entomophaga is effective against pests such as the green peach aphid. Several members of Chytridiomycota and Blastocladiomycota have been explored as agents of biological control. From Chytridiomycota, Synchytrium solstitiale is being considered as a control agent of the yellow star thistle (Centaurea solstitialis) in the United States. Viruses Baculoviruses are specific to individual insect host species and have been shown to be useful in biological pest control. For example, the Lymantria dispar multicapsid nuclear polyhedrosis virus has been used to spray large areas of forest in North America where larvae of the spongy moth are causing serious defoliation. The moth larvae are killed by the virus they have eaten and die, the disintegrating cadavers leaving virus particles on the foliage to infect other larvae. A mammalian virus, the rabbit haemorrhagic disease virus was introduced to Australia to attempt to control the European rabbit populations there. It escaped from quarantine and spread across the country, killing large numbers of rabbits. Very young animals survived, passing immunity to their offspring in due course and eventually producing a virus-resistant population. Introduction into New Zealand in the 1990s was similarly successful at first, but a decade later, immunity had developed and populations had returned to pre-RHD levels. RNA mycoviruses are controls of various fungal pathogens. Oomycota Lagenidium giganteum is a water-borne mold that parasitizes the larval stage of mosquitoes. When applied to water, the motile spores avoid unsuitable host species and search out suitable mosquito larval hosts. This mold has the advantages of a dormant phase, resistant to desiccation, with slow-release characteristics over several years. Unfortunately, it is susceptible to many chemicals used in mosquito abatement programmes. Competitors The legume vine Mucuna pruriens is used in the countries of Benin and Vietnam as a biological control for problematic Imperata cylindrica grass: the vine is extremely vigorous and suppresses neighbouring plants by out-competing them for space and light. Mucuna pruriens is said not to be invasive outside its cultivated area. Desmodium uncinatum can be used in push-pull farming to stop the parasitic plant, witchweed (Striga). The Australian bush fly, Musca vetustissima, is a major nuisance pest in Australia, but native decomposers found in Australia are not adapted to feeding on cow dung, which is where bush flies breed. 
Therefore, the Australian Dung Beetle Project (1965–1985), led by George Bornemissza of the Commonwealth Scientific and Industrial Research Organisation, released forty-nine species of dung beetle to reduce the amount of dung and therefore also the potential breeding sites of the fly. Combined use of parasitoids and pathogens In cases of massive and severe infection of invasive pests, techniques of pest control are often used in combination. An example is the emerald ash borer, Agrilus planipennis, an invasive beetle from China, which has destroyed tens of millions of ash trees in its introduced range in North America. As part of the campaign against it, from 2003 American scientists and the Chinese Academy of Forestry searched for its natural enemies in the wild, leading to the discovery of several parasitoid wasps, namely Tetrastichus planipennisi, a gregarious larval endoparasitoid, Oobius agrili, a solitary, parthenogenic egg parasitoid, and Spathius agrili, a gregarious larval ectoparasitoid. These have been introduced and released into the United States of America as a possible biological control of the emerald ash borer. Initial results for Tetrastichus planipennisi have shown promise, and it is now being released along with Beauveria bassiana, a fungal pathogen with known insecticidal properties. Secondary plants In addition, biological pest control sometimes makes use of plant defenses to reduce crop damage by herbivores. Techniques include polyculture, the planting together of two or more species such as a primary crop and a secondary plant, which may also be a crop. This can allow the secondary plant's defensive chemicals to protect the crop planted with it. Target pests Fungal pests Botrytis cinerea is controlled on lettuce by Fusarium spp. and Penicillium claviforme, on grape and strawberry by Trichoderma spp., on strawberry by Cladosporium herbarum, on Chinese cabbage by Bacillus brevis, and on various other crops by various yeasts and bacteria. Sclerotinia sclerotiorum is controlled by several fungal biocontrols. Fungal pod infection of snap bean is controlled by Trichoderma hamatum if applied before or concurrently with infection. Cryphonectria parasitica, Gaeumannomyces graminis, Sclerotinia spp., and Ophiostoma novo-ulmi are controlled by viruses. Various powdery mildews and rusts are controlled by various Bacillus spp. and fluorescent Pseudomonads. Colletotrichum orbiculare will suppress further infection by itself if manipulated to produce plant-induced systemic resistance by infecting the lowest leaf. Difficulties Many of the most important pests are exotic, invasive species that severely impact agriculture, horticulture, forestry, and urban environments. They tend to arrive without their co-evolved parasites, pathogens and predators, and by escaping from these, populations may soar. Importing the natural enemies of these pests may seem a logical move but this may have unintended consequences; regulations may be ineffective and there may be unanticipated effects on biodiversity, and the adoption of the techniques may prove challenging because of a lack of knowledge among farmers and growers. Side effects Biological control can affect biodiversity through predation, parasitism, pathogenicity, competition, or other attacks on non-target species. An introduced control does not always target only the intended pest species; it can also target native species. In Hawaii during the 1940s, parasitic wasps were introduced to control a lepidopteran pest and the wasps are still found there today.
This may have a negative impact on the native ecosystem; however, the host range and actual impacts of an introduced agent need to be studied before declaring its effect on the environment. Vertebrate animals tend to be generalist feeders, and seldom make good biological control agents; many of the classic cases of "biocontrol gone awry" involve vertebrates. For example, the cane toad (Rhinella marina) was intentionally introduced to Australia to control the greyback cane beetle (Dermolepida albohirtum), and other pests of sugar cane. A total of 102 toads were obtained from Hawaii and bred in captivity to increase their numbers until they were released into the sugar cane fields of the tropical north in 1935. It was later discovered that the toads could not jump very high and so were unable to eat the cane beetles, which stayed on the upper stalks of the cane plants. However, the toad thrived by feeding on other insects and soon spread very rapidly; it took over native amphibian habitat and brought foreign disease to native toads and frogs, dramatically reducing their populations. Also, when it is threatened or handled, the cane toad releases poison from parotoid glands on its shoulders; native Australian species such as goannas, tiger snakes, dingos and northern quolls that attempted to eat the toad were harmed or killed. However, there has been some recent evidence that native predators are adapting, both physiologically and through changing their behaviour, so in the long run, their populations may recover. Rhinocyllus conicus, a seed-feeding weevil, was introduced to North America to control exotic musk thistle (Carduus nutans) and Canadian thistle (Cirsium arvense). However, the weevil also attacks native thistles, harming such species as the endemic Platte thistle (Cirsium canescens) by selecting larger plants (which reduced the gene pool), reducing seed production and ultimately threatening the species' survival. Similarly, the weevil Larinus planus was also used to try to control the Canadian thistle, but it damaged other thistles as well. This included one species classified as threatened. The small Asian mongoose (Herpestes javanicus) was introduced to Hawaii in order to control the rat population. However, the mongoose was diurnal, and the rats emerged at night; the mongoose therefore preyed on the endemic birds of Hawaii, especially their eggs, more often than it ate the rats, and now both rats and mongooses threaten the birds. This introduction was undertaken without understanding the consequences of such an action. No regulations existed at the time, and more careful evaluation should prevent such releases now. The sturdy and prolific eastern mosquitofish (Gambusia holbrooki) is a native of the southeastern United States and was introduced around the world in the 1930s and '40s to feed on mosquito larvae and thus combat malaria. However, it has thrived at the expense of local species, causing a decline of endemic fish and frogs through competition for food resources, as well as through eating their eggs and larvae. In Australia, control of the mosquitofish is the subject of discussion; in 1989 researchers A. H. Arthington and L. L. Lloyd stated that "biological population control is well beyond present capabilities". Grower education A potential obstacle to the adoption of biological pest control measures is that growers may prefer to stay with the familiar use of pesticides. 
However, pesticides have undesired effects, including the development of resistance among pests and the destruction of natural enemies; these may in turn enable outbreaks of pests of other species than the ones originally targeted, and on crops at a distance from those treated with pesticides. One method of increasing grower adoption of biocontrol methods involves letting them learn by doing, for example by showing them simple field experiments, enabling them to observe the live predation of pests, or giving demonstrations of parasitised pests. In the Philippines, early-season sprays against leaf folder caterpillars were common practice, but growers were asked to follow a 'rule of thumb' of not spraying against leaf folders for the first 30 days after transplanting; participation in this resulted in a reduction of insecticide use by one-third and a change in grower perception of insecticide use. Related techniques Related to biological pest control is the technique of introducing sterile individuals into the native population of some organism. This technique is widely practised with insects: a large number of males sterilized by radiation are released into the environment, where they proceed to compete with the native males for females. Those females that copulate with the sterile males will lay infertile eggs, resulting in a decrease in the size of the population. Over time, with repeated introductions of sterile males, this could result in a significant decrease in the size of the organism's population. A similar technique has recently been applied to weeds using irradiated pollen, resulting in deformed seeds that do not sprout.
Technology
Pest and disease control
null
155747
https://en.wikipedia.org/wiki/Travel
Travel
Travel is the movement of people between distant geographical locations. Travel can be done on foot or by bicycle, automobile, train, boat, bus, airplane, ship or other means, with or without luggage, and can be one way or round trip. Travel can also include relatively short stays between successive movements, as in the case of tourism. Etymology The origin of the word "travel" is most likely lost to history. The term "travel" may originate from the Old French word travail, which means 'work'. According to the Merriam-Webster dictionary, the first known use of the word travel was in the 14th century. It also states that the word comes from Middle English travailen, travelen (which means to torment, labor, strive, journey) and earlier from Old French travailler (which means to work strenuously, toil). In English, people still occasionally use the word travail, which means struggle. According to Simon Winchester in his book The Best Travelers' Tales (2004), the words travel and travail both share an even more ancient root: a Roman instrument of torture called the tripalium (in Latin it means "three stakes", as in to impale). This link may reflect the extreme difficulty of travel in ancient times. Travel in modern times may or may not be much easier, depending upon the destination. Travel to Mount Everest, the Amazon rainforest, extreme tourism, and adventure travel are more difficult forms of travel. Travel can also be more difficult depending on the method of travel, such as by bus, cruise ship, or even by bullock cart. Purpose and motivation Reasons for traveling include recreation, holidays, rejuvenation, tourism or vacationing, research travel, the gathering of information, visiting people, volunteer travel for charity, migration to begin life somewhere else, religious pilgrimages and mission trips, business travel, trade, commuting, obtaining health care, waging or fleeing war, for the enjoyment of traveling, or other reasons. Travelers may use human-powered transport such as walking or bicycling; or vehicles, such as public transport, automobiles, trains, ferries, boats, cruise ships and airplanes. Motives for travel include pleasure, relaxation, discovery and exploration, adventure, intercultural communication, taking personal time for building interpersonal relationships, avoiding stress, forming memories, cultural experiences, volunteering, and festivals and events. History Travel dates back to antiquity, when wealthy Greeks and Romans would travel for leisure to their summer homes and villas in cities such as Pompeii and Baiae. While early travel tended to be slower, more dangerous, and more dominated by trade and migration, cultural and technological advances over many years have tended to mean that travel has become easier and more accessible. Humankind has come a long way in transportation since Christopher Columbus sailed to the New World from Spain in 1492, an expedition which took over 10 weeks to arrive at its final destination; in the 21st century, aircraft allow travel from Spain to the United States overnight. Travel in the Middle Ages offered hardships and challenges, though it was important to the economy and to society. 
The wholesale sector depended (for example) on merchants dealing with/through caravans or sea-voyagers, end-user retailing often demanded the services of many itinerant peddlers wandering from village to hamlet, gyrovagues (wandering monks) and wandering friars brought theology and pastoral support to neglected areas, traveling minstrels toured, and armies ranged far and wide in various crusades and in sundry other wars. Pilgrimages were common in both the European and Islamic world and involved streams of travelers both locally and internationally. In the late 16th century, it became fashionable for young European aristocrats and wealthy upper-class men to travel to significant European cities as part of their education in the arts and literature. This was known as the Grand Tour, and included cities such as London, Paris, Venice, Florence, and Rome. However, the French Revolution brought with it the end of the Grand Tour. Travel by water often provided more comfort and speed than land-travel, at least until the advent of a network of railways in the 19th century. Travel for the purpose of tourism is reported to have started around this time, when people began to travel for fun as travel was no longer a hard and challenging task. This was capitalized on by people like Thomas Cook, who sold tourism packages in which trains and hotels were booked together. Airships and airplanes took over much of the role of long-distance surface travel in the 20th century, notably after the Second World War, when there was a surplus of both aircraft and pilots. Air travel has become so ubiquitous in the 21st century that one woman, Alexis Alford, visited all 196 countries before the age of 21. Geographic types Travel may be local, regional, national (domestic) or international. In some countries, non-local internal travel may require an internal passport, while international travel typically requires a passport and visa. Tours are a common type of travel. Examples of travel tours are expedition cruises, small group tours, and river cruises. Safety Authorities emphasize the importance of taking precautions to ensure travel safety. When traveling abroad, the odds favor a safe and incident-free trip; however, travelers can be subject to difficulties, crime and violence. Some safety considerations include being aware of one's surroundings, avoiding being the target of a crime, leaving copies of one's passport and itinerary information with trusted people, obtaining medical insurance valid in the country being visited and registering with one's national embassy when arriving in a foreign country. Many countries do not recognize drivers' licenses from other countries; however, most countries accept international driving permits. Automobile insurance policies issued in one's own country are often invalid in foreign countries, and it is often a requirement to obtain temporary auto insurance valid in the country being visited. It is also advisable to become familiar with the driving rules and regulations of destination countries. Wearing a seat belt is highly advisable for safety reasons; many countries have penalties for violating seatbelt laws. There are three main statistics which may be used to compare the safety of various forms of travel (based on a Department of the Environment, Transport and the Regions survey in October 2000): deaths per billion journeys, deaths per billion hours travelled, and deaths per billion kilometres travelled.
Technology
Basics_11
null
155750
https://en.wikipedia.org/wiki/ATPase
ATPase
ATPases (Adenosine 5'-TriPhosphatase, adenylpyrophosphatase, ATP monophosphatase, triphosphatase, SV40 T-antigen, ATP hydrolase, complex V (mitochondrial electron transport), (Ca2+ + Mg2+)-ATPase, HCO3−-ATPase, adenosine triphosphatase) are a class of enzymes that catalyze the decomposition of ATP into ADP and a free phosphate ion, or the inverse reaction. This dephosphorylation reaction releases energy, which the enzyme (in most cases) harnesses to drive other chemical reactions that would not otherwise occur. This process is widely used in all known forms of life. Some such enzymes are integral membrane proteins (anchored within biological membranes), and move solutes across the membrane, typically against their concentration gradient. These are called transmembrane ATPases. Functions Transmembrane ATPases import metabolites necessary for cell metabolism and export toxins, wastes, and solutes that can hinder cellular processes. An important example is the sodium-potassium pump (Na+/K+ATPase) that maintains the cell membrane potential. Another example is the hydrogen potassium ATPase (H+/K+ATPase or gastric proton pump) that acidifies the contents of the stomach. ATPase is genetically conserved in animals; therefore cardenolides, toxic steroids produced by plants that act on ATPases, make general and effective animal toxins that act dose-dependently. Besides exchangers, other categories of transmembrane ATPase include co-transporters and pumps (however, some exchangers are also pumps). Some of these, like the Na+/K+ATPase, cause a net flow of charge, but others do not. These are called electrogenic transporters and electroneutral transporters, respectively. Structure The Walker motifs are a telltale protein sequence motif for nucleotide binding and hydrolysis. Beyond this broad function, the Walker motifs can be found in almost all natural ATPases, with the notable exception of tyrosine kinases. The Walker motifs commonly form a beta sheet–turn–alpha helix that is self-organized as a nest (protein structural motif). This is thought to be because modern ATPases evolved from small NTP-binding peptides that had to be self-organized. Protein design has been able to replicate the ATPase function (weakly) without using natural ATPase sequences or structures. Importantly, while all natural ATPases have some beta-sheet structure, the designed "Alternative ATPase" lacks beta-sheet structure, demonstrating that this life-essential function is possible with sequences and structures not found in nature. Mechanism The F0F1-ATPase (also called F0F1-ATP synthase) is a charge-transferring complex that couples the synthesis or hydrolysis of ATP to the movement of ions through the membrane. The coupling of ATP hydrolysis and transport is a chemical reaction in which a fixed number of solute molecules are transported for each ATP molecule hydrolyzed; for the Na+/K+ exchanger, this is three Na+ ions out of the cell and two K+ ions into the cell per ATP molecule hydrolyzed. Transmembrane ATPases make use of ATP's chemical potential energy by performing mechanical work: they transport solutes in the opposite direction of their thermodynamically preferred direction of movement—that is, from the side of the membrane with low concentration to the side with high concentration. This process is referred to as active transport. For instance, inhibiting vesicular H+-ATPases would result in a rise in the pH within vesicles and a drop in the pH of the cytoplasm. All of the ATPases share a common basic structure. 
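The transport stoichiometry described above for the Na+/K+ pump can be checked against the energy available from ATP hydrolysis. The Python sketch below uses assumed textbook-style ion concentrations, a typical membrane potential, and an assumed in-vivo ATP free energy; none of these numbers come from this article, and they are included only to show that the 3 Na+ / 2 K+ cycle is energetically plausible.

```python
# Rough energetic check of the Na+/K+ pump stoichiometry described above
# (3 Na+ out, 2 K+ in per ATP). All concentrations, the membrane potential,
# and the ATP free energy below are illustrative textbook-style assumptions.
import math

R = 8.314          # J/(mol*K), gas constant
T = 310.0          # K, body temperature
F = 96485.0        # C/mol, Faraday constant
Vm = -0.07         # V, assumed membrane potential (inside negative)

def transport_dG(z, c_from, c_to, dV):
    """Free energy (J/mol) to move one mole of an ion of charge z
    from a compartment at concentration c_from to one at c_to,
    across a potential difference dV (destination minus origin)."""
    return R * T * math.log(c_to / c_from) + z * F * dV

# Assumed typical concentrations (mol/L): Na+ 12 mM in / 145 mM out, K+ 140 mM in / 4 mM out
dG_Na = 3 * transport_dG(+1, c_from=0.012, c_to=0.145, dV=+0.07)  # 3 Na+ moved out
dG_K  = 2 * transport_dG(+1, c_from=0.004, c_to=0.140, dV=-0.07)  # 2 K+ moved in

total = dG_Na + dG_K
print(f"Cost to pump 3 Na+ out and 2 K+ in: about {total/1000:.1f} kJ per mol ATP")
print("Assumed free energy from ATP hydrolysis in vivo: roughly -50 to -60 kJ/mol")
```

With these assumed values the pumping cycle costs on the order of 40–45 kJ per mole of ATP, comfortably below the energy an ATP hydrolysis event can supply.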
Each rotary ATPase is composed of two major components: F0/A0/V0 and F1/A1/V1. They are connected by 1-3 stalks to maintain stability, control rotation, and prevent them from rotating in the other direction. One stalk is utilized to transmit torque. The number of peripheral stalks is dependent on the type of ATPase: F-ATPases have one, A-ATPases have two, and V-ATPases have three. The F1 catalytic domain is located on the N-side (negative side) of the membrane, is involved in the synthesis and degradation of ATP, and takes part in oxidative phosphorylation. The F0 transmembrane domain is involved in the movement of ions across the membrane. The bacterial F0F1-ATPase consists of the soluble F1 domain and the transmembrane F0 domain, which is composed of several subunits with varying stoichiometry. Two subunits, γ and ε, form the central stalk and are linked to F0. F0 contains a c-subunit oligomer in the shape of a ring (the c-ring). The a subunit is close to the b2 subunit, and together they make up the stalk that connects the transmembrane subunits to the α3β3 and δ subunits. F-ATP synthases are identical in appearance and function except for the mitochondrial F0F1-ATP synthase, which contains 7-9 additional subunits. The electrochemical potential is what causes the c-ring to rotate in a clockwise direction for ATP synthesis. This causes the central stalk and the catalytic domain to change shape. A full rotation of the c-ring produces three ATP molecules as H+ moves from the P-side (positive side) of the membrane to the N-side (negative side). The counterclockwise rotation of the c-ring is driven by ATP hydrolysis, and ions move from the N-side to the P-side, which helps to build up electrochemical potential. Transmembrane ATP synthases The ATP synthase of mitochondria and chloroplasts is an anabolic enzyme that harnesses the energy of a transmembrane proton gradient as an energy source for adding an inorganic phosphate group to a molecule of adenosine diphosphate (ADP) to form a molecule of adenosine triphosphate (ATP). This enzyme works when a proton moves down the concentration gradient, giving the enzyme a spinning motion. This unique spinning motion bonds ADP and inorganic phosphate (Pi) together to create ATP. ATP synthase can also function in reverse, that is, use energy released by ATP hydrolysis to pump protons against their electrochemical gradient. Classification There are different types of ATPases, which can differ in function (ATP synthesis and/or hydrolysis), structure (F-, V- and A-ATPases contain rotary motors) and in the type of ions they transport. Rotary ATPases F-ATPases (F1FO-ATPases) in mitochondria, chloroplasts and bacterial plasma membranes are the prime producers of ATP, using the proton gradient generated by oxidative phosphorylation (mitochondria) or photosynthesis (chloroplasts). F-ATPases lacking a delta/OSCP subunit move sodium ions instead. They are proposed to be called N-ATPases, since they seem to form a distinct group that is further apart from usual F-ATPases than A-ATPases are from V-ATPases. V-ATPases (V1VO-ATPases) are primarily found in eukaryotic vacuoles, catalysing ATP hydrolysis to transport solutes and lower pH in organelles, as in the proton pump of the lysosome. A-ATPases (A1AO-ATPases) are found in Archaea and some extremophilic bacteria. They are arranged like V-ATPases, but function like F-ATPases, mainly as ATP synthases. Many homologs that are not necessarily rotary also exist. 
P-ATPases (E1E2-ATPases) are found in bacteria, fungi and in eukaryotic plasma membranes and organelles, and function to transport a variety of different ions across membranes. E-ATPases are cell-surface enzymes that hydrolyze a range of NTPs, including extracellular ATP. Examples include ecto-ATPases, CD39s, and ecto-ATP/Dases, all of which are members of a "GDA1 CD39" superfamily. AAA proteins are a family of ring-shaped P-loop NTPases. P-ATPase P-ATPases (sometimes known as E1-E2 ATPases) are found in bacteria and also in eukaryotic plasma membranes and organelles. Their name is due to the transient attachment of inorganic phosphate to an aspartate residue at the time of activation. The function of P-ATPases is to transport a variety of different compounds, such as ions and phospholipids, across a membrane using ATP hydrolysis for energy. There are many different classes of P-ATPases, each of which transports a specific type of ion. P-ATPases may be composed of one or two polypeptides, and can usually take two main conformations, E1 and E2. Human genes Na+/K+ transporting: ATP1A1, ATP1A2, ATP1A3, ATP1A4, ATP1B1, ATP1B2, ATP1B3, ATP1B4 Ca++ transporting: ATP2A1, ATP2A2, ATP2A3, ATP2B1, ATP2B2, ATP2B3, ATP2B4, ATP2C1, ATP2C2 Mg++ transporting: ATP3 H+/K+ exchanging: ATP4A H+ transporting, mitochondrial: ATP5A1, ATP5B, ATP5C1, ATP5C2, ATP5D, ATP5E, ATP5F1, ATP5MC1, ATP5G2, ATP5G3, ATP5H, ATP5I, ATP5J, ATP5J2, ATP5L, ATP5L2, ATP5O, ATP5S, MT-ATP6, MT-ATP8 H+ transporting, lysosomal: ATP6AP1, ATP6AP2, ATP6V1A, ATP6V1B1, ATP6V1B2, ATP6V1C1, ATP6V1C2, ATP6V1D, ATP6V1E1, ATP6V1E2, ATP6V1F, ATP6V1G1, ATP6V1G2, ATP6V1G3, ATP6V1H, ATP6V0A1, ATP6V0A2, ATP6V0A4, ATP6V0B, ATP6V0C, ATP6V0D1, ATP6V0D2, ATP6V0E Cu++ transporting: ATP7A, ATP7B Class I, type 8: ATP8A1, ATP8B1, ATP8B2, ATP8B3, ATP8B4 Class II, type 9: ATP9A, ATP9B Class V, type 10: ATP10A, ATP10B, ATP10D Class VI, type 11: ATP11A, ATP11B, ATP11C H+/K+ transporting, nongastric: ATP12A type 13: ATP13A1, ATP13A2, ATP13A3, ATP13A4, ATP13A5
Biology and health sciences
Cell processes
Biology
155758
https://en.wikipedia.org/wiki/Gravity%20assist
Gravity assist
In orbital mechanics, a gravity assist, gravity assist maneuver, swing-by, or gravitational slingshot is a type of spaceflight flyby which makes use of the relative movement (e.g. orbit around the Sun) and gravity of a planet or other astronomical object to alter the path and speed of a spacecraft, typically to save propellant and reduce expense. Gravity assistance can be used to accelerate a spacecraft, that is, to increase or decrease its speed or redirect its path. The "assist" is provided by the motion of the gravitating body as it pulls on the spacecraft. Any gain or loss of kinetic energy and linear momentum by a passing spacecraft is correspondingly lost or gained by the gravitational body, in accordance with Newton's Third Law. The gravity assist maneuver was first used in 1959 when the Soviet probe Luna 3 photographed the far side of Earth's Moon, and it was used by interplanetary probes from Mariner 10 onward, including the two Voyager probes' notable flybys of Jupiter and Saturn. Explanation A gravity assist around a planet changes a spacecraft's velocity (relative to the Sun) by entering and leaving the gravitational sphere of influence of a planet. The sum of the kinetic energies of both bodies remains constant (see elastic collision). A slingshot maneuver can therefore be used to change the spaceship's trajectory and speed relative to the Sun. A close terrestrial analogy is provided by a tennis ball bouncing off the front of a moving train. Imagine standing on a train platform, and throwing a ball at 30 km/h toward a train approaching at 50 km/h. The driver of the train sees the ball approaching at 80 km/h and then departing at 80 km/h after the ball bounces elastically off the front of the train. Because of the train's motion, however, that departure is at 130 km/h relative to the train platform; the ball has added twice the train's velocity to its own. Translating this analogy into space: in the planet reference frame, the spaceship has a vertical velocity of v relative to the planet. After the slingshot occurs the spaceship is leaving on a course 90 degrees to that which it arrived on. It will still have a velocity of v, but in the horizontal direction. In the Sun reference frame, the planet has a horizontal velocity of v, and by using the Pythagorean theorem, the spaceship initially has a total velocity of √2·v. After the spaceship leaves the planet, it will have a velocity of v + v = 2v, gaining approximately 0.6v. This oversimplified example cannot be refined without additional details regarding the orbit, but if the spaceship travels in a path which forms a hyperbola, it can leave the planet in the opposite direction without firing its engine. This example is one of many trajectories and gains of speed the spaceship can experience. This explanation might seem to violate the conservation of energy and momentum, apparently adding velocity to the spacecraft out of nothing, but the spacecraft's effects on the planet must also be taken into consideration to provide a complete picture of the mechanics involved. The linear momentum gained by the spaceship is equal in magnitude to that lost by the planet, so the spacecraft gains velocity and the planet loses velocity. However, the planet's enormous mass compared to the spacecraft makes the resulting change in its speed negligibly small even when compared to the orbital perturbations planets undergo due to interactions with other celestial bodies on astronomically short timescales. 
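The velocity bookkeeping in this simplified 90-degree example can be checked directly. The Python sketch below (with purely illustrative numbers) rotates the planet-relative velocity by the turn angle and transforms back to the Sun's frame; the roughly 0.6v gain quoted above falls out of the arithmetic.

```python
# Minimal sketch of the simplified 90-degree flyby described above: the spacecraft
# approaches with speed v relative to the planet and leaves at the same speed,
# rotated 90 degrees, in the planet's frame. Transforming back to the Sun's frame
# shows the net gain of roughly 0.6*v. Numbers here are illustrative.
import math

def heliocentric_speeds(v, turn_deg=90.0):
    planet_velocity = (v, 0.0)        # planet moves "horizontally" at v in the Sun frame
    incoming_rel = (0.0, -v)          # spacecraft approaches "vertically" at v, planet frame
    theta = math.radians(turn_deg)
    # Rotate the planet-relative velocity by the turn angle (speed unchanged):
    outgoing_rel = (incoming_rel[0] * math.cos(theta) - incoming_rel[1] * math.sin(theta),
                    incoming_rel[0] * math.sin(theta) + incoming_rel[1] * math.cos(theta))
    v_in  = math.hypot(incoming_rel[0] + planet_velocity[0], incoming_rel[1] + planet_velocity[1])
    v_out = math.hypot(outgoing_rel[0] + planet_velocity[0], outgoing_rel[1] + planet_velocity[1])
    return v_in, v_out

v = 10.0  # arbitrary units
v_in, v_out = heliocentric_speeds(v)
print(f"before: {v_in:.2f} (= sqrt(2)*v), after: {v_out:.2f}, gain: {v_out - v_in:.2f} (~0.6*v)")
```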
As a concrete illustration of how small the planet's change is: one metric ton is a typical mass for an interplanetary space probe, whereas Jupiter has a mass of almost 2 × 10²⁴ metric tons. Therefore, a one-ton spacecraft passing Jupiter will theoretically cause the planet to lose approximately 5 × 10⁻²⁵ km/s of orbital velocity for every km/s of velocity relative to the Sun gained by the spacecraft. For all practical purposes, the effects on the planet can be ignored in the calculation. Realistic portrayals of encounters in space require the consideration of three dimensions. The same principles apply as above, except that adding the planet's velocity to that of the spacecraft requires vector addition. Due to the reversibility of orbits, gravitational slingshots can also be used to reduce the speed of a spacecraft. Both Mariner 10 and MESSENGER performed this maneuver to reach Mercury. If more speed is needed than available from gravity assist alone, a rocket burn near the periapsis (closest planetary approach) uses the least fuel. A given rocket burn always provides the same change in velocity (Δv), but the change in kinetic energy is proportional to the vehicle's velocity at the time of the burn. Therefore the maximum kinetic energy is obtained when the burn occurs at the vehicle's maximum velocity (periapsis). The Oberth effect describes this technique in more detail. Historical origins In his paper "To Those Who Will Be Reading in Order to Build", published in 1938 but dated 1918–1919, Yuri Kondratyuk suggested that a spacecraft traveling between two planets could be accelerated at the beginning and end of its trajectory by using the gravity of the two planets' moons. The portion of his manuscript considering gravity assists received no later development and was not published until the 1960s. In his 1925 paper "Problems of Flight by Jet Propulsion: Interplanetary Flights", Friedrich Zander showed a deep understanding of the physics behind the concept of gravity assist and its potential for the interplanetary exploration of the solar system. The Italian engineer Gaetano Crocco was the first to calculate an interplanetary journey considering multiple gravity assists. The gravity assist maneuver was first used in 1959 when the Soviet probe Luna 3 photographed the far side of the Moon. The maneuver relied on research performed under the direction of Mstislav Keldysh at the Keldysh Institute of Applied Mathematics. In 1961, Michael Minovitch, a UCLA graduate student who worked at NASA's Jet Propulsion Laboratory (JPL), developed a gravity assist technique that would later be used for Gary Flandro's Planetary Grand Tour idea. During the summer of 1964 at NASA JPL, Gary Flandro was assigned the task of studying techniques for exploring the outer planets of the solar system. In this study he discovered the rare alignment of the outer planets (Jupiter, Saturn, Uranus, and Neptune) and conceived the Planetary Grand Tour multi-planet mission utilizing gravity assists to reduce the mission duration from forty years to less than ten. Purpose A spacecraft traveling from Earth to an inner planet will increase its relative speed because it is falling toward the Sun, and a spacecraft traveling from Earth to an outer planet will decrease its speed because it is leaving the vicinity of the Sun. 
Although the orbital speed of an inner planet is greater than that of the Earth, a spacecraft traveling to an inner planet, even at the minimum speed needed to reach it, is still accelerated by the Sun's gravity to a speed notably greater than the orbital speed of that destination planet. If the spacecraft's purpose is only to fly by the inner planet, then there is typically no need to slow the spacecraft. However, if the spacecraft is to be inserted into orbit about that inner planet, then there must be some way to slow it down. Similarly, while the orbital speed of an outer planet is less than that of the Earth, a spacecraft leaving the Earth at the minimum speed needed to travel to some outer planet is slowed by the Sun's gravity to a speed far less than the orbital speed of that outer planet. Therefore, there must be some way to accelerate the spacecraft when it reaches that outer planet if it is to enter orbit about it. Rocket engines can certainly be used to increase and decrease the speed of the spacecraft. However, rocket thrust takes propellant, propellant has mass, and even a small change in velocity (known as Δv, or "delta-v", the delta symbol being used to represent a change and "v" signifying velocity) translates to a far larger requirement for propellant needed to escape Earth's gravity well. This is because the primary-stage engines must lift not only the extra propellant but also the further propellant needed to lift that extra propellant. The liftoff mass requirement increases exponentially with an increase in the required delta-v of the spacecraft. Because additional fuel is needed to lift fuel into space, space missions are designed with a tight propellant "budget", known as the "delta-v budget". The delta-v budget is in effect the total propellant that will be available after leaving the Earth, for speeding up, slowing down, stabilization against external buffeting (by particles or other external effects), or direction changes, if it cannot acquire more propellant. The entire mission must be planned within that capability. Therefore, methods of speed and direction change that do not require fuel to be burned are advantageous, because they allow extra maneuvering capability and course enhancement without spending fuel from the limited amount which has been carried into space. Gravity assist maneuvers can greatly change the speed of a spacecraft without expending propellant, so they are a very common fuel-saving technique. Limits The main practical limit to the use of a gravity assist maneuver is that planets and other large masses are seldom in the right places to enable a voyage to a particular destination. For example, the Voyager missions which started in the late 1970s were made possible by the "Grand Tour" alignment of Jupiter, Saturn, Uranus and Neptune. A similar alignment will not occur again until the middle of the 22nd century. That is an extreme case, but even for less ambitious missions there are years when the planets are scattered in unsuitable parts of their orbits. Another limitation is the atmosphere, if any, of the available planet. The closer the spacecraft can approach, the faster its periapsis speed as gravity accelerates the spacecraft, allowing for more kinetic energy to be gained from a rocket burn. However, if a spacecraft gets too deep into the atmosphere, the energy lost to drag can exceed that gained from the planet's velocity. 
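The exponential growth of liftoff mass with required delta-v noted above can be illustrated with the Tsiolkovsky rocket equation. The sketch below uses an assumed high-performance chemical-engine exhaust velocity and arbitrary delta-v values; these are illustrative figures, not mission data from this article.

```python
# Sketch of why the propellant requirement grows so quickly with delta-v,
# using the Tsiolkovsky rocket equation. The exhaust velocity and delta-v
# values are illustrative assumptions.
import math

def propellant_fraction(delta_v, exhaust_velocity):
    """Fraction of the initial mass that must be propellant for a given delta-v."""
    mass_ratio = math.exp(delta_v / exhaust_velocity)   # m0 / m_final
    return 1.0 - 1.0 / mass_ratio

ve = 4.4  # km/s, roughly a high-performance chemical engine (assumed)
for dv in (3.0, 6.0, 9.0, 12.0):
    print(f"delta-v = {dv:4.1f} km/s -> {propellant_fraction(dv, ve)*100:5.1f}% of liftoff mass is propellant")
```

Doubling the required delta-v far more than doubles the propellant fraction, which is why maneuvers that cost no propellant, such as gravity assists, are so valuable.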
Atmospheric drag is not purely a limitation, however: the atmosphere can be used to accomplish aerobraking. There have also been theoretical proposals to use aerodynamic lift as the spacecraft flies through the atmosphere. This maneuver, called an aerogravity assist, could bend the trajectory through a larger angle than gravity alone, and hence increase the gain in energy. Even in the case of an airless body, there is a limit to how close a spacecraft may approach. The magnitude of the achievable change in velocity depends on the spacecraft's approach velocity and the planet's escape velocity at the point of closest approach (limited by either the surface or the atmosphere). Interplanetary slingshots using the Sun itself are not possible because the Sun is at rest relative to the Solar System as a whole. However, thrusting when near the Sun has the same effect as the powered slingshot described as the Oberth effect. This has the potential to magnify a spacecraft's thrusting power enormously, but is limited by the spacecraft's ability to resist the heat. A rotating black hole might provide additional assistance, if its spin axis is aligned the right way. General relativity predicts that a large spinning mass produces frame-dragging—close to the object, space itself is dragged around in the direction of the spin. Any ordinary rotating object produces this effect. Although attempts to measure frame dragging about the Sun have produced no clear evidence, experiments performed by Gravity Probe B have detected frame-dragging effects caused by Earth. General relativity predicts that a spinning black hole is surrounded by a region of space, called the ergosphere, within which standing still (with respect to the black hole's spin) is impossible, because space itself is dragged at the speed of light in the same direction as the black hole's spin. The Penrose process may offer a way to gain energy from the ergosphere, although it would require the spaceship to dump some "ballast" into the black hole, and the spaceship would have had to expend energy to carry the "ballast" to the black hole. Notable examples of use Luna 3 The gravity assist maneuver was first attempted in 1959 for Luna 3, to photograph the far side of the Moon. The satellite did not gain speed, but its orbit was changed in a way that allowed successful transmission of the photos. Pioneer 10 NASA's Pioneer 10 is a space probe launched in 1972 that completed the first mission to the planet Jupiter. Thereafter, Pioneer 10 became the first of five artificial objects to achieve the escape velocity needed to leave the Solar System. In December 1973, it became the first spacecraft to use the gravitational slingshot effect to reach the escape velocity needed to leave the Solar System. Pioneer 11 Pioneer 11 was launched by NASA in 1973, to study the asteroid belt, the environment around Jupiter and Saturn, the solar wind, and cosmic rays. It was the first probe to encounter Saturn, the second to fly through the asteroid belt, and the second to fly by Jupiter. To get to Saturn, the spacecraft received a gravity assist at Jupiter. Mariner 10 The Mariner 10 probe was the first spacecraft to use the gravitational slingshot effect to reach another planet, passing by Venus on 5 February 1974 on its way to becoming the first spacecraft to explore Mercury. Voyager 1 Voyager 1 was launched by NASA on September 5, 1977. It gained the energy to escape the Sun's gravity by performing slingshot maneuvers around Jupiter and Saturn. 
The spacecraft still communicates with the Deep Space Network to receive routine commands and to transmit data to Earth. Real-time distance and velocity data are provided by NASA and JPL. As of January 12, 2020, it is the most distant human-made object from Earth. Voyager 2 Voyager 2 was launched by NASA on August 20, 1977, to study the outer planets. Its trajectory took longer to reach Jupiter and Saturn than that of its twin spacecraft but enabled further encounters with Uranus and Neptune. Galileo The Galileo spacecraft was launched by NASA in 1989 and on its route to Jupiter received three gravity assists: one from Venus (February 10, 1990) and two from Earth (December 8, 1990 and December 8, 1992). The spacecraft reached Jupiter in December 1995. Gravity assists also allowed Galileo to fly by two asteroids, 243 Ida and 951 Gaspra. Ulysses In 1990, NASA launched the ESA spacecraft Ulysses to study the polar regions of the Sun. All the planets orbit approximately in a plane aligned with the equator of the Sun. Thus, to enter an orbit passing over the poles of the Sun, the spacecraft would have to eliminate the speed it inherited from the Earth's orbit around the Sun and gain the speed needed to orbit the Sun in the pole-to-pole plane. This was achieved by a gravity assist from Jupiter on February 8, 1992. MESSENGER The MESSENGER mission (launched in August 2004) made extensive use of gravity assists to slow its speed before orbiting Mercury. The MESSENGER mission included one flyby of Earth, two flybys of Venus, and three flybys of Mercury before finally arriving at Mercury in March 2011 with a velocity low enough to permit orbit insertion with the available fuel. Although the flybys were primarily orbital maneuvers, each provided an opportunity for significant scientific observations. Cassini The Cassini–Huygens spacecraft was launched from Earth on 15 October 1997, followed by gravity assist flybys of Venus (26 April 1998 and 21 June 1999), Earth (18 August 1999), and Jupiter (30 December 2000). Transit to Saturn took 6.7 years; the spacecraft arrived on 1 July 2004. Its trajectory was called "the Most Complex Gravity-Assist Trajectory Flown to Date" in 2019. After entering orbit around Saturn, the Cassini spacecraft used multiple Titan gravity assists to achieve significant changes in the inclination of its orbit as well, so that instead of staying nearly in the equatorial plane, the spacecraft's flight path was inclined well out of the plane of the rings. A typical Titan encounter changed the spacecraft's velocity by 0.75 km/s, and the spacecraft made 127 Titan encounters. These encounters enabled an orbital tour with a wide range of periapsis and apoapsis distances, various alignments of the orbit with respect to the Sun, and orbital inclinations from 0° to 74°. The multiple flybys of Titan also allowed Cassini to fly by other moons, such as Rhea and Enceladus. Rosetta The Rosetta probe, launched in March 2004, used four gravity assist maneuvers (one just 250 km from the surface of Mars and three from Earth) to accelerate throughout the inner Solar System. That enabled it to fly by the asteroids 21 Lutetia and 2867 Šteins as well as eventually match the velocity of the comet 67P/Churyumov–Gerasimenko at the rendezvous point in August 2014. New Horizons New Horizons was launched by NASA in 2006, and reached Pluto in 2015. In 2007 it performed a gravity assist at Jupiter. Juno The Juno spacecraft was launched on August 5, 2011 (UTC). 
The trajectory used a gravity assist speed boost from Earth, accomplished by an Earth flyby in October 2013, two years after its launch on August 5, 2011. In that way Juno changed its orbit (and speed) toward its final goal, Jupiter, which it reached only five years after launch. Parker Solar Probe The Parker Solar Probe, launched by NASA in 2018, has seven planned Venus gravity assists. Each gravity assist brings the Parker Solar Probe progressively closer to the Sun. As of 2022, the spacecraft had performed five of its seven assists. The Parker Solar Probe's mission will make the closest approach to the Sun by any space mission. The mission's final planned gravity assist maneuver, completed on November 6, 2024, prepared it for three final solar flybys passing within just 3.8 million miles of the surface of the Sun, the first of them on December 24, 2024. Solar Orbiter Solar Orbiter was launched by ESA in 2020. In its initial cruise phase, which lasted until November 2021, Solar Orbiter performed two gravity-assist manoeuvres around Venus and one around Earth to alter the spacecraft's trajectory, guiding it towards the innermost regions of the Solar System. The first close solar pass took place on 26 March 2022 at around a third of Earth's distance from the Sun. BepiColombo BepiColombo is a joint mission of the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA) to the planet Mercury. It was launched on 20 October 2018. It will use the gravity assist technique once with Earth, twice with Venus, and six times with Mercury. It will arrive in 2025. BepiColombo is named after Giuseppe (Bepi) Colombo, a pioneer of this type of maneuver. Lucy Lucy was launched by NASA on 16 October 2021. It gained one gravity assist from Earth on 16 October 2022, and after a flyby of the main-belt asteroid 152830 Dinkinesh it will gain another in 2024. In 2025, it will fly by the inner main-belt asteroid 52246 Donaldjohanson. In 2027, it will arrive at the Trojan cloud (the Greek camp of asteroids that orbits about 60° ahead of Jupiter), where it will fly by four Trojans: 3548 Eurybates (with its satellite), 15094 Polymele, 11351 Leucus, and 21900 Orus. After these flybys, Lucy will return to Earth in 2031 for another gravity assist toward the Trojan cloud (the Trojan camp, which trails about 60° behind Jupiter), where it will visit the binary Trojan 617 Patroclus with its satellite Menoetius in 2033. In fiction In the novel 2001: A Space Odyssey – but not the movie – Discovery performs such a manoeuvre to gain speed as it goes around Jupiter. As Arthur C. Clarke made clear at various times, the location of TMA-2 was switched from near Saturn (in the novel) to near Jupiter (in the movie).
Physical sciences
Orbital mechanics
null
155760
https://en.wikipedia.org/wiki/Hohmann%20transfer%20orbit
Hohmann transfer orbit
In astronautics, the Hohmann transfer orbit is an orbital maneuver used to transfer a spacecraft between two orbits of different altitudes around a central body. For example, a Hohmann transfer could be used to raise a satellite's orbit from low Earth orbit to geostationary orbit. In the idealized case, the initial and target orbits are both circular and coplanar. The maneuver is accomplished by placing the craft into an elliptical transfer orbit that is tangential to both the initial and target orbits. The maneuver uses two impulsive engine burns: the first establishes the transfer orbit, and the second adjusts the orbit to match the target. The Hohmann maneuver often uses the lowest possible amount of impulse (which consumes a proportional amount of delta-v, and hence propellant) to accomplish the transfer, but requires a relatively longer travel time than higher-impulse transfers. In some cases where one orbit is much larger than the other, a bi-elliptic transfer can use even less impulse, at the cost of even greater travel time. The maneuver was named after Walter Hohmann, the German scientist who published a description of it in his 1925 book Die Erreichbarkeit der Himmelskörper (The Attainability of Celestial Bodies). Hohmann was influenced in part by the German science fiction author Kurd Lasswitz and his 1897 book Two Planets. When used for traveling between celestial bodies, a Hohmann transfer orbit requires that the starting and destination points be at particular locations in their orbits relative to each other. Space missions using a Hohmann transfer must wait for this required alignment to occur, which opens a launch window. For a mission between Earth and Mars, for example, these launch windows occur every 26 months. A Hohmann transfer orbit also determines a fixed time required to travel between the starting and destination points; for an Earth-Mars journey this travel time is about 9 months. When a transfer is performed between orbits close to celestial bodies with significant gravitation, much less delta-v is usually required, as the Oberth effect may be employed for the burns. Hohmann transfers are also often used for these situations, but low-energy transfers, which take into account the thrust limitations of real engines and take advantage of the gravity wells of both planets, can be more fuel-efficient. Example The diagram shows a Hohmann transfer orbit to bring a spacecraft from a lower circular orbit into a higher one. It is an elliptic orbit that is tangential both to the lower circular orbit the spacecraft is to leave (cyan, labeled 1 on diagram) and the higher circular orbit that it is to reach (red, labeled 3 on diagram). The transfer orbit (yellow, labeled 2 on diagram) is initiated by firing the spacecraft's engine to add energy and raise the apoapsis. When the spacecraft reaches the apoapsis, a second engine firing adds energy to raise the periapsis, putting the spacecraft in the larger circular orbit. Due to the reversibility of orbits, a similar Hohmann transfer orbit can be used to bring a spacecraft from a higher orbit into a lower one; in this case, the spacecraft's engine is fired in the opposite direction to its current path, slowing the spacecraft and lowering the periapsis of the elliptical transfer orbit to the altitude of the lower target orbit. The engine is then fired again at the lower distance to slow the spacecraft into the lower circular orbit. The Hohmann transfer orbit is based on two instantaneous velocity changes. 
Extra fuel is required to compensate for the fact that the bursts take time; this is mitigated by using high-thrust engines to shorten the duration of the bursts. For transfers in Earth orbit, the two burns are labelled the perigee burn and the apogee burn (or apogee kick); more generally, for bodies that are not the Earth, they are labelled periapsis and apoapsis burns. Alternatively, the second burn to circularize the orbit may be referred to as a circularization burn. Type I and Type II An ideal Hohmann transfer orbit transfers between two circular orbits in the same plane and traverses exactly 180° around the primary. In the real world, the destination orbit may not be circular, and may not be coplanar with the initial orbit. Real world transfer orbits may traverse slightly more, or slightly less, than 180° around the primary. An orbit which traverses less than 180° around the primary is called a "Type I" Hohmann transfer, while an orbit which traverses more than 180° is called a "Type II" Hohmann transfer. Transfer orbits can go more than 360° around the primary. These multiple-revolution transfers are sometimes referred to as Type III and Type IV, where a Type III is a Type I plus 360°, and a Type IV is a Type II plus 360°. Uses A Hohmann transfer orbit can be used to transfer an object's orbit toward another object, as long as they co-orbit a more massive body. In the context of Earth and the Solar System, this includes any object which orbits the Sun. An example of where a Hohmann transfer orbit could be used is to bring an asteroid, orbiting the Sun, into contact with the Earth. Calculation For a small body orbiting another much larger body, such as a satellite orbiting Earth, the total energy of the smaller body is the sum of its kinetic energy and potential energy, and this total energy also equals half the potential at the average distance (the semi-major axis): E = \frac{m v^2}{2} - \frac{\mu m}{r} = \frac{-\mu m}{2a}. Solving this equation for velocity results in the vis-viva equation, v^2 = \mu \left( \frac{2}{r} - \frac{1}{a} \right), where: v is the speed of an orbiting body, \mu = GM is the standard gravitational parameter of the primary body, assuming m is not significantly bigger than M (which makes \mu \approx GM; for Earth, this is μ ≈ 3.986E14 m³ s⁻²), r is the distance of the orbiting body from the primary focus, and a is the semi-major axis of the body's orbit. Therefore, the delta-v (Δv) required for the Hohmann transfer can be computed as follows, under the assumption of instantaneous impulses: \Delta v_1 = \sqrt{\frac{\mu}{r_1}} \left( \sqrt{\frac{2 r_2}{r_1 + r_2}} - 1 \right) to enter the elliptical orbit at r = r_1 from the r_1 circular orbit, where r_2 is the aphelion of the resulting elliptical orbit, and \Delta v_2 = \sqrt{\frac{\mu}{r_2}} \left( 1 - \sqrt{\frac{2 r_1}{r_1 + r_2}} \right) to leave the elliptical orbit at r = r_2 to the r_2 circular orbit, where r_1 and r_2 are respectively the radii of the departure and arrival circular orbits; the smaller (greater) of r_1 and r_2 corresponds to the periapsis distance (apoapsis distance) of the Hohmann elliptical transfer orbit. Typically, \mu is given in units of m³/s², so be sure to use meters, not kilometers, for r_1 and r_2. The total Δv is then \Delta v_\text{total} = \Delta v_1 + \Delta v_2. Whether moving into a higher or lower orbit, by Kepler's third law, the time taken to transfer between the orbits is t_H = \pi \sqrt{\frac{(r_1 + r_2)^3}{8 \mu}} (one half of the orbital period for the whole ellipse), where a_H = (r_1 + r_2)/2 is the length of the semi-major axis of the Hohmann transfer orbit. 
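As a cross-check of these formulas, the short Python sketch below computes both burns and the transfer time; the usage example uses the low-Earth-orbit and geostationary radii that appear in the worked example that follows, and the value of μ for Earth quoted above.

```python
# Direct implementation of the Hohmann transfer formulas given above.
# mu is the standard gravitational parameter of the central body; the orbit
# radii are measured from the body's centre, in metres.
import math

def hohmann(mu, r1, r2):
    """Return (dv1, dv2, transfer_time) for an impulsive Hohmann transfer
    from a circular orbit of radius r1 to one of radius r2."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    t_transfer = math.pi * math.sqrt((r1 + r2) ** 3 / (8 * mu))
    return dv1, dv2, t_transfer

# Example: the LEO-to-geostationary transfer worked through below.
mu_earth = 3.986e14                     # m^3/s^2, from the text
dv1, dv2, t = hohmann(mu_earth, 6678e3, 42164e3)
print(f"dv1 = {dv1:.0f} m/s, dv2 = {dv2:.0f} m/s, total = {dv1 + dv2:.0f} m/s")
print(f"transfer time = {t / 3600:.2f} hours")
```

Running this reproduces the roughly 2.42 km/s and 1.46 km/s burns of the geostationary example below, with a transfer time of a little over five hours.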
In application to traveling from one celestial body to another, it is crucial to start the maneuver at the time when the two bodies are properly aligned. Considering the target's angular velocity of \omega_2 = \sqrt{\mu / r_2^3}, the angular alignment α (in radians) at the time of start between the source object and the target object shall be \alpha = \pi \left( 1 - \frac{1}{2\sqrt{2}} \sqrt{\left( \frac{r_1}{r_2} + 1 \right)^3} \right). Example Consider a geostationary transfer orbit, beginning at r1 = 6,678 km (altitude 300 km) and ending in a geostationary orbit with r2 = 42,164 km (altitude 35,786 km). In the smaller circular orbit the speed is 7.73 km/s; in the larger one, 3.07 km/s. In the elliptical orbit in between the speed varies from 10.15 km/s at the perigee to 1.61 km/s at the apogee. Therefore the Δv for the first burn is 10.15 − 7.73 = 2.42 km/s, for the second burn 3.07 − 1.61 = 1.46 km/s, and for both together 3.88 km/s. This is greater than the Δv required for an escape orbit: 10.93 − 7.73 = 3.20 km/s. Applying a Δv at low Earth orbit (LEO) of only 0.78 km/s more (3.20 − 2.42) would give the rocket the escape velocity, which is less than the Δv of 1.46 km/s required to circularize the geosynchronous orbit. This illustrates the Oberth effect: at large speeds the same Δv provides more specific orbital energy, and the energy increase is maximized if one spends the Δv as quickly as possible, rather than spending some, being decelerated by gravity, and then spending some more to overcome the deceleration (of course, the objective of a Hohmann transfer orbit is different). Worst case, maximum delta-v As the example above demonstrates, the Δv required to perform a Hohmann transfer between two circular orbits is not the greatest when the destination radius is infinite. (Escape speed is √2 times orbital speed, so the Δv required to escape is √2 − 1 (41.4%) of the orbital speed.) The Δv required is greatest (about 53.6% of the smaller orbital speed) when the radius of the larger orbit is 15.5817... times that of the smaller orbit. This number is the positive root of x^3 - 15x^2 - 9x - 1 = 0, which is approximately 15.5817. For higher orbit ratios, the Δv required for the second burn decreases faster than that for the first increases. Application to interplanetary travel When used to move a spacecraft from orbiting one planet to orbiting another, the Oberth effect allows the use of less delta-v than the sum of the delta-v for separate manoeuvres to escape the first planet, followed by a Hohmann transfer to the second planet, followed by insertion into an orbit around the other planet. For example, consider a spacecraft travelling from Earth to Mars. At the beginning of its journey, the spacecraft will already have a certain velocity and kinetic energy associated with its orbit around Earth. During the burn the rocket engine applies its delta-v, but the kinetic energy increases as a square law, until it is sufficient to escape the planet's gravitational potential, and then burns more so as to gain enough energy to get into the Hohmann transfer orbit (around the Sun). Because the rocket engine is able to make use of the initial kinetic energy of the propellant, far less delta-v is required over and above that needed to reach escape velocity, and the optimum situation is when the transfer burn is made at minimum altitude (low periapsis) above the planet. The delta-v needed is only 3.6 km/s, only about 0.4 km/s more than needed to escape Earth, even though this results in the spacecraft going 2.9 km/s faster than the Earth as it heads off for Mars (see table below). At the other end, the spacecraft must decelerate for the gravity of Mars to capture it. This capture burn should optimally be done at low altitude to also make best use of the Oberth effect. 
Therefore, relatively small amounts of thrust at either end of the trip are needed to arrange the transfer compared to the free space situation. However, with any Hohmann transfer, the alignment of the two planets in their orbits is crucial – the destination planet and the spacecraft must arrive at the same point in their respective orbits around the Sun at the same time. This requirement for alignment gives rise to the concept of launch windows. The term lunar transfer orbit (LTO) is used for the Moon. It is possible to apply the formula given above to calculate the Δv in km/s needed to enter a Hohmann transfer orbit to arrive at various destinations from Earth (assuming circular orbits for the planets). In this table, the column labeled "Δv to enter Hohmann orbit from Earth's orbit" gives the change from Earth's velocity to the velocity needed to get on a Hohmann ellipse whose other end will be at the desired distance from the Sun. The column labeled "LEO height" gives the velocity needed (in a non-rotating frame of reference centered on the Earth) when 300 km above the Earth's surface. This is obtained by adding the square of the escape velocity from this height (10.9 km/s) to the square of the value in the previous column and taking the square root. The column "LEO" is simply the previous speed minus the LEO orbital speed of 7.73 km/s. Note that in most cases, Δv from LEO is less than the Δv to enter Hohmann orbit from Earth's orbit. To get to the Sun, it is actually not necessary to use a Δv of 24 km/s. One can use 8.8 km/s to go very far away from the Sun, then use a negligible Δv to bring the angular momentum to zero, and then fall into the Sun. This can be considered a sequence of two Hohmann transfers, one up and one down. Also, the table does not give the values that would apply when using the Moon for a gravity assist. There are also possibilities of using one planet, like Venus, which is the easiest to get to, to assist in getting to other planets or the Sun. Comparison to other transfers Bi-elliptic transfer The bi-elliptic transfer consists of two half-elliptic orbits. From the initial orbit, a first burn expends delta-v to boost the spacecraft into the first transfer orbit with an apoapsis at some point away from the central body. At this point a second burn sends the spacecraft into the second elliptical orbit with periapsis at the radius of the final desired orbit, where a third burn is performed, injecting the spacecraft into the desired orbit. While they require one more engine burn than a Hohmann transfer and generally require a greater travel time, some bi-elliptic transfers require a lower amount of total delta-v than a Hohmann transfer when the ratio of final to initial semi-major axis is 11.94 or greater, depending on the intermediate semi-major axis chosen. The idea of the bi-elliptical transfer trajectory was first published by Ary Sternfeld in 1934. Low-thrust transfer Low-thrust engines can perform an approximation of a Hohmann transfer orbit, by creating a gradual enlargement of the initial circular orbit through carefully timed engine firings. This requires a change in velocity (delta-v) that is greater than that of the two-impulse transfer orbit and takes longer to complete. Engines such as ion thrusters are more difficult to analyze with the delta-v model. These engines offer very low thrust but, at the same time, a much higher delta-v budget, much higher specific impulse, and a lower mass of fuel and engine. 
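The "Δv from LEO" relation described above can be checked numerically with the figures quoted in this article: the speed needed at 300 km altitude is the square root of the sum of the squares of the escape velocity there and the heliocentric injection Δv, and subtracting the LEO circular speed gives the Δv from LEO. The sketch below is a minimal illustration for the Earth-to-Mars case mentioned earlier.

```python
# Sketch of the relation described above: the speed needed at LEO height is
# sqrt(v_escape^2 + v_infinity^2), where v_infinity is the "Δv to enter Hohmann
# orbit from Earth's orbit" and v_escape is the escape speed at 300 km altitude.
# Subtracting the LEO circular speed gives the Δv from LEO.
import math

V_ESCAPE_300KM = 10.93  # km/s (figure quoted in the text)
V_LEO = 7.73            # km/s (figure quoted in the text)

def dv_from_leo(v_infinity):
    v_at_leo_height = math.sqrt(V_ESCAPE_300KM**2 + v_infinity**2)
    return v_at_leo_height - V_LEO

# The Earth-to-Mars transfer discussed above leaves Earth about 2.9 km/s faster
# than Earth's orbital speed:
print(f"Mars: dv from LEO ~ {dv_from_leo(2.9):.1f} km/s")   # about 3.6 km/s, as quoted above
```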
With low-thrust engines such as ion thrusters, a two-burn Hohmann transfer maneuver would be impractical; the maneuver mainly optimizes the use of fuel, but in this situation there is relatively plenty of it. If only low-thrust maneuvers are planned on a mission, then continuously firing a low-thrust but very high-efficiency engine might generate a higher delta-v and at the same time use less propellant than a conventional chemical rocket engine. Going from one circular orbit to another by gradually changing the radius simply requires the same delta-v as the difference between the two speeds. Such a maneuver requires more delta-v than a two-burn Hohmann transfer maneuver, but does so with continuous low thrust rather than the short applications of high thrust. The amount of propellant mass used measures the efficiency of the maneuver plus the hardware employed for it. The total delta-v used measures the efficiency of the maneuver only. For electric propulsion systems, which tend to be low-thrust, the high efficiency of the propulsive system usually compensates for the higher delta-v compared to the more efficient Hohmann maneuver. Transfer orbits using electrical propulsion or low-thrust engines optimize the transfer time to reach the final orbit rather than the delta-v, which is what the Hohmann transfer orbit optimizes. For geostationary orbit, the initial orbit is set to be supersynchronous, and by thrusting continuously in the direction of the velocity at apogee, the transfer orbit transforms to a circular geosynchronous one. This method, however, takes much longer to achieve because of the low thrust. Interplanetary Transport Network In 1997, a set of orbits known as the Interplanetary Transport Network (ITN) was published, providing even lower propulsive delta-v (though much slower and longer) paths between different orbits than Hohmann transfer orbits. The Interplanetary Transport Network is different in nature from Hohmann transfers because Hohmann transfers assume only one large body whereas the Interplanetary Transport Network does not. The Interplanetary Transport Network is able to achieve the use of less propulsive delta-v by employing gravity assists from the planets.
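A minimal numerical comparison of the gradual low-thrust spiral and the two-burn Hohmann transfer, using the same low-Earth and geostationary radii and μ value as the earlier example (a sketch, not mission data):

```python
# Comparison sketch for the claim above: a slow low-thrust spiral between circular
# orbits needs delta-v equal to the difference of the circular speeds, which is
# more than the two-burn Hohmann total.
import math

MU_EARTH = 3.986e14          # m^3/s^2
r1, r2 = 6678e3, 42164e3     # m, LEO and geostationary radii from the earlier example

v1 = math.sqrt(MU_EARTH / r1)
v2 = math.sqrt(MU_EARTH / r2)

spiral_dv = abs(v1 - v2)
hohmann_dv = (v1 * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
              + v2 * (1 - math.sqrt(2 * r1 / (r1 + r2))))

print(f"low-thrust spiral: {spiral_dv/1000:.2f} km/s, Hohmann: {hohmann_dv/1000:.2f} km/s")
```

The gradual spiral needs roughly 0.8 km/s more delta-v than the impulsive transfer, which is the penalty referred to above; the much higher specific impulse of electric engines is what makes it acceptable.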
Physical sciences
Orbital mechanics
Astronomy
155823
https://en.wikipedia.org/wiki/Sievert
Sievert
The sievert (symbol: Sv) is a unit in the International System of Units (SI) intended to represent the stochastic health risk of ionizing radiation, which is defined as the probability of causing radiation-induced cancer and genetic damage. The sievert is important in dosimetry and radiation protection. It is named after Rolf Maximilian Sievert, a Swedish medical physicist renowned for work on radiation dose measurement and research into the biological effects of radiation. The sievert is used for radiation dose quantities such as equivalent dose and effective dose, which represent the risk of external radiation from sources outside the body, and committed dose, which represents the risk of internal irradiation due to inhaled or ingested radioactive substances. According to the International Commission on Radiological Protection (ICRP), one sievert results in a 5.5% probability of eventually developing fatal cancer based on the disputed linear no-threshold model of ionizing radiation exposure. To calculate the value of stochastic health risk in sieverts, the physical quantity absorbed dose is converted into equivalent dose and effective dose by applying factors for radiation type and biological context, published by the ICRP and the International Commission on Radiation Units and Measurements (ICRU). One sievert equals 100 rem, which is an older, CGS radiation unit. Conventionally, deterministic health effects due to acute tissue damage that is certain to happen, produced by high dose rates of radiation, are compared to the physical quantity absorbed dose measured by the unit gray (Gy). Definition CIPM definition of the sievert The SI definition given by the International Committee for Weights and Measures (CIPM) says: "The quantity dose equivalent H is the product of the absorbed dose D of ionizing radiation and the dimensionless factor Q (quality factor) defined as a function of linear energy transfer by the ICRU" H = Q × D The value of Q is not defined further by CIPM, but it requires the use of the relevant ICRU recommendations to provide this value. The CIPM also says that "in order to avoid any risk of confusion between the absorbed dose D and the dose equivalent H, the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed dose D and the name sievert instead of joules per kilogram for the unit of dose equivalent H". In summary: gray: quantity D—absorbed dose 1 Gy = 1 joule/kilogram—a physical quantity. 1 Gy is the deposit of a joule of radiation energy per kilogram of matter or tissue. sievert: quantity H—equivalent dose 1 Sv = 1 joule/kilogram—a biological effect. The sievert represents the equivalent biological effect of the deposit of a joule of radiation energy in a kilogram of human tissue. The ratio to absorbed dose is denoted by Q. ICRP definition of the sievert The ICRP definition of the sievert is: "The sievert is the special name for the SI unit of equivalent dose, effective dose, and operational dose quantities. The unit is joule per kilogram." The sievert is used for a number of dose quantities which are described in this article and are part of the international radiological protection system devised and defined by the ICRP and ICRU. 
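A minimal sketch of the CIPM relation H = Q × D follows; the quality factors used are illustrative placeholders only, since the actual Q values come from ICRU recommendations.

```python
def dose_equivalent_sv(absorbed_dose_gy, quality_factor):
    """CIPM relation H = Q x D: gray and sievert are both J/kg, so the
    conversion is a plain multiplication by the dimensionless factor Q."""
    return quality_factor * absorbed_dose_gy

# Illustrative values only; real Q values come from ICRU recommendations.
print(dose_equivalent_sv(0.001, 1))   # 1 mGy with Q = 1  -> 0.001 Sv
print(dose_equivalent_sv(0.001, 20))  # 1 mGy with Q = 20 -> 0.02 Sv
```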
External dose quantities When the sievert is used to represent the stochastic effects of external ionizing radiation on human tissue, the radiation doses received are measured in practice by radiometric instruments and dosimeters and are called operational quantities. To relate these actual received doses to likely health effects, protection quantities have been developed to predict the likely health effects using the results of large epidemiological studies. Consequently, this has required the creation of a number of different dose quantities within a coherent system developed by the ICRU working with the ICRP. The external dose quantities and their relationships are shown in the accompanying diagram. The ICRU is primarily responsible for the operational dose quantities, based upon the application of ionising radiation metrology, and the ICRP is primarily responsible for the protection quantities, based upon modelling of dose uptake and biological sensitivity of the human body. Naming conventions The ICRU/ICRP dose quantities have specific purposes and meanings, but some use common words in a different order. There can be confusion between, for instance, equivalent dose and dose equivalent. Although the CIPM definition states that the linear energy transfer function (Q) of the ICRU is used in calculating the biological effect, the ICRP in 1990 developed the "protection" dose quantities effective and equivalent dose which are calculated from more complex computational models and are distinguished by not having the phrase dose equivalent in their name. Only the operational dose quantities which still use Q for calculation retain the phrase dose equivalent. However, there are joint ICRU/ICRP proposals to simplify this system by changes to the operational dose definitions to harmonise with those of protection quantities. These were outlined at the 3rd International Symposium on Radiological Protection in October 2015, and if implemented would make the naming of operational quantities more logical by introducing "dose to lens of eye" and "dose to local skin" as equivalent doses. In the USA there are differently named dose quantities which are not part of the ICRP nomenclature. Physical quantities These are directly measurable physical quantities in which no allowance has been made for biological effects. Radiation fluence is the number of radiation particles impinging per unit area per unit time, kerma is the ionising effect on air of gamma rays and X-rays and is used for instrument calibration, and absorbed dose is the amount of radiation energy deposited per unit mass in the matter or tissue under consideration. Operational quantities Operational quantities are measured in practice, and are the means of directly measuring dose uptake due to exposure, or predicting dose uptake in a measured environment. In this way they are used for practical dose control, by providing an estimate or upper limit for the value of the protection quantities related to an exposure. They are also used in practical regulations and guidance. The calibration of individual and area dosimeters in photon fields is performed by measuring the collision "air kerma free in air" under conditions of secondary electron equilibrium. Then the appropriate operational quantity is derived applying a conversion coefficient that relates the air kerma to the appropriate operational quantity. The conversion coefficients for photon radiation are published by the ICRU. 
Simple (non-anthropomorphic) "phantoms" are used to relate operational quantities to measured free-air irradiation. The ICRU sphere phantom is based on the definition of an ICRU 4-element tissue-equivalent material which does not really exist and cannot be fabricated. The ICRU sphere is a theoretical 30 cm diameter "tissue equivalent" sphere consisting of a material with a density of 1 g·cm−3 and a mass composition of 76.2% oxygen, 11.1% carbon, 10.1% hydrogen and 2.6% nitrogen. This material is specified to most closely approximate human tissue in its absorption properties. According to the ICRP, the ICRU "sphere phantom" in most cases adequately approximates the human body as regards the scattering and attenuation of penetrating radiation fields under consideration. Thus radiation of a particular energy fluence will have roughly the same energy deposition within the sphere as it would in the equivalent mass of human tissue. To allow for back-scattering and absorption of the human body, the "slab phantom" is used to represent the human torso for practical calibration of whole body dosimeters; it is sized to approximate the depth of the torso. The joint ICRU/ICRP proposals outlined at the 3rd International Symposium on Radiological Protection in October 2015 to change the definition of operational quantities would not change the present use of calibration phantoms or reference radiation fields. Protection quantities Protection quantities are calculated models, and are used as "limiting quantities" to specify exposure limits to ensure, in the words of ICRP, "that the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided". These quantities cannot be measured in practice but their values are derived using models of external dose to internal organs of the human body, using anthropomorphic phantoms. These are 3D computational models of the body which take into account a number of complex effects such as body self-shielding and internal scattering of radiation. The calculation starts with organ absorbed dose, and then applies radiation and tissue weighting factors. As protection quantities cannot practically be measured, operational quantities must be used to relate them to practical radiation instrument and dosimeter responses. Instrument and dosimetry response This is an actual reading obtained from an instrument such as an ambient dose gamma monitor, or a personal dosimeter. Such instruments are calibrated using radiation metrology techniques which will trace them to a national radiation standard, and thereby relate them to an operational quantity. The readings of instruments and dosimeters are used to prevent the uptake of excessive dose and to provide records of dose uptake to satisfy radiation safety legislation, such as, in the UK, the Ionising Radiations Regulations 1999. Calculating protection dose quantities The sievert is used in external radiation protection for equivalent dose (the external-source, whole-body exposure effects, in a uniform field), and effective dose (which depends on the body parts irradiated). These dose quantities are weighted averages of absorbed dose designed to be representative of the stochastic health effects of radiation, and use of the sievert implies that appropriate weighting factors have been applied to the absorbed dose measurement or calculation (expressed in grays). The ICRP calculation provides two weighting factors to enable the calculation of protection quantities. 1.
The radiation weighting factor WR, which is specific for radiation type R – this is used in calculating the equivalent dose HT, which can be for the whole body or for individual organs. 2. The tissue weighting factor WT, which is specific for tissue type T being irradiated. This is used with WR to calculate the contributory organ doses to arrive at an effective dose E for non-uniform irradiation. When a whole body is irradiated uniformly only the radiation weighting factor WR is used, and the effective dose equals the whole body equivalent dose. But if the irradiation of a body is partial or non-uniform the tissue factor WT is used to calculate the dose to each organ or tissue. These are then summed to obtain the effective dose. In the case of uniform irradiation of the human body, the tissue weighting factors sum to 1, but in the case of partial or non-uniform irradiation, they will sum to a lower value depending on the organs concerned, reflecting the lower overall health effect. The calculation process is shown on the accompanying diagram. This approach calculates the biological risk contribution to the whole body, taking into account complete or partial irradiation, and the radiation type or types. The values of these weighting factors are conservatively chosen to be greater than the bulk of experimental values observed for the most sensitive cell types, based on averages of those obtained for the human population. Radiation type weighting factor WR Since different radiation types have different biological effects for the same deposited energy, a corrective radiation weighting factor WR, which is dependent on the radiation type and on the target tissue, is applied to convert the absorbed dose measured in the unit gray to determine the equivalent dose. The result is given the unit sievert. The equivalent dose is calculated by multiplying the absorbed energy, averaged by mass over an organ or tissue of interest, by a radiation weighting factor appropriate to the type and energy of radiation. To obtain the equivalent dose for a mix of radiation types and energies, a sum is taken over all types of radiation energy dose: HT = ΣR WR × DT,R, where HT is the equivalent dose absorbed by tissue T, DT,R is the absorbed dose in tissue T by radiation type R, and WR is the radiation weighting factor defined by regulation. Thus for example, an absorbed dose of 1 Gy by alpha particles will lead to an equivalent dose of 20 Sv. This may seem to be a paradox. It implies that the energy of the incident radiation field in joules has increased by a factor of 20, thereby violating the laws of conservation of energy. However, this is not the case. The sievert is used only to convey the fact that a gray of absorbed alpha particles would cause twenty times the biological effect of a gray of absorbed x-rays. It is this biological component that is being expressed when using sieverts rather than the actual energy delivered by the incident absorbed radiation. Tissue type weighting factor WT The second weighting factor is the tissue factor WT, but it is used only if there has been non-uniform irradiation of a body. If the body has been subject to uniform irradiation, the effective dose equals the whole body equivalent dose, and only the radiation weighting factor WR is used. But if there is partial or non-uniform body irradiation the calculation must take account of the individual organ doses received, because the sensitivity of each organ to irradiation depends on its tissue type.
This summed dose from only those organs concerned gives the effective dose for the whole body. The tissue weighting factor is used to calculate those individual organ dose contributions. The ICRP values for WT are given in the table shown here. The article on effective dose gives the method of calculation. The absorbed dose is first corrected for the radiation type to give the equivalent dose, and then corrected for the tissue receiving the radiation. Some tissues like bone marrow are particularly sensitive to radiation, so they are given a weighting factor that is disproportionately large relative to the fraction of body mass they represent. Other tissues like the hard bone surface are particularly insensitive to radiation and are assigned a disproportionately low weighting factor. In summary, the sum of tissue-weighted doses to each irradiated organ or tissue of the body adds up to the effective dose for the body. The use of effective dose enables comparisons of overall dose received regardless of the extent of body irradiation. Operational quantities The operational quantities are used in practical applications for monitoring and investigating external exposure situations. They are defined for practical operational measurements and assessment of doses in the body. Three external operational dose quantities were devised to relate operational dosimeter and instrument measurements to the calculated protection quantities. Also devised were two phantoms, the ICRU "slab" and "sphere" phantoms, which relate these quantities to incident radiation quantities using the Q(L) calculation. Ambient dose equivalent This is used for area monitoring of penetrating radiation and is usually expressed as the quantity H*(10). This means the radiation is equivalent to that found 10 mm within the ICRU sphere phantom in the direction of origin of the field. An example of penetrating radiation is gamma rays. Directional dose equivalent This is used for monitoring of low-penetrating radiation and is usually expressed as the quantity H′(0.07). This means the radiation is equivalent to that found at a depth of 0.07 mm in the ICRU sphere phantom. Examples of low-penetrating radiation are alpha particles, beta particles and low-energy photons. This dose quantity is used for the determination of equivalent dose to tissues such as the skin or the lens of the eye. In radiological protection practice, the value of the direction Ω is usually not specified, as the dose is usually at a maximum at the point of interest. Personal dose equivalent This is used for individual dose monitoring, such as with a personal dosimeter worn on the body. The recommended depth for assessment is 10 mm, which gives the quantity Hp(10). Proposals for changing the definition of protection dose quantities In order to simplify the means of calculating operational quantities and assist in the comprehension of radiation dose protection quantities, ICRP Committee 2 and ICRU Report Committee 26 started an examination in 2010 of different means of achieving this through dose coefficients related to effective dose or absorbed dose. Specifically: 1. For area monitoring of effective dose of the whole body it would be: H = Φ × conversion coefficient. The driver for this is that H*(10) is not a reasonable estimate of effective dose due to high-energy photons, as a result of the extension of particle types and energy ranges to be considered in ICRP report 116. This change would remove the need for the ICRU sphere and introduce a new quantity called Emax. 2.
For individual monitoring, to measure deterministic effects on the eye lens and skin, it would be: D = Φ × conversion coefficient for absorbed dose. The driver for this is the need to measure the deterministic effect, which, it is suggested, is more appropriate than the stochastic effect. This would calculate equivalent dose quantities Hlens and Hskin. This would remove the need for the ICRU sphere and the Q(L) function. Any changes would replace ICRU report 51 and part of report 57. A final draft report was issued in July 2017 by ICRU/ICRP for consultation. Internal dose quantities The sievert is used for human internal dose quantities in calculating committed dose. This is dose from radionuclides which have been ingested or inhaled into the human body, and thereby "committed" to irradiate the body for a period of time. The concepts of calculating protection quantities as described for external radiation apply, but as the source of radiation is within the tissue of the body, the calculation of absorbed organ dose uses different coefficients and irradiation mechanisms. The ICRP defines the committed effective dose, E(t), as the sum of the products of the committed organ or tissue equivalent doses and the appropriate tissue weighting factors WT, where t is the integration time in years following the intake. The commitment period is taken to be 50 years for adults, and to age 70 years for children. The ICRP further states "For internal exposure, committed effective doses are generally determined from an assessment of the intakes of radionuclides from bioassay measurements or other quantities (e.g., activity retained in the body or in daily excreta). The radiation dose is determined from the intake using recommended dose coefficients". A committed dose from an internal source is intended to carry the same effective risk as the same amount of equivalent dose applied uniformly to the whole body from an external source, or the same amount of effective dose applied to part of the body. Health effects Ionizing radiation has deterministic and stochastic effects on human health. Deterministic (acute tissue effect) events happen with certainty, with the resulting health conditions occurring in every individual who received the same high dose. Stochastic (cancer induction and genetic) events are inherently random, with most individuals in a group failing to ever exhibit any causal negative health effects after exposure, while a random minority do, often with the resulting subtle negative health effects being observable only after large, detailed epidemiological studies. The use of the sievert implies that only stochastic effects are being considered, and to avoid confusion deterministic effects are conventionally compared to values of absorbed dose expressed by the SI unit gray (Gy). Stochastic effects Stochastic effects are those that occur randomly, such as radiation-induced cancer. The consensus of nuclear regulators, governments and the UNSCEAR is that the incidence of cancers due to ionizing radiation can be modeled as increasing linearly with effective dose at a rate of 5.5% per sievert. This is known as the linear no-threshold model (LNT model). Some argue that this LNT model is now outdated and should be replaced with a threshold below which the body's natural cell processes repair damage and/or replace damaged cells.
There is general agreement that the risk is much higher for infants and fetuses than for adults, higher for the middle-aged than for seniors, and higher for women than for men, though there is no quantitative consensus about this. Deterministic effects The deterministic (acute tissue damage) effects that can lead to acute radiation syndrome only occur in the case of acute high doses (≳ 0.1 Gy) and high dose rates (≳ 0.1 Gy/h) and are conventionally not measured using the unit sievert, but use the unit gray (Gy). A model of deterministic risk would require different weighting factors (not yet established) than are used in the calculation of equivalent and effective dose. ICRP dose limits The ICRP recommends a number of limits for dose uptake in table 8 of report 103. These limits are "situational", for planned, emergency and existing situations. Within these situations, limits are given for the following groups: Planned exposure – limits given for occupational, medical and public Emergency exposure – limits given for occupational and public exposure Existing exposure – all persons exposed For occupational exposure, the limit is 50 mSv in a single year with a maximum of 100 mSv in a consecutive five-year period; for the public, the limit is an average of 1 mSv (0.001 Sv) of effective dose per year, not including medical and occupational exposures. For comparison, natural radiation levels inside the United States Capitol are such that a human body would receive an additional dose rate of 0.85 mSv/a, close to the regulatory limit, because of the uranium content of the granite structure. According to the conservative ICRP model, someone who spent 20 years inside the Capitol building would have an extra one in a thousand chance of getting cancer, over and above any other existing risk (calculated as: 20 a · 0.85 mSv/a · 0.001 Sv/mSv · 5.5%/Sv ≈ 0.1%). However, that "existing risk" is much higher; an average American would have a 10% chance of getting cancer during this same 20-year period, even without any exposure to artificial radiation (see Epidemiology of cancer and natural cancer rates). Dose examples Significant radiation doses are not frequently encountered in everyday life. The following examples can help illustrate relative magnitudes; these are meant to be examples only, not a comprehensive list of possible radiation doses. An "acute dose" is one that occurs over a short and finite period of time, while a "chronic dose" is a dose that continues for an extended period of time so that it is better described by a dose rate. Dose examples Dose rate examples All conversions between hours and years have assumed continuous presence in a steady field, disregarding known fluctuations, intermittent exposure and radioactive decay. Converted values are shown in parentheses. "/a" is "per annum", which means per year. "/h" means "per hour".
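The weighted-sum calculations and the 5.5%-per-sievert risk coefficient described above can be sketched in a few lines of Python; the weighting factors passed in here are placeholders, since the real values are tabulated by the ICRP, and this is an illustration rather than a dosimetry tool.

```python
def equivalent_dose_sv(absorbed_doses_gy, radiation_weights):
    """H_T = sum over radiation types R of (w_R x D_T,R) for one tissue."""
    return sum(w * d for w, d in zip(radiation_weights, absorbed_doses_gy))

def effective_dose_sv(tissue_doses_sv, tissue_weights):
    """E = sum over tissues T of (w_T x H_T); the ICRP w_T values sum to 1."""
    return sum(w * h for w, h in zip(tissue_weights, tissue_doses_sv))

LNT_RISK_PER_SV = 0.055  # ICRP linear no-threshold coefficient (~5.5 % per Sv)

# Example from the text: 1 Gy of alpha particles (w_R = 20) -> 20 Sv equivalent dose.
print(equivalent_dose_sv([1.0], [20]))

# Capitol example: 20 years at 0.85 mSv/a of effective dose.
capitol_dose_sv = 20 * 0.85e-3
print(f"extra lifetime risk: {capitol_dose_sv * LNT_RISK_PER_SV:.4f}")  # ~0.0009, i.e. ~0.1 %
```

The final figure reproduces the roughly 0.1% extra risk quoted for the Capitol example under the same LNT assumption.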
Physical sciences
Radioactivity
null
155829
https://en.wikipedia.org/wiki/Curie%20%28unit%29
Curie (unit)
The curie (symbol Ci) is a non-SI unit of radioactivity originally defined in 1910. According to a notice in Nature at the time, it was to be named in honour of Pierre Curie, but was considered at least by some to be in honour of Marie Curie as well, and is in later literature considered to be named for both. It was originally defined as "the quantity or mass of radium emanation in equilibrium with one gram of radium (element)", but is currently defined as 1 Ci = 3.7 × 10¹⁰ decays per second after more accurate measurements of the activity of ²²⁶Ra (which has a specific activity of about 3.66 × 10¹⁰ Bq/g). In 1975 the General Conference on Weights and Measures gave the becquerel (Bq), defined as one nuclear decay per second, official status as the SI unit of activity. Therefore: 1 Ci = 3.7 × 10¹⁰ Bq = 37 GBq and 1 Bq ≅ 2.703 × 10⁻¹¹ Ci ≅ 27 pCi. While its continued use is discouraged by the National Institute of Standards and Technology (NIST) and other bodies, the curie is still widely used throughout government, industry and medicine in the United States and in other countries. At the 1910 meeting, which originally defined the curie, it was proposed to make it equivalent to 10 nanograms of radium (a practical amount). But Marie Curie, after initially accepting this, changed her mind and insisted on one gram of radium. According to Bertram Boltwood, Marie Curie thought that "the use of the name 'curie' for so infinitesimally small [a] quantity of anything was altogether inappropriate". The power emitted in radioactive decay corresponding to one curie can be calculated by multiplying the decay energy by approximately 5.93 mW / MeV. A radiotherapy machine may have roughly 1000 Ci of a radioisotope such as caesium-137 or cobalt-60. This quantity of radioactivity can produce serious health effects with only a few minutes of close-range, unshielded exposure. Radioactive decay can lead to the emission of particulate radiation or electromagnetic radiation. Ingesting even small quantities of some particulate-emitting radionuclides may be fatal. For example, the median lethal dose (LD-50) for ingested polonium-210 is 240 μCi, or about 53.5 nanograms. The typical human body contains roughly 0.1 μCi (14 mg) of naturally occurring potassium-40. A human body containing about 16 kg of carbon (see Composition of the human body) would also have about 24 nanograms or 0.1 μCi of carbon-14. Together, these would result in a total of approximately 0.2 μCi or 7400 decays per second inside the person's body (mostly from beta decay but some from gamma decay). As a measure of quantity Units of activity (the curie and the becquerel) also refer to a quantity of radioactive atoms. Because the probability of decay is a fixed physical quantity, for a known number of atoms of a particular radionuclide, a predictable number will decay in a given time. The number of decays that will occur in one second in one gram of atoms of a particular radionuclide is known as the specific activity of that radionuclide. The activity of a sample decreases with time because of decay. The rules of radioactive decay may be used to convert activity to an actual number of atoms. They state that 1 Ci of radioactive atoms would follow the expression N (atoms) × λ (s⁻¹) = 1 Ci = 3.7 × 10¹⁰ Bq, and so N = 3.7 × 10¹⁰ Bq / λ, where λ is the decay constant in s⁻¹. Here are some examples, ordered by half-life: Radiation related quantities The following table shows radiation quantities in SI and non-SI units:
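A short sketch of the activity-to-atoms relation N = A / λ and the power rule quoted above; the polonium-210 half-life used here (about 138.4 days) is an assumed external value, not stated in this article.

```python
from math import log

CI_IN_BQ = 3.7e10     # decays per second in one curie
MEV_TO_J = 1.602e-13  # joules per MeV
AVOGADRO = 6.022e23

def decay_constant(half_life_s):
    """lambda = ln(2) / half-life, in s^-1."""
    return log(2) / half_life_s

def mass_for_activity_g(activity_ci, half_life_s, molar_mass_g):
    """N = A / lambda, then convert the number of atoms to grams."""
    atoms = activity_ci * CI_IN_BQ / decay_constant(half_life_s)
    return atoms * molar_mass_g / AVOGADRO

def power_of_one_curie_w(decay_energy_mev):
    """3.7e10 decays/s times the energy per decay (~5.93 mW per MeV)."""
    return CI_IN_BQ * decay_energy_mev * MEV_TO_J

# Polonium-210: the 240 uCi LD-50 quoted above corresponds to roughly 53 ng.
print(mass_for_activity_g(240e-6, 138.4 * 86400, 210) * 1e9, "ng")
print(power_of_one_curie_w(1.0) * 1e3, "mW per MeV of decay energy")  # ~5.93
```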
Physical sciences
Radioactivity
Basics and measurement
155835
https://en.wikipedia.org/wiki/Becquerel
Becquerel
The becquerel (symbol: Bq) is the unit of radioactivity in the International System of Units (SI). One becquerel is defined as an activity of one decay per second, on average, for aperiodic activity events referred to a radionuclide. For applications relating to human health this is a small quantity, and SI multiples of the unit are commonly used. The becquerel is named after Henri Becquerel, who shared a Nobel Prize in Physics with Pierre and Marie Curie in 1903 for their work in discovering radioactivity. Definition 1 Bq = 1 s−1 A special name was introduced for the reciprocal second (s⁻¹) to represent radioactivity to avoid potentially dangerous mistakes with prefixes. For example, 1 μs⁻¹ would mean 10⁶ disintegrations per second (1 μs⁻¹ = 10⁶ s⁻¹), whereas 1 μBq would mean 1 disintegration per 1 million seconds. Other names considered were hertz (Hz), a special name already in use for the reciprocal second (for periodic events of any kind), and fourier (Fr; after Joseph Fourier). The hertz is now only used for periodic phenomena. While 1 Hz replaces the deprecated term cycle per second, 1 Bq refers to one event per second on average for aperiodic radioactive decays. The gray (Gy) and the becquerel (Bq) were introduced in 1975. Between 1953 and 1975, absorbed dose was often measured with the rad. Decay activity was given with the curie before 1946 and often with the rutherford between 1946 and 1975. Unit capitalization and prefixes As with every International System of Units (SI) unit named after a person, the first letter of its symbol is uppercase (Bq). However, when an SI unit is spelled out in English, it should always begin with a lowercase letter (becquerel)—except in a situation where any word in that position would be capitalized, such as at the beginning of a sentence or in material using title case. Like any SI unit, Bq can be prefixed; commonly used multiples are kBq (kilobecquerel, 10³ Bq), MBq (megabecquerel, 10⁶ Bq, equivalent to 1 rutherford), GBq (gigabecquerel, 10⁹ Bq), TBq (terabecquerel, 10¹² Bq), and PBq (petabecquerel, 10¹⁵ Bq). Large prefixes are common for practical uses of the unit. Examples For practical applications, 1 Bq is a small unit. For example, there is roughly 0.017 g of potassium-40 in a typical human body, producing about 4,400 decays per second (Bq). The activity of radioactive americium in a home smoke detector is about 37 kBq (1 μCi). The global inventory of carbon-14 is estimated to be 8.5 × 10¹⁸ Bq (8.5 EBq, 8.5 exabecquerels). These examples are useful for comparing the amount of activity of these radioactive materials, but should not be confused with the amount of exposure to ionizing radiation that these materials represent. The level of exposure and thus the absorbed dose received are what should be considered when assessing the effects of ionizing radiation on humans. Relation to the curie The becquerel succeeded the curie (Ci), an older, non-SI unit of radioactivity based on the activity of 1 gram of radium-226. The curie is defined as 3.7 × 10¹⁰ Bq, or 37 GBq. Conversion factors: 1 Ci = 3.7 × 10¹⁰ Bq = 37 GBq; 1 μCi = 37,000 Bq = 37 kBq; 1 Bq ≅ 2.7 × 10⁻¹¹ Ci; 1 MBq = 0.027 mCi. Relation to other radiation-related quantities The following table shows radiation quantities in SI and non-SI units. W (formerly 'Q' factor) is a factor that scales the biological effect for different types of radiation, relative to x-rays (e.g. 1 for beta radiation, 20 for alpha radiation, and a complicated function of energy for neutrons).
In general, conversion between rates of emission, the density of radiation, the fraction absorbed, and the biological effects, requires knowledge of the geometry between source and target, the energy and the type of the radiation emitted, among other factors.
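The potassium-40 figure quoted in the examples above can be checked with a few lines of Python; the half-life of about 1.25 × 10⁹ years is an assumed external value, not given in this article.

```python
from math import log

AVOGADRO = 6.022e23
YEAR_S = 3.156e7  # seconds in a year

def activity_bq(mass_g, molar_mass_g, half_life_s):
    """A = lambda x N: decays per second from a given mass of a pure radionuclide."""
    atoms = mass_g / molar_mass_g * AVOGADRO
    return log(2) / half_life_s * atoms

# Potassium-40 in a typical body (0.017 g, from the example above); the
# half-life of ~1.25e9 years is an assumed external value.
a = activity_bq(0.017, 40.0, 1.25e9 * YEAR_S)
print(f"{a:.0f} Bq = {a / 3.7e10 * 1e6:.2f} uCi")  # roughly 4,500 Bq, ~0.12 uCi
```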
Physical sciences
Radioactivity
null
155869
https://en.wikipedia.org/wiki/Lux
Lux
The lux (symbol: lx) is the unit of illuminance, or luminous flux per unit area, in the International System of Units (SI). It is equal to one lumen per square metre. In photometry, this is used as a measure of the irradiance, as perceived by the spectrally unequally responding human eye, of light that hits or passes through a surface. It is analogous to the radiometric unit watt per square metre, but with the power at each wavelength weighted according to the luminosity function, a model of human visual brightness perception, standardized by the CIE and ISO. In English, "lux" is used as both the singular and plural form. The word is derived from the Latin word for "light", lux. Explanation Illuminance Illuminance is a measure of how much luminous flux is spread over a given area. One can think of luminous flux (with the unit lumen) as a measure of the total "amount" of visible light present, and the illuminance as a measure of the intensity of illumination on a surface. A given amount of light will illuminate a surface more dimly if it is spread over a larger area, so illuminance is inversely proportional to area when the luminous flux is held constant. One lux is equal to one lumen per square metre: 1 lx = 1 lm/m2 = 1 cd·sr/m2. A flux of 1000 lumens, spread uniformly over an area of 1 square metre, lights up that square metre with an illuminance of 1000 lux. However, the same 1000 lumens spread out over 10 square metres produces a dimmer illuminance of only 100 lux. Achieving an illuminance of 500 lx might be possible in a home kitchen with a single fluorescent light fixture with an output of . To light a factory floor with dozens of times the area of the kitchen would require dozens of such fixtures. Thus, lighting a larger area to the same illuminance (lux) requires a greater luminous flux (lumen). As with other named SI units, SI prefixes can be used. For example, 1 kilolux (klx) is 1000 lx. Here are some examples of the illuminance provided under various conditions: The illuminance provided by a light source on a surface perpendicular to the direction to the source is a measure of the strength of that source as perceived from that location. For instance, a star of apparent magnitude 0 provides 2.08 microlux (μlx) at the Earth's surface. A barely perceptible magnitude 6 star provides 8 nanolux (nlx). The unobscured Sun provides an illumination of up to 100 kilolux (klx) on the Earth's surface, the exact value depending on time of year and atmospheric conditions. This direct normal illuminance is related to the solar illuminance constant Esc, equal to (see Sunlight and Solar constant). The illuminance on a surface depends on how the surface is tilted with respect to the source. For example, a pocket flashlight aimed at a wall will produce a given level of illumination if aimed perpendicular to the wall, but if the flashlight is aimed at increasing angles to the perpendicular (maintaining the same distance), the illuminated spot becomes larger and so is less highly illuminated. When a surface is tilted at an angle to a source, the illumination provided on the surface is reduced because the tilted surface subtends a smaller solid angle from the source, and therefore it receives less light. For a point source, the illumination on the tilted surface is reduced by a factor equal to the cosine of the angle between a ray coming from the source and the normal to the surface. 
In practical lighting problems, given information on the way light is emitted from each source and the distance and geometry of the lighted area, a numerical calculation can be made of the illumination on a surface by adding the contributions of every point on every light source. Relationship between illuminance and irradiance Like all photometric units, the lux has a corresponding "radiometric" unit. The difference between any photometric unit and its corresponding radiometric unit is that radiometric units are based on physical power, with all wavelengths being weighted equally, while photometric units take into account the fact that the human eye's image-forming visual system is more sensitive to some wavelengths than others, and accordingly every wavelength is given a different weight. The weighting factor is known as the luminosity function. The lux is one lumen per square metre (lm/m2), and the corresponding radiometric unit, which measures irradiance, is the watt per square metre (W/m2). There is no single conversion factor between lux and W/m2; there is a different conversion factor for every wavelength, and it is not possible to make a conversion unless one knows the spectral composition of the light. The peak of the luminosity function is at 555 nm (green); the eye's image-forming visual system is more sensitive to light of this wavelength than any other. For monochromatic light of this wavelength, the amount of illuminance for a given amount of irradiance is maximum: 683.002 lx per 1 W/m2; the irradiance needed to make 1 lx at this wavelength is about 1.464 mW/m2. Other wavelengths of visible light produce fewer lux per watt-per-meter-squared. The luminosity function falls to zero for wavelengths outside the visible spectrum. For a light source with mixed wavelengths, the number of lumens per watt can be calculated by means of the luminosity function. In order to appear reasonably "white", a light source cannot consist solely of the green light to which the eye's image-forming visual photoreceptors are most sensitive, but must include a generous mixture of red and blue wavelengths, to which they are much less sensitive. This means that white (or whitish) light sources produce far fewer lumens per watt than the theoretical maximum of 683.002 lm/W. The ratio between the actual number of lumens per watt and the theoretical maximum is expressed as a percentage known as the luminous efficiency. For example, a typical incandescent light bulb has a luminous efficiency of only about 2%. In reality, individual eyes vary slightly in their luminosity functions. However, photometric units are precisely defined and precisely measurable. They are based on an agreed-upon standard luminosity function based on measurements of the spectral characteristics of image-forming visual photoreception in many individual human eyes. Use in video-camera specifications Specifications for video cameras such as camcorders and surveillance cameras often include a minimal illuminance level in lux at which the camera will record a satisfactory image. A camera with good low-light capability will have a lower lux rating. Still cameras do not use such a specification, since longer exposure times can generally be used to make pictures at very low illuminance levels, as opposed to the case in video cameras, where a maximal exposure time is generally set by the frame rate. Non-SI units of illuminance The corresponding unit in English and American traditional units is the foot-candle. 
One foot-candle is about 10.764 lx. Since one foot-candle is the illuminance cast on a surface by a one-candela source one foot away, a lux could be thought of as a "metre-candle", although this term is discouraged because it does not conform to SI standards for unit names. One phot (ph) equals 10 kilolux (10 klx). One nox (nx) equals 1 millilux (1 mlx) at light color 2042 K or 2046 K (formerly 2360 K). In astronomy, apparent magnitude is a measure of the illuminance that a star produces at the Earth's atmosphere. A star with apparent magnitude 0 is 2.54 microlux outside the Earth's atmosphere, and 82% of that (2.08 microlux) under clear skies. A magnitude 6 star (just barely visible under good conditions) would be 8.3 nanolux. A standard candle (one candela) a kilometre away would provide an illuminance of 1 microlux—about the same as a magnitude 1 star. Legacy Unicode symbol Unicode includes a legacy symbol for "lx" to accommodate old code pages in some Asian languages; use of this code is not recommended in new documents.
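As a rough numerical aside, the star-illuminance figures and the foot-candle conversion above can be reproduced using the standard factor of 10^0.4 per magnitude step, which is assumed here rather than stated explicitly in the text:

```python
def star_illuminance_lx(apparent_magnitude, m0_lux=2.08e-6):
    """Scale from the magnitude-0 value quoted above (2.08 ulx under clear skies);
    each magnitude step is a factor of 10**0.4 (about 2.512)."""
    return m0_lux * 10 ** (-0.4 * apparent_magnitude)

def lux_to_foot_candles(lx):
    """1 foot-candle is about 10.764 lx."""
    return lx / 10.764

print(star_illuminance_lx(0))   # 2.08e-06 lx
print(star_illuminance_lx(6))   # ~8.3e-09 lx, the barely visible star
print(star_illuminance_lx(1))   # ~8.3e-07 lx, close to a candela seen from 1 km
print(lux_to_foot_candles(500)) # ~46 foot-candles
```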
Physical sciences
Light
null
155904
https://en.wikipedia.org/wiki/Supercar
Supercar
A supercar, also known as an exotic car, is a type of automobile generally described at its most basic as a street-legal sports car with race track-like power, speed, and handling, plus a certain subjective cachet linked to pedigree, exclusivity, or both. The term 'supercar' is frequently used for the extreme fringe of powerful, low-bodied mid-engine luxury sportscars. A low car has both a low, handling-favorable center of gravity, and less frontal area than a front-engined car, reducing its aerodynamic drag and enabling a higher top speed. Since the 2000s, the term hypercar has come into use for the highest performance supercars. Supercars commonly serve as the flagship model within a vehicle manufacturer's sports car range, and typically feature various performance-related technology derived from motorsports. Some examples include the Ferrari 458 Italia, Lamborghini Aventador, and McLaren 720S. By contrast, automotive journalism typically reserves the predicate 'hypercar' for (very) limited, (two- to low 4-figure) production-number cars, built over and above the marque's typical product line-up and carrying 21st century sales prices often exceeding a million euros, dollars or pounds: examples would include Porsche's 1,270-unit Carrera GT, the Ford GTs, and the Ferrari F40/F50/Enzo lineage. Very few car makers, such as Bugatti and Koenigsegg, make only hypercars. In the United States, the term "supercars" was already in use during the 1960s for the highest-performance muscle cars. As of 2024, "supercars" is still used in Australia to refer to Australian muscle cars. History Europe The Lamborghini Miura, introduced in 1966 by the Italian manufacturer, is often said to be the first supercar. By the 1970s and 1980s the term was in regular use for such a car, if not precisely defined. One interpretation up until the 1990s was to use it for mid-engine two-seat cars with at least eight cylinders (but typically a V12 engine), a power output of at least and a top speed of at least . Other interpretations state that "it must be very fast, with sporting handling to match", "it should be sleek and eye-catching" and its price should be "one in a rarefied atmosphere of its own"; exclusivity – in terms of limited production volumes, such as those of the most elite models made by Ferrari or Lamborghini – is also an important characteristic for some using the term. Some European manufacturers, such as McLaren, Pagani, and Koenigsegg, specialize in only producing supercars. United States During the 1960s the highest-performance American muscle cars were referred to by some as supercars, sometimes spelled with a capital S. Its use reflected the intense competition for primacy in that market segment between U.S. manufacturers, retroactively characterized as the "horsepower wars". Already by 1965 the May issue of the American magazine Car Life included multiple references to supercars and "the supercar club", and a 1968 issue of Car & Driver magazine describes a "Supercar street racer gang" market segment. The "S/C" in the model name of the AMC S/C Rambler produced in 1969 as a street-legal racer is an abbreviation for "SuperCar". Since the decline of the muscle car in the 1970s, the word supercar has been more broadly internationalized, coming to mean an "exotic" car that has high performance; interpretations of the term span from limited-production models produced by small manufacturers for performance enthusiasts to (less frequently) standard production cars modified for exceptional performance.
The 1990s and 2000s saw a rise in American supercars with similar characteristics to their European counterparts. Some American "Big Three" (i.e. General Motors, Chrysler, and Ford, the historic giants of America's Detroit-based auto-industry) sports cars which have been referred to as supercars include contemporary Chevrolet Corvettes, the Dodge Viper, and the Ford GT. Supercars made by smaller American manufacturers include the Saleen S7, SSC Ultimate Aero, SSC Tuatara, Hennessey Venom GT, and Hennessey Venom F5. Japan During the early 1990s, Japan began to gain global recognition for making high-performance sports cars; the automotive media seized on the lightweight, mid-engined, rear-wheel-drive, V6 Honda NSX produced from 1990 to 2005 as Japan's "first". While matching contemporary European supercars in performance and features, the NSX was praised for being more reliable and user-friendly. In the 21st century, other Japanese makers produced their own supercars. From 2010 to 2012, Lexus offered the Lexus LFA, a two-seat front-engine coupe powered by a V10 engine producing . The 2009–present Nissan GT-R has also been praised as a modern supercar that also delivers every day practicality. It features a twin-turbo V6 producing between , and has been lauded for its acceleration and handling through its all-wheel-drive drivetrain and dual-clutch transmission. The second generation Honda NSX supercar made from 2016 to 2022 upped the ante for Honda by using all-wheel drive, a hybrid powertrain (producing up to ), turbocharging, and a dual-clutch transmission. Hypercar A more recent term for high-performance sportscars is "hypercar", which is sometimes used to describe the highest performing supercars. An extension of "supercar", it too lacks a set definition. One offered by automotive magazine The Drive is "a limited-production, top-of-the-line supercar"; prices can reach or exceed US$1 million, and already had by 2017. Some observers consider the tubular framed, first-ever production fuel-injection, world's fastest street-legal, 1954 Mercedes-Benz 300 SL "Gullwing" as the first hypercar; others the revolutionary, first-ever mid-engined 1967 Lamborghini Miura; others yet the 1993 McLaren F1 or 2005 Bugatti Veyron. With a recent shift towards electrification, many recent hypercars use a hybrid drivetrain, a trend started in 2013 by the McLaren P1, Porsche 918 Spyder, and LaFerrari, then continued in 2016 with the Koenigsegg Regera, in 2017 with the Mercedes-AMG One, and the McLaren Speedtail. Modern hypercars such as Pininfarina Battista, NIO EP9, Rimac Nevera, and Lotus Evija have also gone full-electric. Hypercars have also been used as a base for the Le Mans Hypercar class after rule changes come into effect from 2021.
Technology
Motorized road transport
null
156310
https://en.wikipedia.org/wiki/Radicle
Radicle
In botany, the radicle is the first part of a seedling (a growing plant embryo) to emerge from the seed during the process of germination. The radicle is the embryonic root of the plant, and grows downward in the soil (the shoot emerges from the plumule). Above the radicle is the embryonic stem or hypocotyl, supporting the cotyledon(s). As the embryonic root inside the seed, the radicle is the first structure to emerge and grow down into the soil, allowing the seedling to take up water before it sends out leaves and begins photosynthesis. The radicle emerges from a seed through the micropyle. Radicles in seedlings are classified into two main types. Those pointing away from the seed coat scar or hilum are classified as antitropous, and those pointing towards the hilum are syntropous. If the radicle begins to decay, the seedling undergoes pre-emergence damping off. This disease appears on the radicle as darkened spots. Eventually, it causes death of the seedling. The plumule is the embryonic shoot; it emerges after the radicle. In 1880, Charles Darwin published a book about plants he had studied, The Power of Movement in Plants, in which he mentions the radicle.
Biology and health sciences
Plant anatomy and morphology: General
Biology
156339
https://en.wikipedia.org/wiki/Corporate%20farming
Corporate farming
Corporate farming is the practice of large-scale agriculture on farms owned or greatly influenced by large companies. This includes corporate ownership of farms and the sale of agricultural products, as well as the roles of these companies in influencing agricultural education, research, and public policy through funding initiatives and lobbying efforts. The definition and effects of corporate farming on agriculture are widely debated, though sources that describe large businesses in agriculture as "corporate farms" may portray them negatively. Definitions and usage The varied and fluid meanings of "corporate farming" have resulted in conflicting definitions of the term, with implications in particular for legal definitions. Legal definitions Most legal definitions of corporate farming in the United States pertain to tax laws, anti-corporate farming laws, and census data collection. These definitions mostly reference farm income, indicating farms over a certain threshold as corporate farms, as well as ownership of the farm, specifically targeting farms that do not pass ownership through family lines. Common definitions In public discourse, the term "corporate farming" lacks a firmly established definition and is variously applied. However, several features of the term's usage frequently arise: It is largely used as a pejorative with strong negative connotations. It most commonly refers to corporations that are large-scale farms, market agricultural technologies (in particular pesticides, fertilizers, and GMO's), have significant economic and political influence, or some combination of the three. It is usually used in opposition to family farms and new agricultural movements, such as sustainable agriculture and the local food movement. Family farms "Family farm" and "corporate farm" are often defined as mutually exclusive terms, with the two having different interests. This mostly stems from the widespread assumption that family farms are small farms while corporate farms are large-scale operations. While it is true that the majority of small farms are family owned, many large farms are also family businesses, including some of the largest farms in the US. According to the Food and Agricultural Organization of the United Nations (FAO), a family farm "is a means of organizing agricultural, forestry, fisheries, pastoral and aquaculture production which is managed and operated by a family and predominantly reliant on family labour, both women's and men's. The family and the farm are linked, coevolve and combine economic, environmental, reproductive, social and cultural functions." Additionally, there are large economic and legal incentives for family farmers to incorporate their businesses. Contract farming Farming contracts are agreements between a farmer and a buyer that stipulates what the farmer will grow and how much they will grow usually in return for guaranteed purchase of the product or financial support in purchase of inputs (e.g. feed for livestock growers). In most instances of contract farming, the farm is family owned while the buyer is a larger corporation. This makes it difficult to distinguish the contract farmers from "corporate farms," because they are family farms but with significant corporate influence. This subtle distinction left a loop-hole in many state laws that prohibited corporate farming, effectively allowing corporations to farm in these states as long as they contracted with local farm owners. 
Non-farm entities Many people also choose to include non-farming entities in their definitions of corporate farming. Beyond the farm contractors mentioned above, companies commonly considered part of the term include Cargill, Monsanto, and DuPont Pioneer, among others. These corporations do not have production farms, meaning they do not produce a significant amount of farm products. However, their role in producing and selling agricultural supplies and their purchase and processing of farm products often leads to them being grouped with corporate farms. While this is technically incorrect, it is widely considered substantively accurate because including these companies in the term "corporate farming" is necessary to describe their real influence over agriculture. Arguments against corporate farming Family farms maintain traditions including environmental stewardship and taking longer views than companies seeking profits. Family farmers may have greater knowledge about soil and crop types, terrains, weather and other features specific to particular local areas of land; this knowledge can be passed from parent to child over generations and would be harder for corporate managers to grasp. North America In Canada, 17.4 percent of farms are owned by family corporations and 2.4 percent by non-family corporations. In Canada (as in some other jurisdictions) conversion of a sole proprietorship family farm to a family corporation can have tax planning benefits, and in some cases, the difference in combined provincial and federal taxation rates is substantial. Also, for farm families with significant off-farm income, incorporating the farm can provide some shelter from high personal income tax rates. Another important consideration can be some protection of the corporate shareholders from liability. Incorporating a family farm can also be useful as a succession tool, among other reasons because it can maintain a family farm as a viable operation where subdivision of the farm into smaller operations among heirs might result in farm sizes too small to be viable. The 2012 US Census of Agriculture indicates that 5.06 percent of US farms are corporate farms. These include family corporations (4.51 percent) and non-family corporations (0.55 percent). Of the family farm corporations, 98 percent are small corporations, with 10 or fewer stockholders. Of the non-family farm corporations, 90 percent are small corporations, with 10 or fewer stockholders. Non-family corporate farms account for 1.36 percent of US farmland area. Family farms (including family corporate farms) account for 96.7 percent of US farms and 89 percent of US farmland area; a USDA study estimated that family farms accounted for 85 percent of US gross farm income in 2011. Other farmland in the US is accounted for by several other categories, including single proprietorships where the owner is not the farm operator, non-family partnerships, estates, trusts, cooperatives, collectives, institutional, research, experimental and American Indian Reservation farms. In the US, the average size of a non-family corporate farm is 1078 acres, i.e. smaller than the average family corporate farm (1249 acres) and smaller than the average partnership farm (1131 acres). US farm laws To date, nine US states have enacted laws that restrict or prohibit corporate farming. The first of these laws were enacted in the 1930s by Kansas and North Dakota. In the 1970s, similar laws were passed in Iowa, Minnesota, Missouri, South Dakota and Wisconsin.
In 1982, after failure to pass an anti–corporate farming law, the citizens of Nebraska enacted by initiative a similar amendment into their state constitution. The citizens of South Dakota similarly amended their state constitution in 1998. All nine laws have similar content. They all restrict corporate ability to own and operate on farmland. They all outline exceptions for specific types of corporations. Generally, family farm corporations are exempted, although certain conditions may have to be fulfilled for such exemption (e.g. one or more of: shareholders within a specified degree of kinship owning a majority of voting stock, no shareholders other than natural persons, a limited number of shareholders, at least one family member residing on the farm). However, the laws vary significantly in how they define a corporate farm, and in the specific restrictions. Definitions of a farm can include any and all farm operations, or be dependent on the source of income, as in Iowa, where 60 percent of income must come from farm products. Additionally, these laws can target a corporation's use of the land, meaning that companies can own but not farm the land, or they may outright prohibit corporations from buying and owning farmland. The precise wording of these laws has a significant impact on how corporations can participate in agriculture in these states, with the ultimate goal of protecting and empowering the family farm. Europe Family farms across Europe are heavily protected by EU regulations, which have been driven in particular by French farmers and the French custom of splitting land inheritance between children, which produces many very small family farms. In regions such as East Anglia, UK, some agribusiness is practiced through company ownership, but most large UK land estates are still owned by wealthy families such as traditional aristocrats, as encouraged by favourable inheritance tax rules. Most farming in the Soviet Union and its Eastern Bloc satellite states was collectivized. After the dissolution of those states via the revolutions of 1989 and the dissolution of the Soviet Union, decades of decollectivization and land reform have occurred, with the details varying substantially by country. Asia Pakistan As Pakistan's population surged, it gradually turned from a net food exporter to a net food importer, straining Pakistan's economy and food security. In response, the Pakistani military has led an initiative to set up corporate farming, a project called the Green Pakistan Initiative, with the aim of drastically increasing essential food supplies for both sustenance and export. Africa Corporate farming has begun to take hold in some African countries, where listed companies such as Zambia's Zambeef are operated by MBAs as large businesses. In some cases, this has caused debates about land ownership where shares have been bought by international investors, especially from China. Middle East Some oil-rich Middle East countries operate corporate farming, including large-scale irrigation of desert lands for cropping, sometimes through partially or fully state-owned companies, especially with regard to water resource management.
Technology
Agriculture, labor and economy
null
156455
https://en.wikipedia.org/wiki/Araucaria%20araucana
Araucaria araucana
Araucaria araucana, commonly called the monkey puzzle tree, monkey tail tree, piñonero, pewen or pehuen pine, is an evergreen tree growing to a trunk diameter of and a height of . It is native to central and southern Chile and western Argentina. It is the hardiest species in the conifer genus Araucaria. Because of the prevalence of similar species in ancient prehistory, it is sometimes called an animate fossil. It is also the official tree of Chile and of the neighboring Argentine province of Neuquén. The IUCN changed its conservation status to Endangered in 2013 as logging, forest fires, and grazing caused its population to dwindle. Description The leaves are thick, tough, and scale-like, triangular, long, broad at the base, and with sharp edges and tips. According to the scientist Christopher Lusk, the leaves have an average lifespan of 24 years and so cover most of the tree except for the older branches. It is usually dioecious, with the male and female cones on separate trees, though occasional individuals bear cones of both sexes. The male (pollen) cones are oblong and cucumber-shaped, long at first, expanding to long by broad at pollen release. It is wind pollinated. The female (seed) cones, which mature in autumn about 18 months after pollination, are globose, large, in diameter, and hold about 200 seeds. The cones disintegrate at maturity to release the long nut-like seeds. The thick bark of Araucaria araucana may be an adaptation to wildfire. Habitat The tree's native habitat is the lower slopes of the Chilean and Argentine south-central Andes, approximately between and 1,700 m (5,600 ft). In the Chilean Coast Range A. araucana can be found as far south as Villa Las Araucarias (latitude 38°30' S) at an altitude of 640 m asl. Juvenile trees exhibit a broadly pyramidal or conical habit which naturally develops into the distinctive umbrella form of mature specimens as the tree ages. It prefers well-drained, slightly acidic, volcanic soil, but will tolerate almost any soil type provided it drains well. Seedlings are often not competitive enough to survive unless grown in a canopy gap or exposed isolated area. It is almost never found together with Chusquea culeou, Nothofagus dombeyi, and Nothofagus pumilio, because they typically outcompete A. araucana. Seed dispersal Araucaria araucana is a masting species, and rodents are important consumers and dispersers of its seeds. The long-haired grass mouse, Abrothrix longipilis, is the most important animal responsible for dispersing the seeds of A.araucana. This rodent buries seeds whole in locations favorable for seed germination, unlike other animals. Another important seed dispersal agent is the parakeet species Enicognathus ferrugineus. Adult trees are highly resistant to large ecological disturbances caused by volcanic activity, after events like these the parakeets play their role by dispersing the seeds far from affected territory. Threats Logging, long a major threat, was finally banned in 1990. Large fires burned thousands of acres of Araucaria forest in 2001–2002, and areas of national parks have also burned, destroying trees over 1300 years old. Overgrazing and invasive trees are also threats. Extensive human harvesting of piñones (Araucaria seeds) can prevent new trees from growing. A Global Trees Campaign project that planted 2000 trees found a 90percent 10-year survival rate. Another major threat to the survival of A. 
araucana, is the presence of non-native seed eating species, in particular mammals, which have been shown to severely restrict the reproduction of the tree in comparison to native seed eaters. However it is still unclear as to how large a role these invasive species play in threatening this species of tree. One study in particular found that native species played a larger role in preventing reproduction through seed destruction. However this may be due to the relatively recent introduction of the selected species, causing their population to be smaller than other invasive species. A study conducted found that cattle ranching by small landowners and larger timber companies within the range of A. araucana severely affects regeneration of seedlings. Cultivation and uses Araucaria araucana is a popular garden tree, planted for the unusual effect of its thick, "reptilian" branches with very symmetrical appearance. It prefers temperate climates with abundant rainfall, tolerating temperatures down to about . It is far and away the hardiest member of its genus, and can grow well in western and central Europe (north to the Faroe Islands and Smøla in western Norway), the west coast of North America (north to Baranof Island in Alaska), and locally on the east coast, as far north as Long Island, and in New Zealand, southeastern Australia and south east Ireland. It is tolerant of coastal salt spray, but does not tolerate exposure to pollution. Its seeds (, ) are edible, similar to large pine nuts, and are harvested by indigenous peoples in Argentina and Chile. The tree has some potential to be a food crop in other areas in the future, thriving in climates with cool oceanic summers, e.g., western Scotland, where other nut crops do not grow well. A group of six female trees with one male for pollination could yield several thousand seeds per year. Since the cones drop, harvesting is easy. The tree, however, does not yield seeds until it is around 30 to 40 years old, which discourages investment in planting orchards (although yields at maturity can be immense); once established, individuals can achieve ages beyond 1,000 years. Pest losses to rodents and feral Sus scrofa limits the yields for human consumption and forage fattening of livestock by A. araucana mast. A. araucana has a high degree of inter-year variability in mast volume, and this variation is synchronous within a given area. This evolved to take advantage of predator satiety. Once valued because of its long, straight trunk, its current rarity and vulnerable status mean its wood is now rarely used; it is also sacred to some indigenous Mapuche. Timber from these trees, was used for railway sleepers in order to access many industrial areas around the port of Chile. Before the tree became protected by law in 1971, lumber mills in Araucanía Region specialized in Chilean pine. The species is protected under Appendix I of the Convention on International Trade in Endangered Species (CITES) meaning international trade (including in parts and derivatives) is regulated by the CITES permitting system and commercial trade in wild sourced specimens is prohibited. Many young specimens and seeds were brought or sent back to the UK by Cornish miners in the nineteenth century, during the Cornish diaspora, and as a result Cornwall is reckoned to have a high genetic diversity of the species. 
Christopher Nigel Page, a botanist working at Camborne School of Mines, University of Exeter planted specimens in disused china clay pits in the St Austell area as part of his research into regreening former extractive minerals sites, which he presented in 2017 in the UK Parliament, with Professor Hylke Glass, also of CSM, as co-author. Naming First identified by Europeans in Chile in the 1780s, it was named Pinus araucana by Molina in 1782. In 1789, de Jussieu erected a new genus called Araucaria based on the species, and in 1797, Pavón published a new description of the species which he called Araucaria imbricata (an illegitimate name, as it did not use Molina's older species epithet). Finally, in 1873, after several further redescriptions, Koch published the combination Araucaria araucana, validating Molina's species name. The name araucana is derived from the native Araucanians who used the nuts (seeds) of the tree in Chile – a group of Araucanians living in the Andes, the Pehuenches, owe their name to their diet based on the harvesting of the A. araucaria seeds; hence from pewen or its Hispanicized spelling pehuen which means Araucaria and che means people in Mapudungun. They believe the pewen was given by a deity or gwenachen to nourish their offspring; many pewen gathering festivals (ngillatun) are celebrated in both Chile and Argentina in gratitude to the tree's sustenance. The origin of the popular English language name "monkey puzzle" lies in its early cultivation in Britain in about 1850, when the species was still very rare in gardens and not widely known. Sir William Molesworth, the owner of a young specimen at Pencarrow garden near Bodmin in Cornwall, was showing it to a group of friends, when one of them – the noted barrister and Benthamist Charles Austin – remarked, "It would puzzle a monkey to climb that". As the species had no existing popular name, first "monkey puzzler", then "monkey puzzle" stuck. Pencarrow in the current century has an avenue of mature Monkey Puzzles. Relatives The nearest extant relative is Araucaria angustifolia, a South American Araucaria from Brazil which differs in the width of the leaves. Members of other sections of the genus Araucaria occur in Pacific Islands and in Australia, and include Araucaria cunninghamii, hoop pine, Araucaria heterophylla, the Norfolk Island pine and Araucaria bidwillii, bunya pine. The recently found 'Wollemi pine', Wollemia, discovered in southeast Australia, is classed in the plant family Araucariaceae. Their common ancestry dates to a time when Australia, Antarctica, and South America were linked by land – all three continents were once part of the supercontinent known as Gondwana. Gallery
Biology and health sciences
Pinophyta (Conifers)
Plants
6944639
https://en.wikipedia.org/wiki/Brittle%E2%80%93ductile%20transition%20zone
Brittle–ductile transition zone
The brittle-ductile transition zone (hereafter the "transition zone") is the zone of the Earth's crust that marks the transition from the upper, more brittle crust to the lower, more ductile crust. For quartz- and feldspar-rich rocks in continental crust, the transition zone occurs at an approximate depth of 20 km, at temperatures of 250–400 °C. At this depth, rock becomes less likely to fracture and more likely to deform ductilely by creep, because the brittle strength of a material increases with confining pressure while its ductile strength decreases with increasing temperature. Depth of the Transition Zone The transition zone occurs at the depth in the Earth's lithosphere where the downward-increasing brittle strength equals the upward-increasing ductile strength, giving a characteristic "saw-tooth" crustal strength profile. The transition zone is, therefore, the strongest part of the crust and the depth at which most shallow earthquakes occur. Its depth depends on both strain rate and temperature gradient; it is shallower for slow deformation and/or high heat flow and deeper for fast deformation and/or low heat flow. Crustal composition and age also affect the depth: it is shallower (~10–20 km) in warm, young crust and deeper (~20–30 km) in cool, old crust. Changes in Physical Properties The transition zone also marks a shift in the electrical conductivity of the crust. The upper region of the Earth's crust, which is about 10–15 km thick, is highly conductive due to electronic-conducting structures that are commonly distributed throughout this region. In contrast, the lower region of the crust is highly resistive, and its electrical conductivity is determined by physical factors such as depth and temperature. Although the transition zone generally marks a shift from brittle rock to ductile rock, exceptions exist under certain conditions. If stress is applied rapidly, rock below the transition zone may fracture, and above the transition zone rock may deform ductilely if pore fluids are present and stress is applied gradually. Examples exposed on land Sections of fault zones once active in the transition zone, and now exposed at the surface, typically show a complex overprinting of brittle and ductile rock types. Cataclasites or pseudotachylite breccias with mylonite clasts are common, as are ductilely deformed cataclasites and pseudotachylites. These sections become exposed in geologically active regions where the transition zone coincides with the seismogenic zone in which most shallow earthquakes occur. A major example of this phenomenon is the Salzach-Ennstal-Mariazell-Puchberg (SEMP) fault system in the Austrian Alps. Along this fault line, researchers have directly observed changes in structure and strength profiles within the transition zone.
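The crossover depth described above can be illustrated with a simple strength-profile calculation. The Python sketch below pairs a Byerlee-type frictional (brittle) strength that grows linearly with depth against a power-law creep (ductile) strength for an assumed quartz-rich rheology, and reports where the two curves meet for different strain rates and geothermal gradients. All constants (friction coefficient, creep parameters, geotherm, strain rates) are illustrative assumptions rather than values taken from this article, so the printed depths only indicate the trend that the transition deepens with faster deformation and shallows with higher heat flow.

```python
import numpy as np

# Illustrative crustal strength profile: brittle (frictional) strength rises
# with depth, ductile (creep) strength falls with temperature. The transition
# zone sits where the two curves cross. All constants below are assumed,
# order-of-magnitude values for a quartz-rich crust, not values from the article.

RHO = 2700.0   # crustal density, kg/m^3 (assumed)
G = 9.81       # gravity, m/s^2
R = 8.314      # gas constant, J/(mol*K)

def brittle_strength(z_m, mu=0.6):
    """Frictional (Byerlee-type) differential stress in Pa, growing with depth."""
    return mu * RHO * G * z_m

def ductile_strength(z_m, strain_rate, surface_T=275.0, gradient=25e-3,
                     A=1e-28, n=3.0, Q=1.9e5):
    """Power-law creep stress in Pa for an assumed quartz rheology.
    strain_rate in 1/s, geothermal gradient in K/m."""
    T = surface_T + gradient * z_m          # temperature at depth z (K)
    return (strain_rate / A) ** (1.0 / n) * np.exp(Q / (n * R * T))

def transition_depth(strain_rate, gradient):
    """Depth (km) where brittle and ductile strengths are closest to equal."""
    z = np.linspace(1e3, 60e3, 6000)        # 1-60 km, in metres
    diff = brittle_strength(z) - ductile_strength(z, strain_rate, gradient=gradient)
    return z[np.argmin(np.abs(diff))] / 1e3

for rate in (1e-15, 1e-13):                 # slow vs fast deformation
    for grad in (20e-3, 30e-3):             # cool vs warm geotherm (K/m)
        print(f"strain rate {rate:.0e} 1/s, gradient {grad*1e3:.0f} K/km "
              f"-> transition near {transition_depth(rate, grad):.0f} km")
```

With these assumed parameters the crossover lands near 20 km for slow deformation on a moderate geotherm, consistent with the depth quoted above, and shifts in the directions the article describes when strain rate or heat flow is changed.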
Physical sciences
Structural geology
Earth science
2964983
https://en.wikipedia.org/wiki/Induction%20furnace
Induction furnace
An induction furnace is an electrical furnace in which the heat is applied by induction heating of metal. Induction furnace capacities range from less than one kilogram to one hundred tons, and are used to melt iron and steel, copper, aluminum, and precious metals. The advantage of the induction furnace is a clean, energy-efficient and well-controlled melting process, compared to most other means of metal melting. Most modern foundries use this type of furnace, and many iron foundries are replacing cupola furnaces with induction furnaces to melt cast iron, as the former emit much dust and other pollutants. Induction furnaces do not require an arc, as in an electric arc furnace, or combustion, as in a blast furnace. As a result, the temperature of the charge (the material entered into the furnace for heating, not to be confused with electric charge) is no higher than required to melt it; this can prevent the loss of valuable alloying elements. The one major drawback to induction furnace usage in a foundry is the lack of refining capacity: charge materials must be free of oxides and be of a known composition, and some alloying elements may be lost due to oxidation, so they must be re-added to the melt. Types In the coreless type, metal is placed in a crucible surrounded by a water-cooled alternating current solenoid coil. A channel-type induction furnace has a loop of molten metal, which forms a single-turn secondary winding through an iron core. Operation An induction furnace consists of a nonconductive crucible holding the charge of metal to be melted, surrounded by a coil of copper wire. A powerful alternating current flows through the wire. The coil creates a rapidly reversing magnetic field that penetrates the metal. The magnetic field induces eddy currents, circular electric currents, inside the metal, by electromagnetic induction. The eddy currents, flowing through the electrical resistance of the bulk metal, heat it by Joule heating. In ferromagnetic materials like iron, the material may also be heated by magnetic hysteresis, the reversal of the molecular magnetic dipoles in the metal. Once melted, the eddy currents cause vigorous stirring of the melt, assuring good mixing. An advantage of induction heating is that the heat is generated within the furnace's charge itself rather than applied by a burning fuel or other external heat source, which can be important in applications where contamination is an issue. Operating frequencies range from utility frequency (50 or 60 Hz) to 400 kHz or higher, usually depending on the material being melted, the capacity (volume) of the furnace and the melting speed required. Generally, the smaller the volume of the melts, the higher the frequency of the furnace used; this is due to the skin depth which is a measure of the distance an alternating current can penetrate beneath the surface of a conductor. For the same conductivity, the higher frequencies have a shallow skin depth—that is less penetration into the melt. Lower frequencies can generate stirring or turbulence in the metal. A preheated, one-ton furnace melting iron can melt cold charge to tapping readiness within an hour. Power supplies range from 10 kW to 42 MW, with melt sizes of 20 kg to 65 tons of metal respectively. An operating induction furnace usually emits a hum or whine (due to fluctuating magnetic forces and magnetostriction), the pitch of which can be used by operators to identify whether the furnace is operating correctly or at what power level. 
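The frequency dependence mentioned above can be sketched numerically. The short Python example below evaluates the standard skin-depth formula, delta = sqrt(rho / (pi * f * mu)), for a few operating frequencies; the resistivity figures are rough assumed values for molten iron and molten copper (both taken as non-magnetic above their Curie points), used only to show how higher frequencies concentrate heating nearer the surface of the charge.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m

def skin_depth(resistivity, frequency, mu_r=1.0):
    """Skin depth in metres: delta = sqrt(rho / (pi * f * mu0 * mu_r))."""
    return math.sqrt(resistivity / (math.pi * frequency * MU0 * mu_r))

# Assumed, approximate resistivities (ohm*m); molten metals are above their
# Curie points, so relative permeability is taken as 1.
charges = {
    "molten iron (~1.4e-6 ohm*m)": 1.4e-6,
    "molten copper (~2.1e-7 ohm*m)": 2.1e-7,
}

for name, rho in charges.items():
    for f in (50, 1_000, 10_000, 400_000):   # utility frequency up to 400 kHz
        d_mm = skin_depth(rho, f) * 1e3
        print(f"{name}: {f:>7} Hz -> skin depth ~ {d_mm:6.1f} mm")
```

For the assumed molten-iron resistivity the skin depth falls from several centimetres at 50 Hz to roughly a millimetre at 400 kHz, which is why small melts favour higher frequencies while lower frequencies penetrate deeply enough to stir a large bath.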
Refractory lining A disposable refractory lining is used during casting.
Technology
Metallurgy
null
2968115
https://en.wikipedia.org/wiki/Cup%20%28unit%29
Cup (unit)
The cup is a cooking measure of volume, commonly associated with cooking and serving sizes. In the US, it is traditionally equal to half a US liquid pint (about 237 ml). Because actual drinking cups may differ greatly from the size of this unit, standard measuring cups may be used, commonly rounded to 240 millilitres (the legal cup), although 250 ml (the metric cup) is also used depending on the measuring scale. United States Customary cup In the United States, the customary cup is half of a US liquid pint. Legal cup The cup currently used in the United States for nutrition labelling is defined in United States law as 240 ml. Conversion to the US legal cup One US legal cup (240 ml) is equivalent to 16 international metric tablespoons of 15 ml, about 8.12 US customary fluid ounces, or about 8.45 imperial fluid ounces. Coffee cup A "cup" of coffee in the US is usually 4 fluid ounces (118 ml), brewed using 5 fluid ounces (148 ml) of water. Coffee carafes used with drip coffee makers, e.g. Black and Decker models, have markings for both water and brewed coffee, as the carafe is also used for measuring water prior to brewing. A 12-cup carafe, for example, has markings for 4, 6, 8, 10, and 12 cups of water or coffee, which correspond to 20, 30, 40, 50, and 60 US fluid ounces (0.59, 0.89, 1.18, 1.48, and 1.77 litres) of water or 16, 24, 32, 40, and 48 US fluid ounces (0.47, 0.71, 0.95, 1.18, and 1.42 litres) of brewed coffee respectively, the difference being the volume absorbed by the coffee grounds and lost to evaporation during brewing. Commonwealth of Nations Metric cup Australia, Canada, New Zealand, and some other members of the Commonwealth of Nations, being former British colonies that have since metricated, employ a "metric cup" of 250 millilitres. Although derived from the metric system, it is not an SI unit. A "coffee cup" is 1.5 dL (i.e. 150 millilitres or 5.07 US customary fluid ounces), and is occasionally used in recipes; in older recipes, cup may mean "coffee cup". It is also used in the US to specify coffeemaker sizes (what can be referred to as a Tasse à café). A "12-cup" US coffeemaker makes 57.6 US customary fluid ounces of coffee, which is equal to 6.8 metric cups of coffee. Canadian cup Canada now usually employs the metric cup of 250 ml, but its conventional cup was somewhat smaller than both American and imperial units. 1 Canadian cup = 8 imperial fluid ounces = 1/20 imperial gallon ≈ 227.3 ml = ⅘ UK tumbler = 1 UK breakfast cup = 1⅓ UK cups = 1⅗ UK teacups = 3 UK coffee cups = 4 UK wine glasses ≈ 0·96 US customary cup ≈ 0·91 metric cup. 1 Canadian tablespoon = 1 UK tablespoon ≈ 0·96 US customary tablespoon ≈ 0·95 international metric tablespoon ≈ 0·71 Australian metric tablespoon. 1 Canadian teaspoon = 1 UK teaspoons ≈ 0·96 US customary teaspoon ≈ 0·95 metric teaspoon. British cup In the United Kingdom, 1 cup is traditionally 6 imperial fluid ounces. The unit is named after a typical drinking cup. There are three related British culinary measurement units of volume whose names include the word "cup": the breakfast cup (8 imperial fluid ounces), the teacup (5 imperial fluid ounces), and the coffee cup (2 imperial fluid ounces). Further, there are two related British culinary measurement units of volume without the word "cup" in their names: the tumbler (10 imperial fluid ounces) and the wine glass (2 imperial fluid ounces). 
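The cup standards defined so far are close enough in size to be confused but different enough to matter when scaling a recipe. The short Python sketch below simply restates the definitions given in this section in millilitres, computing the fluid-ounce based ones from the exact metric values of the US and imperial fluid ounce; it is an illustration of the stated definitions, not an authoritative conversion table.

```python
US_FL_OZ_ML = 29.5735295625    # one US customary fluid ounce, exact, in ml
IMP_FL_OZ_ML = 28.4130625      # one imperial fluid ounce, exact, in ml

# Cup and cup-like units as defined in this section, in millilitres.
CUPS_ML = {
    "US customary cup (1/2 US pint, 8 US fl oz)": 8 * US_FL_OZ_ML,
    "US legal cup":                               240.0,
    "metric cup":                                 250.0,
    "Canadian cup (8 imp fl oz)":                 8 * IMP_FL_OZ_ML,
    "UK tumbler (10 imp fl oz)":                  10 * IMP_FL_OZ_ML,
    "UK breakfast cup (8 imp fl oz)":             8 * IMP_FL_OZ_ML,
    "UK cup (6 imp fl oz)":                       6 * IMP_FL_OZ_ML,
    "UK teacup (5 imp fl oz)":                    5 * IMP_FL_OZ_ML,
    "UK coffee cup / wine glass (2 imp fl oz)":   2 * IMP_FL_OZ_ML,
}

for name, ml in CUPS_ML.items():
    print(f"{name:<44} {ml:6.1f} ml")
```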
All six units are the traditional British equivalents of the US customary cup and the metric cup, used in situations where a US cook would use the US customary cup and a cook using metric units the metric cup. The breakfast cup is the most similar in size to the US customary cup and the metric cup. Which of these six units is used depends on the quantity or volume of the ingredient: there is division of labour between these six units, like the tablespoon and the teaspoon. British cookery books and recipes, especially those from the days before the UK's partial metrication, commonly use two or more of the aforesaid units simultaneously: for example, the same recipe may call for a ‘tumblerful’ of one ingredient and a ‘wineglassful’ of another one; or a ‘breakfastcupful’ or ‘cupful’ of one ingredient, a ‘teacupful’ of a second one, and a ‘coffeecupful’ of a third one. Unlike the US customary cup and the metric cup, a tumbler, a breakfast cup, a cup, a teacup, a coffee cup, and a wine glass are not measuring cups: they are simply everyday drinking vessels commonly found in British households and typically having the respective aforementioned capacities; due to long‑term and widespread use, they have been transformed into measurement units for cooking. There is not a British imperial unit⁠–⁠based culinary measuring cup. International Similar units in other languages and cultures are sometimes translated "cup", usually with various values around to of a litre. Latin American cup In Latin America, the amount of a "cup" () varies from country to country, using a cup of 200ml (about 7·04 British imperial fluid ounces or 6·76 US customary fluid ounces), 250ml (about 8·80 British imperial fluid ounces or 8·45 US customary fluid ounces), and the US legal or customary amount. Japanese cup The traditional Japanese unit equated with a "cup" size is the gō, legally equated with litre (≈ 180.4 ml/6·35 British imperial fluid ounces/6·1 US customary fluid ounces) in 1891, and is still used for reckoning amounts of rice and sake. The Japanese later defined a "cup" as 200 ml. Russian cup The traditional Russian measurement system included two cup sizes: the "charka" (cup proper) and the "stakan" ("glass"). The charka was usually used for alcoholic drinks and is 123mL (about 4·33 British imperial fluid ounces or 4·16 US customary fluid ounces), while the stakan, used for other liquids, was twice as big and is 246mL (about 8·66 British imperial fluid ounces or 8·32 US customary fluid ounces). Since metrication, the charka was informally redefined as 100 ml (about 3·52 British imperial fluid ounces or 3·38 US customary fluid ounces), acquiring a new name of "stopka" (related to the traditional Russian measurement unit "stopa"), while there are currently two widely used glass sizes of 250mL (about 8·80 British imperial fluid ounces or 8·45 US customary fluid ounces) and 200 ml (about 7·04 British imperial fluid ounces or 6·76 US customary fluid ounces). Dutch cup In The Netherlands, traditionally a "cup" (Dutch: kopje) amounts to 150 ml (about 5·28 British imperial fluid ounces or 5·07 US customary fluid ounces). However, in modern recipes, the US legal cup of 240 ml (about 8·45 British imperial fluid ounces or 8·12 US customary fluid ounces) is more commonly used. Dry measure In Europe, recipes normally weigh non-liquid ingredients in grams rather than measuring volume. 
For example, where an American recipe might specify "1 cup of sugar and 2 cups of milk", a European recipe might specify "200 g sugar and 500 ml of milk". A precise conversion between the two measures takes into account the density of the ingredients, and some recipes specify both weight and volume to facilitate this conversion. Many European measuring cups have markings that indicate the weight of common ingredients for a given volume.
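A precise cup-to-weight conversion depends on the ingredient's bulk density, as noted above. The Python sketch below shows the idea for the example recipe just mentioned; the densities are rough assumed figures (granulated sugar about 0.85 g/ml, milk about 1.03 g/ml), so the printed gram equivalents are only illustrative of how such a conversion works, not reference values.

```python
US_LEGAL_CUP_ML = 240.0   # US legal cup used on nutrition labels
METRIC_CUP_ML = 250.0     # metric cup used in Australia, Canada, NZ

# Assumed approximate bulk densities in g/ml; real values vary with the
# ingredient, how it is packed, temperature, etc.
DENSITY_G_PER_ML = {
    "granulated sugar": 0.85,
    "plain flour": 0.53,
    "milk": 1.03,
}

def cups_to_grams(cups, ingredient, cup_ml=US_LEGAL_CUP_ML):
    """Convert a volume in cups to an approximate mass in grams."""
    return cups * cup_ml * DENSITY_G_PER_ML[ingredient]

# "1 cup of sugar and 2 cups of milk", as in the example recipe above.
print(f"1 US cup sugar     ~ {cups_to_grams(1, 'granulated sugar'):.0f} g")
print(f"2 US cups milk     ~ {cups_to_grams(2, 'milk'):.0f} g "
      f"(~{2 * US_LEGAL_CUP_ML:.0f} ml)")
print(f"1 metric cup sugar ~ {cups_to_grams(1, 'granulated sugar', METRIC_CUP_ML):.0f} g")
```

With these assumed densities, one US cup of sugar comes out near 200 g and two cups of milk near 500 ml, which is why the European recipe in the example above reads the way it does.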
Physical sciences
Volume
Basics and measurement
2969781
https://en.wikipedia.org/wiki/Thrinaxodon
Thrinaxodon
Thrinaxodon is an extinct genus of cynodonts, including the species T. liorhinus which lived in what are now South Africa and Antarctica during the Late Permian - Early Triassic. Thrinaxodon lived just after the Permian–Triassic mass extinction event, its survival during the extinction may have been due to its burrowing habits. Similar to other therapsids, Thrinaxodon adopted a semi-sprawling posture, an intermediary form between the sprawling position of basal tetrapods and the more upright posture present in current mammals. Thrinaxodon is prevalent in the fossil record in part because it was one of the few carnivores of its time, and was of a larger size than similar cynodont carnivores. Description Thrinaxodon was a small synapsid roughly the size of a fox and possibly covered in hair. The dentition suggests that it was a carnivore, focusing its diet mostly on insects, small herbivores and invertebrates. Their unique secondary palate successfully separated the nasal passages from the rest of the mouth, allowing the Thrinaxodon to continue mastication without interrupting to breathe, an adaptation important for digestion. Skull The nasals of Thrinaxodon are pitted with a large number of foramina. The nasals narrow anteriorly and expand anteriorly and articulate directly with the frontals, pre-frontals and lacrimals; however, there is no interaction with the jugals or the orbitals. The maxilla of Thrinaxodon is also heavily pitted with foramina. The arrangement of foramina on the snout of Thrinaxodon resembles that of lizards, such as Tupinambis, and also bears a single large infraorbital foramen. As such, Thrinaxodon would have had non-muscular lips like those of lizards, not mobile, muscular ones like those of mammals. Without the infraorbital foramen and its associated facial flexibility, it is unlikely that Thrinaxodon would have had whiskers. On the skull roof of Thrinaxodon, the fronto-nasal suture represents an arrow shape instead of the general transverse process seen in more primitive skull morphologies. The prefrontals, which are slightly anterior and ventral to the frontals exhibit a very small size and come in contact with the post-orbitals, frontals, nasals and lacrimals. More posteriorly on the skull, the parietals lack a sagittal crest. The cranial roof is the narrowest just posterior to the parietal foramen, which is very nearly circular in shape. The temporal crests remain quite discrete throughout the length of the skull. The temporal fenestra have been found with ossified fasciae, giving evidence of some type of a temporal muscle attachment. The upper jaw contains a secondary palate which separates the nasal passages from the rest of the mouth, which would have given Thrinaxodon the ability to breathe uninterrupted, even if food had been kept in its mouth. This adaptation would have allowed the Thrinaxodon to mash its food to a greater extent, decreasing the amount of time necessary for digestion. The maxillae and palatines meet medially in the upper jaw developing a midline suture. The maxillopalatine suture also includes a posterior palatine foramen. The large palatal roof component of the vomer in Thrinaxodon is just dorsal to the choana, or interior nasal passages. The pterygoid bones extend in the upper jaw and enclose small interpterygoid vacuities that are present on each side of the cultriform processes of the parasphenoids. 
The parasphenoid and basisphenoid are fused, except for the most anterior/dorsal end of the fused bones, in which there is a slight separation in the trabecular attachment of the basisphenoid. The otic region is defined by the regions surrounding the temporal fenestrae. Most notable is evidence of a deep recess that is just anterior to the fenestra ovalis, containing evidence of smooth muscle interactions with the skull. Such smooth muscle interactions have been interpreted to be indicative of the tympanum and give the implications that this recess, in conjunction with the fenestra ovalis, outline the origin of the ear in Thrinaxodon. This is a new synapomorphy as this physiology had arisen in Thrinaxodon and had been conserved through late Cynodontia. The stapes contained a heavy cartilage plug, which was fit into the sides of the fenestra ovalis; however, only one half of the articular end of the stapes was able to cover the fenestra ovalis. The remainder of this pit opens to an "un-ossified" region which comes somewhat close to the cochlear recess, giving one the assumption that inner ear articulation occurred directly within this region. The skull of Thrinaxodon is an important transitional fossil which supports the simplification of synapsid skulls over time. The most notable jump in bone number reduction had occurred between Thrinaxodon and Probainognathus, a change so dramatic that it is most likely that the fossil record for this particular transition is incomplete. Thrinaxodon contains fewer bones in the skull than that of its pelycosaurian ancestors. Dentition Data on the dentition of Thrinaxodon liorhinus was compiled by use of a micro CT scanner on a large sample of Thrinaxodon skulls, ranging between in length. These dentition patterns are similar to that of Morganucodon, allowing one to make the assumption that these dentition patterns arose within Thrinaxodontidae and extended into the records of early Mammalia. Adult T. liorhinus assumes the dental pattern of the four incisors, one canine and six postcanines on each side of the upper jaw. This pattern is reflected in the lower jaw by a dental formula of three incisors, one canine and seven or eight postcanines on each side of the lower jaw. With this formula, one can make a small note that in general, adult Thrinaxodon contained anywhere between 44 and 46 total teeth. Upper incisors in T. liorhinus assume a backwards directed cusp, being curved and pointed at their most distal point, and becoming broader and rounder as they reach their proximal insertion point into the premaxilla. The fourth upper incisor is roughly homologous with a small canine tooth in form, but is positioned too far anteriorly to be a functional canine - thus ruling it out as an instance of convergent evolution. Lower incisors possess a very broad base, which is progressively reduced, heading distally towards the tip of the tooth. The lingual face of the lower incisors is most often concave while the labial face is often convex, and these lower incisors are oriented anteriorly, except in some cases for the third lower incisor, which can assume a more dorsoventral orientation. The incisors are, for the most part, single functional teeth encompassing a broad, cone-like morphology. The canines of T. liorhinus possess small dorsoventrally-directed facets on their surfaces, which appear to be involved with occlusion (dentition alignment in upper- and lower jaw closure). 
Each canine possesses a replacement canine located within the jaw, posterior to the existing canine, neither of the replacement or functional canine teeth possess any serrated margins only the small facets. It is important to note that the lower canine is directed almost vertically (dorsoventrally) while the upper canine is directed slightly anteriorly. The upper and lower postcanines in T. liorhinus share some common features but also vary quite a fair amount in comparison to one another. The first postcanine (just posterior to the canine) is most often smaller than the other postcanines and is most often bicuspid. Including the first postcanine, if any of the other postcanines are bicuspid, then it is safe to assume that the posterior accessary cusp is present and that that tooth will not have any cingular or labial cusps. If, however, the tooth is tricuspid, then there is a chance of cingular cusps developing, if this occurs then the anterior cusp will be the first to appear and will be the most pronounced cusp. In the upper postcanines, there should be no occurrence of any teeth possessing more than three cusps, and there is no occurrence of any labial cusps on the upper postcanines. The majority of upper postcanines in the juvenile Thrinaxodon are bicuspid, while only one of these upper teeth are tricuspid. The upper postcanines of an intermediate (between juvenile and adult) Thrinaxodon are all tricuspid with no labial or cingular cusps. The adult upper postcanines retain the intermediate physiologies and possess only tricuspid teeth; however, it is possible for cingular cusps to develop in these adult teeth. The ultimate (posterior-most) upper canine is often the smallest of all canines in the entire jaw system. Little data is known of the juvenile and intermediate forms of the lower postcanines in Thrinaxodon, but the adult lower postcanines all possess multiple (any value more than three) cusps as well as the only appearance of labial cusps. Some older specimens have been found that possess no multiple-cups lower canines, possibly a response to old age or teeth replacement. Thrinaxodon shows one of the first occurrences of replacement teeth in cynodonts. This was discerned by the presence of replacement pits, which are situated lingual to the functional tooth in the incisors and postcanines. While a replacement canine does exist, more often than not it is not erupted and the original functional canine remains. Histology The bone tissue of Thrinaxodon consists of fibro-lamellar bone, to a varying degree across all the separate limbs, most of which develops into parallel-fibred bone tissue towards the periphery. Each of the bones contains a large abundance of globular osteocyte lacunae which radiate a multitude of branched canaliculi. Ontogenetically early bones - mostly consisting of fibro-lamellar tissue - possessed a large amount of vascular canals. These canals are oriented longitudinally within primary osteons that contain radial anastomoses. Regions consisting mostly of parallel-fibred bone tissue contain few simple vascular canals, in comparison to the nearby fibro-lamellar tissues. Parallel-fibred peripheral bone tissue are indicative that bone growth began to slow, and they bring about the assumption that this change in growth was due to the age of the specimen in question. Combine this with the greater organization of osteocyte lacunae in the periphery of adult T. 
liorhinus, and we approach the assumption that this creature grew very quickly in order to reach adulthood at an accelerated rate. Before Thrinaxodon, ontogenical patterns such as this had not been seen, establishing the idea that reaching peak size rapidly was an adaptively advantageous trait that had arisen with Thrinaxodon. Within the femur of Thrinaxodon, there is no major region of the bone that is made of parallel-fibred tissues; however, there is a small ring of parallel-fibred bone within the mid-cortex. The remainder of the femur is made of fibro-lamellar tissue; however, the globular osteocyte lacunae become much more organized and the primary osteons assume less vasculature than many other bones as you begin to approach the subperiosteal surface. The femur contains very few bony trabeculae. The humerus differs from the femur in many regards, one of which being that there is a more extensive network of bony trabeculae in the humerus near the meduallary cavity of the bone. The globular osteocyte lacunae become more flattened as you get closer and closer to the midshaft of the humerus. While the vasculature is present, the humerus contains no secondary osteons. The radii and ulnae of Thrinaxodon represent roughly the same histological patterns. In contrast to the humerii and femora, the parallel-fibred region is far more distinct in the distal bones of the forelimb. The medullary cavities are surrounded by multiple layers of very poorly vascularized endosteal lamellar tissue, along with very large cavities near the medullary cavity of the metaphyses. Discovery and naming Thrinaxodon was originally discovered in the Lystrosaurus Assemblage Zone of the Beaufort Group of South Africa. The genoholotype, BMNH R 511, was in 1887 described by Richard Owen as the plesiotype of Galesaurus planiceps. In 1894 it was by Harry Govier Seeley made a separate genus with as type species Thrinaxodon liorhinus. Its generic name was taken from the Ancient Greek for "trident tooth", thrinax and odon. The specific name is Latinised Greek for "smooth-nosed". Thrinaxodon was initially believed to be isolated to that region. Other fossils in South Africa were recovered from the Normandien and Katberg Formations. It had not been until 1977 that additional fossils of Thrinaxodon had been discovered in the Fremouw Formation of Antarctica. Upon its discovery there, numerous experiments were done to confirm whether or not they had found a new species of Thrinaxodontidae, or if they had found another area which T. liorhinus called home. The first experiment was to evaluate the average number of pre-sacral vertebrae in the Antarctic vs African Thrinaxodon. The data actually showed a slight difference between the two, in that the African T. liorhinus contained 26 presacrals, while the Antarctic Thrinaxodon had 27 pre-sacrals. In comparison to other cynodonts, 27 pre-sacrals appeared to be the norm throughout this sub-section of the fossil record. The next step was to evaluate the size of the skull in the two different discovery groups, and in this study they found no difference between the two, the first indication that they may in fact be of the same species. The ribs were the final physiology to be cross-examined, and while they portrayed slight differences in the expanded ribs, against one another, the most important synapomorphy remained consistent between the two, and that was that the intercostal plates overlapped with one another. 
These evaluations led to the conclusion that they had not found a new species of Thrinaxodontidae, but yet they had found that Thrinaxodon occupied two different geographical regions, which today are separated by an immense expanse of ocean. This discovery was one of many to support the idea of a connected land mass, and that during the early Triassic, Africa and Antarctica must have been linked in some way, shape or form. Classification Thrinaxodon belongs to the clade Epicynodontia, a subdivision of the greater clade Cynodontia. Cynodontia eventually led to the evolution of Morganucodon and all other mammalia. Cynodontia belongs to the clade Therapsida, which was the first major clade along the line of the Synapsida. Synapsida represents one of two major splitting points, under the clade Amniota, which also split into Sauropsida, the larger clade containing today's reptiles, birds and Crocodilia. Thrinaxodon represents a fossil transitional in morphology on the road to humans and other extant mammals. Paleobiology Ontogeny There appear to be nine cranial features that successfully separate Thrinaxodon into four ontogenetic stages. The paper denotes that in general, the Thrinaxodon skull increased in size isometrically, except for four regions, one of which being the optic region. Much of the data assumes that the length of the sagittal crest increased at a greater rate in relation to the rest of the skull. The posterior sagittal crest to appear in an earlier ontogenetic stage than the more anterior crest had, and in conjunction with the dorsal deposition of bone, a unified sagittal crest had developed rather than having a single suture span the entire length of the skull. The bone histology of Thrinaxodon indicates that it most likely had very rapid bone growth during juvenile development, and much slower development throughout adulthood, giving rise to the idea that Thrinaxodon reached peak size very early in its life. Posture The posture of Thrinaxodon is an interesting subject, because it represents a transition between the sprawling behavior of the more lizard-like pelycosaurs and the more upright behavior found in modern, and many extinct, Mammalia. In cynodonts such as Thrinaxodon, the distal femoral condyle articulates with the acetabulum in a way that permits the hindlimb to present itself at a 45-degree angle to the rest of the system. This is a large difference in comparison to the distal femoral condyle of pelycosaurs, which permits the femur to be parallel with the ground, forcing them to assume a sprawling-like posture. More interesting is that there is an adaptation that has only been observed within Thrinaxodontidae, which allows them to assume upright posture, similar to that of early Mammalia, within their burrows. These changes in posture are supported by the physiological changes in the torso of Thrinaxodon. Such changes as the first appearance of a segmented rib compartment, in which Thrinaxodon expresses both thoracic and lumbar vertebrae. The thoracic segment of the vertebrae contain ribs with large intercostal plates that most likely assisted with either protection or supporting the main frame of the back. This newly developed arrangement allowed for the appropriate space for a diaphragm, however, without proper soft tissue records, the presence of a diaphragm is purely speculative. Burrowing Thrinaxodon has been identified as a burrowing cynodont by numerous discoveries in preserved burrow hollows. 
There is evidence that the burrows are in fact built by the Thrinaxodon to live in them, and they do not simply inhabit leftover burrows by other creatures. Due to the evolution of a segmented vertebral column into thoracic, lumbar and sacral vertebrae, Thrinaxodon was able to achieve flexibilities that permitted it to comfortably rest within smaller burrows, which may have led to habits such as aestivation or torpor. This evolution of a segmented rib cage suggests that this may have been the first instance of a diaphragm in the synapsid fossil record; however, without the proper soft tissue impressions this is nothing more than an assumption. The earliest discovery of a burrowing Thrinaxodon places the specimen found around 251 million years ago, a time frame surrounding the Permian–Triassic extinction event. Much of these fossils had been found in the flood plains of South Africa, in the Karoo Basin. This behavior had been seen at a relatively low occurrence in the pre-Cenozoic, dominated by therapsids, early-Triassic cynodonts and some early Mammalia. Thrinaxodon was in fact the first burrowing cynodont that has been found, showing similar behavioral patterns to that of Trirachodon. The first burrowing vertebrate on record was the dicynodont synapsid Diictodon, and it is possible that these burrowing patterns had passed on to the future cynodonts due to the adaptive advantage of burrowing during the extinction. The burrow of Thrinaxodon consists of two laterally sloping halves, a pattern that has only been observed in burrowing non-mammalian Cynodontia. The changes in vertebral/rib anatomy that arose in Thrinaxodon permit the animals to a greater range of flexibility, and the ability to place their snout underneath their hindlimbs, an adaptive response to small living quarters, in order to preserve warmth and/or for aestivation purposes. A Thrinaxodon burrow contained an injured temnospondyl, Broomistega. The burrow was scanned using a synchrotron, a tool used to observe the contents of the burrows in this experiment, and not damage the intact specimens. The synchrotron revealed an injured rhinesuchid, Broomistega putterilli, showing signs of broken or damaged limbs and two skull perforations, most likely inflicted by the canines of another carnivore. The distance between the perforations was measured in relation to the distance between the canines of the Thrinaxodon in question, and no such relation was found. Therefore, we may assume that the temnospondyl found refuge in the burrow after a traumatic experience and the T. liorhinus allowed it to stay in its burrow until they both ultimately met their respective deaths. Interspecific shelter sharing is a rare anomaly within the fossil record; this T. liorhinus shows one of the first occurrences of this type of behavior in the fossil record, but it currently is unknown if the temnospondyl inhabited the burrow before or after the death of the nesting Thrinaxodon.
Biology and health sciences
Proto-mammals
Animals
1500225
https://en.wikipedia.org/wiki/Gnetum%20gnemon
Gnetum gnemon
Gnetum gnemon is a gymnosperm species of the genus Gnetum. Its native range spans from Mizoram and Assam in India south through the Malay Peninsula, the Malay Archipelago and the Philippines in Southeast Asia to the western Pacific islands. Common names include gnetum, joint fir, two leaf, melinjo/belinjo (Indonesian), bago (Filipino), and tulip (Tok Pisin). Description This species can easily be mistaken for an angiosperm because, through convergent evolution, it bears fruit-like female strobili, broad leaves, and male strobili that resemble flowers. Tree It is a small to medium-size tree (unlike most other Gnetum species, which are lianas), growing to 15–22 metres tall and with a trunk diameter of up to 40 cm (16 in). In addition to the tree form, there are also varieties that include shrub forms (brunonianum, griffithii, and tenerum). The leaves are evergreen, opposite, 8–20 cm long and 3–10 cm broad, entire, emerging bronze-coloured and maturing glossy dark green. The tree does not flower, but bears male and female strobili on single long stems 3–6 centimetres long. Male strobili are small and arranged on long stalks, and are often mistaken for flowers; the melinjo fruit instead develops from the fertilized female strobili. Fruit The oval fruit (technically a strobilus) measures 1–3.5 cm long; it consists of a thin velvety integument and a large nut-like endosperm 2–4 cm long inside. Fleshy strobili weigh about 5.5 g, the endosperm alone 3.8 g. The fruit changes colour from yellow to orange, purple or pink when ripe. In Indonesia, the melinjo season comes three times a year, in March to April, June to July, and September to October, while in the northeastern Philippines the fruiting season runs mainly from June to September. Uses Culinary Gnetum nuts are eaten boiled, roasted, or raw in most parts of Southeast Asia and Melanesia. The young leaves, flowers, and the outer flesh of the fruits are also edible when cooked and are eaten in Indonesia, the Philippines, Thailand, Vanuatu, Papua New Guinea, the Solomon Islands, and Fiji. They have a slightly sour taste and are commonly eaten in soups and stews. Gnetum is most widely used in Indonesian cuisine, where it is known as melinjo or belinjo. The seeds are used for sayur asem (sour vegetable soup) and are also made into raw chips that are later deep-fried as crackers (emping, a type of krupuk). The crackers have a slightly bitter taste and are frequently served as a snack or accompaniment to Indonesian dishes. This plant is commonly cultivated throughout the Aceh region and is regarded as a vegetable of high status. Its male strobili, young leaves and female strobili are used as ingredients in a traditional vegetable curry called . This dish is served on all important traditional occasions, such as and . In the Pidie district, the women pick the red-skinned ripe fruit and make from it. Phytochemicals Recently, it has been discovered that melinjo strobili are rich in a stilbenoid identified as a resveratrol dimer. This result was reported at the XXIII International Conference on Polyphenols, Canada, in 2006. Melinjo resveratrol has antibacterial and antioxidative activity and works as a food preservative, off-flavour inhibitor and taste enhancer. This species may therefore have applications in food industries which do not use any synthetic chemicals in their processes. Four new stilbene oligomers, gnemonol G, H, I and J, were isolated from an acetone extract of the root of Gnetum gnemon, along with five known stilbenoids, ampelopsin E, cis-ampelopsin E, and gnetins C, D and E. 
The extraction of dried leaf of Gnetum gnemon with acetone water (1:1) gave C-glycosylflavones (isovitexin, vicenin II, isoswertisin, swertisin, swertiajaponin, isoswertiajaponin). The separation of a 50% ethanol extract of the dried endosperms yielded gnetin C, gnetin L (new stilbenoid), gnemonosides A, C and D, and resveratrol which were tested for DPPH radical scavenging action, antimicrobial activity and inhibition of lipase and α-amylase from porcine pancreas. Gnetin C showed the best effect among these stilbenoids. Oral administration of the 50% ethanol extract of melinjo fruit at 100 mg/kg/day significantly enhanced the production of the Th1 cytokines IL-2 and IFN-γ irrespective of concanavalin-A stimulation, whereas the production of the Th2 cytokines IL-4 and IL-5 was not affected. New stilbene glucosides gnemonoside L and gnemonoside M, and known stilbenoids resveratrol, isorhapontigenin, gnemonoside D, gnetins C and E were isolated from the extract. Gnemonoside M strongly enhanced Th1 cytokine production in cultured Peyer's patch cells from mice at 10 mg/kg/day.
Biology and health sciences
Gymnosperms (except conifers)
Plants
1500869
https://en.wikipedia.org/wiki/Coefficient%20of%20determination
Coefficient of determination
In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s). It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model. There are several definitions of R2 that are only sometimes equivalent. One class of such cases includes that of simple linear regression where r2 is used instead of R2. When only an intercept is included, then r2 is simply the square of the sample correlation coefficient (i.e., r) between the observed outcomes and the observed predictor values. If additional regressors are included, R2 is the square of the coefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1. There are cases where R2 can yield negative values. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used, R2 may still be negative, for example when linear regression is conducted without including an intercept, or when a non-linear function is used to fit the data. In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion. The coefficient of determination can be more intuitively informative than MAE, MAPE, MSE, and RMSE in regression analysis evaluation, as the former can be expressed as a percentage, whereas the latter measures have arbitrary ranges. It also proved more robust for poor fits compared to SMAPE on certain test datasets. When evaluating the goodness-of-fit of simulated (Ypred) versus measured (Yobs) values, it is not appropriate to base this on the R2 of the linear regression (i.e., Yobs= m·Ypred + b). The R2 quantifies the degree of any linear correlation between Yobs and Ypred, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration: Yobs = 1·Ypred + 0 (i.e., the 1:1 line). Definitions A data set has n values marked y1, ..., yn (collectively known as yi or as a vector y = [y1, ..., yn]T), each associated with a fitted (or modeled, or predicted) value f1, ..., fn (known as fi, or sometimes ŷi, as a vector f). Define the residuals as (forming a vector e). If is the mean of the observed data: then the variability of the data set can be measured with two sums of squares formulas: The sum of squares of residuals, also called the residual sum of squares: The total sum of squares (proportional to the variance of the data): The most general definition of the coefficient of determination is In the best case, the modeled values exactly match the observed values, which results in and . A baseline model, which always predicts , will have . Relation to unexplained variance In a general form, R2 can be seen to be related to the fraction of variance unexplained (FVU), since the second term compares the unexplained variance (variance of the model's errors) with the total variance (of the data): As explained variance A larger value of R2 implies a more successful regression model. Suppose . 
This implies that 49% of the variability of the dependent variable in the data set has been accounted for, and the remaining 51% of the variability is still unaccounted for. For regression models, the regression sum of squares, also called the explained sum of squares, is defined as In some cases, as in simple linear regression, the total sum of squares equals the sum of the two other sums of squares defined above: See Partitioning in the general OLS model for a derivation of this result for one case where the relation holds. When this relation does hold, the above definition of R2 is equivalent to where n is the number of observations (cases) on the variables. In this form R2 is expressed as the ratio of the explained variance (variance of the model's predictions, which is ) to the total variance (sample variance of the dependent variable, which is ). This partition of the sum of squares holds for instance when the model values ƒi have been obtained by linear regression. A milder sufficient condition reads as follows: The model has the form where the qi are arbitrary values that may or may not depend on i or on other free parameters (the common choice qi = xi is just one special case), and the coefficient estimates and are obtained by minimizing the residual sum of squares. This set of conditions is an important one and it has a number of implications for the properties of the fitted residuals and the modelled values. In particular, under these conditions: As squared correlation coefficient In linear least squares multiple regression (with fitted intercept and slope), R2 equals the square of the Pearson correlation coefficient between the observed and modeled (predicted) data values of the dependent variable. In a linear least squares regression with a single explanator (with fitted intercept and slope), this is also equal to the squared Pearson correlation coefficient between the dependent variable and explanatory variable . It should not be confused with the correlation coefficient between two explanatory variables, defined as where the covariance between two coefficient estimates, as well as their standard deviations, are obtained from the covariance matrix of the coefficient estimates, . Under more general modeling conditions, where the predicted values might be generated from a model different from linear least squares regression, an R2 value can be calculated as the square of the correlation coefficient between the original and modeled data values. In this case, the value is not directly a measure of how good the modeled values are, but rather a measure of how good a predictor might be constructed from the modeled values (by creating a revised predictor of the form ). According to Everitt, this usage is specifically the definition of the term "coefficient of determination": the square of the correlation between two (general) variables. Interpretation R2 is a measure of the goodness of fit of a model. In regression, the R2 coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. An R2 of 1 indicates that the regression predictions perfectly fit the data. Values of R2 outside the range 0 to 1 occur when the model fits the data worse than the worst possible least-squares predictor (equivalent to a horizontal hyperplane at a height equal to the mean of the observed data). This occurs when a wrong model was chosen, or nonsensical constraints were applied by mistake. 
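These definitions and properties are straightforward to verify numerically. The Python sketch below, using invented data, computes R2 directly from the sums of squares, checks that for a straight-line least-squares fit (via numpy.polyfit) it equals the squared Pearson correlation between observed and fitted values, and shows that predictions not derived from fitting the data can do worse than the mean and give a negative R2. Both the data set and the deliberately bad predictor are made up purely for illustration.

```python
import numpy as np

def r_squared(y_obs, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_obs - y_pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)   # invented data

# Straight-line least-squares fit (with intercept).
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = slope * x + intercept

print(f"R^2 from sums of squares   : {r_squared(y, y_hat):.6f}")
print(f"squared Pearson r(y, y_hat): {np.corrcoef(y, y_hat)[0, 1] ** 2:.6f}")

# Predictions that were not fitted to these data can do worse than simply
# predicting the mean, which yields a negative R^2.
y_bad = np.full_like(y, 50.0)          # a constant guess, far from the data
print(f"R^2 of a poor fixed guess  : {r_squared(y, y_bad):.3f}")
```

The first two printed values agree to within floating-point error, illustrating the squared-correlation reading of R2 for a fitted linear model, while the last one is negative, illustrating the case discussed above.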
If equation 1 of Kvålseth is used (this is the equation used most often), R2 can be less than zero. If equation 2 of Kvålseth is used, R2 can be greater than one. In all instances where R2 is used, the predictors are calculated by ordinary least-squares regression: that is, by minimizing SSres. In this case, R2 increases as the number of variables in the model is increased (R2 is monotone increasing with the number of variables included—it will never decrease). This illustrates a drawback to one possible use of R2, where one might keep adding variables (kitchen sink regression) to increase the R2 value. For example, if one is trying to predict the sales of a model of car from the car's gas mileage, price, and engine power, one can include probably irrelevant factors such as the first letter of the model's name or the height of the lead engineer designing the car because the R2 will never decrease as variables are added and will likely experience an increase due to chance alone. This leads to the alternative approach of looking at the adjusted R2. The explanation of this statistic is almost the same as R2 but it penalizes the statistic as extra variables are included in the model. For cases other than fitting by ordinary least squares, the R2 statistic can be calculated as above and may still be a useful measure. If fitting is by weighted least squares or generalized least squares, alternative versions of R2 can be calculated appropriate to those statistical frameworks, while the "raw" R2 may still be useful if it is more easily interpreted. Values for R2 can be calculated for any type of predictive model, which need not have a statistical basis. In a multiple linear model Consider a linear model with more than a single explanatory variable, of the form where, for the ith case, is the response variable, are p regressors, and is a mean zero error term. The quantities are unknown coefficients, whose values are estimated by least squares. The coefficient of determination R2 is a measure of the global fit of the model. Specifically, R2 is an element of [0, 1] and represents the proportion of variability in Yi that may be attributed to some linear combination of the regressors (explanatory variables) in X. R2 is often interpreted as the proportion of response variation "explained" by the regressors in the model. Thus, R2 = 1 indicates that the fitted model explains all variability in , while R2 = 0 indicates no 'linear' relationship (for straight line regression, this means that the straight line model is a constant line (slope = 0, intercept = ) between the response variable and regressors). An interior value such as R2 = 0.7 may be interpreted as follows: "Seventy percent of the variance in the response variable can be explained by the explanatory variables. The remaining thirty percent can be attributed to unknown, lurking variables or inherent variability." A caution that applies to R2, as to other statistical descriptions of correlation and association is that "correlation does not imply causation." In other words, while correlations may sometimes provide valuable clues in uncovering causal relationships among variables, a non-zero estimated correlation between two variables is not, on its own, evidence that changing the value of one variable would result in changes in the values of other variables. For example, the practice of carrying matches (or a lighter) is correlated with incidence of lung cancer, but carrying matches does not cause cancer (in the standard sense of "cause"). 
In case of a single regressor, fitted by least squares, R2 is the square of the Pearson product-moment correlation coefficient relating the regressor and the response variable. More generally, R2 is the square of the correlation between the constructed predictor and the response variable. With more than one regressor, the R2 can be referred to as the coefficient of multiple determination. Inflation of R2 In least squares regression using typical data, R2 is at least weakly increasing with an increase in number of regressors in the model. Because increases in the number of regressors increase the value of R2, R2 alone cannot be used as a meaningful comparison of models with very different numbers of independent variables. For a meaningful comparison between two models, an F-test can be performed on the residual sum of squares , similar to the F-tests in Granger causality, though this is not always appropriate. As a reminder of this, some authors denote R2 by Rq2, where q is the number of columns in X (the number of explanators including the constant). To demonstrate this property, first recall that the objective of least squares linear regression is where Xi is a row vector of values of explanatory variables for case i and b is a column vector of coefficients of the respective elements of Xi. The optimal value of the objective is weakly smaller as more explanatory variables are added and hence additional columns of (the explanatory data matrix whose ith row is Xi) are added, by the fact that less constrained minimization leads to an optimal cost which is weakly smaller than more constrained minimization does. Given the previous conclusion and noting that depends only on y, the non-decreasing property of R2 follows directly from the definition above. The intuitive reason that using an additional explanatory variable cannot lower the R2 is this: Minimizing is equivalent to maximizing R2. When the extra variable is included, the data always have the option of giving it an estimated coefficient of zero, leaving the predicted values and the R2 unchanged. The only way that the optimization problem will give a non-zero coefficient is if doing so improves the R2. The above gives an analytical explanation of the inflation of R2. Next, an example based on ordinary least square from a geometric perspective is shown below. A simple case to be considered first: This equation describes the ordinary least squares regression model with one regressor. The prediction is shown as the red vector in the figure on the right. Geometrically, it is the projection of true value onto a model space in (without intercept). The residual is shown as the red line. This equation corresponds to the ordinary least squares regression model with two regressors. The prediction is shown as the blue vector in the figure on the right. Geometrically, it is the projection of true value onto a larger model space in (without intercept). Noticeably, the values of and are not the same as in the equation for smaller model space as long as and are not zero vectors. Therefore, the equations are expected to yield different predictions (i.e., the blue vector is expected to be different from the red vector). The least squares regression criterion ensures that the residual is minimized. In the figure, the blue line representing the residual is orthogonal to the model space in , giving the minimal distance from the space. 
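The projection argument can be checked numerically. The following sketch, assuming numpy, hypothetical data, and an illustrative helper projection() built from the pseudoinverse, projects the same response onto a smaller and a larger model space and compares the residual norms.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.8 * x1 + rng.normal(size=n)

def projection(X):
    """Orthogonal projection matrix onto the column space of X."""
    return X @ np.linalg.pinv(X)

# Smaller model space: intercept and x1; larger space: intercept, x1 and x2
X_small = np.column_stack([np.ones(n), x1])
X_large = np.column_stack([np.ones(n), x1, x2])

resid_small = y - projection(X_small) @ y
resid_large = y - projection(X_large) @ y

# The smaller space is a subspace of the larger one, so the residual
# (the distance from y to the space) cannot increase when a column is added
print(np.linalg.norm(resid_small), np.linalg.norm(resid_large))
```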
The smaller model space is a subspace of the larger one, and thereby the residual of the smaller model is guaranteed to be larger. Comparing the red and blue lines in the figure, the blue line is orthogonal to the space, and any other line would be larger than the blue one. Considering the calculation for R2, a smaller value of SSres will lead to a larger value of R2, meaning that adding regressors will result in inflation of R2. Caveats R2 does not indicate whether: the independent variables are a cause of the changes in the dependent variable; omitted-variable bias exists; the correct regression was used; the most appropriate set of independent variables has been chosen; there is collinearity present in the data on the explanatory variables; the model might be improved by using transformed versions of the existing set of independent variables; there are enough data points to make a solid conclusion; there are a few outliers in an otherwise good sample. Extensions Adjusted R2 The use of an adjusted R2 (one common notation is R̄2, pronounced "R bar squared"; another is R2adj or R2a) is an attempt to account for the phenomenon of the R2 automatically increasing when extra explanatory variables are added to the model. There are many different ways of adjusting. By far the most used one, to the point that it is typically just referred to as adjusted R2, is the correction proposed by Mordecai Ezekiel. The adjusted R2 is defined as R̄2 = 1 − (SSres/dfres)/(SStot/dftot), where dfres is the degrees of freedom of the estimate of the population variance around the model, and dftot is the degrees of freedom of the estimate of the population variance around the mean. dfres is given in terms of the sample size n and the number of variables p in the model, dfres = n − p − 1. dftot is given in the same way, but with p being zero for the mean, i.e. dftot = n − 1. Inserting the degrees of freedom and using the definition of R2, it can be rewritten as R̄2 = 1 − (1 − R2)(n − 1)/(n − p − 1), where p is the total number of explanatory variables in the model (excluding the intercept), and n is the sample size. The adjusted R2 can be negative, and its value will always be less than or equal to that of R2. Unlike R2, the adjusted R2 increases only when the increase in R2 (due to the inclusion of a new explanatory variable) is more than one would expect to see by chance. If a set of explanatory variables with a predetermined hierarchy of importance is introduced into a regression one at a time, with the adjusted R2 computed each time, the level at which the adjusted R2 reaches a maximum and decreases afterward would be the regression with the ideal combination of having the best fit without excess or unnecessary terms. The adjusted R2 can be interpreted as an instance of the bias-variance tradeoff. When we consider the performance of a model, a lower error represents a better performance. When the model becomes more complex, the variance will increase whereas the square of bias will decrease, and these two metrics add up to the total error. Combining these two trends, the bias-variance tradeoff describes a relationship between the performance of the model and its complexity, which is shown as a U-shaped curve on the right. For the adjusted R2 specifically, the model complexity (i.e. the number of parameters) affects both the R2 and the degrees-of-freedom ratio (n − 1)/(n − p − 1), and the adjusted R2 thereby captures their combined effect on the overall performance of the model. R2 can be interpreted as the variance of the model, which is influenced by the model complexity. A high R2 indicates a lower bias error because the model can better explain the change of Y with predictors.
For this reason, we make fewer (erroneous) assumptions, and this results in a lower bias error. Meanwhile, to accommodate fewer assumptions, the model tends to be more complex. Based on the bias-variance tradeoff, a higher complexity will lead to a decrease in bias and a better performance (below the optimal line). In the adjusted R2, the term (1 − R2) will be lower with high complexity, resulting in a higher adjusted R2 and consistently indicating a better performance. On the other hand, the degrees-of-freedom ratio (n − 1)/(n − p − 1) is affected by model complexity in the opposite direction. The ratio will increase when regressors are added (i.e. increased model complexity) and lead to worse performance. Based on the bias-variance tradeoff, a higher model complexity (beyond the optimal line) leads to increasing errors and a worse performance. Considering the calculation of the adjusted R2, more parameters will increase the R2 and lead to an increase in the adjusted R2. Nevertheless, adding more parameters will also increase the ratio (n − 1)/(n − p − 1) and thus decrease the adjusted R2. These two trends construct an inverse U-shaped relationship between model complexity and the adjusted R2, which is consistent with the U-shaped trend of model complexity versus overall performance. Unlike R2, which will always increase when model complexity increases, the adjusted R2 will increase only when the bias eliminated by the added regressor is greater than the variance introduced simultaneously. Using the adjusted R2 instead of R2 could thereby prevent overfitting. Following the same logic, the adjusted R2 can be interpreted as a less biased estimator of the population R2, whereas the observed sample R2 is a positively biased estimate of the population value. The adjusted R2 is more appropriate when evaluating model fit (the variance in the dependent variable accounted for by the independent variables) and in comparing alternative models in the feature selection stage of model building. The principle behind the adjusted R2 statistic can be seen by rewriting the ordinary R2 as R2 = 1 − (SSres/n)/(SStot/n), where SSres/n and SStot/n are the sample variances of the estimated residuals and the dependent variable respectively, which can be seen as biased estimates of the population variances of the errors and of the dependent variable. These estimates are replaced by statistically unbiased versions: SSres/(n − p − 1) and SStot/(n − 1). Despite using unbiased estimators for the population variances of the error and the dependent variable, the adjusted R2 is not an unbiased estimator of the population R2, which results by using the population variances of the errors and the dependent variable instead of estimating them. Ingram Olkin and John W. Pratt derived the minimum-variance unbiased estimator for the population R2, which is known as the Olkin–Pratt estimator. Comparisons of different approaches for adjusting R2 concluded that in most situations either an approximate version of the Olkin–Pratt estimator or the exact Olkin–Pratt estimator should be preferred over the (Ezekiel) adjusted R2. Coefficient of partial determination The coefficient of partial determination can be defined as the proportion of variation that cannot be explained in a reduced model, but can be explained by the predictors specified in a full(er) model. This coefficient is used to provide insight into whether or not one or more additional predictors may be useful in a more fully specified regression model. The calculation for the partial R2 is relatively straightforward after estimating two models and generating the ANOVA tables for them, as sketched in the example below.
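A minimal sketch of this two-model comparison, assuming numpy and hypothetical data; the helper ss_res() and the variable names are illustrative. The partial R2 is taken as the drop in the residual sum of squares from the reduced to the full model, relative to the reduced model's residual sum of squares, matching the ratio given below.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.5 + 1.0 * x1 + 0.7 * x2 + rng.normal(size=n)

def ss_res(X, y):
    """Residual sum of squares from an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    r = y - Xd @ beta
    return r @ r

ss_reduced = ss_res(x1.reshape(-1, 1), y)        # reduced model: x1 only
ss_full = ss_res(np.column_stack([x1, x2]), y)   # full model: x1 and x2

# Proportion of the variation left unexplained by the reduced model
# that the extra predictor in the full model does explain
partial_r2 = (ss_reduced - ss_full) / ss_reduced
print(partial_r2)
```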
The calculation for the partial R2 is (SSres,reduced − SSres,full) / SSres,reduced, which is analogous to the usual coefficient of determination, (SStot − SSres) / SStot. Generalizing and decomposing R2 As explained above, model selection heuristics such as the adjusted R2 criterion and the F-test examine whether the total R2 sufficiently increases to determine if a new regressor should be added to the model. If a regressor is added to the model that is highly correlated with other regressors which have already been included, then the total R2 will hardly increase, even if the new regressor is of relevance. As a result, the above-mentioned heuristics will ignore relevant regressors when cross-correlations are high. Alternatively, one can decompose a generalized version of R2 to quantify the relevance of deviating from a hypothesis. As Hoornweg (2018) shows, several shrinkage estimators – such as Bayesian linear regression, ridge regression, and the (adaptive) lasso – make use of this decomposition of R2 when they gradually shrink parameters from the unrestricted OLS solutions towards the hypothesized values. Let us first define the linear regression model as y = Xβ + ε. It is assumed that the matrix X is standardized with Z-scores and that the column vector y is centered to have a mean of zero. Let the column vector b0 refer to the hypothesized regression parameters and let the column vector b denote the estimated parameters. A generalized R2 can then be defined in terms of the improvement in fit of the estimated parameters over the hypothesized ones. An R2 of 75% means that the in-sample accuracy improves by 75% if the data-optimized b solutions are used instead of the hypothesized b0 values. In the special case that b0 is a vector of zeros, we obtain the traditional R2 again. The individual effect on R2 of deviating from a hypothesis can be computed with a square matrix ('R-outer') that has one row and one column per regressor. The diagonal elements of this matrix exactly add up to R2. If regressors are uncorrelated and b0 is a vector of zeros, then the jth diagonal element simply corresponds to the r2 value between the jth regressor and y. When regressors are correlated, one diagonal element might increase at the cost of a decrease in another. As a result, the diagonal elements may be smaller than 0 and, in more exceptional cases, larger than 1. To deal with such uncertainties, several shrinkage estimators implicitly take a weighted average of the diagonal elements to quantify the relevance of deviating from a hypothesized value. R2 in logistic regression In the case of logistic regression, usually fit by maximum likelihood, there are several choices of pseudo-R2. One is the generalized R2 originally proposed by Cox & Snell, and independently by Magee: R2 = 1 − (L(0)/L(θ̂))^(2/n), where L(0) is the likelihood of the model with only the intercept, L(θ̂) is the likelihood of the estimated model (i.e., the model with a given set of parameter estimates) and n is the sample size. It is easily rewritten to R2 = 1 − exp(−D/n), where D is the test statistic of the likelihood ratio test. Nico Nagelkerke noted that it had the following properties: It is consistent with the classical coefficient of determination when both can be computed; Its value is maximised by the maximum likelihood estimation of a model; It is asymptotically independent of the sample size; The interpretation is the proportion of the variation explained by the model; The values are between 0 and 1, with 0 denoting that the model does not explain any variation and 1 denoting that it perfectly explains the observed variation; It does not have any unit. However, in the case of a logistic model, where L(θ̂) cannot be greater than 1, R2 is between 0 and R2max = 1 − L(0)^(2/n): thus, Nagelkerke suggested the possibility to define a scaled R2 as R2/R2max.
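As a sketch of how these pseudo-R2 values can be computed in practice, assuming only the log-likelihoods of the intercept-only and fitted models are available; the function names and the numerical values used here are illustrative, not taken from the article.

```python
import math

def cox_snell_r2(loglik_null, loglik_model, n):
    """Cox & Snell pseudo-R2 from log-likelihoods: 1 - (L(0)/L)^(2/n)."""
    return 1.0 - math.exp(2.0 * (loglik_null - loglik_model) / n)

def nagelkerke_r2(loglik_null, loglik_model, n):
    """Cox & Snell R2 rescaled by its maximum attainable value 1 - L(0)^(2/n)."""
    r2 = cox_snell_r2(loglik_null, loglik_model, n)
    r2_max = 1.0 - math.exp(2.0 * loglik_null / n)
    return r2 / r2_max

# Hypothetical log-likelihoods from a logistic regression with n = 200
ll_null, ll_model, n = -135.2, -101.7, 200
print(cox_snell_r2(ll_null, ll_model, n))
print(nagelkerke_r2(ll_null, ll_model, n))
```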
Comparison with residual statistics Occasionally, residual statistics are used for indicating goodness of fit. The norm of residuals is calculated as the square root of the sum of squares of residuals (SSR). Similarly, the reduced chi-square is calculated as the SSR divided by the degrees of freedom. Both R2 and the norm of residuals have their relative merits. For least squares analysis R2 varies between 0 and 1, with larger numbers indicating better fits and 1 representing a perfect fit. The norm of residuals varies from 0 to infinity with smaller numbers indicating better fits and zero indicating a perfect fit. One advantage and disadvantage of R2 is that the SStot term acts to normalize the value. If the yi values are all multiplied by a constant, the norm of residuals will also change by that constant but R2 will stay the same. As a basic example, consider the linear least squares fit to the set of data
x: 1, 2, 3, 4, 5
y: 1.9, 3.7, 5.8, 8.0, 9.6
Here R2 = 0.998, and norm of residuals = 0.302. If all values of y are multiplied by 1000 (for example, in an SI prefix change), then R2 remains the same, but norm of residuals = 302. Another single-parameter indicator of fit is the RMSE of the residuals, or standard deviation of the residuals. This would have a value of 0.135 for the above example given that the fit was linear with an unforced intercept. History The creation of the coefficient of determination has been attributed to the geneticist Sewall Wright and was first published in 1921.
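The figures quoted in the example under "Comparison with residual statistics" above can be reproduced with a few lines of code; this sketch assumes numpy and fits the line with an unforced intercept via np.polyfit.

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([1.9, 3.7, 5.8, 8.0, 9.6])

slope, intercept = np.polyfit(x, y, 1)   # linear fit with unforced intercept
resid = y - (intercept + slope * x)

ss_res = resid @ resid
ss_tot = np.sum((y - y.mean()) ** 2)

print(1 - ss_res / ss_tot)        # R2, approximately 0.998
print(np.sqrt(ss_res))            # norm of residuals, approximately 0.302
print(np.sqrt(ss_res / len(y)))   # RMSE of the residuals, approximately 0.135

# Rescaling y by a constant changes the norm of residuals by the same
# factor but leaves R2 unchanged
y2 = 1000 * y
slope2, intercept2 = np.polyfit(x, y2, 1)
resid2 = y2 - (intercept2 + slope2 * x)
print(1 - resid2 @ resid2 / np.sum((y2 - y2.mean()) ** 2))  # still about 0.998
print(np.sqrt(resid2 @ resid2))                             # about 302
```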
Mathematics
Probability
null
1501313
https://en.wikipedia.org/wiki/SN%201006
SN 1006
SN 1006 was a supernova that is likely the brightest observed stellar event in recorded history, reaching an estimated −7.5 visual magnitude, and exceeding roughly sixteen times the brightness of Venus. Appearing between April 30 and May 1, 1006, in the constellation of Lupus, this "guest star" was described by observers across China, Japan, modern-day Iraq, Egypt, and Europe, and was possibly recorded in North American petroglyphs. Some reports state it was clearly visible in the daytime. Modern astronomers now consider its distance from Earth to be about 7,200 light-years or 2,200 parsecs. Historic reports Egyptian astrologer and astronomer Ali ibn Ridwan, writing in a commentary on Ptolemy's Tetrabiblos, stated that the "spectacle was a large circular body, 2 to 3 times as large as Venus. The sky was shining because of its light. The intensity of its light was a little more than a quarter that of Moon light" (or perhaps "than the light of the Moon when one-quarter illuminated"). Like all other observers, Ali ibn Ridwan noted that the new star was low on the southern horizon. Some astrologers interpreted the event as a portent of plague and famine. The most northerly sighting is recorded in the Annales Sangallenses maiores of the Abbey of Saint Gall in Switzerland, at a latitude of 47.5° north. Monks at St. Gall provided independent data as to its magnitude and location in the sky, writing that "[i]n a wonderful manner this was sometimes contracted, sometimes diffused, and moreover sometimes extinguished ... It was seen likewise for three months in the inmost limits of the south, beyond all the constellations which are seen in the sky". This description is often taken as probable evidence that the supernova was of type Ia. In The Book of Healing, Iranian philosopher Ibn Sina reported observing this supernova from northeastern Iran. He reported it as a transient celestial object which was stationary and/or tail-less (a star among the stars), that it remained for close to 3 months getting fainter and fainter until it disappeared, that it threw out sparks, that is, it was scintillating and very bright, and that the color changed with time. Some sources state that the star was bright enough to cast shadows; it was certainly seen during daylight hours for some time. According to Songshi, the official history of the Song dynasty (sections 56 and 461), the star seen on May 1, 1006, appeared to the south of constellation Di, between Lupus and Centaurus. It shone so brightly that objects on the ground could be seen at night. By December, it was again sighted in the constellation Di. The Chinese astrologer Zhou Keming, who was on his return to Kaifeng from his duty in Guangdong, interpreted the star to the emperor on May 30 as an auspicious star, yellow in color and brilliant in its brightness, that would bring great prosperity to the state over which it appeared. The reported color yellow should be taken with some suspicion, however, because Zhou may have chosen a favorable color for political reasons. There appear to have been two distinct phases in the early evolution of this supernova. There was first a three-month period at which it was at its brightest; after this period it diminished, then returned for a period of about eighteen months. 
Petroglyphs by the Hohokam in White Tank Mountain Regional Park, Arizona, and by the Ancestral Puebloans in Chaco Culture National Historical Park, New Mexico, have been interpreted as the first known North American representations of the supernova, though other researchers remain skeptical. The White Tank Mountain Regional Park petroglyph depicts a "star-like object" over a scorpion symbol. It has been contested that the scorpion represents the constellation Scorpius given a lack of evidence that the Native Americans interpreted the stars of that constellation as a scorpion. Earlier observations discovered from Yemen may indicate a sighting of SN 1006 on April 17, two weeks before its previously assumed earliest observation. Remnant SN 1006's associated supernova remnant from this event was not identified until 1965, when Doug Milne and Frank Gardner used the Parkes radio telescope to demonstrate a connection to known radio source PKS 1459−41. This is located near the star Beta Lupi, displaying a 30 arcmin circular shell. X-ray and optical emission from this remnant have also been detected, and during 2010 the H.E.S.S. gamma-ray observatory announced the detection of very-high-energy gamma-ray emission from the remnant. No associated neutron star or black hole has been found, which is the situation expected for the remnant of a Type Ia supernova (a class of explosion believed to completely disrupt its progenitor star). A survey in 2012 to find any surviving companions of the SN 1006 progenitor found no subgiant or giant companion stars, indicating that SN 1006 most likely had double degenerate progenitors; that is, the merging of two white dwarf stars. Remnant SNR G327.6+14.6 has an estimated distance of 2.2 kpc from Earth, making the true linear diameter approximately 20 parsecs. Effect on Earth Research has suggested that type Ia supernovae can irradiate the Earth with significant amounts of gamma-ray flux, compared with the typical flux from the Sun, up to distances on the order of 1 kiloparsec. SN 1006 lies well beyond 1 kiloparsec, and it did not appear to have significant effects on Earth. However, a signal of its outburst can be found in nitrate deposits in Antarctic ice.
Physical sciences
Notable transient events
Astronomy
1501800
https://en.wikipedia.org/wiki/Destructive%20distillation
Destructive distillation
Destructive distillation is a chemical process in which decomposition of unprocessed material is achieved by heating it to a high temperature; the term generally applies to processing of organic material in the absence of air or in the presence of limited amounts of oxygen or other reagents, catalysts, or solvents, such as steam or phenols. It is an application of pyrolysis. The process breaks up or "cracks" large molecules. Coke, coal gas, gaseous carbon, coal tar, ammonia liquor, and coal oil are examples of commercial products historically produced by the destructive distillation of coal. Destructive distillation of any particular inorganic feedstock produces only a small range of products as a rule, but destructive distillation of many organic materials commonly produces very many compounds, often hundreds, although not all products of any particular process are of commercial importance. The distillates are generally of lower molecular weight. Some fractions, however, polymerise or condense small molecules into larger molecules, including heat-stable tarry substances and chars. Cracking feedstocks into liquid and volatile compounds, and polymerising, or the forming of chars and solids, may both occur in the same process, and any class of the products might be of commercial interest. Currently, the major industrial application of destructive distillation is to coal. Historically, the process of destructive distillation and other forms of pyrolysis led to the discovery of many chemical compounds or the elucidation of their structures before contemporary organic chemists had developed the processes to synthesise or specifically investigate the parent molecules. It was especially in the early days that investigation of the products of destructive distillation, like those of other destructive processes, played a part in enabling chemists to deduce the chemical nature of many natural materials. Well known examples include the deduction of the structures of pyranoses and furanoses. History In his encyclopedic work Natural History (Naturalis Historia), the Roman naturalist and author Pliny the Elder (23/24–79 CE) describes how, in the destructive distillation of pine wood, two liquid fractions are produced: a lighter (aromatic oils) and a heavier (pitch). The lighter fraction is released in the form of gases, which are condensed and collected. Process The process of pyrolysis can be conducted in a distillation apparatus (retort) to form the volatile products for collection. The mass of the product will represent only a part of the mass of the feedstock, because much of the material remains as char, ash, and non-volatile tars. In contrast, combustion consumes most of the organic matter, and the net weight of the products amounts to roughly the same mass as the fuel and oxidant consumed. Destructive distillation and related processes are in effect the modern industrial descendants of traditional charcoal burning crafts. As such they are of industrial significance in many regions, such as Scandinavia. The modern processes are sophisticated and require careful engineering to produce the most valuable possible products from the available feedstocks. Applications Destructive distillation of wood produces methanol and acetic acid, together with a solid residue of charcoal. Destructive distillation of a tonne of coal can produce 700 kg of coke, 100 liters of ammonia liquor, 50 liters of coal tar and 400 m3 of coal gas.
Destructive distillation is an increasingly promising method for recycling monomers derived from waste polymers. Destructive distillation of natural rubber resulted in the discovery of isoprene, which led to the creation of synthetic rubbers such as neoprene.
Physical sciences
Other reactions
Chemistry
1502780
https://en.wikipedia.org/wiki/Dinocephalia
Dinocephalia
Dinocephalians (terrible heads) are a clade of large-bodied early therapsids that flourished in the Early and Middle Permian between 279.5 and 260 million years ago (Ma), but became extinct during the Capitanian mass extinction event. Dinocephalians included herbivorous, carnivorous, and omnivorous forms. Many species had thickened skulls with many knobs and bony projections. Dinocephalians were the first non-mammalian therapsids to be scientifically described and their fossils are known from Russia, China, Brazil, South Africa, Zimbabwe, and Tanzania. Description Apart from the biarmosuchians, the dinocephalians are the least advanced therapsids, although still uniquely specialised in their own way. They retain a number of primitive characteristics (e.g. no secondary palate, small dentary) shared with their pelycosaur ancestors, although they are also more advanced in possessing therapsid adaptations like the expansion of the ilium and more erect limbs. They include carnivorous, herbivorous, and omnivorous forms. Some, like Keratocephalus, Moschops, Struthiocephalus and Jonkeria were semiaquatic, others, like Anteosaurus, were more terrestrial. Dinocephalians were among the largest animals of the Permian period; only the biggest caseids and pareiasaurs reaching them in size. Size Dinocephalians were generally large. The biggest herbivores (Tapinocephalus) and omnivores (Titanosuchus) may have weighed up to , and were some long, while the largest carnivores (such as Titanophoneus and Anteosaurus) were at least as long, with heavy skulls long, and overall masses of around a half-tonne. Skull All dinocephalians are distinguished by the interlocking incisor (front) teeth. Correlated features are the distinctly downturned facial region, a deep temporal region, and forwardly rotated suspensorium. Shearing contact between the upper and lower teeth (allowing food to be more easily sliced into small bits for digestion) is achieved through keeping a fixed quadrate and a hinge-like movement at the jaw articulation. The lower teeth are inclined forward, and occlusion is achieved by the interlocking of the incisors. The later dinocephalians improved on this system by developing heels on the lingual sides of the incisor teeth that met against one another to form a crushing surface when the jaws were shut. Most dinocephalians also developed pachyostosis of the bones in the skull, which seems to have been an adaptation for intra-specific behaviour (head-butting), perhaps for territory or a mate. In some types, such as Estemmenosuchus and Styracocephalus, there are also horn-like structures, which evolved independently in each case. Evolutionary history The dinocephalians are an ancient group and their ancestry is not clear. It is assumed that they must have evolved during the earlier part of the Roadian, or possibly even the Kungurian epoch, but no trace has been found. These animals radiated at the expense of the dying pelycosaurs, who dominated during the early part of the Permian and may have even gone extinct due to competition with therapsids, especially the short-lived but most dominant dinocephalians. Even the earliest members, the estemmenosuchids and early brithopodids of the Russian Ocher fauna, were already a diverse group of herbivores and carnivores. During the Wordian and early Capitanian, advanced dinocephalians radiated into a large number of herbivorous forms, representing a diverse megafauna. This is well known from the Tapinocephalus Assemblage Zone of the Southern African Karoo. 
At the height of their diversity (middle or late Capitanian age) all the dinocephalians suddenly died out, during the Capitanian mass extinction event. The reason for their extinction is not clear, although disease, sudden climatic change, or other factors of environmental stress may have brought about their end. They were replaced by much smaller therapsids: herbivorous dicynodonts and carnivorous biarmosuchians, gorgonopsians and therocephalians. Taxonomy
Class Synapsida
  Order Therapsida
    Suborder Dinocephalia
      ?Driveria
      ?Mastersonia
      Family Estemmenosuchidae
        Estemmenosuchus
        Molybdopygus
        ?Parabradysaurus
      ?Family Phreatosuchidae
        Phreatosaurus
        Phreatosuchus
      ?Family Phthinosuchidae
        Phthinosuchus
        ?Phthinosaurus
      Family Rhopalodontidae
        ?Phthinosaurus
        Rhopalodon
      Clade Anteosauria
        Family Anteosauridae
        Family Brithopodidae
        Family Deuterosauridae
      Clade Tapinocephalia
        ?Dimacrodon
        ?Driveria
        ?Mastersonia
        Family Styracocephalidae
        Family Tapinocephalidae
        Family Titanosuchidae
Biology and health sciences
Proto-mammals
Animals