1503750
https://en.wikipedia.org/wiki/Rolling%20resistance
Rolling resistance
Rolling resistance, sometimes called rolling friction or rolling drag, is the force resisting the motion when a body (such as a ball, tire, or wheel) rolls on a surface. It is mainly caused by non-elastic effects; that is, not all the energy needed for deformation (or movement) of the wheel, roadbed, etc., is recovered when the pressure is removed. Two forms of this are hysteresis losses (see below), and permanent (plastic) deformation of the object or the surface (e.g. soil). Note that the slippage between the wheel and the surface also results in energy dissipation. Although some researchers have included this term in rolling resistance, some suggest that this dissipation term should be treated separately from rolling resistance because it is due to the applied torque to the wheel and the resultant slip between the wheel and ground, which is called slip loss or slip resistance. In addition, only the so-called slip resistance involves friction, therefore the name "rolling friction" is to an extent a misnomer. Analogous with sliding friction, rolling resistance is often expressed as a coefficient times the normal force. This coefficient of rolling resistance is generally much smaller than the coefficient of sliding friction. Any coasting wheeled vehicle will gradually slow down due to rolling resistance including that of the bearings, but a train car with steel wheels running on steel rails will roll farther than a bus of the same mass with rubber tires running on tarmac/asphalt. Factors that contribute to rolling resistance are the (amount of) deformation of the wheels, the deformation of the roadbed surface, and movement below the surface. Additional contributing factors include wheel diameter, load on wheel, surface adhesion, sliding, and relative micro-sliding between the surfaces of contact. The losses due to hysteresis also depend strongly on the material properties of the wheel or tire and the surface. For example, a rubber tire will have higher rolling resistance on a paved road than a steel railroad wheel on a steel rail. Also, sand on the ground will give more rolling resistance than concrete. Soil rolling resistance factor is not dependent on speed. Primary cause The primary cause of pneumatic tire rolling resistance is hysteresis: A characteristic of a deformable material such that the energy of deformation is greater than the energy of recovery. The rubber compound in a tire exhibits hysteresis. As the tire rotates under the weight of the vehicle, it experiences repeated cycles of deformation and recovery, and it dissipates the hysteresis energy loss as heat. Hysteresis is the main cause of energy loss associated with rolling resistance and is attributed to the viscoelastic characteristics of the rubber. — National Academy of Sciences This main principle is illustrated in the figure of the rolling cylinders. If two equal cylinders are pressed together then the contact surface is flat. In the absence of surface friction, contact stresses are normal (i.e. perpendicular) to the contact surface. Consider a particle that enters the contact area at the right side, travels through the contact patch and leaves at the left side. Initially its vertical deformation is increasing, which is resisted by the hysteresis effect. Therefore, an additional pressure is generated to avoid interpenetration of the two surfaces. Later its vertical deformation is decreasing. This is again resisted by the hysteresis effect. 
In this case this decreases the pressure that is needed to keep the two bodies separate. The resulting pressure distribution is asymmetrical and is shifted to the right. The line of action of the (aggregate) vertical force no longer passes through the centers of the cylinders. This means that a moment occurs that tends to retard the rolling motion. Materials that have a large hysteresis effect, such as rubber, which bounce back slowly, exhibit more rolling resistance than materials with a small hysteresis effect that bounce back more quickly and more completely, such as steel or silica. Low rolling resistance tires typically incorporate silica in place of carbon black in their tread compounds to reduce low-frequency hysteresis without compromising traction. Note that railroads also have hysteresis in the roadbed structure. Definitions In the broad sense, specific "rolling resistance" (for vehicles) is the force per unit vehicle weight required to move the vehicle on level ground at a constant slow speed where aerodynamic drag (air resistance) is insignificant and also where there are no traction (motor) forces or brakes applied. In other words, the vehicle would be coasting if it were not for the force to maintain constant speed. This broad sense includes wheel bearing resistance, the energy dissipated by vibration and oscillation of both the roadbed and the vehicle, and sliding of the wheel on the roadbed surface (pavement or a rail). But there is an even broader sense that would include energy wasted by wheel slippage due to the torque applied from the engine. This includes the increased power required due to the increased velocity of the wheels where the tangential velocity of the driving wheel(s) becomes greater than the vehicle speed due to slippage. Since power is equal to force times velocity and the wheel velocity has increased, the power required has increased accordingly. The pure "rolling resistance" for a train is that which happens due to deformation and possible minor sliding at the wheel-road contact. For a rubber tire, an analogous energy loss happens over the entire tire, but it is still called "rolling resistance". In the broad sense, "rolling resistance" includes wheel bearing resistance, energy loss by shaking both the roadbed (and the earth underneath) and the vehicle itself, and by sliding of the wheel, road/rail contact. Railroad textbooks seem to cover all these resistance forces but do not call their sum "rolling resistance" (broad sense) as is done in this article. They just sum up all the resistance forces (including aerodynamic drag) and call the sum basic train resistance (or the like). Since railroad rolling resistance in the broad sense may be a few times larger than just the pure rolling resistance reported values may be in serious conflict since they may be based on different definitions of "rolling resistance". The train's engines must, of course, provide the energy to overcome this broad-sense rolling resistance. For tires, rolling resistance is defined as the energy consumed by a tire per unit distance covered. It is also called rolling friction or rolling drag. It is one of the forces that act to oppose the motion of a driver. The main reason for this is that when the tires are in motion and touch the surface, the surface changes shape and causes deformation of the tire. For highway motor vehicles, there is some energy dissipated in shaking the roadway (and the earth beneath it), the shaking of the vehicle itself, and the sliding of the tires. 
But, other than the additional power required due to torque and wheel bearing friction, non-pure rolling resistance doesn't seem to have been investigated, possibly because the "pure" rolling resistance of a rubber tire is several times higher than the neglected resistances. Rolling resistance coefficient The "rolling resistance coefficient" is defined by the equation F = Crr × N, where F is the rolling resistance force (shown as Frr in figure 1), Crr is the dimensionless rolling resistance coefficient or coefficient of rolling friction (CRF), and N is the normal force, the force perpendicular to the surface on which the wheel is rolling. Crr is the force needed to push (or tow) a wheeled vehicle forward (at constant speed on a level surface, or zero grade, with zero air resistance) per unit force of weight. It is assumed that all wheels are the same and bear identical weight. Thus Crr = 0.01 means that it would only take 0.01 pounds to tow a vehicle weighing one pound. For a 1000-pound vehicle, it would take 1000 times more tow force, i.e. 10 pounds. One could say that Crr is in lb(tow-force)/lb(vehicle weight). Since this lb/lb is force divided by force, Crr is dimensionless. Multiply it by 100 and you get the percent (%) of the weight of the vehicle required to maintain slow steady speed. Crr is often multiplied by 1000 to get the parts per thousand, which is the same as kilograms (kg force) per metric ton (tonne = 1000 kg), which is the same as pounds of resistance per 1000 pounds of load, or newtons per kilonewton, etc. For the US railroads, lb per US ton (2000 lb) has traditionally been used; this is just 2000 × Crr. Thus, they are all just measures of resistance per unit vehicle weight. While they are all "specific resistances", sometimes they are just called "resistance" although they are really a coefficient (ratio) or a multiple thereof. If using pounds or kilograms as force units, mass is equal to weight (in Earth's gravity a kilogram of mass weighs a kilogram and exerts a kilogram of force), so one could claim that Crr is also the force per unit mass in such units. The SI system would use N/tonne (N/T, N/t), which is 1000 g × Crr and is force per unit mass, where g is the acceleration of gravity in SI units (metres per second squared). The above shows resistance proportional to N but does not explicitly show any variation with speed, loads, torque, surface roughness, diameter, tire inflation/wear, etc., because Crr itself varies with those factors. It might seem from the above definition of Crr that the rolling resistance is directly proportional to vehicle weight, but it is not. Measurement There are at least two popular models for calculating rolling resistance. "Rolling resistance coefficient (RRC). The value of the rolling resistance force divided by the wheel load. The Society of Automotive Engineers (SAE) has developed test practices to measure the RRC of tires. These tests (SAE J1269 and SAE J2452) are usually performed on new tires. When measured by using these standard test practices, most new passenger tires have reported RRCs ranging from 0.007 to 0.014." In the case of bicycle tires, values of 0.0025 to 0.005 are achieved. These coefficients are measured on rollers, with power meters on road surfaces, or with coast-down tests. In the latter two cases, the effect of air resistance must be subtracted or the tests performed at very low speeds.
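The arithmetic behind the coefficient definition above can be condensed into a short script. This is a minimal illustrative sketch, not taken from the article; the function names and the sample Crr value of 0.01 are assumptions.

```python
# Minimal sketch of the rolling-resistance-coefficient arithmetic described above.
# F = Crr * N is the tow force needed at constant slow speed on level ground.

G = 9.81  # standard gravity, m/s^2


def rolling_resistance_force(crr, normal_force):
    """F = Crr * N; result is in the same force units as the normal force."""
    return crr * normal_force


def crr_as_per_weight_measures(crr):
    """Express a dimensionless Crr in the per-unit-weight measures mentioned in the text."""
    return {
        "percent of vehicle weight": 100 * crr,
        "kg-force per tonne (parts per thousand)": 1000 * crr,
        "lb per US ton (2000 lb)": 2000 * crr,
        "N per tonne (= 1000 * g * Crr)": 1000 * G * crr,
    }


print(rolling_resistance_force(crr=0.01, normal_force=1000.0))  # 10.0 lb of tow force for a 1000 lb vehicle
print(crr_as_per_weight_measures(0.01))  # 1.0 %, 10.0 kg/t, 20.0 lb/ton, ~98.1 N/t
```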
The coefficient of rolling resistance b, which has the dimension of length, is approximately (due to the small-angle approximation) equal to the value of the rolling resistance force times the radius of the wheel divided by the wheel load. ISO 18164:2005 is used to test rolling resistance in Europe. The results of these tests can be hard for the general public to obtain as manufacturers prefer to publicize "comfort" and "performance". Physical formulae The coefficient of rolling resistance for a slow rigid wheel on a perfectly elastic surface, not adjusted for velocity, can be calculated by Crr = sqrt(z/d), where z is the sinkage depth and d is the diameter of the rigid wheel. The empirical formula for Crr for cast iron mine car wheels on steel rails is Crr = 0.0048 (18/D)^(1/2) (100/W)^(1/4), where D is the wheel diameter in inches and W is the load on the wheel in pounds-force. As an alternative to using Crr one can use b, a different rolling resistance coefficient or coefficient of rolling friction with dimension of length. It is defined by the formula F = N × b / r, where F is the rolling resistance force (shown in figure 1), r is the wheel radius, b is the rolling resistance coefficient or coefficient of rolling friction with dimension of length, and N is the normal force (equal to W, not R, as shown in figure 1). The above equation, where resistance is inversely proportional to radius r, seems to be based on the discredited "Coulomb's law" (neither Coulomb's inverse square law nor Coulomb's law of friction). See dependence on diameter. Equating this equation with the force per the rolling resistance coefficient, Crr × N, and solving for b, gives b = Crr × r. Therefore, if a source gives a dimensionless rolling resistance coefficient (Crr), it can be converted to b, having units of length, by multiplying Crr by wheel radius r. Rolling resistance coefficient examples Table of rolling resistance coefficient examples: For example, in Earth gravity, a car of 1000 kg on asphalt will need a force of around 100 newtons for rolling (1000 kg × 9.81 m/s2 × 0.01 = 98.1 N). Dependence on diameter Stagecoaches and railroads According to Dupuit (1837), rolling resistance (of wheeled carriages with wooden wheels with iron tires) is approximately inversely proportional to the square root of wheel diameter. This rule has been experimentally verified for cast iron wheels (8″ - 24″ diameter) on steel rail and for 19th century carriage wheels. But there are other tests on carriage wheels that do not agree. Theory of a cylinder rolling on an elastic roadway also gives this same rule. These contradict earlier (1785) tests by Coulomb of rolling wooden cylinders, in which Coulomb reported that rolling resistance was inversely proportional to the diameter of the wheel (known as "Coulomb's law"). This disputed (or wrongly applied) "Coulomb's law" is still found in handbooks, however. Pneumatic tires For pneumatic tires on hard pavement, it is reported that the effect of diameter on rolling resistance is negligible (within a practical range of diameters). Dependence on applied torque The driving torque T to overcome rolling resistance Frr and maintain steady speed on level ground (with no air resistance) can be calculated by T = (Vs / Ω) × Frr, where Vs is the linear speed of the body (at the axle) and Ω its rotational speed. It is noteworthy that Vs / Ω is usually not equal to the radius of the rolling body as a result of wheel slip. The slip between wheel and ground inevitably occurs whenever a driving or braking torque is applied to the wheel. Consequently, the linear speed of the vehicle differs from the wheel's circumferential speed.
It is notable that slip does not occur in non-driven (freely rolling) wheels, which are not subjected to driving torque, except under braking. Therefore, rolling resistance, namely hysteresis loss, is the main source of energy dissipation in non-driven wheels or axles, whereas in the drive wheels and axles slip resistance, namely the loss due to wheel slip, plays a role as well as rolling resistance. The significance of rolling or slip resistance is largely dependent on the tractive force, coefficient of friction, normal load, etc. All wheels "Applied torque" may either be driving torque applied by a motor (often through a transmission) or a braking torque applied by brakes (including regenerative braking). Such torques result in energy dissipation above that due to the basic rolling resistance of a freely rolling wheel (i.e. the additional slip resistance). This additional loss is in part due to the fact that there is some slipping of the wheel, and for pneumatic tires, there is more flexing of the sidewalls due to the torque. Slip is defined such that a 2% slip means that the circumferential speed of the driving wheel exceeds the speed of the vehicle by 2%. A small percentage slip can result in a slip resistance which is much larger than the basic rolling resistance. For example, for pneumatic tires, a 5% slip can translate into a 200% increase in rolling resistance. This is partly because the tractive force applied during this slip is many times greater than the rolling resistance force and thus much more power per unit velocity is being applied (recall power = force × velocity, so that power per unit of velocity is just force). So just a small percentage increase in circumferential velocity due to slip can translate into a loss of traction power which may even exceed the power loss due to basic (ordinary) rolling resistance. For railroads, this effect may be even more pronounced due to the low rolling resistance of steel wheels. It is shown that for a passenger car, when the tractive force is about 40% of the maximum traction, the slip resistance is almost equal to the basic rolling resistance (hysteresis loss). But in the case of a tractive force equal to 70% of the maximum traction, slip resistance becomes 10 times larger than the basic rolling resistance. Railroad steel wheels In order to apply any traction to the wheels, some slippage of the wheel is required. For trains climbing up a grade, this slip is normally 1.5% to 2.5%. Slip (also known as creep) is normally roughly directly proportional to tractive effort. An exception is if the tractive effort is so high that the wheel is close to substantial slipping (more than just a few percent as discussed above); then slip rapidly increases with tractive effort and is no longer linear. With a little higher applied tractive effort the wheel spins out of control and the adhesion drops, resulting in the wheel spinning even faster. This is the type of slipping that is observable by eye—the slip of, say, 2% for traction is only observed by instruments. Such rapid slip may result in excessive wear or damage. Pneumatic tires Rolling resistance greatly increases with applied torque. At high torques, which apply a tangential force to the road of about half the weight of the vehicle, the rolling resistance may triple (a 200% increase). This is in part due to a slip of about 5%. The rolling resistance increase with applied torque is not linear, but increases at a faster rate as the torque becomes higher.
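The formulas quoted in the preceding passages (Crr = sqrt(z/d), b = Crr × r, and T = (Vs/Ω) × Frr) can be checked with a short script. This is a minimal sketch under those stated relations; the numerical inputs are assumptions chosen for illustration, not values from the article.

```python
import math


def crr_from_sinkage(z, d):
    """Crr ~ sqrt(z/d) for a slow rigid wheel on a perfectly elastic surface
    (z: sinkage depth, d: wheel diameter, same length units)."""
    return math.sqrt(z / d)


def b_from_crr(crr, wheel_radius):
    """Length-type coefficient b = Crr * r, so that F = N * b / r reproduces F = Crr * N."""
    return crr * wheel_radius


def driving_torque(f_rr, v_s, omega):
    """Torque to overcome rolling resistance at steady speed: T = (Vs / omega) * Frr."""
    return (v_s / omega) * f_rr


# Illustrative (assumed) numbers:
print(crr_from_sinkage(z=0.002, d=0.70))                 # ~0.053 for 2 mm sinkage on a 0.70 m wheel
print(b_from_crr(crr=0.01, wheel_radius=0.35))           # 0.0035 m, i.e. 3.5 mm
print(driving_torque(f_rr=98.1, v_s=20.0, omega=60.0))   # ~32.7 N*m; note Vs/omega < r when the driving wheel slips
```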
Dependence on wheel load Railroad steel wheels The rolling resistance coefficient, Crr, significantly decreases as the weight of the rail car per wheel increases. For example, an empty freight car had about twice the Crr as a loaded car (Crr=0.002 vs. Crr=0.001). This same "economy of scale" shows up in testing of mine rail cars. The theoretical Crr for a rigid wheel rolling on an elastic roadbed shows Crr inversely proportional to the square root of the load. If Crr is itself dependent on wheel load per an inverse square-root rule, then for an increase in load of 2% only a 1% increase in rolling resistance occurs. Pneumatic tires For pneumatic tires, the direction of change in Crr (rolling resistance coefficient) depends on whether or not tire inflation is increased with increasing load. It is reported that, if inflation pressure is increased with load according to an (undefined) "schedule", then a 20% increase in load decreases Crr by 3%. But, if the inflation pressure is not changed, then a 20% increase in load results in a 4% increase in Crr. Of course, this will increase the rolling resistance by 20% due to the increase in load plus 1.2 x 4% due to the increase in Crr resulting in a 24.8% increase in rolling resistance. Dependence on curvature of roadway General When a vehicle (motor vehicle or railroad train) goes around a curve, rolling resistance usually increases. If the curve is not banked so as to exactly counter the centrifugal force with an equal and opposing centripetal force due to the banking, then there will be a net unbalanced sideways force on the vehicle. This will result in increased rolling resistance. Banking is also known as "superelevation" or "cant" (not to be confused with rail cant of a rail). For railroads, this is called curve resistance but for roads it has (at least once) been called rolling resistance due to cornering. Sound Rolling friction generates sound (vibrational) energy, as mechanical energy is converted to this form of energy due to the friction. One of the most common examples of rolling friction is the movement of motor vehicle tires on a roadway, a process which generates sound as a by-product. The sound generated by automobile and truck tires as they roll (especially noticeable at highway speeds) is mostly due to the percussion of the tire treads, and compression (and subsequent decompression) of air temporarily captured within the treads. Factors that contribute in tires Several factors affect the magnitude of rolling resistance a tire generates: As mentioned in the introduction: wheel radius, forward speed, surface adhesion, and relative micro-sliding. Material - different fillers and polymers in tire composition can improve traction while reducing hysteresis. The replacement of some carbon black with higher-priced silica–silane is one common way of reducing rolling resistance. The use of exotic materials including nano-clay has been shown to reduce rolling resistance in high performance rubber tires. Solvents may also be used to swell solid tires, decreasing the rolling resistance. Dimensions - rolling resistance in tires is related to the flex of sidewalls and the contact area of the tire For example, at the same pressure, wider bicycle tires flex less in the sidewalls as they roll and thus have lower rolling resistance (although higher air resistance). Extent of inflation - Lower pressure in tires results in more flexing of the sidewalls and higher rolling resistance. 
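The 24.8% figure quoted above is simply the two effects compounded; a one-line check (illustrative only):

```python
# 20% more load compounded with a 4% rise in Crr (inflation pressure unchanged):
increase = 1.20 * 1.04 - 1.0
print(f"{increase:.1%}")  # -> 24.8%
```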
This energy conversion in the sidewalls increases resistance and can also lead to overheating and may have played a part in the infamous Ford Explorer rollover accidents. Over inflating tires (such a bicycle tires) may not lower the overall rolling resistance as the tire may skip and hop over the road surface. Traction is sacrificed, and overall rolling friction may not be reduced as the wheel rotational speed changes and slippage increases. Sidewall deflection is not a direct measurement of rolling friction. A high quality tire with a high quality (and supple) casing will allow for more flex per energy loss than a cheap tire with a stiff sidewall. Again, on a bicycle, a quality tire with a supple casing will still roll easier than a cheap tire with a stiff casing. Similarly, as noted by Goodyear truck tires, a tire with a "fuel saving" casing will benefit the fuel economy through many tread lives (i.e. retreading), while a tire with a "fuel saving" tread design will only benefit until the tread wears down. In tires, tread thickness and shape has much to do with rolling resistance. The thicker and more contoured the tread, the higher the rolling resistance Thus, the "fastest" bicycle tires have very little tread and heavy duty trucks get the best fuel economy as the tire tread wears out. Diameter effects seem to be negligible, provided the pavement is hard and the range of diameters is limited. See dependence on diameter. Virtually all world speed records have been set on relatively narrow wheels, probably because of their aerodynamic advantage at high speed, which is much less important at normal speeds. Temperature: with both solid and pneumatic tires, rolling resistance has been found to decrease as temperature increases (within a range of temperatures: i.e. there is an upper limit to this effect) For a rise in temperature from 30 °C to 70 °C the rolling resistance decreased by 20-25%. Racers heat their tires before racing, but this is primarily used to increase tire friction rather than to decrease rolling resistance. Railroads: Components of rolling resistance In a broad sense rolling resistance can be defined as the sum of components): Wheel bearing torque losses. Pure rolling resistance. Sliding of the wheel on the rail. Loss of energy to the roadbed (and earth). Loss of energy to oscillation of railway rolling stock. Wheel bearing torque losses can be measured as a rolling resistance at the wheel rim, Crr. Railroads normally use roller bearings which are either cylindrical (Russia) or tapered (United States). The specific rolling resistance in bearings varies with both wheel loading and speed. Wheel bearing rolling resistance is lowest with high axle loads and intermediate speeds of 60–80 km/h with a Crr of 0.00013 (axle load of 21 tonnes). For empty freight cars with axle loads of 5.5 tonnes, Crr goes up to 0.00020 at 60 km/h but at a low speed of 20 km/h it increases to 0.00024 and at a high speed (for freight trains) of 120 km/h it is 0.00028. The Crr obtained above is added to the Crr of the other components to obtain the total Crr for the wheels. Comparing rolling resistance of highway vehicles and trains The rolling resistance of steel wheels on steel rail of a train is far less than that of the rubber tires wheels of an automobile or truck. The weight of trains varies greatly; in some cases they may be much heavier per passenger or per net ton of freight than an automobile or truck, but in other cases they may be much lighter. 
As an example of a very heavy passenger train, in 1975, Amtrak passenger trains weighed a little over 7 tonnes per passenger, which is much heavier than an average of a little over one ton per passenger for an automobile. This means that for an Amtrak passenger train in 1975, much of the energy savings of the lower rolling resistance was lost to its greater weight. An example of a very light high-speed passenger train is the N700 Series Shinkansen, which weighs 715 tonnes and carries 1323 passengers, resulting in a per-passenger weight of about half a tonne. This lighter weight per passenger, combined with the lower rolling resistance of steel wheels on steel rail means that an N700 Shinkansen is much more energy efficient than a typical automobile. In the case of freight, CSX ran an advertisement campaign in 2013 claiming that their freight trains move "a ton of freight 436 miles on a gallon of fuel", whereas some sources claim trucks move a ton of freight about 130 miles per gallon of fuel, indicating trains are more efficient overall.
Physical sciences
Classical mechanics
Physics
1505215
https://en.wikipedia.org/wiki/Tip%20of%20the%20red-giant%20branch
Tip of the red-giant branch
Tip of the red-giant branch (TRGB) is a primary distance indicator used in astronomy. It uses the luminosity of the brightest red-giant-branch stars in a galaxy as a standard candle to gauge the distance to that galaxy. It has been used in conjunction with observations from the Hubble Space Telescope to determine the relative motions of the Local Cluster of galaxies within the Local Supercluster. Ground-based, 8-meter-class telescopes like the VLT are also able to measure the TRGB distance within reasonable observation times in the local universe. Method The Hertzsprung–Russell diagram (HR diagram) is a plot of stellar luminosity versus surface temperature for a population of stars. During the core hydrogen burning phase of a Sun-like star's lifetime, it will appear on the HR diagram at a position along a diagonal band called the main sequence. When the hydrogen at the core is exhausted, energy will continue to be generated by hydrogen fusion in a shell around the core. The center of the star will accumulate the helium "ash" from this fusion and the star will migrate along an evolutionary branch of the HR diagram that leads toward the upper right. That is, the surface temperature will decrease and the total energy output (luminosity) of the star will increase as the surface area increases. At a certain point, the helium at the core of the star will reach a pressure and temperature where it can begin to undergo nuclear fusion through the triple-alpha process. For a star with less than 1.8 times the mass of the Sun, this will occur in a process called the helium flash. The evolutionary track of the star will then carry it toward the left of the HR diagram as the surface temperature increases under the new equilibrium. The result is a sharp discontinuity in the evolutionary track of the star on the HR diagram. This discontinuity is called the tip of the red-giant branch. When distant stars at the TRGB are measured in the I-band (in the infrared), their luminosity is somewhat insensitive to their composition of elements heavier than helium (metallicity) or their mass; they are a standard candle with an I-band absolute magnitude of –4.0±0.1. This makes the technique especially useful as a distance indicator. The TRGB indicator uses stars in the old stellar populations (Population II).
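Because the TRGB acts as a standard candle with an I-band absolute magnitude near −4.0, the distance follows from the ordinary distance modulus once the apparent I-band magnitude of the tip has been measured. The following is a minimal sketch of that step; the function name and the sample apparent magnitude are illustrative assumptions.

```python
def trgb_distance_pc(apparent_i_mag, absolute_i_mag=-4.0):
    """Distance in parsecs from the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_i_mag - absolute_i_mag + 5.0) / 5.0)


# Example: a TRGB discontinuity detected at apparent I magnitude 20.0
d_pc = trgb_distance_pc(20.0)
print(d_pc / 1e6)  # ~0.63 Mpc
```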
Physical sciences
Basics
Astronomy
1505481
https://en.wikipedia.org/wiki/Rail%20yard
Rail yard
A rail yard, railway yard, railroad yard (US) or simply yard, is a series of tracks in a rail network for storing, sorting, or loading and unloading rail vehicles and locomotives. Yards have many tracks in parallel for keeping rolling stock or unused locomotives stored off the main line, so that they do not obstruct the flow of traffic. Cars or wagons are moved around by specially designed yard switcher locomotives (US) or shunter locomotives (UK), a type of locomotive. Cars or wagons in a yard may be sorted by numerous categories, including railway company, loaded or unloaded, destination, car type, or whether they need repairs. Yards are normally built where there is a need to store rail vehicles while they are not being loaded or unloaded, or are waiting to be assembled into trains. Large yards may have a tower to control operations. Many yards are located at strategic points on a main line. Main-line yards are often composed of an up yard and a down yard, linked to the associated direction of travel. There are different types of yards, and different parts within a yard, depending on how they are built. Freight yards For freight cars, the overall yard layout is typically designed around a principal switching (US term) or shunting (UK) technique: A flat yard has no hump, and relies on locomotives for all car movements. A gravity yard is built on a natural slope and relies less on locomotives; generally locomotives will control a consist being sorted from uphill of the cars about to be sorted. They are decoupled and let to accelerate into the classification equipment lower down. A hump yard has a constructed hill, over which freight cars are shoved by yard locomotives, and then gravity is used to propel the cars to various sorting tracks; Sorting yard basics In the case of all classification or sorting yards, human intelligence plays a primary role in setting a strategy for the switching operations; the fewer times coupling operations need to be made and the less distance traveled, the faster the operation, the better the strategy and the sooner the newly configured consist can be joined to its outbound train.   Switching yards, staging yards, or shunting yards are typically graded to be flat yards, where switch engines manually shuffle and maneuver cars from (a) train arrival tracks, to (b) to consist breakdown track, to (c) an consist assembly track, thence to (d) departure tracks of the yard. A large sub-group of such yards are known as staging yards, which are yards serving an end destination that is also a collection yard starting car groups for departure. These seemingly incompatible tasks are because the operating or road company and its locomotive drops off empties and picks up full cars waiting departure which have been spotted and assembled by local switch engines. The long haul carrier makes the round trip with a minimal turn around time, and the local switch engine transfers empties to the loading yard when the industries output is ready to be shipped. This activity is duplicated in a transfer yard, the difference being in the latter several industrial customers are serviced by the local switcher, which is part of the yard equipment, and the industry pays a cargo transfer fee to the railroad or yard operating company. In the staging yard, the locomotive is most likely operated by industry (refinery, chemical company or coal mine personnel); and ownership of the yard in both cases is a matter of business, and could be any imaginable combination. 
Ownership and operation are quite often a matter of leases and interests. Hump yards and gravity yards are usually highly automated and designed for the efficient break-down, sorting, and recombining of freight into consists, so they are equipped with mechanical retarders (external brakes) and scales that a computer or operator uses along with knowledge of the gradient of the hump to calculate and control the speed of the cars as they roll downhill to their destination tracks. These modern sorting and classification systems are sophisticated enough to allow a first car to roll to a stop near the end of its classification track, and, by slowing the speed of subsequent cars down the hump, shorten the distance for the following series of cars so they can bump and couple gently, without damaging one another. Since overall throughput speed matters, many have small pneumatic, hydraulic or spring-driven braking retarders (below, right) to adjust and slow speed both before and after yard switch points. Along with car tracking and load tracking to destination technologies such as RFID, long trains can be broken down and reconfigured in transfer yards or operations in remarkable time. Nomenclature and components A large freight yard may include the following components: Receiving yard, also called an arrival yard, where freight cars or wagons are detached from their locomotives, inspected for mechanical problems, and sent to a classification or marshalling yard. Switching yards, switchyards, shunting yards or sorting yards—yards where cars are sorted for various destinations and assembled into blocks have different formal names in different cultural traditions: Classification yard (US and by Canadian National Railway in Canada) or Marshalling yard (UK and Canadian Pacific Railway in Canada) Departure yard, where car blocks are assembled into trains. Car repair yard or maintenance yard, for freight cars. Engine house (in some yards, a roundhouse), to fuel and service locomotives. Transfer yard, a yard where consists are dropped off or picked up as a group by through service such as a unit train, but managed locally by local switching service locomotives. Unit tracks may be reserved for unit trains, which carry a block of cars all of the same origin and destination, and so as through traffic do not get sorted in a classification yard. Such consists often stop in a freight yard for other purposes: inspection, engine servicing, being switched into a longer consist, or crew changes. Freight yards may have multiple industries adjacent to them where railroad cars are loaded or unloaded and then stored before they move on to their new destination. Coach yards Coach yards (American English) or stabling yards or carriage sidings (British English) are used for sorting, storing and repairing passenger cars. These yards are located in metropolitan areas near large stations or terminals. An example of a major US coach yard is Sunnyside Yard in New York City, operated by Amtrak. Those that are principally used for storage, such as the West Side Yard in New York, are called "layup yards" or "stabling yards." Coach yards are commonly flat yards because unladen passenger coaches are heavier than unladen freight carriages. In the UK, a stabling point is a place where rail locomotives are parked while awaiting their next turn of duty. A stabling point may be fitted with a fuelling point and other minor maintenance facilities. A good example of this was Newport's Godfrey Road stabling point, which has since been closed. 
Stabling sidings can be just a few roads or large complexes like Feltham Sidings. They are sometimes electrified with a third rail or OLE. An example of a stabling point with third rail would be Feltham marshalling yard which is being made into carriage sidings for the British Rail Class 701 EMU.
Technology
Concepts of ground transport
null
39
https://en.wikipedia.org/wiki/Albedo
Albedo
Albedo ( ; ) is the fraction of sunlight that is diffusely reflected by a body. It is measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation). Surface albedo is defined as the ratio of radiosity Je to the irradiance Ee (flux per unit area) received by a surface. The proportion reflected is not only determined by properties of the surface itself, but also by the spectral and angular distribution of solar radiation reaching the Earth's surface. These factors vary with atmospheric composition, geographic location, and time (see position of the Sun). While directional-hemispherical reflectance factor is calculated for a single angle of incidence (i.e., for a given position of the Sun), albedo is the directional integration of reflectance over all solar angles in a given period. The temporal resolution may range from seconds (as obtained from flux measurements) to daily, monthly, or annual averages. Unless given for a specific wavelength (spectral albedo), albedo refers to the entire spectrum of solar radiation. Due to measurement constraints, it is often given for the spectrum in which most solar energy reaches the surface (between 0.3 and 3 μm). This spectrum includes visible light (0.4–0.7 μm), which explains why surfaces with a low albedo appear dark (e.g., trees absorb most radiation), whereas surfaces with a high albedo appear bright (e.g., snow reflects most radiation). Ice–albedo feedback is a positive feedback climate process where a change in the area of ice caps, glaciers, and sea ice alters the albedo and surface temperature of a planet. Ice is very reflective, therefore it reflects far more solar energy back to space than the other types of land area or open water. Ice–albedo feedback plays an important role in global climate change. Albedo is an important concept in climate science. Terrestrial albedo Any albedo in visible light falls within a range of about 0.9 for fresh snow to about 0.04 for charcoal, one of the darkest substances. Deeply shadowed cavities can achieve an effective albedo approaching the zero of a black body. When seen from a distance, the ocean surface has a low albedo, as do most forests, whereas desert areas have some of the highest albedos among landforms. Most land areas are in an albedo range of 0.1 to 0.4. The average albedo of Earth is about 0.3. This is far higher than for the ocean primarily because of the contribution of clouds. Earth's surface albedo is regularly estimated via Earth observation satellite sensors such as NASA's MODIS instruments on board the Terra and Aqua satellites, and the CERES instrument on the Suomi NPP and JPSS. As the amount of reflected radiation is only measured for a single direction by satellite, not all directions, a mathematical model is used to translate a sample set of satellite reflectance measurements into estimates of directional-hemispherical reflectance and bi-hemispherical reflectance (e.g.,). These calculations are based on the bidirectional reflectance distribution function (BRDF), which describes how the reflectance of a given surface depends on the view angle of the observer and the solar angle. BDRF can facilitate translations of observations of reflectance into albedo. Earth's average surface temperature due to its albedo and the greenhouse effect is currently about . If Earth were frozen entirely (and hence be more reflective), the average temperature of the planet would drop below . 
If only the continental land masses became covered by glaciers, the mean temperature of the planet would drop to about . In contrast, if the entire Earth was covered by water – a so-called ocean planet – the average temperature on the planet would rise to almost . In 2021, scientists reported that Earth dimmed by ~0.5% over two decades (1998–2017) as measured by earthshine using modern photometric techniques. This may have both been co-caused by climate change as well as a substantial increase in global warming. However, the link to climate change has not been explored to date and it is unclear whether or not this represents an ongoing trend. White-sky, black-sky, and blue-sky albedo For land surfaces, it has been shown that the albedo at a particular solar zenith angle θi can be approximated by the proportionate sum of two terms: the directional-hemispherical reflectance at that solar zenith angle, α_bs(θi), sometimes referred to as black-sky albedo, and the bi-hemispherical reflectance, α_ws, sometimes referred to as white-sky albedo. With 1 − D being the proportion of direct radiation from a given solar angle, and D being the proportion of diffuse illumination, the actual albedo α (also called blue-sky albedo) can then be given as: α = (1 − D) α_bs(θi) + D α_ws. This formula is important because it allows the albedo to be calculated for any given illumination conditions from a knowledge of the intrinsic properties of the surface. Changes to albedo due to human activities Human activities (e.g., deforestation, farming, and urbanization) change the albedo of various areas around the globe. Human impacts to "the physical properties of the land surface can perturb the climate by altering the Earth's radiative energy balance" even on a small scale or when undetected by satellites. Urbanization generally decreases albedo (commonly being 0.01–0.02 lower than adjacent croplands), which contributes to global warming. Deliberately increasing albedo in urban areas can mitigate the urban heat island effect. An estimate in 2022 found that on a global scale, "an albedo increase of 0.1 in worldwide urban areas would result in a cooling effect that is equivalent to absorbing ~44 Gt of CO2 emissions." Intentionally enhancing the albedo of the Earth's surface, along with its daytime thermal emittance, has been proposed as a solar radiation management strategy to mitigate energy crises and global warming known as passive daytime radiative cooling (PDRC). Efforts toward widespread implementation of PDRCs may focus on maximizing the albedo of surfaces from very low to high values, so long as a thermal emittance of at least 90% can be achieved. The tens of thousands of hectares of greenhouses in Almería, Spain form a large expanse of whitened plastic roofs. A 2008 study found that this anthropogenic change lowered the local surface area temperature of the high-albedo area, although changes were localized. A follow-up study found that "CO2-eq. emissions associated to changes in surface albedo are a consequence of land transformation" and can reduce surface temperature increases associated with climate change. Examples of terrestrial albedo effects Illumination Albedo is not directly dependent on the illumination because changing the amount of incoming light proportionally changes the amount of reflected light, except in circumstances where a change in illumination induces a change in the Earth's surface at that location (e.g. through melting of reflective ice). However, albedo and illumination both vary by latitude.
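The blue-sky albedo formula given earlier in this section is simply a weighted mean of the two reflectances. Here is a minimal sketch under that reading; the symbols α_bs and α_ws are local notation, and the sample values are illustrative assumptions.

```python
def blue_sky_albedo(black_sky, white_sky, diffuse_fraction):
    """Blue-sky albedo as the proportionate sum of the direct-beam (black-sky)
    and diffuse (white-sky) terms: alpha = (1 - D) * black_sky + D * white_sky."""
    return (1.0 - diffuse_fraction) * black_sky + diffuse_fraction * white_sky


# Example: black-sky albedo 0.20 at the current solar zenith angle,
# white-sky albedo 0.25, 30% diffuse illumination:
print(blue_sky_albedo(0.20, 0.25, 0.30))  # -> 0.215
```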
Albedo is highest near the poles and lowest in the subtropics, with a local maximum in the tropics. Insolation effects The intensity of albedo temperature effects depends on the amount of albedo and the level of local insolation (solar irradiance); high albedo areas in the Arctic and Antarctic regions are cold due to low insolation, whereas areas such as the Sahara Desert, which also have a relatively high albedo, will be hotter due to high insolation. Tropical and sub-tropical rainforest areas have low albedo, and are much hotter than their temperate forest counterparts, which have lower insolation. Because insolation plays such a big role in the heating and cooling effects of albedo, high insolation areas like the tropics will tend to show a more pronounced fluctuation in local temperature when local albedo changes. Arctic regions notably release more heat back into space than what they absorb, effectively cooling the Earth. This has been a concern since arctic ice and snow has been melting at higher rates due to higher temperatures, creating regions in the arctic that are notably darker (being water or ground which is darker color) and reflects less heat back into space. This feedback loop results in a reduced albedo effect. Climate and weather Albedo affects climate by determining how much radiation a planet absorbs. The uneven heating of Earth from albedo variations between land, ice, or ocean surfaces can drive weather. The response of the climate system to an initial forcing is modified by feedbacks: increased by "self-reinforcing" or "positive" feedbacks and reduced by "balancing" or "negative" feedbacks. The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and the net effect of clouds. Albedo–temperature feedback When an area's albedo changes due to snowfall, a snow–temperature feedback results. A layer of snowfall increases local albedo, reflecting away sunlight, leading to local cooling. In principle, if no outside temperature change affects this area (e.g., a warm air mass), the raised albedo and lower temperature would maintain the current snow and invite further snowfall, deepening the snow–temperature feedback. However, because local weather is dynamic due to the change of seasons, eventually warm air masses and a more direct angle of sunlight (higher insolation) cause melting. When the melted area reveals surfaces with lower albedo, such as grass, soil, or ocean, the effect is reversed: the darkening surface lowers albedo, increasing local temperatures, which induces more melting and thus reducing the albedo further, resulting in still more heating. Snow Snow albedo is highly variable, ranging from as high as 0.9 for freshly fallen snow, to about 0.4 for melting snow, and as low as 0.2 for dirty snow. Over Antarctica, snow albedo averages a little more than 0.8. If a marginally snow-covered area warms, snow tends to melt, lowering the albedo, and hence leading to more snowmelt because more radiation is being absorbed by the snowpack (referred to as the ice–albedo positive feedback). In Switzerland, the citizens have been protecting their glaciers with large white tarpaulins to slow down the ice melt. These large white sheets are helping to reject the rays from the sun and defecting the heat. Although this method is very expensive, it has been shown to work, reducing snow and ice melt by 60%. Just as fresh snow has a higher albedo than does dirty snow, the albedo of snow-covered sea ice is far higher than that of sea water. 
Sea water absorbs more solar radiation than would the same surface covered with reflective snow. When sea ice melts, either due to a rise in sea temperature or in response to increased solar radiation from above, the snow-covered surface is reduced, and more surface of sea water is exposed, so the rate of energy absorption increases. The extra absorbed energy heats the sea water, which in turn increases the rate at which sea ice melts. As with the preceding example of snowmelt, the process of melting of sea ice is thus another example of a positive feedback. Both positive feedback loops have long been recognized as important for global warming. Cryoconite, powdery windblown dust containing soot, sometimes reduces albedo on glaciers and ice sheets. The dynamical nature of albedo in response to positive feedback, together with the effects of small errors in the measurement of albedo, can lead to large errors in energy estimates. Because of this, in order to reduce the error of energy estimates, it is important to measure the albedo of snow-covered areas through remote sensing techniques rather than applying a single value for albedo over broad regions. Small-scale effects Albedo works on a smaller scale, too. In sunlight, dark clothes absorb more heat and light-coloured clothes reflect it better, thus allowing some control over body temperature by exploiting the albedo effect of the colour of external clothing. Solar photovoltaic effects Albedo can affect the electrical energy output of solar photovoltaic devices. For example, the effects of a spectrally responsive albedo are illustrated by the differences between the spectrally weighted albedo of solar photovoltaic technology based on hydrogenated amorphous silicon (a-Si:H) and crystalline silicon (c-Si)-based compared to traditional spectral-integrated albedo predictions. Research showed impacts of over 10% for vertically (90°) mounted systems, but such effects were substantially lower for systems with lower surface tilts. Spectral albedo strongly affects the performance of bifacial solar cells where rear surface performance gains of over 20% have been observed for c-Si cells installed above healthy vegetation. An analysis on the bias due to the specular reflectivity of 22 commonly occurring surface materials (both human-made and natural) provided effective albedo values for simulating the performance of seven photovoltaic materials mounted on three common photovoltaic system topologies: industrial (solar farms), commercial flat rooftops and residential pitched-roof applications. Trees Forests generally have a low albedo because the majority of the ultraviolet and visible spectrum is absorbed through photosynthesis. For this reason, the greater heat absorption by trees could offset some of the carbon benefits of afforestation (or offset the negative climate impacts of deforestation). In other words: The climate change mitigation effect of carbon sequestration by forests is partially counterbalanced in that reforestation can decrease the reflection of sunlight (albedo). In the case of evergreen forests with seasonal snow cover, albedo reduction may be significant enough for deforestation to cause a net cooling effect. Trees also impact climate in extremely complicated ways through evapotranspiration. The water vapor causes cooling on the land surface, causes heating where it condenses, acts as strong greenhouse gas, and can increase albedo when it condenses into clouds. 
Scientists generally treat evapotranspiration as a net cooling impact, and the net climate impact of albedo and evapotranspiration changes from deforestation depends greatly on local climate. Mid-to-high-latitude forests have a much lower albedo during snow seasons than flat ground, thus contributing to warming. Modeling that compares the effects of albedo differences between forests and grasslands suggests that expanding the land area of forests in temperate zones offers only a temporary mitigation benefit. In seasonally snow-covered zones, winter albedos of treeless areas are 10% to 50% higher than nearby forested areas because snow does not cover the trees as readily. Deciduous trees have an albedo value of about 0.15 to 0.18 whereas coniferous trees have a value of about 0.09 to 0.15. Variation in summer albedo across both forest types is associated with maximum rates of photosynthesis because plants with high growth capacity display a greater fraction of their foliage for direct interception of incoming radiation in the upper canopy. The result is that wavelengths of light not used in photosynthesis are more likely to be reflected back to space rather than being absorbed by other surfaces lower in the canopy. Studies by the Hadley Centre have investigated the relative (generally warming) effect of albedo change and (cooling) effect of carbon sequestration on planting forests. They found that new forests in tropical and midlatitude areas tended to cool; new forests in high latitudes (e.g., Siberia) were neutral or perhaps warming. Research in 2023, drawing from 176 flux stations globally, revealed a climate trade-off: increased carbon uptake from afforestation results in reduced albedo. Initially, this reduction may lead to moderate global warming over a span of approximately 20 years, but it is expected to transition into significant cooling thereafter. Water Water reflects light very differently from typical terrestrial materials. The reflectivity of a water surface is calculated using the Fresnel equations. At the scale of the wavelength of light even wavy water is always smooth so the light is reflected in a locally specular manner (not diffusely). The glint of light off water is a commonplace effect of this. At small angles of incident light, waviness results in reduced reflectivity because of the steepness of the reflectivity-vs.-incident-angle curve and a locally increased average incident angle. Although the reflectivity of water is very low at low and medium angles of incident light, it becomes very high at high angles of incident light such as those that occur on the illuminated side of Earth near the terminator (early morning, late afternoon, and near the poles). However, as mentioned above, waviness causes an appreciable reduction. Because light specularly reflected from water does not usually reach the viewer, water is usually considered to have a very low albedo in spite of its high reflectivity at high angles of incident light. Note that white caps on waves look white (and have high albedo) because the water is foamed up, so there are many superimposed bubble surfaces which reflect, adding up their reflectivities. Fresh 'black' ice exhibits Fresnel reflection. Snow on top of this sea ice increases the albedo to 0.9. Clouds Cloud albedo has substantial influence over atmospheric temperatures. Different types of clouds exhibit different reflectivity, theoretically ranging in albedo from a minimum of near 0 to a maximum approaching 0.8. 
"On any given day, about half of Earth is covered by clouds, which reflect more sunlight than land and water. Clouds keep Earth cool by reflecting sunlight, but they can also serve as blankets to trap warmth." Albedo and climate in some areas are affected by artificial clouds, such as those created by the contrails of heavy commercial airliner traffic. A study following the burning of the Kuwaiti oil fields during Iraqi occupation showed that temperatures under the burning oil fires were as much as colder than temperatures several miles away under clear skies. Aerosol effects Aerosols (very fine particles/droplets in the atmosphere) have both direct and indirect effects on Earth's radiative balance. The direct (albedo) effect is generally to cool the planet; the indirect effect (the particles act as cloud condensation nuclei and thereby change cloud properties) is less certain. Black carbon Another albedo-related effect on the climate is from black carbon particles. The size of this effect is difficult to quantify: the Intergovernmental Panel on Climate Change estimates that the global mean radiative forcing for black carbon aerosols from fossil fuels is +0.2 W m−2, with a range +0.1 to +0.4 W m−2. Black carbon is a bigger cause of the melting of the polar ice cap in the Arctic than carbon dioxide due to its effect on the albedo. Astronomical albedo In astronomy, the term albedo can be defined in several different ways, depending upon the application and the wavelength of electromagnetic radiation involved. Optical or visual albedo The albedos of planets, satellites and minor planets such as asteroids can be used to infer much about their properties. The study of albedos, their dependence on wavelength, lighting angle ("phase angle"), and variation in time composes a major part of the astronomical field of photometry. For small and far objects that cannot be resolved by telescopes, much of what we know comes from the study of their albedos. For example, the absolute albedo can indicate the surface ice content of outer Solar System objects, the variation of albedo with phase angle gives information about regolith properties, whereas unusually high radar albedo is indicative of high metal content in asteroids. Enceladus, a moon of Saturn, has one of the highest known optical albedos of any body in the Solar System, with an albedo of 0.99. Another notable high-albedo body is Eris, with an albedo of 0.96. Many small objects in the outer Solar System and asteroid belt have low albedos down to about 0.05. A typical comet nucleus has an albedo of 0.04. Such a dark surface is thought to be indicative of a primitive and heavily space weathered surface containing some organic compounds. The overall albedo of the Moon is measured to be around 0.14, but it is strongly directional and non-Lambertian, displaying also a strong opposition effect. Although such reflectance properties are different from those of any terrestrial terrains, they are typical of the regolith surfaces of airless Solar System bodies. Two common optical albedos that are used in astronomy are the (V-band) geometric albedo (measuring brightness when illumination comes from directly behind the observer) and the Bond albedo (measuring total proportion of electromagnetic energy reflected). Their values can differ significantly, which is a common source of confusion. 
In detailed studies, the directional reflectance properties of astronomical bodies are often expressed in terms of the five Hapke parameters which semi-empirically describe the variation of albedo with phase angle, including a characterization of the opposition effect of regolith surfaces. One of these five parameters is yet another type of albedo called the single-scattering albedo. It is used to define scattering of electromagnetic waves on small particles. It depends on properties of the material (refractive index), the size of the particle, and the wavelength of the incoming radiation. An important relationship between an object's astronomical (geometric) albedo, absolute magnitude and diameter is given by D = (1329 / √p) × 10^(−H/5) km, where p is the astronomical albedo, D is the diameter in kilometers, and H is the absolute magnitude. Radar albedo In planetary radar astronomy, a microwave (or radar) pulse is transmitted toward a planetary target (e.g. Moon, asteroid, etc.) and the echo from the target is measured. In most instances, the transmitted pulse is circularly polarized and the received pulse is measured in the same sense of polarization as the transmitted pulse (SC) and the opposite sense (OC). The echo power is measured in terms of radar cross-section, σOC, σSC, or σT (total power, SC + OC), and is equal to the cross-sectional area of a metallic sphere (perfect reflector) at the same distance as the target that would return the same echo power. Those components of the received echo that return from first-surface reflections (as from a smooth or mirror-like surface) are dominated by the OC component as there is a reversal in polarization upon reflection. If the surface is rough at the wavelength scale or there is significant penetration into the regolith, there will be a significant SC component in the echo caused by multiple scattering. For most objects in the solar system, the OC echo dominates and the most commonly reported radar albedo parameter is the (normalized) OC radar albedo (often shortened to radar albedo), defined as σOC / (π R²), where the denominator is the effective cross-sectional area of the target object with mean radius R. A smooth metallic sphere would have a radar albedo of 1. Radar albedos of Solar System objects The values reported for the Moon, Mercury, Mars, Venus, and Comet P/2005 JQ5 are derived from the total (OC+SC) radar albedo reported in those references. Relationship to surface bulk density In the event that most of the echo is from first surface reflections ( or so), the OC radar albedo is a first-order approximation of the Fresnel reflection coefficient (aka reflectivity) and can be used to estimate the bulk density of a planetary surface to a depth of a meter or so (a few radar wavelengths, typically at the decimeter scale) using the following empirical relationships: . History The term albedo was introduced into optics by Johann Heinrich Lambert in his 1760 work Photometria.
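As a worked illustration of the geometric-albedo relation quoted above, here is a minimal sketch assuming the standard form D = (1329 / sqrt(p)) × 10^(−H/5) with D in kilometres; the sample H and p values are assumptions chosen for illustration.

```python
import math


def asteroid_diameter_km(geometric_albedo, absolute_magnitude):
    """D = (1329 / sqrt(p)) * 10**(-H/5), with D in kilometres."""
    return (1329.0 / math.sqrt(geometric_albedo)) * 10 ** (-absolute_magnitude / 5.0)


# Example: the same absolute magnitude H = 15 implies very different sizes
# depending on the assumed albedo:
print(asteroid_diameter_km(0.04, 15.0))  # ~6.6 km (dark, comet-like surface)
print(asteroid_diameter_km(0.25, 15.0))  # ~2.7 km (bright surface)
```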
Physical sciences
Astrometry
null
334
https://en.wikipedia.org/wiki/International%20Atomic%20Time
International Atomic Time
International Atomic Time (abbreviated TAI, from its French name temps atomique international) is a high-precision atomic coordinate time standard based on the notional passage of proper time on Earth's geoid. TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. It is a continuous scale of time, without leap seconds, and it is the principal realisation of Terrestrial Time (with a fixed offset of epoch). It is the basis for Coordinated Universal Time (UTC), which is used for civil timekeeping all over the Earth's surface and which has leap seconds. UTC deviates from TAI by a number of whole seconds. Since the most recent leap second was put into effect, UTC has been exactly 37 seconds behind TAI. The 37 seconds result from the initial difference of 10 seconds at the start of 1972, plus 27 leap seconds in UTC since 1972. In 2022, the General Conference on Weights and Measures decided to abandon the leap second by or before 2035, at which point the difference between TAI and UTC will remain fixed. TAI may be reported using traditional means of specifying days, carried over from non-uniform time standards based on the rotation of the Earth. Specifically, both Julian days and the Gregorian calendar are used. TAI in this form was synchronised with Universal Time at the beginning of 1958, and the two have drifted apart ever since, due primarily to the slowing rotation of the Earth. Operation TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. The majority of the clocks involved are caesium clocks; the International System of Units (SI) definition of the second is based on caesium. The clocks are compared using GPS signals and two-way satellite time and frequency transfer. Due to the signal averaging, TAI is an order of magnitude more stable than its best constituent clock. The participating institutions each broadcast, in real time, a frequency signal with timecodes, which is their estimate of TAI. Time codes are usually published in the form of UTC, which differs from TAI by a well-known integer number of seconds. These time scales are denoted in the form UTC(NPL) in the UTC form, where NPL here identifies the National Physical Laboratory, UK. The TAI form may be denoted TAI(NPL). The latter is not to be confused with TA(NPL), which denotes an independent atomic time scale, not synchronised to TAI or to anything else. The clocks at different institutions are regularly compared against each other. The International Bureau of Weights and Measures (BIPM, France) combines these measurements to retrospectively calculate the weighted average that forms the most stable time scale possible. This combined time scale is published monthly in "Circular T", and is the canonical TAI. This time scale is expressed in the form of tables of differences UTC − UTC(k) (equal to TAI − TAI(k)) for each participating institution k. The same circular also gives tables of TAI − TA(k), for the various unsynchronised atomic time scales. Errors in publication may be corrected by issuing a revision of the faulty Circular T or by errata in a subsequent Circular T. Aside from this, once published in Circular T, the TAI scale is not revised. In hindsight, it is possible to discover errors in TAI and to make better estimates of the true proper time scale. 
Since the published circulars are definitive, better estimates do not create another version of TAI; it is instead considered to be creating a better realisation of Terrestrial Time (TT). History Early atomic time scales consisted of quartz clocks with frequencies calibrated by a single atomic clock; the atomic clocks were not operated continuously. Atomic timekeeping services started experimentally in 1955, using the first caesium atomic clock at the National Physical Laboratory, UK (NPL). It was used as a basis for calibrating the quartz clocks at the Royal Greenwich Observatory and to establish a time scale, called Greenwich Atomic (GA). The United States Naval Observatory began the A.1 scale on 13 September 1956, using an Atomichron commercial atomic clock, followed by the NBS-A scale at the National Bureau of Standards, Boulder, Colorado, on 9 October 1957. The International Time Bureau (BIH) began a time scale, Tm or AM, in July 1955, using both local caesium clocks and comparisons to distant clocks using the phase of VLF radio signals. The BIH scale, A.1, and NBS-A were defined by an epoch at the beginning of 1958. The procedures used by the BIH evolved, and the name for the time scale changed: A3 in 1964 and TA(BIH) in 1969. The SI second was defined in terms of the caesium atom in 1967. From 1971 to 1975, the General Conference on Weights and Measures and the International Committee for Weights and Measures made a series of decisions that designated the BIPM time scale International Atomic Time (TAI). In the 1970s, it became clear that the clocks participating in TAI were ticking at different rates due to gravitational time dilation, and the combined TAI scale, therefore, corresponded to an average of the altitudes of the various clocks. Starting from the Julian Date 2443144.5 (1 January 1977 00:00:00 TAI), corrections were applied to the output of all participating clocks, so that TAI would correspond to proper time at the geoid (mean sea level). Because the clocks were, on average, well above sea level, this meant that TAI slowed by about one part in a trillion. The former uncorrected time scale continues to be published under the name EAL (Échelle Atomique Libre, meaning Free Atomic Scale). The instant that the gravitational correction started to be applied serves as the epoch for Barycentric Coordinate Time (TCB), Geocentric Coordinate Time (TCG), and Terrestrial Time (TT), which represent three fundamental time scales in the solar system. All three of these time scales were defined to read JD 2443144.5003725 (1 January 1977 00:00:32.184) exactly at that instant. TAI was henceforth a realisation of TT, with the equation TT(TAI) = TAI + 32.184 s. The continued existence of TAI was questioned in a 2007 letter from the BIPM to the ITU-R, which stated, "In the case of a redefinition of UTC without leap seconds, the CCTF would consider discussing the possibility of suppressing TAI, as it would remain parallel to the continuous UTC." Relation to UTC In contrast to TAI, UTC is a discontinuous time scale. It is occasionally adjusted by leap seconds. Between these adjustments, it is composed of segments that are mapped to atomic time by a constant offset. From its beginning in 1961 through December 1971, the adjustments were made regularly in fractional leap seconds so that UTC approximated UT2. Afterwards, these adjustments were made only in whole seconds to approximate UT1. This was a compromise arrangement in order to enable a publicly broadcast time scale. 
The less frequent whole-second adjustments meant that the time scale would be more stable and easier to synchronize internationally. The fact that it continues to approximate UT1 means that tasks such as navigation, which require a source of Universal Time, continue to be well served by the public broadcast of UTC.
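The fixed offsets described above (UTC currently 37 seconds behind TAI, and TT(TAI) = TAI + 32.184 s) can be illustrated with a short Python sketch. It hard-codes the current offset rather than consulting a real leap-second table, so it is only a sketch of the arithmetic, not a timekeeping utility.

```python
from datetime import datetime, timedelta

# A minimal sketch of the fixed offsets described above. It hard-codes the current
# TAI - UTC offset of 37 s (10 s initial difference plus 27 leap seconds) and the
# defining relation TT(TAI) = TAI + 32.184 s; a real implementation would consult
# a maintained leap-second table instead of a constant.

TAI_MINUS_UTC = 37        # seconds, valid only since the most recent leap second
TT_MINUS_TAI = 32.184     # seconds, fixed by the definition of TT(TAI)

def utc_to_tai(utc: datetime) -> datetime:
    """Convert a UTC timestamp to TAI, assuming the current 37 s offset applies."""
    return utc + timedelta(seconds=TAI_MINUS_UTC)

def tai_to_tt(tai: datetime) -> datetime:
    """Realise Terrestrial Time from TAI via TT(TAI) = TAI + 32.184 s."""
    return tai + timedelta(seconds=TT_MINUS_TAI)

if __name__ == "__main__":
    utc = datetime(2024, 1, 1, 0, 0, 0)
    tai = utc_to_tai(utc)
    print("UTC:", utc, "| TAI:", tai, "| TT:", tai_to_tt(tai))
```

Once leap seconds are abandoned, the TAI − UTC constant in such a sketch would simply stop changing, while the TT offset is fixed by definition.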
Technology
Timekeeping
null
572
https://en.wikipedia.org/wiki/Agricultural%20science
Agricultural science
Agricultural science (or agriscience for short) is a broad multidisciplinary field of biology that encompasses the parts of exact, natural, economic and social sciences that are used in the practice and understanding of agriculture. Professionals of the agricultural science are called agricultural scientists or agriculturists. History In the 18th century, Johann Friedrich Mayer conducted experiments on the use of gypsum (hydrated calcium sulfate) as a fertilizer. In 1843, John Bennet Lawes and Joseph Henry Gilbert began a set of long-term field experiments at Rothamsted Research in England, some of which are still running as of 2018. In the United States, a scientific revolution in agriculture began with the Hatch Act of 1887, which used the term "agricultural science". The Hatch Act was driven by farmers' interest in knowing the constituents of early artificial fertilizer. The Smith–Hughes Act of 1917 shifted agricultural education back to its vocational roots, but the scientific foundation had been built. For the next 44 years after 1906, federal expenditures on agricultural research in the United States outpaced private expenditures. Prominent agricultural scientists Wilbur Olin Atwater Robert Bakewell Norman Borlaug Luther Burbank George Washington Carver Carl Henry Clerk George C. Clerk René Dumont Sir Albert Howard Kailas Nath Kaul Thomas Lecky Justus von Liebig Jay Laurence Lush Gregor Mendel Louis Pasteur M. S. Swaminathan Jethro Tull Artturi Ilmari Virtanen Sewall Wright Fields or related disciplines Scope Agriculture, agricultural science, and agronomy are closely related. However, they cover different concepts: Agriculture is the set of activities that transform the environment for the production of animals and plants for human use. Agriculture concerns techniques, including the application of agronomic research. Agronomy is research and development related to studying and improving plant-based crops. is the science of cultivating the earth. Hydroponics involves growing plants without soil, by using water-based mineral nutrient solutions in an artificial environment. Research topics Agricultural sciences include research and development on: Improving agricultural productivity in terms of quantity and quality (e.g., selection of drought-resistant crops and animals, development of new pesticides, yield-sensing technologies, simulation models of crop growth, in-vitro cell culture techniques) Minimizing the effects of pests (weeds, insects, pathogens, mollusks, nematodes) on crop or animal production systems. Transformation of primary products into end-consumer products (e.g., production, preservation, and packaging of dairy products) Prevention and correction of adverse environmental effects (e.g., soil degradation, waste management, bioremediation) Theoretical production ecology, relating to crop production modeling Traditional agricultural systems, sometimes termed subsistence agriculture, which feed most of the poorest people in the world. These systems are of interest as they sometimes retain a level of integration with natural ecological systems greater than that of industrial agriculture, which may be more sustainable than some modern agricultural systems. Food production and demand globally, with particular attention paid to the primary producers, such as China, India, Brazil, the US, and the EU. Various sciences relating to agricultural resources and the environment (e.g. soil science, agroclimatology); biology of agricultural crops and animals (e.g. 
crop science, animal science and their included sciences, e.g. ruminant nutrition, farm animal welfare); such fields as agricultural economics and rural sociology; various disciplines encompassed in agricultural engineering.
Technology
Basics
null
573
https://en.wikipedia.org/wiki/Alchemy
Alchemy
Alchemy (from the Arabic word , ) is an ancient branch of natural philosophy, a philosophical and protoscientific tradition that was historically practised in China, India, the Muslim world, and Europe. In its Western form, alchemy is first attested in a number of pseudepigraphical texts written in Greco-Roman Egypt during the first few centuries AD. Greek-speaking alchemists often referred to their craft as "the Art" (τέχνη) or "Knowledge" (ἐπιστήμη), and it was often characterised as mystic (μυστική), sacred (ἱɛρά), or divine (θɛíα). Alchemists attempted to purify, mature, and perfect certain materials. Common aims were chrysopoeia, the transmutation of "base metals" (e.g., lead) into "noble metals" (particularly gold); the creation of an elixir of immortality; and the creation of panaceas able to cure any disease. The perfection of the human body and soul was thought to result from the alchemical magnum opus ("Great Work"). The concept of creating the philosophers' stone was variously connected with all of these projects. Islamic and European alchemists developed a basic set of laboratory techniques, theories, and terms, some of which are still in use today. They did not abandon the Ancient Greek philosophical idea that everything is composed of four elements, and they tended to guard their work in secrecy, often making use of cyphers and cryptic symbolism. In Europe, the 12th-century translations of medieval Islamic works on science and the rediscovery of Aristotelian philosophy gave birth to a flourishing tradition of Latin alchemy. This late medieval tradition of alchemy would go on to play a significant role in the development of early modern science (particularly chemistry and medicine). Modern discussions of alchemy are generally split into an examination of its exoteric practical applications and its esoteric spiritual aspects, despite criticisms by scholars such as Eric J. Holmyard and Marie-Louise von Franz that they should be understood as complementary. The former is pursued by historians of the physical sciences, who examine the subject in terms of early chemistry, medicine, and charlatanism, and the philosophical and religious contexts in which these events occurred. The latter interests historians of esotericism, psychologists, and some philosophers and spiritualists. The subject has also made an ongoing impact on literature and the arts. Etymology The word alchemy comes from old French alquemie, alkimie, used in Medieval Latin as . This name was itself adopted from the Arabic word (). The Arabic in turn was a borrowing of the Late Greek term khēmeía (), also spelled khumeia () and khēmía (), with al- being the Arabic definite article 'the'. Together this association can be interpreted as 'the process of transmutation by which to fuse or reunite with the divine or original form'. Several etymologies have been proposed for the Greek term. The first was proposed by Zosimos of Panopolis (3rd–4th centuries), who derived it from the name of a book, the Khemeu. Hermann Diels argued in 1914 that it rather derived from χύμα, used to describe metallic objects formed by casting. Others trace its roots to the Egyptian name (hieroglyphic 𓆎𓅓𓏏𓊖 ), meaning 'black earth', which refers to the fertile and auriferous soil of the Nile valley, as opposed to red desert sand. According to the Egyptologist Wallis Budge, the Arabic word ʾ actually means "the Egyptian [science]", borrowing from the Coptic word for "Egypt", (or its equivalent in the Mediaeval Bohairic dialect of Coptic, ). 
This Coptic word derives from Demotic , itself from ancient Egyptian . The ancient Egyptian word referred to both the country and the colour "black" (Egypt was the "black Land", by contrast with the "red Land", the surrounding desert). History Alchemy encompasses several philosophical traditions spanning some four millennia and three continents. These traditions' general penchant for cryptic and symbolic language makes it hard to trace their mutual influences and genetic relationships. One can distinguish at least three major strands, which appear to be mostly independent, at least in their earlier stages: Chinese alchemy, centered in China; Indian alchemy, centered on the Indian subcontinent; and Western alchemy, which occurred around the Mediterranean and whose center shifted over the millennia from Greco-Roman Egypt to the Islamic world, and finally medieval Europe. Chinese alchemy was closely connected to Taoism and Indian alchemy with the Dharmic faiths. In contrast, Western alchemy developed its philosophical system mostly independent of but influenced by various Western religions. It is still an open question whether these three strands share a common origin, or to what extent they influenced each other. Hellenistic Egypt The start of Western alchemy may generally be traced to ancient and Hellenistic Egypt, where the city of Alexandria was a center of alchemical knowledge, and retained its pre-eminence through most of the Greek and Roman periods. Following the work of André-Jean Festugière, modern scholars see alchemical practice in the Roman Empire as originating from the Egyptian goldsmith's art, Greek philosophy and different religious traditions. Tracing the origins of the alchemical art in Egypt is complicated by the pseudepigraphic nature of texts from the Greek alchemical corpus. The treatises of Zosimos of Panopolis, the earliest historically attested author (fl. c. 300 AD), can help in situating the other authors. Zosimus based his work on that of older alchemical authors, such as Mary the Jewess, Pseudo-Democritus, and Agathodaimon, but very little is known about any of these authors. The most complete of their works, The Four Books of Pseudo-Democritus, were probably written in the first century AD. Recent scholarship tends to emphasize the testimony of Zosimus, who traced the alchemical arts back to Egyptian metallurgical and ceremonial practices. It has also been argued that early alchemical writers borrowed the vocabulary of Greek philosophical schools but did not implement any of its doctrines in a systematic way. Zosimos of Panopolis wrote in the Final Abstinence (also known as the "Final Count"). Zosimos explains that the ancient practice of "tinctures" (the technical Greek name for the alchemical arts) had been taken over by certain "demons" who taught the art only to those who offered them sacrifices. Since Zosimos also called the demons "the guardians of places" (, ) and those who offered them sacrifices "priests" (, ), it is fairly clear that he was referring to the gods of Egypt and their priests. While critical of the kind of alchemy he associated with the Egyptian priests and their followers, Zosimos nonetheless saw the tradition's recent past as rooted in the rites of the Egyptian temples. Mythology Zosimos of Panopolis asserted that alchemy dated back to Pharaonic Egypt where it was the domain of the priestly class, though there is little to no evidence for his assertion. 
Alchemical writers used Classical figures from Greek, Roman, and Egyptian mythology to illuminate their works and allegorize alchemical transmutation. These included the pantheon of gods related to the Classical planets, Isis, Osiris, Jason, and many others. The central figure in the mythology of alchemy is Hermes Trismegistus (or Thrice-Great Hermes). His name is derived from the god Thoth and his Greek counterpart Hermes. Hermes and his caduceus or serpent-staff, were among alchemy's principal symbols. According to Clement of Alexandria, he wrote what were called the "forty-two books of Hermes", covering all fields of knowledge. The Hermetica of Thrice-Great Hermes is generally understood to form the basis for Western alchemical philosophy and practice, called the hermetic philosophy by its early practitioners. These writings were collected in the first centuries of the common era. Technology The dawn of Western alchemy is sometimes associated with that of metallurgy, extending back to 3500 BC. Many writings were lost when the Roman emperor Diocletian ordered the burning of alchemical books after suppressing a revolt in Alexandria (AD 292). Few original Egyptian documents on alchemy have survived, most notable among them the Stockholm papyrus and the Leyden papyrus X. Dating from AD 250–300, they contained recipes for dyeing and making artificial gemstones, cleaning and fabricating pearls, and manufacturing of imitation gold and silver. These writings lack the mystical, philosophical elements of alchemy, but do contain the works of Bolus of Mendes (or Pseudo-Democritus), which aligned these recipes with theoretical knowledge of astrology and the classical elements. Between the time of Bolus and Zosimos, the change took place that transformed this metallurgy into a Hermetic art. Philosophy Alexandria acted as a melting pot for philosophies of Pythagoreanism, Platonism, Stoicism and Gnosticism which formed the origin of alchemy's character. An important example of alchemy's roots in Greek philosophy, originated by Empedocles and developed by Aristotle, was that all things in the universe were formed from only four elements: earth, air, water, and fire. According to Aristotle, each element had a sphere to which it belonged and to which it would return if left undisturbed. The four elements of the Greek were mostly qualitative aspects of matter, not quantitative, as our modern elements are; "...True alchemy never regarded earth, air, water, and fire as corporeal or chemical substances in the present-day sense of the word. The four elements are simply the primary, and most general, qualities by means of which the amorphous and purely quantitative substance of all bodies first reveals itself in differentiated form." Later alchemists extensively developed the mystical aspects of this concept. Alchemy coexisted alongside emerging Christianity. Lactantius believed Hermes Trismegistus had prophesied its birth. St Augustine later affirmed this in the 4th and 5th centuries, but also condemned Trismegistus for idolatry. Examples of Pagan, Christian, and Jewish alchemists can be found during this period. Most of the Greco-Roman alchemists preceding Zosimos are known only by pseudonyms, such as Moses, Isis, Cleopatra, Democritus, and Ostanes. Others authors such as Komarios, and Chymes, we only know through fragments of text. After AD 400, Greek alchemical writers occupied themselves solely in commenting on the works of these predecessors. 
By the middle of the 7th century alchemy was almost an entirely mystical discipline. It was at that time that Khalid Ibn Yazid sparked its migration from Alexandria to the Islamic world, facilitating the translation and preservation of Greek alchemical texts in the 8th and 9th centuries. Byzantium Greek alchemy was preserved in medieval Byzantine manuscripts after the fall of Egypt, and yet historians have only relatively recently begun to pay attention to the study and development of Greek alchemy in the Byzantine period. India The 2nd millennium BC text Vedas describe a connection between eternal life and gold. A considerable knowledge of metallurgy has been exhibited in a third-century AD text called Arthashastra which provides ingredients of explosives (Agniyoga) and salts extracted from fertile soils and plant remains (Yavakshara) such as saltpetre/nitre, perfume making (different qualities of perfumes are mentioned), granulated (refined) Sugar. Buddhist texts from the 2nd to 5th centuries mention the transmutation of base metals to gold. According to some scholars Greek alchemy may have influenced Indian alchemy but there are no hard evidences to back this claim. The 11th-century Persian chemist and physician Abū Rayhān Bīrūnī, who visited Gujarat as part of the court of Mahmud of Ghazni, reported that they The goals of alchemy in India included the creation of a divine body (Sanskrit divya-deham) and immortality while still embodied (Sanskrit jīvan-mukti). Sanskrit alchemical texts include much material on the manipulation of mercury and sulphur, that are homologized with the semen of the god Śiva and the menstrual blood of the goddess Devī. Some early alchemical writings seem to have their origins in the Kaula tantric schools associated to the teachings of the personality of Matsyendranath. Other early writings are found in the Jaina medical treatise Kalyāṇakārakam of Ugrāditya, written in South India in the early 9th century. Two famous early Indian alchemical authors were Nāgārjuna Siddha and Nityanātha Siddha. Nāgārjuna Siddha was a Buddhist monk. His book, Rasendramangalam, is an example of Indian alchemy and medicine. Nityanātha Siddha wrote Rasaratnākara, also a highly influential work. In Sanskrit, rasa translates to "mercury", and Nāgārjuna Siddha was said to have developed a method of converting mercury into gold. Scholarship on Indian alchemy is in the publication of The Alchemical Body by David Gordon White. A modern bibliography on Indian alchemical studies has been written by White. The contents of 39 Sanskrit alchemical treatises have been analysed in detail in G. Jan Meulenbeld's History of Indian Medical Literature. The discussion of these works in HIML gives a summary of the contents of each work, their special features, and where possible the evidence concerning their dating. Chapter 13 of HIML, Various works on rasaśāstra and ratnaśāstra (or Various works on alchemy and gems) gives brief details of a further 655 (six hundred and fifty-five) treatises. In some cases Meulenbeld gives notes on the contents and authorship of these works; in other cases references are made only to the unpublished manuscripts of these titles. A great deal remains to be discovered about Indian alchemical literature. The content of the Sanskrit alchemical corpus has not yet (2014) been adequately integrated into the wider general history of alchemy. Islamic world After the fall of the Roman Empire, the focus of alchemical development moved to the Islamic World. 
Much more is known about Islamic alchemy because it was better documented: indeed, most of the earlier writings that have come down through the years were preserved as Arabic translations. The word alchemy itself was derived from the Arabic word al-kīmiyā (الكيمياء). The early Islamic world was a melting pot for alchemy. Platonic and Aristotelian thought, which had already been somewhat appropriated into hermetical science, continued to be assimilated during the late 7th and early 8th centuries through Syriac translations and scholarship. In the late ninth and early tenth centuries, the Arabic works attributed to Jābir ibn Hayyān (Latinized as "Geber" or "Geberus") introduced a new approach to alchemy. Paul Kraus, who wrote the standard reference work on Jabir, put it as follows: Islamic philosophers also made great contributions to alchemical hermeticism. The most influential author in this regard was arguably Jabir. Jabir's ultimate goal was Takwin, the artificial creation of life in the alchemical laboratory, up to, and including, human life. He analysed each Aristotelian element in terms of four basic qualities of hotness, coldness, dryness, and moistness. According to Jabir, in each metal two of these qualities were interior and two were exterior. For example, lead was externally cold and dry, while gold was hot and moist. Thus, Jabir theorized, by rearranging the qualities of one metal, a different metal would result. By this reasoning, the search for the philosopher's stone was introduced to Western alchemy. Jabir developed an elaborate numerology whereby the root letters of a substance's name in Arabic, when treated with various transformations, held correspondences to the element's physical properties. The elemental system used in medieval alchemy also originated with Jabir. His original system consisted of seven elements, which included the five classical elements (aether, air, earth, fire, and water) in addition to two chemical elements representing the metals: sulphur, "the stone which burns", which characterized the principle of combustibility, and mercury, which contained the idealized principle of metallic properties. Shortly thereafter, this evolved into eight elements, with the Arabic concept of the three metallic principles: sulphur giving flammability or combustion, mercury giving volatility and stability, and salt giving solidity. The atomic theory of corpuscularianism, where all physical bodies possess an inner and outer layer of minute particles or corpuscles, also has its origins in the work of Jabir. From the 9th to 14th centuries, alchemical theories faced criticism from a variety of practical Muslim chemists, including Alkindus, Abū al-Rayhān al-Bīrūnī, Avicenna and Ibn Khaldun. In particular, they wrote refutations against the idea of the transmutation of metals. From the 14th century onwards, many materials and practices originally belonging to Indian alchemy (Rasayana) were assimilated in the Persian texts written by Muslim scholars. East Asia Researchers have found evidence that Chinese alchemists and philosophers discovered complex mathematical phenomena that were shared with Arab alchemists during the medieval period. Discovered in BC China, the "magic square of three" was propagated to followers of Abū Mūsā Jābir ibn Ḥayyān at some point over the proceeding several hundred years. Other commonalities shared between the two alchemical schools of thought include discrete naming for ingredients and heavy influence from the natural elements. 
The silk road provided a clear path for the exchange of goods, ideas, ingredients, religion, and many other aspects of life with which alchemy is intertwined. Whereas European alchemy eventually centered on the transmutation of base metals into noble metals, Chinese alchemy had a more obvious connection to medicine. The philosopher's stone of European alchemists can be compared to the Grand Elixir of Immortality sought by Chinese alchemists. In the hermetic view, these two goals were not unconnected, and the philosopher's stone was often equated with the universal panacea; therefore, the two traditions may have had more in common than initially appears. As early as 317 AD, Ge Hong documented the use of metals, minerals, and elixirs in early Chinese medicine. Hong identified three ancient Chinese documents, titled Scripture of Great Clarity, Scripture of the Nine Elixirs, and Scripture of the Golden Liquor, as texts containing fundamental alchemical information. He also described alchemy, along with meditation, as the sole spiritual practices that could allow one to gain immortality or to transcend. In his work Inner Chapters of the Book of the Master Who Embraces Spontaneous Nature (317 AD), Hong argued that alchemical solutions such as elixirs were preferable to traditional medicinal treatment due to the spiritual protection they could provide. In the centuries following Ge Hong's death, the emphasis placed on alchemy as a spiritual practice among Chinese Daoists was reduced. In 499 AD, Tao Hongjing refuted Hong's statement that alchemy is as important a spiritual practice as Shangqing meditation. While Hongjing did not deny the power of alchemical elixirs to grant immortality or provide divine protection, he ultimately found the Scripture of the Nine Elixirs to be ambiguous and spiritually unfulfilling, aiming to implement more accessible practising techniques. In the early 700s, Neidan (also known as internal alchemy) was adopted by Daoists as a new form of alchemy. Neidan emphasized appeasing the inner gods that inhabit the human body by practising alchemy with compounds found in the body, rather than the mixing of natural resources that was emphasized in early Dao alchemy. For example, saliva was often considered nourishment for the inner gods and did not require any conscious alchemical reaction to produce. The inner gods were not thought of as physical presences occupying each person, but rather a collection of deities that are each said to represent and protect a specific body part or region. Although those who practised Neidan prioritized meditation over external alchemical strategies, many of the same elixirs and constituents from previous Daoist alchemical schools of thought continued to be utilized in tandem with meditation. Eternal life remained a consideration for Neidan alchemists, as it was believed that one would become immortal if an inner god were to be immortalized within them through spiritual fulfilment. Black powder may have been an important invention of Chinese alchemists. It is said that the Chinese invented gunpowder while trying to find a potion for eternal life. Described in 9th-century texts and used in fireworks in China by the 10th century, it was used in cannons by 1290. From China, the use of gunpowder spread to Japan, the Mongols, the Muslim world, and Europe. Gunpowder was used by the Mongols against the Hungarians in 1241, and in Europe by the 14th century. 
Chinese alchemy was closely connected to Taoist forms of traditional Chinese medicine, such as Acupuncture and Moxibustion. In the early Song dynasty, followers of this Taoist idea (chiefly the elite and upper class) would ingest mercuric sulfide, which, though tolerable in low levels, led many to suicide. Thinking that this consequential death would lead to freedom and access to the Taoist heavens, the ensuing deaths encouraged people to eschew this method of alchemy in favour of external sources (the aforementioned Tai Chi Chuan, mastering of the qi, etc.) Chinese alchemy was introduced to the West by Obed Simon Johnson. Medieval Europe The introduction of alchemy to Latin Europe may be dated to 11 February 1144, with the completion of Robert of Chester's translation of the ("Book on the Composition of Alchemy") from an Arabic work attributed to Khalid ibn Yazid. Although European craftsmen and technicians pre-existed, Robert notes in his preface that alchemy (here still referring to the elixir rather than to the art itself) was unknown in Latin Europe at the time of his writing. The translation of Arabic texts concerning numerous disciplines including alchemy flourished in 12th-century Toledo, Spain, through contributors like Gerard of Cremona and Adelard of Bath. Translations of the time included the Turba Philosophorum, and the works of Avicenna and Muhammad ibn Zakariya al-Razi. These brought with them many new words to the European vocabulary for which there was no previous Latin equivalent. Alcohol, carboy, elixir, and athanor are examples. Meanwhile, theologian contemporaries of the translators made strides towards the reconciliation of faith and experimental rationalism, thereby priming Europe for the influx of alchemical thought. The 11th-century St Anselm put forth the opinion that faith and rationalism were compatible and encouraged rationalism in a Christian context. In the early 12th century, Peter Abelard followed Anselm's work, laying down the foundation for acceptance of Aristotelian thought before the first works of Aristotle had reached the West. In the early 13th century, Robert Grosseteste used Abelard's methods of analysis and added the use of observation, experimentation, and conclusions when conducting scientific investigations. Grosseteste also did much work to reconcile Platonic and Aristotelian thinking. Through much of the 12th and 13th centuries, alchemical knowledge in Europe remained centered on translations, and new Latin contributions were not made. The efforts of the translators were succeeded by that of the encyclopaedists. In the 13th century, Albertus Magnus and Roger Bacon were the most notable of these, their work summarizing and explaining the newly imported alchemical knowledge in Aristotelian terms. Albertus Magnus, a Dominican friar, is known to have written works such as the Book of Minerals where he observed and commented on the operations and theories of alchemical authorities like Hermes Trismegistus, pseudo-Democritus and unnamed alchemists of his time. Albertus critically compared these to the writings of Aristotle and Avicenna, where they concerned the transmutation of metals. From the time shortly after his death through to the 15th century, more than 28 alchemical tracts were misattributed to him, a common practice giving rise to his reputation as an accomplished alchemist. Likewise, alchemical texts have been attributed to Albert's student Thomas Aquinas. 
Roger Bacon, a Franciscan friar who wrote on a wide variety of topics including optics, comparative linguistics, and medicine, composed his Great Work () for as part of a project towards rebuilding the medieval university curriculum to include the new learning of his time. While alchemy was not more important to him than other sciences and he did not produce allegorical works on the topic, he did consider it and astrology to be important parts of both natural philosophy and theology and his contributions advanced alchemy's connections to soteriology and Christian theology. Bacon's writings integrated morality, salvation, alchemy, and the prolongation of life. His correspondence with Clement highlighted this, noting the importance of alchemy to the papacy. Like the Greeks before him, Bacon acknowledged the division of alchemy into practical and theoretical spheres. He noted that the theoretical lay outside the scope of Aristotle, the natural philosophers, and all Latin writers of his time. The practical confirmed the theoretical, and Bacon advocated its uses in natural science and medicine. In later European legend, he became an archmage. In particular, along with Albertus Magnus, he was credited with the forging of a brazen head capable of answering its owner's questions. Soon after Bacon, the influential work of Pseudo-Geber (sometimes identified as Paul of Taranto) appeared. His Summa Perfectionis remained a staple summary of alchemical practice and theory through the medieval and renaissance periods. It was notable for its inclusion of practical chemical operations alongside sulphur-mercury theory, and the unusual clarity with which they were described. By the end of the 13th century, alchemy had developed into a fairly structured system of belief. Adepts believed in the macrocosm-microcosm theories of Hermes, that is to say, they believed that processes that affect minerals and other substances could have an effect on the human body (for example, if one could learn the secret of purifying gold, one could use the technique to purify the human soul). They believed in the four elements and the four qualities as described above, and they had a strong tradition of cloaking their written ideas in a labyrinth of coded jargon set with traps to mislead the uninitiated. Finally, the alchemists practised their art: they actively experimented with chemicals and made observations and theories about how the universe operated. Their entire philosophy revolved around their belief that man's soul was divided within himself after the fall of Adam. By purifying the two parts of man's soul, man could be reunited with God. In the 14th century, alchemy became more accessible to Europeans outside the confines of Latin-speaking churchmen and scholars. Alchemical discourse shifted from scholarly philosophical debate to an exposed social commentary on the alchemists themselves. Dante, Piers Plowman, and Chaucer all painted unflattering pictures of alchemists as thieves and liars. Pope John XXII's 1317 edict, Spondent quas non-exhibent forbade the false promises of transmutation made by pseudo-alchemists. Roman Catholic Inquisitor General Nicholas Eymerich's Directorium Inquisitorum, written in 1376, associated alchemy with the performance of demonic rituals, which Eymerich differentiated from magic performed in accordance with scripture. This did not, however, lead to any change in the Inquisition's monitoring or prosecution of alchemists. 
In 1404, Henry IV of England banned the practice of multiplying metals by the passing of the (5 Hen. 4. c. 4) (although it was possible to buy a licence to attempt to make gold alchemically, and a number were granted by Henry VI and Edward IV). These critiques and regulations centered more around pseudo-alchemical charlatanism than the actual study of alchemy, which continued with an increasingly Christian tone. The 14th century saw the Christian imagery of death and resurrection employed in the alchemical texts of Petrus Bonus, John of Rupescissa, and in works written in the name of Raymond Lull and Arnold of Villanova. Nicolas Flamel is a well-known alchemist to the point where he had many pseudepigraphic imitators. Although the historical Flamel existed, the writings and legends assigned to him only appeared in 1612. A common idea in European alchemy in the medieval era was a metaphysical "Homeric chain of wise men that link[ed] heaven and earth" that included ancient pagan philosophers and other important historical figures. Renaissance and early modern Europe During the Renaissance, Hermetic and Platonic foundations were restored to European alchemy. The dawn of medical, pharmaceutical, occult, and entrepreneurial branches of alchemy followed. In the late 15th century, Marsilio Ficino translated the Corpus Hermeticum and the works of Plato into Latin. These were previously unavailable to Europeans who for the first time had a full picture of the alchemical theory that Bacon had declared absent. Renaissance Humanism and Renaissance Neoplatonism guided alchemists away from physics to refocus on mankind as the alchemical vessel. Esoteric systems developed that blended alchemy into a broader occult Hermeticism, fusing it with magic, astrology, and Christian cabala. A key figure in this development was German Heinrich Cornelius Agrippa (1486–1535), who received his Hermetic education in Italy in the schools of the humanists. In his De Occulta Philosophia, he attempted to merge Kabbalah, Hermeticism, and alchemy. He was instrumental in spreading this new blend of Hermeticism outside the borders of Italy. Paracelsus (Philippus Aureolus Theophrastus Bombastus von Hohenheim, 1493–1541) cast alchemy into a new form, rejecting some of Agrippa's occultism and moving away from chrysopoeia. Paracelsus pioneered the use of chemicals and minerals in medicine and wrote, "Many have said of Alchemy, that it is for the making of gold and silver. For me such is not the aim, but to consider only what virtue and power may lie in medicines." His hermetical views were that sickness and health in the body relied on the harmony of man the microcosm and Nature the macrocosm. He took an approach different from those before him, using this analogy not in the manner of soul-purification but in the manner that humans must have certain balances of minerals in their bodies, and that certain illnesses of the body had chemical remedies that could cure them. Iatrochemistry refers to the pharmaceutical applications of alchemy championed by Paracelsus. John Dee (13 July 1527 – December 1608) followed Agrippa's occult tradition. Although better known for angel summoning, divination, and his role as astrologer, cryptographer, and consultant to Queen Elizabeth I, Dee's alchemical Monas Hieroglyphica, written in 1564 was his most popular and influential work. His writing portrayed alchemy as a sort of terrestrial astronomy in line with the Hermetic axiom As above so below. 
During the 17th century, a short-lived "supernatural" interpretation of alchemy became popular, including support by fellows of the Royal Society: Robert Boyle and Elias Ashmole. Proponents of the supernatural interpretation of alchemy believed that the philosopher's stone might be used to summon and communicate with angels. Entrepreneurial opportunities were common for the alchemists of Renaissance Europe. Alchemists were contracted by the elite for practical purposes related to mining, medical services, and the production of chemicals, medicines, metals, and gemstones. Rudolf II, Holy Roman Emperor, in the late 16th century, famously received and sponsored various alchemists at his court in Prague, including Dee and his associate Edward Kelley. King James IV of Scotland, Julius, Duke of Brunswick-Lüneburg, Henry V, Duke of Brunswick-Lüneburg, Augustus, Elector of Saxony, Julius Echter von Mespelbrunn, and Maurice, Landgrave of Hesse-Kassel all contracted alchemists. John's son Arthur Dee worked as a court physician to Michael I of Russia and Charles I of England but also compiled the alchemical book Fasciculus Chemicus. Although most of these appointments were legitimate, the trend of pseudo-alchemical fraud continued through the Renaissance. Betrüger would use sleight of hand, or claims of secret knowledge to make money or secure patronage. Legitimate mystical and medical alchemists such as Michael Maier and Heinrich Khunrath wrote about fraudulent transmutations, distinguishing themselves from the con artists. False alchemists were sometimes prosecuted for fraud. The terms "chemia" and "alchemia" were used as synonyms in the early modern period, and the differences between alchemy, chemistry and small-scale assaying and metallurgy were not as neat as in the present day. There were important overlaps between practitioners, and trying to classify them into alchemists, chemists and craftsmen is anachronistic. For example, Tycho Brahe (1546–1601), an alchemist better known for his astronomical and astrological investigations, had a laboratory built at his Uraniborg observatory/research institute. Michael Sendivogius (Michał Sędziwój, 1566–1636), a Polish alchemist, philosopher, medical doctor and pioneer of chemistry wrote mystical works but is also credited with distilling oxygen in a lab sometime around 1600. Sendivogious taught his technique to Cornelius Drebbel who, in 1621, applied this in a submarine. Isaac Newton devoted considerably more of his writing to the study of alchemy (see Isaac Newton's occult studies) than he did to either optics or physics. Other early modern alchemists who were eminent in their other studies include Robert Boyle, and Jan Baptist van Helmont. Their Hermeticism complemented rather than precluded their practical achievements in medicine and science. Later modern period The decline of European alchemy was brought about by the rise of modern science with its emphasis on rigorous quantitative experimentation and its disdain for "ancient wisdom". Although the seeds of these events were planted as early as the 17th century, alchemy still flourished for some two hundred years, and in fact may have reached its peak in the 18th century. As late as 1781 James Price claimed to have produced a powder that could transmute mercury into silver or gold. 
Early modern European alchemy continued to exhibit a diversity of theories, practices, and purposes: "Scholastic and anti-Aristotelian, Paracelsian and anti-Paracelsian, Hermetic, Neoplatonic, mechanistic, vitalistic, and more—plus virtually every combination and compromise thereof." Robert Boyle (1627–1691) pioneered the scientific method in chemical investigations. He assumed nothing in his experiments and compiled every piece of relevant data. Boyle would note the place in which the experiment was carried out, the wind characteristics, the position of the Sun and Moon, and the barometer reading, all just in case they proved to be relevant. This approach eventually led to the founding of modern chemistry in the 18th and 19th centuries, based on revolutionary discoveries and ideas of Lavoisier and John Dalton. Beginning around 1720, a rigid distinction began to be drawn for the first time between "alchemy" and "chemistry". By the 1740s, "alchemy" was now restricted to the realm of gold making, leading to the popular belief that alchemists were charlatans, and the tradition itself nothing more than a fraud. In order to protect the developing science of modern chemistry from the negative censure to which alchemy was being subjected, academic writers during the 18th-century scientific Enlightenment attempted to divorce and separate the "new" chemistry from the "old" practices of alchemy. This move was mostly successful, and the consequences of this continued into the 19th, 20th and 21st centuries. During the occult revival of the early 19th century, alchemy received new attention as an occult science. The esoteric or occultist school that arose during the 19th century held the view that the substances and operations mentioned in alchemical literature are to be interpreted in a spiritual sense, less than as a practical tradition or protoscience. This interpretation claimed that the obscure language of the alchemical texts, which 19th century practitioners were not always able to decipher, were an allegorical guise for spiritual, moral or mystical processes. Two seminal figures during this period were Mary Anne Atwood and Ethan Allen Hitchcock, who independently published similar works regarding spiritual alchemy. Both rebuffed the growing successes of chemistry, developing a completely esoteric view of alchemy. Atwood wrote: "No modern art or chemistry, notwithstanding all its surreptitious claims, has any thing in common with Alchemy." Atwood's work influenced subsequent authors of the occult revival including Eliphas Levi, Arthur Edward Waite, and Rudolf Steiner. Hitchcock, in his Remarks Upon Alchymists (1855) attempted to make a case for his spiritual interpretation with his claim that the alchemists wrote about a spiritual discipline under a materialistic guise in order to avoid accusations of blasphemy from the church and state. In 1845, Baron Carl Reichenbach, published his studies on Odic force, a concept with some similarities to alchemy, but his research did not enter the mainstream of scientific discussion. In 1946, Louis Cattiaux published the Message Retrouvé, a work that was at once philosophical, mystical and highly influenced by alchemy. In his lineage, many researchers, including Emmanuel and Charles d'Hooghvorst, are updating alchemical studies in France and Belgium. Women Several women appear in the earliest history of alchemy. Michael Maier names four women who were able to make the philosophers' stone: Mary the Jewess, Cleopatra the Alchemist, Medera, and Taphnutia. 
Zosimos' sister Theosebia (later known as Euthica the Arab) and Isis the Prophetess also played roles in early alchemical texts. The first alchemist whose name we know was Mary the Jewess (). Early sources claim that Mary (or Maria) devised a number of improvements to alchemical equipment and tools as well as novel techniques in chemistry. Her best known advances were in heating and distillation processes. The laboratory water-bath, known eponymously (especially in France) as the bain-marie, is said to have been invented or at least improved by her. Essentially a double-boiler, it was (and is) used in chemistry for processes that required gentle heating. The tribikos (a modified distillation apparatus) and the kerotakis (a more intricate apparatus used especially for sublimations) are two other advancements in the process of distillation that are credited to her. Although we have no writing from Mary herself, she is known from the early-fourth-century writings of Zosimos of Panopolis. After the Greco-Roman period, women's names appear less frequently in alchemical literature. Towards the end of the Middle Ages and beginning of the Renaissance, due to the emergence of print, women were able to access the alchemical knowledge from texts of the preceding centuries. Caterina Sforza, the Countess of Forlì and Lady of Imola, is one of the few confirmed female alchemists after Mary the Jewess. As she owned an apothecary, she would practice science and conduct experiments in her botanic gardens and laboratories. Being knowledgeable in alchemy and pharmacology, she recorded all of her alchemical ventures in a manuscript named ('Experiments'). The manuscript contained more than four hundred recipes covering alchemy as well as cosmetics and medicine. One of these recipes was for the water of talc. Talc, which makes up talcum powder, is a mineral which, when combined with water and distilled, was said to produce a solution which yielded many benefits. These supposed benefits included turning silver to gold and rejuvenation. When combined with white wine, its powder form could be ingested to counteract poison. Furthermore, if that powder was mixed and drunk with white wine, it was said to be a source of protection from any poison, sickness, or plague. Other recipes were for making hair dyes, lotions, lip colours. There was also information on how to treat a variety of ailments from fevers and coughs to epilepsy and cancer. In addition, there were instructions on producing the quintessence (or aether), an elixir which was believed to be able to heal all sicknesses, defend against diseases, and perpetuate youthfulness. She also wrote about creating the illustrious philosophers' stone. Some women known for their interest in alchemy were Catherine de' Medici, the Queen of France, and Marie de' Medici, the following Queen of France, who carried out experiments in her personal laboratory. Also, Isabella d'Este, the Marchioness of Mantua, made perfumes herself to serve as gifts. Due to the proliferation in alchemical literature of pseudepigrapha and anonymous works, however, it is difficult to know which of the alchemists were actually women. This contributed to a broader pattern in which male authors credited prominent noblewomen for beauty products with the purpose of appealing to a female audience. For example, in ("Gallant Recipe-Book"), the distillation of lemons and roses was attributed to Elisabetta Gonzaga, the duchess of Urbino. 
In the same book, Isabella d'Aragona, the daughter of Alfonso II of Naples, is accredited for recipes involving alum and mercury. Ippolita Maria Sforza is even referred to in an anonymous manuscript about a hand lotion created with rose powder and crushed bones. As the sixteenth century went on, scientific culture flourished and people began collecting "secrets". During this period "secrets" referred to experiments, and the most coveted ones were not those which were bizarre, but the ones which had been proven to yield the desired outcome. In this period, the only book of secrets ascribed to a woman was ('The Secrets of Signora Isabella Cortese'). This book contained information on how to turn base metals into gold, medicine, and cosmetics. However, it is rumoured that a man, Girolamo Ruscelli, was the real author and only used a female voice to attract female readers. In the nineteenth-century, Mary Anne Atwood's A Suggestive Inquiry into the Hermetic Mystery (1850) marked the return of women during the occult revival. Modern historical research The history of alchemy has become a recognized subject of academic study. As the language of the alchemists is analysed, historians are becoming more aware of the connections between that discipline and other facets of Western cultural history, such as the evolution of science and philosophy, the sociology and psychology of the intellectual communities, kabbalism, spiritualism, Rosicrucianism, and other mystic movements. Institutions involved in this research include The Chymistry of Isaac Newton project at Indiana University, the University of Exeter Centre for the Study of Esotericism (EXESESO), the European Society for the Study of Western Esotericism (ESSWE), and the University of Amsterdam's Sub-department for the History of Hermetic Philosophy and Related Currents. A large collection of books on alchemy is kept in the Bibliotheca Philosophica Hermetica in Amsterdam. Journals which publish regularly on the topic of Alchemy include Ambix, published by the Society for the History of Alchemy and Chemistry, and Isis, published by the History of Science Society. Core concepts Western alchemical theory corresponds to the worldview of late antiquity in which it was born. Concepts were imported from Neoplatonism and earlier Greek cosmology. As such, the classical elements appear in alchemical writings, as do the seven classical planets and the corresponding seven metals of antiquity. Similarly, the gods of the Roman pantheon who are associated with these luminaries are discussed in alchemical literature. The concepts of prima materia and anima mundi are central to the theory of the philosopher's stone. Magnum opus The Great Work of Alchemy is often described as a series of four stages represented by colours. nigredo, a blackening or melanosis albedo, a whitening or leucosis citrinitas, a yellowing or xanthosis rubedo, a reddening, purpling, or iosis Modernity Due to the complexity and obscurity of alchemical literature, and the 18th-century diffusion of remaining alchemical practitioners into the area of chemistry, the general understanding of alchemy in the 19th and 20th centuries was influenced by several distinct and radically different interpretations. Those focusing on the exoteric, such as historians of science Lawrence M. Principe and William R. Newman, have interpreted the 'Decknamen' (or code words) of alchemy as physical substances. 
These scholars have reconstructed physicochemical experiments that they say are described in medieval and early modern texts. At the opposite end of the spectrum, focusing on the esoteric, scholars, such as Florin George Călian and Anna Marie Roos, who question the reading of Principe and Newman, interpret these same Decknamen as spiritual, religious, or psychological concepts. New interpretations of alchemy are still perpetuated, sometimes merging in concepts from New Age or radical environmentalism movements. Groups like the Rosicrucians and Freemasons have a continued interest in alchemy and its symbolism. Since the Victorian revival of alchemy, "occultists reinterpreted alchemy as a spiritual practice, involving the self-transformation of the practitioner and only incidentally or not at all the transformation of laboratory substances", which has contributed to a merger of magic and alchemy in popular thought. Esoteric interpretations of historical texts In the eyes of a variety of modern esoteric and Neo-Hermetic practitioners, alchemy is primarily spiritual. In this interpretation, transmutation of lead into gold is presented as an analogy for personal transmutation, purification, and perfection. According to this view, early alchemists such as Zosimos of Panopolis () highlighted the spiritual nature of the alchemical quest, symbolic of a religious regeneration of the human soul. This approach is held to have continued in the Middle Ages, as metaphysical aspects, substances, physical states, and material processes are supposed to have been used as metaphors for spiritual entities, spiritual states, and, ultimately, transformation. In this sense, the literal meanings of 'Alchemical Formulas' hid a spiritual philosophy. In the Neo-Hermeticist interpretation, both the transmutation of common metals into gold and the universal panacea are held to symbolize evolution from an imperfect, diseased, corruptible, and ephemeral state toward a perfect, healthy, incorruptible, and everlasting state, so the philosopher's stone then represented a mystic key that would make this evolution possible. Applied to the alchemist, the twin goal symbolized their evolution from ignorance to enlightenment, and the stone represented a hidden spiritual truth or power that would lead to that goal. In texts that are believed to have been written according to this view, the cryptic alchemical symbols, diagrams, and textual imagery of late alchemical works are supposed to contain multiple layers of meanings, allegories, and references to other equally cryptic works; which must be laboriously decoded to discover their true meaning. In his 1766 Alchemical Catechism, Théodore Henri de Tschudi suggested that the usage of the metals was symbolic: Psychology Alchemical symbolism has been important in analytical psychology and was revived and popularized from near extinction by the Swiss psychologist Carl Gustav Jung. Jung was initially confounded and at odds with alchemy and its images but after being given a copy of The Secret of the Golden Flower, a Chinese alchemical text translated by his friend Richard Wilhelm, he discovered a direct correlation or parallel between the symbolic images in the alchemical drawings and the inner, symbolic images coming up in his patients' dreams, visions, or fantasies. He observed these alchemical images occurring during the psychic process of transformation, a process that Jung called "individuation". 
Specifically, he regarded the conjuring up of images of gold or Lapis as symbolic expressions of the origin and goal of this "process of individuation". Together with his alchemical mystica soror (mystical sister) Jungian Swiss analyst Marie-Louise von Franz, Jung began collecting old alchemical texts, compiled a lexicon of key phrases with cross-references, and pored over them. The volumes of work he wrote shed new light onto understanding the art of transubstantiation and renewed alchemy's popularity as a symbolic process of coming into wholeness as a human being where opposites are brought into contact and inner and outer, spirit and matter are reunited in the hieros gamos, or divine marriage. His writings are influential in general psychology, but especially to those who have an interest in understanding the importance of dreams, symbols, and the unconscious archetypal forces (archetypes) that comprise all psychic life. Both von Franz and Jung have contributed significantly to the subject and work of alchemy and its continued presence in psychology as well as contemporary culture. Among the volumes Jung wrote on alchemy, his magnum opus is Volume 14 of his Collected Works, Mysterium Coniunctionis. Literature Alchemy has had a long-standing relationship with art, seen both in alchemical texts and in mainstream entertainment. Literary alchemy appears throughout the history of English literature from Shakespeare to J. K. Rowling, and also the popular Japanese manga Fullmetal Alchemist. Here, characters or plot structure follow an alchemical magnum opus. In the 14th century, Chaucer began a trend of alchemical satire that can still be seen in recent fantasy works like those of the late Sir Terry Pratchett. Another literary work taking inspiration from the alchemical tradition is the 1988 novel The Alchemist by Brazilian writer Paulo Coelho. Visual artists have had a similar relationship with alchemy. While some used it as a source of satire, others worked with the alchemists themselves or integrated alchemical thought or symbols in their work. Music was also present in the works of alchemists and continues to influence popular performers. In the last hundred years, alchemists have been portrayed in a magical and spagyric role in fantasy fiction, film, television, novels, comics and video games. Science One goal of alchemy, the transmutation of base substances into gold, is now known to be impossible by means of traditional chemistry, but possible by other physical means. Although not financially worthwhile, gold was synthesized in particle accelerators as early as 1941.
Physical sciences
Chemistry: General
null
580
https://en.wikipedia.org/wiki/Astronomer
Astronomer
An astronomer is a scientist in the field of astronomy who focuses on a specific question or field outside the scope of Earth. Astronomers observe astronomical objects, such as stars, planets, moons, comets and galaxies – in either observational (by analyzing the data) or theoretical astronomy. Examples of topics or fields astronomers study include planetary science, solar astronomy, the origin or evolution of stars, or the formation of galaxies. A related but distinct subject is physical cosmology, which studies the Universe as a whole. Types Astronomers typically fall under either of two main types: observational and theoretical. Observational astronomers make direct observations of celestial objects and analyze the data. In contrast, theoretical astronomers create and investigate models of things that cannot be observed. Because it takes millions to billions of years for a system of stars or a galaxy to complete a life cycle, astronomers must observe snapshots of different systems at unique points in their evolution to determine how they form, evolve, and die. They use this data to create models or simulations to theorize how different celestial objects work. Further subcategories under these two main branches of astronomy include planetary astronomy, astrobiology, stellar astronomy, astrometry, galactic astronomy, extragalactic astronomy, or physical cosmology. Astronomers can also specialize in certain specialties of observational astronomy, such as infrared astronomy, neutrino astronomy, x-ray astronomy, and gravitational-wave astronomy. Academic History Historically, astronomy was more concerned with the classification and description of phenomena in the sky, while astrophysics attempted to explain these phenomena and the differences between them using physical laws. Today, that distinction has mostly disappeared and the terms "astronomer" and "astrophysicist" are interchangeable. Professional astronomers are highly educated individuals who typically have a PhD in physics or astronomy and are employed by research institutions or universities. They spend the majority of their time working on research, although they quite often have other duties such as teaching, building instruments, or aiding in the operation of an observatory. The American Astronomical Society, which is the major organization of professional astronomers in North America, has approximately 8,200 members (as of 2024). This number includes scientists from other fields such as physics, geology, and engineering, whose research interests are closely related to astronomy. The International Astronomical Union comprises about 12,700 members from 92 countries who are involved in astronomical research at the PhD level and beyond (as of 2024). Contrary to the classical image of an old astronomer peering through a telescope through the dark hours of the night, it is far more common to use a charge-coupled device (CCD) camera to record a long, deep exposure, allowing a more sensitive image to be created because the light is added over time. Before CCDs, photographic plates were a common method of observation. Modern astronomers spend relatively little time at telescopes, usually just a few weeks per year. Analysis of observed phenomena, along with making predictions as to the causes of what they observe, takes the majority of observational astronomers' time. Activities and graduate degree training Astronomers who serve as faculty spend much of their time teaching undergraduate and graduate classes. 
Most universities also have outreach programs, including public telescope time and sometimes planetariums, as a public service to encourage interest in the field. Those who become astronomers usually have a broad background in physics, mathematics, sciences, and computing in high school. Taking courses that teach how to research, write, and present papers is part of the higher education of an astronomer, and most astronomers attain a master's degree and eventually a PhD in astronomy, physics or astrophysics. PhD training typically involves 5–6 years of study, including completion of upper-level courses in the core sciences, a competency examination, experience with teaching undergraduates and participating in outreach programs, work on research projects under the student's supervising professor, completion of a PhD thesis, and passing a final oral exam. Throughout the PhD training, a successful student is financially supported with a stipend. Amateur astronomers While the number of professional astronomers is relatively small, the field is popular among amateurs. Most cities have amateur astronomy clubs that meet on a regular basis and often host star parties. The Astronomical Society of the Pacific is the largest general astronomical society in the world, comprising both professional and amateur astronomers as well as educators from 70 different nations. As with any hobby, most people who practice amateur astronomy may devote a few hours a month to stargazing and reading the latest developments in research. However, amateurs span the range from so-called "armchair astronomers" to the highly ambitious people who own science-grade telescopes and instruments with which they are able to make their own discoveries, create astrophotographs, and assist professional astronomers in research.
Physical sciences
Astronomy basics
Astronomy
586
https://en.wikipedia.org/wiki/ASCII
ASCII
ASCII, an acronym for American Standard Code for Information Interchange, is a character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices. ASCII has just 128 code points, of which only 95 are printable, which severely limits its scope. The set of available punctuation had significant impact on the syntax of computer languages and text markup. ASCII hugely influenced the design of character sets used by modern computers, including Unicode, which has over a million code points, but the first 128 of these are the same as ASCII. The Internet Assigned Numbers Authority (IANA) prefers the name US-ASCII for this character encoding. ASCII is one of the IEEE milestones. Overview ASCII was developed in part from telegraph code. Its first commercial use was in the Teletype Model 33 and the Teletype Model 35 as a seven-bit teleprinter code promoted by Bell data services. Work on the ASCII standard began in May 1961, with the first meeting of the American Standards Association's (ASA) (now the American National Standards Institute or ANSI) X3.2 subcommittee. The first edition of the standard was published in 1963, underwent a major revision during 1967, and experienced its most recent update during 1986. Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists and added features for devices other than teleprinters. The use of ASCII format for Network Interchange was described in 1969. That document was formally elevated to an Internet Standard in 2015. Originally based on the (modern) English alphabet, ASCII encodes 128 specified characters into seven-bit integers as shown by the ASCII chart in this article. Ninety-five of the encoded characters are printable: these include the digits 0 to 9, lowercase letters a to z, uppercase letters A to Z, and punctuation symbols. In addition, the original ASCII specification included 33 non-printing control codes which originated with Teletype machines; most of these are now obsolete, although a few are still commonly used, such as the carriage return, line feed, and tab codes. For example, lowercase i would be represented in the ASCII encoding by binary 1101001 = hexadecimal 69 (i is the ninth letter) = decimal 105. Despite being an American standard, ASCII does not have a code point for the cent (¢). It also does not support English terms with diacritical marks such as résumé and jalapeño, or proper nouns with diacritical marks such as Beyoncé (although on certain devices characters could be combined with punctuation such as tilde (~) and backtick (`) to approximate such characters). History The American Standard Code for Information Interchange (ASCII) was developed under the auspices of a committee of the American Standards Association (ASA), called the X3 committee, by its X3.2 (later X3L2) subcommittee, and later by that subcommittee's X3.2.4 working group (now INCITS). The ASA later became the United States of America Standards Institute (USASI) and ultimately became the American National Standards Institute (ANSI). With the other special characters and control codes filled in, ASCII was published as ASA X3.4-1963, leaving 28 code positions without any assigned meaning, reserved for future standardization, and one unassigned control code. There was some debate at the time over whether there should be more control characters rather than the lowercase alphabet.
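The single-character example above can be checked directly. The following minimal Python sketch is an illustration, not part of the standard; it relies on the built-in ord() and chr() functions, whose values for the first 128 code points coincide with ASCII.

```python
# Illustrative sketch: inspecting the ASCII code point of lowercase "i".
# Python's ord()/chr() work on Unicode code points, which match ASCII below 128.
ch = "i"
code = ord(ch)
print(f"decimal: {code}")       # 105
print(f"hex:     {code:#x}")    # 0x69
print(f"binary:  {code:07b}")   # 1101001, the seven-bit pattern

assert chr(0x69) == "i"         # mapping the code point back to the character
```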
The indecision did not last long: during May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lowercase characters to sticks 6 and 7, and International Organization for Standardization TC 97 SC 2 voted during October to incorporate the change into its draft standard. The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting. Locating the lowercase letters in sticks 6 and 7 caused the characters to differ in bit pattern from the upper case by a single bit, which simplified case-insensitive character matching and the construction of keyboards and printers. The X3 committee made other changes, including other new characters (the brace and vertical bar characters), renaming some control characters (SOM became start of header (SOH)) and moving or removing others (RU was removed). ASCII was subsequently updated as USAS X3.4-1967, then USAS X3.4-1968, ANSI X3.4-1977, and finally, ANSI X3.4-1986. Revisions ASA X3.4-1963 ASA X3.4-1965 (approved, but not published, nevertheless used by IBM 2260 & 2265 Display Stations and IBM 2848 Display Control) USAS X3.4-1967 USAS X3.4-1968 ANSI X3.4-1977 ANSI X3.4-1986 ANSI X3.4-1986 (R1992) ANSI X3.4-1986 (R1997) ANSI INCITS 4-1986 (R2002) ANSI INCITS 4-1986 (R2007) INCITS 4-1986 (R2012) INCITS 4-1986 (R2017) INCITS 4-1986 (R2022) In the X3.15 standard, the X3 committee also addressed how ASCII should be transmitted (least significant bit first) and recorded on perforated tape. They proposed a 9-track standard for magnetic tape and attempted to deal with some punched card formats. Design considerations Bit width The X3.2 subcommittee designed ASCII based on the earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols (i.e. graphemes and control characters). This allows digital devices to communicate with each other and to process, store, and communicate character-oriented information such as written language. Before ASCII was developed, the encodings in use included 26 alphabetic characters, 10 numerical digits, and from 11 to 25 special graphic symbols. To include all these, and control characters compatible with the Comité Consultatif International Téléphonique et Télégraphique (CCITT) International Telegraph Alphabet No. 2 (ITA2) standard of 1932, FIELDATA (1956), and early EBCDIC (1963), more than 64 codes were required for ASCII. ITA2 was in turn based on Baudot code, the 5-bit telegraph code Émile Baudot invented in 1870 and patented in 1874. The committee debated the possibility of a shift function (like in ITA2), which would allow more than 64 codes to be represented by a six-bit code. In a shifted code, some character codes determine choices between options for the following character codes. It allows compact encoding, but is less reliable for data transmission, as an error in transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided against shifting, and so ASCII required at least a seven-bit code. The committee considered an eight-bit code, since eight bits (octets) would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal. However, it would require all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to minimize costs associated with data transmission. 
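The single-bit relationship between upper and lower case described above can be illustrated with a short sketch. The Python helpers below are illustrative only (the function names are not a standard API); they perform case conversion and case-insensitive comparison of ASCII letters by manipulating bit 5 (value 0x20).

```python
# Uppercase and lowercase ASCII letters differ only in bit 5 (0x20 = 32):
# 'A' is 0x41 and 'a' is 0x61. Helper names here are illustrative only.
def ascii_upper(ch: str) -> str:
    """Uppercase a single ASCII letter by clearing bit 5; other characters pass through."""
    code = ord(ch)
    return chr(code & ~0x20) if 0x61 <= code <= 0x7A else ch

def ascii_equal_ignore_case(a: str, b: str) -> bool:
    """Compare two ASCII letters case-insensitively by forcing bit 5 on."""
    return (ord(a) | 0x20) == (ord(b) | 0x20)

assert ascii_upper("q") == "Q"
assert ascii_equal_ignore_case("A", "a") and not ascii_equal_ignore_case("A", "B")
```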
Since perforated tape at the time could record eight bits in one position, it also allowed for a parity bit for error checking if desired. Eight-bit machines (with octets as the native data type) that did not use parity checking typically set the eighth bit to 0. Internal organization The code itself was patterned so that most control codes were together and all graphic codes were together, for ease of identification. The first two so-called ASCII sticks (32 positions) were reserved for control characters. The "space" character had to come before graphics to make sorting easier, so it became position 20hex; for the same reason, many special signs commonly used as separators were placed before digits. The committee decided it was important to support uppercase 64-character alphabets, and chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes, as was done in the DEC SIXBIT code (1963). Lowercase letters were therefore not interleaved with uppercase. To keep options available for lowercase letters and other graphics, the special and numeric codes were arranged before the letters, and the letter A was placed in position 41hex to match the draft of the corresponding British standard. The digits 0–9 are prefixed with 011, and the remaining 4 bits correspond to their respective values in binary, making conversion with binary-coded decimal straightforward (for example, 5 is encoded as 0110101, where 5 is 0101 in binary). Many of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters; an important subtlety is that these were based on mechanical typewriters, not electric typewriters. Mechanical typewriters followed the de facto standard set by the Remington No. 2 (1878), the first typewriter with a shift key, and the shifted values of 23456789- were "#$%_&'(); early typewriters omitted 0 and 1, using O (capital letter o) and l (lowercase letter L) instead, but the 1! and 0) pairs became standard once 0 and 1 became common. Thus, in ASCII !"#$% were placed in the second stick, positions 1–5, corresponding to the digits 1–5 in the adjacent stick. The parentheses could not correspond to 9 and 0, however, because the place corresponding to 0 was taken by the space character. This was accommodated by removing _ (underscore) from 6 and shifting the remaining characters, which corresponded to many European typewriters that placed the parentheses with 8 and 9. This discrepancy from typewriters led to bit-paired keyboards, notably the Teletype Model 33, which used the left-shifted layout corresponding to ASCII, differently from traditional mechanical typewriters. Electric typewriters, notably the IBM Selectric (1961), used a somewhat different layout that has become the de facto standard on computers following the IBM PC (1981), especially the Model M (1984), and thus shift values for symbols on modern keyboards do not correspond as closely to the ASCII table as earlier keyboards did. The /? pair also dates to the No. 2, and the ,< .> pairs were used on some keyboards (others, including the No. 2, did not shift , (comma) or . (full stop) so they could be used in uppercase without unshifting). However, ASCII split the ;: pair (dating to No. 2), and rearranged mathematical symbols (varied conventions, commonly -* =+) to :* ;+ -=.
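The digit layout described above can be demonstrated in a few lines. The Python sketch below is illustrative only (the helper name is not from any standard) and relies on the fact that ASCII digits occupy codes 0x30 through 0x39.

```python
# ASCII digits live in stick 3 (prefix 011), so a digit's low four bits
# are its binary-coded-decimal value.
for d in "0123456789":
    assert ord(d) >> 4 == 0b011      # high three bits of the seven-bit code
    assert ord(d) & 0x0F == int(d)   # low nibble equals the digit's value

def digit_value(ch: str) -> int:
    """Convert an ASCII digit character to its integer value (illustrative helper)."""
    return ord(ch) - ord("0")        # equivalently: ord(ch) & 0x0F

assert digit_value("7") == 7
```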
Some then-common typewriter characters were not included, notably ½ ¼ ¢, while ^ ` ~ were included as diacritics for international use, and < > for mathematical use, together with the simple line characters \ | (in addition to common /). The @ symbol was not used in continental Europe and the committee expected it would be replaced by an accented À in the French variation, so the @ was placed in position 40hex, right before the letter A. The control codes felt essential for data transmission were the start of message (SOM), end of address (EOA), end of message (EOM), end of transmission (EOT), "who are you?" (WRU), "are you?" (RU), a reserved device control (DC0), synchronous idle (SYNC), and acknowledge (ACK). These were positioned to maximize the Hamming distance between their bit patterns. Character order ASCII-code order is also called ASCIIbetical order. Collation of data is sometimes done in this order rather than "standard" alphabetical order (collating sequence). The main deviations in ASCII order are: All uppercase come before lowercase letters; for example, "Z" precedes "a" Digits and many punctuation marks come before letters An intermediate order converts uppercase letters to lowercase before comparing ASCII values. Character set Character groups Control characters ASCII reserves the first 32 code points (numbers 0–31 decimal) and the last one (number 127 decimal) for control characters. These are codes intended to control peripheral devices (such as printers), or to provide meta-information about data streams, such as those stored on magnetic tape. Despite their name, these code points do not represent printable characters (i.e. they are not characters at all, but signals). For debugging purposes, "placeholder" symbols (such as those given in ISO 2047 and its predecessors) are assigned to them. For example, character 0x0A represents the "line feed" function (which causes a printer to advance its paper), and character 8 represents "backspace". refers to control characters that do not include carriage return, line feed or white space as non-whitespace control characters. Except for the control characters that prescribe elementary line-oriented formatting, ASCII does not define any mechanism for describing the structure or appearance of text within a document. Other schemes, such as markup languages, address page and document layout and formatting. The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this caused was sometimes intentional, for example where a character would be used slightly differently on a terminal link than on a data stream, and sometimes accidental, for example the standard is unclear about the meaning of "delete". Probably the most influential single device affecting the interpretation of these characters was the Teletype Model 33 ASR, which was a printing terminal with an available paper tape reader/punch option. Paper tape was a very popular medium for long-term program storage until the 1980s, less costly and in some ways less fragile than magnetic tape. In particular, the Teletype Model 33 machine assignments for codes 17 (control-Q, DC1, also known as XON), 19 (control-S, DC3, also known as XOFF), and 127 (delete) became de facto standards. The Model 33 was also notable for taking the description of control-G (code 7, BEL, meaning audibly alert the operator) literally, as the unit contained an actual bell which it rang when it received a BEL character. 
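The "ASCIIbetical" collation described in the Character order passage above appears in any comparison that uses raw code points. The short Python sketch below is illustrative and uses Python's default string ordering, which for ASCII text is exactly code-point order.

```python
# ASCIIbetical order: all uppercase letters (0x41-0x5A) sort before
# all lowercase letters (0x61-0x7A), and digits sort before both.
words = ["banana", "Cherry", "apple"]
assert sorted(words) == ["Cherry", "apple", "banana"]                 # raw code-point order
assert sorted(words, key=str.lower) == ["apple", "banana", "Cherry"]  # case-folded "intermediate" order
```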
Because the keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant use of code 15 (control-O, shift in), interpreted as "delete previous character", was also adopted by many early timesharing systems but eventually fell out of use. When a Teletype 33 ASR equipped with the automatic paper tape reader received a control-S (XOFF, an abbreviation for transmit off), it caused the tape reader to stop; receiving control-Q (XON, transmit on) caused the tape reader to resume. This so-called flow control technique was adopted by several early computer operating systems as a "handshaking" signal warning a sender to stop transmission because of impending buffer overflow; it persists to this day in many systems as a manual output control technique. On some systems, control-S retains its meaning, but control-Q is replaced by a second control-S to resume output. The 33 ASR also could be configured to employ control-R (DC2) and control-T (DC4) to start and stop the tape punch; on some units equipped with this function, the corresponding control character lettering on the keycap above the letter was TAPE and TAPE respectively. Delete vs backspace The Teletype could not move its typehead backwards, so it did not have a key on its keyboard to send a BS (backspace). Instead, there was a key marked "Rubout" that sent code 127 (DEL). The purpose of this key was to erase mistakes in a manually-input paper tape: the operator had to push a button on the tape punch to back it up, then type the rubout, which punched all holes and replaced the mistake with a character that was intended to be ignored. Teletypes were commonly used with the less-expensive computers from Digital Equipment Corporation (DEC); these systems had to use what keys were available, and thus the DEL character was assigned to erase the previous character. Because of this, DEC video terminals (by default) sent the DEL character for the key marked "Backspace" while the separate key marked "Delete" sent an escape sequence; many other competing terminals sent a BS character for the backspace key. The early Unix tty drivers, unlike some modern implementations, allowed only one character to be set to erase the previous character in canonical input processing (where a very simple line editor is available); this could be set to BS or DEL, but not both, resulting in recurring situations of ambiguity where users had to decide depending on what terminal they were using (shells that allow line editing, such as ksh, bash, and zsh, understand both). The assumption that no key sent a BS character allowed Ctrl+H to be used for other purposes, such as the "help" prefix command in GNU Emacs. Escape Many more of the control characters have been assigned meanings quite different from their original ones. The "escape" character (ESC, code 27), for example, was intended originally to allow sending of other control characters as literals instead of invoking their meaning, an "escape sequence". This is the same meaning of "escape" encountered in URL encodings, C language strings, and other systems where certain characters have a reserved meaning. Over time this interpretation has been co-opted and has eventually been changed. In modern usage, an ESC sent to the terminal usually indicates the start of a command sequence, which can be used to address the cursor, scroll a region, set/query various terminal properties, and more.
They are usually in the form of a so-called "ANSI escape code" (often starting with a "Control Sequence Introducer" or "CSI", the two-character sequence ESC [) from ECMA-48 (1972) and its successors. Some escape sequences do not have introducers, like the "Reset to Initial State" ("RIS") command ESC c. In contrast, an ESC read from the terminal is most often used as an out-of-band character used to terminate an operation or special mode, as in the TECO and vi text editors. In graphical user interface (GUI) and windowing systems, ESC generally causes an application to abort its current operation or to exit (terminate) altogether. End of line The inherent ambiguity of many control characters, combined with their historical usage, created problems when transferring "plain text" files between systems. The best example of this is the newline problem on various operating systems. Teletype machines required that a line of text be terminated with both "carriage return" (which moves the printhead to the beginning of the line) and "line feed" (which advances the paper one line without moving the printhead). The name "carriage return" comes from the fact that on a manual typewriter the carriage holding the paper moves while the typebars that strike the ribbon remain stationary. The entire carriage had to be pushed (returned) to the right in order to position the paper for the next line. DEC operating systems (OS/8, RT-11, RSX-11, RSTS, TOPS-10, etc.) used both characters to mark the end of a line so that the console device (originally Teletype machines) would work. By the time so-called "glass TTYs" (later called CRTs or "dumb terminals") came along, the convention was so well established that backward compatibility necessitated continuing to follow it. When Gary Kildall created CP/M, he was inspired by some of the command line interface conventions used in DEC's RT-11 operating system. Until the introduction of PC DOS in 1981, IBM had no influence on this because its 1970s operating systems used EBCDIC encoding instead of ASCII, and they were oriented toward punch-card input and line printer output, on which the concept of "carriage return" was meaningless. IBM's PC DOS (also marketed as MS-DOS by Microsoft) inherited the convention by virtue of being loosely based on CP/M, and Windows in turn inherited it from MS-DOS. Requiring two characters to mark the end of a line introduces unnecessary complexity and ambiguity as to how to interpret each character when encountered by itself. To simplify matters, plain text data streams, including files, on Multics used line feed (LF) alone as a line terminator. The tty driver would handle the LF to CRLF conversion on output so files could be printed directly to a terminal, and NL (newline) is often used to refer to CRLF in UNIX documents. Unix and Unix-like systems, and Amiga systems, adopted this convention from Multics. On the other hand, the original Macintosh OS, Apple DOS, and ProDOS used carriage return (CR) alone as a line terminator; however, since Apple later replaced these obsolete operating systems with their Unix-based macOS (formerly named OS X) operating system, they now use line feed (LF) as well. The Radio Shack TRS-80 also used a lone CR to terminate lines.
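Normalizing between the conventions described above is a common practical task. The Python sketch below is a minimal illustration (the function name is ad hoc, not a standard API) that converts CR LF and lone CR line endings to LF.

```python
# Convert the three historical line-ending conventions to a single one (LF).
# The order of the replacements matters: CR LF must be handled before lone CR.
def normalize_newlines(text: str) -> str:
    """Map CR LF (DOS/Windows) and lone CR (classic Mac OS) endings to LF (Unix)."""
    return text.replace("\r\n", "\n").replace("\r", "\n")

sample = "line one\r\nline two\rline three\n"
assert normalize_newlines(sample) == "line one\nline two\nline three\n"
```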
Computers attached to the ARPANET included machines running operating systems such as TOPS-10 and TENEX using CR-LF line endings; machines running operating systems such as Multics using LF line endings; and machines running operating systems such as OS/360 that represented lines as a character count followed by the characters of the line and which used EBCDIC rather than ASCII encoding. The Telnet protocol defined an ASCII "Network Virtual Terminal" (NVT), so that connections between hosts with different line-ending conventions and character sets could be supported by transmitting a standard text format over the network. Telnet used ASCII along with CR-LF line endings, and software using other conventions would translate between the local conventions and the NVT. The File Transfer Protocol adopted the Telnet protocol, including use of the Network Virtual Terminal, for use when transmitting commands and transferring data in the default ASCII mode. This adds complexity to implementations of those protocols, and to other network protocols, such as those used for E-mail and the World Wide Web, on systems not using the NVT's CR-LF line-ending convention. End of file/stream The PDP-6 monitor, and its PDP-10 successor TOPS-10, used control-Z (SUB) as an end-of-file indication for input from a terminal. Some operating systems such as CP/M tracked file length only in units of disk blocks, and used control-Z to mark the end of the actual text in the file. For these reasons, EOF, or end-of-file, was used colloquially and conventionally as a three-letter acronym for control-Z instead of SUBstitute. The end-of-text character (ETX), also known as control-C, was inappropriate for a variety of reasons, while using control-Z as the control character to end a file is analogous to the letter Z's position at the end of the alphabet, and serves as a very convenient mnemonic aid. A historically common and still prevalent convention uses the ETX character to interrupt and halt a program via an input data stream, usually from a keyboard. The Unix terminal driver uses the end-of-transmission character (EOT), also known as control-D, to indicate the end of a data stream. In the C programming language, and in Unix conventions, the null character is used to terminate text strings; such null-terminated strings can be known in abbreviation as ASCIZ or ASCIIZ, where Z stands for "zero". Table of codes Control code table Other representations might be used by specialist equipment, for example ISO 2047 graphics or hexadecimal numbers. Printable character table At the time of adoption, the codes 20hex to 7Ehex would cause the printing of a visible character (a glyph), and thus were designated "printable characters". These codes represent letters, digits, punctuation marks, and a few miscellaneous symbols. There are 95 printable characters in total. The empty space between words, as produced by the space bar of a keyboard, is character code 20hex. Although the space character is unique in having no visible glyph, it is considered a "printable character" and is listed in the printable character table, as per the ASCII standard, rather than in the control character table. Code 7Fhex corresponds to the non-printable "delete" (DEL) control character and is listed in the control character table. Earlier versions of ASCII used the up arrow instead of the caret (5Ehex) and the left arrow instead of the underscore (5Fhex).
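The code-point ranges just described can be checked mechanically. The Python sketch below (helper names are illustrative, not part of the standard) counts the 95 printable and 33 control code points.

```python
# ASCII partitions its 128 code points into 95 printable characters
# (0x20 space through 0x7E tilde) and 33 control characters (0x00-0x1F plus 0x7F DEL).
def is_ascii_printable(code: int) -> bool:
    return 0x20 <= code <= 0x7E

def is_ascii_control(code: int) -> bool:
    return code < 0x20 or code == 0x7F

assert sum(is_ascii_printable(c) for c in range(128)) == 95
assert sum(is_ascii_control(c) for c in range(128)) == 33
```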
Usage ASCII was first used commercially during 1963 as a seven-bit teleprinter code for American Telephone & Telegraph's TWX (TeletypeWriter eXchange) network. TWX originally used the earlier five-bit ITA2, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence. His British colleague Hugh McGregor Ross helped to popularize this work according to Bemer, "so much so that the code that was to become ASCII was first called the Bemer–Ross Code in Europe". Because of his extensive work on ASCII, Bemer has been called "the father of ASCII". On March 11, 1968, US President Lyndon B. Johnson mandated that all computers purchased by the United States Federal Government support ASCII, stating: I have also approved recommendations of the Secretary of Commerce [Luther H. Hodges] regarding standards for recording the Standard Code for Information Interchange on magnetic tapes and paper tapes when they are used in computer operations. All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used. ASCII was the most common character encoding on the World Wide Web until December 2007, when UTF-8 encoding surpassed it; UTF-8 is backward compatible with ASCII. Variants and derivations As computer technology spread throughout the world, different standards bodies and corporations developed many variations of ASCII to facilitate the expression of non-English languages that used Roman-based alphabets. One could class some of these variations as "ASCII extensions", although some misuse that term to represent all variants, including those that do not preserve ASCII's character-map in the 7-bit range. Furthermore, the ASCII extensions have also been mislabelled as ASCII. 7-bit codes From early in its development, ASCII was intended to be just one of several national variants of an international character code standard. Other international standards bodies have ratified character encodings such as ISO 646 (1967) that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet and symbols used outside the United States, such as the symbol for the United Kingdom's pound sterling (£); e.g. with code page 1104. Almost every country needed an adapted version of ASCII, since ASCII suited the needs of only the US and a few other countries. For example, Canada had its own version that supported French characters. Many other countries developed variants of ASCII to include non-English letters (e.g. é, ñ, ß, Ł), currency symbols (e.g. £, ¥), etc.
Technology
Software development: General
null
612
https://en.wikipedia.org/wiki/Arithmetic%20mean
Arithmetic mean
In mathematics and statistics, the arithmetic mean, arithmetic average, or just the mean or average (when the context is clear) is the sum of a collection of numbers divided by the count of numbers in the collection. The collection is often a set of results from an experiment, an observational study, or a survey. The term "arithmetic mean" is preferred in some mathematics and statistics contexts because it helps distinguish it from other types of means, such as geometric and harmonic. In addition to mathematics and statistics, the arithmetic mean is frequently used in economics, anthropology, history, and almost every academic field to some extent. For example, per capita income is the arithmetic average income of a nation's population. While the arithmetic mean is often used to report central tendencies, it is not a robust statistic: it is greatly influenced by outliers (values much larger or smaller than most others). For skewed distributions, such as the distribution of income for which a few people's incomes are substantially higher than most people's, the arithmetic mean may not coincide with one's notion of "middle". In that case, robust statistics, such as the median, may provide a better description of central tendency. Definition The arithmetic mean of a set of observed data is equal to the sum of the numerical values of each observation, divided by the total number of observations. Symbolically, for a data set consisting of the values $x_1, \ldots, x_n$, the arithmetic mean $\bar{x}$ is defined by the formula: $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$. (For an explanation of the summation operator, see summation.) In simpler terms, the formula for the arithmetic mean is: $\text{mean} = \frac{\text{sum of all observations}}{\text{number of observations}}$. For example, if the monthly salaries of employees are , then the arithmetic mean is: If the data set is a statistical population (i.e., consists of every possible observation and not just a subset of them), then the mean of that population is called the population mean and denoted by the Greek letter $\mu$. If the data set is a statistical sample (a subset of the population), it is called the sample mean (which for a data set is denoted as $\bar{x}$). The arithmetic mean can be similarly defined for vectors in multiple dimensions, not only scalar values; this is often referred to as a centroid. More generally, because the arithmetic mean is a convex combination (meaning its coefficients sum to $1$), it can be defined on a convex space, not only a vector space. History The statistician Churchill Eisenhart, senior research fellow at the U. S. National Bureau of Standards, traced the history of the arithmetic mean in detail. In the modern age it started to be used as a way of combining various observations that should be identical, but were not, such as estimates of the direction of magnetic north. In 1635 the mathematician Henry Gellibrand described as “meane” the midpoint of a lowest and highest number, not quite the arithmetic mean. In 1668, a person known as “DB” was quoted in the Transactions of the Royal Society describing “taking the mean” of five values. Motivating properties The arithmetic mean has several properties that make it interesting, especially as a measure of central tendency. These include: If numbers $x_1, \ldots, x_n$ have mean $\bar{x}$, then $(x_1 - \bar{x}) + \cdots + (x_n - \bar{x}) = 0$. Since $x_i - \bar{x}$ is the distance from a given number to the mean, one way to interpret this property is by saying that the numbers to the left of the mean are balanced by the numbers to the right. The mean is the only number for which the residuals (deviations from the estimate) sum to zero.
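As a concrete illustration of the definition above, the short Python sketch below computes an arithmetic mean; the data values are invented for illustration and do not come from the article.

```python
# The arithmetic mean: sum of the observations divided by their count.
def arithmetic_mean(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    return sum(values) / len(values)

data = [4, 36, 45, 50, 75]            # hypothetical observations
assert arithmetic_mean(data) == 42.0  # (4 + 36 + 45 + 50 + 75) / 5
```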
This can also be interpreted as saying that the mean is translationally invariant in the sense that for any real number $a$, $\frac{1}{n}\sum_{i=1}^{n} (x_i + a) = \bar{x} + a$. If it is required to use a single number as a "typical" value for a set of known numbers $x_1, \ldots, x_n$, then the arithmetic mean of the numbers does this best since it minimizes the sum of squared deviations from the typical value: the sum of $(x_i - \bar{x})^2$. The sample mean is also the best single predictor because it has the lowest root mean squared error. If the arithmetic mean of a population of numbers is desired, then the estimate of it that is unbiased is the arithmetic mean of a sample drawn from the population. The arithmetic mean is independent of the scale of the units of measurement, in the sense that $\frac{1}{n}\sum_{i=1}^{n} c x_i = c \cdot \frac{1}{n}\sum_{i=1}^{n} x_i$ for any constant $c$. So, for example, calculating a mean of liters and then converting to gallons is the same as converting to gallons first and then calculating the mean. This is also called first order homogeneity. Additional properties The arithmetic mean of a sample is always between the largest and smallest values in that sample. The arithmetic mean of any number of equal-sized groups of numbers taken together is the arithmetic mean of the arithmetic means of each group. Contrast with median The arithmetic mean may be contrasted with the median. The median is defined such that no more than half the values are larger, and no more than half are smaller than it. If elements in the data increase arithmetically when placed in some order, then the median and arithmetic average are equal. For example, consider the data sample . The mean is , as is the median. However, when we consider a sample that cannot be arranged to increase arithmetically, such as , the median and arithmetic average can differ significantly. In this case, the arithmetic average is , while the median is . The average value can vary considerably from most values in the sample and can be larger or smaller than most. There are applications of this phenomenon in many fields. For example, since the 1980s, the median income in the United States has increased more slowly than the arithmetic average of income. Generalizations Weighted average A weighted average, or weighted mean, is an average in which some data points count more heavily than others in that they are given more weight in the calculation. For example, the arithmetic mean of and is , or equivalently . In contrast, a weighted mean in which the first number receives, for example, twice as much weight as the second (perhaps because it is assumed to appear twice as often in the general population from which these numbers were sampled) would be calculated as . Here the weights, which necessarily sum to one, are $\tfrac{2}{3}$ and $\tfrac{1}{3}$, the former being twice the latter. The arithmetic mean (sometimes called the "unweighted average" or "equally weighted average") can be interpreted as a special case of a weighted average in which all weights are equal to the same number ($\tfrac{1}{2}$ in the above example, and $\tfrac{1}{n}$ in a situation with $n$ numbers being averaged). Continuous probability distributions If a numerical property, and any sample of data from it, can take on any value from a continuous range instead of, for example, just integers, then the probability of a number falling into some range of possible values can be described by integrating a continuous probability distribution across this range, even when the naive probability for a sample number taking one certain value from infinitely many is zero.
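A weighted mean of the kind described above can be written in a few lines. The following Python sketch uses made-up numbers (the article's own numerical example is omitted in this copy) and an ad hoc function name.

```python
# Weighted mean: each value is multiplied by its weight, and the result is
# divided by the total weight, so the effective weights sum to one.
def weighted_mean(values, weights):
    """Return the weighted mean of values with the given (not necessarily normalized) weights."""
    total_weight = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_weight

# Equal weights reproduce the ordinary arithmetic mean ...
assert weighted_mean([3, 5], [1, 1]) == 4.0
# ... while giving the first value twice the weight pulls the result toward it.
assert abs(weighted_mean([3, 5], [2, 1]) - 11 / 3) < 1e-12
```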
In this context, the analog of a weighted average, in which there are infinitely many possibilities for the precise value of the variable in each range, is called the mean of the probability distribution. The most widely encountered probability distribution is called the normal distribution; it has the property that all measures of its central tendency, including not just the mean but also the median mentioned above and the mode (the three Ms), are equal. This equality does not hold for other probability distributions, as illustrated by the log-normal distribution. Angles Particular care is needed when using cyclic data, such as phases or angles. Taking the arithmetic mean of 1° and 359° yields a result of 180°. This is incorrect for two reasons: Firstly, angle measurements are only defined up to an additive constant of 360° (or $2\pi$, if measuring in radians). Thus, these could easily be called 1° and -1°, or 361° and 719°, since each one of them produces a different average. Secondly, in this situation, 0° (or 360°) is geometrically a better average value: there is lower dispersion about it (the points are both 1° from it and 179° from 180°, the putative average). In general application, such an oversight will lead to the average value artificially moving towards the middle of the numerical range. A solution to this problem is to use the optimization formulation (that is, define the mean as the central point: the point about which one has the lowest dispersion) and redefine the difference as a modular distance (i.e., the distance on the circle: so the modular distance between 1° and 359° is 2°, not 358°). Symbols and encoding The arithmetic mean is often denoted by a bar (vinculum or macron), as in $\bar{x}$. Some software (text processors, web browsers) may not display the "x̄" symbol correctly. For example, the HTML symbol "x̄" combines two codes: the base letter "x" plus a code for the line above ( ̄ or ¯). In some document formats (such as PDF), the symbol may be replaced by a "¢" (cent) symbol when copied to a text processor such as Microsoft Word.
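For the angle-averaging problem described above, one widely used alternative to the naive arithmetic mean (not the optimization formulation sketched in the text, but a closely related technique) is the circular mean, which averages the angles as unit vectors. The Python sketch below is illustrative only.

```python
import math

# Circular mean: represent each angle as a unit vector, average the vectors,
# and recover the angle of the resulting vector with atan2.
def circular_mean_degrees(angles):
    """Circular mean of angles in degrees, returned in the range [0, 360)."""
    sin_sum = sum(math.sin(math.radians(a)) for a in angles)
    cos_sum = sum(math.cos(math.radians(a)) for a in angles)
    return math.degrees(math.atan2(sin_sum, cos_sum)) % 360

# The naive arithmetic mean of 1 degree and 359 degrees is 180 degrees;
# the circular mean is 0 (equivalently 360), matching the geometric intuition above.
assert round(circular_mean_degrees([1, 359]), 6) in (0.0, 360.0)
```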
Mathematics
Statistics
null
621
https://en.wikipedia.org/wiki/Amphibian
Amphibian
Amphibians are ectothermic, anamniotic, four-limbed vertebrate animals that constitute the class Amphibia. In its broadest sense, it is a paraphyletic group encompassing all tetrapods excluding the amniotes (tetrapods with an amniotic membrane, such as modern reptiles, birds and mammals). All extant (living) amphibians belong to the monophyletic subclass Lissamphibia, with three living orders: Anura (frogs and toads), Urodela (salamanders), and Gymnophiona (caecilians). Evolved to be mostly semiaquatic, amphibians have adapted to inhabit a wide variety of habitats, with most species living in freshwater, wetland or terrestrial ecosystems (such as riparian woodland, fossorial and even arboreal habitats). Their life cycle typically starts out as aquatic larvae with gills known as tadpoles, but some species have developed behavioural adaptations to bypass this. Young amphibians generally undergo metamorphosis from an aquatic larval form with gills to an air-breathing adult form with lungs. Amphibians use their skin as a secondary respiratory interface and some small terrestrial salamanders and frogs lack lungs and rely entirely on their skin. They are superficially similar to reptiles like lizards, but unlike reptiles and other amniotes, require access to water bodies to breed. With their complex reproductive needs and permeable skins, amphibians are often ecological indicators to habitat conditions; in recent decades there has been a dramatic decline in amphibian populations for many species around the globe. The earliest amphibians evolved in the Devonian period from tetrapodomorph sarcopterygians (lobe-finned fish with articulated limb-like fins) that evolved primitive lungs, which were helpful in adapting to dry land. They diversified and became ecologically dominant during the Carboniferous and Permian periods, but were later displaced in terrestrial environments by early reptiles and basal synapsids (predecessors of mammals). The origin of modern lissamphibians, which first appeared during the Early Triassic, around 250 million years ago, has long been contentious. The most popular hypothesis is that they likely originated from temnospondyls, the most diverse group of prehistoric amphibians, during the Permian period. Another hypothesis is that they emerged from lepospondyls. A fourth group of lissamphibians, the Albanerpetontidae, became extinct around 2 million years ago. The number of known amphibian species is approximately 8,000, of which nearly 90% are frogs. The smallest amphibian (and vertebrate) in the world is a frog from New Guinea (Paedophryne amauensis) with a length of just . The largest living amphibian is the South China giant salamander (Andrias sligoi), but this is dwarfed by prehistoric temnospondyls such as Mastodonsaurus which could reach up to in length. The study of amphibians is called batrachology, while the study of both reptiles and amphibians is called herpetology. Classification The word amphibian is derived from the Ancient Greek term (), which means 'both kinds of life', meaning 'of both kinds' and meaning 'life'. The term was initially used as a general adjective for animals that could live on land or in water, including seals and otters. Traditionally, the class Amphibia includes all tetrapod vertebrates that are not amniotes. 
Amphibia in its widest sense () was divided into three subclasses, two of which are extinct: Subclass Lepospondyli† (A potentially polyphyletic Late Paleozoic group of small forms, likely more closely related to amniotes than Lissamphibia) Subclass Temnospondyli† (diverse Late Paleozoic and early Mesozoic grade, some of which were large predators) Subclass Lissamphibia (all modern amphibians, including frogs, toads, salamanders, newts and caecilians) Salientia (frogs, toads and relatives): Early Triassic to present—7,360 current species in 53 families. Modern (crown group) salientians are described via the name Anura. Caudata (salamanders, newts and relatives): Late Triassic to present—764 current species in 9 families. Modern (crown group) caudatans are described via the name Urodela. Gymnophiona (caecilians and relatives): Late Triassic to present—215 current species in 10 families. The name Apoda is also sometimes used for caecilians. Allocaudata† (Albanerpetontidae) Middle Jurassic – Early Pleistocene These three subclasses do not include all extinct amphibians. Other extinct amphibian groups include Embolomeri (Late Paleozoic large aquatic predators), Seymouriamorpha (semiaquatic to terrestrial Permian forms related to amniotes)[citation needed], among others. Names such as Tetrapoda and Stegocephalia encompass the entirety of amphibian-grade tetrapods, while Reptiliomorpha or Anthracosauria are variably used to describe extinct amphibians more closely related to amniotes than to lissamphibians. The actual number of species in each group depends on the taxonomic classification followed. The two most common systems are the classification adopted by the website AmphibiaWeb, University of California, Berkeley, and the classification by herpetologist Darrel Frost and the American Museum of Natural History, available as the online reference database "Amphibian Species of the World". The numbers of species cited above follows Frost and the total number of known (living) amphibian species as of March 31, 2019, is exactly 8,000, of which nearly 90% are frogs. With the phylogenetic classification, the taxon Labyrinthodontia has been discarded as it is a polyparaphyletic group without unique defining features apart from shared primitive characteristics. Classification varies according to the preferred phylogeny of the author and whether they use a stem-based or a node-based classification. Traditionally, amphibians as a class are defined as all tetrapods with a larval stage, while the group that includes the common ancestors of all living amphibians (frogs, salamanders and caecilians) and all their descendants is called Lissamphibia. The phylogeny of Paleozoic amphibians is uncertain, and Lissamphibia may possibly fall within extinct groups, like the Temnospondyli (traditionally placed in the subclass Labyrinthodontia) or the Lepospondyli, and in some analyses even in the amniotes. This means that advocates of phylogenetic nomenclature have removed a large number of basal Devonian and Carboniferous amphibian-type tetrapod groups that were formerly placed in Amphibia in Linnaean taxonomy, and included them elsewhere under cladistic taxonomy. If the common ancestor of amphibians and amniotes is included in Amphibia, it becomes a paraphyletic group. All modern amphibians are included in the subclass Lissamphibia, which is usually considered a clade, a group of species that have evolved from a common ancestor. 
The three modern orders are Anura (the frogs), Caudata (or Urodela, the salamanders), and Gymnophiona (or Apoda, the caecilians). It has been suggested that salamanders arose separately from a temnospondyl-like ancestor, and even that caecilians are the sister group of the advanced reptiliomorph amphibians, and thus of amniotes. Although the fossils of several older proto-frogs with primitive characteristics are known, the oldest "true frog", with hopping adaptations is Prosalirus bitis, from the Early Jurassic Kayenta Formation of Arizona. It is anatomically very similar to modern frogs. The oldest known caecilians are Funcusvermis gilmorei (from the Late Triassic) and Eocaecilia micropodia (from the Early Jurassic), both from Arizona. The earliest salamander is Beiyanerpeton jianpingensis from the Late Jurassic of northeastern China. Authorities disagree as to whether Salientia is a superorder that includes the order Anura, or whether Anura is a sub-order of the order Salientia. The Lissamphibia are traditionally divided into three orders, but an extinct salamander-like family, the Albanerpetontidae, is now considered part of Lissamphibia alongside the superorder Salientia. Furthermore, Salientia includes all three recent orders plus the Triassic proto-frog, Triadobatrachus. Evolutionary history The first major groups of amphibians developed in the Devonian period, around 370 million years ago, from lobe-finned fish which were similar to the modern coelacanth and lungfish. These ancient lobe-finned fish had evolved multi-jointed leg-like fins with digits that enabled them to crawl along the sea bottom. Some fish had developed primitive lungs that help them breathe air when the stagnant pools of the Devonian swamps were low in oxygen. They could also use their strong fins to hoist themselves out of the water and onto dry land if circumstances so required. Eventually, their bony fins would evolve into limbs and they would become the ancestors to all tetrapods, including modern amphibians, reptiles, birds, and mammals. Despite being able to crawl on land, many of these prehistoric tetrapodomorph fish still spent most of their time in the water. They had started to develop lungs, but still breathed predominantly with gills. Many examples of species showing transitional features have been discovered. Ichthyostega was one of the first primitive amphibians, with nostrils and more efficient lungs. It had four sturdy limbs, a neck, a tail with fins and a skull very similar to that of the lobe-finned fish, Eusthenopteron. Amphibians evolved adaptations that allowed them to stay out of the water for longer periods. Their lungs improved and their skeletons became heavier and stronger, better able to support the weight of their bodies on land. They developed "hands" and "feet" with five or more digits; the skin became more capable of retaining body fluids and resisting desiccation. The fish's hyomandibula bone in the hyoid region behind the gills diminished in size and became the stapes of the amphibian ear, an adaptation necessary for hearing on dry land. An affinity between the amphibians and the teleost fish is the multi-folded structure of the teeth and the paired supra-occipital bones at the back of the head, neither of these features being found elsewhere in the animal kingdom. 
At the end of the Devonian period (360 million years ago), the seas, rivers and lakes were teeming with life while the land was the realm of early plants and devoid of vertebrates, though some, such as Ichthyostega, may have sometimes hauled themselves out of the water. It is thought they may have propelled themselves with their forelimbs, dragging their hindquarters in a similar manner to that used by the elephant seal. In the early Carboniferous (360 to 323 million years ago), the climate was relatively wet and warm. Extensive swamps developed with mosses, ferns, horsetails and calamites. Air-breathing arthropods evolved and invaded the land where they provided food for the carnivorous amphibians that began to adapt to the terrestrial environment. There were no other tetrapods on the land and the amphibians were at the top of the food chain, with some occupying ecological positions currently held by crocodiles. Though equipped with limbs and the ability to breathe air, most still had a long tapering body and strong tail. Others were the top land predators, sometimes reaching several metres in length, preying on the large insects of the period and the many types of fish in the water. They still needed to return to water to lay their shell-less eggs, and even most modern amphibians have a fully aquatic larval stage with gills like their fish ancestors. It was the development of the amniotic egg, which prevents the developing embryo from drying out, that enabled the reptiles to reproduce on land and which led to their dominance in the period that followed. After the Carboniferous rainforest collapse amphibian dominance gave way to reptiles, and amphibians were further devastated by the Permian–Triassic extinction event. During the Triassic Period (252 to 201 million years ago), the reptiles continued to out-compete the amphibians, leading to a reduction in both the amphibians' size and their importance in the biosphere. According to the fossil record, Lissamphibia, which includes all modern amphibians and is the only surviving lineage, may have branched off from the extinct groups Temnospondyli and Lepospondyli at some period between the Late Carboniferous and the Early Triassic. The relative scarcity of fossil evidence precludes precise dating, but the most recent molecular study, based on multilocus sequence typing, suggests a Late Carboniferous/Early Permian origin for extant amphibians. The origins and evolutionary relationships between the three main groups of amphibians is a matter of debate. A 2005 molecular phylogeny, based on rDNA analysis, suggests that salamanders and caecilians are more closely related to each other than they are to frogs. It also appears that the divergence of the three groups took place in the Paleozoic or early Mesozoic (around 250 million years ago), before the breakup of the supercontinent Pangaea and soon after their divergence from the lobe-finned fish. The briefness of this period, and the swiftness with which radiation took place, would help account for the relative scarcity of primitive amphibian fossils. There are large gaps in the fossil record, the discovery of the dissorophoid temnospondyl Gerobatrachus from the Early Permian in Texas in 2008 provided a missing link with many of the characteristics of modern frogs. Molecular analysis suggests that the frog–salamander divergence took place considerably earlier than the palaeontological evidence indicates. 
One study suggested that the last common ancestor of all modern amphibians lived about 315 million years ago, and that stereospondyl temnospondyls are the closest relatives to the caecilians. However, most studies support a single monophyletic origin of all modern amphibians within the dissorophoid temnospondyls. As they evolved from lung-bearing fish, amphibians had to make certain adaptations for living on land, including the need to develop new means of locomotion. In the water, the sideways thrusts of their tails had propelled them forward, but on land, quite different mechanisms were required. Their vertebral columns, limbs, limb girdles and musculature needed to be strong enough to raise them off the ground for locomotion and feeding. Terrestrial adults discarded their lateral line systems and adapted their sensory systems to receive stimuli via the medium of the air. They needed to develop new methods to regulate their body heat to cope with fluctuations in ambient temperature. They developed behaviours suitable for reproduction in a terrestrial environment. Their skins were exposed to harmful ultraviolet rays that had previously been absorbed by the water. The skin changed to become more protective and prevent excessive water loss. Characteristics The superclass Tetrapoda is divided into four classes of vertebrate animals with four limbs. Reptiles, birds and mammals are amniotes, the eggs of which are either laid or carried by the female and are surrounded by several membranes, some of which are impervious. Lacking these membranes, amphibians require water bodies for reproduction, although some species have developed various strategies for protecting or bypassing the vulnerable aquatic larval stage. They are not found in the sea with the exception of one or two frogs that live in brackish water in mangrove swamps; Anderson's salamander, meanwhile, occurs in brackish or salt water lakes. On land, amphibians are restricted to moist habitats because of the need to keep their skin damp. Modern amphibians have a simplified anatomy compared to their ancestors due to paedomorphosis, caused by two evolutionary trends: miniaturization and an unusually large genome, which result in a slower growth and development rate compared to other vertebrates. Another reason for their small size is associated with their rapid metamorphosis, which seems to have evolved only in the ancestors of Lissamphibia; in all other known lines the development was much more gradual. Because remodelling of the feeding apparatus means they do not eat during metamorphosis, the metamorphosis has to proceed faster the smaller the individual is, so it happens at an early stage, while the larvae are still small. (The largest species of salamanders do not go through a metamorphosis.) Amphibians that lay eggs on land often go through the whole metamorphosis inside the egg. An anamniotic terrestrial egg is less than 1 cm in diameter due to diffusion problems, a size which puts a limit on the amount of posthatching growth. The smallest amphibian (and vertebrate) in the world is a microhylid frog from New Guinea (Paedophryne amauensis) first discovered in 2012. It has an average length of and is part of a genus that contains four of the world's ten smallest frog species. 
The largest living amphibian is the Chinese giant salamander (Andrias davidianus) but this is a great deal smaller than the largest amphibian that ever existed—the extinct Prionosuchus, a crocodile-like temnospondyl dating to 270 million years ago from the middle Permian of Brazil. The largest frog is the African Goliath frog (Conraua goliath), which can reach and weigh . Amphibians are ectothermic (cold-blooded) vertebrates that do not maintain their body temperature through internal physiological processes. Their metabolic rate is low and as a result, their food and energy requirements are limited. In the adult state, they have tear ducts and movable eyelids, and most species have ears that can detect airborne or ground vibrations. They have muscular tongues, which in many species can be protruded. Modern amphibians have fully ossified vertebrae with articular processes. Their ribs are usually short and may be fused to the vertebrae. Their skulls are mostly broad and short, and are often incompletely ossified. Their skin contains little keratin and lacks scales, apart from a few fish-like scales in certain caecilians. The skin contains many mucous glands and in some species, poison glands (a type of granular gland). The hearts of amphibians have three chambers, two atria and one ventricle. They have a urinary bladder and nitrogenous waste products are excreted primarily as urea. Most amphibians lay their eggs in water and have aquatic larvae that undergo metamorphosis to become terrestrial adults. Amphibians breathe by means of a pump action in which air is first drawn into the buccopharyngeal region through the nostrils. These are then closed and the air is forced into the lungs by contraction of the throat. They supplement this with gas exchange through the skin. Anura The order Anura (from the Ancient Greek a(n)- meaning "without" and oura meaning "tail") comprises the frogs and toads. They usually have long hind limbs that fold underneath them, shorter forelimbs, webbed toes with no claws, no tails, large eyes and glandular moist skin. Members of this order with smooth skins are commonly referred to as frogs, while those with warty skins are known as toads. The difference is not a formal one taxonomically and there are numerous exceptions to this rule. Members of the family Bufonidae are known as the "true toads". Frogs range in size from the Goliath frog (Conraua goliath) of West Africa to the Paedophryne amauensis, first described in Papua New Guinea in 2012, which is also the smallest known vertebrate. Although most species are associated with water and damp habitats, some are specialised to live in trees or in deserts. They are found worldwide except for polar areas. Anura is divided into three suborders that are broadly accepted by the scientific community, but the relationships between some families remain unclear. Future molecular studies should provide further insights into their evolutionary relationships. The suborder Archaeobatrachia contains four families of primitive frogs. These are Ascaphidae, Bombinatoridae, Discoglossidae and Leiopelmatidae which have few derived features and are probably paraphyletic with regard to other frog lineages. The six families in the more evolutionarily advanced suborder Mesobatrachia are the fossorial Megophryidae, Pelobatidae, Pelodytidae, Scaphiopodidae and Rhinophrynidae and the obligatorily aquatic Pipidae. These have certain characteristics that are intermediate between the two other suborders. 
Neobatrachia is by far the largest suborder and includes the remaining families of modern frogs, including most common species. Approximately 96% of the over 5,000 extant species of frog are neobatrachians. Caudata The order Caudata (from the Latin cauda meaning "tail") consists of the salamanders—elongated, low-slung animals that mostly resemble lizards in form. This is a symplesiomorphic trait and they are no more closely related to lizards than they are to mammals. Salamanders lack claws, have scale-free skins, either smooth or covered with tubercles, and tails that are usually flattened from side to side and often finned. They range in size from the Chinese giant salamander (Andrias davidianus), which has been reported to grow to a length of , to the diminutive Thorius pennatulus from Mexico, which seldom exceeds in length. Salamanders have a mostly Laurasian distribution, being present in much of the Holarctic region of the northern hemisphere. The family Plethodontidae is also found in Central America and South America north of the Amazon basin; South America was apparently invaded from Central America by about the start of the Miocene, 23 million years ago. Urodela is a name sometimes used for all the extant species of salamanders. Members of several salamander families have become paedomorphic and either fail to complete their metamorphosis or retain some larval characteristics as adults. Most salamanders are under long. They may be terrestrial or aquatic and many spend part of the year in each habitat. When on land, they mostly spend the day hidden under stones or logs or in dense vegetation, emerging in the evening and night to forage for worms, insects and other invertebrates. The suborder Cryptobranchoidea contains the primitive salamanders. A number of fossil cryptobranchids have been found, but there are only three living species, the Chinese giant salamander (Andrias davidianus), the Japanese giant salamander (Andrias japonicus) and the hellbender (Cryptobranchus alleganiensis) from North America. These large amphibians retain several larval characteristics in their adult state; gill slits are present and the eyes are unlidded. A unique feature is their ability to feed by suction, depressing either the left side of their lower jaw or the right. The males excavate nests, persuade females to lay their egg strings inside them, and guard them. As well as breathing with lungs, they respire through the many folds in their thin skin, which has capillaries close to the surface. The suborder Salamandroidea contains the advanced salamanders. They differ from the cryptobranchids by having fused prearticular bones in the lower jaw, and by using internal fertilisation. In salamandrids, the male deposits a bundle of sperm, the spermatophore, and the female picks it up and inserts it into her cloaca where the sperm is stored until the eggs are laid. The largest family in this group is Plethodontidae, the lungless salamanders, which includes 60% of all salamander species. The family Salamandridae includes the true salamanders and the name "newt" is given to members of its subfamily Pleurodelinae. The third suborder, Sirenoidea, contains the four species of sirens, which are in a single family, Sirenidae. Members of this suborder are eel-like aquatic salamanders with much reduced forelimbs and no hind limbs. Some of their features are primitive while others are derived. 
Fertilisation is likely to be external as sirenids lack the cloacal glands used by male salamandrids to produce spermatophores and the females lack spermathecae for sperm storage. Despite this, the eggs are laid singly, a behaviour not conducive for external fertilisation. Gymnophiona The order Gymnophiona (from the Greek gymnos meaning "naked" and ophis meaning "serpent") or Apoda comprises the caecilians. These are long, cylindrical, limbless animals with a snake- or worm-like form. The adults vary in length from 8 to 75 centimetres (3 to 30 inches) with the exception of Thomson's caecilian (Caecilia thompsoni), which can reach . A caecilian's skin has a large number of transverse folds and in some species contains tiny embedded dermal scales. It has rudimentary eyes covered in skin, which are probably limited to discerning differences in light intensity. It also has a pair of short tentacles near the eye that can be extended and which have tactile and olfactory functions. Most caecilians live underground in burrows in damp soil, in rotten wood and under plant debris, but some are aquatic. Most species lay their eggs underground and when the larvae hatch, they make their way to adjacent bodies of water. Others brood their eggs and the larvae undergo metamorphosis before the eggs hatch. A few species give birth to live young, nourishing them with glandular secretions while they are in the oviduct. Caecilians have a mostly Gondwanan distribution, being found in tropical regions of Africa, Asia and Central and South America. Anatomy and physiology Skin The integumentary structure contains some typical characteristics common to terrestrial vertebrates, such as the presence of highly cornified outer layers, renewed periodically through a moulting process controlled by the pituitary and thyroid glands. Local thickenings (often called warts) are common, such as those found on toads. The outside of the skin is shed periodically mostly in one piece, in contrast to mammals and birds where it is shed in flakes. Amphibians often eat the sloughed skin. Caecilians are unique among amphibians in having mineralized dermal scales embedded in the dermis between the furrows in the skin. The similarity of these to the scales of bony fish is largely superficial. Lizards and some frogs have somewhat similar osteoderms forming bony deposits in the dermis, but this is an example of convergent evolution with similar structures having arisen independently in diverse vertebrate lineages. Amphibian skin is permeable to water. Gas exchange can take place through the skin (cutaneous respiration) and this allows adult amphibians to respire without rising to the surface of water and to hibernate at the bottom of ponds. To compensate for their thin and delicate skin, amphibians have evolved mucous glands, principally on their heads, backs and tails. The secretions produced by these help keep the skin moist. In addition, most species of amphibian have granular glands that secrete distasteful or poisonous substances. Some amphibian toxins can be lethal to humans while others have little effect. The main poison-producing glands, the parotoids, produce the neurotoxin bufotoxin and are located behind the ears of toads, along the backs of frogs, behind the eyes of salamanders and on the upper surface of caecilians. The skin colour of amphibians is produced by three layers of pigment cells called chromatophores. 
These three cell layers consist of the melanophores (occupying the deepest layer), the guanophores (forming an intermediate layer and containing many granules, producing a blue-green colour) and the lipophores (yellow, the most superficial layer). The colour change displayed by many species is initiated by hormones secreted by the pituitary gland. Unlike bony fish, there is no direct control of the pigment cells by the nervous system, and this results in the colour change taking place more slowly than happens in fish. A vividly coloured skin usually indicates that the species is toxic and is a warning sign to predators. Skeletal system and locomotion Amphibians have a skeletal system that is structurally homologous to other tetrapods, though with a number of variations. They all have four limbs except for the legless caecilians and a few species of salamander with reduced or no limbs. The bones are hollow and lightweight. The musculoskeletal system is strong to enable it to support the head and body. The bones are fully ossified and the vertebrae interlock with each other by means of overlapping processes. The pectoral girdle is supported by muscle, and the well-developed pelvic girdle is attached to the backbone by a pair of sacral ribs. The ilium slopes forward and the body is held closer to the ground than is the case in mammals. In most amphibians, there are four digits on the fore foot and five on the hind foot, but no claws on either. Some salamanders have fewer digits and the amphiumas are eel-like in appearance with tiny, stubby legs. The sirens are aquatic salamanders with stumpy forelimbs and no hind limbs. The caecilians are limbless. They burrow in the manner of earthworms with zones of muscle contractions moving along the body. On the surface of the ground or in water they move by undulating their body from side to side. In frogs, the hind legs are larger than the fore legs, especially so in those species that principally move by jumping or swimming. In the walkers and runners the hind limbs are not so large, and the burrowers mostly have short limbs and broad bodies. The feet have adaptations for the way of life, with webbing between the toes for swimming, broad adhesive toe pads for climbing, and keratinised tubercles on the hind feet for digging (frogs usually dig backwards into the soil). In most salamanders, the limbs are short and more or less the same length and project at right angles from the body. Locomotion on land is by walking and the tail often swings from side to side or is used as a prop, particularly when climbing. In their normal gait, only one leg is advanced at a time in the manner adopted by their ancestors, the lobe-finned fish. Some salamanders in the genus Aneides and certain plethodontids climb trees and have long limbs, large toepads and prehensile tails. In aquatic salamanders and in frog tadpoles, the tail has dorsal and ventral fins and is moved from side to side as a means of propulsion. Adult frogs do not have tails and caecilians have only very short ones. Salamanders use their tails in defence and some are prepared to jettison them to save their lives in a process known as autotomy. Certain species in the Plethodontidae have a weak zone at the base of the tail and use this strategy readily. The tail often continues to twitch after separation which may distract the attacker and allow the salamander to escape. Both tails and limbs can be regenerated. Adult frogs are unable to regrow limbs but tadpoles can do so. 
Circulatory system Amphibians have a juvenile stage and an adult stage, and the circulatory systems of the two are distinct. In the juvenile (or tadpole) stage, the circulation is similar to that of a fish; the two-chambered heart pumps the blood through the gills where it is oxygenated, and is spread around the body and back to the heart in a single loop. In the adult stage, amphibians (especially frogs) lose their gills and develop lungs. They have a heart that consists of a single ventricle and two atria. When the ventricle starts contracting, deoxygenated blood is pumped through the pulmonary artery to the lungs. Continued contraction then pumps oxygenated blood around the rest of the body. Mixing of the two bloodstreams is minimized by the anatomy of the chambers. Nervous and sensory systems The nervous system is basically the same as in other vertebrates, with a central brain, a spinal cord, and nerves throughout the body. The amphibian brain is relatively simple but broadly the same structurally as in reptiles, birds and mammals. Their brains are elongated, except in caecilians, and contain the usual motor and sensory areas of tetrapods. The pineal body, known to regulate sleep patterns in humans, is thought to produce the hormones involved in hibernation and aestivation in amphibians. Tadpoles retain the lateral line system of their ancestral fishes, but this is lost in terrestrial adult amphibians. Many aquatic salamanders and some caecilians possess electroreceptors called ampullary organs (completely absent in anurans) that allow them to locate objects around them when submerged in water. The ears are well developed in frogs. There is no external ear, but the large circular eardrum lies on the surface of the head just behind the eye. This vibrates and sound is transmitted through a single bone, the stapes, to the inner ear. Only high-frequency sounds like mating calls are heard in this way, but low-frequency noises can be detected through another mechanism. There is a patch of specialized hair cells, called the papilla amphibiorum, in the inner ear capable of detecting deeper sounds. Another feature, unique to frogs and salamanders, is the columella-operculum complex adjoining the auditory capsule which is involved in the transmission of both airborne and seismic signals. The ears of salamanders and caecilians are less highly developed than those of frogs as they do not normally communicate with each other through the medium of sound. The eyes of tadpoles lack lids, but at metamorphosis, the cornea becomes more dome-shaped, the lens becomes flatter, and eyelids and associated glands and ducts develop. The adult eyes are an improvement on invertebrate eyes and were a first step in the development of more advanced vertebrate eyes. They allow colour vision and depth of focus. In the retinas are green rods, which are receptive to a wide range of wavelengths. Digestive and excretory systems Many amphibians catch their prey by flicking out an elongated tongue with a sticky tip and drawing it back into the mouth before seizing the item with their jaws. Some use inertial feeding to help them swallow the prey, repeatedly thrusting their head forward sharply causing the food to move backwards in their mouth by inertia. Most amphibians swallow their prey whole without much chewing so they possess voluminous stomachs. The short oesophagus is lined with cilia that help to move the food to the stomach and mucus produced by glands in the mouth and pharynx eases its passage. 
The enzyme chitinase produced in the stomach helps digest the chitinous cuticle of arthropod prey. Amphibians possess a pancreas, liver and gall bladder. The liver is usually large with two lobes. Its size is determined by its function as a glycogen and fat storage unit, and may change with the seasons as these reserves are built or used up. Adipose tissue is another important means of storing energy and this occurs in the abdomen (in internal structures called fat bodies), under the skin and, in some salamanders, in the tail. There are two kidneys located dorsally, near the roof of the body cavity. Their job is to filter the blood of metabolic waste and transport the urine via ureters to the urinary bladder where it is stored before being passed out periodically through the cloacal vent. Larvae and most aquatic adult amphibians excrete the nitrogen as ammonia in large quantities of dilute urine, while terrestrial species, with a greater need to conserve water, excrete the less toxic product urea. Some tree frogs with limited access to water excrete most of their metabolic waste as uric acid. Urinary bladder Most aquatic and semi-aquatic amphibians have a membranous skin which allows them to absorb water directly through it. Some semi-aquatic animals also have a similarly permeable bladder membrane. As a result, they tend to have high rates of urine production to offset this high water intake, and have urine which is low in dissolved salts. The urinary bladder assists such animals to retain salts. Some aquatic amphibians, such as Xenopus, do not reabsorb water from the bladder, which prevents excessive water influx. For land-dwelling amphibians, dehydration results in reduced urine output. The amphibian bladder is usually highly distensible and among some land-dwelling species of frogs and salamanders may account for between 20% and 50% of their total body weight. Urine flows from the kidneys through the ureters into the bladder and is periodically released from the bladder to the cloaca. Respiratory system The lungs in amphibians are primitive compared to those of amniotes, possessing few internal septa and large alveoli, and consequently having a comparatively slow diffusion rate for oxygen entering the blood. Ventilation is accomplished by buccal pumping. Most amphibians, however, are able to exchange gases with the water or air via their skin. To enable sufficient cutaneous respiration, the surface of their highly vascularised skin must remain moist to allow the oxygen to diffuse at a sufficiently high rate. Because oxygen concentration in the water increases at both low temperatures and high flow rates, aquatic amphibians in these situations can rely primarily on cutaneous respiration, as in the Titicaca water frog and the hellbender salamander. In air, where oxygen is more concentrated, some small species can rely solely on cutaneous gas exchange, most famously the plethodontid salamanders, which have neither lungs nor gills. Many aquatic salamanders and all tadpoles have gills in their larval stage, with some (such as the axolotl) retaining gills as aquatic adults. Reproduction For the purpose of reproduction, most amphibians require fresh water although some lay their eggs on land and have developed various means of keeping them moist. A few (e.g. Fejervarya raja) can inhabit brackish water, but there are no true marine amphibians. There are reports, however, of particular amphibian populations unexpectedly invading marine waters. 
Such was the case with the Black Sea invasion of the natural hybrid Pelophylax esculentus reported in 2010. Several hundred frog species in adaptive radiations (e.g., Eleutherodactylus, the Pacific Platymantis, the Australo-Papuan microhylids, and many other tropical frogs), however, do not need any water for breeding in the wild. They reproduce via direct development, an ecological and evolutionary adaptation that has allowed them to be completely independent from free-standing water. Almost all of these frogs live in wet tropical rainforests and their eggs hatch directly into miniature versions of the adult, passing through the tadpole stage within the egg. Reproductive success of many amphibians is dependent not only on the quantity of rainfall, but also on its seasonal timing. In the tropics, many amphibians breed continuously or at any time of year. In temperate regions, breeding is mostly seasonal, usually in the spring, and is triggered by increasing day length, rising temperatures or rainfall. Experiments have shown the importance of temperature, but the trigger event, especially in arid regions, is often a storm. In anurans, males usually arrive at the breeding sites before females and the vocal chorus they produce may stimulate ovulation in females and the endocrine activity of males that are not yet reproductively active. In caecilians, fertilisation is internal, the male extruding an intromittent organ, the phallodeum, and inserting it into the female cloaca. The paired Müllerian glands inside the male cloaca secrete a fluid which resembles that produced by mammalian prostate glands and which may transport and nourish the sperm. Fertilisation probably takes place in the oviduct. The majority of salamanders also engage in internal fertilisation. In most of these, the male deposits a spermatophore, a small packet of sperm on top of a gelatinous cone, on the substrate either on land or in the water. The female takes up the sperm packet by grasping it with the lips of the cloaca and pushing it into the vent. The spermatozoa move to the spermatheca in the roof of the cloaca where they remain until ovulation, which may be many months later. Courtship rituals and methods of transfer of the spermatophore vary between species. In some, the spermatophore may be placed directly into the female cloaca while in others, the female may be guided to the spermatophore or restrained with an embrace called amplexus. Certain primitive salamanders in the families Sirenidae, Hynobiidae and Cryptobranchidae practise external fertilisation in a similar manner to frogs, with the female laying the eggs in water and the male releasing sperm onto the egg mass. With a few exceptions, frogs use external fertilisation. The male grasps the female tightly with his forelimbs either behind the arms or in front of the back legs, or in the case of Epipedobates tricolor, around the neck. They remain in amplexus with their cloacae positioned close together while the female lays the eggs and the male covers them with sperm. Roughened nuptial pads on the male's hands aid in retaining grip. Often the male collects and retains the egg mass, forming a sort of basket with the hind feet. An exception is the granular poison frog (Oophaga granulifera) where the male and female place their cloacae in close proximity while facing in opposite directions and then release eggs and sperm simultaneously. The tailed frog (Ascaphus truei) exhibits internal fertilisation. 
The "tail" is only possessed by the male and is an extension of the cloaca and used to inseminate the female. This frog lives in fast-flowing streams and internal fertilisation prevents the sperm from being washed away before fertilisation occurs. The sperm may be retained in storage tubes attached to the oviduct until the following spring. Most frogs can be classified as either prolonged or explosive breeders. Typically, prolonged breeders congregate at a breeding site, the males usually arriving first, calling and setting up territories. Other satellite males remain quietly nearby, waiting for their opportunity to take over a territory. The females arrive sporadically, mate selection takes place and eggs are laid. The females depart and territories may change hands. More females appear and in due course, the breeding season comes to an end. Explosive breeders on the other hand are found where temporary pools appear in dry regions after rainfall. These frogs are typically fossorial species that emerge after heavy rains and congregate at a breeding site. They are attracted there by the calling of the first male to find a suitable place, perhaps a pool that forms in the same place each rainy season. The assembled frogs may call in unison and frenzied activity ensues, the males scrambling to mate with the usually smaller number of females. There is a direct competition between males to win the attention of the females in salamanders and newts, with elaborate courtship displays to keep the female's attention long enough to get her interested in choosing him to mate with. Some species store sperm through long breeding seasons, as the extra time may allow for interactions with rival sperm. Unisexual reproduction Unisexual female mole salamanders (genus Ambystoma) are common in the Great Lakes region of North America. These salamanders are the oldest known unisexual vertebrate lineage, having emerged about 5 million years ago. Genome exchange can sometimes occur between the unisexual female Ambystoma and males from sympatric sexual species. Life cycle Most amphibians go through metamorphosis, a process of significant morphological change after birth. In typical amphibian development, eggs are laid in water and larvae are adapted to an aquatic lifestyle. Frogs, toads and salamanders all hatch from the egg as larvae with external gills. Metamorphosis in amphibians is regulated by thyroxine concentration in the blood, which stimulates metamorphosis, and prolactin, which counteracts thyroxine's effect. Specific events are dependent on threshold values for different tissues. Because most embryonic development is outside the parental body, it is subject to many adaptations due to specific environmental circumstances. For this reason tadpoles can have horny ridges instead of teeth, whisker-like skin extensions or fins. They also make use of a sensory lateral line organ similar to that of fish. After metamorphosis, these organs become redundant and will be reabsorbed by controlled cell death, called apoptosis. The variety of adaptations to specific environmental circumstances among amphibians is wide, with many discoveries still being made. Eggs In the egg, the embryo is suspended in perivitelline fluid and surrounded by semi-permeable gelatinous capsules, with the yolk mass providing nutrients. As the larvae hatch, the capsules are dissolved by enzymes secreted from gland at the tip of the snout. The eggs of some salamanders and frogs contain unicellular green algae. 
These penetrate the jelly envelope after the eggs are laid and may increase the supply of oxygen to the embryo through photosynthesis. They seem to both speed up the development of the larvae and reduce mortality. In the wood frog (Rana sylvatica), the interior of the globular egg cluster has been found to be up to warmer than its surroundings, which is an advantage in its cool northern habitat. The eggs may be deposited singly, in clusters or in long strands. Sites for laying eggs include water, mud, burrows, debris and on plants or under logs or stones. The greenhouse frog (Eleutherodactylus planirostris) lays eggs in small groups in the soil where they develop in about two weeks directly into juvenile frogs without an intervening larval stage. The tungara frog (Physalaemus pustulosus) builds a floating nest from foam to protect its eggs. First a raft is built, then eggs are laid in the centre, and finally a foam cap is overlaid. The foam has anti-microbial properties. It contains no detergents but is created by whipping up proteins and lectins secreted by the female. Larvae The eggs of amphibians are typically laid in water and hatch into free-living larvae that complete their development in water and later transform into either aquatic or terrestrial adults. In many species of frog and in most lungless salamanders (Plethodontidae), direct development takes place, the larvae growing within the eggs and emerging as miniature adults. Many caecilians and some other amphibians lay their eggs on land, and the newly hatched larvae wriggle or are transported to water bodies. Some caecilians, the alpine salamander (Salamandra atra) and some of the African live-bearing toads (Nectophrynoides spp.) are viviparous. Their larvae feed on glandular secretions and develop within the female's oviduct, often for long periods. Other amphibians, but not caecilians, are ovoviviparous. The eggs are retained in or on the parent's body, but the larvae subsist on the yolks of their eggs and receive no nourishment from the adult. The larvae emerge at varying stages of their growth, either before or after metamorphosis, according to their species. The toad genus Nectophrynoides exhibits all of these developmental patterns among its dozen or so members. Amphibian larvae are known as tadpoles. They have thick, rounded bodies with powerful muscular tails. Frogs Unlike in other amphibians, frog tadpoles do not resemble adults. The free-living larvae are normally fully aquatic, but the tadpoles of some species (such as Nannophrys ceylonensis) are semi-terrestrial and live among wet rocks. Tadpoles have cartilaginous skeletons, gills for respiration (external gills at first, internal gills later), lateral line systems and large tails that they use for swimming. Newly hatched tadpoles soon develop gill pouches that cover the gills. These internal gills and operculum are not homologous with those of fish, and are only found in tadpoles as both salamanders and caecilians have external gills only. Combined with buccal pumping, the internal gills have allowed tadpoles to adopt a filter-feeding lifestyle, although several species have since evolved other types of feeding strategies. The lungs develop early and are used as accessory breathing organs, the tadpoles rising to the water surface to gulp air. Some species complete their development inside the egg and hatch directly into small frogs. These larvae do not have gills but instead have specialised areas of skin through which respiration takes place. 
While tadpoles do not have true teeth, in most species, the jaws have long, parallel rows of small keratinized structures called keradonts surrounded by a horny beak. Front legs are formed under the gill sac and hind legs become visible a few days later. Iodine and T4 (which overstimulate the spectacular apoptosis [programmed cell death] of the cells of the larval gills, tail and fins) also stimulate the development of the nervous system, transforming the aquatic, vegetarian tadpole into a terrestrial, carnivorous frog with better neurological, visuospatial, olfactory and cognitive abilities for hunting. In fact, tadpoles developing in ponds and streams are typically herbivorous. Pond tadpoles tend to have deep bodies, large caudal fins and small mouths; they swim in the quiet waters feeding on growing or loose fragments of vegetation. Stream dwellers mostly have larger mouths, shallow bodies and caudal fins; they attach themselves to plants and stones and feed on the surface films of algae and bacteria. They also feed on diatoms, filtered from the water through the gills, and stir up the sediment at the bottom of the pond, ingesting edible fragments. They have a relatively long, spiral-shaped gut to enable them to digest this diet. Some species are carnivorous at the tadpole stage, eating insects, smaller tadpoles and fish. Young of the Cuban tree frog (Osteopilus septentrionalis) can occasionally be cannibalistic, the younger tadpoles attacking a larger, more developed tadpole when it is undergoing metamorphosis. At metamorphosis, rapid changes in the body take place as the lifestyle of the frog changes completely. The spiral-shaped mouth with horny tooth ridges is reabsorbed together with the spiral gut. The animal develops a large jaw, and its gills disappear along with its gill sac. Eyes and legs grow quickly, and a tongue is formed. There are associated changes in the neural networks such as development of stereoscopic vision and loss of the lateral line system. All this can happen in about a day. A few days later, the tail is reabsorbed, due to the higher thyroxine concentration required for this to take place. Salamanders At hatching, a typical salamander larva has eyes without lids, teeth in both upper and lower jaws, three pairs of feathery external gills, and a long tail with dorsal and ventral fins. The forelimbs may be partially developed and the hind limbs are rudimentary in pond-living species but may be rather more developed in species that reproduce in moving water. Pond-type larvae often have a pair of balancers, rod-like structures on either side of the head that may prevent the gills from becoming clogged up with sediment. Some have larvae that never fully develop into the adult form, a condition known as neoteny; both neotenic individuals and those that complete metamorphosis are able to breed. Neoteny occurs when the animal's growth rate is very low and is usually linked to adverse conditions such as low water temperatures that may change the response of the tissues to the hormone thyroxine, as well as a lack of food. There are fifteen species of obligate neotenic salamanders, including species of Necturus, Proteus and Amphiuma, and many examples of facultative ones, such as the northwestern salamander (Ambystoma gracile) and the tiger salamander (A. tigrinum) that adopt this strategy under appropriate environmental circumstances. Lungless salamanders in the family Plethodontidae are terrestrial and lay a small number of unpigmented eggs in a cluster among damp leaf litter. 
Each egg has a large yolk sac and the larva feeds on this while it develops inside the egg, emerging fully formed as a juvenile salamander. The female salamander often broods the eggs. In the genus Ensatina, the female has been observed to coil around them and press her throat area against them, effectively massaging them with a mucous secretion. In newts and salamanders, metamorphosis is less dramatic than in frogs. This is because the larvae are already carnivorous and continue to feed as predators when they are adults so few changes are needed to their digestive systems. Their lungs are functional early, but the larvae do not make as much use of them as do tadpoles. Their gills are never covered by gill sacs and are reabsorbed just before the animals leave the water. Other changes include the reduction in size or loss of tail fins, the closure of gill slits, thickening of the skin, the development of eyelids, and certain changes in dentition and tongue structure. Salamanders are at their most vulnerable at metamorphosis as swimming speeds are reduced and transforming tails are encumbrances on land. Adult salamanders often have an aquatic phase in spring and summer, and a land phase in winter. For adaptation to a water phase, prolactin is the required hormone, and for adaptation to the land phase, thyroxine. External gills do not return in subsequent aquatic phases because these are completely absorbed upon leaving the water for the first time. Caecilians Most terrestrial caecilians that lay eggs do so in burrows or moist places on land near bodies of water. The development of the young of Ichthyophis glutinosus, a species from Sri Lanka, has been much studied. The eel-like larvae hatch out of the eggs and make their way to water. They have three pairs of external red feathery gills, a blunt head with two rudimentary eyes, a lateral line system and a short tail with fins. They swim by undulating their body from side to side. They are mostly active at night, soon lose their gills and make sorties onto land. Metamorphosis is gradual. By the age of about ten months they have developed a pointed head with sensory tentacles near the mouth and lost their eyes, lateral line systems and tails. The skin thickens, embedded scales develop and the body divides into segments. By this time, the caecilian has constructed a burrow and is living on land. In the majority of species of caecilians, the young are produced by viviparity. Typhlonectes compressicauda, a species from South America, is typical of these. Up to nine larvae can develop in the oviduct at any one time. They are elongated and have paired sac-like gills, small eyes and specialised scraping teeth. At first, they feed on the yolks of the eggs, but as this source of nourishment declines they begin to rasp at the ciliated epithelial cells that line the oviduct. This stimulates the secretion of fluids rich in lipids and mucoproteins on which they feed along with scrapings from the oviduct wall. They may increase their length sixfold and be two-fifths as long as their mother before being born. By this time they have undergone metamorphosis, lost their eyes and gills, developed a thicker skin and mouth tentacles, and reabsorbed their teeth. A permanent set of teeth grows through soon after birth. Gills are only necessary during embryonic development, and in species that give birth, the offspring are born after the gills have degenerated. 
In egg-laying caecilians, the gills are either reabsorbed before hatching or, in species that hatch with gill remnants still present, short-lived, leaving behind only a gill slit. In species with scales embedded under the skin, the scales do not form until metamorphosis. The ringed caecilian (Siphonops annulatus) has developed a unique adaptation for the purposes of reproduction. The progeny feed on a skin layer that is specially developed by the adult in a phenomenon known as maternal dermatophagy. The brood feed as a batch for about seven minutes at intervals of approximately three days, which gives the skin an opportunity to regenerate. Meanwhile, they have been observed to ingest fluid exuded from the maternal cloaca. Parental care The care of offspring among amphibians has been little studied but, in general, the larger the number of eggs in a batch, the less likely it is that any degree of parental care takes place. Nevertheless, it is estimated that in up to 20% of amphibian species, one or both adults play some role in the care of the young. Those species that breed in smaller water bodies or other specialised habitats tend to have complex patterns of behaviour in the care of their young. Many woodland salamanders lay clutches of eggs under dead logs or stones on land. The black mountain salamander (Desmognathus welteri) does this, the mother brooding the eggs and guarding them from predation as the embryos feed on the yolks of their eggs. When fully developed, they break their way out of the egg capsules and disperse as juvenile salamanders. The male hellbender, a primitive salamander, excavates an underwater nest and encourages females to lay there. The male then guards the site for the two or three months before the eggs hatch, using body undulations to fan the eggs and increase their supply of oxygen. The male Colostethus subpunctatus, a tiny frog, protects the egg cluster which is hidden under a stone or log. When the eggs hatch, the male transports the tadpoles on his back, stuck there by a mucous secretion, to a temporary pool where he dips himself into the water and the tadpoles drop off. The male midwife toad (Alytes obstetricans) winds egg strings round his thighs and carries the eggs around for up to eight weeks. He keeps them moist and when they are ready to hatch, he visits a pond or ditch and releases the tadpoles. The female gastric-brooding frog (Rheobatrachus spp.) reared larvae in her stomach after swallowing either the eggs or hatchlings; however, this stage was never observed before the species became extinct. The tadpoles secrete a hormone that inhibits digestion in the mother whilst they develop by consuming their very large yolk supply. The pouched frog (Assa darlingtoni) lays eggs on the ground. When they hatch, the male carries the tadpoles around in brood pouches on his hind legs. The aquatic Surinam toad (Pipa pipa) raises its young in pores on its back where they remain until metamorphosis. The granular poison frog (Oophaga granulifera) is typical of a number of tree frogs in the poison dart frog family Dendrobatidae. Its eggs are laid on the forest floor and when they hatch, the tadpoles are carried one by one on the back of an adult to a suitable water-filled crevice such as the axil of a leaf or the rosette of a bromeliad. The female visits the nursery sites regularly and deposits unfertilised eggs in the water and these are consumed by the tadpoles. 
Genetics and genomics Amphibians are notable among vertebrates for their diversity of chromosomes and genomes. The karyotypes (chromosomes) have been determined for at least 1,193 (14.5%) of the ≈8,200 known (diploid) species, including 963 anurans, 209 salamanders, and 21 caecilians. Generally, the karyotypes of diploid amphibians are characterized by 20–26 bi-armed chromosomes. Amphibians also have very large genomes compared to other vertebrate taxa, and a correspondingly wide variation in genome size (C-value: picograms of DNA in haploid nuclei). The genome sizes range from 0.95 to 11.5 pg in frogs, from 13.89 to 120.56 pg in salamanders, and from 2.94 to 11.78 pg in caecilians (a rough conversion of these picogram values into gigabases is sketched below). The large genome sizes have prevented whole-genome sequencing of amphibians, although a number of genomes have been published recently. The 1.7 Gb draft genome of Xenopus tropicalis was the first to be reported for amphibians in 2010. Compared to some salamanders this frog genome is tiny. For instance, the genome of the Mexican axolotl turned out to be 32 Gb, which is more than 10 times larger than the human genome (3 Gb). Feeding and diet With a few exceptions, adult amphibians are predators, feeding on virtually anything that moves that they can swallow. The diet mostly consists of small prey that do not move too fast such as beetles, caterpillars, earthworms and spiders. The sirens (Siren spp.) often ingest aquatic plant material with the invertebrates on which they feed and a Brazilian tree frog (Xenohyla truncata) includes a large quantity of fruit in its diet. The Mexican burrowing toad (Rhinophrynus dorsalis) has a specially adapted tongue for picking up ants and termites. The tongue is projected with the tip foremost, whereas other frogs flick out the rear part first, their tongues being hinged at the front. Food is mostly selected by sight, even in conditions of dim light. Movement of the prey triggers a feeding response. Frogs have been caught on fish hooks baited with red flannel and green frogs (Rana clamitans) have been found with stomachs full of elm seeds that they had seen floating past. Toads, salamanders and caecilians also use smell to detect prey. This response is mostly secondary because salamanders have been observed to remain stationary near odoriferous prey but only feed if it moves. Cave-dwelling amphibians normally hunt by smell. Some salamanders seem to have learned to recognize immobile prey when it has no smell, even in complete darkness. Amphibians usually swallow food whole but may chew it lightly first to subdue it. They typically have small hinged pedicellate teeth, a feature unique to amphibians. The base and crown of these are composed of dentine separated by an uncalcified layer and they are replaced at intervals. Salamanders, caecilians and some frogs have one or two rows of teeth in both jaws, but some frogs (Rana spp.) lack teeth in the lower jaw, and toads (Bufo spp.) have no teeth. In many amphibians there are also vomerine teeth attached to a facial bone in the roof of the mouth. The tiger salamander (Ambystoma tigrinum) is typical of the frogs and salamanders that hide under cover ready to ambush unwary invertebrates. Other amphibians, such as the Bufo spp. toads, actively search for prey, while the Argentine horned frog (Ceratophrys ornata) lures inquisitive prey closer by raising its hind feet over its back and vibrating its yellow toes. 
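To relate the picogram C-values quoted above to the genome sizes given in gigabases, the minimal Python sketch below applies the commonly used approximation that 1 pg of DNA corresponds to roughly 0.978 Gb (this conversion factor is an assumption from outside the article text). The picogram ranges and the axolotl and human genome sizes are simply the figures cited in this section; the script illustrates the conversion rather than supplying authoritative data.

```python
# Sketch: convert amphibian C-values (pg of DNA per haploid nucleus) into
# approximate genome sizes in gigabases (Gb), using the widely cited
# approximation 1 pg ~ 0.978 Gb. The pg ranges below are those quoted in
# the text; treat the output as a rough illustration only.

PG_TO_GB = 0.978  # approximately 978 megabases of DNA per picogram (assumed factor)

c_value_ranges_pg = {
    "frogs": (0.95, 11.5),
    "salamanders": (13.89, 120.56),
    "caecilians": (2.94, 11.78),
}

def pg_to_gb(picograms: float) -> float:
    """Convert a C-value in picograms to an approximate genome size in Gb."""
    return picograms * PG_TO_GB

for group, (low, high) in c_value_ranges_pg.items():
    print(f"{group}: ~{pg_to_gb(low):.1f}-{pg_to_gb(high):.1f} Gb")

# Comparison quoted in the text: the axolotl genome (~32 Gb) versus the
# human genome (~3 Gb), i.e. a ratio of more than tenfold.
axolotl_gb, human_gb = 32.0, 3.0
print(f"axolotl/human genome size ratio: ~{axolotl_gb / human_gb:.0f}x")
```

Running the sketch reproduces the comparison made in the text: salamander genomes span roughly 14 to 118 Gb, and the axolotl genome is about eleven times the size of the human genome.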
Among leaf litter frogs in Panama, frogs that actively hunt prey have narrow mouths and are slim, often brightly coloured and toxic, while ambushers have wide mouths and are broad and well-camouflaged. Caecilians do not flick their tongues, but catch their prey by grabbing it with their slightly backward-pointing teeth. The struggles of the prey and further jaw movements work it inwards and the caecilian usually retreats into its burrow. The subdued prey is gulped down whole. When they are newly hatched, frog larvae feed on the yolk of the egg. When this is exhausted some move on to feed on bacteria, algal crusts, detritus and raspings from submerged plants. Water is drawn in through their mouths, which are usually at the bottom of their heads, and passes through branchial food traps between their mouths and their gills where fine particles are trapped in mucus and filtered out. Others have specialised mouthparts consisting of a horny beak edged by several rows of labial teeth. They scrape and bite food of many kinds as well as stirring up the bottom sediment, filtering out larger particles with the papillae around their mouths. Some, such as the spadefoot toads, have strong biting jaws and are carnivorous or even cannibalistic. Vocalization The calls made by caecilians and salamanders are limited to occasional soft squeaks, grunts or hisses and have not been much studied. A clicking sound sometimes produced by caecilians may be a means of orientation, as in bats, or a form of communication. Most salamanders are considered voiceless, but the California giant salamander (Dicamptodon ensatus) has vocal cords and can produce a rattling or barking sound. Some species of salamander emit a quiet squeak or yelp if attacked. Frogs are much more vocal, especially during the breeding season when they use their voices to attract mates. The presence of a particular species in an area may be more easily discerned by its characteristic call than by a fleeting glimpse of the animal itself. In most species, the sound is produced by expelling air from the lungs over the vocal cords into one or more air sacs in the throat or at the corner of the mouth. This may distend like a balloon and acts as a resonator, helping to transfer the sound to the atmosphere, or the water at times when the animal is submerged. The main vocalisation is the male's loud advertisement call which seeks to both encourage a female to approach and discourage other males from intruding on its territory. This call is modified to a quieter courtship call on the approach of a female or to a more aggressive version if a male intruder draws near. Calling carries the risk of attracting predators and involves the expenditure of much energy. Other calls include those given by a female in response to the advertisement call and a release call given by a male or female during unwanted attempts at amplexus. When a frog is attacked, a distress or fright call is emitted, often resembling a scream. The usually nocturnal Cuban tree frog (Osteopilus septentrionalis) produces a rain call when there is rainfall during daylight hours. Territorial behaviour Little is known of the territorial behaviour of caecilians, but some frogs and salamanders defend home ranges. These are usually feeding, breeding or sheltering sites. Males normally exhibit such behaviour though in some species, females and even juveniles are also involved. 
Although in many frog species, females are larger than males, this is not the case in most species where males are actively involved in territorial defence. Some of these have specific adaptations such as enlarged teeth for biting or spines on the chest, arms or thumbs. In salamanders, defence of a territory involves adopting an aggressive posture and if necessary attacking the intruder. This may involve snapping, chasing and sometimes biting, occasionally causing the loss of a tail. The behaviour of red back salamanders (Plethodon cinereus) has been much studied. 91% of marked individuals that were later recaptured were within a metre (yard) of their original daytime retreat under a log or rock. A similar proportion, when moved experimentally a distance of , found their way back to their home base. The salamanders left odour marks around their territories which averaged in size and were sometimes inhabited by a male and female pair. These deterred the intrusion of others and delineated the boundaries between neighbouring areas. Much of their behaviour seemed stereotyped and did not involve any actual contact between individuals. An aggressive posture involved raising the body off the ground and glaring at the opponent who often turned away submissively. If the intruder persisted, a biting lunge was usually launched at either the tail region or the naso-labial grooves. Damage to either of these areas can reduce the fitness of the rival, either because of the need to regenerate tissue or because it impairs its ability to detect food. In frogs, male territorial behaviour is often observed at breeding locations; calling is both an announcement of ownership of part of this resource and an advertisement call to potential mates. In general, a deeper voice represents a heavier and more powerful individual, and this may be sufficient to prevent intrusion by smaller males. Much energy is used in the vocalization and it takes a toll on the territory holder who may be displaced by a fitter rival if he tires. There is a tendency for males to tolerate the holders of neighbouring territories while vigorously attacking unknown intruders. Holders of territories have a "home advantage" and usually come off better in an encounter between two similar-sized frogs. If threats are insufficient, chest to chest tussles may take place. Fighting methods include pushing and shoving, deflating the opponent's vocal sac, seizing him by the head, jumping on his back, biting, chasing, splashing, and ducking him under the water. Defence mechanisms Amphibians have soft bodies with thin skins, and lack claws, defensive armour, or spines. Nevertheless, they have evolved various defence mechanisms to keep themselves alive. The first line of defence in salamanders and frogs is the mucous secretion that they produce. This keeps their skin moist and makes them slippery and difficult to grip. The secretion is often sticky and distasteful or toxic. Snakes have been observed yawning and gaping when trying to swallow African clawed frogs (Xenopus laevis), which gives the frogs an opportunity to escape. Caecilians have been little studied in this respect, but the Cayenne caecilian (Typhlonectes compressicauda) produces toxic mucus that has killed predatory fish in a feeding experiment in Brazil. In some salamanders, the skin is poisonous. 
The rough-skinned newt (Taricha granulosa) from North America and other members of its genus contain the neurotoxin tetrodotoxin (TTX), the most toxic non-protein substance known and almost identical to that produced by pufferfish. Handling the newts does not cause harm, but ingestion of even the most minute amounts of the skin is deadly. In feeding trials, fish, frogs, reptiles, birds and mammals were all found to be susceptible. The only predators with some tolerance to the poison are certain populations of common garter snake (Thamnophis sirtalis). In locations where both snake and salamander co-exist, the snakes have developed immunity through genetic changes and they feed on the amphibians with impunity. Coevolution occurs with the newt increasing its toxic capabilities at the same rate as the snake further develops its immunity. Some frogs and toads are toxic, the main poison glands being at the side of the neck and under the warts on the back. These regions are presented to the attacking animal and their secretions may be foul-tasting or cause various physical or neurological symptoms. Altogether, over 200 toxins have been isolated from the limited number of amphibian species that have been investigated. Poisonous species often use bright colouring to warn potential predators of their toxicity. These warning colours tend to be red or yellow combined with black, with the fire salamander (Salamandra salamandra) being an example. Once a predator has sampled one of these, it is likely to remember the colouration next time it encounters a similar animal. In some species, such as the fire-bellied toad (Bombina spp.), the warning colouration is on the belly and these animals adopt a defensive pose when attacked, exhibiting their bright colours to the predator. The frog Allobates zaparo is not poisonous, but mimics the appearance of other toxic species in its locality, a strategy that may deceive predators. Many amphibians are nocturnal and hide during the day, thereby avoiding diurnal predators that hunt by sight. Other amphibians use camouflage to avoid being detected. They have various colourings such as mottled browns, greys and olives to blend into the background. Some salamanders adopt defensive poses when faced by a potential predator such as the North American northern short-tailed shrew (Blarina brevicauda). Their bodies writhe and they raise and lash their tails which makes it difficult for the predator to avoid contact with their poison-producing granular glands. A few salamanders will autotomise their tails when attacked, sacrificing this part of their anatomy to enable them to escape. The tail may have a constriction at its base to allow it to be easily detached. The tail is regenerated later, but the energy cost to the animal of replacing it is significant. Some frogs and toads inflate themselves to make themselves look large and fierce, and some spadefoot toads (Pelobates spp.) scream and leap towards the attacker. Giant salamanders of the genus Andrias, as well as ceratophryine and Pyxicephalus frogs, possess sharp teeth and are capable of drawing blood with a defensive bite. The blackbelly salamander (Desmognathus quadramaculatus) can bite an attacking common garter snake (Thamnophis sirtalis) two or three times its size on the head and often manages to escape. Cognition In amphibians, there is evidence of habituation, associative learning through both classical and instrumental learning, and discrimination abilities. 
Amphibians are widely considered to be sentient, able to feel emotions such as anxiety and fear. In one experiment, when offered live fruit flies (Drosophila virilis), salamanders chose the larger of 1 vs 2 and 2 vs 3. Frogs can distinguish between low numbers (1 vs 2, 2 vs 3, but not 3 vs 4) and large numbers (3 vs 6, 4 vs 8, but not 4 vs 6) of prey. This is irrespective of other characteristics, i.e. surface area, volume, weight and movement, although discrimination among large numbers may be based on surface area. Conservation Dramatic declines in amphibian populations, including population crashes and mass localized extinction, have been noted since the late 1980s from locations all over the world, and amphibian declines are thus perceived to be one of the most critical threats to global biodiversity. In 2004, the International Union for Conservation of Nature (IUCN) reported that extinction rates for birds, mammals and amphibians were at a minimum 48 times greater than natural extinction rates, and possibly as much as 1,024 times higher. In 2006, there were believed to be 4,035 species of amphibians that depended on water at some stage during their life cycle. Of these, 1,356 (33.6%) were considered to be threatened and this figure is likely to be an underestimate because it excludes 1,427 species for which there was insufficient data to assess their status. A number of causes are believed to be involved, including habitat destruction and modification, over-exploitation, pollution, introduced species, global warming, endocrine-disrupting pollutants, destruction of the ozone layer (ultraviolet radiation has been shown to be especially damaging to the skin, eyes, and eggs of amphibians), and diseases like chytridiomycosis. However, many of the causes of amphibian declines are still poorly understood, and are a topic of ongoing discussion. Food webs and predation Any decline in amphibian numbers will affect the patterns of predation. The loss of carnivorous species near the top of the food chain will upset the delicate ecosystem balance and may cause dramatic increases in opportunistic species. Predators that feed on amphibians are affected by their decline. The western terrestrial garter snake (Thamnophis elegans) in California is largely aquatic and depends heavily on two species of frog that are decreasing in numbers, the Yosemite toad (Bufo canorus) and the mountain yellow-legged frog (Rana muscosa), putting the snake's future at risk. If the snake were to become scarce, this would affect birds of prey and other predators that feed on it. Meanwhile, in the ponds and lakes, fewer frogs means fewer tadpoles. These normally play an important role in controlling the growth of algae and also forage on detritus that accumulates as sediment on the bottom. A reduction in the number of tadpoles may lead to an overgrowth of algae, resulting in depletion of oxygen in the water when the algae later die and decompose. Aquatic invertebrates and fish might then die and there would be unpredictable ecological consequences. Pollution and pesticides The decline in amphibian and reptile populations has led to an awareness of the effects of pesticides on reptiles and amphibians. Until recently, the argument that amphibians and reptiles were more susceptible to chemical contamination than other terrestrial or aquatic vertebrates was not supported by research. Because amphibians and reptiles have complex life cycles and live in a range of climatic and ecological zones, they are more vulnerable to chemical exposure. 
Certain pesticides, such as organophosphates and carbamates, act via cholinesterase inhibition; neonicotinoids also disrupt cholinergic signalling, but by binding to nicotinic acetylcholine receptors rather than by inhibiting the enzyme. Cholinesterase is an enzyme that hydrolyses acetylcholine, an excitatory neurotransmitter that is abundant in the nervous system. AChE inhibitors are either reversible or irreversible, and carbamates are generally less hazardous than organophosphorus insecticides, which are more likely to cause cholinergic poisoning. Exposure of reptiles to an AChE-inhibiting pesticide may disrupt their neural function, and the buildup of these inhibitory effects can impair motor performance and activities such as feeding. Conservation and protection strategies The Amphibian Specialist Group of the IUCN is spearheading efforts to implement a comprehensive global strategy for amphibian conservation. Amphibian Ark is an organization that was formed to implement the ex-situ conservation recommendations of this plan, and they have been working with zoos and aquaria around the world, encouraging them to create assurance colonies of threatened amphibians. One such project is the Panama Amphibian Rescue and Conservation Project that built on existing conservation efforts in Panama to create a country-wide response to the threat of chytridiomycosis. Another measure would be to stop exploitation of frogs for human consumption. In the Middle East, a growing appetite for frog legs, and the consequent gathering of frogs for food, has been linked to an increase in mosquitoes and thus has direct consequences for human health.
Biology and health sciences
Biology
null
633
https://en.wikipedia.org/wiki/Algae
Algae
Algae ( , ; : alga ) is an informal term for any organisms of a large and diverse group of photosynthetic eukaryotes, which include species from multiple distinct clades. Such organisms range from unicellular microalgae such as Chlorella, Prototheca and the diatoms, to multicellular macroalgae such as the giant kelp, a large brown alga which may grow up to in length. Most algae are aquatic organisms and lack many of the distinct cell and tissue types, such as stomata, xylem and phloem that are found in land plants. The largest and most complex marine algae are called seaweeds. In contrast, the most complex freshwater forms are the Charophyta, a division of green algae which includes, for example, Spirogyra and stoneworts. Algae that are carried passively by water are plankton, specifically phytoplankton. Algae constitute a polyphyletic group since they do not include a common ancestor, and although their chlorophyll-bearing plastids seem to have a single origin (from symbiogenesis with cyanobacteria), they were acquired in different ways. Green algae are a prominent examples of algae that have primary chloroplasts derived from endosymbiont cyanobacteria. Diatoms and brown algae are examples of algae with secondary chloroplasts derived from endosymbiotic red algae, which they acquired via phagocytosis. Algae exhibit a wide range of reproductive strategies, from simple asexual cell division to complex forms of sexual reproduction via spores. Algae lack the various structures that characterize plants (which evolved from freshwater green algae), such as the phyllids (leaf-like structures) and rhizoids of bryophytes (non-vascular plants), and the roots, leaves and other xylemic/phloemic organs found in tracheophytes (vascular plants). Most algae are autotrophic, although some are mixotrophic, deriving energy both from photosynthesis and uptake of organic carbon either by osmotrophy, myzotrophy or phagotrophy. Some unicellular species of green algae, many golden algae, euglenids, dinoflagellates, and other algae have become heterotrophs (also called colorless or apochlorotic algae), sometimes parasitic, relying entirely on external energy sources and have limited or no photosynthetic apparatus. Some other heterotrophic organisms, such as the apicomplexans, are also derived from cells whose ancestors possessed chlorophyllic plastids, but are not traditionally considered as algae. Algae have photosynthetic machinery ultimately derived from cyanobacteria that produce oxygen as a byproduct of splitting water molecules, unlike other organisms that conduct anoxygenic photosynthesis such as purple and green sulfur bacteria. Fossilized filamentous algae from the Vindhya basin have been dated to 1.6 to 1.7 billion years ago. Because of the wide range of algae types, they have increasingly different industrial and traditional applications in human society. Traditional seaweed farming practices have existed for thousands of years and have strong traditions in East Asia food cultures. More modern algaculture applications extend the food traditions for other applications, including cattle feed, using algae for bioremediation or pollution control, transforming sunlight into algae fuels or other chemicals used in industrial processes, and in medical and scientific applications. A 2020 review found that these applications of algae could play an important role in carbon sequestration to mitigate climate change while providing lucrative value-added products for global economies. 
Etymology and study The singular is the Latin word for 'seaweed' and retains that meaning in English. The etymology is obscure. Although some speculate that it is related to Latin , 'be cold', no reason is known to associate seaweed with temperature. A more likely source is , 'binding, entwining'. The Ancient Greek word for 'seaweed' was (), which could mean either the seaweed (probably red algae) or a red dye derived from it. The Latinization, , meant primarily the cosmetic rouge. The etymology is uncertain, but a strong candidate has long been some word related to the Biblical (), 'paint' (if not that word itself), a cosmetic eye-shadow used by the ancient Egyptians and other inhabitants of the eastern Mediterranean. It could be any color: black, red, green, or blue. The study of algae is most commonly called phycology (); the term algology is falling out of use. Classifications One definition of algae is that they "have chlorophyll as their primary photosynthetic pigment and lack a sterile covering of cells around their reproductive cells". On the other hand, the colorless Prototheca under Chlorophyta are all devoid of any chlorophyll. Although cyanobacteria are often referred to as "blue-green algae", most authorities exclude all prokaryotes, including cyanobacteria, from the definition of algae. The algae contain chloroplasts that are similar in structure to cyanobacteria. Chloroplasts contain circular DNA like that in cyanobacteria and are interpreted as representing reduced endosymbiotic cyanobacteria. However, the exact origin of the chloroplasts is different among separate lineages of algae, reflecting their acquisition during different endosymbiotic events. The table below describes the composition of the three major groups of algae. Their lineage relationships are shown in the figure in the upper right. Many of these groups contain some members that are no longer photosynthetic. Some retain plastids, but not chloroplasts, while others have lost plastids entirely. Phylogeny based on plastid not nucleocytoplasmic genealogy: Linnaeus, in Species Plantarum (1753), the starting point for modern botanical nomenclature, recognized 14 genera of algae, of which only four are currently considered among algae. In Systema Naturae, Linnaeus described the genera Volvox and Corallina, and a species of Acetabularia (as Madrepora), among the animals. In 1768, Samuel Gottlieb Gmelin (1744–1774) published the Historia Fucorum, the first work dedicated to marine algae and the first book on marine biology to use the then new binomial nomenclature of Linnaeus. It included elaborate illustrations of seaweed and marine algae on folded leaves. W. H. Harvey (1811–1866) and Lamouroux (1813) were the first to divide macroscopic algae into four divisions based on their pigmentation. This is the first use of a biochemical criterion in plant systematics. Harvey's four divisions are: red algae (Rhodospermae), brown algae (Melanospermae), green algae (Chlorospermae), and Diatomaceae. At this time, microscopic algae were discovered and reported by a different group of workers (e.g., O. F. Müller and Ehrenberg) studying the Infusoria (microscopic organisms). Unlike macroalgae, which were clearly viewed as plants, microalgae were frequently considered animals because they are often motile. Even the nonmotile (coccoid) microalgae were sometimes merely seen as stages of the lifecycle of plants, macroalgae, or animals. 
Although used as a taxonomic category in some pre-Darwinian classifications, e.g., Linnaeus (1753), de Jussieu (1789), Lamouroux (1813), Harvey (1836), Horaninow (1843), Agassiz (1859), Wilson & Cassin (1864), in further classifications, the "algae" are seen as an artificial, polyphyletic group. Throughout the 20th century, most classifications treated the following groups as divisions or classes of algae: cyanophytes, rhodophytes, chrysophytes, xanthophytes, bacillariophytes, phaeophytes, pyrrhophytes (cryptophytes and dinophytes), euglenophytes, and chlorophytes. Later, many new groups were discovered (e.g., Bolidophyceae), and others were splintered from older groups: charophytes and glaucophytes (from chlorophytes), many heterokontophytes (e.g., synurophytes from chrysophytes, or eustigmatophytes from xanthophytes), haptophytes (from chrysophytes), and chlorarachniophytes (from xanthophytes). With the abandonment of plant-animal dichotomous classification, most groups of algae (sometimes all) were included in Protista, later also abandoned in favour of Eukaryota. However, as a legacy of the older plant life scheme, some groups that were also treated as protozoans in the past still have duplicated classifications (see ambiregnal protists). Some parasitic algae (e.g., the green algae Prototheca and Helicosporidium, parasites of metazoans, or Cephaleuros, parasites of plants) were originally classified as fungi, sporozoans, or protistans of incertae sedis, while others (e.g., the green algae Phyllosiphon and Rhodochytrium, parasites of plants, or the red algae Pterocladiophila and Gelidiocolax mammillatus, parasites of other red algae, or the dinoflagellates Oodinium, parasites of fish) had their relationship with algae conjectured early. In other cases, some groups were originally characterized as parasitic algae (e.g., Chlorochytrium), but later were seen as endophytic algae. Some filamentous bacteria (e.g., Beggiatoa) were originally seen as algae. Furthermore, groups like the apicomplexans are also parasites derived from ancestors that possessed plastids, but are not included in any group traditionally seen as algae. Evolution Algae are polyphyletic thus their origin cannot be traced back to single hypothetical common ancestor. It is thought that they came into existence when photosynthetic coccoid cyanobacteria got phagocytized by a unicellular heterotrophic eukaryote (a protist), giving rise to double-membranous primary plastids. Such symbiogenic events (primary symbiogenesis) are believed to have occurred more than 1.5 billion years ago during the Calymmian period, early in Boring Billion, but it is difficult to track the key events because of so much time gap. Primary symbiogenesis gave rise to three divisions of archaeplastids, namely the Viridiplantae (green algae and later plants), Rhodophyta (red algae) and Glaucophyta ("grey algae"), whose plastids further spread into other protist lineages through eukaryote-eukaryote predation, engulfments and subsequent endosymbioses (secondary and tertiary symbiogenesis). This process of serial cell "capture" and "enslavement" explains the diversity of photosynthetic eukaryotes. Recent genomic and phylogenomic approaches have significantly clarified plastid genome evolution, the horizontal movement of endosymbiont genes to the "host" nuclear genome, and plastid spread throughout the eukaryotic tree of life. 
Relationship to land plants Fossils of isolated spores suggest land plants may have been around as long as 475 million years ago (mya) during the Late Cambrian/Early Ordovician period, from sessile shallow freshwater charophyte algae much like Chara, which likely got stranded ashore when riverine/lacustrine water levels dropped during dry seasons. These charophyte algae probably already developed filamentous thalli and holdfasts that superficially resembled plant stems and roots, and probably had an isomorphic alternation of generations. They perhaps evolved some 850 mya and might even be as early as 1 Gya during the late phase of the Boring Billion. Morphology A range of algal morphologies is exhibited, and convergence of features in unrelated groups is common. The only groups to exhibit three-dimensional multicellular thalli are the reds and browns, and some chlorophytes. Apical growth is constrained to subsets of these groups: the florideophyte reds, various browns, and the charophytes. The form of charophytes is quite different from those of reds and browns, because they have distinct nodes, separated by internode 'stems'; whorls of branches reminiscent of the horsetails occur at the nodes. Conceptacles are another polyphyletic trait; they appear in the coralline algae and the Hildenbrandiales, as well as the browns. Most of the simpler algae are unicellular flagellates or amoeboids, but colonial and nonmotile forms have developed independently among several of the groups. Some of the more common organizational levels, more than one of which may occur in the lifecycle of a species, are Colonial: small, regular groups of motile cells Capsoid: individual non-motile cells embedded in mucilage Coccoid: individual non-motile cells with cell walls Palmelloid: nonmotile cells embedded in mucilage Filamentous: a string of connected nonmotile cells, sometimes branching Parenchymatous: cells forming a thallus with partial differentiation of tissues In three lines, even higher levels of organization have been reached, with full tissue differentiation. These are the brown algae,—some of which may reach 50 m in length (kelps)—the red algae, and the green algae. The most complex forms are found among the charophyte algae (see Charales and Charophyta), in a lineage that eventually led to the higher land plants. The innovation that defines these nonalgal plants is the presence of female reproductive organs with protective cell layers that protect the zygote and developing embryo. Hence, the land plants are referred to as the Embryophytes. Turfs The term algal turf is commonly used but poorly defined. Algal turfs are thick, carpet-like beds of seaweed that retain sediment and compete with foundation species like corals and kelps, and they are usually less than 15 cm tall. Such a turf may consist of one or more species, and will generally cover an area in the order of a square metre or more. Some common characteristics are listed: Algae that form aggregations that have been described as turfs include diatoms, cyanobacteria, chlorophytes, phaeophytes and rhodophytes. Turfs are often composed of numerous species at a wide range of spatial scales, but monospecific turfs are frequently reported. Turfs can be morphologically highly variable over geographic scales and even within species on local scales and can be difficult to identify in terms of the constituent species. Turfs have been defined as short algae, but this has been used to describe height ranges from less than 0.5 cm to more than 10 cm. 
In some regions, the descriptions approached heights which might be described as canopies (20 to 30 cm). Physiology Many algae, particularly species of the Characeae, have served as model experimental organisms to understand the mechanisms of the water permeability of membranes, osmoregulation, salt tolerance, cytoplasmic streaming, and the generation of action potentials. Plant hormones are found not only in higher plants, but in algae, too. Symbiotic algae Some species of algae form symbiotic relationships with other organisms. In these symbioses, the algae supply photosynthates (organic substances) to the host organism providing protection to the algal cells. The host organism derives some or all of its energy requirements from the algae. Examples are: Lichens Lichens are defined by the International Association for Lichenology to be "an association of a fungus and a photosynthetic symbiont resulting in a stable vegetative body having a specific structure". The fungi, or mycobionts, are mainly from the Ascomycota with a few from the Basidiomycota. In nature, they do not occur separate from lichens. It is unknown when they began to associate. One or more mycobiont associates with the same phycobiont species, from the green algae, except that alternatively, the mycobiont may associate with a species of cyanobacteria (hence "photobiont" is the more accurate term). A photobiont may be associated with many different mycobionts or may live independently; accordingly, lichens are named and classified as fungal species. The association is termed a morphogenesis because the lichen has a form and capabilities not possessed by the symbiont species alone (they can be experimentally isolated). The photobiont possibly triggers otherwise latent genes in the mycobiont. Trentepohlia is an example of a common green alga genus worldwide that can grow on its own or be lichenised. Lichen thus share some of the habitat and often similar appearance with specialized species of algae (aerophytes) growing on exposed surfaces such as tree trunks and rocks and sometimes discoloring them. Coral reefs Coral reefs are accumulated from the calcareous exoskeletons of marine invertebrates of the order Scleractinia (stony corals). These animals metabolize sugar and oxygen to obtain energy for their cell-building processes, including secretion of the exoskeleton, with water and carbon dioxide as byproducts. Dinoflagellates (algal protists) are often endosymbionts in the cells of the coral-forming marine invertebrates, where they accelerate host-cell metabolism by generating sugar and oxygen immediately available through photosynthesis using incident light and the carbon dioxide produced by the host. Reef-building stony corals (hermatypic corals) require endosymbiotic algae from the genus Symbiodinium to be in a healthy condition. The loss of Symbiodinium from the host is known as coral bleaching, a condition which leads to the deterioration of a reef. Sea sponges Endosymbiontic green algae live close to the surface of some sponges, for example, breadcrumb sponges (Halichondria panicea). The alga is thus protected from predators; the sponge is provided with oxygen and sugars which can account for 50 to 80% of sponge growth in some species. Life cycle Rhodophyta, Chlorophyta, and Heterokontophyta, the three main algal divisions, have life cycles which show considerable variation and complexity. 
In general, an asexual phase exists where the seaweed's cells are diploid, a sexual phase where the cells are haploid, followed by fusion of the male and female gametes. Asexual reproduction permits efficient population increases, but less variation is possible. Commonly, in sexual reproduction of unicellular and colonial algae, two specialized, sexually compatible, haploid gametes make physical contact and fuse to form a zygote. To ensure a successful mating, the development and release of gametes is highly synchronized and regulated; pheromones may play a key role in these processes. Sexual reproduction allows for more variation and provides the benefit of efficient recombinational repair of DNA damages during meiosis, a key stage of the sexual cycle. However, sexual reproduction is more costly than asexual reproduction. Meiosis has been shown to occur in many different species of algae. Numbers The Algal Collection of the US National Herbarium (located in the National Museum of Natural History) consists of approximately 320,500 dried specimens, which, although not exhaustive (no exhaustive collection exists), gives an idea of the order of magnitude of the number of algal species (that number remains unknown). Estimates vary widely. For example, according to one standard textbook, in the British Isles, the UK Biodiversity Steering Group Report estimated there to be 20,000 algal species in the UK. Another checklist reports only about 5,000 species. Regarding the difference of about 15,000 species, the text concludes: "It will require many detailed field surveys before it is possible to provide a reliable estimate of the total number of species ..." Regional and group estimates have been made, as well: 5,000–5,500 species of red algae worldwide "some 1,300 in Australian Seas" 400 seaweed species for the western coastline of South Africa, and 212 species from the coast of KwaZulu-Natal. Some of these are duplicates, as the range extends across both coasts, and the total recorded is probably about 500 species. Most of these are listed in List of seaweeds of South Africa. These exclude phytoplankton and crustose corallines. 669 marine species from California (US) 642 in the check-list of Britain and Ireland and so on, but lacking any scientific basis or reliable sources, these numbers have no more credibility than the British ones mentioned above. Most estimates also omit microscopic algae, such as phytoplankton. The most recent estimate suggests 72,500 algal species worldwide. Distribution The distribution of algal species has been fairly well studied since the founding of phytogeography in the mid-19th century. Algae spread mainly by the dispersal of spores analogously to the dispersal of cryptogamic plants by spores. Spores can be found in a variety of environments: fresh and marine waters, air, soil, and in or on other organisms. Whether a spore is to grow into an adult organism depends on the species and the environmental conditions where the spore lands. The spores of freshwater algae are dispersed mainly by running water and wind, as well as by living carriers. However, not all bodies of water can carry all species of algae, as the chemical composition of certain water bodies limits the algae that can survive within them. Marine spores are often spread by ocean currents. Ocean water presents many vastly different habitats based on temperature and nutrient availability, resulting in phytogeographic zones, regions, and provinces. 
To some degree, the distribution of algae is subject to floristic discontinuities caused by geographical features, such as Antarctica, long distances of ocean or general land masses. It is, therefore, possible to identify species occurring by locality, such as "Pacific algae" or "North Sea algae". When they occur out of their localities, hypothesizing a transport mechanism is usually possible, such as the hulls of ships. For example, Ulva reticulata and U. fasciata travelled from the mainland to Hawaii in this manner. Mapping is possible for select species only: "there are many valid examples of confined distribution patterns." For example, Clathromorphum is an arctic genus and is not mapped far south of there. However, scientists regard the overall data as insufficient due to the "difficulties of undertaking such studies." Ecology Algae are prominent in bodies of water, common in terrestrial environments, and are found in unusual environments, such as on snow and ice. Seaweeds grow mostly in shallow marine waters, under deep; however, some such as Navicula pennata have been recorded to a depth of . A type of algae, Ancylonema nordenskioeldii, was found in Greenland in areas known as the 'Dark Zone', which caused an increase in the rate of melting ice sheet. The same algae was found in the Italian Alps, after pink ice appeared on parts of the Presena glacier. The various sorts of algae play significant roles in aquatic ecology. Microscopic forms that live suspended in the water column (phytoplankton) provide the food base for most marine food chains. In very high densities (algal blooms), these algae may discolor the water and outcompete, poison, or asphyxiate other life forms. Algae can be used as indicator organisms to monitor pollution in various aquatic systems. In many cases, algal metabolism is sensitive to various pollutants. Due to this, the species composition of algal populations may shift in the presence of chemical pollutants. To detect these changes, algae can be sampled from the environment and maintained in laboratories with relative ease. On the basis of their habitat, algae can be categorized as: aquatic (planktonic, benthic, marine, freshwater, lentic, lotic), terrestrial, aerial (subaerial), lithophytic, halophytic (or euryhaline), psammon, thermophilic, cryophilic, epibiont (epiphytic, epizoic), endosymbiont (endophytic, endozoic), parasitic, calcifilic or lichenic (phycobiont). Cultural associations In classical Chinese, the word is used both for "algae" and (in the modest tradition of the imperial scholars) for "literary talent". The third island in Kunming Lake beside the Summer Palace in Beijing is known as the Zaojian Tang Dao (藻鑒堂島), which thus simultaneously means "Island of the Algae-Viewing Hall" and "Island of the Hall for Reflecting on Literary Talent". Cultivation Seaweed farming Bioreactors Uses Agar Agar, a gelatinous substance derived from red algae, has a number of commercial uses. It is a good medium on which to grow bacteria and fungi, as most microorganisms cannot digest agar. Alginates Alginic acid, or alginate, is extracted from brown algae. Its uses range from gelling agents in food, to medical dressings. Alginic acid also has been used in the field of biotechnology as a biocompatible medium for cell encapsulation and cell immobilization. Molecular cuisine is also a user of the substance for its gelling properties, by which it becomes a delivery vehicle for flavours. 
Between 100,000 and 170,000 wet tons of Macrocystis are harvested annually in New Mexico for alginate extraction and abalone feed. Energy source To be competitive and independent from fluctuating support from (local) policy on the long run, biofuels should equal or beat the cost level of fossil fuels. Here, algae-based fuels hold great promise, directly related to the potential to produce more biomass per unit area in a year than any other form of biomass. The break-even point for algae-based biofuels is estimated to occur by 2025. Fertilizer For centuries, seaweed has been used as a fertilizer; George Owen of Henllys writing in the 16th century referring to drift weed in South Wales: Today, algae are used by humans in many ways; for example, as fertilizers, soil conditioners, and livestock feed. Aquatic and microscopic species are cultured in clear tanks or ponds and are either harvested or used to treat effluents pumped through the ponds. Algaculture on a large scale is an important type of aquaculture in some places. Maerl is commonly used as a soil conditioner. As food Algae are used as foods in many countries: China consumes more than 70 species, including fat choy, a cyanobacterium considered a vegetable; Japan, over 20 species such as nori and aonori; Ireland, dulse; Chile, cochayuyo. Laver is used to make laverbread in Wales, where it is known as . In Korea, green laver is used to make . Three forms of algae used as food: Chlorella: This form of alga is found in freshwater and contains photosynthetic pigments in its chloroplast. Klamath AFA: A subspecies of Aphanizomenon flos-aquae found wild in many bodies of water worldwide but harvested only from Upper Klamath Lake, Oregon. Spirulina: Known otherwise as a cyanobacterium (a prokaryote or a "blue-green alga") The oils from some algae have high levels of unsaturated fatty acids. Some varieties of algae favored by vegetarianism and veganism contain the long-chain, essential omega-3 fatty acids, docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA). Fish oil contains the omega-3 fatty acids, but the original source is algae (microalgae in particular), which are eaten by marine life such as copepods and are passed up the food chain. Pollution control Sewage can be treated with algae, reducing the use of large amounts of toxic chemicals that would otherwise be needed. Algae can be used to capture fertilizers in runoff from farms. When subsequently harvested, the enriched algae can be used as fertilizer. Aquaria and ponds can be filtered using algae, which absorb nutrients from the water in a device called an algae scrubber, also known as an algae turf scrubber. Agricultural Research Service scientists found that 60–90% of nitrogen runoff and 70–100% of phosphorus runoff can be captured from manure effluents using a horizontal algae scrubber, also called an algal turf scrubber (ATS). Scientists developed the ATS, which consists of shallow, 100-foot raceways of nylon netting where algae colonies can form, and studied its efficacy for three years. They found that algae can readily be used to reduce the nutrient runoff from agricultural fields and increase the quality of water flowing into rivers, streams, and oceans. Researchers collected and dried the nutrient-rich algae from the ATS and studied its potential as an organic fertilizer. They found that cucumber and corn seedlings grew just as well using ATS organic fertilizer as they did with commercial fertilizers. 
Algae scrubbers, using bubbling upflow or vertical waterfall versions, are now also being used to filter aquaria and ponds. Polymers Various polymers can be created from algae, which can be especially useful in the creation of bioplastics. These include hybrid plastics, cellulose-based plastics, poly-lactic acid, and bio-polyethylene. Several companies have begun to produce algae polymers commercially, including for use in flip-flops and in surf boards. Bioremediation The alga Stichococcus bacillaris has been seen to colonize silicone resins used at archaeological sites; biodegrading the synthetic substance. Pigments The natural pigments (carotenoids and chlorophylls) produced by algae can be used as alternatives to chemical dyes and coloring agents. The presence of some individual algal pigments, together with specific pigment concentration ratios, are taxon-specific: analysis of their concentrations with various analytical methods, particularly high-performance liquid chromatography, can therefore offer deep insight into the taxonomic composition and relative abundance of natural algae populations in sea water samples. Stabilizing substances Carrageenan, from the red alga Chondrus crispus, is used as a stabilizer in milk products. Additional images
Biology and health sciences
Biology
null
634
https://en.wikipedia.org/wiki/Analysis%20of%20variance
Analysis of variance
Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures (such as the "variation" among and between groups) used to analyze the differences between groups. It uses an F-test, which compares the variance between group means with the variance within the groups; the within-group variance represents the noise, which is assumed to be normally distributed within each group. ANOVA was developed by the statistician Ronald Fisher. ANOVA is based on the law of total variance, where the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether two or more population means are equal, and therefore generalizes the t-test beyond two means. In other words, ANOVA is used to test for differences among two or more means. History While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past according to Stigler. These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model. Laplace was performing hypothesis testing in the 1770s. Around 1800, Laplace and Gauss developed the least-squares method for combining observations, which improved upon methods then used in astronomy and geodesy. It also initiated much study of the contributions to sums of squares. Laplace knew how to estimate a variance from a residual (rather than a total) sum of squares. By 1827, Laplace was using least squares methods to address ANOVA problems regarding measurements of atmospheric tides. Before 1800, astronomers had isolated observational errors resulting from reaction times (the "personal equation") and had developed methods of reducing the errors. The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology which developed strong (full factorial) experimental methods to which randomization and blinding were soon added. An eloquent non-mathematical explanation of the additive effects model was available in 1885. Ronald Fisher introduced the term variance and proposed its formal analysis in a 1918 article on theoretical population genetics, The Correlation Between Relatives on the Supposition of Mendelian Inheritance. His first application of the analysis of variance to data analysis was published in 1921, Studies in Crop Variation I. This divided the variation of a time series into components representing annual causes and slow deterioration. Fisher's next piece, Studies in Crop Variation II, written with Winifred Mackenzie and published in 1923, studied the variation in yield across plots sown with different varieties and subjected to different fertiliser treatments. Analysis of variance became widely known after being included in Fisher's 1925 book Statistical Methods for Research Workers. Randomization models were developed by several researchers. The first was published in Polish by Jerzy Neyman in 1923. Example The analysis of variance can be used to describe otherwise complex relations among variables. A dog show provides an example. A dog show is not a random sampling of the breed: it is typically limited to dogs that are adult, pure-bred, and exemplary. A histogram of dog weights from a show is likely to be rather complicated, like the yellow-orange distribution shown in the illustrations. Suppose we wanted to predict the weight of a dog based on a certain set of characteristics of each dog. 
One way to do that is to explain the distribution of weights by dividing the dog population into groups based on those characteristics. A successful grouping will split dogs such that (a) each group has a low variance of dog weights (meaning the group is relatively homogeneous) and (b) the mean of each group is distinct (if two groups have the same mean, then it isn't reasonable to conclude that the groups are, in fact, separate in any meaningful way). In the illustrations to the right, groups are identified as X1, X2, etc. In the first illustration, the dogs are divided according to the product (interaction) of two binary groupings: young vs old, and short-haired vs long-haired (e.g., group 1 is young, short-haired dogs, group 2 is young, long-haired dogs, etc.). Since the distributions of dog weight within each of the groups (shown in blue) has a relatively large variance, and since the means are very similar across groups, grouping dogs by these characteristics does not produce an effective way to explain the variation in dog weights: knowing which group a dog is in doesn't allow us to predict its weight much better than simply knowing the dog is in a dog show. Thus, this grouping fails to explain the variation in the overall distribution (yellow-orange). An attempt to explain the weight distribution by grouping dogs as pet vs working breed and less athletic vs more athletic would probably be somewhat more successful (fair fit). The heaviest show dogs are likely to be big, strong, working breeds, while breeds kept as pets tend to be smaller and thus lighter. As shown by the second illustration, the distributions have variances that are considerably smaller than in the first case, and the means are more distinguishable. However, the significant overlap of distributions, for example, means that we cannot distinguish X1 and X2 reliably. Grouping dogs according to a coin flip might produce distributions that look similar. An attempt to explain weight by breed is likely to produce a very good fit. All Chihuahuas are light and all St Bernards are heavy. The difference in weights between Setters and Pointers does not justify separate breeds. The analysis of variance provides the formal tools to justify these intuitive judgments. A common use of the method is the analysis of experimental data or the development of models. The method has some advantages over correlation: not all of the data must be numeric and one result of the method is a judgment in the confidence in an explanatory relationship. Classes of models There are three classes of models used in the analysis of variance, and these are outlined here. Fixed-effects models The fixed-effects model (class I) of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see whether the response variable values change. This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole. Random-effects models Random-effects model (class II) is used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model. 
Mixed-effects models A mixed-effects model (class III) contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types. Example Teaching experiments could be performed by a college or university department to find a good introductory textbook, with each text considered a treatment. The fixed-effects model would compare a list of candidate texts. The random-effects model would determine whether important differences exist among a list of randomly selected texts. The mixed-effects model would compare the (fixed) incumbent texts to randomly selected alternatives. Defining fixed and random effects has proven elusive, with multiple competing definitions. Assumptions The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Note that the model is linear in parameters but may be nonlinear across factor levels. Interpretation is easy when data is balanced across factors but much deeper understanding is needed for unbalanced data. Textbook analysis using a normal distribution The analysis of variance can be presented in terms of a linear model, which makes the following assumptions about the probability distribution of the responses: Independence of observations – this is an assumption of the model that simplifies the statistical analysis. Normality – the distributions of the residuals are normal. Equality (or "homogeneity") of variances, called homoscedasticity: the variance of data in groups should be the same. The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed effects models, that is, that the errors ($\varepsilon$) are independent and $\varepsilon \sim N(0, \sigma^2)$. Randomization-based analysis In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random-assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University. Kempthorne and his students make an assumption of unit treatment additivity, which is discussed in the books of Kempthorne and David R. Cox. Unit-treatment additivity In its simplest form, the assumption of unit-treatment additivity states that the observed response $y_{i,j}$ from experimental unit $i$ when receiving treatment $j$ can be written as the sum of the unit's response $y_i$ and the treatment effect $t_j$, that is $y_{i,j} = y_i + t_j$. The assumption of unit-treatment additivity implies that, for every treatment $j$, the $j$th treatment has exactly the same effect $t_j$ on every experimental unit. The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many consequences of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity implies that the variance is constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant. The use of unit treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling. 
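As a concrete illustration of this assumption, the following minimal sketch (in Python with NumPy; the unit responses, treatment effects and sample size are invented for illustration) generates data that satisfy unit-treatment additivity exactly and shows two of its consequences: the variance of responses is the same under every treatment, and the treatment contrast is recovered as a difference of treatment means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline responses of 8 experimental units (what each unit would show untreated).
unit_response = rng.normal(loc=100.0, scale=10.0, size=8)

# Hypothetical additive treatment effects for treatments A and B.
treatment_effect = {"A": 0.0, "B": 5.0}

# Under unit-treatment additivity, the response of unit i under treatment j
# is exactly y_i + t_j, with no unit-by-treatment interaction.
y_A = unit_response + treatment_effect["A"]
y_B = unit_response + treatment_effect["B"]

# Consequence 1: the variance of responses is identical under every treatment,
# because adding a constant does not change a variance.
print(np.var(y_A, ddof=1), np.var(y_B, ddof=1))   # equal

# Consequence 2: the treatment contrast is estimated by a difference of means.
print(np.mean(y_B) - np.mean(y_A))                # equals t_B - t_A = 5.0 here
```

In an actual randomized experiment each unit is observed under only one treatment; the sketch computes both potential responses only to make the assumption, and its constant-variance consequence, explicit.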
Derived linear model Kempthorne uses the randomization-distribution and the assumption of unit treatment additivity to produce a derived linear model, very similar to the textbook model discussed previously. The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies. However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations. In the randomization-based analysis, there is no assumption of a normal distribution and certainly no assumption of independence. On the contrary, the observations are dependent! The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments. Statistical models for observational data However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use subjective models, as emphasized by Ronald Fisher and his followers. In practice, the estimates of treatment-effects from observational studies generally are often inconsistent. In practice, "statistical models" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public. Summary of assumptions The normal-model based ANOVA analysis assumes the independence, normality, and homogeneity of variances of the residuals. The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both these analyses require homoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis. However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA. There are no necessary assumptions for ANOVA in its full generality, but the F-test used for ANOVA hypothesis testing has assumptions and practical limitations which are of continuing interest. Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions. The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance. Also, a statistician may specify that logarithmic transforms be applied to the responses which are believed to follow a multiplicative model. According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition. Characteristics ANOVA is used in the analysis of comparative experiments, those in which only the difference in outcomes is of interest. 
The statistical significance of the experiment is determined by a ratio of two variances. This ratio is independent of several possible alterations to the experimental observations: Adding a constant to all observations does not alter significance. Multiplying all observations by a constant does not alter significance. So the ANOVA statistical significance result is independent of constant bias and scaling errors as well as the units used in expressing observations. In the era of mechanical calculation it was common to subtract a constant from all observations (when equivalent to dropping leading digits) to simplify data entry. This is an example of data coding. Algorithm The calculations of ANOVA can be characterized as computing a number of means and variances, dividing two variances and comparing the ratio to a handbook value to determine statistical significance. Calculating a treatment effect is then trivial: "the effect of any treatment is estimated by taking the difference between the mean of the observations which receive the treatment and the general mean". Partitioning of the sum of squares ANOVA uses traditional standardized terminology. The definitional equation of sample variance is $s^2 = \frac{1}{n-1} \sum_i (y_i - \bar{y})^2$, where the divisor is called the degrees of freedom (DF), the summation is called the sum of squares (SS), the result is called the mean square (MS) and the squared terms are deviations from the sample mean. ANOVA estimates 3 sample variances: a total variance based on all the observation deviations from the grand mean, an error variance based on all the observation deviations from their appropriate treatment means, and a treatment variance. The treatment variance is based on the deviations of treatment means from the grand mean, the result being multiplied by the number of observations in each treatment to account for the difference between the variance of observations and the variance of means. The fundamental technique is a partitioning of the total sum of squares SS into components related to the effects used in the model. For example, the model for a simplified ANOVA with one type of treatment at different levels gives $SS_{\text{Total}} = SS_{\text{Error}} + SS_{\text{Treatments}}$. The number of degrees of freedom DF can be partitioned in a similar way: $DF_{\text{Total}} = DF_{\text{Error}} + DF_{\text{Treatments}}$. One of these components (that for error) specifies a chi-squared distribution which describes the associated sum of squares, while the same is true for "treatments" if there is no treatment effect. The F-test The F-test is used for comparing the factors of the total deviation. For example, in one-way, or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic $F = \frac{\text{variance between treatments}}{\text{variance within treatments}} = \frac{MS_{\text{Treatments}}}{MS_{\text{Error}}} = \frac{SS_{\text{Treatments}}/(I-1)}{SS_{\text{Error}}/(n_T - I)}$, where MS is mean square, $I$ is the number of treatments and $n_T$ is the total number of cases, to the F-distribution with $I - 1$ being the numerator degrees of freedom and $n_T - I$ the denominator degrees of freedom. Using the F-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaled chi-squared distribution. The expected value of F is approximately $1 + n\,\sigma^2_{\text{Treatment}}/\sigma^2_{\text{Error}}$ (where $n$ is the treatment sample size), which is 1 for no treatment effect. As values of F increase above 1, the evidence is increasingly inconsistent with the null hypothesis. Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls. There are two methods of concluding the ANOVA hypothesis test, both of which produce the same result: The textbook method is to compare the observed value of F with the critical value of F determined from tables. 
The critical value of F is a function of the degrees of freedom of the numerator and the denominator and the significance level (α). If F ≥ FCritical, the null hypothesis is rejected. The computer method calculates the probability (p-value) of a value of F greater than or equal to the observed value. The null hypothesis is rejected if this probability is less than or equal to the significance level (α). The ANOVA F-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (i.e. maximizing power for a fixed significance level). For example, to test the hypothesis that various medical treatments have exactly the same effect, the F-test's p-values closely approximate the permutation test's p-values: the approximation is particularly close when the design is balanced. Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum. The ANOVA F-test (of the null-hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions. Extended algorithm ANOVA consists of separable parts; partitioning sources of variance and hypothesis testing can be used individually. ANOVA is used to support other statistical tools. Regression is first used to fit more complex models to data, then ANOVA is used to compare models with the objective of selecting simple(r) models that adequately describe the data. "Such models could be fit without any reference to ANOVA, but ANOVA tools could then be used to make some sense of the fitted models, and to test hypotheses about batches of coefficients." "[W]e think of the analysis of variance as a way of understanding and structuring multilevel models—not as an alternative to regression but as a tool for summarizing complex high-dimensional inferences ..." For a single factor The simplest experiment suitable for ANOVA analysis is the completely randomized experiment with a single factor. More complex experiments with a single factor involve constraints on randomization and include completely randomized blocks and Latin squares (and variants: Graeco-Latin squares, etc.). The more complex experiments share many of the complexities of multiple factors. There are some alternatives to conventional one-way analysis of variance, e.g.: Welch's heteroscedastic F test, Welch's heteroscedastic F test with trimmed means and Winsorized variances, Brown-Forsythe test, Alexander-Govern test, James second order test and Kruskal-Wallis test, available in the onewaytests R package. It is useful to represent each data point in the following form, called a statistical model: $y_{ij} = \mu + \tau_j + \varepsilon_{ij}$, where i = 1, 2, 3, ..., R; j = 1, 2, 3, ..., C; $\mu$ = overall average (mean); $\tau_j$ = differential effect (response) associated with the $j$th level of X, and this assumes that overall the values of $\tau_j$ add to zero (that is, $\sum_{j=1}^{C} \tau_j = 0$); $\varepsilon_{ij}$ = noise or error associated with the particular ij data value. That is, we envision an additive model that says every data point can be represented by summing three quantities: the true mean, averaged over all factor levels being investigated, plus an incremental component associated with the particular column (factor level), plus a final component associated with everything else affecting that specific data value. For multiple factors ANOVA generalizes to the study of the effects of multiple factors. 
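Before turning to multiple factors, the single-factor machinery described above can be made concrete. The sketch below (a minimal illustration in Python with NumPy and SciPy; the three groups of observations are invented) partitions the total sum of squares, forms the F ratio, applies both the critical-value and p-value decision rules, and compares the p-value with that of a randomization (permutation) test, which the F-test is said above to approximate.

```python
import numpy as np
from scipy import stats

# Invented data: three treatment groups (levels of a single factor).
groups = [np.array([6.9, 5.4, 5.8, 4.6, 4.0]),
          np.array([8.3, 6.8, 7.8, 9.2, 6.5]),
          np.array([8.0, 10.5, 8.1, 6.9, 9.3])]

y = np.concatenate(groups)
n_total, k = y.size, len(groups)
grand_mean = y.mean()

# Partition of the total sum of squares: SS_Total = SS_Treatments + SS_Error.
ss_total = ((y - grand_mean) ** 2).sum()
ss_treat = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
assert np.isclose(ss_total, ss_treat + ss_error)

df_treat, df_error = k - 1, n_total - k
F = (ss_treat / df_treat) / (ss_error / df_error)      # MS_Treatments / MS_Error

# Decision rule 1 (textbook): compare F with the critical value at alpha = 0.05.
F_crit = stats.f.ppf(0.95, df_treat, df_error)

# Decision rule 2 (computer): p-value, the probability of an F at least this large.
p_value = stats.f.sf(F, df_treat, df_error)
print(F, F_crit, p_value)

# Randomization test: permute the group labels and recompute F many times.
rng = np.random.default_rng(1)
sizes = [g.size for g in groups]

def f_stat(values):
    parts = np.split(values, np.cumsum(sizes)[:-1])
    ms_t = sum(p.size * (p.mean() - values.mean()) ** 2 for p in parts) / df_treat
    ms_e = sum(((p - p.mean()) ** 2).sum() for p in parts) / df_error
    return ms_t / ms_e

perm_F = np.array([f_stat(rng.permutation(y)) for _ in range(10000)])
print((perm_F >= F).mean())   # permutation p-value, close to the F-test p-value
```

The same F statistic and p-value are returned by scipy.stats.f_oneway, and with balanced data such as these the permutation p-value is typically close to the F-test p-value, as described above.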
When the experiment includes observations at all combinations of levels of each factor, it is termed factorial. Factorial experiments are more efficient than a series of single factor experiments and the efficiency grows as the number of factors increases. Consequently, factorial designs are heavily used. The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms for interactions (xy, xz, yz, xyz). All terms require hypothesis tests. The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare. The ability to detect interactions is a major advantage of multiple factor ANOVA. Testing one factor at a time hides interactions, but produces apparently inconsistent experimental results. Caution is advised when encountering interactions; Test interaction terms first and expand the analysis beyond ANOVA if interactions are found. Texts vary in their recommendations regarding the continuation of the ANOVA procedure after encountering an interaction. Interactions complicate the interpretation of experimental data. Neither the calculations of significance nor the estimated treatment effects can be taken at face value. "A significant interaction will often mask the significance of main effects." Graphical methods are recommended to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958). Some interactions can be removed (by transformations) while others cannot. A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support of analytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications. Associated analysis Some analysis is required in support of the design of the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments. Preparatory analysis The number of experimental units In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential. Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals. Reporting sample size analysis is generally required in psychology. "Provide information on sample size and the process that led to sample size decisions." The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards. Besides the power analysis, there are less formal methods for selecting the number of experimental units. 
These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confidence interval. Power analysis Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true. Effect size Several standardized measures of effect have been proposed for ANOVA to summarize the strength of the association between a predictor(s) and the dependent variable or the overall standardized difference of the complete model. Standardized effect-size estimates facilitate comparison of findings across studies and disciplines. However, while standardized effect sizes are commonly used in much of the professional literature, a non-standardized measure of effect size that has immediately "meaningful" units may be preferable for reporting purposes. Model confirmation Sometimes tests are conducted to determine whether the assumptions of ANOVA appear to be violated. Residuals are examined or analyzed to confirm homoscedasticity and gross normality. Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything including time and modeled data values. Trends hint at interactions among factors or among observations. Follow-up tests A statistically significant effect in ANOVA is often followed by additional tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are "planned" (a priori) or "post hoc." Planned tests are determined before looking at the data, and post hoc tests are conceived only after looking at the data (though the term "post hoc" is inconsistently used). The follow-up tests may be "simple" pairwise comparisons of individual group means or may be "compound" comparisons (e.g., comparing the mean pooling across groups A, B and C to the mean of group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels. Often the follow-up tests incorporate a method of adjusting for the multiple comparisons problem. Follow-up tests to identify which specific groups, variables, or factors have statistically different means include the Tukey's range test, and Duncan's new multiple range test. In turn, these tests are often followed with a Compact Letter Display (CLD) methodology in order to render the output of the mentioned tests more transparent to a non-statistician audience. Study designs There are several types of ANOVA. Many statisticians base ANOVA on the design of the experiment, especially on the protocol that specifies the random assignment of treatments to subjects; the protocol's description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model. 
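Returning to the follow-up tests described above, such comparisons are available in standard statistical software. The following is a minimal sketch in Python, assuming three hypothetical treatment groups; the values, group labels, and significance level are made up purely for illustration, not taken from any study.

```python
# Minimal sketch: omnibus one-way ANOVA followed by Tukey's range test.
# The observations and group labels below are invented for demonstration.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

values = np.array([23.1, 25.3, 24.8, 30.2, 29.5, 31.0, 22.7, 23.9, 24.2])
groups = np.array(["A"] * 3 + ["B"] * 3 + ["C"] * 3)

# Omnibus test of the null hypothesis that all group means are equal.
f_stat, p_value = stats.f_oneway(values[groups == "A"],
                                 values[groups == "B"],
                                 values[groups == "C"])
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise follow-up comparisons at a 5% family-wise error rate.
print(pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05))
```

The omnibus F-test is run first; the Tukey table then indicates which pairs of group means differ while controlling the family-wise error rate.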
Some popular designs use the following types of ANOVA: One-way ANOVA is used to test for differences among two or more independent groups (means), e.g. different levels of urea application in a crop, or different levels of antibiotic action on several different bacterial species, or different levels of effect of some medicine on groups of patients. However, should these groups not be independent, and there is an order in the groups (such as mild, moderate and severe disease), or in the dose of a drug (such as 5 mg/mL, 10 mg/mL, 20 mg/mL) given to the same group of patients, then a linear trend estimation should be used. Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t2. Factorial ANOVA is used when there is more than one factor. Repeated measures ANOVA is used when the same subjects are used for each factor (e.g., in a longitudinal study). Multivariate analysis of variance (MANOVA) is used when there is more than one response variable. Cautions Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; unbalanced experiments offer more complexity. For single-factor (one-way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. "The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply. Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs." In the general case, "The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, and F-ratios will depend on the order in which the sources of variation are considered." ANOVA is (in part) a test of statistical significance. The American Psychological Association (and many other organisations) holds the view that simply reporting statistical significance is insufficient and that reporting confidence bounds is preferred. Generalizations ANOVA is considered to be a special case of linear regression which in turn is a special case of the general linear model. All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized. The Kruskal-Wallis test and the Friedman test are nonparametric tests which do not rely on an assumption of normality. Connection to linear regression Below we make clear the connection between multi-way ANOVA and linear regression. Linearly re-order the data so that the k-th observation is associated with a response yk and factors Zk,b, where b ∈ {1, 2, ..., B} denotes the different factors and B is the total number of factors. In one-way ANOVA B = 1 and in two-way ANOVA B = 2. Furthermore, we assume the b-th factor has Ib levels, namely {1, 2, ..., Ib}. Now, we can one-hot encode the factors into the (I1 + I2 + ⋯ + IB)-dimensional vector vk. The one-hot encoding function gb is defined such that the i-th entry of gb(Zk,b) is 1 if i = Zk,b and 0 otherwise. The vector vk is the concatenation of all of the above vectors for all b. Thus, vk = [g1(Zk,1), g2(Zk,2), ..., gB(Zk,B)]. In order to obtain a fully general B-way interaction ANOVA we must also concatenate every additional interaction term in the vector vk and then add an intercept term. Let that vector be Xk.
With this notation in place, we now have the exact connection with linear regression. We simply regress the response yk against the vector Xk. However, there is a concern about identifiability. In order to overcome such issues we assume that the sum of the parameters within each set of interactions is equal to zero. From here, one can use F-statistics or other methods to determine the relevance of the individual factors. Example We can consider the 2-way interaction example where we assume that the first factor has 2 levels and the second factor has 3 levels. Define ai = 1 if Zk,1 = i and bi = 1 if Zk,2 = i (and 0 otherwise), i.e. a is the one-hot encoding of the first factor and b is the one-hot encoding of the second factor. With that, Xk = [a1, a2, b1, b2, b3, a1b1, a1b2, a1b3, a2b1, a2b2, a2b3, 1], where the last term is an intercept term. For a more concrete example suppose that Zk,1 = 2 and Zk,2 = 1. Then, Xk = [0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1].
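This construction is easy to check numerically. The sketch below builds the same design vector in Python and fits the corresponding regression by least squares; the responses and the use of a minimum-norm (pseudoinverse) solution in place of explicit sum-to-zero constraints are illustrative assumptions, not part of the original description.

```python
# Build the interaction design vector X_k for the 2-level x 3-level example above:
# [a1, a2, b1, b2, b3, a1*b1, a1*b2, a1*b3, a2*b1, a2*b2, a2*b3, 1].
import numpy as np

def one_hot(level, n_levels):
    v = np.zeros(n_levels)
    v[level - 1] = 1.0                        # levels are 1-based, as in the text
    return v

def design_vector(z1, z2, levels=(2, 3)):
    a = one_hot(z1, levels[0])                # encoding of the first factor
    b = one_hot(z2, levels[1])                # encoding of the second factor
    interactions = np.outer(a, b).ravel()     # the a_i * b_j interaction terms
    return np.concatenate([a, b, interactions, [1.0]])   # trailing 1 = intercept

# The concrete case from the text: Z_k1 = 2, Z_k2 = 1
print(design_vector(2, 1))                    # [0. 1. 1. 0. 0. 0. 0. 0. 1. 0. 0. 1.]

# Regressing responses on rows of such vectors recovers the ANOVA effects; a
# minimum-norm least-squares solution stands in here for explicit sum-to-zero
# constraints on each set of parameters.
X = np.array([design_vector(z1, z2) for z1 in (1, 2) for z2 in (1, 2, 3)])
y = np.array([3.1, 2.9, 3.4, 5.0, 5.2, 4.8])  # illustrative responses, one per cell
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(X @ beta, y))               # exact fit: one observation per cell
```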
Mathematics
Statistics
null
639
https://en.wikipedia.org/wiki/Alkane
Alkane
In organic chemistry, an alkane, or paraffin (a historical trivial name that also has other meanings), is an acyclic saturated hydrocarbon. In other words, an alkane consists of hydrogen and carbon atoms arranged in a tree structure in which all the carbon–carbon bonds are single. Alkanes have the general chemical formula CnH2n+2. The alkanes range in complexity from the simplest case of methane (CH4), where n = 1 (sometimes called the parent molecule), to arbitrarily large and complex molecules, like pentacontane (C50H102) or 6-ethyl-2-methyl-5-(1-methylethyl)octane, an isomer of tetradecane (C14H30). The International Union of Pure and Applied Chemistry (IUPAC) defines alkanes as "acyclic branched or unbranched hydrocarbons having the general formula CnH2n+2, and therefore consisting entirely of hydrogen atoms and saturated carbon atoms". However, some sources use the term to denote any saturated hydrocarbon, including those that are either monocyclic (i.e. the cycloalkanes) or polycyclic, despite them having a distinct general formula (e.g. cycloalkanes are CnH2n). In an alkane, each carbon atom is sp3-hybridized with 4 sigma bonds (either C–C or C–H), and each hydrogen atom is joined to one of the carbon atoms (in a C–H bond). The longest series of linked carbon atoms in a molecule is known as its carbon skeleton or carbon backbone. The number of carbon atoms may be considered as the size of the alkane. One group of the higher alkanes are waxes, solids at standard ambient temperature and pressure (SATP), for which the number of carbon atoms in the carbon backbone is greater than about 17. With their repeated –CH2– units, the alkanes constitute a homologous series of organic compounds in which the members differ in molecular mass by multiples of 14.03 u (the total mass of each such methylene-bridge unit, which comprises a single carbon atom of mass 12.01 u and two hydrogen atoms of mass ~1.01 u each). Methane is produced by methanogenic bacteria, and some long-chain alkanes function as pheromones in certain animal species or as protective waxes in plants and fungi. Nevertheless, most alkanes do not have much biological activity. They can be viewed as molecular trees upon which can be hung the more active/reactive functional groups of biological molecules. The alkanes have two main commercial sources: petroleum (crude oil) and natural gas. An alkyl group is an alkane-based molecular fragment that bears one open valence for bonding. They are generally abbreviated with the symbol for any organyl group, R, although Alk is sometimes used to specifically symbolize an alkyl group (as opposed to an alkenyl group or aryl group). Structure and classification Ordinarily the C–C single bond distance is about 1.54 × 10−10 m (154 pm). Saturated hydrocarbons can be linear, branched, or cyclic. The third group is sometimes called cycloalkanes. Very complicated structures are possible by combining linear, branched, and cyclic alkanes. Isomerism Alkanes with more than three carbon atoms can be arranged in various ways, forming structural isomers. The simplest isomer of an alkane is the one in which the carbon atoms are arranged in a single chain with no branches. This isomer is sometimes called the n-isomer (n for "normal", although it is not necessarily the most common). However, the chain of carbon atoms may also be branched at one or more points. The number of possible isomers increases rapidly with the number of carbon atoms.
For example, for acyclic alkanes: C1: methane only C2: ethane only C3: propane only C4: 2 isomers: butane and isobutane C5: 3 isomers: pentane, isopentane, and neopentane C6: 5 isomers: hexane, 2-methylpentane, 3-methylpentane, 2,2-dimethylbutane, and 2,3-dimethylbutane C7: 9 isomers: heptane, 2-methylhexane, 3-methylhexane, 2,2-dimethylpentane, 2,3-dimethylpentane, 2,4-dimethylpentane, 3,3-dimethylpentane, 3-ethylpentane, 2,2,3-trimethylbutane C8: 18 isomers: octane, 2-methylheptane, 3-methylheptane, 4-methylheptane, 2,2-dimethylhexane, 2,3-dimethylhexane, 2,4-dimethylhexane, 2,5-dimethylhexane, 3,3-dimethylhexane, 3,4-dimethylhexane, 3-ethylhexane, 2,2,3-trimethylpentane, 2,2,4-trimethylpentane, 2,3,3-trimethylpentane, 2,3,4-trimethylpentane, 3-ethyl-2-methylpentane, 3-ethyl-3-methylpentane, 2,2,3,3-tetramethylbutane C9: 35 isomers C10: 75 isomers C12: 355 isomers C32: 27,711,253,769 isomers C60: 22,158,734,535,770,411,074,184 isomers, many of which are not stable Branched alkanes can be chiral. For example, 3-methylhexane and its higher homologues are chiral due to their stereogenic center at carbon atom number 3. The above list only includes differences of connectivity, not stereochemistry. In addition to the alkane isomers, the chain of carbon atoms may form one or more rings. Such compounds are called cycloalkanes, and are also excluded from the above list because changing the number of rings changes the molecular formula. For example, cyclobutane and methylcyclopropane are isomers of each other (C4H8), but are not isomers of butane (C4H10). Branched alkanes are more thermodynamically stable than their linear (or less branched) isomers. For example, the highly branched 2,2,3,3-tetramethylbutane is about 1.9 kcal/mol more stable than its linear isomer, n-octane. Nomenclature The IUPAC nomenclature (systematic way of naming compounds) for alkanes is based on identifying hydrocarbon chains. Unbranched, saturated hydrocarbon chains are named systematically with a Greek numerical prefix denoting the number of carbons and the suffix "-ane". In 1866, August Wilhelm von Hofmann suggested systematizing nomenclature by using the whole sequence of vowels a, e, i, o and u to create suffixes -ane, -ene, -ine (or -yne), -one, -une, for the hydrocarbons CnH2n+2, CnH2n, CnH2n−2, CnH2n−4, CnH2n−6. In modern nomenclature, the first three specifically name hydrocarbons with single, double and triple bonds; while "-one" now represents a ketone. Linear alkanes Straight-chain alkanes are sometimes indicated by the prefix "n-" or "n-"(for "normal") where a non-linear isomer exists. Although this is not strictly necessary and is not part of the IUPAC naming system, the usage is still common in cases where one wishes to emphasize or distinguish between the straight-chain and branched-chain isomers, e.g., "n-butane" rather than simply "butane" to differentiate it from isobutane. Alternative names for this group used in the petroleum industry are linear paraffins or n-paraffins. The first eight members of the series (in terms of number of carbon atoms) are named as follows: methane CH4 – one carbon and 4 hydrogen ethane C2H6 – two carbon and 6 hydrogen propane C3H8 – three carbon and 8 hydrogen butane C4H10 – four carbon and 10 hydrogen pentane C5H12 – five carbon and 12 hydrogen hexane C6H14 – six carbon and 14 hydrogen heptane C7H16 – seven carbons and 16 hydrogen octane C8H18 – eight carbons and 18 hydrogen The first four names were derived from methanol, ether, propionic acid and butyric acid. 
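As a small illustration of how this homologous series pairs each carbon count with a fixed name and formula, here is a toy sketch in Python; the helper function, its name, and the rounded atomic masses are assumptions made for illustration only, and it is not an IUPAC naming tool.

```python
# Toy helper: name, formula, and approximate molar mass of the first unbranched
# alkanes (n = 1..10 only); atomic masses are rounded (C = 12.011 u, H = 1.008 u).
NAMES = {1: "methane", 2: "ethane", 3: "propane", 4: "butane", 5: "pentane",
         6: "hexane", 7: "heptane", 8: "octane", 9: "nonane", 10: "decane"}

def unbranched_alkane(n):
    if n not in NAMES:
        raise ValueError("only n = 1..10 are handled in this sketch")
    hydrogens = 2 * n + 2                            # general formula CnH2n+2
    formula = "CH4" if n == 1 else f"C{n}H{hydrogens}"
    molar_mass = 12.011 * n + 1.008 * hydrogens      # in g/mol, approximately
    return NAMES[n], formula, round(molar_mass, 2)

for n in (1, 4, 8):
    print(unbranched_alkane(n))
# ('methane', 'CH4', 16.04)  ('butane', 'C4H10', 58.12)  ('octane', 'C8H18', 114.23)
# successive members differ by ~14.03 u, the mass of one CH2 unit
```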
Alkanes with five or more carbon atoms are named by adding the suffix -ane to the appropriate numerical multiplier prefix with elision of any terminal vowel (-a or -o) from the basic numerical term. Hence, pentane, C5H12; hexane, C6H14; heptane, C7H16; octane, C8H18; etc. The numeral prefix is generally Greek; however, alkanes with a carbon atom count ending in nine, for example nonane, use the Latin prefix non-. Branched alkanes Simple branched alkanes often have a common name using a prefix to distinguish them from linear alkanes, for example n-pentane, isopentane, and neopentane. IUPAC naming conventions can be used to produce a systematic name. The key steps in the naming of more complicated branched alkanes are as follows: Identify the longest continuous chain of carbon atoms Name this longest root chain using standard naming rules Name each side chain by changing the suffix of the name of the alkane from "-ane" to "-yl" Number the longest continuous chain in order to give the lowest possible numbers for the side-chains Number and name the side chains before the name of the root chain If there are multiple side chains of the same type, use prefixes such as "di-" and "tri-" to indicate it as such, and number each one. Add side chain names in alphabetical (disregarding "di-" etc. prefixes) order in front of the name of the root chain Saturated cyclic hydrocarbons Though technically distinct from the alkanes, this class of hydrocarbons is referred to by some as the "cyclic alkanes." As their description implies, they contain one or more rings. Simple cycloalkanes have a prefix "cyclo-" to distinguish them from alkanes. Cycloalkanes are named as per their acyclic counterparts with respect to the number of carbon atoms in their backbones, e.g., cyclopentane (C5H10) is a cycloalkane with 5 carbon atoms just like pentane (C5H12), but they are joined up in a five-membered ring. In a similar manner, propane and cyclopropane, butane and cyclobutane, etc. Substituted cycloalkanes are named similarly to substituted alkanes – the cycloalkane ring is stated, and the substituents are according to their position on the ring, with the numbering decided by the Cahn–Ingold–Prelog priority rules. Trivial/common names The trivial (non-systematic) name for alkanes is 'paraffins'. Together, alkanes are known as the 'paraffin series'. Trivial names for compounds are usually historical artifacts. They were coined before the development of systematic names, and have been retained due to familiar usage in industry. Cycloalkanes are also called naphthenes. Branched-chain alkanes are called isoparaffins. "Paraffin" is a general term and often does not distinguish between pure compounds and mixtures of isomers, i.e., compounds of the same chemical formula, e.g., pentane and isopentane. In IUPAC The following trivial names are retained in the IUPAC system: isobutane for 2-methylpropane isopentane for 2-methylbutane neopentane for 2,2-dimethylpropane. Non-IUPAC Some non-IUPAC trivial names are occasionally used: cetane, for hexadecane cerane, for hexacosane Physical properties All alkanes are colorless. Alkanes with the lowest molecular weights are gases, those of intermediate molecular weight are liquids, and the heaviest are waxy solids. Table of alkanes Boiling point Alkanes experience intermolecular van der Waals forces. The cumulative effects of these intermolecular forces give rise to greater boiling points of alkanes. 
Two factors influence the strength of the van der Waals forces: the number of electrons surrounding the molecule (which increases with the alkane's molecular weight) and the surface area of the molecule. Under standard conditions, from CH4 to C4H10 alkanes are gaseous; from C5H12 to C17H36 they are liquids; and after C18H38 they are solids. As the boiling point of alkanes is primarily determined by weight, it should not be a surprise that the boiling point has an almost linear relationship with the size (molecular weight) of the molecule. As a rule of thumb, the boiling point rises 20–30 °C for each carbon added to the chain; this rule applies to other homologous series. A straight-chain alkane will have a boiling point higher than a branched-chain alkane due to the greater surface area in contact, and thus greater van der Waals forces, between adjacent molecules. For example, compare isobutane (2-methylpropane) and n-butane (butane), which boil at −12 and 0 °C, and 2,2-dimethylbutane and 2,3-dimethylbutane which boil at 50 and 58 °C, respectively. On the other hand, cycloalkanes tend to have higher boiling points than their linear counterparts due to the locked conformations of the molecules, which give a plane of intermolecular contact. Melting points The melting points of the alkanes follow a similar trend to boiling points for the same reason as outlined above. That is, (all other things being equal) the larger the molecule the higher the melting point. However, alkanes' melting points follow a more complex pattern, due to variations in the properties of their solid crystals. One difference in crystal structure is that even-numbered alkanes (from hexane onwards) tend to form denser-packed crystals compared to their odd-numbered neighbors. This causes them to have a greater enthalpy of fusion (amount of energy required to melt them), raising their melting point. A second difference in crystal structure is that even-numbered alkanes (from octane onwards) tend to form more rotationally-ordered crystals compared to their odd-numbered neighbors. This causes them to have a greater entropy of fusion (increase in disorder from the solid to the liquid state), lowering their melting point. While these effects operate in opposing directions, the first effect tends to be slightly stronger, leading even-numbered alkanes to have slightly higher melting points than the average of their odd-numbered neighbors. This trend does not apply to methane, which has an unusually high melting point, higher than both ethane and propane. This is because it has a very low entropy of fusion, attributable to its high molecular symmetry and the rotational disorder in solid methane near its melting point (Methane I). The melting points of branched-chain alkanes can be either higher or lower than those of the corresponding straight-chain alkanes, again depending on these two factors. More symmetric alkanes tend towards higher melting points, due to enthalpic effects when they form ordered crystals, and entropic effects when they form disordered crystals (e.g. neopentane). Conductivity and solubility Alkanes do not conduct electricity in any way, nor are they substantially polarized by an electric field. For this reason, they do not form hydrogen bonds and are insoluble in polar solvents such as water. Since the hydrogen bonds between individual water molecules are aligned away from an alkane molecule, the coexistence of an alkane and water leads to an increase in molecular order (a reduction in entropy).
As there is no significant bonding between water molecules and alkane molecules, the second law of thermodynamics suggests that this reduction in entropy should be minimized by minimizing the contact between alkane and water: Alkanes are said to be hydrophobic as they are insoluble in water. Their solubility in nonpolar solvents is relatively high, a property that is called lipophilicity. Alkanes are, for example, miscible in all proportions among themselves. The density of the alkanes usually increases with the number of carbon atoms but remains less than that of water. Hence, alkanes form the upper layer in an alkane–water mixture. Molecular geometry The molecular structure of the alkanes directly affects their physical and chemical characteristics. It is derived from the electron configuration of carbon, which has four valence electrons. The carbon atoms in alkanes are described as sp3 hybrids; that is to say that, to a good approximation, the valence electrons are in orbitals directed towards the corners of a tetrahedron which are derived from the combination of the 2s orbital and the three 2p orbitals. Geometrically, the angle between the bonds is cos−1(−1/3) ≈ 109.47°. This is exact for the case of methane, while larger alkanes containing a combination of C–H and C–C bonds generally have bonds that are within several degrees of this idealized value. Bond lengths and bond angles An alkane has only C–H and C–C single bonds. The former result from the overlap of an sp3 orbital of carbon with the 1s orbital of a hydrogen; the latter by the overlap of two sp3 orbitals on adjacent carbon atoms. The bond lengths amount to 1.09 × 10−10 m for a C–H bond and 1.54 × 10−10 m for a C–C bond. The spatial arrangement of the bonds is similar to that of the four sp3 orbitals—they are tetrahedrally arranged, with an angle of 109.47° between them. Structural formulae that represent the bonds as being at right angles to one another, while both common and useful, do not accurately depict the geometry. Conformation The spatial arrangement of the C–C and C–H bonds, described by the torsion angles of the molecule, is known as its conformation. In ethane, the simplest case for studying the conformation of alkanes, there is nearly free rotation about a carbon–carbon single bond. Two limiting conformations are important: the eclipsed conformation and the staggered conformation. The staggered conformation is 12.6 kJ/mol (3.0 kcal/mol) lower in energy (more stable) than the eclipsed conformation (the least stable). In highly branched alkanes, the bond angle may differ from the optimal value (109.5°) to accommodate bulky groups. Such distortions introduce a tension in the molecule, known as steric hindrance or strain. Strain substantially increases reactivity. Spectroscopic properties Spectroscopic signatures for alkanes are obtainable by the major characterization techniques. Infrared spectroscopy The C–H stretching mode gives strong absorptions between 2850 and 2960 cm−1, while the C–C stretching mode gives weaker bands between 800 and 1300 cm−1. The carbon–hydrogen bending modes depend on the nature of the group: methyl groups show bands at 1450 cm−1 and 1375 cm−1, while methylene groups show bands at 1465 cm−1 and 1450 cm−1. Carbon chains with more than four carbon atoms show a weak absorption at around 725 cm−1. NMR spectroscopy The proton resonances of alkanes are usually found at δH = 0.5–1.5.
The carbon-13 resonances depend on the number of hydrogen atoms attached to the carbon: δC = 8–30 (primary, methyl, –CH3), 15–55 (secondary, methylene, –CH2–), 20–60 (tertiary, methine, C–H) and quaternary. The carbon-13 resonance of quaternary carbon atoms is characteristically weak, due to the lack of nuclear Overhauser effect and the long relaxation time, and can be missed in weak samples, or samples that have not been run for a sufficiently long time. Mass spectrometry Since alkanes have high ionization energies, their electron impact mass spectra show weak currents for their molecular ions. The fragmentation pattern can be difficult to interpret, but in the case of branched-chain alkanes, the carbon chain is preferentially cleaved at tertiary or quaternary carbons due to the relative stability of the resulting free radicals. The mass spectrum of a straight-chain alkane is illustrated by that of dodecane: the fragment resulting from the loss of a single methyl group (M − 15) is absent, fragments are more intense than the molecular ion and are spaced by intervals of 14 mass units, corresponding to loss of CH2 groups. Chemical properties Alkanes are only weakly reactive with most chemical compounds. They react only with the strongest of electrophilic reagents by virtue of their strong C–H bonds (~100 kcal/mol) and C–C bonds (~90 kcal/mol). They are also relatively unreactive toward free radicals. This inertness is the source of the term paraffins (with the meaning here of "lacking affinity"). In crude oil the alkane molecules have remained chemically unchanged for millions of years. Acid-base behavior The acid dissociation constant (pKa) values of all alkanes are estimated to range from 50 to 70, depending on the extrapolation method, hence they are extremely weak acids that are practically inert to bases (see: carbon acids). They are also extremely weak bases, undergoing no observable protonation in pure sulfuric acid (H0 ~ −12), although superacids that are at least millions of times stronger have been known to protonate them to give hypercoordinate alkanium ions (see: methanium ion). Thus, a mixture of antimony pentafluoride (SbF5) and fluorosulfonic acid (HSO3F), called magic acid, can protonate alkanes. Reactions with oxygen (combustion reaction) All alkanes react with oxygen in a combustion reaction, although they become increasingly difficult to ignite as the number of carbon atoms increases. The general equation for complete combustion is: CnH2n+2 + (3n+1)/2 O2 → (n + 1) H2O + n CO2; for example, methane burns completely as CH4 + 2 O2 → 2 H2O + CO2. In the absence of sufficient oxygen, carbon monoxide or even soot can be formed, as shown below: CnH2n+2 + (2n+1)/2 O2 → (n + 1) H2O + n CO and CnH2n+2 + (n+1)/2 O2 → (n + 1) H2O + n C. For example, methane: 2 CH4 + 3 O2 → 4 H2O + 2 CO and CH4 + O2 → 2 H2O + C. See the alkane heat of formation table for detailed data. The standard enthalpy change of combustion, ΔcH⊖, for alkanes increases by about 650 kJ/mol per CH2 group. Branched-chain alkanes have lower values of ΔcH⊖ than straight-chain alkanes of the same number of carbon atoms, and so can be seen to be somewhat more stable. Biodegradation Some organisms are capable of metabolizing alkanes. The methane monooxygenases convert methane to methanol. For higher alkanes, cytochrome P450 enzymes convert alkanes to alcohols, which are then susceptible to degradation. Free radical reactions Free radicals, molecules with unpaired electrons, play a large role in most reactions of alkanes.
Free radical halogenation reactions occur with halogens, leading to the production of haloalkanes. The hydrogen atoms of the alkane are progressively replaced by halogen atoms. The reaction of alkanes and fluorine is highly exothermic and can lead to an explosion. These reactions are an important industrial route to halogenated hydrocarbons. There are three steps: Initiation: the halogen radicals form by homolysis. Usually, energy in the form of heat or light is required. Chain reaction or propagation then takes place: the halogen radical abstracts a hydrogen from the alkane to give an alkyl radical, which reacts further. Chain termination: the radicals recombine. Experiments have shown that all halogenation produces a mixture of all possible isomers, indicating that all hydrogen atoms are susceptible to reaction. The mixture produced, however, is not statistical: secondary and tertiary hydrogen atoms are preferentially replaced due to the greater stability of secondary and tertiary free radicals. An example can be seen in the monobromination of propane. In the Reed reaction, sulfur dioxide and chlorine convert hydrocarbons to sulfonyl chlorides under the influence of light. Under some conditions, alkanes will undergo nitration. C-H activation Certain transition metal complexes promote non-radical reactions with alkanes, resulting in so-called C–H bond activation reactions. Cracking Cracking breaks larger molecules into smaller ones. This reaction requires heat and catalysts. The thermal cracking process follows a homolytic mechanism with formation of free radicals. The catalytic cracking process involves the presence of acid catalysts (usually solid acids such as silica-alumina and zeolites), which promote a heterolytic (asymmetric) breakage of bonds yielding pairs of ions of opposite charges, usually a carbocation and the very unstable hydride anion. Carbon-localized free radicals and cations are both highly unstable and undergo processes of chain rearrangement, C–C scission in position beta (i.e., cracking) and intra- and intermolecular hydrogen transfer or hydride transfer. In both types of processes, the corresponding reactive intermediates (radicals, ions) are permanently regenerated, and thus they proceed by a self-propagating chain mechanism. The chain of reactions is eventually terminated by radical or ion recombination. Isomerization and reformation Dragan and his colleague were the first to report on isomerization in alkanes. Isomerization and reformation are processes in which straight-chain alkanes are heated in the presence of a platinum catalyst. In isomerization, the alkanes become branched-chain isomers. In other words, the alkane does not lose any carbons or hydrogens, keeping the same molecular weight. In reformation, the alkanes become cycloalkanes or aromatic hydrocarbons, giving off hydrogen as a by-product. Both of these processes raise the octane number of the substance. Butane is the most common alkane that is put under the process of isomerization, as it makes many branched alkanes with high octane numbers. Other reactions In steam reforming, alkanes react with steam in the presence of a nickel catalyst to give hydrogen and carbon monoxide. Occurrence Occurrence of alkanes in the Universe Alkanes form a small portion of the atmospheres of the outer gas planets such as Jupiter (0.1% methane, 2 ppm ethane), Saturn (0.2% methane, 5 ppm ethane), Uranus (1.99% methane, 2.5 ppm ethane) and Neptune (1.5% methane, 1.5 ppm ethane).
Titan (1.6% methane), a satellite of Saturn, was examined by the Huygens probe, which indicated that Titan's atmosphere periodically rains liquid methane onto the moon's surface. Also on Titan, the Cassini mission has imaged seasonal methane/ethane lakes near the polar regions of Titan. Methane and ethane have also been detected in the tail of the comet Hyakutake. Chemical analysis showed that the abundances of ethane and methane were roughly equal, which is thought to imply that its ices formed in interstellar space, away from the Sun, which would have evaporated these volatile molecules. Alkanes have also been detected in meteorites such as carbonaceous chondrites. Occurrence of alkanes on Earth Traces of methane gas (about 0.0002% or 1745 ppb) occur in the Earth's atmosphere, produced primarily by methanogenic microorganisms, such as Archaea in the gut of ruminants. The most important commercial sources for alkanes are natural gas and oil. Natural gas contains primarily methane and ethane, with some propane and butane: oil is a mixture of liquid alkanes and other hydrocarbons. These hydrocarbons were formed when marine animals and plants (zooplankton and phytoplankton) died and sank to the bottom of ancient seas and were covered with sediments in an anoxic environment and converted over many millions of years at high temperatures and high pressure to their current form. Natural gas resulted thereby for example from the following reaction: C6H12O6 → 3 CH4 + 3 CO2 These hydrocarbon deposits, collected in porous rocks trapped beneath impermeable cap rocks, comprise commercial oil fields. They have formed over millions of years and once exhausted cannot be readily replaced. The depletion of these hydrocarbons reserves is the basis for what is known as the energy crisis. Alkanes have a low solubility in water, so the content in the oceans is negligible; however, at high pressures and low temperatures (such as at the bottom of the oceans), methane can co-crystallize with water to form a solid methane clathrate (methane hydrate). Although this cannot be commercially exploited at the present time, the amount of combustible energy of the known methane clathrate fields exceeds the energy content of all the natural gas and oil deposits put together. Methane extracted from methane clathrate is, therefore, a candidate for future fuels. Biological occurrence Aside from petroleum and natural gas, alkanes occur significantly in nature only as methane, which is produced by some archaea by the process of methanogenesis. These organisms are found in the gut of termites and cows. The methane is produced from carbon dioxide or other organic compounds. Energy is released by the oxidation of hydrogen: CO2 + 4 H2 → CH4 + 2 H2O It is probable that our current deposits of natural gas were formed in a similar way. Certain types of bacteria can metabolize alkanes: they prefer even-numbered carbon chains as they are easier to degrade than odd-numbered chains. Alkanes play a negligible role in higher organisms, with rare exception. Some yeasts, e.g., Candida tropicale, Pichia sp., Rhodotorula sp., can use alkanes as a source of carbon or energy. The fungus Amorphotheca resinae prefers the longer-chain alkanes in aviation fuel, and can cause serious problems for aircraft in tropical regions. In plants, the solid long-chain alkanes are found in the plant cuticle and epicuticular wax of many species, but are only rarely major constituents. 
They protect the plant against water loss, prevent the leaching of important minerals by the rain, and protect against bacteria, fungi, and harmful insects. The carbon chains in plant alkanes are usually odd-numbered, between 27 and 33 carbon atoms in length, and are made by the plants by decarboxylation of even-numbered fatty acids. The exact composition of the layer of wax is not only species-dependent but also changes with the season and such environmental factors as lighting conditions, temperature or humidity. The Jeffrey pine is noted for producing exceptionally high levels of n-heptane in its resin, for which reason its distillate was designated as the zero point for one octane rating. Floral scents have also long been known to contain volatile alkane components, and n-nonane is a significant component in the scent of some roses. Emission of gaseous and volatile alkanes such as ethane, pentane, and hexane by plants has also been documented at low levels, though they are not generally considered to be a major component of biogenic air pollution. Edible vegetable oils also typically contain small fractions of biogenic alkanes with a wide spectrum of carbon numbers, mainly 8 to 35, usually peaking in the low to upper 20s, with concentrations up to dozens of milligrams per kilogram (parts per million by weight) and sometimes over a hundred for the total alkane fraction. Alkanes are found in animal products, although they are less important than unsaturated hydrocarbons. One example is the shark liver oil, which is approximately 14% pristane (2,6,10,14-tetramethylpentadecane, C19H40). They are important as pheromones, chemical messenger materials, on which insects depend for communication. In some species, e.g. the support beetle Xylotrechus colonus, pentacosane (C25H52), 3-methylpentaicosane (C26H54) and 9-methylpentaicosane (C26H54) are transferred by body contact. With others like the tsetse fly Glossina morsitans morsitans, the pheromone contains the four alkanes 2-methylheptadecane (C18H38), 17,21-dimethylheptatriacontane (C39H80), 15,19-dimethylheptatriacontane (C39H80) and 15,19,23-trimethylheptatriacontane (C40H82), and acts by smell over longer distances. Waggle-dancing honey bees produce and release two alkanes, tricosane and pentacosane. Ecological relations One example, in which both plant and animal alkanes play a role, is the ecological relationship between the sand bee (Andrena nigroaenea) and the early spider orchid (Ophrys sphegodes); the latter is dependent for pollination on the former. Sand bees use pheromones in order to identify a mate; in the case of A. nigroaenea, the females emit a mixture of tricosane (C23H48), pentacosane (C25H52) and heptacosane (C27H56) in the ratio 3:3:1, and males are attracted by specifically this odor. The orchid takes advantage of this mating arrangement to get the male bee to collect and disseminate its pollen; parts of its flower not only resemble the appearance of sand bees but also produce large quantities of the three alkanes in the same ratio as female sand bees. As a result, numerous males are lured to the blooms and attempt to copulate with their imaginary partner: although this endeavor is not crowned with success for the bee, it allows the orchid to transfer its pollen, which will be dispersed after the departure of the frustrated male to other blooms. Production Petroleum refining The most important source of alkanes is natural gas and crude oil. Alkanes are separated in an oil refinery by fractional distillation. 
Unsaturated hydrocarbons are converted to alkanes by hydrogenation: RCH=CH2 + H2 → RCH2CH3 (R = alkyl). Another route to alkanes is hydrogenolysis, which entails cleavage of C–heteroatom bonds using hydrogen. In industry, the main substrates are organonitrogen and organosulfur impurities, i.e. the heteroatoms are N and S. The specific processes are called hydrodenitrification and hydrodesulfurization. Hydrogenolysis can be applied to the conversion of virtually any functional group into hydrocarbons. Substrates include haloalkanes, alcohols, aldehydes, ketones, carboxylic acids, etc. Both hydrogenolysis and hydrogenation are practiced in refineries. These conversions can also be effected using lithium aluminium hydride, Clemmensen reduction, and other specialized routes. Coal Coal is a more traditional precursor to alkanes. A wide range of technologies have been practiced intensively for centuries. Simply heating coal gives alkanes, leaving behind coke. Relevant technologies include the Bergius process and coal liquefaction. Partial combustion of coal and related solid organic compounds generates carbon monoxide, which can be hydrogenated using the Fischer–Tropsch process. This technology allows the synthesis of liquid hydrocarbons, including alkanes. This method is used to produce substitutes for petroleum distillates. Laboratory preparation Rarely is there any interest in the synthesis of alkanes, since they are usually commercially available and less valued than virtually any precursor. The best-known method is hydrogenation of alkenes. Many C–X bonds can be converted to C–H bonds using lithium aluminium hydride, Clemmensen reduction, and other specialized routes. Hydrolysis of alkyl Grignard reagents and alkyl lithium compounds gives alkanes. Applications Fuels The dominant use of alkanes is as fuels. Propane and butane, easily liquefied gases, are commonly known as liquefied petroleum gas (LPG). From pentane to octane the alkanes are highly volatile liquids. They are used as fuels in internal combustion engines, as they vaporize easily on entry into the combustion chamber without forming droplets, which would impair the uniformity of the combustion. Branched-chain alkanes are preferred as they are much less prone to premature ignition, which causes knocking, than their straight-chain homologues. This propensity to premature ignition is measured by the octane rating of the fuel, where 2,2,4-trimethylpentane (isooctane) has an arbitrary value of 100, and heptane has a value of zero. Apart from their use as fuels, the middle alkanes are also good solvents for nonpolar substances. Alkanes from nonane to, for instance, hexadecane (an alkane with sixteen carbon atoms) are liquids of higher viscosity, less and less suitable for use in gasoline. They form instead the major part of diesel and aviation fuel. Diesel fuels are characterized by their cetane number, cetane being an old name for hexadecane. However, the higher melting points of these alkanes can cause problems at low temperatures and in polar regions, where the fuel becomes too thick to flow correctly. Precursors to chemicals By the process of cracking, alkanes can be converted to alkenes. Simple alkenes are precursors to polymers, such as polyethylene and polypropylene. When the cracking is taken to extremes, alkanes can be converted to carbon black, which is a significant tire component. Chlorination of methane gives chloromethanes, which are used as solvents and building blocks for complex compounds. Similarly, treatment of methane with sulfur gives carbon disulfide.
Still other chemicals are prepared by reactions with sulfur trioxide and nitric acid. Other Some light hydrocarbons are used as aerosol sprays. Alkanes from hexadecane upwards form the most important components of fuel oil and lubricating oil. In the latter function, they work at the same time as anti-corrosive agents, as their hydrophobic nature means that water cannot reach the metal surface. Many solid alkanes find use as paraffin wax, for example, in candles. This should not be confused, however, with true wax, which consists primarily of esters. Alkanes with a chain length of approximately 35 or more carbon atoms are found in bitumen, used, for example, in road surfacing. However, the higher alkanes have little value and are usually split into lower alkanes by cracking. Hazards Alkanes are highly flammable, but they have low toxicities. Methane "is toxicologically virtually inert." Alkanes can be asphyxiants and narcotics.
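As a numerical check on the combustion stoichiometry given earlier in this article, here is a short sketch; the function name and the choice of example alkanes are illustrative only.

```python
# Stoichiometry of complete combustion: CnH2n+2 + (3n+1)/2 O2 -> n CO2 + (n+1) H2O
from fractions import Fraction

def complete_combustion(n):
    o2 = Fraction(3 * n + 1, 2)        # moles of O2 per mole of alkane
    return o2, n, n + 1                # (O2, CO2, H2O) coefficients

for n in (1, 3, 8):                    # methane, propane, octane
    o2, co2, h2o = complete_combustion(n)
    print(f"C{n}H{2 * n + 2} + {o2} O2 -> {co2} CO2 + {h2o} H2O")
# e.g. methane needs 2 O2, propane 5 O2, and octane 25/2 O2 per mole of fuel
```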
Physical sciences
Hydrocarbons
null
655
https://en.wikipedia.org/wiki/Abacus
Abacus
An abacus (plural: abaci or abacuses), also called a counting frame, is a hand-operated calculating tool which was used from ancient times in the ancient Near East, Europe, China, and Russia, until the adoption of the Hindu–Arabic numeral system. An abacus consists of a two-dimensional array of slidable beads (or similar objects). In their earliest designs, the beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation. Each rod typically represents one digit of a multi-digit number laid out using a positional numeral system such as base ten (though some cultures used different numerical bases). Roman and East Asian abacuses use a system resembling bi-quinary coded decimal, with a top deck (containing one or two beads) representing fives and a bottom deck (containing four or five beads) representing ones. Natural numbers are normally used, but some allow simple fractional components (e.g. 1/2, 1/4, and 1/12 in the Roman abacus), and a decimal point can be imagined for fixed-point arithmetic. Any particular abacus design supports multiple methods to perform calculations, including addition, subtraction, multiplication, division, and square and cube roots. The beads are first arranged to represent a number, then are manipulated to perform a mathematical operation with another number, and their final position can be read as the result (or can be used as the starting number for subsequent operations). In the ancient world, abacuses were a practical calculating tool. Although calculators and computers are commonly used today instead of abacuses, abacuses remain in everyday use in some countries. The abacus has an advantage of not requiring a writing implement and paper (needed for algorism) or an electric power source. Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring system in non-electronic table games. Others may use an abacus due to visual impairment that prevents the use of a calculator. The abacus is still used to teach the fundamentals of mathematics to children in many countries such as Japan and China. Etymology The word abacus dates to at least 1387 AD when a Middle English work borrowed the word from Latin that described a sandboard abacus. The Latin word is derived from ancient Greek abax (ἄβαξ), which means something without a base, and colloquially, any piece of rectangular material. Alternatively, without reference to ancient texts on etymology, it has been suggested that it means "a square tablet strewn with dust", or "drawing-board covered with dust (for the use of mathematics)" (the exact shape of the Latin perhaps reflects the genitive form of the Greek word, abakos (ἄβακος)). While the table strewn with dust definition is popular, some argue evidence is insufficient for that conclusion. Greek probably borrowed from a Northwest Semitic language like Phoenician, evidenced by a cognate with the Hebrew word ʾābāq (אבק), or "dust" (in the post-Biblical sense "sand used as a writing surface"). Both abacuses and abaci are used as plurals. The user of an abacus is called an abacist. History Mesopotamia The Sumerian abacus appeared between 2700 and 2300 BC. It held a table of successive columns which delimited the successive orders of magnitude of their sexagesimal (base 60) number system. Some scholars point to a character in Babylonian cuneiform that may have been derived from a representation of the abacus.
It is the belief of Old Babylonian scholars, such as Ettore Carruccio, that Old Babylonians "seem to have used the abacus for the operations of addition and subtraction; however, this primitive device proved difficult to use for more complex calculations". Egypt Greek historian Herodotus mentioned the abacus in Ancient Egypt. He wrote that the Egyptians manipulated the pebbles from right to left, opposite in direction to the Greek left-to-right method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters. However, wall depictions of this instrument are yet to be discovered. Persia At around 600 BC, Persians first began to use the abacus, during the Achaemenid Empire. Under the Parthian, Sassanian, and Iranian empires, scholars concentrated on exchanging knowledge and inventions with the countries around them – India, China, and the Roman Empire – which is how the abacus may have been exported to other countries. Greece The earliest archaeological evidence for the use of the Greek abacus dates to the 5th century BC. Demosthenes (384–322 BC) complained that the need to use pebbles for calculations was too difficult. A play by Alexis from the 4th century BC mentions an abacus and pebbles for accounting, and both Diogenes and Polybius use the abacus as a metaphor for human behavior, stating "that men that sometimes stood for more and sometimes for less" like the pebbles on an abacus. The Greek abacus was a table of wood or marble, pre-set with small counters in wood or metal for mathematical calculations. This Greek abacus was used in Achaemenid Persia, the Etruscan civilization, Ancient Rome, and the Western Christian world until the French Revolution. The Salamis Tablet, found on the Greek island Salamis in 1846 AD, dates to 300 BC, making it the oldest counting board discovered so far. It is a slab of white marble on which are 5 groups of markings. In the tablet's center is a set of 5 parallel lines equally divided by a vertical line, capped with a semicircle at the intersection of the bottom-most horizontal line and the single vertical line. Below these lines is a wide space with a horizontal crack dividing it. Below this crack is another group of eleven parallel lines, again divided into two sections by a line perpendicular to them, but with the semicircle at the top of the intersection; the third, sixth and ninth of these lines are marked with a cross where they intersect with the vertical line. Also from this time frame, the Darius Vase was unearthed in 1851. It was covered with pictures, including a "treasurer" holding a wax tablet in one hand while manipulating counters on a table with the other. Rome The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles (calculi) were used. Marked lines indicated units, fives, tens, etc. as in the Roman numeral system. Writing in the 1st century BC, Horace refers to the wax abacus, a board covered with a thin layer of black wax on which columns and figures were inscribed using a stylus. One example of archaeological evidence of the Roman abacus, shown nearby in reconstruction, dates to the 1st century AD. It has eight long grooves containing up to five beads in each and eight shorter grooves having either one or no beads in each. The groove marked I indicates units, X tens, and so on up to millions. The beads in the shorter grooves denote fives (five units, five tens, etc.)
resembling a bi-quinary coded decimal system related to the Roman numerals. The short grooves on the right may have been used for marking Roman "ounces" (i.e. fractions). Medieval Europe The Roman system of 'counter casting' was used widely in medieval Europe, and persisted in limited use into the nineteenth century. Wealthy abacists used decorative minted counters, called jetons. Due to Pope Sylvester II's reintroduction of the abacus with modifications, it became widely used in Europe again during the 11th century. It used beads on wires, unlike the traditional Roman counting boards, which meant the abacus could be used much faster and was more easily moved. China The earliest known written documentation of the Chinese abacus dates to the 2nd century BC. The Chinese abacus, also known as the suanpan (算盤/算盘, lit. "calculating tray"), comes in various lengths and widths, depending on the operator. It usually has more than seven rods. There are two beads on each rod in the upper deck and five beads each in the bottom one, to represent numbers in a bi-quinary coded decimal-like system. The beads are usually rounded and made of hardwood. The beads are counted by moving them up or down towards the beam; beads moved toward the beam are counted, while those moved away from it are not. One of the top beads is 5, while one of the bottom beads is 1. Each rod has a number under it, showing the place value. The suanpan can be reset to the starting position instantly by a quick movement along the horizontal axis to spin all the beads away from the horizontal beam at the center. The prototype of the Chinese abacus appeared during the Han dynasty, and the beads are oval. The Song dynasty and earlier used the 1:4 type or four-bead abacus, similar to the modern abacus (including the shape of the beads) commonly known as the Japanese-style abacus. In the early Ming dynasty, the abacus began to appear in a 1:5 ratio. The upper deck had one bead and the bottom had five beads. In the late Ming dynasty, the abacus styles appeared in a 2:5 ratio. The upper deck had two beads, and the bottom had five. Various calculation techniques were devised for the suanpan, enabling efficient calculations. Some schools teach students how to use it. In the long scroll Along the River During the Qingming Festival painted by Zhang Zeduan during the Song dynasty (960–1279), a suanpan is clearly visible beside an account book and doctor's prescriptions on the counter of an apothecary's (Feibao). The similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, given evidence of a trade relationship between the Roman Empire and China. However, no direct connection has been demonstrated, and the similarity of the abacuses may be coincidental, both ultimately arising from counting with five fingers per hand. Where the Roman model (like most modern Korean and Japanese) has 4 plus 1 bead per decimal place, the standard suanpan has 5 plus 2. Incidentally, this ancient Chinese calculation system 市用制 (Shì yòng zhì) allows use with a hexadecimal numeral system (or any base up to 18) which is used for traditional Chinese measures of weight (jīn (斤) and liǎng (兩)). (Instead of running on wires as in the Chinese, Korean, and Japanese models, the Roman model used grooves, presumably making arithmetic calculations much slower). Another possible source of the suanpan is Chinese counting rods, which operated with a decimal system but lacked the concept of zero as a placeholder.
The zero was probably introduced to the Chinese in the Tang dynasty (618–907) when travel in the Indian Ocean and the Middle East would have provided direct contact with India, allowing them to acquire the concept of zero and the decimal point from Indian merchants and mathematicians. India The Abhidharmakośabhāṣya of Vasubandhu (316–396), a Sanskrit work on Buddhist philosophy, says that the second-century CE philosopher Vasumitra said that "placing a wick (Sanskrit vartikā) on the number one (ekāṅka) means it is a one while placing the wick on the number hundred means it is called a hundred, and on the number one thousand means it is a thousand". It is unclear exactly what this arrangement may have been. Around the 5th century, Indian clerks were already finding new ways of recording the contents of the abacus. Hindu texts used the term śūnya (zero) to indicate the empty column on the abacus. Japan In Japan, the abacus is called soroban (, lit. "counting tray"). It was imported from China in the 14th century. It was probably in use by the working class a century or more before the ruling class adopted it, as the class structure obstructed such changes. The 1:4 abacus, which removes the seldom-used second and fifth bead, became popular in the 1940s. Today's Japanese abacus is a 1:4 type, four-bead abacus, introduced from China in the Muromachi era. It adopts the form of the upper deck one bead and the bottom four beads. The top bead on the upper deck was equal to five and the bottom one is similar to the Chinese or Korean abacus, and the decimal number can be expressed, so the abacus is designed as a 1:4 device. The beads are always in the shape of a diamond. The quotient division is generally used instead of the division method; at the same time, in order to make the multiplication and division digits consistently use the division multiplication. Later, Japan had a 3:5 abacus called 天三算盤, which is now in the Ize Rongji collection of Shansi Village in Yamagata City. Japan also used a 2:5 type abacus. The four-bead abacus spread, and became common around the world. Improvements to the Japanese abacus arose in various places. In China, an abacus with an aluminium frame and plastic beads has been used. The file is next to the four beads, and pressing the "clearing" button puts the upper bead in the upper position, and the lower bead in the lower position. The abacus is still manufactured in Japan, despite the proliferation, practicality, and affordability of pocket electronic calculators. The use of the soroban is still taught in Japanese primary schools as part of mathematics, primarily as an aid to faster mental calculation. Using visual imagery, one can complete a calculation as quickly as with a physical instrument. Korea The Chinese abacus migrated from China to Korea around 1400 AD. Koreans call it jupan (주판), supan (수판) or jusan (주산). The four-beads abacus (1:4) was introduced during the Goryeo Dynasty. The 5:1 abacus was introduced to Korea from China during the Ming Dynasty. Native America Some sources mention the use of an abacus called a nepohualtzintzin in ancient Aztec culture. This Mesoamerican abacus used a 5-digit base-20 system. The word Nepōhualtzintzin comes from Nahuatl, formed by the roots; Ne – personal -; pōhual or pōhualli – the account -; and tzintzin – small similar elements. Its complete meaning was taken as: counting with small similar elements. 
Its use was taught in the Calmecac to the temalpouhqueh, students dedicated from childhood to keeping the accounts of the skies. The Nepōhualtzintzin was divided into two main parts separated by a bar or intermediate cord. In the left part were four beads; beads in the first row had unitary values (1, 2, 3, and 4), while on the right side three beads had values of 5, 10, and 15, respectively. To find the value of a bead in one of the upper rows, the value of the corresponding bead in the first row is multiplied by 20 for each row above the first. The device featured 13 rows of 7 beads, 91 in total. This was a basic number for this culture. It had a close relation to natural phenomena, the underworld, and the cycles of the heavens. One Nepōhualtzintzin (91) represented the number of days that a season of the year lasts, two Nepōhualtzintzin (182) the number of days of the corn's cycle from sowing to harvest, three Nepōhualtzintzin (273) the number of days of a baby's gestation, and four Nepōhualtzintzin (364) completed a cycle and approximated one year. When translated into modern computer arithmetic, the Nepōhualtzintzin spanned a range of roughly 10 to the 18th power in floating point, which allowed large and small amounts to be calculated precisely, although rounding off was not allowed. The rediscovery of the Nepōhualtzintzin was due to the Mexican engineer David Esparza Hidalgo, who in his travels throughout Mexico found diverse engravings and paintings of this instrument and reconstructed several of them in gold, jade, encrustations of shell, etc. Very old Nepōhualtzintzin are attributed to the Olmec culture, as are some bracelets of Mayan origin, as well as a diversity of forms and materials in other cultures. Sanchez wrote in Arithmetic in Maya that another base 5, base 4 abacus had been found in the Yucatán Peninsula that also computed calendar data. This was a finger abacus: on one hand 0, 1, 2, 3, and 4 were used, and on the other hand 0, 1, 2, and 3 were used. Note the use of zero at the beginning and end of the two cycles. The quipu of the Incas was a system of colored knotted cords used to record numerical data, like advanced tally sticks – but not used to perform calculations. Calculations were carried out using a yupana (Quechua for "counting tool"), which was still in use after the conquest of Peru. The working principle of a yupana is unknown, but in 2001 the Italian mathematician De Pasquale proposed an explanation. By comparing the form of several yupanas, researchers found that calculations were based on the Fibonacci sequence 1, 1, 2, 3, 5 and on powers of 10, 20, and 40 as place values for the different fields in the instrument. Using the Fibonacci sequence would keep the number of grains within any one field at a minimum. Russia The Russian abacus, the schoty (a plural noun derived from a Russian word for counting), usually has a single slanted deck, with ten beads on each wire (except one wire with four beads for quarter-ruble fractions). This four-bead wire was introduced for quarter-kopeks, which were minted until 1916. The Russian abacus is used vertically, with each wire running horizontally. The wires are usually bowed upward in the center, to keep the beads pinned to either side. It is cleared when all the beads are moved to the right. During manipulation, beads are moved to the left. For easy viewing, the middle two beads on each wire (the 5th and 6th bead) are usually of a different color from the other eight.
Likewise, the left bead of the thousands wire (and the million wire, if present) may have a different color. The Russian abacus was in use in shops and markets throughout the former Soviet Union, and its usage was taught in most schools until the 1990s. Even the 1874 invention of the mechanical calculator, the Odhner arithmometer, did not replace them in Russia. According to Yakov Perelman, some businessmen attempting to import calculators into the Russian Empire were known to leave in despair after watching a skilled abacus operator. Likewise, the mass production of Felix arithmometers from 1924 onward did not significantly reduce abacus use in the Soviet Union. The Russian abacus began to lose popularity only after the mass production of domestic microcalculators in 1974. The Russian abacus was brought to France around 1820 by the mathematician Jean-Victor Poncelet, who had served in Napoleon's army and had been a prisoner of war in Russia. The abacus had fallen out of use in western Europe in the 16th century with the rise of decimal notation and algorismic methods. To Poncelet's French contemporaries, it was something new. Poncelet used it, not for any applied purpose, but as a teaching and demonstration aid. The Turks and the Armenian people used abacuses similar to the Russian schoty. It was named a coulba by the Turks and a choreb by the Armenians. School abacus Around the world, abacuses have been used in pre-schools and elementary schools as an aid in teaching the numeral system and arithmetic. In Western countries, a bead frame similar to the Russian abacus but with straight wires and a vertical frame is common. The wire frame may be used either with positional notation like other abacuses (thus the 10-wire version may represent numbers up to 9,999,999,999), or each bead may represent one unit (e.g. 74 can be represented by shifting all beads on 7 wires and 4 beads on the 8th wire, so numbers up to 100 may be represented). In such a bead frame, a gap between the 5th and 6th wire, corresponding to a color change between the 5th and the 6th bead on each wire, suggests the latter use. In teaching multiplication, e.g. 6 times 7 may be represented by shifting 7 beads on each of 6 wires. The red-and-white abacus is used in contemporary primary schools for a wide range of number-related lessons. The twenty-bead version, referred to by its Dutch name rekenrek ("calculating frame"), is often used, either on a string of beads or on a rigid framework. Feynman vs the abacus Physicist Richard Feynman was noted for his facility in mathematical calculations. He wrote about an encounter in Brazil with a Japanese abacus expert, who challenged him to speed contests pitting Feynman's pen and paper against the abacus. The abacus was much faster for addition, somewhat faster for multiplication, but Feynman was faster at division. When the abacus was used for more complex operations, such as cube roots, Feynman won easily. However, the number chosen at random was close to a number Feynman happened to know was an exact cube, allowing him to use approximate methods. Neurological analysis Learning how to calculate with the abacus may improve capacity for mental calculation. Abacus-based mental calculation (AMC) is the act of performing calculations, including addition, subtraction, multiplication, and division, in the mind by manipulating an imagined abacus. It is a high-level cognitive skill that runs calculations with an effective algorithm.
People doing long-term AMC training show higher numerical memory capacity and experience more effectively connected neural pathways. They are able to retrieve memory to deal with complex processes. AMC involves both visuospatial and visuomotor processing that generate the visual abacus and move the imaginary beads. Since it only requires that the final position of the beads be remembered, it takes less memory and less computation time. Binary abacus The binary abacus is used to explain how computers manipulate numbers. The abacus shows how numbers, letters, and signs can be stored in a binary system on a computer, or via ASCII. The device consists of a series of beads on parallel wires arranged in three separate rows. The beads represent a switch on the computer in either an "on" or "off" position; a minimal sketch of this bead-to-bit mapping is given at the end of this section. Visually impaired users An adapted abacus, invented by Tim Cranmer and called the Cranmer abacus, is commonly used by visually impaired users. A piece of soft fabric or rubber is placed behind the beads, keeping them in place while the users manipulate them. The device is then used to perform the mathematical functions of multiplication, division, addition, subtraction, square root, and cube root. Although blind students have benefited from talking calculators, the abacus is often taught to these students in early grades. Blind students can also complete mathematical assignments using a braille-writer and Nemeth code (a type of braille code for mathematics), but large multiplication and long division problems are tedious by those means. The abacus gives these students a tool for computing mathematical problems at a speed and with a command of mathematics that match those of their sighted peers using pencil and paper. Many blind people find it a useful tool throughout life.
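As a rough, hedged illustration of the binary abacus described above (the three-row layout is not modelled; the 8-bit width, symbols, and helper names are assumptions made purely for illustration, not anything specified in the text), the following Python maps a character's ASCII code onto "on"/"off" bead positions:

```python
# Illustrative sketch only: shows how one row of a binary abacus can
# represent an ASCII code, with '#' for a bead in the "on" position and
# '.' for "off". The 8-bit width and helper names are assumptions.

def beads_for_char(ch: str, width: int = 8) -> str:
    """Return one bead row for a character: '#' = bead on, '.' = bead off."""
    code = ord(ch)                      # ASCII/Unicode code point
    bits = format(code, f"0{width}b")   # fixed-width binary string
    return "".join("#" if b == "1" else "." for b in bits)

def show_word(word: str) -> None:
    for ch in word:
        print(f"{ch!r} = {ord(ch):3d} -> {beads_for_char(ch)}")

show_word("abc")
# 'a' =  97 -> .##....#
# 'b' =  98 -> .##...#.
# 'c' =  99 -> .##...##
```

Each printed row corresponds to one wire of beads; reading the "on" beads as 1s and the "off" beads as 0s recovers the stored character code.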
Technology
Basics_3
null
656
https://en.wikipedia.org/wiki/Acid
Acid
An acid is a molecule or ion capable of either donating a proton (i.e. hydrogen ion, H+), known as a Brønsted–Lowry acid, or forming a covalent bond with an electron pair, known as a Lewis acid. The first category of acids comprises the proton donors, or Brønsted–Lowry acids. In the special case of aqueous solutions, proton donors form the hydronium ion H3O+ and are known as Arrhenius acids. Brønsted and Lowry generalized the Arrhenius theory to include non-aqueous solvents. A Brønsted or Arrhenius acid usually contains a hydrogen atom bonded to a chemical structure that is still energetically favorable after loss of H+. Aqueous Arrhenius acids have characteristic properties that provide a practical description of an acid. Acids form aqueous solutions with a sour taste, can turn blue litmus red, and react with bases and certain metals (like calcium) to form salts. The word acid is derived from the Latin acidus, meaning 'sour'. An aqueous solution of an acid has a pH less than 7 and is colloquially also referred to as "acid" (as in "dissolved in acid"), while the strict definition refers only to the solute. A lower pH means a higher acidity, and thus a higher concentration of positive hydrogen ions in the solution. Chemicals or substances having the property of an acid are said to be acidic. Common aqueous acids include hydrochloric acid (a solution of hydrogen chloride that is found in gastric acid in the stomach and activates digestive enzymes), acetic acid (vinegar is a dilute aqueous solution of this liquid), sulfuric acid (used in car batteries), and citric acid (found in citrus fruits). As these examples show, acids (in the colloquial sense) can be solutions or pure substances, and can be derived from acids (in the strict sense) that are solids, liquids, or gases. Strong acids and some concentrated weak acids are corrosive, but there are exceptions such as carboranes and boric acid. The second category of acids comprises the Lewis acids, which form a covalent bond with an electron pair. An example is boron trifluoride (BF3), whose boron atom has a vacant orbital that can form a covalent bond by sharing a lone pair of electrons on an atom in a base, for example the nitrogen atom in ammonia (NH3). Lewis considered this a generalization of the Brønsted definition, so that an acid is a chemical species that accepts electron pairs either directly or by releasing protons (H+) into the solution, which then accept electron pairs. Hydrogen chloride, acetic acid, and most other Brønsted–Lowry acids cannot form a covalent bond with an electron pair, however, and are therefore not Lewis acids. Conversely, many Lewis acids are not Arrhenius or Brønsted–Lowry acids. In modern terminology, an acid is implicitly a Brønsted acid and not a Lewis acid, since chemists almost always refer to a Lewis acid explicitly as such. Definitions and concepts Modern definitions are concerned with the fundamental chemical reactions common to all acids. Most acids encountered in everyday life are aqueous solutions, or can be dissolved in water, so the Arrhenius and Brønsted–Lowry definitions are the most relevant. The Brønsted–Lowry definition is the most widely used definition; unless otherwise specified, acid–base reactions are assumed to involve the transfer of a proton (H+) from an acid to a base. Hydronium ions are acids according to all three definitions. Although alcohols and amines can be Brønsted–Lowry acids, they can also function as Lewis bases due to the lone pairs of electrons on their oxygen and nitrogen atoms.
Arrhenius acids In 1884, Svante Arrhenius attributed the properties of acidity to hydrogen ions (H+), later described as protons or hydrons. An Arrhenius acid is a substance that, when added to water, increases the concentration of H+ ions in the water. Chemists often write H+(aq) and refer to the hydrogen ion when describing acid–base reactions, but the free hydrogen nucleus, a proton, does not exist alone in water; it exists as the hydronium ion (H3O+) or other forms (H5O2+, H9O4+). Thus, an Arrhenius acid can also be described as a substance that increases the concentration of hydronium ions when added to water. Examples include molecular substances such as hydrogen chloride and acetic acid. An Arrhenius base, on the other hand, is a substance that increases the concentration of hydroxide (OH−) ions when dissolved in water. This decreases the concentration of hydronium because the ions react to form H2O molecules: H3O+ + OH− ⇌ H2O(l) + H2O(l) Due to this equilibrium, any increase in the concentration of hydronium is accompanied by a decrease in the concentration of hydroxide. Thus, an Arrhenius acid could also be said to be one that decreases hydroxide concentration, while an Arrhenius base increases it. In an acidic solution, the concentration of hydronium ions is greater than 10^−7 moles per liter. Since pH is defined as the negative logarithm of the concentration of hydronium ions, acidic solutions thus have a pH of less than 7. Brønsted–Lowry acids While the Arrhenius concept is useful for describing many reactions, it is also quite limited in its scope. In 1923, the chemists Johannes Nicolaus Brønsted and Thomas Martin Lowry independently recognized that acid–base reactions involve the transfer of a proton. A Brønsted–Lowry acid (or simply Brønsted acid) is a species that donates a proton to a Brønsted–Lowry base. Brønsted–Lowry acid–base theory has several advantages over Arrhenius theory. Consider the following reactions of acetic acid (CH3COOH), the organic acid that gives vinegar its characteristic taste: CH3COOH + H2O ⇌ CH3COO− + H3O+ CH3COOH + NH3 ⇌ CH3COO− + NH4+ Both theories easily describe the first reaction: CH3COOH acts as an Arrhenius acid because it acts as a source of H3O+ when dissolved in water, and it acts as a Brønsted acid by donating a proton to water. In the second example CH3COOH undergoes the same transformation, in this case donating a proton to ammonia (NH3), but the reaction does not relate to the Arrhenius definition of an acid because it does not produce hydronium. Nevertheless, CH3COOH is both an Arrhenius and a Brønsted–Lowry acid. Brønsted–Lowry theory can be used to describe reactions of molecular compounds in nonaqueous solution or the gas phase. Hydrogen chloride (HCl) and ammonia combine under several different conditions to form ammonium chloride, NH4Cl. In aqueous solution HCl behaves as hydrochloric acid and exists as hydronium and chloride ions. The following reactions illustrate the limitations of Arrhenius's definition: H3O+ + Cl− + NH3 → Cl− + NH4+(aq) + H2O HCl(benzene) + NH3(benzene) → NH4Cl(s) HCl(g) + NH3(g) → NH4Cl(s) As with the acetic acid reactions, both definitions work for the first example, where water is the solvent and hydronium ion is formed by the HCl solute. The next two reactions do not involve the formation of ions but are still proton-transfer reactions. In the second reaction hydrogen chloride and ammonia (dissolved in benzene) react to form solid ammonium chloride in a benzene solvent and in the third gaseous HCl and NH3 combine to form the solid.
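As a minimal numerical illustration of the pH definition used in the Arrhenius discussion above (the concentrations below are arbitrary example values, not data from the text), the conversion between hydronium concentration and pH can be sketched in Python:

```python
# Minimal sketch: pH is the negative base-10 logarithm of the hydronium ion
# concentration, so acidic solutions ([H3O+] > 1e-7 mol/L) have pH below 7.
# The concentrations used here are arbitrary illustrative values.
import math

def ph_from_hydronium(conc_mol_per_l: float) -> float:
    return -math.log10(conc_mol_per_l)

def hydronium_from_ph(ph: float) -> float:
    return 10 ** (-ph)

for conc in (1e-3, 1e-7, 1e-9):
    print(f"[H3O+] = {conc:.0e} mol/L -> pH = {ph_from_hydronium(conc):.2f}")
# [H3O+] = 1e-03 mol/L -> pH = 3.00   (acidic)
# [H3O+] = 1e-07 mol/L -> pH = 7.00   (neutral)
# [H3O+] = 1e-09 mol/L -> pH = 9.00   (basic)
```

The same pair of functions also reflects the later point that a stronger aqueous acid solution simply has a higher hydronium concentration and therefore a lower pH.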
Lewis acids A third, only marginally related concept was proposed in 1923 by Gilbert N. Lewis, which includes reactions with acid–base characteristics that do not involve a proton transfer. A Lewis acid is a species that accepts a pair of electrons from another species; in other words, it is an electron pair acceptor. Brønsted acid–base reactions are proton transfer reactions while Lewis acid–base reactions are electron pair transfers. Many Lewis acids are not Brønsted–Lowry acids. Contrast how the following reactions are described in terms of acid–base chemistry: BF3 + F− → BF4− NH3 + H+ → NH4+ In the first reaction a fluoride ion, F−, gives up an electron pair to boron trifluoride to form the product tetrafluoroborate. Fluoride "loses" a pair of valence electrons because the electrons shared in the B—F bond are located in the region of space between the two atomic nuclei and are therefore more distant from the fluoride nucleus than they are in the lone fluoride ion. BF3 is a Lewis acid because it accepts the electron pair from fluoride. This reaction cannot be described in terms of Brønsted theory because there is no proton transfer. The second reaction can be described using either theory. A proton is transferred from an unspecified Brønsted acid to ammonia, a Brønsted base; alternatively, ammonia acts as a Lewis base and transfers a lone pair of electrons to form a bond with a hydrogen ion. The species that gains the electron pair is the Lewis acid; for example, the oxygen atom in H3O+ gains a pair of electrons when one of the H—O bonds is broken and the electrons shared in the bond become localized on oxygen. Depending on the context, a Lewis acid may also be described as an oxidizer or an electrophile. Organic Brønsted acids, such as acetic, citric, or oxalic acid, are not Lewis acids. They dissociate in water to produce a Lewis acid, H+, but at the same time, they also yield an equal amount of a Lewis base (acetate, citrate, or oxalate, respectively, for the acids mentioned). This article deals mostly with Brønsted acids rather than Lewis acids. Dissociation and equilibrium Reactions of acids are often generalized in the form HA ⇌ H+ + A−, where HA represents the acid and A− is the conjugate base. This reaction is referred to as protolysis. The protonated form (HA) of an acid is also sometimes referred to as the free acid. Acid–base conjugate pairs differ by one proton, and can be interconverted by the addition or removal of a proton (protonation and deprotonation, respectively). The acid can be the charged species and the conjugate base can be neutral, in which case the generalized reaction scheme could be written as HA+ ⇌ H+ + A. In solution there exists an equilibrium between the acid and its conjugate base. The equilibrium constant K is an expression of the equilibrium concentrations of the molecules or the ions in solution. Brackets indicate concentration, such that [H2O] means the concentration of H2O. The acid dissociation constant Ka is generally used in the context of acid–base reactions. The numerical value of Ka is equal to the product of the concentrations of the products divided by the concentration of the reactants, where the reactant is the acid (HA) and the products are the conjugate base and H+; that is, Ka = [A−][H+] / [HA]. The stronger of two acids will have a higher Ka than the weaker acid; the ratio of hydrogen ions to acid will be higher for the stronger acid as the stronger acid has a greater tendency to lose its proton.
Because the range of possible values for Ka spans many orders of magnitude, a more manageable constant, pKa, is more frequently used, where pKa = −log10 Ka. Stronger acids have a smaller pKa than weaker acids. Experimentally determined pKa values at 25 °C in aqueous solution are often quoted in textbooks and reference material. Nomenclature Arrhenius acids are named according to their anions. In the classical naming system, the ionic suffix is dropped and replaced with a new suffix, according to the scheme summarized below. The prefix "hydro-" is used when the acid is made up of just hydrogen and one other element. For example, HCl has chloride as its anion, so the hydro- prefix is used, and the -ide suffix makes the name take the form hydrochloric acid. Classical naming system: an anion ending in -ide takes the hydro- prefix and the -ic acid suffix (chloride gives hydrochloric acid); an -ate anion gives an -ic acid (sulfate gives sulfuric acid); an -ite anion gives an -ous acid (sulfite gives sulfurous acid); a per-...-ate anion gives a per-...-ic acid (perchlorate gives perchloric acid); and a hypo-...-ite anion gives a hypo-...-ous acid (hypochlorite gives hypochlorous acid). In the IUPAC naming system, "aqueous" is simply added to the name of the ionic compound. Thus, for hydrogen chloride, as an acid solution, the IUPAC name is aqueous hydrogen chloride. Acid strength The strength of an acid refers to its ability or tendency to lose a proton. A strong acid is one that completely dissociates in water; in other words, one mole of a strong acid HA dissolves in water yielding one mole of H+ and one mole of the conjugate base, A−, and none of the protonated acid HA. In contrast, a weak acid only partially dissociates and at equilibrium both the acid and the conjugate base are in solution. Examples of strong acids are hydrochloric acid (HCl), hydroiodic acid (HI), hydrobromic acid (HBr), perchloric acid (HClO4), nitric acid (HNO3) and sulfuric acid (H2SO4). In water each of these essentially ionizes 100%. The stronger an acid is, the more easily it loses a proton, H+. Two key factors that contribute to the ease of deprotonation are the polarity of the H—A bond and the size of atom A, which determines the strength of the H—A bond. Acid strengths are also often discussed in terms of the stability of the conjugate base. Stronger acids have a larger acid dissociation constant, Ka, and a lower pKa than weaker acids. Sulfonic acids, which are organic oxyacids, are a class of strong acids. A common example is toluenesulfonic acid (tosylic acid). Unlike sulfuric acid itself, sulfonic acids can be solids. In fact, polystyrene functionalized into polystyrene sulfonate is a solid, strongly acidic plastic that is filterable. Superacids are acids stronger than 100% sulfuric acid. Examples of superacids are fluoroantimonic acid, magic acid and perchloric acid. The strongest known acid is the helium hydride ion, with a proton affinity of 177.8 kJ/mol. Superacids can permanently protonate water to give ionic, crystalline hydronium "salts". They can also quantitatively stabilize carbocations. While Ka measures the strength of an acid compound, the strength of an aqueous acid solution is measured by pH, which is an indication of the concentration of hydronium in the solution. The pH of a simple solution of an acid compound in water is determined by the dilution of the compound and the compound's Ka. Lewis acid strength in non-aqueous solutions Lewis acids have been classified in the ECW model and it has been shown that there is no one order of acid strengths. The relative acceptor strength of Lewis acids toward a series of bases, versus other Lewis acids, can be illustrated by C-B plots. It has been shown that to define the order of Lewis acid strength at least two properties must be considered.
For Pearson's qualitative HSAB theory the two properties are hardness and strength, while for Drago's quantitative ECW model the two properties are electrostatic and covalent. Chemical characteristics Monoprotic acids Monoprotic acids, also known as monobasic acids, are those acids that are able to donate one proton per molecule during the process of dissociation (sometimes called ionization), as shown below (symbolized by HA): HA ⇌ H+ + A−     Ka Common examples of monoprotic acids among the mineral acids include hydrochloric acid (HCl) and nitric acid (HNO3). On the other hand, for organic acids the term mainly indicates the presence of one carboxylic acid group, and sometimes these acids are known as monocarboxylic acids. Examples of organic acids include formic acid (HCOOH), acetic acid (CH3COOH) and benzoic acid (C6H5COOH). Polyprotic acids Polyprotic acids, also known as polybasic acids, are able to donate more than one proton per acid molecule, in contrast to monoprotic acids that only donate one proton per molecule. Specific types of polyprotic acids have more specific names, such as diprotic (or dibasic) acid (two potential protons to donate), and triprotic (or tribasic) acid (three potential protons to donate). Some macromolecules such as proteins and nucleic acids can have a very large number of acidic protons. A diprotic acid (here symbolized by H2A) can undergo one or two dissociations depending on the pH. Each dissociation has its own dissociation constant, Ka1 and Ka2: H2A ⇌ H+ + HA−     Ka1 HA− ⇌ H+ + A2−     Ka2 The first dissociation constant is typically greater than the second (i.e., Ka1 > Ka2). For example, sulfuric acid (H2SO4) can donate one proton to form the bisulfate anion (HSO4−), for which Ka1 is very large; then it can donate a second proton to form the sulfate anion (SO42−), for which Ka2 is of intermediate strength. The large Ka1 for the first dissociation makes sulfuric a strong acid. In a similar manner, the weak, unstable carbonic acid can lose one proton to form the bicarbonate anion (HCO3−) and lose a second to form the carbonate anion (CO32−). Both Ka values are small, but Ka1 > Ka2. A triprotic acid (H3A) can undergo one, two, or three dissociations and has three dissociation constants, where Ka1 > Ka2 > Ka3: H3A ⇌ H+ + H2A−     Ka1 H2A− ⇌ H+ + HA2−     Ka2 HA2− ⇌ H+ + A3−     Ka3 An inorganic example of a triprotic acid is orthophosphoric acid (H3PO4), usually just called phosphoric acid. All three protons can be successively lost to yield H2PO4−, then HPO42−, and finally PO43−, the orthophosphate ion, usually just called phosphate. Even though the positions of the three protons on the original phosphoric acid molecule are equivalent, the successive Ka values differ since it is energetically less favorable to lose a proton if the conjugate base is more negatively charged. An organic example of a triprotic acid is citric acid, which can successively lose three protons to finally form the citrate ion. Although the subsequent loss of each hydrogen ion is less favorable, all of the conjugate bases are present in solution. The fractional concentration, α (alpha), for each species can be calculated. For example, a generic diprotic acid will generate 3 species in solution: H2A, HA−, and A2−. The fractional concentrations can be calculated as below when given either the pH (which can be converted to [H+]) or the concentrations of the acid and all its conjugate bases: α(H2A) = [H+]^2 / ([H+]^2 + K1[H+] + K1K2), α(HA−) = K1[H+] / ([H+]^2 + K1[H+] + K1K2), and α(A2−) = K1K2 / ([H+]^2 + K1[H+] + K1K2). A plot of these fractional concentrations against pH, for given K1 and K2, is known as a Bjerrum plot.
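As a small sketch of how the fractional-concentration expressions above can be evaluated (the Ka values chosen here are arbitrary illustrative numbers, not data taken from the text), the following Python computes the three α values across a range of pH, which is exactly the data a Bjerrum plot displays:

```python
# Sketch: speciation of a generic diprotic acid H2A as a function of pH,
# evaluating the closed-form alpha expressions given above.
# Ka1 and Ka2 are arbitrary illustrative values, not measured constants.

def diprotic_fractions(ph: float, ka1: float, ka2: float):
    """Return (alpha_H2A, alpha_HA-, alpha_A2-) at the given pH."""
    h = 10 ** (-ph)                      # [H+] recovered from pH
    denom = h * h + ka1 * h + ka1 * ka2  # common denominator of all three terms
    return h * h / denom, ka1 * h / denom, ka1 * ka2 / denom

ka1, ka2 = 1e-3, 1e-7                    # hypothetical dissociation constants
for ph in range(0, 15, 2):
    a_h2a, a_ha, a_a = diprotic_fractions(ph, ka1, ka2)
    print(f"pH {ph:2d}: H2A {a_h2a:.3f}  HA- {a_ha:.3f}  A2- {a_a:.3f}")
# Plotting these three fractions against pH reproduces a Bjerrum plot; the
# crossover points fall near pH = pKa1 (3) and pH = pKa2 (7), as expected.
```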
A pattern is observed in the above equations and can be expanded to the general n-protic acid that has been deprotonated i times: the corresponding fraction is [H+]^(n−i) multiplied by K0K1...Ki, divided by the sum over k = 0 to n of [H+]^(n−k) multiplied by K0K1...Kk, where K0 = 1 and the other K-terms are the successive dissociation constants for the acid. Neutralization Neutralization is the reaction between an acid and a base, producing a salt and neutralized base; for example, hydrochloric acid and sodium hydroxide form sodium chloride and water: HCl(aq) + NaOH(aq) → H2O(l) + NaCl(aq) Neutralization is the basis of titration, where a pH indicator shows the equivalence point when the equivalent number of moles of a base have been added to an acid. It is often wrongly assumed that neutralization should result in a solution with pH 7.0, which is only the case when the acid and base have similar strengths. Neutralization with a base weaker than the acid results in a weakly acidic salt. An example is the weakly acidic ammonium chloride, which is produced from the strong acid hydrogen chloride and the weak base ammonia. Conversely, neutralizing a weak acid with a strong base gives a weakly basic salt (e.g., sodium fluoride from hydrogen fluoride and sodium hydroxide). Weak acid–weak base equilibrium In order for a protonated acid to lose a proton, the pH of the system must rise above the pKa of the acid. The decreased concentration of H+ in that basic solution shifts the equilibrium towards the conjugate base form (the deprotonated form of the acid). In lower-pH (more acidic) solutions, there is a high enough H+ concentration in the solution to cause the acid to remain in its protonated form. Solutions of weak acids and salts of their conjugate bases form buffer solutions. Titration To determine the concentration of an acid in an aqueous solution, an acid–base titration is commonly performed. A strong base solution with a known concentration, usually NaOH or KOH, is added to neutralize the acid solution, and the equivalence point is judged from the color change of an indicator as the base is added. The titration curve of an acid titrated by a base has two axes, with the base volume on the x-axis and the solution's pH value on the y-axis. The pH of the solution always goes up as the base is added to the solution. Example: Diprotic acid For each diprotic acid titration curve, from left to right, there are two midpoints, two equivalence points, and two buffer regions. Equivalence points Due to the successive dissociation processes, there are two equivalence points in the titration curve of a diprotic acid. The first equivalence point occurs when all the hydrogen ions from the first ionization have been titrated. In other words, the amount of OH− added equals the original amount of H2A at the first equivalence point. The second equivalence point occurs when all hydrogen ions have been titrated. Therefore, the amount of OH− added equals twice the amount of H2A at this point. For a weak diprotic acid titrated by a strong base, the second equivalence point must occur at a pH above 7 due to the hydrolysis of the resulting salts in the solution. At either equivalence point, adding a drop of base will cause the steepest rise of the pH value in the system. Buffer regions and midpoints A titration curve for a diprotic acid contains two midpoints where pH = pKa. Since there are two different Ka values, the first midpoint occurs at pH = pKa1 and the second one occurs at pH = pKa2. Each segment of the curve that contains a midpoint at its center is called a buffer region.
Because a buffer region consists of the acid and its conjugate base, it can resist pH changes when base is added, until the next equivalence point. Applications of acids In industry Acids are fundamental reagents in almost all processes in modern industry. Sulfuric acid, a diprotic acid, is the most widely used acid in industry, and is also the most-produced industrial chemical in the world. It is mainly used in producing fertilizers, detergents, batteries and dyes, as well as in processing many products, for example in removing impurities. According to 2011 statistics, the annual world production of sulfuric acid was around 200 million tonnes. For example, phosphate minerals react with sulfuric acid to produce phosphoric acid for the production of phosphate fertilizers, and zinc is produced by dissolving zinc oxide in sulfuric acid, purifying the solution and electrowinning. In the chemical industry, acids react in neutralization reactions to produce salts. For example, nitric acid reacts with ammonia to produce ammonium nitrate, a fertilizer. Additionally, carboxylic acids can be esterified with alcohols to produce esters. Acids are often used to remove rust and other corrosion from metals in a process known as pickling. They may be used as an electrolyte in a wet cell battery, such as sulfuric acid in a car battery. In food Tartaric acid is an important component of some commonly used foods like unripened mangoes and tamarind. Natural fruits and vegetables also contain acids. Citric acid is present in oranges, lemons and other citrus fruits. Oxalic acid is present in tomatoes, spinach, and especially in carambola and rhubarb; rhubarb leaves and unripe carambolas are toxic because of high concentrations of oxalic acid. Ascorbic acid (vitamin C) is an essential vitamin for the human body and is present in such foods as amla (Indian gooseberry), lemon, citrus fruits, and guava. Many acids can be found in various kinds of food as additives, as they alter taste and serve as preservatives. Phosphoric acid, for example, is a component of cola drinks. Acetic acid is used in day-to-day life as vinegar. Citric acid is used as a preservative in sauces and pickles. Carbonic acid is one of the most common acid additives in soft drinks. During the manufacturing process, CO2 is usually pressurized to dissolve in these drinks to generate carbonic acid. Carbonic acid is very unstable and tends to decompose into water and CO2 at room temperature and pressure. Therefore, when bottles or cans of these kinds of soft drinks are opened, the soft drinks fizz and effervesce as CO2 bubbles come out. Certain acids are used as drugs. Acetylsalicylic acid (aspirin) is used as a painkiller and for bringing down fevers. In human bodies Acids play important roles in the human body. The hydrochloric acid present in the stomach aids digestion by breaking down large and complex food molecules. Amino acids are required for the synthesis of proteins required for growth and repair of body tissues. Fatty acids are also required for growth and repair of body tissues. Nucleic acids are important for the manufacture of DNA and RNA and for the transmission of traits to offspring through genes. Carbonic acid is important for maintenance of pH equilibrium in the body. Human bodies contain a variety of organic and inorganic compounds; among these, dicarboxylic acids play an essential role in many biological processes.
Many of those acids are amino acids, which mainly serve as materials for the synthesis of proteins. Other weak acids serve as buffers with their conjugate bases to keep the body's pH from undergoing large-scale changes that would be harmful to cells. The rest of the dicarboxylic acids also participate in the synthesis of various biologically important compounds in human bodies. Acid catalysis Acids are used as catalysts in industrial and organic chemistry; for example, sulfuric acid is used in very large quantities in the alkylation process to produce gasoline. Some acids, such as sulfuric, phosphoric, and hydrochloric acids, also effect dehydration and condensation reactions. In biochemistry, many enzymes employ acid catalysis. Biological occurrence Many biologically important molecules are acids. Nucleic acids, which contain acidic phosphate groups, include DNA and RNA. Nucleic acids contain the genetic code that determines many of an organism's characteristics, and this code is passed from parents to offspring. DNA contains the chemical blueprint for the synthesis of proteins, which are made up of amino acid subunits. Cell membranes contain fatty acid esters such as phospholipids. An α-amino acid has a central carbon (the α or alpha carbon) that is covalently bonded to a carboxyl group (thus they are carboxylic acids), an amino group, a hydrogen atom and a variable group. The variable group, also called the R group or side chain, determines the identity and many of the properties of a specific amino acid. In glycine, the simplest amino acid, the R group is a hydrogen atom, but in all other amino acids it contains one or more carbon atoms bonded to hydrogens, and may contain other elements such as sulfur, oxygen or nitrogen. With the exception of glycine, naturally occurring amino acids are chiral and almost invariably occur in the L-configuration. Peptidoglycan, found in some bacterial cell walls, contains some D-amino acids. At physiological pH, typically around 7, free amino acids exist in a charged form, where the acidic carboxyl group (-COOH) loses a proton (-COO−) and the basic amine group (-NH2) gains a proton (-NH3+). The entire molecule has a net neutral charge and is a zwitterion, with the exception of amino acids with basic or acidic side chains. Aspartic acid, for example, possesses one protonated amine and two deprotonated carboxyl groups, for a net charge of −1 at physiological pH. Fatty acids and fatty acid derivatives are another group of carboxylic acids that play a significant role in biology. These contain long hydrocarbon chains and a carboxylic acid group on one end. The cell membrane of nearly all organisms is primarily made up of a phospholipid bilayer, a micelle of hydrophobic fatty acid esters with polar, hydrophilic phosphate "head" groups. Membranes contain additional components, some of which can participate in acid–base reactions. In humans and many other animals, hydrochloric acid is a part of the gastric acid secreted within the stomach to help hydrolyze proteins and polysaccharides, as well as to convert the inactive pro-enzyme pepsinogen into the enzyme pepsin. Some organisms produce acids for defense; for example, ants produce formic acid. Acid–base equilibrium plays a critical role in regulating mammalian breathing. Oxygen gas (O2) drives cellular respiration, the process by which animals release the chemical potential energy stored in food, producing carbon dioxide (CO2) as a byproduct.
Oxygen and carbon dioxide are exchanged in the lungs, and the body responds to changing energy demands by adjusting the rate of ventilation. For example, during periods of exertion the body rapidly breaks down stored carbohydrates and fat, releasing CO2 into the blood stream. In aqueous solutions such as blood, CO2 exists in equilibrium with carbonic acid and the bicarbonate ion (CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3−), so a rise in dissolved CO2 lowers the blood's pH. It is this decrease in pH that signals the brain to breathe faster and deeper, expelling the excess CO2 and resupplying the cells with O2. Cell membranes are generally impermeable to charged or large, polar molecules because of the lipophilic fatty acyl chains that make up their interior. Many biologically important molecules, including a number of pharmaceutical agents, are organic weak acids that can cross the membrane in their protonated, uncharged form but not in their charged form (i.e., as the conjugate base). For this reason, the activity of many drugs can be enhanced or inhibited by the use of antacids or acidic foods. The charged form, however, is often more soluble in blood and cytosol, both aqueous environments. When the extracellular environment is more acidic than the neutral pH within the cell, certain acids will exist in their neutral form and will be membrane soluble, allowing them to cross the phospholipid bilayer. Acids that lose a proton at the intracellular pH will exist in their soluble, charged form and are thus able to diffuse through the cytosol to their target. Ibuprofen, aspirin and penicillin are examples of drugs that are weak acids. Common acids Mineral acids (inorganic acids) Hydrogen halides and their solutions: hydrofluoric acid (HF), hydrochloric acid (HCl), hydrobromic acid (HBr), hydroiodic acid (HI) Halogen oxoacids: hypochlorous acid (HClO), chlorous acid (HClO2), chloric acid (HClO3), perchloric acid (HClO4), and corresponding analogs for bromine and iodine Hypofluorous acid (HFO), the only known oxoacid of fluorine. Sulfuric acid (H2SO4) Fluorosulfuric acid (HSO3F) Nitric acid (HNO3) Phosphoric acid (H3PO4) Fluoroantimonic acid (HSbF6) Fluoroboric acid (HBF4) Hexafluorophosphoric acid (HPF6) Chromic acid (H2CrO4) Boric acid (H3BO3) Sulfonic acids A sulfonic acid has the general formula RS(=O)2–OH, where R is an organic radical. Methanesulfonic acid (or mesylic acid, CH3SO3H) Ethanesulfonic acid (or esylic acid, CH3CH2SO3H) Benzenesulfonic acid (or besylic acid, C6H5SO3H) p-Toluenesulfonic acid (or tosylic acid, CH3C6H4SO3H) Trifluoromethanesulfonic acid (or triflic acid, CF3SO3H) Polystyrene sulfonic acid (sulfonated polystyrene, [CH2CH(C6H4)SO3H]n) Carboxylic acids A carboxylic acid has the general formula R-C(O)OH, where R is an organic radical. The carboxyl group -C(O)OH contains a carbonyl group, C=O, and a hydroxyl group, O-H. Acetic acid (CH3COOH) Citric acid (C6H8O7) Formic acid (HCOOH) Gluconic acid (HOCH2-(CHOH)4-COOH) Lactic acid (CH3-CHOH-COOH) Oxalic acid (HOOC-COOH) Tartaric acid (HOOC-CHOH-CHOH-COOH) Halogenated carboxylic acids Halogenation at the alpha position increases acid strength, so that the following acids are all stronger than acetic acid: fluoroacetic acid, trifluoroacetic acid, chloroacetic acid, dichloroacetic acid, and trichloroacetic acid. Vinylogous carboxylic acids Normal carboxylic acids are the direct union of a carbonyl group and a hydroxyl group. In vinylogous carboxylic acids, a carbon-carbon double bond separates the carbonyl and hydroxyl groups; ascorbic acid is an example. Nucleic acids Deoxyribonucleic acid (DNA) and ribonucleic acid (RNA)
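To connect the list of common acids above with the earlier discussion of acid strength, the following Python sketch ranks a few of the weak acids by strength using rounded, approximate literature pKa values for the first dissociation; these numbers are supplied here purely for illustration and are not taken from this article:

```python
# Sketch: ranking some common weak acids by strength via approximate
# literature pKa values (first dissociation only). The values are rounded
# illustrative figures, not data from this article.

APPROX_PKA = {
    "phosphoric acid (H3PO4)": 2.15,
    "citric acid": 3.13,
    "hydrofluoric acid (HF)": 3.17,
    "formic acid (HCOOH)": 3.75,
    "benzoic acid (C6H5COOH)": 4.20,
    "acetic acid (CH3COOH)": 4.76,
    "carbonic acid (H2CO3)": 6.35,
}

def ka_from_pka(pka: float) -> float:
    # pKa = -log10(Ka), hence Ka = 10 ** (-pKa)
    return 10 ** (-pka)

# A smaller pKa (larger Ka) indicates the stronger acid, as stated earlier.
for name, pka in sorted(APPROX_PKA.items(), key=lambda kv: kv[1]):
    print(f"{name:28s} pKa ~ {pka:4.2f}  Ka ~ {ka_from_pka(pka):.2e}")
```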
Physical sciences
Inorganic compounds
null
657
https://en.wikipedia.org/wiki/Bitumen
Bitumen
Bitumen is an immensely viscous constituent of petroleum. Depending on its exact composition it can be a sticky, black liquid or an apparently solid mass that behaves as a liquid over very large time scales. In American English, the material is commonly referred to as asphalt. Whether found in natural deposits or refined from petroleum, the substance is classed as a pitch. Prior to the 20th century, the term asphaltum was in general use. That word derives from the Ancient Greek ásphaltos, which referred to natural bitumen or pitch. The largest natural deposit of bitumen in the world is the Pitch Lake of southwest Trinidad, which is estimated to contain 10 million tons. About 70% of annual bitumen production is destined for road construction, its primary use. In this application, bitumen is used to bind aggregate particles like gravel and forms a substance referred to as asphalt concrete, which is colloquially termed asphalt. Its other main uses lie in bituminous waterproofing products, such as roofing felt and roof sealant. In materials science and engineering, the terms asphalt and bitumen are often used interchangeably and refer both to natural and manufactured forms of the substance, although there is regional variation as to which term is most common. Worldwide, geologists tend to favor the term bitumen for the naturally occurring material. For the manufactured material, which is a refined residue from the distillation process of selected crude oils, bitumen is the prevalent term in much of the world; however, in American English, asphalt is more commonly used. To help avoid confusion, the terms "liquid asphalt", "asphalt binder", or "asphalt cement" are used in the U.S. to distinguish it from asphalt concrete. Colloquially, various forms of bitumen are sometimes referred to as "tar", as in the name of the La Brea Tar Pits. Naturally occurring bitumen is sometimes specified by the term crude bitumen. Its viscosity is similar to that of cold molasses, while the material obtained as the heaviest fraction from the fractional distillation of crude oil is sometimes referred to as "refined bitumen". The Canadian province of Alberta has most of the world's reserves of natural bitumen in the Athabasca oil sands, which cover an area larger than England. Terminology Etymology The Latin word traces to the Proto-Indo-European root *gʷet- "pitch". The expression "bitumen" originated in Sanskrit, where we find the words "jatu", meaning "pitch", and "jatu-krit", meaning "pitch creating" or "pitch producing" (referring to coniferous or resinous trees). The Latin equivalent is claimed by some to be originally "gwitu-men" (pertaining to pitch), and by others "pixtumens" (exuding or bubbling pitch), which was subsequently shortened to "bitumen", thence passing via French into English. From the same root are derived the Anglo-Saxon word "cwidu" (mastic), the German word "Kitt" (cement or mastic) and the Old Norse word "kvada". The word "asphalt" is claimed to have been derived from the Accadian term "asphaltu" or "sphallo", meaning "to split". It was later adopted by the Homeric Greeks in the form of the adjective ἄσφαλἤς, ἐς, signifying "firm", "stable", "secure", and the corresponding verb ἄσφαλίξω, ίσω, meaning "to make firm or stable", "to secure".
The word "asphalt" is derived from the late Middle English, in turn from French asphalte, based on Late Latin asphalton, asphaltum, which is the latinisation of the Greek (ásphaltos, ásphalton), a word meaning "asphalt/bitumen/pitch", which perhaps derives from , "not, without", i.e. the alpha privative, and (sphallein), "to cause to fall, baffle, (in passive) err, (in passive) be balked of". The first use of asphalt by the ancients was as a cement to secure or join various objects, and it thus seems likely that the name itself was expressive of this application. Specifically, Herodotus mentioned that bitumen was brought to Babylon to build its gigantic fortification wall. From the Greek, the word passed into late Latin, and thence into French (asphalte) and English ("asphaltum" and "asphalt"). In French, the term asphalte is used for naturally occurring asphalt-soaked limestone deposits, and for specialised manufactured products with fewer voids or greater bitumen content than the "asphaltic concrete" used to pave roads. Modern terminology Bitumen mixed with clay was usually called "asphaltum", but the term is less commonly used today. In American English, "asphalt" is equivalent to the British "bitumen". However, "asphalt" is also commonly used as a shortened form of "asphalt concrete" (therefore equivalent to the British "asphalt" or "tarmac"). In Canadian English, the word "bitumen" is used to refer to the vast Canadian deposits of extremely heavy crude oil, while "asphalt" is used for the oil refinery product. Diluted bitumen (diluted with naphtha to make it flow in pipelines) is known as "dilbit" in the Canadian petroleum industry, while bitumen "upgraded" to synthetic crude oil is known as "syncrude", and syncrude blended with bitumen is called "synbit". "Bitumen" is still the preferred geological term for naturally occurring deposits of the solid or semi-solid form of petroleum. "Bituminous rock" is a form of sandstone impregnated with bitumen. The oil sands of Alberta, Canada are a similar material. Neither of the terms "asphalt" or "bitumen" should be confused with tar or coal tars. Tar is the thick liquid product of the dry distillation and pyrolysis of organic hydrocarbons primarily sourced from vegetation masses, whether fossilized as with coal, or freshly harvested. The majority of bitumen, on the other hand, was formed naturally when vast quantities of organic animal materials were deposited by water and buried hundreds of metres deep at the diagenetic point, where the disorganized fatty hydrocarbon molecules joined in long chains in the absence of oxygen. Bitumen occurs as a solid or highly viscous liquid. It may even be mixed in with coal deposits. Bitumen, and coal using the Bergius process, can be refined into petrols such as gasoline, and bitumen may be distilled into tar, not the other way around. 
Composition Normal composition The components of bitumen include four main classes of compounds: naphthene aromatics (naphthalene), consisting of partially hydrogenated polycyclic aromatic compounds; polar aromatics, consisting of high molecular weight phenols and carboxylic acids produced by partial oxidation of the material; saturated hydrocarbons (the percentage of saturated compounds in asphalt correlates with its softening point); and asphaltenes, consisting of high molecular weight phenols and heterocyclic compounds. Bitumen typically contains, elementally, about 80% by weight of carbon, 10% hydrogen, and up to 6% sulfur; molecularly, it contains between 5 and 25% by weight of asphaltenes dispersed in 90% to 65% maltenes. Most natural bitumens also contain organosulfur compounds. Nickel and vanadium are found at <10 parts per million, as is typical of some petroleum. The substance is soluble in carbon disulfide. It is commonly modelled as a colloid, with asphaltenes as the dispersed phase and maltenes as the continuous phase. "It is almost impossible to separate and identify all the different molecules of bitumen, because the number of molecules with different chemical structure is extremely large". Asphalt may be confused with coal tar, which is a visually similar black, thermoplastic material produced by the destructive distillation of coal. During the early and mid-20th century, when town gas was produced, coal tar was a readily available byproduct and extensively used as the binder for road aggregates. The addition of coal tar to macadam roads led to the word "tarmac", which is now used in common parlance to refer to road-making materials. However, since the 1970s, when natural gas succeeded town gas, bitumen has completely overtaken the use of coal tar in these applications. Other examples of this confusion include La Brea Tar Pits and the Canadian tar sands, both of which actually contain natural bitumen rather than tar. "Pitch" is another term sometimes used informally to refer to asphalt, as in Pitch Lake. Additives, mixtures and contaminants For economic and other reasons, bitumen is sometimes sold combined with other materials, often without being labeled as anything other than simply "bitumen". Of particular note is the use of re-refined engine oil bottoms ("REOB" or "REOBs"), the residue of recycled automotive engine oil collected from the bottoms of re-refining vacuum distillation towers, in the manufacture of asphalt. REOB contains various elements and compounds found in recycled engine oil: additives to the original oil and materials accumulating from its circulation in the engine (typically iron and copper). Some research has indicated a correlation between this adulteration of bitumen and poorer-performing pavement. Occurrence The majority of bitumen used commercially is obtained from petroleum. Nonetheless, large amounts of bitumen occur in concentrated form in nature. Naturally occurring deposits of bitumen are formed from the remains of ancient, microscopic algae (diatoms) and other once-living things. These natural deposits of bitumen were formed during the Carboniferous period, when giant swamp forests dominated many parts of the Earth. They were deposited in the mud on the bottom of the ocean or lake where the organisms lived. Under the heat (above 50 °C) and pressure of burial deep in the earth, the remains were transformed into materials such as bitumen, kerogen, or petroleum.
Natural deposits of bitumen include lakes such as the Pitch Lake in Trinidad and Tobago and Lake Bermudez in Venezuela. Natural seeps occur in the La Brea Tar Pits and the McKittrick Tar Pits in California, as well as in the Dead Sea. Bitumen also occurs in unconsolidated sandstones known as "oil sands" in Alberta, Canada, and the similar "tar sands" in Utah, US. The Canadian province of Alberta has most of the world's reserves, in three huge deposits covering an area larger than England or New York state. These bituminous sands contain enough commercially established oil reserves to give Canada the third largest oil reserves in the world. Although historically it was used without refining to pave roads, nearly all of the output is now used as raw material for oil refineries in Canada and the United States. The world's largest deposit of natural bitumen, known as the Athabasca oil sands, is located in the McMurray Formation of Northern Alberta. This formation is from the early Cretaceous, and is composed of numerous lenses of oil-bearing sand with up to 20% oil. Isotopic studies show the oil deposits to be about 110 million years old. Two smaller but still very large formations occur in the Peace River oil sands and the Cold Lake oil sands, to the west and southeast of the Athabasca oil sands, respectively. Of the Alberta deposits, only parts of the Athabasca oil sands are shallow enough to be suitable for surface mining. The other 80% has to be produced by oil wells using enhanced oil recovery techniques like steam-assisted gravity drainage. Much smaller heavy oil or bitumen deposits also occur in the Uinta Basin in Utah, US. The Tar Sand Triangle deposit, for example, is roughly 6% bitumen. Bitumen may also occur in hydrothermal veins. An example of this is within the Uinta Basin of Utah, in the US, where there is a swarm of laterally and vertically extensive veins composed of a solid hydrocarbon termed Gilsonite. These veins formed by the polymerization and solidification of hydrocarbons that were mobilized from the deeper oil shales of the Green River Formation during burial and diagenesis. Bitumen is similar to the organic matter in carbonaceous meteorites. However, detailed studies have shown these materials to be distinct. The vast Alberta bitumen resources are considered to have started out as living material from marine plants and animals, mainly algae, that died millions of years ago when an ancient ocean covered Alberta. They were covered by mud, buried deeply over time, and gently cooked into oil by geothermal heat. Due to pressure from the rising of the Rocky Mountains in southwestern Alberta 80 to 55 million years ago, the oil was driven northeast hundreds of kilometres and trapped in underground sand deposits left behind by ancient river beds and ocean beaches, thus forming the oil sands. History Paleolithic times Bitumen use goes back to the Middle Paleolithic, when it was shaped into tool handles or used as an adhesive for attaching stone tools to hafts. The earliest evidence of bitumen use was discovered when archeologists identified bitumen material on Levallois flint artefacts dating to about 71,000 years BP at the Umm el Tlel open-air site, located on the northern slope of the Qdeir Plateau in the el Kowm Basin in Central Syria. Microscopic analyses found bituminous residue on two-thirds of the stone artefacts, suggesting that bitumen was an important and frequently used component of tool making for people in that region at that time.
Geochemical analyses of the asphaltic residues place their source at localized natural bitumen outcroppings in the Bichri Massif, about 40 km northeast of the Umm el Tlel archeological site. A re-examination of artifacts uncovered in 1908 at the Le Moustier rock shelters in France has identified Mousterian stone tools that were attached to grips made of ochre and bitumen. The grips were formulated with 55% ground goethite ochre and 45% cooked liquid bitumen to create a moldable putty that hardened into handles. Earlier, less-careful excavations at Le Moustier prevent conclusive identification of the archaeological culture and age, but the European Mousterian style of these tools suggests they are associated with Neanderthals during the late Middle Paleolithic into the early Upper Paleolithic, between 60,000 and 35,000 years before present. It is the earliest evidence of a multicomponent adhesive in Europe. Ancient times The use of natural bitumen for waterproofing and as an adhesive dates at least to the fifth millennium BC, with a crop storage basket discovered in Mehrgarh, of the Indus Valley civilization, lined with it. By the 3rd millennium BC refined rock asphalt was in use in the region, and was used to waterproof the Great Bath in Mohenjo-daro. In the ancient Near East, the Sumerians used natural bitumen deposits for mortar between bricks and stones, to cement parts of carvings, such as eyes, into place, for ship caulking, and for waterproofing. The Greek historian Herodotus said hot bitumen was used as mortar in the walls of Babylon. The Euphrates Tunnel beneath the river Euphrates at Babylon, in the time of Queen Semiramis, was reportedly constructed of burnt bricks covered with bitumen as a waterproofing agent. Bitumen was used by ancient Egyptians to embalm mummies. The Persian word for asphalt is moom, which is related to the English word mummy. The Egyptians' primary source of bitumen was the Dead Sea, which the Romans knew as Palus Asphaltites (Asphalt Lake). In approximately 40 AD, Dioscorides described the Dead Sea material as Judaicum bitumen, and noted other places in the region where it could be found. The Sidon bitumen is thought to refer to material found at Hasbeya in Lebanon. Pliny also refers to bitumen being found in Epirus. Bitumen was a valuable strategic resource. It was the object of the first known battle for a hydrocarbon deposit – between the Seleucids and the Nabateans in 312 BC. In the ancient Far East, natural bitumen was slowly boiled to drive off the lighter fractions, leaving a thermoplastic material of higher molecular weight that, when layered on objects, became hard upon cooling. This was used to cover objects that needed waterproofing, such as scabbards and other items. Statuettes of household deities were also cast with this type of material in Japan, and probably also in China. In North America, archaeological recovery has indicated that bitumen was sometimes used to adhere stone projectile points to wooden shafts. In Canada, aboriginal people used bitumen seeping out of the banks of the Athabasca and other rivers to waterproof birch bark canoes, and also heated it in smudge pots to ward off mosquitoes in the summer. Bitumen was also used to waterproof plank canoes used by indigenous peoples in pre-colonial southern California. Continental Europe In 1553, Pierre Belon described in his work Observations that pissasphalto, a mixture of pitch and bitumen, was used in the Republic of Ragusa (now Dubrovnik, Croatia) for the tarring of ships.
An 1838 edition of Mechanics Magazine cites an early use of asphalt in France. A pamphlet dated 1621, by "a certain Monsieur d'Eyrinys, states that he had discovered the existence (of asphaltum) in large quantities in the vicinity of Neufchatel", and that he proposed to use it in a variety of ways – "principally in the construction of air-proof granaries, and in protecting, by means of the arches, the water-courses in the city of Paris from the intrusion of dirt and filth", which at that time made the water unusable. "He expatiates also on the excellence of this material for forming level and durable terraces" in palaces, "the notion of forming such terraces in the streets not one likely to cross the brain of a Parisian of that generation". But the substance was generally neglected in France until the revolution of 1830. In the 1830s there was a surge of interest, and asphalt became widely used "for pavements, flat roofs, and the lining of cisterns, and in England, some use of it had been made of it for similar purposes". Its rise in Europe was "a sudden phenomenon", after natural deposits were found "in France at Osbann (Bas-Rhin), the Parc (Ain) and the Puy-de-la-Poix (Puy-de-Dôme)", although it could also be made artificially. One of the earliest uses in France was the laying of about 24,000 square yards of Seyssel asphalt at the Place de la Concorde in 1835.
United Kingdom
Among the earlier uses of bitumen in the United Kingdom was etching. William Salmon's Polygraphice (1673) provides a recipe for varnish used in etching, consisting of three ounces of virgin wax, two ounces of mastic, and one ounce of asphaltum. By the fifth edition in 1685, he had included more asphaltum recipes from other sources. The first British patent for the use of asphalt was "Cassell's patent asphalte or bitumen" in 1834. Then on 25 November 1837, Richard Tappin Claridge patented the use of Seyssel asphalt (patent #7849) for use in asphalte pavement, having seen it employed in France and Belgium when visiting with Frederick Walter Simms, who worked with him on the introduction of asphalt to Britain. Dr T. Lamb Phipson writes that his father, Samuel Ryland Phipson, a friend of Claridge, was also "instrumental in introducing the asphalte pavement (in 1836)". Claridge obtained a patent in Scotland on 27 March 1838, and another in Ireland on 23 April 1838. In 1851, extensions for the 1837 patent and for both 1838 patents were sought by the trustees of a company previously formed by Claridge. Claridge's Patent Asphalte Company, formed in 1838 for the purpose of introducing to Britain "Asphalte in its natural state from the mine at Pyrimont Seysell in France", "laid one of the first asphalt pavements in Whitehall". Trials were made of the pavement in 1838 on the footway in Whitehall, the stable at Knightsbridge Barracks, "and subsequently on the space at the bottom of the steps leading from Waterloo Place to St. James Park". "The formation in 1838 of Claridge's Patent Asphalte Company (with a distinguished list of aristocratic patrons, and Marc and Isambard Brunel as, respectively, a trustee and consulting engineer), gave an enormous impetus to the development of a British asphalt industry". "By the end of 1838, at least two other companies, Robinson's and the Bastenne company, were in production", with asphalt being laid as paving at Brighton, Herne Bay, Canterbury, Kensington, the Strand, and a large floor area in Bunhill-row, while meantime Claridge's Whitehall paving "continue(d) in good order". 
The Bonnington Chemical Works manufactured asphalt using coal tar and by 1839 had installed it in Bonnington. In 1838, there was a flurry of entrepreneurial activity involving bitumen, which had uses beyond paving. For example, bitumen could also be used for flooring, damp proofing in buildings, and for waterproofing of various types of pools and baths, both of which were also proliferating in the 19th century. One of the earliest surviving examples of its use can be seen at Highgate Cemetery where it was used in 1839 to seal the roof of the terrace catacombs. On the London stockmarket, there were various claims as to the exclusivity of bitumen quality from France, Germany and England. And numerous patents were granted in France, with similar numbers of patent applications being denied in England due to their similarity to each other. In England, "Claridge's was the type most used in the 1840s and 50s". In 1914, Claridge's Company entered into a joint venture to produce tar-bound macadam, with materials manufactured through a subsidiary company called Clarmac Roads Ltd. Two products resulted, namely Clarmac, and Clarphalte, with the former being manufactured by Clarmac Roads and the latter by Claridge's Patent Asphalte Co., although Clarmac was more widely used. However, the First World War ruined the Clarmac Company, which entered into liquidation in 1915. The failure of Clarmac Roads Ltd had a flow-on effect to Claridge's Company, which was itself compulsorily wound up, ceasing operations in 1917, having invested a substantial amount of funds into the new venture, both at the outset and in a subsequent attempt to save the Clarmac Company. Bitumen was thought in 19th century Britain to contain chemicals with medicinal properties. Extracts from bitumen were used to treat catarrh and some forms of asthma and as a remedy against worms, especially the tapeworm. United States The first use of bitumen in the New World was by aboriginal peoples. On the west coast, as early as the 13th century, the Tongva, Luiseño and Chumash peoples collected the naturally occurring bitumen that seeped to the surface above underlying petroleum deposits. All three groups used the substance as an adhesive. It is found on many different artifacts of tools and ceremonial items. For example, it was used on rattles to adhere gourds or turtle shells to rattle handles. It was also used in decorations. Small round shell beads were often set in asphaltum to provide decorations. It was used as a sealant on baskets to make them watertight for carrying water, possibly poisoning those who drank the water. Asphalt was used also to seal the planks on ocean-going canoes. Asphalt was first used to pave streets in the 1870s. At first naturally occurring "bituminous rock" was used, such as at Ritchie Mines in Macfarlan in Ritchie County, West Virginia from 1852 to 1873. In 1876, asphalt-based paving was used to pave Pennsylvania Avenue in Washington DC, in time for the celebration of the national centennial. In the horse-drawn era, US streets were mostly unpaved and covered with dirt or gravel. Especially where mud or trenching often made streets difficult to pass, pavements were sometimes made of diverse materials including wooden planks, cobble stones or other stone blocks, or bricks. Unpaved roads produced uneven wear and hazards for pedestrians. In the late 19th century with the rise of the popular bicycle, bicycle clubs were important in pushing for more general pavement of streets. 
Advocacy for pavement increased in the early 20th century with the rise of the automobile. Asphalt gradually became an ever more common method of paving. St. Charles Avenue in New Orleans was paved its whole length with asphalt by 1889. In 1900, Manhattan alone had 130,000 horses, pulling streetcars, wagons, and carriages, and leaving their waste behind. They were not fast, and pedestrians could dodge and scramble their way across the crowded streets. Small towns continued to rely on dirt and gravel, but larger cities wanted much better streets. They looked to wood or granite blocks by the 1850s. In 1890, a third of Chicago's 2000 miles of streets were paved, chiefly with wooden blocks, which gave better traction than mud. Brick surfacing was a good compromise, but even better was asphalt paving, which was easy to install and to cut through to get at sewers. With London and Paris serving as models, Washington laid 400,000 square yards of asphalt paving by 1882; it became the model for Buffalo, Philadelphia and elsewhere. By the end of the century, American cities boasted 30 million square yards of asphalt paving, well ahead of brick. The streets became faster and more dangerous so electric traffic lights were installed. Electric trolleys (at 12 miles per hour) became the main transportation service for middle class shoppers and office workers until they bought automobiles after 1945 and commuted from more distant suburbs in privacy and comfort on asphalt highways. Canada Canada has the world's largest deposit of natural bitumen in the Athabasca oil sands, and Canadian First Nations along the Athabasca River had long used it to waterproof their canoes. In 1719, a Cree named Wa-Pa-Su brought a sample for trade to Henry Kelsey of the Hudson's Bay Company, who was the first recorded European to see it. However, it wasn't until 1787 that fur trader and explorer Alexander MacKenzie saw the Athabasca oil sands and said, "At about 24 miles from the fork (of the Athabasca and Clearwater Rivers) are some bituminous fountains into which a pole of 20 feet long may be inserted without the least resistance." The value of the deposit was obvious from the start, but the means of extracting the bitumen was not. The nearest town, Fort McMurray, Alberta, was a small fur trading post, other markets were far away, and transportation costs were too high to ship the raw bituminous sand for paving. In 1915, Sidney Ells of the Federal Mines Branch experimented with separation techniques and used the product to pave 600 feet of road in Edmonton, Alberta. Other roads in Alberta were paved with material extracted from oil sands, but it was generally not economic. During the 1920s Dr. Karl A. Clark of the Alberta Research Council patented a hot water oil separation process and entrepreneur Robert C. Fitzsimmons built the Bitumount oil separation plant, which between 1925 and 1958 produced up to per day of bitumen using Dr. Clark's method. Most of the bitumen was used for waterproofing roofs, but other uses included fuels, lubrication oils, printers ink, medicines, rust- and acid-proof paints, fireproof roofing, street paving, patent leather, and fence post preservatives. Eventually Fitzsimmons ran out of money and the plant was taken over by the Alberta government. Today the Bitumount plant is a Provincial Historic Site. Photography and art Bitumen was used in early photographic technology. In 1826, or 1827, it was used by French scientist Joseph Nicéphore Niépce to make the oldest surviving photograph from nature. 
The bitumen was thinly coated onto a pewter plate which was then exposed in a camera. Exposure to light hardened the bitumen and made it insoluble, so that when it was subsequently rinsed with a solvent only the sufficiently light-struck areas remained. Many hours of exposure in the camera were required, making bitumen impractical for ordinary photography, but from the 1850s to the 1920s it was in common use as a photoresist in the production of printing plates for various photomechanical printing processes. Bitumen was the nemesis of many artists during the 19th century. Although widely used for a time, it ultimately proved unstable for use in oil painting, especially when mixed with the most common diluents, such as linseed oil, varnish and turpentine. Unless thoroughly diluted, bitumen never fully solidifies and will in time corrupt the other pigments with which it comes into contact. The use of bitumen as a glaze to set in shadow or mixed with other colors to render a darker tone resulted in the eventual deterioration of many paintings, for instance those of Delacroix. Perhaps the most famous example of the destructiveness of bitumen is Théodore Géricault's Raft of the Medusa (1818–1819), where his use of bitumen caused the brilliant colors to degenerate into dark greens and blacks and the paint and canvas to buckle.
Modern use
Global use
The vast majority of refined bitumen is used in construction: primarily as a constituent of products used in paving and roofing applications. According to the requirements of the end use, bitumen is produced to specification. This is achieved either by refining or blending. It is estimated that the current world use of bitumen is approximately 102 million tonnes per year. Approximately 85% of all the bitumen produced is used as the binder in asphalt concrete for roads. It is also used in other paved areas such as airport runways, car parks and footways. Typically, the production of asphalt concrete involves mixing fine and coarse aggregates such as sand, gravel and crushed rock with asphalt, which acts as the binding agent. Other materials, such as recycled polymers (e.g., rubber tyres), may be added to the bitumen to modify its properties according to the application for which the bitumen is ultimately intended. A further 10% of global bitumen production is used in roofing applications, where its waterproofing qualities are invaluable. The remaining 5% of bitumen is used mainly for sealing and insulating purposes in a variety of building materials, such as pipe coatings, carpet tile backing and paint. Bitumen is applied in the construction and maintenance of many structures, systems, and components, such as highways, airport runways, footways and pedestrian ways, car parks, racetracks, tennis courts, roofing, damp proofing, dams, reservoir and pool linings, soundproofing, pipe coatings, cable coatings, paints, building water proofing, tile underlying waterproofing, and newspaper ink production.
Rolled asphalt concrete
The largest use of bitumen is for making asphalt concrete for road surfaces; this accounts for approximately 85% of the bitumen consumed in the United States. There are about 4,000 asphalt concrete mixing plants in the US, and a similar number in Europe. Asphalt concrete pavement mixes are typically composed of 5% bitumen (known as asphalt cement in the US) and 95% aggregates (stone, sand, and gravel). Due to its highly viscous nature, bitumen must be heated so it can be mixed with the aggregates at the asphalt mixing facility. 
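As a rough, back-of-the-envelope illustration of the mix proportions just described, the sketch below splits a given tonnage of hot-mix asphalt into binder and aggregate masses. The 5% binder fraction is the typical figure quoted above; the function name and the example tonnage are purely illustrative.

```python
# Minimal sketch: split a hot-mix asphalt batch into binder and aggregate
# using the roughly 5% bitumen / 95% aggregate proportion cited in the text.
# The function name, default fraction, and example tonnage are illustrative.

def batch_split(total_mix_tonnes: float, binder_fraction: float = 0.05) -> dict:
    """Approximate binder and aggregate masses in a mix batch (tonnes)."""
    binder = total_mix_tonnes * binder_fraction
    return {"binder_t": round(binder, 1),
            "aggregate_t": round(total_mix_tonnes - binder, 1)}

# Example: a 2,000-tonne paving job at 5% binder content
print(batch_split(2000))   # {'binder_t': 100.0, 'aggregate_t': 1900.0}
```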
The temperature required varies depending upon characteristics of the bitumen and the aggregates, but warm-mix asphalt technologies allow producers to reduce the temperature required. The weight of an asphalt pavement depends upon the aggregate type, the bitumen, and the air void content. An average example in the United States is about 112 pounds per square yard, per inch of pavement thickness. When maintenance is performed on asphalt pavements, such as milling to remove a worn or damaged surface, the removed material can be returned to a facility for processing into new pavement mixtures. The bitumen in the removed material can be reactivated and put back to use in new pavement mixes. With some 95% of paved roads being constructed of or surfaced with asphalt, a substantial amount of asphalt pavement material is reclaimed each year. According to industry surveys conducted annually by the Federal Highway Administration and the National Asphalt Pavement Association, more than 99% of the bitumen removed each year from road surfaces during widening and resurfacing projects is reused as part of new pavements, roadbeds, shoulders and embankments or stockpiled for future use. Asphalt concrete paving is widely used in airports around the world. Due to the sturdiness and ability to be repaired quickly, it is widely used for runways. Mastic asphalt Mastic asphalt is a type of asphalt that differs from dense graded asphalt (asphalt concrete) in that it has a higher bitumen (binder) content, usually around 7–10% of the whole aggregate mix, as opposed to rolled asphalt concrete, which has only around 5% asphalt. This thermoplastic substance is widely used in the building industry for waterproofing flat roofs and tanking underground. Mastic asphalt is heated to a temperature of and is spread in layers to form an impervious barrier about thick. Bitumen emulsion Bitumen emulsions are colloidal mixtures of bitumen and water. Due to the different surface tensions of the two liquids, stable emulsions cannot be created simply by mixing. Therefore, various emulsifiers and stabilizers are added. Emulsifiers are amphiphilic molecules that differ in the charge of their polar head group. They reduce the surface tension of the emulsion and thus prevent bitumen particles from fusing. The emulsifier charge defines the type of emulsion: anionic (negatively charged) and cationic (positively charged). The concentration of an emulsifier is a critical parameter affecting the size of the bitumen particles—higher concentrations lead to smaller bitumen particles. Thus, emulsifiers have a great impact on the stability, viscosity, breaking strength, and adhesion of the bitumen emulsion. The size of bitumen particles is usually between 0.1 and 50μm with a main fraction between 1μm and 10μm. Laser diffraction techniques can be used to determine the particle size distribution quickly and easily. Cationic emulsifiers primarily include long-chain amines such as imidazolines, amido-amines, and diamines, which acquire a positive charge when an acid is added. Anionic emulsifiers are often fatty acids extracted from lignin, tall oil, or tree resin saponified with bases such as NaOH, which creates a negative charge. During the storage of bitumen emulsions, bitumen particles sediment, agglomerate (flocculation), or fuse (coagulation), which leads to a certain instability of the bitumen emulsion. How fast this process occurs depends on the formulation of the bitumen emulsion but also storage conditions such as temperature and humidity. 
When emulsified bitumen comes into contact with aggregates, the emulsifiers lose their effectiveness, the emulsion breaks down, and an adhering bitumen film forms; this process is referred to as 'breaking'. The bitumen particles coagulate and separate from the water, which evaporates, almost instantly creating a continuous bitumen film. Not every asphalt emulsion breaks at the same rate on contact with aggregates. This enables a classification into Rapid-setting (R), Slow-setting (SS), and Medium-setting (MS) emulsions, but also an individual, application-specific optimization of the formulation and a wide field of application (1). For example, slow-breaking emulsions ensure a longer processing time, which is particularly advantageous for fine aggregates (1). Adhesion problems are reported for anionic emulsions in contact with quartz-rich aggregates; in such cases cationic emulsions are substituted, as they achieve better adhesion. The extensive range of bitumen emulsions is covered insufficiently by standardization. DIN EN 13808 for cationic asphalt emulsions has existed since July 2005. It describes a classification of bitumen emulsions based on letters and numbers, considering charge, viscosity, and the type of bitumen. The production process of bitumen emulsions is complex. Two methods are commonly used: the "colloid mill" method and the "high internal phase ratio" (HIPR) method. In the colloid mill method, a rotor moves at high speed within a stator while bitumen and a water-emulsifier mixture are added. The resulting shear forces generate bitumen particles between 5 μm and 10 μm coated with emulsifiers. The HIPR method is used for creating smaller bitumen particles, monomodal, narrow particle size distributions, and very high bitumen concentrations. Here, a highly concentrated bitumen emulsion is produced first by moderate stirring and diluted afterward. In contrast to the colloid mill method, the aqueous phase is introduced into hot bitumen, enabling very high bitumen concentrations (1). Bitumen emulsions are used in a wide variety of applications. They are used in road construction and building protection, primarily in cold recycling mixtures, adhesive coatings, and surface treatments (1). Due to their lower viscosity in comparison to hot bitumen, processing requires less energy and is associated with significantly less risk of fire and burns. Chipseal involves spraying the road surface with bitumen emulsion followed by a layer of crushed rock, gravel or crushed slag. Slurry seal is a mixture of bitumen emulsion and fine crushed aggregate that is spread on the surface of a road. 
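To make the chipseal idea concrete, here is a small illustrative calculation of how much residual binder remains after a sprayed emulsion breaks and its water evaporates. The 65% binder residue and the 1.5 L/m2 spray rate are assumed example values, not figures from the text or from any specification.

```python
# Illustrative chip-seal calculation (assumed values, not a specification):
# once the emulsion "breaks" and the water evaporates, only the bitumen
# fraction of the sprayed emulsion is left bonded to the road surface.

def residual_binder(spray_rate_l_per_m2: float, residue_fraction: float) -> float:
    """Litres of residual bitumen per square metre after the emulsion breaks."""
    return spray_rate_l_per_m2 * residue_fraction

# Example: 1.5 L/m2 of an emulsion assumed to contain 65% bitumen by volume
print(f"{residual_binder(1.5, 0.65):.2f} L/m2 of residual binder")  # about 0.98 L/m2
```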
Cold-mixed asphalt can also be made from bitumen emulsion to create pavements similar to hot-mixed asphalt, several inches in depth, and bitumen emulsions are also blended into recycled hot-mix asphalt to create low-cost pavements. Bitumen emulsion based techniques are known to be useful for all classes of roads; their use may also be possible in the following applications:
1. Asphalts for heavily trafficked roads (based on the use of polymer modified emulsions)
2. Warm emulsion based mixtures, to improve both their maturation time and mechanical properties
3. Half-warm technology, in which aggregates are heated up to 100 degrees, producing mixtures with similar properties to those of hot asphalts
4. High performance surface dressing.
Synthetic crude oil
Synthetic crude oil, also known as syncrude, is the output from a bitumen upgrader facility used in connection with oil sand production in Canada. Bituminous sands are mined using enormous (100-ton capacity) power shovels and loaded into even larger (400-ton capacity) dump trucks for movement to an upgrading facility. The process used to extract the bitumen from the sand is a hot water process originally developed by Dr. Karl Clark of the University of Alberta during the 1920s. After extraction from the sand, the bitumen is fed into a bitumen upgrader which converts it into a light crude oil equivalent. This synthetic substance is fluid enough to be transferred through conventional oil pipelines and can be fed into conventional oil refineries without any further treatment. By 2015 Canadian bitumen upgraders were producing over per day of synthetic crude oil, of which 75% was exported to oil refineries in the United States. In Alberta, five bitumen upgraders produce synthetic crude oil and a variety of other products: the Suncor Energy upgrader near Fort McMurray, Alberta produces synthetic crude oil plus diesel fuel; the Syncrude Canada, Canadian Natural Resources, and Nexen upgraders near Fort McMurray produce synthetic crude oil; and the Shell Scotford Upgrader near Edmonton produces synthetic crude oil plus an intermediate feedstock for the nearby Shell Oil Refinery. A sixth upgrader, under construction in 2015 near Redwater, Alberta, will upgrade half of its crude bitumen directly to diesel fuel, with the remainder of the output being sold as feedstock to nearby oil refineries and petrochemical plants.
Non-upgraded crude bitumen
Canadian bitumen does not differ substantially from oils such as Venezuelan extra-heavy and Mexican heavy oil in chemical composition, and the real difficulty is moving the extremely viscous bitumen through oil pipelines to the refinery. Many modern oil refineries are extremely sophisticated and can process non-upgraded bitumen directly into products such as gasoline, diesel fuel, and refined asphalt without any preprocessing. This is particularly common in areas such as the US Gulf coast, where refineries were designed to process Venezuelan and Mexican oil, and in areas such as the US Midwest where refineries were rebuilt to process heavy oil as domestic light oil production declined. Given the choice, such heavy oil refineries usually prefer to buy bitumen rather than synthetic oil because the cost is lower, and in some cases because they prefer to produce more diesel fuel and less gasoline. By 2015 Canadian production and exports of non-upgraded bitumen exceeded that of synthetic crude oil at over per day, of which about 65% was exported to the United States. 
Because of the difficulty of moving crude bitumen through pipelines, non-upgraded bitumen is usually diluted with natural-gas condensate in a form called dilbit or with synthetic crude oil, called synbit. However, to meet international competition, much non-upgraded bitumen is now sold as a blend of multiple grades of bitumen, conventional crude oil, synthetic crude oil, and condensate in a standardized benchmark product such as Western Canadian Select. This sour, heavy crude oil blend is designed to have uniform refining characteristics to compete with internationally marketed heavy oils such as Mexican Mayan or Arabian Dubai Crude. Radioactive waste encapsulation matrix Bitumen was used starting in the 1960s as a hydrophobic matrix aiming to encapsulate radioactive waste such as medium-activity salts (mainly soluble sodium nitrate and sodium sulfate) produced by the reprocessing of spent nuclear fuels or radioactive sludges from sedimentation ponds. Bituminised radioactive waste containing highly radiotoxic alpha-emitting transuranic elements from nuclear reprocessing plants have been produced at industrial scale in France, Belgium and Japan, but this type of waste conditioning has been abandoned because operational safety issues (risks of fire, as occurred in a bituminisation plant at Tokai Works in Japan) and long-term stability problems related to their geological disposal in deep rock formations. One of the main problems is the swelling of bitumen exposed to radiation and to water. Bitumen swelling is first induced by radiation because of the presence of hydrogen gas bubbles generated by alpha and gamma radiolysis. A second mechanism is the matrix swelling when the encapsulated hygroscopic salts exposed to water or moisture start to rehydrate and to dissolve. The high concentration of salt in the pore solution inside the bituminised matrix is then responsible for osmotic effects inside the bituminised matrix. The water moves in the direction of the concentrated salts, the bitumen acting as a semi-permeable membrane. This also causes the matrix to swell. The swelling pressure due to osmotic effect under constant volume can be as high as 200 bar. If not properly managed, this high pressure can cause fractures in the near field of a disposal gallery of bituminised medium-level waste. When the bituminised matrix has been altered by swelling, encapsulated radionuclides are easily leached by the contact of ground water and released in the geosphere. The high ionic strength of the concentrated saline solution also favours the migration of radionuclides in clay host rocks. The presence of chemically reactive nitrate can also affect the redox conditions prevailing in the host rock by establishing oxidizing conditions, preventing the reduction of redox-sensitive radionuclides. Under their higher valences, radionuclides of elements such as selenium, technetium, uranium, neptunium and plutonium have a higher solubility and are also often present in water as non-retarded anions. This makes the disposal of medium-level bituminised waste very challenging. Different types of bitumen have been used: blown bitumen (partly oxidized with air oxygen at high temperature after distillation, and harder) and direct distillation bitumen (softer). Blown bitumens like Mexphalte, with a high content of saturated hydrocarbons, are more easily biodegraded by microorganisms than direct distillation bitumen, with a low content of saturated hydrocarbons and a high content of aromatic hydrocarbons. 
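As a rough order-of-magnitude check on the swelling pressures mentioned above for bituminised waste, the sketch below applies the van 't Hoff relation for osmotic pressure. This is an idealisation that strictly holds only for dilute solutions, and the salt concentration used here is an assumed example, so the result is indicative only.

```python
# Order-of-magnitude estimate of osmotic pressure via the van 't Hoff relation,
# pi = i * c * R * T.  The concentration is an assumed example; real pore
# solutions in bituminised waste are far from ideal, so treat this as indicative.

R = 8.314      # J/(mol*K), gas constant
T = 298.0      # K, near-ambient temperature
i = 2          # van 't Hoff factor for a fully dissociated 1:1 salt such as NaNO3
c = 4000.0     # mol/m3 (4 mol/L), an assumed concentrated pore solution

pi_pascal = i * c * R * T
print(f"~{pi_pascal / 1e5:.0f} bar")   # roughly 200 bar, the same order as quoted above
```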
Concrete encapsulation of radwaste is presently considered a safer alternative by the nuclear industry and the waste management organisations. Other uses Roofing shingles and roll roofing account for most of the remaining bitumen consumption. Other uses include cattle sprays, fence-post treatments, and waterproofing for fabrics. Bitumen is used to make Japan black, a lacquer known especially for its use on iron and steel, and it is also used in paint and marker inks by some exterior paint supply companies to increase the weather resistance and permanence of the paint or ink, and to make the color darker. Bitumen is also used to seal some alkaline batteries during the manufacturing process. Bitumen is also commonly used as a ground in the etching process of intaglio printmaking. Production About 164,000,000 tons were produced in 2019. It is obtained as the "heavy" (i.e., difficult to distill) fraction. Material with a boiling point greater than around 500°C is considered asphalt. Vacuum distillation separates it from the other components in crude oil (such as naphtha, gasoline and diesel). The resulting material is typically further treated to extract small but valuable amounts of lubricants and to adjust the properties of the material to suit applications. In a de-asphalting unit, the crude bitumen is treated with either propane or butane in a supercritical phase to extract the lighter molecules, which are then separated. Further processing is possible by "blowing" the product: namely reacting it with oxygen. This step makes the product harder and more viscous. Bitumen is typically stored and transported at temperatures around . Sometimes diesel oil or kerosene are mixed in before shipping to retain liquidity; upon delivery, these lighter materials are separated out of the mixture. This mixture is often called "bitumen feedstock", or BFS. Some dump trucks route the hot engine exhaust through pipes in the dump body to keep the material warm. The backs of tippers carrying asphalt, as well as some handling equipment, are also commonly sprayed with a releasing agent before filling to aid release. Diesel oil is no longer used as a release agent due to environmental concerns. Oil sands Naturally occurring crude bitumen impregnated in sedimentary rock is the prime feed stock for petroleum production from "oil sands", currently under development in Alberta, Canada. Canada has most of the world's supply of natural bitumen, covering 140,000 square kilometres (an area larger than England), giving it the second-largest proven oil reserves in the world. The Athabasca oil sands are the largest bitumen deposit in Canada and the only one accessible to surface mining, although recent technological breakthroughs have resulted in deeper deposits becoming producible by in situ methods. Because of oil price increases after 2003, producing bitumen became highly profitable, but as a result of the decline after 2014 it became uneconomic to build new plants again. By 2014, Canadian crude bitumen production averaged about per day and was projected to rise to per day by 2020. The total amount of crude bitumen in Alberta that could be extracted is estimated to be about , which at a rate of would last about 200 years. Alternatives and bioasphalt Although uncompetitive economically, bitumen can be made from nonpetroleum-based renewable resources such as sugar, molasses and rice, corn and potato starches. 
Bitumen can also be made from waste material by fractional distillation of used motor oil, which is sometimes otherwise disposed of by burning or dumping into landfills. Use of motor oil may cause premature cracking in colder climates, resulting in roads that need to be repaved more frequently. Nonpetroleum-based asphalt binders can be made light-colored. Lighter-colored roads absorb less heat from solar radiation, reducing their contribution to the urban heat island effect. Parking lots that use bitumen alternatives are called green parking lots. Albanian deposits Selenizza is a naturally occurring solid hydrocarbon bitumen found in native deposits in Selenice, in Albania, the only European asphalt mine still in use. The bitumen is found in the form of veins, filling cracks in a more or less horizontal direction. The bitumen content varies from 83% to 92% (soluble in carbon disulphide), with a penetration value near to zero and a softening point (ring and ball) around 120°C. The insoluble matter, consisting mainly of silica ore, ranges from 8% to 17%. Albanian bitumen extraction has a long history and was practiced in an organized way by the Romans. After centuries of silence, the first mentions of Albanian bitumen appeared only in 1868, when the Frenchman Coquand published the first geological description of the deposits of Albanian bitumen. In 1875, the exploitation rights were granted to the Ottoman government and in 1912, they were transferred to the Italian company Simsa. Since 1945, the mine was exploited by the Albanian government and from 2001 to date, the management passed to a French company, which organized the mining process for the manufacture of the natural bitumen on an industrial scale. Today the mine is predominantly exploited in an open pit quarry but several of the many underground mines (deep and extending over several km) still remain viable. Selenizza is produced primarily in granular form, after melting the bitumen pieces selected in the mine. Selenizza is mainly used as an additive in the road construction sector. It is mixed with traditional bitumen to improve both the viscoelastic properties and the resistance to ageing. It may be blended with the hot bitumen in tanks, but its granular form allows it to be fed in the mixer or in the recycling ring of normal asphalt plants. Other typical applications include the production of mastic asphalts for sidewalks, bridges, car-parks and urban roads as well as drilling fluid additives for the oil and gas industry. Selenizza is available in powder or in granular material of various particle sizes and is packaged in sacks or in thermal fusible polyethylene bags. A life-cycle assessment study of the natural selenizza compared with petroleum bitumen has shown that the environmental impact of the selenizza is about half the impact of the road asphalt produced in oil refineries in terms of carbon dioxide emission. Recycling Bitumen is a commonly recycled material in the construction industry. The two most common recycled materials that contain bitumen are reclaimed asphalt pavement (RAP) and reclaimed asphalt shingles (RAS). RAP is recycled at a greater rate than any other material in the United States, and typically contains approximately 5–6% bitumen binder. Asphalt shingles typically contain 20–40% bitumen binder. Bitumen naturally becomes stiffer over time due to oxidation, evaporation, exudation, and physical hardening. 
For this reason, recycled asphalt is typically combined with virgin asphalt, softening agents, and/or rejuvenating additives to restore its physical and chemical properties. Economics Although bitumen typically makes up only 4 to 5 percent (by weight) of the pavement mixture, as the pavement's binder, it is also the most expensive part of the cost of the road-paving material. During bitumen's early use in modern paving, oil refiners gave it away. However, bitumen is a highly traded commodity today. Its prices increased substantially in the early 21st Century. A U.S. government report states: "In 2002, asphalt sold for approximately $160 per ton. By the end of 2006, the cost had doubled to approximately $320 per ton, and then it almost doubled again in 2012 to approximately $610 per ton." The report indicates that an "average" 1-mile (1.6-kilometer)-long, four-lane highway would include "300 tons of asphalt," which, "in 2002 would have cost around $48,000. By 2006 this would have increased to $96,000 and by 2012 to $183,000... an increase of about $135,000 for every mile of highway in just 10 years." The Middle East is a significant exporter of bitumen, particularly to India and China. According to the Argus Bitumen Report (2024/07/12), India is the largest importer, driven by extensive infrastructure projects. The report projects a CAGR of 4.5% for India's bitumen imports over the next five years, while China's imports are expected to grow at a CAGR of 3.8%. The current export price to India is approximately $350 per metric ton, and for China, it is around $360 per metric ton. The Middle East's strategic advantage in crude oil production underpins its capacity to meet these demands. Health and safety People can be exposed to bitumen in the workplace by breathing in fumes or skin absorption. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit of 5mg/m3 over a 15-minute period. Bitumen is a largely inert material that must be heated or diluted to a point where it becomes workable for the production of materials for paving, roofing, and other applications. In examining the potential health hazards associated with bitumen, the International Agency for Research on Cancer (IARC) determined that it is the application parameters, predominantly temperature, that affect occupational exposure and the potential bioavailable carcinogenic hazard/risk of the bitumen emissions. In particular, temperatures greater than 199°C (390°F), were shown to produce a greater exposure risk than when bitumen was heated to lower temperatures, such as those typically used in asphalt pavement mix production and placement. IARC has classified paving asphalt fumes as a Class 2B possible carcinogen, indicating inadequate evidence of carcinogenicity in humans. In 2020, scientists reported that bitumen currently is a significant and largely overlooked source of air pollution in urban areas, especially during hot and sunny periods. A bitumen-like substance found in the Himalayas and known as shilajit is sometimes used as an Ayurveda medicine, but is not in fact a tar, resin or bitumen.
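To make the per-mile arithmetic quoted in the Economics subsection above explicit, the short sketch below multiplies the report's figure of roughly 300 tons of asphalt per "average" four-lane highway mile by the cited prices per ton. The figures come from the text; the script itself is only illustrative.

```python
# Reproducing the per-mile cost arithmetic quoted above: about 300 tons of
# asphalt per "average" four-lane highway mile, at the cited prices per ton.

TONS_PER_MILE = 300
price_per_ton = {"2002": 160, "2006": 320, "2012": 610}   # US$ per ton, from the report

for year, price in price_per_ton.items():
    print(f"{year}: ${TONS_PER_MILE * price:,} per mile")
# 2002: $48,000 per mile
# 2006: $96,000 per mile
# 2012: $183,000 per mile
```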
Technology
Building materials
null
662
https://en.wikipedia.org/wiki/Apollo%2011
Apollo 11
Apollo 11 was a spaceflight conducted from July 16 to July 24, 1969 by the United States and launched by NASA. It marked the first time that humans landed on the Moon. Commander Neil Armstrong and Lunar Module Pilot Buzz Aldrin landed the Apollo Lunar Module Eagle on July 20, 1969, at 20:17 UTC, and Armstrong became the first person to step onto the Moon's surface six hours and 39 minutes later, on July 21 at 02:56 UTC. Aldrin joined him 19 minutes later, and they spent about two and a quarter hours together exploring the site they had named Tranquility Base upon landing. Armstrong and Aldrin collected of lunar material to bring back to Earth as pilot Michael Collins flew the Command Module Columbia in lunar orbit, and were on the Moon's surface for 21 hours, 36 minutes, before lifting off to rejoin Columbia. Apollo 11 was launched by a Saturn V rocket from Kennedy Space Center on Merritt Island, Florida, on July 16 at 13:32 UTC, and it was the fifth crewed mission of NASA's Apollo program. The Apollo spacecraft had three parts: a command module (CM) with a cabin for the three astronauts, the only part that returned to Earth; a service module (SM), which supported the command module with propulsion, electrical power, oxygen, and water; and a lunar module (LM) that had two stages—a descent stage for landing on the Moon and an ascent stage to place the astronauts back into lunar orbit. After being sent to the Moon by the Saturn V's third stage, the astronauts separated the spacecraft from it and traveled for three days until they entered lunar orbit. Armstrong and Aldrin then moved into Eagle and landed in the Sea of Tranquility on July 20. The astronauts used Eagles ascent stage to lift off from the lunar surface and rejoin Collins in the command module. They jettisoned Eagle before they performed the maneuvers that propelled Columbia out of the last of its 30 lunar orbits onto a trajectory back to Earth. They returned to Earth and splashed down in the Pacific Ocean on July 24 after more than eight days in space. Armstrong's first step onto the lunar surface was broadcast on live TV to a worldwide audience. He described the event as "one small step for [a] man, one giant leap for mankind." Apollo 11 effectively proved U.S. victory in the Space Race to demonstrate spaceflight superiority, by fulfilling a national goal proposed in 1961 by President John F. Kennedy, "before this decade is out, of landing a man on the Moon and returning him safely to the Earth." Background In the late 1950s and early 1960s, the United States was engaged in the Cold War, a geopolitical rivalry with the Soviet Union. On October 4, 1957, the Soviet Union launched Sputnik 1, the first artificial satellite. This surprise success fired fears and imaginations around the world. It demonstrated that the Soviet Union had the capability to deliver nuclear weapons over intercontinental distances, and challenged American claims of military, economic, and technological superiority. This precipitated the Sputnik crisis, and triggered the Space Race to prove which superpower would achieve superior spaceflight capability. President Dwight D. Eisenhower responded to the Sputnik challenge by creating the National Aeronautics and Space Administration (NASA), and initiating Project Mercury, which aimed to launch a man into Earth orbit. But on April 12, 1961, Soviet cosmonaut Yuri Gagarin became the first person in space, and the first to orbit the Earth. 
Nearly a month later, on May 5, 1961, Alan Shepard became the first American in space, completing a 15-minute suborbital journey. After being recovered from the Atlantic Ocean, he received a congratulatory telephone call from Eisenhower's successor, John F. Kennedy. Since the Soviet Union had higher lift capacity launch vehicles, Kennedy chose, from among options presented by NASA, a challenge beyond the capacity of the existing generation of rocketry, so that the US and Soviet Union would be starting from a position of equality. A crewed mission to the Moon would serve this purpose. On May 25, 1961, Kennedy addressed the United States Congress on "Urgent National Needs" and declared: On September 12, 1962, Kennedy delivered another speech before a crowd of about 40,000 people in the Rice University football stadium in Houston, Texas. A widely quoted refrain from the middle portion of the speech reads as follows: In spite of that, the proposed program faced the opposition of many Americans and was dubbed a "moondoggle" by Norbert Wiener, a mathematician at the Massachusetts Institute of Technology. The effort to land a man on the Moon already had a name: Project Apollo. When Kennedy met with Nikita Khrushchev, the Premier of the Soviet Union in June 1961, he proposed making the Moon landing a joint project, but Khrushchev did not take up the offer. Kennedy again proposed a joint expedition to the Moon in a speech to the United Nations General Assembly on September 20, 1963. The idea of a joint Moon mission was abandoned after Kennedy's death. An early and crucial decision was choosing lunar orbit rendezvous over both direct ascent and Earth orbit rendezvous. A space rendezvous is an orbital maneuver in which two spacecraft navigate through space and meet up. In July 1962 NASA head James Webb announced that lunar orbit rendezvous would be used and that the Apollo spacecraft would have three major parts: a command module (CM) with a cabin for the three astronauts, and the only part that returned to Earth; a service module (SM), which supported the command module with propulsion, electrical power, oxygen, and water; and a lunar module (LM) that had two stages—a descent stage for landing on the Moon, and an ascent stage to place the astronauts back into lunar orbit. This design meant the spacecraft could be launched by a single Saturn V rocket that was then under development. Technologies and techniques required for Apollo were developed by Project Gemini. The Apollo project was enabled by NASA's adoption of new advances in semiconductor device, including metal–oxide–semiconductor field-effect transistors (MOSFETs) in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit (IC) chips in the Apollo Guidance Computer (AGC). Project Apollo was abruptly halted by the Apollo 1 fire on January 27, 1967, in which astronauts Gus Grissom, Ed White, and Roger B. Chaffee died, and the subsequent investigation. In October 1968, Apollo 7 evaluated the command module in Earth orbit, and in December Apollo 8 tested it in lunar orbit. In March 1969, Apollo 9 put the lunar module through its paces in Earth orbit, and in May Apollo 10 conducted a "dress rehearsal" in lunar orbit. By July 1969, all was in readiness for Apollo 11 to take the final step onto the Moon. 
The Soviet Union appeared to be winning the Space Race by beating the US to firsts, but its early lead was overtaken by the US Gemini program and Soviet failure to develop the N1 launcher, which would have been comparable to the Saturn V. The Soviets tried to beat the US to return lunar material to the Earth by means of uncrewed probes. On July 13, three days before Apollo 11's launch, the Soviet Union launched Luna 15, which reached lunar orbit before Apollo 11. During descent, a malfunction caused Luna 15 to crash in Mare Crisium about two hours before Armstrong and Aldrin took off from the Moon's surface to begin their voyage home. The Nuffield Radio Astronomy Laboratories radio telescope in England recorded transmissions from Luna 15 during its descent, and these were released in July 2009 for the 40th anniversary of Apollo 11. Personnel Prime crew The initial crew assignment of Commander Neil Armstrong, Command Module Pilot (CMP) Jim Lovell, and Lunar Module Pilot (LMP) Buzz Aldrin on the backup crew for Apollo 9 was officially announced on November 20, 1967. Lovell and Aldrin had previously flown together as the crew of Gemini 12. Due to design and manufacturing delays in the LM, Apollo 8 and Apollo 9 swapped prime and backup crews, and Armstrong's crew became the backup for Apollo 8. Based on the normal crew rotation scheme, Armstrong was then expected to command Apollo 11. There would be one change. Michael Collins, the CMP on the Apollo 8 crew, began experiencing trouble with his legs. Doctors diagnosed the problem as a bony growth between his fifth and sixth vertebrae, requiring surgery. Lovell took his place on the Apollo 8 crew, and when Collins recovered he joined Armstrong's crew as CMP. In the meantime, Fred Haise filled in as backup LMP, and Aldrin as backup CMP for Apollo 8. Apollo 11 was the second American mission where all the crew members had prior spaceflight experience, the first being Apollo 10. The next was STS-26 in 1988. Deke Slayton gave Armstrong the option to replace Aldrin with Lovell, since some thought Aldrin was difficult to work with. Armstrong had no issues working with Aldrin but thought it over for a day before declining. He thought Lovell deserved to command his own mission (eventually Apollo 13). The Apollo 11 prime crew had none of the close cheerful camaraderie characterized by that of Apollo 12. Instead, they forged an amiable working relationship. Armstrong in particular was notoriously aloof, but Collins, who considered himself a loner, confessed to rebuffing Aldrin's attempts to create a more personal relationship. Aldrin and Collins described the crew as "amiable strangers". Armstrong did not agree with the assessment, and said "... all the crews I was on worked very well together." Backup crew The backup crew consisted of Lovell as Commander, William Anders as CMP, and Haise as LMP. Anders had flown with Lovell on Apollo 8. In early 1969, Anders accepted a job with the National Aeronautics and Space Council effective August 1969, and announced he would retire as an astronaut at that time. Ken Mattingly was moved from the support crew into parallel training with Anders as backup CMP in case Apollo 11 was delayed past its intended July launch date, at which point Anders would be unavailable. 
By the normal crew rotation in place during Apollo, Lovell, Mattingly, and Haise were scheduled to fly on Apollo 14, but the three of them were instead bumped to Apollo 13. George Mueller had rejected the crew originally proposed for Apollo 13, the first time an Apollo crew was rejected, so that its commander, Alan Shepard, could be given more training time; Lovell's crew was moved up in its place, and Shepard's crew, which included Edgar Mitchell, flew Apollo 14 instead. Mattingly would later be replaced by Jack Swigert as CMP on Apollo 13.
Support crew
During Projects Mercury and Gemini, each mission had a prime and a backup crew. For Apollo, a third crew of astronauts was added, known as the support crew. The support crew maintained the flight plan, checklists and mission ground rules, and ensured the prime and backup crews were apprised of changes. They developed procedures, especially those for emergency situations, so these were ready for when the prime and backup crews came to train in the simulators, allowing them to concentrate on practicing and mastering them. For Apollo 11, the support crew consisted of Ken Mattingly, Ronald Evans and Bill Pogue.
Capsule communicators
The capsule communicator (CAPCOM) was an astronaut at the Mission Control Center in Houston, Texas, who was the only person who communicated directly with the flight crew. For Apollo 11, the CAPCOMs were: Charles Duke, Ronald Evans, Bruce McCandless II, James Lovell, William Anders, Ken Mattingly, Fred Haise, Don L. Lind, Owen K. Garriott and Harrison Schmitt.
Flight directors
The flight directors for this mission were:
Other key personnel
Other key personnel who played important roles in the Apollo 11 mission include the following.
Preparations
Insignia
The Apollo 11 mission emblem was designed by Collins, who wanted a symbol for "peaceful lunar landing by the United States". At Lovell's suggestion, he chose the bald eagle, the national bird of the United States, as the symbol. Tom Wilson, a simulator instructor, suggested an olive branch in its beak to represent their peaceful mission. Collins added a lunar background with the Earth in the distance. The sunlight in the image was coming from the wrong direction; the shadow should have been in the lower part of the Earth instead of the left. Aldrin, Armstrong and Collins decided the Eagle and the Moon would be in their natural colors, and decided on a blue and gold border. Armstrong was concerned that "eleven" would not be understood by non-English speakers, so they went with "Apollo 11", and they decided not to put their names on the patch, so it would "be representative of everyone who had worked toward a lunar landing". An illustrator at the Manned Spacecraft Center (MSC) did the artwork, which was then sent off to NASA officials for approval. The design was rejected. Bob Gilruth, the director of the MSC, felt the talons of the eagle looked "too warlike". After some discussion, the olive branch was moved to the talons. When the Eisenhower dollar coin was released in 1971, the patch design provided the eagle for its reverse side. The design was also used for the smaller Susan B. Anthony dollar unveiled in 1979.
Call signs
After the crew of Apollo 10 named their spacecraft Charlie Brown and Snoopy, assistant manager for public affairs Julian Scheer wrote to George Low, the Manager of the Apollo Spacecraft Program Office at the MSC, to suggest the Apollo 11 crew be less flippant in naming their craft. 
The name Snowcone was used for the CM and Haystack was used for the LM in both internal and external communications during early mission planning. The LM was named Eagle after the motif which was featured prominently on the mission insignia. At Scheer's suggestion, the CM was named Columbia after Columbiad, the giant cannon that launched a spacecraft (also from Florida) in Jules Verne's 1865 novel From the Earth to the Moon. It also referred to Columbia, a historical name of the United States. In Collins' 1976 book, he said Columbia was in reference to Christopher Columbus. Mementos The astronauts had personal preference kits (PPKs), small bags containing personal items of significance they wanted to take with them on the mission. Five PPKs were carried on Apollo 11: three (one for each astronaut) were stowed on Columbia before launch, and two on Eagle. Neil Armstrong's LM PPK contained a piece of wood from the Wright brothers' 1903 Wright Flyers left propeller and a piece of fabric from its wing, along with a diamond-studded astronaut pin originally given to Slayton by the widows of the Apollo 1 crew. This pin had been intended to be flown on that mission and given to Slayton afterwards, but following the disastrous launch pad fire and subsequent funerals, the widows gave the pin to Slayton. Armstrong took it with him on Apollo 11. Site selection NASA's Apollo Site Selection Board announced five potential landing sites on February 8, 1968. These were the result of two years' worth of studies based on high-resolution photography of the lunar surface by the five uncrewed probes of the Lunar Orbiter program and information about surface conditions provided by the Surveyor program. The best Earth-bound telescopes could not resolve features with the resolution Project Apollo required. The landing site had to be close to the lunar equator to minimize the amount of propellant required, clear of obstacles to minimize maneuvering, and flat to simplify the task of the landing radar. Scientific value was not a consideration. Areas that appeared promising on photographs taken on Earth were often found to be totally unacceptable. The original requirement that the site be free of craters had to be relaxed, as no such site was found. Five sites were considered: Sites 1 and 2 were in the Sea of Tranquility (Mare Tranquillitatis); Site 3 was in the Central Bay (); and Sites 4 and 5 were in the Ocean of Storms (Oceanus Procellarum). The final site selection was based on seven criteria: The site needed to be smooth, with relatively few craters; with approach paths free of large hills, tall cliffs or deep craters that might confuse the landing radar and cause it to issue incorrect readings; reachable with a minimum amount of propellant; allowing for delays in the launch countdown; providing the Apollo spacecraft with a free-return trajectory, one that would allow it to coast around the Moon and safely return to Earth without requiring any engine firings should a problem arise on the way to the Moon; with good visibility during the landing approach, meaning the Sun would be between 7 and 20 degrees behind the LM; and a general slope of less than two degrees in the landing area. The requirement for the Sun angle was particularly restrictive, limiting the launch date to one day per month. A landing just after dawn was chosen to limit the temperature extremes the astronauts would experience. The Apollo Site Selection Board selected Site 2, with Sites 3 and 5 as backups in the event of the launch being delayed. 
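As a toy illustration of how constraints like these can be screened, the sketch below filters hypothetical candidate sites against a few of the criteria listed above (sun elevation between 7 and 20 degrees, general slope under two degrees, low latitude). The candidate values and the 5-degree latitude cut-off are invented for the example and are not the actual survey data the Site Selection Board used.

```python
# Toy screening of landing-site candidates against a few of the criteria
# listed above.  All numeric values below are invented placeholders, not the
# actual Apollo survey data; the 5-degree latitude limit is an assumption
# standing in for "close to the lunar equator".

def acceptable(site: dict) -> bool:
    return (7.0 <= site["sun_elev_deg"] <= 20.0   # Sun 7-20 degrees behind the LM
            and site["slope_deg"] < 2.0           # general slope under two degrees
            and abs(site["lat_deg"]) <= 5.0)      # near-equatorial

candidates = [
    {"name": "Site A", "sun_elev_deg": 10.8, "slope_deg": 1.5, "lat_deg": 0.7},
    {"name": "Site B", "sun_elev_deg": 25.0, "slope_deg": 1.0, "lat_deg": 3.0},
]
print([s["name"] for s in candidates if acceptable(s)])   # ['Site A']
```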
In May 1969, Apollo 10's lunar module flew to within of Site 2, and reported it was acceptable. First-step decision During the first press conference after the Apollo 11 crew was announced, the first question was, "Which one of you gentlemen will be the first man to step onto the lunar surface?" Slayton told the reporter it had not been decided, and Armstrong added that it was "not based on individual desire". One of the first versions of the egress checklist had the lunar module pilot exit the spacecraft before the commander, which matched what had been done on Gemini missions, where the commander had never performed the spacewalk. Reporters wrote in early 1969 that Aldrin would be the first man to walk on the Moon, and Associate Administrator George Mueller told reporters he would be first as well. Aldrin heard that Armstrong would be the first because Armstrong was a civilian, which made Aldrin livid. Aldrin attempted to persuade other lunar module pilots he should be first, but they responded cynically about what they perceived as a lobbying campaign. Attempting to stem interdepartmental conflict, Slayton told Aldrin that Armstrong would be first since he was the commander. The decision was announced in a press conference on April 14, 1969. For decades, Aldrin believed the final decision was largely driven by the lunar module's hatch location. Because the astronauts had their spacesuits on and the spacecraft was so small, maneuvering to exit the spacecraft was difficult. The crew tried a simulation in which Aldrin left the spacecraft first, but he damaged the simulator while attempting to egress. While this was enough for mission planners to make their decision, Aldrin and Armstrong were left in the dark on the decision until late spring. Slayton told Armstrong the plan was to have him leave the spacecraft first, if he agreed. Armstrong said, "Yes, that's the way to do it." The media accused Armstrong of exercising his commander's prerogative to exit the spacecraft first. Chris Kraft revealed in his 2001 autobiography that a meeting occurred between Gilruth, Slayton, Low, and himself to make sure Aldrin would not be the first to walk on the Moon. They argued that the first person to walk on the Moon should be like Charles Lindbergh, a calm and quiet person. They made the decision to change the flight plan so the commander was the first to egress from the spacecraft. Pre-launch The ascent stage of LM-5 Eagle arrived at the Kennedy Space Center on January 8, 1969, followed by the descent stage four days later, and CSM-107 Columbia on January 23. There were several differences between Eagle and Apollo 10's LM-4 Snoopy; Eagle had a VHF radio antenna to facilitate communication with the astronauts during their EVA on the lunar surface; a lighter ascent engine; more thermal protection on the landing gear; and a package of scientific experiments known as the Early Apollo Scientific Experiments Package (EASEP). The only change in the configuration of the command module was the removal of some insulation from the forward hatch. The CSM was mated on January 29, and moved from the Operations and Checkout Building to the Vehicle Assembly Building on April 14. The S-IVB third stage of Saturn V AS-506 had arrived on January 18, followed by the S-II second stage on February 6, S-IC first stage on February 20, and the Saturn V Instrument Unit on February 27. 
At 12:30 on May 20, the assembly departed the Vehicle Assembly Building atop the crawler-transporter, bound for Launch Pad 39A, part of Launch Complex 39, while Apollo 10 was still on its way to the Moon. A countdown test commenced on June 26, and concluded on July 2. The launch complex was floodlit on the night of July 15, when the crawler-transporter carried the mobile service structure back to its parking area. In the early hours of the morning, the fuel tanks of the S-II and S-IVB stages were filled with liquid hydrogen. Fueling was completed by three hours before launch. Launch operations were partly automated, with 43 programs written in the ATOLL programming language. Slayton roused the crew shortly after 04:00, and they showered, shaved, and had the traditional pre-flight breakfast of steak and eggs with Slayton and the backup crew. They then donned their space suits and began breathing pure oxygen. At 06:30, they headed out to Launch Complex 39. Haise entered Columbia about three hours and ten minutes before launch time. Along with a technician, he helped Armstrong into the left-hand couch at 06:54. Five minutes later, Collins joined him, taking up his position on the right-hand couch. Finally, Aldrin entered, taking the center couch. Haise left around two hours and ten minutes before launch. The closeout crew sealed the hatch, and the cabin was purged and pressurized. The closeout crew then left the launch complex about an hour before launch time. The countdown became automated at three minutes and twenty seconds before launch time. Over 450 personnel were at the consoles in the firing room. Mission Launch and flight to lunar orbit An estimated one million spectators watched the launch of Apollo 11 from the highways and beaches in the vicinity of the launch site. Dignitaries included the Chief of Staff of the United States Army, General William Westmoreland, four cabinet members, 19 state governors, 40 mayors, 60 ambassadors and 200 congressmen. Vice President Spiro Agnew viewed the launch with former president Lyndon B. Johnson and his wife Lady Bird Johnson. Around 3,500 media representatives were present. About two-thirds were from the United States; the rest came from 55 other countries. The launch was televised live in 33 countries, with an estimated 25 million viewers in the United States alone. Millions more around the world listened to radio broadcasts. President Richard Nixon viewed the launch from his office in the White House with his NASA liaison officer, Apollo astronaut Frank Borman. Saturn V AS-506 launched Apollo 11 on July 16, 1969, at 13:32:00 UTC (9:32:00 EDT). At 13.2 seconds into the flight, the launch vehicle began to roll into its flight azimuth of 72.058°. Full shutdown of the first-stage engines occurred about 2 minutes and 42 seconds into the mission, followed by separation of the S-IC and ignition of the S-II engines. The second stage engines then cut off and separated at about 9 minutes and 8 seconds, allowing the first ignition of the S-IVB engine a few seconds later. Apollo 11 entered a near-circular Earth orbit twelve minutes into its flight. After one and a half orbits, a second ignition of the S-IVB engine pushed the spacecraft onto its trajectory toward the Moon with the trans-lunar injection (TLI) burn at 16:22:13 UTC. About 30 minutes later, with Collins in the left seat and at the controls, the transposition, docking, and extraction maneuver was performed.
This involved separating Columbia from the spent S-IVB stage, turning around, and docking with Eagle still attached to the stage. After the LM was extracted, the combined spacecraft headed for the Moon, while the rocket stage flew on a trajectory past the Moon. This was done to avoid the third stage colliding with the spacecraft, the Earth, or the Moon. A slingshot effect from passing around the Moon threw it into an orbit around the Sun. On July 19 at 17:21:50 UTC, Apollo 11 passed behind the Moon and fired its service propulsion engine to enter lunar orbit. In the thirty orbits that followed, the crew saw passing views of their landing site in the southern Sea of Tranquility, southwest of the crater Sabine D. The site was selected in part because it had been characterized as relatively flat and smooth by the automated Ranger 8 and Surveyor 5 landers and the Lunar Orbiter mapping spacecraft, and because it was unlikely to present major landing or EVA challenges. It lay southeast of the Surveyor 5 landing site and southwest of Ranger 8's crash site. Lunar descent At 12:52:00 UTC on July 20, Aldrin and Armstrong entered Eagle, and began the final preparations for lunar descent. At 17:44:00 Eagle separated from Columbia. Collins, alone aboard Columbia, inspected Eagle as it pirouetted before him to ensure the craft was not damaged, and that the landing gear was correctly deployed. Armstrong exclaimed: "The Eagle has wings!" As the descent began, Armstrong and Aldrin found themselves passing landmarks on the surface two or three seconds early, and reported that they were "long"; they would land miles west of their target point. Eagle was traveling too fast. The problem could have been mascons—concentrations of high mass in regions of the Moon's crust that create gravitational anomalies—potentially altering Eagle's trajectory. Flight Director Gene Kranz speculated that it could have resulted from extra air pressure in the docking tunnel, or from Eagle's pirouette maneuver. Five minutes into the descent burn, while still high above the surface of the Moon, the LM guidance computer (LGC) distracted the crew with the first of several unexpected 1201 and 1202 program alarms. Inside Mission Control Center, computer engineer Jack Garman told Guidance Officer Steve Bales it was safe to continue the descent, and this was relayed to the crew. The program alarms indicated "executive overflows", meaning the guidance computer could not complete all its tasks in real time and had to postpone some of them. Margaret Hamilton, the Director of Apollo Flight Computer Programming at the MIT Charles Stark Draper Laboratory, later recalled that the software's priority-based design allowed the computer to drop lower-priority work and keep the critical guidance tasks running. During the mission, the cause was diagnosed as the rendezvous radar switch being in the wrong position, causing the computer to process data from both the rendezvous and landing radars at the same time. Software engineer Don Eyles concluded in a 2005 Guidance and Control Conference paper that the problem was due to a hardware design bug previously seen during testing of the first uncrewed LM in Apollo 5. Having the rendezvous radar on (so it was warmed up in case of an emergency landing abort) should have been irrelevant to the computer, but an electrical phasing mismatch between two parts of the rendezvous radar system could cause the stationary antenna to appear to the computer as dithering back and forth between two positions, depending upon how the hardware randomly powered up. The extra spurious cycle stealing, as the rendezvous radar updated an involuntary counter, caused the computer alarms.
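The "executive overflow" behavior described above—a priority-driven scheduler that finishes its most important jobs and postpones the rest when spurious work steals processing time—can be illustrated with a deliberately simplified sketch. The following Python fragment is only an illustration of the general idea; the task names, priorities, costs, and cycle budget are invented for clarity and do not correspond to the actual Apollo Guidance Computer software.

# Illustrative sketch only: a toy priority scheduler showing how an
# "executive overflow" condition can arise when spurious work (such as the
# unwanted rendezvous radar counter updates described above) steals
# processing cycles. This is not the real Apollo Guidance Computer executive;
# the tasks, priorities, and cycle budget are invented.

CYCLE_BUDGET = 100  # abstract compute units available in one scheduling cycle

# (priority, cost, name) -- a higher priority value means more important work
TASKS = [
    (3, 40, "guidance and navigation"),
    (2, 30, "display updates"),
    (1, 20, "telemetry housekeeping"),
]

def run_cycle(spurious_load):
    """Run one cycle; return the names of tasks that had to be postponed."""
    budget = CYCLE_BUDGET - spurious_load   # cycles lost to spurious interrupts
    postponed = []
    for priority, cost, name in sorted(TASKS, reverse=True):
        if cost <= budget:
            budget -= cost                  # task fits: "execute" it
        else:
            postponed.append(name)          # no time left: postpone the task
    if postponed:
        print(f"overflow alarm: postponed {postponed} (spurious load {spurious_load})")
    return postponed

run_cycle(spurious_load=5)    # nominal cycle: everything fits, no alarm
run_cycle(spurious_load=15)   # overloaded cycle: low-priority work is postponed

Broadly, this is why the alarms were not mission-ending: the highest-priority guidance work continued to run while lower-priority jobs were shed.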
Landing When Armstrong again looked outside, he saw that the computer's landing target was in a boulder-strewn area just north and east of a crater (later determined to be West crater), so he took semi-automatic control. Armstrong considered landing short of the boulder field so they could collect geological samples from it, but could not since their horizontal velocity was too high. Throughout the descent, Aldrin called out navigation data to Armstrong, who was busy piloting Eagle. Low over the surface, Armstrong knew their propellant supply was dwindling and was determined to land at the first possible landing site. Armstrong found a clear patch of ground and maneuvered the spacecraft towards it. As he got closer and lower, he discovered his new landing site had a crater in it. He cleared the crater and found another patch of level ground. They were now just above the surface, with only 90 seconds of propellant remaining. Lunar dust kicked up by the LM's engine began to impair his ability to determine the spacecraft's motion. Some large rocks jutted out of the dust cloud, and Armstrong focused on them during his descent so he could determine the spacecraft's speed. A light informed Aldrin that at least one of the probes hanging from Eagle's footpads had touched the surface a few moments before the landing and he said: "Contact light!" Armstrong was supposed to immediately shut the engine down, as the engineers suspected the pressure caused by the engine's own exhaust reflecting off the lunar surface could make it explode, but he forgot. Three seconds later, Eagle landed and Armstrong shut the engine down. Aldrin immediately said "Okay, engine stop. ACA—out of detent." Armstrong acknowledged: "Out of detent. Auto." Aldrin continued: "Mode control—both auto. Descent engine command override off. Engine arm—off. 413 is in." ACA was the Attitude Control Assembly—the LM's control stick. Output went to the LGC to command the reaction control system (RCS) jets to fire. "Out of Detent" meant the stick had moved away from its centered position; it was spring-centered like the turn indicator in a car. Address 413 of the Abort Guidance System (AGS) contained the variable that indicated the LM had landed. Eagle landed at 20:17:40 UTC on Sunday, July 20, with little usable fuel remaining. Information available to the crew and mission controllers during the landing showed the LM had enough fuel for another 25 seconds of powered flight before an abort without touchdown would have become unsafe, but post-mission analysis showed that the real figure was probably closer to 50 seconds. Apollo 11 landed with less fuel than most subsequent missions, and the astronauts encountered a premature low fuel warning. This was later found to be the result of the propellant sloshing more than expected, uncovering a fuel sensor. On subsequent missions, extra anti-slosh baffles were added to the tanks to prevent this. Armstrong acknowledged Aldrin's completion of the post-landing checklist with "Engine arm is off", before responding to the CAPCOM, Charles Duke, with the words, "Houston, Tranquility Base here. The Eagle has landed." Armstrong's unrehearsed change of call sign from "Eagle" to "Tranquility Base" emphasized to listeners that landing was complete and successful. Duke expressed the relief of Mission Control: "Roger, Twan—Tranquility, we copy you on the ground.
You got a bunch of guys about to turn blue. We're breathing again. Thanks a lot." Two and a half hours after landing, before preparations began for the EVA, Aldrin radioed a short message to Earth, and then took communion privately. At this time NASA was still fighting a lawsuit brought by atheist Madalyn Murray O'Hair (who had objected to the Apollo 8 crew reading from the Book of Genesis) demanding that their astronauts refrain from broadcasting religious activities while in space. For this reason, Aldrin chose to refrain from directly mentioning taking communion on the Moon. Aldrin was an elder at the Webster Presbyterian Church, and his communion kit was prepared by the pastor of the church, Dean Woodruff. Webster Presbyterian possesses the chalice used on the Moon and commemorates the event each year on the Sunday closest to July 20. The schedule for the mission called for the astronauts to follow the landing with a five-hour sleep period, but they chose to begin preparations for the EVA early, thinking they would be unable to sleep. Lunar surface operations Preparations for Neil Armstrong and Buzz Aldrin to walk on the Moon began at 23:43 UTC. These took longer than expected; three and a half hours instead of two. During training on Earth, everything required had been neatly laid out in advance, but on the Moon the cabin contained a large number of other items as well, such as checklists, food packets, and tools. Six hours and thirty-nine minutes after landing, Armstrong and Aldrin were ready to go outside, and Eagle was depressurized. Eagle's hatch was opened at 02:39:33. Armstrong initially had some difficulties squeezing through the hatch with his portable life support system (PLSS). Some of the highest heart rates recorded from Apollo astronauts occurred during LM egress and ingress. At 02:51 Armstrong began his descent to the lunar surface. The remote control unit on his chest kept him from seeing his feet. Climbing down the nine-rung ladder, Armstrong pulled a D-ring to deploy the modular equipment stowage assembly (MESA) folded against Eagle's side and activate the TV camera. Apollo 11 used slow-scan television (TV) incompatible with broadcast TV, so it was displayed on a special monitor and a conventional TV camera viewed this monitor (thus, a broadcast of a broadcast), significantly reducing the quality of the picture. The signal was received at Goldstone in the United States, but with better fidelity by Honeysuckle Creek Tracking Station near Canberra in Australia. Minutes later the feed was switched to the more sensitive Parkes radio telescope in Australia. Despite some technical and weather difficulties, black and white images of the first lunar EVA were received and broadcast to at least 600 million people on Earth. Copies of this video in broadcast format were saved and are widely available, but recordings of the original slow scan source transmission from the lunar surface were likely destroyed during routine magnetic tape re-use at NASA. After describing the surface dust as "very fine-grained" and "almost like a powder", at 02:56:15, six and a half hours after landing, Armstrong stepped off Eagle's landing pad and declared: "That's one small step for [a] man, one giant leap for mankind." Armstrong intended to say "That's one small step for a man", but the word "a" is not audible in the transmission, and thus was not initially reported by most observers of the live broadcast.
When later asked about his quote, Armstrong said he believed he said "for a man", and subsequent printed versions of the quote included the "a" in square brackets. One explanation for the absence may be that his accent caused him to slur the words "for a" together; another is the intermittent nature of the audio and video links to Earth, partly because of storms near Parkes Observatory. A more recent digital analysis of the tape claims to reveal the "a" may have been spoken but obscured by static. Other analysis points to the claims of static and slurring as "face-saving fabrication", and that Armstrong himself later admitted to misspeaking the line. About seven minutes after stepping onto the Moon's surface, Armstrong collected a contingency soil sample using a sample bag on a stick. He then folded the bag and tucked it into a pocket on his right thigh. This was to guarantee there would be some lunar soil brought back in case an emergency required the astronauts to abandon the EVA and return to the LM. Twelve minutes after the sample was collected, he removed the TV camera from the MESA and made a panoramic sweep, then mounted it on a tripod. The TV camera cable remained partly coiled and presented a tripping hazard throughout the EVA. Still photography was accomplished with a Hasselblad camera that could be operated hand-held or mounted on Armstrong's Apollo space suit. Aldrin joined Armstrong on the surface. He described the view with the simple phrase: "Magnificent desolation." Armstrong said moving in the lunar gravity, one-sixth of Earth's, was "even perhaps easier than the simulations ... It's absolutely no trouble to walk around." Aldrin tested methods for moving around, including two-footed kangaroo hops. The PLSS backpack created a tendency to tip backward, but neither astronaut had serious problems maintaining balance. Loping became the preferred method of movement. The astronauts reported that they needed to plan their movements six or seven steps ahead. The fine soil was quite slippery. Aldrin remarked that moving from sunlight into Eagle's shadow produced no temperature change inside the suit, but the helmet was warmer in sunlight, so he felt cooler in shadow. The MESA failed to provide a stable work platform and was in shadow, slowing work somewhat. As they worked, the moonwalkers kicked up gray dust, which soiled the outer part of their suits. The astronauts planted the Lunar Flag Assembly containing a flag of the United States on the lunar surface, in clear view of the TV camera. Aldrin remembered, "Of all the jobs I had to do on the Moon the one I wanted to go the smoothest was the flag raising." But the astronauts struggled with the telescoping rod and could only insert the pole a short distance into the hard lunar surface. Aldrin was afraid it might topple in front of TV viewers, but gave "a crisp West Point salute". Before Aldrin could take a photo of Armstrong with the flag, President Richard Nixon spoke to them through a telephone-radio transmission, which Nixon called "the most historic phone call ever made from the White House." Nixon originally had a long speech prepared to read during the phone call, but Frank Borman, who was at the White House as a NASA liaison during Apollo 11, convinced Nixon to keep his words brief. They deployed the EASEP, which included a Passive Seismic Experiment Package used to measure moonquakes and a retroreflector array used for the lunar laser ranging experiment.
Then Armstrong walked from the LM to take photographs at the rim of Little West Crater while Aldrin collected two core samples. He used the geologist's hammer to pound in the tubes—the only time the hammer was used on Apollo 11—but was unable to drive them in very deep. The astronauts then collected rock samples using scoops and tongs on extension handles. Many of the surface activities took longer than expected, so they had to stop documenting sample collection halfway through the allotted 34 minutes. Aldrin shoveled soil into the box of rocks to pack them in tightly. Two types of rocks were found in the geological samples: basalt and breccia. Three new minerals were discovered in the rock samples collected by the astronauts: armalcolite, tranquillityite, and pyroxferroite. Armalcolite was named after Armstrong, Aldrin, and Collins. All have subsequently been found on Earth. While on the surface, Armstrong uncovered a plaque mounted on the LM ladder, bearing two drawings of Earth (of the Western and Eastern Hemispheres), an inscription, and signatures of the astronauts and President Nixon. The inscription read: "Here men from the planet Earth first set foot upon the Moon, July 1969, A.D. We came in peace for all mankind." At the behest of the Nixon administration to add a reference to God, NASA cited the vague date as a reason to include A.D., which stands for Anno Domini ("in the year of our Lord"). Mission Control used a coded phrase to warn Armstrong his metabolic rates were high, and that he should slow down. He was moving rapidly from task to task as time ran out. As metabolic rates remained generally lower than expected for both astronauts throughout the walk, Mission Control granted the astronauts a 15-minute extension. In a 2010 interview, Armstrong explained that NASA limited the first moonwalk's time and distance because there was no empirical proof of how much cooling water the astronauts' PLSS backpacks would consume to handle their body heat generation while working on the Moon. Lunar ascent Aldrin entered Eagle first. With some difficulty the astronauts lifted film and two sample boxes containing lunar surface material to the LM hatch using a flat cable pulley device called the Lunar Equipment Conveyor (LEC). This proved to be an inefficient tool, and later missions preferred to carry equipment and samples up to the LM by hand. Armstrong reminded Aldrin of a bag of memorial items in his sleeve pocket, and Aldrin tossed the bag down. Armstrong then jumped onto the ladder's third rung, and climbed into the LM. After transferring to LM life support, the explorers lightened the ascent stage for the return to lunar orbit by tossing out their PLSS backpacks, lunar overshoes, an empty Hasselblad camera, and other equipment. The hatch was closed again at 05:11:13. They then pressurized the LM and settled down to sleep. Presidential speech writer William Safire had prepared an In Event of Moon Disaster announcement for Nixon to read in the event the Apollo 11 astronauts were stranded on the Moon. The remarks were in a memo from Safire to Nixon's White House Chief of Staff H. R. Haldeman, in which Safire suggested a protocol the administration might follow in reaction to such a disaster. According to the plan, Mission Control would "close down communications" with the LM, and a clergyman would "commend their souls to the deepest of the deep" in a public ritual likened to burial at sea. The last line of the prepared text contained an allusion to Rupert Brooke's World War I poem "The Soldier".
The script for the speech does not make reference to Collins; as he remained onboard Columbia in orbit around the Moon, it was expected that he would be able to return the module to Earth in the event of a mission failure. While moving inside the cabin, Aldrin accidentally damaged the circuit breaker that would arm the main engine for liftoff from the Moon. There was a concern this would prevent firing the engine, stranding them on the Moon. The nonconductive tip of a Duro felt-tip pen was sufficient to activate the switch. After more than 21 hours on the lunar surface, in addition to the scientific instruments, the astronauts left behind: an Apollo 1 mission patch in memory of astronauts Roger Chaffee, Gus Grissom, and Edward White, who died when their command module caught fire during a test in January 1967; two memorial medals of Soviet cosmonauts Vladimir Komarov and Yuri Gagarin, who died in 1967 and 1968 respectively; a memorial bag containing a gold replica of an olive branch as a traditional symbol of peace; and a silicon message disk carrying the goodwill statements by presidents Eisenhower, Kennedy, Johnson, and Nixon along with messages from leaders of 73 countries around the world. The disk also carries a listing of the leadership of the US Congress, a listing of members of the four committees of the House and Senate responsible for the NASA legislation, and the names of NASA's past and then-current top management. After about seven hours of rest, the crew was awakened by Houston to prepare for the return flight. At that time, unknown to them, the Soviet probe Luna 15, some hundred kilometers away, was about to descend and impact the lunar surface. Although the two spacecraft were known to be orbiting the Moon at the same time, and orbital data had been shared in a ground-breaking precautionary exchange of goodwill, Luna 15's mission control unexpectedly hastened its robotic sample-return attempt, initiating descent in the hope of returning lunar material to Earth before Apollo 11. Luna 15 crashed at 15:50 UTC, just two hours before Eagle's scheduled liftoff from the lunar surface. British astronomers monitoring Luna 15 recorded the moment, with one commenting: "I say, this has really been drama of the highest order." The episode brought the Space Race to a culmination. Roughly two hours later, at 17:54:00 UTC, the Apollo 11 crew on the surface safely lifted off in Eagle's ascent stage to rejoin Collins aboard Columbia in lunar orbit. Film taken from the LM ascent stage upon liftoff from the Moon reveals the American flag, planted a short distance from the descent stage, whipping violently in the exhaust of the ascent stage engine. Aldrin looked up in time to witness the flag topple: "The ascent stage of the LM separated ... I was concentrating on the computers, and Neil was studying the attitude indicator, but I looked up long enough to see the flag fall over." Subsequent Apollo missions planted their flags farther from the LM. Columbia in lunar orbit During his day flying solo around the Moon, Collins never felt lonely. Although it has been said "not since Adam has any human known such solitude", Collins felt very much a part of the mission. In his autobiography he wrote: "this venture has been structured for three men, and I consider my third to be as necessary as either of the other two". In the 48 minutes of each orbit when he was out of radio contact with the Earth while Columbia passed round the far side of the Moon, the feeling he reported was not fear or loneliness, but rather "awareness, anticipation, satisfaction, confidence, almost exultation".
One of Collins' first tasks was to identify the lunar module on the ground. To give Collins an idea where to look, Mission Control radioed that they believed the lunar module landed somewhat off target. Each time he passed over the suspected lunar landing site, he tried in vain to find the module. On his first orbits on the back side of the Moon, Collins performed maintenance activities such as dumping excess water produced by the fuel cells and preparing the cabin for Armstrong and Aldrin to return. Just before he reached the dark side on the third orbit, Mission Control informed Collins there was a problem with the temperature of the coolant. If it became too cold, parts of Columbia might freeze. Mission Control advised him to assume manual control and implement Environmental Control System Malfunction Procedure 17. Instead, Collins flicked the switch on the system from automatic to manual and back to automatic again, and carried on with normal housekeeping chores, while keeping an eye on the temperature. When Columbia came back around to the near side of the Moon again, he was able to report that the problem had been resolved. For the next couple of orbits, he described his time on the back side of the Moon as "relaxing". After Aldrin and Armstrong completed their EVA, Collins slept so he could be rested for the rendezvous. While the flight plan called for Eagle to meet up with Columbia, Collins was prepared for a contingency in which he would fly Columbia down to meet Eagle. Return Eagle rendezvoused with Columbia at 21:24 UTC on July 21, and the two docked at 21:35. Eagle's ascent stage was jettisoned into lunar orbit at 23:41. Just before the Apollo 12 flight, it was noted that Eagle was still likely to be orbiting the Moon. Later NASA reports mentioned that Eagle's orbit had decayed, resulting in it impacting in an "uncertain location" on the lunar surface. In 2021, however, new calculations indicated that the lander might still be in orbit. On July 23, the last night before splashdown, the three astronauts made a final television broadcast in which Collins, Aldrin, and Armstrong each offered closing reflections on the mission. On the return to Earth, a bearing at the Guam tracking station failed, potentially preventing communication on the last segment of the Earth return. A regular repair was not possible in the available time but the station director, Charles Force, had his ten-year-old son Greg use his small hands to reach into the housing and pack it with grease. Greg was later thanked by Armstrong. Splashdown and quarantine The aircraft carrier USS Hornet, under the command of Captain Carl J. Seiberlich, was selected as the primary recovery ship (PRS) for Apollo 11 on June 5, replacing its sister ship, the LPH USS Princeton, which had recovered Apollo 10 on May 26. Hornet was then at her home port of Long Beach, California. On reaching Pearl Harbor on July 5, Hornet embarked the Sikorsky SH-3 Sea King helicopters of HS-4, a unit which specialized in recovery of Apollo spacecraft, specialized divers of UDT Detachment Apollo, a 35-man NASA recovery team, and about 120 media representatives. To make room, most of Hornet's air wing was left behind in Long Beach. Special recovery equipment was also loaded, including a boilerplate command module used for training. On July 12, with Apollo 11 still on the launch pad, Hornet departed Pearl Harbor for the recovery area in the central Pacific. A presidential party consisting of Nixon, Borman, Secretary of State William P.
Rogers and National Security Advisor Henry Kissinger flew to Johnston Atoll on Air Force One, then to the command ship USS Arlington in Marine One. After a night on board, they would fly to Hornet in Marine One for a few hours of ceremonies. On arrival aboard Hornet, the party was greeted by the Commander-in-Chief, Pacific Command (CINCPAC), Admiral John S. McCain Jr., and NASA Administrator Thomas O. Paine, who flew to Hornet from Pago Pago in one of Hornet's carrier onboard delivery aircraft. Weather satellites were not yet common, but US Air Force Captain Hank Brandli had access to top-secret spy satellite images. He realized that a storm front was headed for the Apollo recovery area. Poor visibility which could make locating the capsule difficult, and strong upper-level winds which "would have ripped their parachutes to shreds" according to Brandli, posed a serious threat to the safety of the mission. Brandli alerted Navy Captain Willard S. Houston Jr., the commander of the Fleet Weather Center at Pearl Harbor, who had the required security clearance. On their recommendation, Rear Admiral Donald C. Davis, commander of Manned Spaceflight Recovery Forces, Pacific, advised NASA to change the recovery area, each man risking his career. A new location was selected farther to the northeast. This altered the flight plan. A different sequence of computer programs was used, one never before attempted. In a conventional entry, trajectory event P64 was followed by P67. For a skip-out re-entry, P65 and P66 were employed to handle the exit and entry parts of the skip. In this case, because they were extending the re-entry but not actually skipping out, P66 was not invoked and instead, P65 led directly to P67. The crew were also warned they would not be in a full-lift (heads-down) attitude when they entered P67. The two deceleration phases each subjected the astronauts to a peak load of several times the force of gravity. Before dawn on July 24, Hornet launched four Sea King helicopters and three Grumman E-1 Tracers. Two of the E-1s were designated as "air boss" while the third acted as a communications relay aircraft. Two of the Sea Kings carried divers and recovery equipment. The third carried photographic equipment, and the fourth carried the decontamination swimmer and the flight surgeon. At 16:44 UTC (05:44 local time) Columbia's drogue parachutes were deployed. This was observed by the helicopters. Seven minutes later Columbia struck the water forcefully, east of Wake Island and south of Johnston Atoll, within recovery range of Hornet. Seas and easterly winds under broken clouds were reported at the recovery site. Reconnaissance aircraft flying to the original splashdown location reported the conditions Brandli and Houston had predicted. During splashdown, Columbia landed upside down but was righted within ten minutes by flotation bags activated by the astronauts. A diver from the Navy helicopter hovering above attached a sea anchor to prevent it from drifting. More divers attached flotation collars to stabilize the module and positioned rafts for astronaut extraction. The divers then passed biological isolation garments (BIGs) to the astronauts, and assisted them into the life raft. The possibility of bringing back pathogens from the lunar surface was considered remote, but NASA took precautions at the recovery site. The astronauts were rubbed down with a sodium hypochlorite solution and Columbia was wiped with povidone-iodine to remove any lunar dust that might be present.
The astronauts were winched on board the recovery helicopter. BIGs were worn until they reached isolation facilities on board Hornet. The raft containing decontamination materials was intentionally sunk. After touchdown on Hornet at 17:53 UTC, the helicopter was lowered by the elevator into the hangar bay, where the astronauts walked the short distance to the Mobile Quarantine Facility (MQF), where they would begin the Earth-based portion of their 21 days of quarantine. This practice would continue for two more Apollo missions, Apollo 12 and Apollo 14, before the Moon was proven to be barren of life and the quarantine process was dropped. Nixon welcomed the astronauts back to Earth. He told them: "[A]s a result of what you've done, the world has never been closer together before." After Nixon departed, Hornet was brought alongside Columbia, which was lifted aboard by the ship's crane, placed on a dolly and moved next to the MQF. It was then attached to the MQF with a flexible tunnel, allowing the lunar samples, film, data tapes and other items to be removed. Hornet returned to Pearl Harbor, where the MQF was loaded onto a Lockheed C-141 Starlifter and airlifted to the Manned Spacecraft Center. The astronauts arrived at the Lunar Receiving Laboratory at 10:00 UTC on July 28. Columbia was taken to Ford Island for deactivation, and its pyrotechnics made safe. It was then taken to Hickam Air Force Base, from which it was flown to Houston in a Douglas C-133 Cargomaster, reaching the Lunar Receiving Laboratory on July 30. In accordance with the Extra-Terrestrial Exposure Law, a set of regulations promulgated by NASA on July 16 to codify its quarantine protocol, the astronauts continued in quarantine. After three weeks in confinement (first in the Apollo spacecraft, then in their trailer on Hornet, and finally in the Lunar Receiving Laboratory), the astronauts were given a clean bill of health. On August 10, 1969, the Interagency Committee on Back Contamination met in Atlanta and lifted the quarantine on the astronauts, on those who had joined them in quarantine (NASA physician William Carpentier and MQF project engineer John Hirasaki), and on Columbia itself. Loose equipment from the spacecraft remained in isolation until the lunar samples were released for study. Celebrations On August 13, the three astronauts rode in ticker-tape parades in their honor in New York and Chicago, with an estimated six million attendees. On the same evening in Los Angeles there was an official state dinner to celebrate the flight, attended by members of Congress, 44 governors, Chief Justice of the United States Warren E. Burger and his predecessor, Earl Warren, and ambassadors from 83 nations at the Century Plaza Hotel. Nixon and Agnew honored each astronaut with a presentation of the Presidential Medal of Freedom. The three astronauts spoke before a joint session of Congress on September 16, 1969. They presented two US flags, one to the House of Representatives and the other to the Senate, that they had carried with them to the surface of the Moon. The flag of American Samoa carried on Apollo 11 is on display at the Jean P. Haydon Museum in Pago Pago, the capital of American Samoa. This celebration began a 38-day world tour that brought the astronauts to 22 countries and included visits with many world leaders. The crew toured from September 29 to November 5. The world tour started in Mexico City and ended in Tokyo.
Stops on the tour in order were: Mexico City, Bogota, Buenos Aires, Rio de Janeiro, Las Palmas in the Canary Islands, Madrid, Paris, Amsterdam, Brussels, Oslo, Cologne, Berlin, London, Rome, Belgrade, Ankara, Kinshasa, Tehran, Mumbai, Dhaka, Bangkok, Darwin, Sydney, Guam, Seoul, Tokyo and Honolulu. Many nations honored the first human Moon landing with special features in magazines or by issuing Apollo 11 commemorative postage stamps or coins. Legacy Cultural significance Humans walking on the Moon and returning safely to Earth accomplished Kennedy's goal set eight years earlier. In Mission Control during the Apollo 11 landing, Kennedy's speech flashed on the screen, followed by the words "TASK ACCOMPLISHED, July 1969". The success of Apollo 11 demonstrated the United States' technological superiority; and with the success of Apollo 11, America had won the Space Race. New phrases entered the English language. "If they can send a man to the Moon, why can't they ...?" became a common saying following Apollo 11. Armstrong's words on the lunar surface also spun off various parodies. While most people celebrated the accomplishment, disenfranchised Americans saw it as a symbol of the divide in America, evidenced by protesters led by Ralph Abernathy outside of Kennedy Space Center the day before Apollo 11 launched. NASA Administrator Thomas Paine met with Abernathy on that occasion, both men hoping that the space program could also spur progress in other areas, such as alleviating poverty in the US. Paine was then asked, and agreed, to host protesters as spectators at the launch, and Abernathy, awestruck by the spectacle, prayed for the astronauts. Racial and financial inequalities frustrated citizens who wondered why money spent on the Apollo program was not spent taking care of humans on Earth. A poem by Gil Scott-Heron called "Whitey on the Moon" (1970) illustrated the racial inequality in the United States that was highlighted by the Space Race. The poem opens by contrasting the triumph of the Moon landing with the poverty experienced by Black Americans at home. Twenty percent of the world's population watched humans walk on the Moon for the first time. While Apollo 11 sparked the interest of the world, the follow-on Apollo missions did not hold the interest of the nation. One possible explanation was the shift in complexity. Landing someone on the Moon was an easy goal to understand; lunar geology was too abstract for the average person. Another is that Kennedy's goal of landing humans on the Moon had already been accomplished. A well-defined objective helped Project Apollo accomplish its goal, but after it was completed it was hard to justify continuing the lunar missions. While most Americans were proud of their nation's achievements in space exploration, only once during the late 1960s did the Gallup Poll indicate that a majority of Americans favored "doing more" in space as opposed to "doing less". By 1973, 59 percent of those polled favored cutting spending on space exploration. The Space Race had been won, and Cold War tensions were easing as the US and Soviet Union entered the era of détente. This was also a time when inflation was rising, which put pressure on the government to reduce spending. What saved the space program was that it was one of the few government programs that had achieved something great. Drastic cuts, warned Caspar Weinberger, the deputy director of the Office of Management and Budget, might send a signal that "our best years are behind us". After the Apollo 11 mission, officials from the Soviet Union said landing humans on the Moon was dangerous and unnecessary.
At the time the Soviet Union was attempting to retrieve lunar samples robotically. The Soviets publicly denied there was a race to the Moon, and indicated they were not making an attempt. Mstislav Keldysh said in July 1969, "We are concentrating wholly on the creation of large satellite systems." It was revealed in 1989 that the Soviets had tried to send people to the Moon, but were unable due to technological difficulties. The public's reaction in the Soviet Union was mixed. The Soviet government limited the release of information about the lunar landing, which affected the reaction. A portion of the populace did not give it any attention, and another portion was angered by it. The Apollo 11 landing is referenced in the songs "Armstrong, Aldrin and Collins" by the Byrds on the 1969 album Ballad of Easy Rider, "Coon on the Moon" by Howlin' Wolf on the 1973 album The Back Door Wolf, and "One Small Step" by Ayreon on the 2000 album Universal Migrator Part 1: The Dream Sequencer. Spacecraft The command module Columbia went on a tour of the United States, visiting 49 state capitals, the District of Columbia, and Anchorage, Alaska. In 1971, it was transferred to the Smithsonian Institution, and was displayed at the National Air and Space Museum (NASM) in Washington, DC. It was in the central Milestones of Flight exhibition hall in front of the Jefferson Drive entrance, sharing the main hall with other pioneering flight vehicles such as the Wright Flyer, Spirit of St. Louis, Bell X-1, North American X-15 and Friendship 7. Columbia was moved in 2017 to the NASM Mary Baker Engen Restoration Hangar at the Steven F. Udvar-Hazy Center in Chantilly, Virginia, to be readied for a four-city tour titled Destination Moon: The Apollo 11 Mission. This included Space Center Houston from October 14, 2017, to March 18, 2018, the Saint Louis Science Center from April 14 to September 3, 2018, the Senator John Heinz History Center in Pittsburgh from September 29, 2018, to February 18, 2019, and its last location at Museum of Flight in Seattle from March 16 to September 2, 2019. Continued renovations at the Smithsonian allowed time for an additional stop for the capsule, and it was moved to the Cincinnati Museum Center. The ribbon cutting ceremony was on September 29, 2019. For 40 years Armstrong's and Aldrin's space suits were displayed in the museum's Apollo to the Moon exhibit, until it permanently closed on December 3, 2018, to be replaced by a new gallery which was scheduled to open in 2022. A special display of Armstrong's suit was unveiled for the 50th anniversary of Apollo 11 in July 2019. The quarantine trailer, the flotation collar and the flotation bags are in the Smithsonian's Steven F. Udvar-Hazy Center annex near Washington Dulles International Airport in Chantilly, Virginia, where they are on display along with a test lunar module. The descent stage of the LM Eagle remains on the Moon. In 2009, the Lunar Reconnaissance Orbiter (LRO) imaged the various Apollo landing sites on the surface of the Moon, for the first time with sufficient resolution to see the descent stages of the lunar modules, scientific instruments, and foot trails made by the astronauts. The remains of the ascent stage are assumed to lie at an unknown location on the lunar surface. The ascent stage, Eagle, was not tracked after it was jettisoned. The lunar gravity field is sufficiently non-uniform to make low Moon orbits unstable after a short time, leading the orbiting object to impact the surface. 
However, a paper published in 2021, using a trajectory program developed by NASA and high-resolution lunar gravity data, indicated that Eagle might still have been in orbit as late as 2020. Starting from the orbital elements published by NASA, a Monte Carlo method was used to generate parameter sets bracketing the uncertainties in those elements. All of the resulting orbit simulations predicted that Eagle would never impact the lunar surface. In March 2012 a team of specialists financed by Amazon founder Jeff Bezos located the F-1 engines from the S-IC stage that launched Apollo 11 into space. They were found on the Atlantic seabed using advanced sonar scanning. His team brought parts of two of the five engines to the surface. In July 2013, a conservator discovered a serial number under the rust on one of the engines raised from the Atlantic, which NASA confirmed was from Apollo 11. The S-IVB third stage which performed Apollo 11's trans-lunar injection remains in a solar orbit near that of Earth. Moon rocks The main repository for the Apollo Moon rocks is the Lunar Sample Laboratory Facility at the Lyndon B. Johnson Space Center in Houston, Texas. For safekeeping, there is also a smaller collection stored at White Sands Test Facility near Las Cruces, New Mexico. Most of the rocks are stored in nitrogen to keep them free of moisture. They are handled only indirectly, using special tools. Over 100 research laboratories worldwide conduct studies of the samples; approximately 500 samples are prepared and sent to investigators every year. In November 1969, Nixon asked NASA to make up about 250 presentation Apollo 11 lunar sample displays for 135 nations, the fifty states of the United States and its possessions, and the United Nations. Each display included Moon dust from Apollo 11 and flags, including one of the Soviet Union, taken along by Apollo 11. The rice-sized particles were four small pieces of Moon soil weighing about 50 mg and were enveloped in a clear acrylic button about as big as a United States half-dollar coin. This acrylic button magnified the grains of lunar dust. Nixon gave the Apollo 11 lunar sample displays as goodwill gifts in 1970. Experiment results The Passive Seismic Experiment ran until the command uplink failed on August 25, 1969. The downlink failed on December 14, 1969. The Lunar Laser Ranging experiment remains operational. Moonwalk camera The Hasselblad camera used during the moonwalk was thought to be lost or left on the Moon's surface. Lunar Module Eagle memorabilia In 2015, three years after Armstrong's death in 2012, his widow contacted the National Air and Space Museum to inform them she had found a white cloth bag in one of Armstrong's closets. The bag contained various items that should have been left behind in the Lunar Module Eagle, including the 16mm Data Acquisition Camera that had been used to capture images of the first Moon landing. The camera is currently on display at the National Air and Space Museum. Anniversary events 40th anniversary On July 15, 2009, Life.com released a photo gallery of previously unpublished photos of the astronauts taken by Life photographer Ralph Morse prior to the Apollo 11 launch. From July 16 to 24, 2009, NASA streamed the original mission audio on its website in real time 40 years to the minute after the events occurred. It is in the process of restoring the video footage and has released a preview of key moments.
In July 2010, air-to-ground voice recordings and film footage shot in Mission Control during the Apollo 11 powered descent and landing were re-synchronized and released for the first time. The John F. Kennedy Presidential Library and Museum set up an Adobe Flash website that rebroadcasts the transmissions of Apollo 11 from launch to landing on the Moon. On July 20, 2009, Armstrong, Aldrin, and Collins met with President Barack Obama at the White House. "We expect that there is, as we speak, another generation of kids out there who are looking up at the sky and are going to be the next Armstrong, Collins, and Aldrin", Obama said. "We want to make sure that NASA is going to be there for them when they want to take their journey." On August 7, 2009, an act of Congress awarded the three astronauts a Congressional Gold Medal, the highest civilian award in the United States. The bill was sponsored by Florida Senator Bill Nelson and Florida Representative Alan Grayson. A group of British scientists interviewed as part of the anniversary events reflected on the significance of the Moon landing. 50th anniversary On June 10, 2015, Congressman Bill Posey introduced resolution H.R. 2726 to the 114th session of the United States House of Representatives directing the United States Mint to design and sell commemorative coins in gold, silver and clad for the 50th anniversary of the Apollo 11 mission. On January 24, 2019, the Mint released the Apollo 11 Fiftieth Anniversary commemorative coins to the public on its website. A documentary film, Apollo 11, with restored footage of the 1969 event, premiered in IMAX on March 1, 2019, and broadly in theaters on March 8. The Smithsonian Institution's National Air and Space Museum and NASA sponsored the "Apollo 50 Festival" on the National Mall in Washington DC. The three-day (July 18 to 20, 2019) outdoor festival featured hands-on exhibits and activities, live performances, and speakers such as Adam Savage and NASA scientists. As part of the festival, a projection of the Saturn V rocket was displayed on the east face of the Washington Monument from July 16 through the 20th from 9:30 pm until 11:30 pm (EDT). The program also included a 17-minute show that combined full-motion video projected on the Washington Monument to recreate the assembly and launch of the Saturn V rocket. The projection was joined by a recreation of the Kennedy Space Center countdown clock and two large video screens showing archival footage to recreate the time leading up to the Moon landing. There were three shows per night on July 19–20, with the last show on Saturday, delayed slightly so the portion where Armstrong first set foot on the Moon would happen exactly 50 years to the second after the actual event. On July 19, 2019, the Google Doodle paid tribute to the Apollo 11 Moon landing, complete with a link to an animated YouTube video with voiceover by astronaut Michael Collins. Aldrin, Collins, and Armstrong's sons were hosted by President Donald Trump in the Oval Office. Films and documentaries Footprints on the Moon, a 1969 documentary film by Bill Gibson and Barry Coe, about the Apollo 11 mission Moonwalk One, a 1971 documentary film by Theo Kamecke Apollo 11: As It Happened, a 1994 six-hour documentary on ABC News' coverage of the event First Man, 2018 film by Damien Chazelle based on the 2005 James R. Hansen book First Man: The Life of Neil A. Armstrong.
Apollo 11, a 2019 documentary film by Todd Douglas Miller with restored footage of the 1969 event Chasing the Moon, a July 2019 PBS three-night six-hour documentary, directed by Robert Stone, examined the events leading up to the Apollo 11 mission. An accompanying book of the same name was also released. 8 Days: To the Moon and Back, a PBS and BBC Studios 2019 documentary film by Anthony Philipson re-enacting major portions of the Apollo 11 mission using mission audio recordings, new studio footage, NASA and news archives, and computer-generated imagery.
Technology
Crewed vehicles
null
664
https://en.wikipedia.org/wiki/Astronaut
Astronaut
An astronaut (from the Ancient Greek astron (ἄστρον), meaning 'star', and nautes (ναύτης), meaning 'sailor') is a person trained, equipped, and deployed by a human spaceflight program to serve as a commander or crew member aboard a spacecraft. Although generally reserved for professional space travelers, the term is sometimes applied to anyone who travels into space, including scientists, politicians, journalists, and tourists. "Astronaut" technically applies to all human space travelers regardless of nationality. However, astronauts fielded by Russia or the Soviet Union are typically known instead as cosmonauts (from the Russian "kosmos" (космос), meaning "space", also borrowed from the Greek kosmos). Comparatively recent developments in crewed spaceflight made by China have led to the rise of the term taikonaut (from the Mandarin "tàikōng" (太空), meaning "space"), although its use is somewhat informal and its origin is unclear. In China, the People's Liberation Army Astronaut Corps astronauts and their foreign counterparts are all officially called hángtiānyuán (航天员, meaning "heaven navigator" or literally "heaven-sailing staff"). Since 1961, 600 astronauts have flown in space. Until 2002, astronauts were sponsored and trained exclusively by governments, either by the military or by civilian space agencies. With the suborbital flight of the privately funded SpaceShipOne in 2004, a new category of astronaut was created: the commercial astronaut. Definition The criteria for what constitutes human spaceflight vary, with some focus on the point where the atmosphere becomes so thin that centrifugal force, rather than aerodynamic force, carries a significant portion of the weight of the flight object. The Fédération Aéronautique Internationale (FAI) Sporting Code for astronautics recognizes only flights that exceed the Kármán line, at an altitude of 100 kilometers (62 mi). In the United States, professional, military, and commercial astronauts who travel above an altitude of 50 miles (80 km) are awarded astronaut wings. A total of 552 people from 36 countries have reached 100 km or more in altitude, of whom 549 reached low Earth orbit or beyond. Of these, 24 people have traveled beyond low Earth orbit, either to lunar orbit, the lunar surface, or, in one case, a loop around the Moon. Three of the 24—Jim Lovell, John Young and Eugene Cernan—did so twice. Under the U.S. definition, 558 people qualify as having reached space, above 50 miles (80 km) in altitude. Of the eight X-15 pilots who exceeded 50 miles (80 km) in altitude, only one, Joseph A. Walker, exceeded 100 kilometers (about 62.1 miles), and he did it two times, becoming the first person in space twice. Space travelers have spent over 41,790 man-days (114.5 man-years) in space, including over 100 astronaut-days of spacewalks. The person with the longest cumulative time in space is Oleg Kononenko, who has spent over 1,100 days in space. Peggy A. Whitson holds the record for the most time in space by a woman, at 675 days. Terminology In 1959, when both the United States and Soviet Union were planning, but had yet to launch humans into space, NASA Administrator T. Keith Glennan and his Deputy Administrator, Hugh Dryden, discussed whether spacecraft crew members should be called astronauts or cosmonauts. Dryden preferred "cosmonaut", on the grounds that flights would occur in and to the broader cosmos, while the "astro" prefix suggested flight specifically to the stars. Most NASA Space Task Group members preferred "astronaut", which survived by common usage as the preferred American term.
When the Soviet Union launched the first man into space, Yuri Gagarin in 1961, they chose a term which anglicizes to "cosmonaut". Astronaut A professional space traveler is called an astronaut. The first known use of the term "astronaut" in the modern sense was by Neil R. Jones in his 1930 short story "The Death's Head Meteor". The word itself had been known earlier; for example, in Percy Greg's 1880 book Across the Zodiac, "astronaut" referred to a spacecraft. In Les Navigateurs de l'infini (1925) by J.-H. Rosny aîné, the word astronautique (astronautics) was used. The word may have been inspired by "aeronaut", an older term for an air traveler first applied in 1784 to balloonists. An early use of "astronaut" in a non-fiction publication is Eric Frank Russell's poem "The Astronaut", appearing in the November 1934 Bulletin of the British Interplanetary Society. The first known formal use of the term astronautics in the scientific community was the establishment of the annual International Astronautical Congress in 1950, and the subsequent founding of the International Astronautical Federation the following year. NASA applies the term astronaut to any crew member aboard NASA spacecraft bound for Earth orbit or beyond. NASA also uses the term as a title for those selected to join its Astronaut Corps. The European Space Agency similarly uses the term astronaut for members of its Astronaut Corps. Cosmonaut By convention, an astronaut employed by the Russian Federal Space Agency (or its predecessor, the Soviet space program) is called a cosmonaut in English texts. The word is an Anglicization of kosmonavt (космонавт). Other countries of the former Eastern Bloc use variations of the Russian kosmonavt, such as the Polish kosmonauta (although Poles also used astronauta, and the two words are considered synonyms). Coinage of the term has been credited to Soviet aeronautics (or "cosmonautics") pioneer Mikhail Tikhonravov (1900–1974). The first cosmonaut was Soviet Air Force pilot Yuri Gagarin, also the first person in space. He was one of the first six Soviet citizens—with German Titov, Yevgeny Khrunov, Andriyan Nikolayev, Pavel Popovich, and Grigoriy Nelyubov—who were given the title of pilot-cosmonaut in January 1961. Valentina Tereshkova was the first female cosmonaut and the first and youngest woman to have flown in space with a solo mission, on Vostok 6 in 1963. On 14 March 1995, Norman Thagard became the first American to ride to space on board a Russian launch vehicle, and thus became the first "American cosmonaut". Taikonaut In Chinese, the term yǔhángyuán (宇航员, "cosmos navigating personnel") is used for astronauts and cosmonauts in general, while hángtiānyuán (航天员, "navigating celestial-heaven personnel") is used for Chinese astronauts. Here, hángtiān (航天, literally "heaven-navigating", or spaceflight) is strictly defined as the navigation of outer space within the local star system, i.e. the Solar System. The phrase tàikōngrén (太空人, "spaceman") is often used in Hong Kong and Taiwan. The term taikonaut is used by some English-language news media organizations for professional space travelers from China. The word has featured in the Longman and Oxford English dictionaries, and the term became more common in 2003 when China sent its first astronaut Yang Liwei into space aboard the Shenzhou 5 spacecraft. This is the term used by Xinhua News Agency in the English version of the Chinese People's Daily since the advent of the Chinese space program. The origin of the term is unclear; as early as May 1998, Chiew Lee Yih from Malaysia used it in newsgroups.
Parastronaut For its 2022 Astronaut Group, the European Space Agency envisioned recruiting an astronaut with a physical disability, a category they called "parastronauts", with the intention but not guarantee of spaceflight. The categories of disability considered for the program were individuals with lower limb deficiency (either through amputation or congenital), leg length difference, or short stature (less than 130 cm). On 23 November 2022, John McFall was selected to be the first ESA parastronaut. Other terms With the rise of space tourism, NASA and the Russian Federal Space Agency agreed to use the term "spaceflight participant" to distinguish those space travelers from professional astronauts on missions coordinated by those two agencies. While no nation other than Russia (and previously the Soviet Union), the United States, and China have launched a crewed spacecraft, several other nations have sent people into space in cooperation with one of these countries, e.g. the Soviet-led Interkosmos program. Inspired partly by these missions, other synonyms for astronaut have entered occasional English usage. For example, the term spationaut (French spationaute) is sometimes used to describe French space travelers, from the Latin word spatium for "space"; the Malay term angkasawan (deriving from angkasa, meaning 'space') was used to describe participants in the Angkasawan program (note its similarity with the Indonesian term antariksawan). Plans of the Indian Space Research Organisation to launch its crewed Gaganyaan spacecraft have at times spurred public discussion about whether a term other than astronaut should be used for the crew members, suggesting vyomanaut (from the Sanskrit word meaning 'sky' or 'space') or gagannaut (from the Sanskrit word for 'sky'). In Finland, the NASA astronaut Timothy Kopra, a Finnish American, has sometimes been referred to as , from the Finnish word . Across Germanic languages, the word for "astronaut" typically translates to "space traveler", as it does with German's Raumfahrer, Dutch's ruimtevaarder, Swedish's rymdfarare, and Norwegian's romfarer. As of 2021 in the United States, astronaut status is conferred on a person depending on the authorizing agency: one who flies above 50 miles (80 km) in a vehicle for NASA or the military is considered an astronaut (with no qualifier); one who flies in a vehicle to the International Space Station in a mission coordinated by NASA and Roscosmos is a spaceflight participant; one who flies above 50 miles (80 km) in a non-NASA vehicle as a crewmember and demonstrates activities during flight that are essential to public safety, or contribute to human space flight safety, is considered a commercial astronaut by the Federal Aviation Administration; one who flies to the International Space Station as part of a "privately funded, dedicated commercial spaceflight on a commercial launch vehicle dedicated to the mission ... to conduct approved commercial and marketing activities on the space station (or in a commercial segment attached to the station)" is considered a private astronaut by NASA (as of 2020, nobody had yet qualified for this status); and a space tourist is the generally accepted but unofficial term for a paying non-crew passenger who flies on a private non-NASA or military vehicle above 50 miles (80 km) (as of 2020, nobody had yet qualified for this status). On July 20, 2021, the FAA issued an order redefining the eligibility criteria to be an astronaut in response to the private suborbital spaceflights of Jeff Bezos and Richard Branson.
The new criteria states that one must have "[d]emonstrated activities during flight that were essential to public safety, or contributed to human space flight safety" to qualify as an astronaut. This new definition excludes Bezos and Branson. Space travel milestones The first human in space was Soviet Yuri Gagarin, who was launched on 12 April 1961, aboard Vostok 1 and orbited around the Earth for 108 minutes. The first woman in space was Soviet Valentina Tereshkova, who launched on 16 June 1963, aboard Vostok 6 and orbited Earth for almost three days. Alan Shepard became the first American and second person in space on 5 May 1961, on a 15-minute sub-orbital flight aboard Freedom 7. The first American to orbit the Earth was John Glenn, aboard Friendship 7 on 20 February 1962. The first American woman in space was Sally Ride, during Space Shuttle Challenger's mission STS-7, on 18 June 1983. In 1992, Mae Jemison became the first African American woman to travel in space aboard STS-47. Cosmonaut Alexei Leonov was the first person to conduct an extravehicular activity (EVA), (commonly called a "spacewalk"), on 18 March 1965, on the Soviet Union's Voskhod 2 mission. This was followed two and a half months later by astronaut Ed White who made the first American EVA on NASA's Gemini 4 mission. The first crewed mission to orbit the Moon, Apollo 8, included American William Anders who was born in Hong Kong, making him the first Asian-born astronaut in 1968. The Soviet Union, through its Intercosmos program, allowed people from other "socialist" (i.e. Warsaw Pact and other Soviet-allied) countries to fly on its missions, with the notable exceptions of France and Austria participating in Soyuz TM-7 and Soyuz TM-13, respectively. An example is Czechoslovak Vladimír Remek, the first cosmonaut from a country other than the Soviet Union or the United States, who flew to space in 1978 on a Soyuz-U rocket. Rakesh Sharma became the first Indian citizen to travel to space. He was launched aboard Soyuz T-11, on 2 April 1984. On 23 July 1980, Pham Tuan of Vietnam became the first Asian in space when he flew aboard Soyuz 37. Also in 1980, Cuban Arnaldo Tamayo Méndez became the first person of Hispanic and black African descent to fly in space, and in 1983, Guion Bluford became the first African American to fly into space. In April 1985, Taylor Wang became the first ethnic Chinese person in space. The first person born in Africa to fly in space was Patrick Baudry (France), in 1985. In 1985, Saudi Arabian Prince Sultan Bin Salman Bin AbdulAziz Al-Saud became the first Arab Muslim astronaut in space. In 1988, Abdul Ahad Mohmand became the first Afghan to reach space, spending nine days aboard the Mir space station. With the increase of seats on the Space Shuttle, the U.S. began taking international astronauts. In 1983, Ulf Merbold of West Germany became the first non-US citizen to fly in a US spacecraft. In 1984, Marc Garneau became the first of eight Canadian astronauts to fly in space (through 2010). In 1985, Rodolfo Neri Vela became the first Mexican-born person in space. In 1991, Helen Sharman became the first Briton to fly in space. In 2002, Mark Shuttleworth became the first citizen of an African country to fly in space, as a paying spaceflight participant. In 2003, Ilan Ramon became the first Israeli to fly in space, although he died during a re-entry accident. On 15 October 2003, Yang Liwei became China's first astronaut on the Shenzhou 5 spacecraft. 
On 30 May 2020, Doug Hurley and Bob Behnken became the first astronauts to launch to orbit on a private crewed spacecraft, Crew Dragon. Age milestones The youngest person to reach space is Oliver Daemen, who was 18 years and 11 months old when he made a suborbital spaceflight on Blue Origin NS-16. Daemen, who was a commercial passenger aboard the New Shepard, broke the record of Soviet cosmonaut Gherman Titov, who was 25 years old when he flew Vostok 2. Titov remains the youngest human to reach orbit; he rounded the planet 17 times. Titov was also the first person to suffer space sickness and the first person to sleep in space, twice. The oldest person to reach space is William Shatner, who was 90 years old when he made a suborbital spaceflight on Blue Origin NS-18. The oldest person to reach orbit is John Glenn, one of the Mercury 7, who was 77 when he flew on STS-95. Duration and distance milestones The longest time spent in space was by Russian Valeri Polyakov, who spent 438 days there. As of 2006, the most spaceflights by an individual astronaut is seven, a record held by both Jerry L. Ross and Franklin Chang-Diaz. The farthest distance from Earth an astronaut has traveled was , when Jim Lovell, Jack Swigert, and Fred Haise went around the Moon during the Apollo 13 emergency. Civilian and non-government milestones The first civilian in space was Valentina Tereshkova aboard Vostok 6 (she also became the first woman in space on that mission). Tereshkova was only honorarily inducted into the USSR's Air Force, which did not accept female pilots at that time. A month later, Joseph Albert Walker became the first American civilian in space when his X-15 Flight 90 crossed the line, qualifying him by the international definition of spaceflight. Walker had joined the US Army Air Force but was not a member during his flight. The first people in space who had never been a member of any country's armed forces were both Konstantin Feoktistov and Boris Yegorov aboard Voskhod 1. The first non-governmental space traveler was Byron K. Lichtenberg, a researcher from the Massachusetts Institute of Technology who flew on STS-9 in 1983. In December 1990, Toyohiro Akiyama became the first paying space traveler and the first journalist in space for Tokyo Broadcasting System, a visit to Mir as part of an estimated $12 million (USD) deal with a Japanese TV station, although at the time, the term used to refer to Akiyama was "Research Cosmonaut". Akiyama suffered severe space sickness during his mission, which affected his productivity. The first self-funded space tourist was Dennis Tito on board the Russian spacecraft Soyuz TM-3 on 28 April 2001. Self-funded travelers The first person to fly on an entirely privately funded mission was Mike Melvill, piloting SpaceShipOne flight 15P on a suborbital journey, although he was a test pilot employed by Scaled Composites and not an actual paying space tourist. Jared Isaacman was the first person to self-fund a mission to orbit, commanding Inspiration4 in 2021. 
Nine others have paid Space Adventures to fly to the International Space Station: Dennis Tito (American): 28 April – 6 May 2001 Mark Shuttleworth (South African): 25 April – 5 May 2002 Gregory Olsen (American): 1–11 October 2005 Anousheh Ansari (Iranian / American): 18–29 September 2006 Charles Simonyi (Hungarian / American): 7–21 April 2007, 26 March – 8 April 2009 Richard Garriott (British / American): 12–24 October 2008 Guy Laliberté (Canadian): 30 September 2009 – 11 October 2009 Yusaku Maezawa and Yozo Hirano (both Japanese): 8 – 24 December 2021 Training The first NASA astronauts were selected for training in 1959. Early in the space program, military jet test piloting and engineering training were often cited as prerequisites for selection as an astronaut at NASA, although neither John Glenn nor Scott Carpenter (of the Mercury Seven) had any university degree, in engineering or any other discipline at the time of their selection. Selection was initially limited to military pilots. The earliest astronauts for both the US and the USSR tended to be jet fighter pilots, and were often test pilots. Once selected, NASA astronauts go through twenty months of training in a variety of areas, including training for extravehicular activity in a facility such as NASA's Neutral Buoyancy Laboratory. Astronauts-in-training (astronaut candidates) may also experience short periods of weightlessness (microgravity) in an aircraft called the "Vomit Comet," the nickname given to a pair of modified KC-135s (retired in 2000 and 2004, respectively, and replaced in 2005 with a C-9) which perform parabolic flights. Astronauts are also required to accumulate a number of flight hours in high-performance jet aircraft. This is mostly done in T-38 jet aircraft out of Ellington Field, due to its proximity to the Johnson Space Center. Ellington Field is also where the Shuttle Training Aircraft is maintained and developed, although most flights of the aircraft are conducted from Edwards Air Force Base. Astronauts in training must learn how to control and fly the Space Shuttle; further, it is vital that they are familiar with the International Space Station so they know what they must do when they get there. NASA candidacy requirements The candidate must be a citizen of the United States. The candidate must complete a master's degree in a STEM field, including engineering, biological science, physical science, computer science or mathematics. The candidate must have at least two years of related professional experience obtained after degree completion or at least 1,000 hours pilot-in-command time on jet aircraft. The candidate must be able to pass the NASA long-duration flight astronaut physical. The candidate must also have skills in leadership, teamwork and communications. The master's degree requirement can also be met by: Two years of work toward a doctoral program in a related science, technology, engineering or math field. A completed Doctor of Medicine or Doctor of Osteopathic Medicine degree. Completion of a nationally recognized test pilot school program. Mission Specialist Educator Applicants must have a bachelor's degree with teaching experience, including work at the kindergarten through twelfth grade level. An advanced degree, such as a master's degree or a doctoral degree, is not required, but is strongly desired. Mission Specialist Educators, or "Educator Astronauts", were first selected in 2004; as of 2007, there are three NASA Educator astronauts: Joseph M. Acaba, Richard R. 
Arnold, and Dorothy Metcalf-Lindenburger. Barbara Morgan, selected as back-up teacher to Christa McAuliffe in 1985, is considered to be the first Educator astronaut by the media, but she trained as a mission specialist. The Educator Astronaut program is a successor to the Teacher in Space program from the 1980s. Health risks of space travel Astronauts are susceptible to a variety of health risks including decompression sickness, barotrauma, immunodeficiencies, loss of bone and muscle, loss of eyesight, orthostatic intolerance, sleep disturbances, and radiation injury. A variety of large scale medical studies are being conducted in space via the National Space Biomedical Research Institute (NSBRI) to address these issues. Prominent among these is the Advanced Diagnostic Ultrasound in Microgravity Study in which astronauts (including former ISS commanders Leroy Chiao and Gennady Padalka) perform ultrasound scans under the guidance of remote experts to diagnose and potentially treat hundreds of medical conditions in space. This study's techniques are now being applied to cover professional and Olympic sports injuries as well as ultrasound performed by non-expert operators in medical and high school students. It is anticipated that remote guided ultrasound will have application on Earth in emergency and rural care situations, where access to a trained physician is often rare. A 2006 Space Shuttle experiment found that Salmonella typhimurium, a bacterium that can cause food poisoning, became more virulent when cultivated in space. More recently, in 2017, bacteria were found to be more resistant to antibiotics and to thrive in the near-weightlessness of space. Microorganisms have been observed to survive the vacuum of outer space. On 31 December 2012, a NASA-supported study reported that human spaceflight may harm the brain and accelerate the onset of Alzheimer's disease. In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars. Over the last decade, flight surgeons and scientists at NASA have seen a pattern of vision problems in astronauts on long-duration space missions. The syndrome, known as visual impairment intracranial pressure (VIIP), has been reported in nearly two-thirds of space explorers after long periods spent aboard the International Space Station (ISS). On 2 November 2017, scientists reported that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space, based on MRI studies. Astronauts who took longer space trips were associated with greater brain changes. Being in space can be physiologically deconditioning on the body. It can affect the otolith organs and adaptive capabilities of the central nervous system. Zero gravity and cosmic rays can cause many implications for astronauts. In October 2018, NASA-funded researchers found that lengthy journeys into outer space, including travel to the planet Mars, may substantially damage the gastrointestinal tissues of astronauts. The studies support earlier work that found such journeys could significantly damage the brains of astronauts, and age them prematurely. Researchers in 2018 reported, after detecting the presence on the International Space Station (ISS) of five Enterobacter bugandensis bacterial strains, none pathogenic to humans, that microorganisms on ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts. 
A study by Russian scientists published in April 2019 stated that astronauts facing space radiation could face temporary hindrance of their memory centers. While this does not affect their intellectual capabilities, it temporarily hinders formation of new cells in brain's memory centers. The study conducted by Moscow Institute of Physics and Technology (MIPT) concluded this after they observed that mice exposed to neutron and gamma radiation did not impact the rodents' intellectual capabilities. A 2020 study conducted on the brains of eight male Russian cosmonauts after they returned from long stays aboard the International Space Station showed that long-duration spaceflight causes many physiological adaptions, including macro- and microstructural changes. While scientists still know little about the effects of spaceflight on brain structure, this study showed that space travel can lead to new motor skills (dexterity), but also slightly weaker vision, both of which could possibly be long lasting. It was the first study to provide clear evidence of sensorimotor neuroplasticity, which is the brain's ability to change through growth and reorganization. Food and drink An astronaut on the International Space Station requires about mass of food per meal each day (inclusive of about packaging mass per meal). Space Shuttle astronauts worked with nutritionists to select menus that appealed to their individual tastes. Five months before flight, menus were selected and analyzed for nutritional content by the shuttle dietician. Foods are tested to see how they will react in a reduced gravity environment. Caloric requirements are determined using a basal energy expenditure (BEE) formula. On Earth, the average American uses about of water every day. On board the ISS astronauts limit water use to only about per day. Insignia In Russia, cosmonauts are awarded Pilot-Cosmonaut of the Russian Federation upon completion of their missions, often accompanied with the award of Hero of the Russian Federation. This follows the practice established in the USSR where cosmonauts were usually awarded the title Hero of the Soviet Union. At NASA, those who complete astronaut candidate training receive a silver lapel pin. Once they have flown in space, they receive a gold pin. U.S. astronauts who also have active-duty military status receive a special qualification badge, known as the Astronaut Badge, after participation on a spaceflight. The United States Air Force also presents an Astronaut Badge to its pilots who exceed in altitude. Deaths , eighteen astronauts (fourteen men and four women) have died during four space flights. By nationality, thirteen were American, four were Russian (Soviet Union), and one was Israeli. , eleven people (all men) have died training for spaceflight: eight Americans and three Russians. Six of these were in crashes of training jet aircraft, one drowned during water recovery training, and four were due to fires in pure oxygen environments. Astronaut David Scott left a memorial consisting of a statuette titled Fallen Astronaut on the surface of the Moon during his 1971 Apollo 15 mission, along with a list of the names of eight of the astronauts and six cosmonauts known at the time to have died in service. 
The Space Mirror Memorial, which stands on the grounds of the Kennedy Space Center Visitor Complex, is maintained by the Astronauts Memorial Foundation and commemorates the lives of the men and women who have died during spaceflight and during training in the space programs of the United States. In addition to twenty NASA career astronauts, the memorial includes the names of an X-15 test pilot, a U.S. Air Force officer who died while training for a then-classified military space program, and a civilian spaceflight participant.
Alkali metal
The alkali metals consist of the chemical elements lithium (Li), sodium (Na), potassium (K), rubidium (Rb), caesium (Cs), and francium (Fr). Together with hydrogen they constitute group 1, which lies in the s-block of the periodic table. All alkali metals have their outermost electron in an s-orbital: this shared electron configuration results in their having very similar characteristic properties. Indeed, the alkali metals provide the best example of group trends in properties in the periodic table, with elements exhibiting well-characterised homologous behaviour. This family of elements is also known as the lithium family after its leading element. The alkali metals are all shiny, soft, highly reactive metals at standard temperature and pressure and readily lose their outermost electron to form cations with charge +1. They can all be cut easily with a knife due to their softness, exposing a shiny surface that tarnishes rapidly in air due to oxidation by atmospheric moisture and oxygen (and in the case of lithium, nitrogen). Because of their high reactivity, they must be stored under oil to prevent reaction with air, and are found naturally only in salts and never as the free elements. Caesium, the fifth alkali metal, is the most reactive of all the metals. All the alkali metals react with water, with the heavier alkali metals reacting more vigorously than the lighter ones. All of the discovered alkali metals occur in nature as their compounds: in order of abundance, sodium is the most abundant, followed by potassium, lithium, rubidium, caesium, and finally francium, which is very rare due to its extremely high radioactivity; francium occurs only in minute traces in nature as an intermediate step in some obscure side branches of the natural decay chains. Experiments have been conducted to attempt the synthesis of element 119, which is likely to be the next member of the group; none were successful. However, ununennium may not be an alkali metal due to relativistic effects, which are predicted to have a large influence on the chemical properties of superheavy elements; even if it does turn out to be an alkali metal, it is predicted to have some differences in physical and chemical properties from its lighter homologues. Most alkali metals have many different applications. One of the best-known applications of the pure elements is the use of rubidium and caesium in atomic clocks, of which caesium atomic clocks form the basis of the second. A common application of the compounds of sodium is the sodium-vapour lamp, which emits light very efficiently. Table salt, or sodium chloride, has been used since antiquity. Lithium finds use as a psychiatric medication and as an anode in lithium batteries. Sodium, potassium and possibly lithium are essential elements, having major biological roles as electrolytes, and although the other alkali metals are not essential, they also have various effects on the body, both beneficial and harmful.
History
Sodium compounds have been known since ancient times; salt (sodium chloride) has been an important commodity in human activities. While potash has been used since ancient times, it was not understood for most of its history to be a fundamentally different substance from sodium mineral salts.
Georg Ernst Stahl obtained experimental evidence which led him to suggest the fundamental difference of sodium and potassium salts in 1702, and Henri-Louis Duhamel du Monceau was able to prove this difference in 1736. The exact chemical composition of potassium and sodium compounds, and the status as chemical element of potassium and sodium, was not known then, and thus Antoine Lavoisier did not include either alkali in his list of chemical elements in 1789. Pure potassium was first isolated in 1807 in England by Humphry Davy, who derived it from caustic potash (KOH, potassium hydroxide) by the use of electrolysis of the molten salt with the newly invented voltaic pile. Previous attempts at electrolysis of the aqueous salt were unsuccessful due to potassium's extreme reactivity. Potassium was the first metal that was isolated by electrolysis. Later that same year, Davy reported extraction of sodium from the similar substance caustic soda (NaOH, lye) by a similar technique, demonstrating the elements, and thus the salts, to be different. Petalite () was discovered in 1800 by the Brazilian chemist José Bonifácio de Andrada in a mine on the island of Utö, Sweden. However, it was not until 1817 that Johan August Arfwedson, then working in the laboratory of the chemist Jöns Jacob Berzelius, detected the presence of a new element while analysing petalite ore. This new element was noted by him to form compounds similar to those of sodium and potassium, though its carbonate and hydroxide were less soluble in water and more alkaline than the other alkali metals. Berzelius gave the unknown material the name lithion/lithina, from the Greek word λιθoς (transliterated as lithos, meaning "stone"), to reflect its discovery in a solid mineral, as opposed to potassium, which had been discovered in plant ashes, and sodium, which was known partly for its high abundance in animal blood. He named the metal inside the material lithium. Lithium, sodium, and potassium were part of the discovery of periodicity, as they are among a series of triads of elements in the same group that were noted by Johann Wolfgang Döbereiner in 1850 as having similar properties. Rubidium and caesium were the first elements to be discovered using the spectroscope, invented in 1859 by Robert Bunsen and Gustav Kirchhoff. The next year, they discovered caesium in the mineral water from Bad Dürkheim, Germany. Their discovery of rubidium came the following year in Heidelberg, Germany, finding it in the mineral lepidolite. The names of rubidium and caesium come from the most prominent lines in their emission spectra: a bright red line for rubidium (from the Latin word rubidus, meaning dark red or bright red), and a sky-blue line for caesium (derived from the Latin word caesius, meaning sky-blue). Around 1865 John Newlands produced a series of papers where he listed the elements in order of increasing atomic weight and similar physical and chemical properties that recurred at intervals of eight; he likened such periodicity to the octaves of music, where notes an octave apart have similar musical functions. His version put all the alkali metals then known (lithium to caesium), as well as copper, silver, and thallium (which show the +1 oxidation state characteristic of the alkali metals), together into a group. His table placed hydrogen with the halogens. After 1869, Dmitri Mendeleev proposed his periodic table placing lithium at the top of a group with sodium, potassium, rubidium, caesium, and thallium. 
Two years later, Mendeleev revised his table, placing hydrogen in group 1 above lithium, and also moving thallium to the boron group. In this 1871 version, copper, silver, and gold were placed twice, once as part of group IB, and once as part of a "group VIII" encompassing today's groups 8 to 11. After the introduction of the 18-column table, the group IB elements were moved to their current position in the d-block, while the alkali metals were left in group IA. The group's name was changed to group 1 in 1988. The trivial name "alkali metals" comes from the fact that the hydroxides of the group 1 elements are all strong alkalis when dissolved in water. There were at least four erroneous and incomplete discoveries before Marguerite Perey of the Curie Institute in Paris, France, discovered francium in 1939 by purifying a sample of actinium-227, which had been reported to have a decay energy of 220 keV. However, Perey noticed decay particles with an energy level below 80 keV. Perey thought this decay activity might have been caused by a previously unidentified decay product, one that was separated during purification but emerged again out of the pure actinium-227. Various tests eliminated the possibility of the unknown element being thorium, radium, lead, bismuth, or thallium. The new product exhibited chemical properties of an alkali metal (such as coprecipitating with caesium salts), which led Perey to believe that it was element 87, produced by the alpha decay of actinium-227. Perey then attempted to determine the proportion of beta decay to alpha decay in actinium-227. Her first test put the alpha branching at 0.6%, a figure that she later revised to 1%. The next element below francium (eka-francium) in the periodic table would be ununennium (Uue), element 119. The synthesis of ununennium was first attempted in 1985 by bombarding a target of einsteinium-254 with calcium-48 ions at the SuperHILAC accelerator at the Lawrence Berkeley National Laboratory in Berkeley, California. No atoms were identified, leading to a limiting cross section of 300 nb. The attempted reaction was 254Es + 48Ca → 302Uue* → no atoms. It is highly unlikely that this reaction will be able to create any atoms of ununennium in the near future, given the extreme difficulty of making enough einsteinium-254 for a target large enough to raise the sensitivity of the experiment to the required level: einsteinium-254 is favoured for the production of ultraheavy elements because of its large mass, relatively long half-life of 270 days, and availability in significant amounts of several micrograms, but it has not been found in nature and has only been produced in laboratories, in quantities smaller than those needed for effective synthesis of superheavy elements. However, given that ununennium is only the first period 8 element on the extended periodic table, it may well be discovered in the near future through other reactions, and indeed an attempt to synthesise it is currently ongoing in Japan. None of the period 8 elements has yet been discovered, and it is also possible, due to drip instabilities, that only the lower period 8 elements, up to around element 128, are physically possible. No attempts at synthesis have been made for any heavier alkali metals: due to their extremely high atomic number, they would require new, more powerful methods and technology to make.
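As a check on the reaction given above, the identity of the intended compound nucleus follows from simple conservation of protons and nucleons; the arithmetic below is a worked restatement of figures already in the text, not additional experimental information.

$$
{}^{254}_{99}\mathrm{Es} + {}^{48}_{20}\mathrm{Ca} \longrightarrow {}^{302}_{119}\mathrm{Uue}^{*}, \qquad Z = 99 + 20 = 119, \qquad A = 254 + 48 = 302
$$

The asterisk marks an excited compound nucleus, which would have had to de-excite (typically by evaporating a few neutrons) without fissioning for any atoms of element 119 to be observed; none were.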
Occurrence
In the Solar System
The Oddo–Harkins rule holds that elements with even atomic numbers are more common than those with odd atomic numbers, with the exception of hydrogen. This rule argues that elements with odd atomic numbers have one unpaired proton and are more likely to capture another, thus increasing their atomic number. In elements with even atomic numbers, protons are paired, with each member of the pair offsetting the spin of the other, enhancing stability. All the alkali metals have odd atomic numbers and they are not as common as the elements with even atomic numbers adjacent to them (the noble gases and the alkaline earth metals) in the Solar System. The heavier alkali metals are also less abundant than the lighter ones, as the alkali metals from rubidium onward can only be synthesised in supernovae and not in stellar nucleosynthesis. Lithium is also much less abundant than sodium and potassium, as it is poorly synthesised both in Big Bang nucleosynthesis and in stars: the Big Bang could only produce trace quantities of lithium, beryllium and boron due to the absence of a stable nucleus with 5 or 8 nucleons, and stellar nucleosynthesis could only pass this bottleneck by the triple-alpha process, fusing three helium nuclei to form carbon and skipping over those three elements.
On Earth
The Earth formed from the same cloud of matter that formed the Sun, but the planets acquired different compositions during the formation and evolution of the Solar System. In turn, the natural history of the Earth caused parts of this planet to have differing concentrations of the elements. The mass of the Earth is approximately 5.98 × 10^24 kg. It is composed mostly of iron (32.1%), oxygen (30.1%), silicon (15.1%), magnesium (13.9%), sulfur (2.9%), nickel (1.8%), calcium (1.5%), and aluminium (1.4%), with the remaining 1.2% consisting of trace amounts of other elements. Due to planetary differentiation, the core region is believed to be primarily composed of iron (88.8%), with smaller amounts of nickel (5.8%), sulfur (4.5%), and less than 1% trace elements. The alkali metals, due to their high reactivity, never occur in nature in pure elemental form. They are lithophiles and therefore remain close to the Earth's surface because they combine readily with oxygen and so associate strongly with silica, forming relatively low-density minerals that do not sink down into the Earth's core. Potassium, rubidium and caesium are also incompatible elements due to their large ionic radii. Sodium and potassium are very abundant on Earth, both being among the ten most common elements in Earth's crust; sodium makes up approximately 2.6% of the Earth's crust measured by weight, making it the sixth most abundant element overall and the most abundant alkali metal. Potassium makes up approximately 1.5% of the Earth's crust and is the seventh most abundant element. Sodium is found in many different minerals, of which the most common is ordinary salt (sodium chloride), which occurs in vast quantities dissolved in seawater. Other solid deposits include halite, amphibole, cryolite, nitratine, and zeolite. Many of these solid deposits occur as a result of ancient seas evaporating, a process which still occurs today in places such as Utah's Great Salt Lake and the Dead Sea.
Despite their near-equal abundance in Earth's crust, sodium is far more common than potassium in the ocean, both because potassium's larger size makes its salts less soluble and because potassium is bound by silicates in soil, while the potassium that does leach out is absorbed far more readily by plant life than sodium is. Despite its chemical similarity to them, lithium typically does not occur together with sodium or potassium, owing to its smaller size. Due to its relatively low reactivity, it can be found in seawater in large amounts; it is estimated that the lithium concentration in seawater is approximately 0.14 to 0.25 parts per million (ppm), or 25 micromolar. Its diagonal relationship with magnesium often allows it to replace magnesium in ferromagnesian minerals, where its crustal concentration is about 18 ppm, comparable to that of gallium and niobium. Commercially, the most important lithium mineral is spodumene, which occurs in large deposits worldwide. Rubidium is approximately as abundant as zinc and more abundant than copper. It occurs naturally in the minerals leucite, pollucite, carnallite, zinnwaldite, and lepidolite, although none of these contain only rubidium and no other alkali metals. Caesium is more abundant than some commonly known elements, such as antimony, cadmium, tin, and tungsten, but is much less abundant than rubidium. Francium-223, the only naturally occurring isotope of francium, is the product of the alpha decay of actinium-227 and can be found in trace amounts in uranium minerals. In a given sample of uranium, there is estimated to be only one francium atom for every 10^18 uranium atoms. It has been calculated that there are at most 30 grams of francium in the Earth's crust at any time, due to its extremely short half-life of 22 minutes.
Properties
Physical and chemical
The physical and chemical properties of the alkali metals can be readily explained by their having an ns1 valence electron configuration, which results in weak metallic bonding. Hence, all the alkali metals are soft and have low densities, melting and boiling points, and heats of sublimation, vaporisation, and dissociation. They all crystallise in the body-centred cubic crystal structure, and have distinctive flame colours because their outer s electron is very easily excited. Indeed, these flame test colours are the most common way of identifying them, since all their salts with common ions are soluble. The ns1 configuration also results in the alkali metals having very large atomic and ionic radii, as well as very high thermal and electrical conductivity. Their chemistry is dominated by the loss of their lone valence electron in the outermost s-orbital to form the +1 oxidation state, due to the ease of ionising this electron and the very high second ionisation energy. Most of the chemistry has been observed only for the first five members of the group. The chemistry of francium is not well established due to its extreme radioactivity; thus, the presentation of its properties here is limited. What little is known about francium shows that it is very close in behaviour to caesium, as expected. The physical properties of francium are even sketchier because the bulk element has never been observed; hence any data that may be found in the literature are certainly speculative extrapolations. The alkali metals are more similar to each other than the elements in any other group are to each other.
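The shared ns1 valence configuration mentioned above can be written out explicitly: each alkali metal is a noble-gas core plus a single s electron (standard ground-state configurations, listed here for illustration).

$$
\mathrm{Li}\colon [\mathrm{He}]\,2s^{1} \quad\;\; \mathrm{Na}\colon [\mathrm{Ne}]\,3s^{1} \quad\;\; \mathrm{K}\colon [\mathrm{Ar}]\,4s^{1} \quad\;\; \mathrm{Rb}\colon [\mathrm{Kr}]\,5s^{1} \quad\;\; \mathrm{Cs}\colon [\mathrm{Xe}]\,6s^{1} \quad\;\; \mathrm{Fr}\colon [\mathrm{Rn}]\,7s^{1}
$$

Removing that single electron leaves a noble-gas core behind, which is why the chemistry of the whole group is dominated by the +1 oxidation state.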
Indeed, the similarity is so great that it is quite difficult to separate potassium, rubidium, and caesium, due to their similar ionic radii; lithium and sodium are more distinct. For instance, when moving down the table, all known alkali metals show increasing atomic radius, decreasing electronegativity, increasing reactivity, and decreasing melting and boiling points as well as heats of fusion and vaporisation. In general, their densities increase when moving down the table, with the exception that potassium is less dense than sodium. One of the very few properties of the alkali metals that does not display a very smooth trend is their reduction potentials: lithium's value is anomalous, being more negative than the others. This is because the Li+ ion has a very high hydration energy in the gas phase: though the lithium ion disrupts the structure of water significantly, causing a higher change in entropy, this high hydration energy is enough to make the reduction potentials indicate it as being the most electropositive alkali metal, despite the difficulty of ionising it in the gas phase. The stable alkali metals are all silver-coloured metals except for caesium, which has a pale golden tint: it is one of only three metals that are clearly coloured (the other two being copper and gold). Additionally, the heavy alkaline earth metals calcium, strontium, and barium, as well as the divalent lanthanides europium and ytterbium, are pale yellow, though the colour is much less prominent than it is for caesium. Their lustre tarnishes rapidly in air due to oxidation. All the alkali metals are highly reactive and are never found in elemental forms in nature. Because of this, they are usually stored in mineral oil or kerosene (paraffin oil). They react aggressively with the halogens to form the alkali metal halides, which are white ionic crystalline compounds that are all soluble in water except lithium fluoride (LiF). The alkali metals also react with water to form strongly alkaline hydroxides and thus should be handled with great care. The heavier alkali metals react more vigorously than the lighter ones; for example, when dropped into water, caesium produces a larger explosion than potassium if the same number of moles of each metal is used. The alkali metals have the lowest first ionisation energies in their respective periods of the periodic table because of their low effective nuclear charge and the ability to attain a noble gas configuration by losing just one electron. Not only do the alkali metals react with water, but also with proton donors like alcohols and phenols, gaseous ammonia, and alkynes, the last demonstrating the phenomenal degree of their reactivity. Their great power as reducing agents makes them very useful in liberating other metals from their oxides or halides. The second ionisation energy of all of the alkali metals is very high as it is in a full shell that is also closer to the nucleus; thus, they almost always lose a single electron, forming cations. The alkalides are an exception: they are unstable compounds which contain alkali metals in a −1 oxidation state, which is very unusual as before the discovery of the alkalides, the alkali metals were not expected to be able to form anions and were thought to be able to appear in salts only as cations. The alkalide anions have filled s-subshells, which gives them enough stability to exist. 
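Returning to the reduction potentials discussed above, lithium's anomaly can be made more concrete with a rough thermochemical cycle for converting the metal into the aqueous cation; this is a schematic decomposition into the terms named in the text, with no numerical values implied.

$$
\mathrm{M(s)} \xrightarrow{\;\Delta H_{\mathrm{atom}}\;} \mathrm{M(g)} \xrightarrow{\;IE_{1}\;} \mathrm{M^{+}(g)} + e^{-} \xrightarrow{\;\Delta H_{\mathrm{hyd}}\;} \mathrm{M^{+}(aq)} + e^{-}
$$

The overall enthalpy change is roughly $\Delta H_{\mathrm{atom}} + IE_{1} + \Delta H_{\mathrm{hyd}}$, where only the hydration term is negative. For lithium the first two terms are the largest in the group, but the hydration enthalpy of the very small Li+ ion is so strongly negative that it dominates, which is why the electrode potential nevertheless marks lithium as the most electropositive alkali metal in aqueous solution.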
All the stable alkali metals except lithium are known to be able to form alkalides, and the alkalides have much theoretical interest due to their unusual stoichiometry and low ionisation potentials. Alkalides are chemically similar to the electrides, which are salts with trapped electrons acting as anions. A particularly striking example of an alkalide is "inverse sodium hydride", H+Na− (both ions being complexed), as opposed to the usual sodium hydride, Na+H−: it is unstable in isolation, due to its high energy resulting from the displacement of two electrons from hydrogen to sodium, although several derivatives are predicted to be metastable or stable. In aqueous solution, the alkali metal ions form aqua ions of the formula [M(H2O)n]+, where n is the solvation number. Their coordination numbers and shapes agree well with those expected from their ionic radii. In aqueous solution the water molecules directly attached to the metal ion are said to belong to the first coordination sphere, also known as the first, or primary, solvation shell. The bond between a water molecule and the metal ion is a dative covalent bond, with the oxygen atom donating both electrons to the bond. Each coordinated water molecule may be attached by hydrogen bonds to other water molecules. The latter are said to reside in the second coordination sphere. However, for the alkali metal cations, the second coordination sphere is not well-defined as the +1 charge on the cation is not high enough to polarise the water molecules in the primary solvation shell enough for them to form strong hydrogen bonds with those in the second coordination sphere, producing a more stable entity. The solvation number for Li+ has been experimentally determined to be 4, forming the tetrahedral [Li(H2O)4]+: while solvation numbers of 3 to 6 have been found for lithium aqua ions, solvation numbers less than 4 may be the result of the formation of contact ion pairs, and the higher solvation numbers may be interpreted in terms of water molecules that approach [Li(H2O)4]+ through a face of the tetrahedron, though molecular dynamic simulations may indicate the existence of an octahedral hexaaqua ion. There are also probably six water molecules in the primary solvation sphere of the sodium ion, forming the octahedral [Na(H2O)6]+ ion. While it was previously thought that the heavier alkali metals also formed octahedral hexaaqua ions, it has since been found that potassium and rubidium probably form the [K(H2O)8]+ and [Rb(H2O)8]+ ions, which have the square antiprismatic structure, and that caesium forms the 12-coordinate [Cs(H2O)12]+ ion. Lithium The chemistry of lithium shows several differences from that of the rest of the group as the small Li+ cation polarises anions and gives its compounds a more covalent character. Lithium and magnesium have a diagonal relationship due to their similar atomic radii, so that they show some similarities. For example, lithium forms a stable nitride, a property common among all the alkaline earth metals (magnesium's group) but unique among the alkali metals. In addition, among their respective groups, only lithium and magnesium form organometallic compounds with significant covalent character (e.g. LiMe and MgMe2). Lithium fluoride is the only alkali metal halide that is poorly soluble in water, and lithium hydroxide is the only alkali metal hydroxide that is not deliquescent. 
Conversely, lithium perchlorate and other lithium salts with large anions that cannot be polarised are much more stable than the analogous compounds of the other alkali metals, probably because Li+ has a high solvation energy. This effect also means that most simple lithium salts are commonly encountered in hydrated form, because the anhydrous forms are extremely hygroscopic: this allows salts like lithium chloride and lithium bromide to be used in dehumidifiers and air-conditioners. Francium Francium is also predicted to show some differences due to its high atomic weight, causing its electrons to travel at considerable fractions of the speed of light and thus making relativistic effects more prominent. In contrast to the trend of decreasing electronegativities and ionisation energies of the alkali metals, francium's electronegativity and ionisation energy are predicted to be higher than caesium's due to the relativistic stabilisation of the 7s electrons; also, its atomic radius is expected to be abnormally low. Thus, contrary to expectation, caesium is the most reactive of the alkali metals, not francium. All known physical properties of francium also deviate from the clear trends going from lithium to caesium, such as the first ionisation energy, electron affinity, and anion polarisability, though due to the paucity of known data about francium many sources give extrapolated values, ignoring that relativistic effects make the trend from lithium to caesium become inapplicable at francium. Some of the few properties of francium that have been predicted taking relativity into account are the electron affinity (47.2 kJ/mol) and the enthalpy of dissociation of the Fr2 molecule (42.1 kJ/mol). The CsFr molecule is polarised as Cs+Fr−, showing that the 7s subshell of francium is much more strongly affected by relativistic effects than the 6s subshell of caesium. Additionally, francium superoxide (FrO2) is expected to have significant covalent character, unlike the other alkali metal superoxides, because of bonding contributions from the 6p electrons of francium. Nuclear All the alkali metals have odd atomic numbers; hence, their isotopes must be either odd–odd (both proton and neutron number are odd) or odd–even (proton number is odd, but neutron number is even). Odd–odd nuclei have even mass numbers, whereas odd–even nuclei have odd mass numbers. Odd–odd primordial nuclides are rare because most odd–odd nuclei are highly unstable with respect to beta decay, because the decay products are even–even, and are therefore more strongly bound, due to nuclear pairing effects. Due to the great rarity of odd–odd nuclei, almost all the primordial isotopes of the alkali metals are odd–even (the exceptions being the light stable isotope lithium-6 and the long-lived radioisotope potassium-40). For a given odd mass number, there can be only a single beta-stable nuclide, since there is not a difference in binding energy between even–odd and odd–even comparable to that between even–even and odd–odd, leaving other nuclides of the same mass number (isobars) free to beta decay toward the lowest-mass nuclide. An effect of the instability of an odd number of either type of nucleons is that odd-numbered elements, such as the alkali metals, tend to have fewer stable isotopes than even-numbered elements. Of the 26 monoisotopic elements that have only a single stable isotope, all but one have an odd atomic number and all but one also have an even number of neutrons. 
Beryllium is the single exception to both rules, due to its low atomic number. All of the alkali metals except lithium and caesium have at least one naturally occurring radioisotope: sodium-22 and sodium-24 are trace radioisotopes produced cosmogenically, potassium-40 and rubidium-87 have very long half-lives and thus occur naturally, and all isotopes of francium are radioactive. Caesium was also thought to be radioactive in the early 20th century, although it has no naturally occurring radioisotopes. (Francium had not been discovered yet at that time.) The natural long-lived radioisotope of potassium, potassium-40, makes up about 0.012% of natural potassium, and thus natural potassium is weakly radioactive. This natural radioactivity became a basis for a mistaken claim of the discovery for element 87 (the next alkali metal after caesium) in 1925. Natural rubidium is similarly slightly radioactive, with 27.83% being the long-lived radioisotope rubidium-87. Caesium-137, with a half-life of 30.17 years, is one of the two principal medium-lived fission products, along with strontium-90, which are responsible for most of the radioactivity of spent nuclear fuel after several years of cooling, up to several hundred years after use. It constitutes most of the radioactivity still left from the Chernobyl accident. Caesium-137 undergoes high-energy beta decay and eventually becomes stable barium-137. It is a strong emitter of gamma radiation. Caesium-137 has a very low rate of neutron capture and cannot be feasibly disposed of in this way, but must be allowed to decay. Caesium-137 has been used as a tracer in hydrologic studies, analogous to the use of tritium. Small amounts of caesium-134 and caesium-137 were released into the environment during nearly all nuclear weapon tests and some nuclear accidents, most notably the Goiânia accident and the Chernobyl disaster. As of 2005, caesium-137 is the principal source of radiation in the zone of alienation around the Chernobyl nuclear power plant. Its chemical properties as one of the alkali metals make it one of the most problematic of the short-to-medium-lifetime fission products because it easily moves and spreads in nature due to the high water solubility of its salts, and is taken up by the body, which mistakes it for its essential congeners sodium and potassium. Periodic trends The alkali metals are more similar to each other than the elements in any other group are to each other. For instance, when moving down the table, all known alkali metals show increasing atomic radius, decreasing electronegativity, increasing reactivity, and decreasing melting and boiling points as well as heats of fusion and vaporisation. In general, their densities increase when moving down the table, with the exception that potassium is less dense than sodium. Atomic and ionic radii The atomic radii of the alkali metals increase going down the group. Because of the shielding effect, when an atom has more than one electron shell, each electron feels electric repulsion from the other electrons as well as electric attraction from the nucleus. In the alkali metals, the outermost electron only feels a net charge of +1, as some of the nuclear charge (which is equal to the atomic number) is cancelled by the inner electrons; the number of inner electrons of an alkali metal is always one less than the nuclear charge. Therefore, the only factor which affects the atomic radius of the alkali metals is the number of electron shells. 
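The shielding argument can be written compactly with the usual effective-nuclear-charge bookkeeping, in which each core electron is counted as screening one unit of nuclear charge (a simplified estimate, not a quoted figure):

$$
Z_{\mathrm{eff}} \approx Z - S, \qquad \text{e.g. for sodium: } Z = 11,\; S = 10 \text{ core electrons} \;\Rightarrow\; Z_{\mathrm{eff}} \approx +1
$$

The same estimate gives roughly +1 for the valence electron of every alkali metal, so the size of the atom is set almost entirely by how many filled shells lie beneath that electron.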
Since this number increases down the group, the atomic radius must also increase down the group. The ionic radii of the alkali metals are much smaller than their atomic radii. This is because the outermost electron of the alkali metals is in a different electron shell than the inner electrons, and thus when it is removed the resulting atom has one fewer electron shell and is smaller. Additionally, the effective nuclear charge has increased, and thus the electrons are attracted more strongly towards the nucleus and the ionic radius decreases. First ionisation energy The first ionisation energy of an element or molecule is the energy required to move the most loosely held electron from one mole of gaseous atoms of the element or molecules to form one mole of gaseous ions with electric charge +1. The factors affecting the first ionisation energy are the nuclear charge, the amount of shielding by the inner electrons and the distance from the most loosely held electron from the nucleus, which is always an outer electron in main group elements. The first two factors change the effective nuclear charge the most loosely held electron feels. Since the outermost electron of alkali metals always feels the same effective nuclear charge (+1), the only factor which affects the first ionisation energy is the distance from the outermost electron to the nucleus. Since this distance increases down the group, the outermost electron feels less attraction from the nucleus and thus the first ionisation energy decreases. This trend is broken in francium due to the relativistic stabilisation and contraction of the 7s orbital, bringing francium's valence electron closer to the nucleus than would be expected from non-relativistic calculations. This makes francium's outermost electron feel more attraction from the nucleus, increasing its first ionisation energy slightly beyond that of caesium. The second ionisation energy of the alkali metals is much higher than the first as the second-most loosely held electron is part of a fully filled electron shell and is thus difficult to remove. Reactivity The reactivities of the alkali metals increase going down the group. This is the result of a combination of two factors: the first ionisation energies and atomisation energies of the alkali metals. Because the first ionisation energy of the alkali metals decreases down the group, it is easier for the outermost electron to be removed from the atom and participate in chemical reactions, thus increasing reactivity down the group. The atomisation energy measures the strength of the metallic bond of an element, which falls down the group as the atoms increase in radius and thus the metallic bond must increase in length, making the delocalised electrons further away from the attraction of the nuclei of the heavier alkali metals. Adding the atomisation and first ionisation energies gives a quantity closely related to (but not equal to) the activation energy of the reaction of an alkali metal with another substance. This quantity decreases going down the group, and so does the activation energy; thus, chemical reactions can occur faster and the reactivity increases down the group. Electronegativity Electronegativity is a chemical property that describes the tendency of an atom or a functional group to attract electrons (or electron density) towards itself. 
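The claim earlier in this section that reactivity tracks the sum of the atomisation and first ionisation energies can be illustrated numerically. The sketch below uses approximate textbook values, which are assumptions for illustration rather than figures from this article, and the sum is only a proxy for, not equal to, the activation energy.

```python
# Illustrative sketch: approximate literature values in kJ/mol (assumed, not from the article).
atomisation = {"Li": 159, "Na": 107, "K": 89, "Rb": 81, "Cs": 76}
first_ionisation = {"Li": 520, "Na": 496, "K": 419, "Rb": 403, "Cs": 376}

# Quantity loosely related to the activation energy for reaction with another substance:
for metal in ("Li", "Na", "K", "Rb", "Cs"):
    proxy = atomisation[metal] + first_ionisation[metal]
    print(f"{metal}: atomisation + first ionisation ~ {proxy} kJ/mol")

# The printed values fall monotonically down the group (about 679, 603, 508, 484, 452 kJ/mol),
# mirroring the statement that the heavier alkali metals are the more reactive ones.
```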
If the bond between sodium and chlorine in sodium chloride were covalent, the pair of shared electrons would be attracted to the chlorine because the effective nuclear charge on the outer electrons is +7 in chlorine but is only +1 in sodium. The electron pair is attracted so close to the chlorine atom that they are practically transferred to the chlorine atom (an ionic bond). However, if the sodium atom was replaced by a lithium atom, the electrons will not be attracted as close to the chlorine atom as before because the lithium atom is smaller, making the electron pair more strongly attracted to the closer effective nuclear charge from lithium. Hence, the larger alkali metal atoms (further down the group) will be less electronegative as the bonding pair is less strongly attracted towards them. As mentioned previously, francium is expected to be an exception. Because of the higher electronegativity of lithium, some of its compounds have a more covalent character. For example, lithium iodide (LiI) will dissolve in organic solvents, a property of most covalent compounds. Lithium fluoride (LiF) is the only alkali halide that is not soluble in water, and lithium hydroxide (LiOH) is the only alkali metal hydroxide that is not deliquescent. Melting and boiling points The melting point of a substance is the point where it changes state from solid to liquid while the boiling point of a substance (in liquid state) is the point where the vapour pressure of the liquid equals the environmental pressure surrounding the liquid and all the liquid changes state to gas. As a metal is heated to its melting point, the metallic bonds keeping the atoms in place weaken so that the atoms can move around, and the metallic bonds eventually break completely at the metal's boiling point. Therefore, the falling melting and boiling points of the alkali metals indicate that the strength of the metallic bonds of the alkali metals decreases down the group. This is because metal atoms are held together by the electromagnetic attraction from the positive ions to the delocalised electrons. As the atoms increase in size going down the group (because their atomic radius increases), the nuclei of the ions move further away from the delocalised electrons and hence the metallic bond becomes weaker so that the metal can more easily melt and boil, thus lowering the melting and boiling points. The increased nuclear charge is not a relevant factor due to the shielding effect. Density The alkali metals all have the same crystal structure (body-centred cubic) and thus the only relevant factors are the number of atoms that can fit into a certain volume and the mass of one of the atoms, since density is defined as mass per unit volume. The first factor depends on the volume of the atom and thus the atomic radius, which increases going down the group; thus, the volume of an alkali metal atom increases going down the group. The mass of an alkali metal atom also increases going down the group. Thus, the trend for the densities of the alkali metals depends on their atomic weights and atomic radii; if figures for these two factors are known, the ratios between the densities of the alkali metals can then be calculated. The resultant trend is that the densities of the alkali metals increase down the table, with an exception at potassium. Due to having the lowest atomic weight and the largest atomic radius of all the elements in their periods, the alkali metals are the least dense metals in the periodic table. 
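As the density paragraph above notes, the trend can be reproduced from just the atomic mass and the atomic radius, because every alkali metal adopts the body-centred cubic structure with two atoms per unit cell. The sketch below does that calculation using approximate metallic radii and molar masses from standard tables; those numbers are illustrative assumptions rather than data from this article.

```python
import math

AVOGADRO = 6.022e23  # atoms per mole

# Approximate molar masses (g/mol) and metallic radii (pm); assumed textbook values.
metals = {
    "Li": (6.94, 152),
    "Na": (22.99, 186),
    "K": (39.10, 227),
    "Rb": (85.47, 248),
    "Cs": (132.91, 265),
}

for symbol, (molar_mass, radius_pm) in metals.items():
    # In a body-centred cubic cell the atoms touch along the body diagonal, so 4r = a * sqrt(3).
    a_cm = (4 * radius_pm / math.sqrt(3)) * 1e-10        # lattice parameter, converted from pm to cm
    cell_volume = a_cm ** 3                               # volume of one unit cell in cm^3
    density = 2 * molar_mass / (AVOGADRO * cell_volume)   # two atoms per BCC cell
    print(f"{symbol}: about {density:.2f} g/cm^3")
```

The estimates come out close to the measured densities and reproduce the anomaly described above: potassium's jump in radius more than offsets its extra mass, so it comes out less dense than sodium, while each later metal is denser than the one before it.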
Lithium, sodium, and potassium are the only three metals in the periodic table that are less dense than water: in fact, lithium is the least dense known solid at room temperature. Compounds The alkali metals form complete series of compounds with all usually encountered anions, which well illustrate group trends. These compounds can be described as involving the alkali metals losing electrons to acceptor species and forming monopositive ions. This description is most accurate for alkali halides and becomes less and less accurate as cationic and anionic charge increase, and as the anion becomes larger and more polarisable. For instance, ionic bonding gives way to metallic bonding along the series NaCl, Na2O, Na2S, Na3P, Na3As, Na3Sb, Na3Bi, Na. Hydroxides All the alkali metals react vigorously or explosively with cold water, producing an aqueous solution of a strongly basic alkali metal hydroxide and releasing hydrogen gas. This reaction becomes more vigorous going down the group: lithium reacts steadily with effervescence, but sodium and potassium can ignite, and rubidium and caesium sink in water and generate hydrogen gas so rapidly that shock waves form in the water that may shatter glass containers. When an alkali metal is dropped into water, it produces an explosion, of which there are two separate stages. The metal reacts with the water first, breaking the hydrogen bonds in the water and producing hydrogen gas; this takes place faster for the more reactive heavier alkali metals. Second, the heat generated by the first part of the reaction often ignites the hydrogen gas, causing it to burn explosively into the surrounding air. This secondary hydrogen gas explosion produces the visible flame above the bowl of water, lake or other body of water, not the initial reaction of the metal with water (which tends to happen mostly under water). The alkali metal hydroxides are the most basic known hydroxides. Recent research has suggested that the explosive behavior of alkali metals in water is driven by a Coulomb explosion rather than solely by rapid generation of hydrogen itself. All alkali metals melt as a part of the reaction with water. Water molecules ionise the bare metallic surface of the liquid metal, leaving a positively charged metal surface and negatively charged water ions. The attraction between the charged metal and water ions will rapidly increase the surface area, causing an exponential increase of ionisation. When the repulsive forces within the liquid metal surface exceeds the forces of the surface tension, it vigorously explodes. The hydroxides themselves are the most basic hydroxides known, reacting with acids to give salts and with alcohols to give oligomeric alkoxides. They easily react with carbon dioxide to form carbonates or bicarbonates, or with hydrogen sulfide to form sulfides or bisulfides, and may be used to separate thiols from petroleum. They react with amphoteric oxides: for example, the oxides of aluminium, zinc, tin, and lead react with the alkali metal hydroxides to give aluminates, zincates, stannates, and plumbates. Silicon dioxide is acidic, and thus the alkali metal hydroxides can also attack silicate glass. Intermetallic compounds The alkali metals form many intermetallic compounds with each other and the elements from groups 2 to 13 in the periodic table of varying stoichiometries, such as the sodium amalgams with mercury, including Na5Hg8 and Na3Hg. 
Some of these have ionic characteristics: taking the alloys with gold, the most electronegative of metals, as an example, NaAu and KAu are metallic, but RbAu and CsAu are semiconductors. NaK is an alloy of sodium and potassium that is very useful because it is liquid at room temperature, although precautions must be taken due to its extreme reactivity towards water and air; the eutectic mixture melts at −12.6 °C. An alloy of 41% caesium, 47% sodium, and 12% potassium has the lowest known melting point of any metal or alloy, −78 °C. Compounds with the group 13 elements The intermetallic compounds of the alkali metals with the heavier group 13 elements (aluminium, gallium, indium, and thallium), such as NaTl, are poor conductors or semiconductors, unlike the normal alloys with the preceding elements, implying that the alkali metal involved has lost an electron to the Zintl anions. Nevertheless, while the elements in group 14 and beyond tend to form discrete anionic clusters, group 13 elements tend to form polymeric ions with the alkali metal cations located within the giant ionic lattice. For example, NaTl consists of a polymeric anion (—Tl−—)n with a covalent diamond cubic structure, with Na+ ions located within the anionic lattice. The larger alkali metals cannot fit similarly into an anionic lattice and tend to force the heavier group 13 elements to form anionic clusters. Boron is a special case, being the only nonmetal in group 13. The alkali metal borides tend to be boron-rich, involving appreciable boron–boron bonding in deltahedral structures, and are thermally unstable because the alkali metals have a very high vapour pressure at elevated temperatures. This makes direct synthesis problematic, because the alkali metals do not react with boron below 700 °C, and thus the synthesis must be accomplished in sealed containers with the alkali metal in excess. Furthermore, exceptionally in this group, reactivity with boron decreases down the group: lithium reacts completely at 700 °C, but sodium only at 900 °C and potassium not until 1200 °C, and the reaction is instantaneous for lithium but takes hours for potassium. Rubidium and caesium borides have not even been characterised. Various phases are known, such as LiB10, NaB6, NaB15, and KB6. Under high pressure the boron–boron bonding in the lithium borides changes from following Wade's rules to forming Zintl anions like the rest of group 13. Compounds with the group 14 elements Lithium and sodium react with carbon to form acetylides, Li2C2 and Na2C2, which can also be obtained by reaction of the metal with acetylene. Potassium, rubidium, and caesium react with graphite; their atoms are intercalated between the hexagonal graphite layers, forming graphite intercalation compounds of formulae MC60 (dark grey, almost black), MC48 (dark grey, almost black), MC36 (blue), MC24 (steel blue), and MC8 (bronze) (M = K, Rb, or Cs). These compounds are over 200 times more electrically conductive than pure graphite, suggesting that the valence electron of the alkali metal is transferred to the graphite layers. Upon heating of KC8, the elimination of potassium atoms results in the conversion in sequence to KC24, KC36, KC48 and finally KC60. KC8 is a very strong reducing agent, is pyrophoric, and explodes on contact with water. While the larger alkali metals (K, Rb, and Cs) initially form MC8, the smaller ones initially form MC6, and indeed they require reaction of the metals with graphite at high temperatures around 500 °C to form.
Apart from this, the alkali metals are such strong reducing agents that they can even reduce buckminsterfullerene to produce solid fullerides MnC60; sodium, potassium, rubidium, and caesium can form fullerides where n = 2, 3, 4, or 6, and rubidium and caesium additionally can achieve n = 1. When the alkali metals react with the heavier elements in the carbon group (silicon, germanium, tin, and lead), ionic substances with cage-like structures are formed, such as the silicides M4Si4 (M = K, Rb, or Cs), which contain M+ cations and tetrahedral [Si4]4− ions. The chemistry of alkali metal germanides, involving the germanide ion Ge4− and other cluster (Zintl) ions such as [Ge9]4− and [(Ge9)2]6−, is largely analogous to that of the corresponding silicides. Alkali metal stannides are mostly ionic, sometimes with the stannide ion (Sn4−), and sometimes with more complex Zintl ions such as [Sn9]4−, which appears in tetrapotassium nonastannide (K4Sn9). The monatomic plumbide ion (Pb4−) is unknown, and indeed its formation is predicted to be energetically unfavourable; alkali metal plumbides instead contain complex Zintl ions such as [Pb9]4−. These alkali metal germanides, stannides, and plumbides may be produced by reducing germanium, tin, and lead with sodium metal in liquid ammonia. Nitrides and pnictides Lithium, the lightest of the alkali metals, is the only alkali metal which reacts with nitrogen at standard conditions, and its nitride is the only stable alkali metal nitride. Nitrogen is an unreactive gas because breaking the strong triple bond in the dinitrogen molecule (N2) requires a lot of energy. The formation of an alkali metal nitride would consume the ionisation energy of the alkali metal (forming M+ ions) as well as the energy required to break the triple bond in N2 and to form the N3− ions, and the only energy released in return is the lattice energy of the alkali metal nitride. The lattice energy is maximised with small, highly charged ions; the alkali metals do not form highly charged ions, only forming ions with a charge of +1, so only lithium, the smallest alkali metal, can release enough lattice energy to make the reaction with nitrogen exothermic, forming lithium nitride. The reactions of the other alkali metals with nitrogen would not release enough lattice energy and would thus be endothermic, so they do not form nitrides at standard conditions. Sodium nitride (Na3N) and potassium nitride (K3N), while existing, are extremely unstable, being prone to decomposing back into their constituent elements, and cannot be produced by reacting the elements with each other at standard conditions. Steric hindrance forbids the existence of rubidium or caesium nitride. However, sodium and potassium form colourless azide salts involving the linear azide anion, N3−; due to the large size of the alkali metal cations, these are thermally stable enough to be able to melt before decomposing. All the alkali metals react readily with phosphorus and arsenic to form phosphides and arsenides with the formula M3Pn (where M represents an alkali metal and Pn represents a pnictogen – phosphorus, arsenic, antimony, or bismuth). This is due to the greater size of the P3− and As3− ions, so that less lattice energy needs to be released for the salts to form. These are not the only phosphides and arsenides of the alkali metals: for example, potassium has nine different known phosphides, with formulae K3P, K4P3, K5P4, KP, K4P6, K3P7, K3P11, KP10.3, and KP15.
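The lattice-energy argument given above for the nitrides can be made semi-quantitative with the Kapustinskii approximation, U ≈ 1.2025×10^5 · ν · |z+z−| / (r+ + r−) · (1 − 34.5/(r+ + r−)) kJ/mol with radii in picometres. The sketch below compares hypothetical M3N lattices for lithium and sodium; the ionic radii are assumed illustrative values, and the result is an order-of-magnitude comparison only, not a full Born–Haber cycle.

def kapustinskii(nu, z_cation, z_anion, r_cation_pm, r_anion_pm):
    """Rough lattice energy (kJ/mol) from the Kapustinskii equation."""
    r_sum = r_cation_pm + r_anion_pm
    return 1.2025e5 * nu * abs(z_cation * z_anion) / r_sum * (1 - 34.5 / r_sum)

R_NITRIDE = 146  # assumed ionic radius of N3- in pm
for metal, r_cation in (("Li", 76), ("Na", 102)):
    u = kapustinskii(nu=4, z_cation=+1, z_anion=-3,
                     r_cation_pm=r_cation, r_anion_pm=R_NITRIDE)
    print(f"{metal}3N: lattice energy ~ {u:.0f} kJ/mol")

On these assumptions the smaller Li+ ion yields a lattice energy several hundred kJ/mol larger than that of a hypothetical Na3N lattice; this extra lattice energy, even after lithium's somewhat higher ionisation and atomisation energies are paid for, is what tips the overall balance in lithium's favour.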
While most metals form arsenides, only the alkali and alkaline earth metals form mostly ionic arsenides. The structure of Na3As is complex, with unusually short Na–Na distances of 328–330 pm which are shorter than in sodium metal, and this indicates that even with these electropositive metals the bonding cannot be straightforwardly ionic. Other alkali metal arsenides not conforming to the formula M3As are known, such as LiAs, which has a metallic lustre and electrical conductivity indicating the presence of some metallic bonding. The antimonides are unstable and reactive, as the Sb3− ion is a strong reducing agent; their reaction with acids forms the toxic and unstable gas stibine (SbH3). Indeed, they have some metallic properties, and the alkali metal antimonides of stoichiometry MSb involve antimony atoms bonded in a spiral Zintl structure. Bismuthides are not even wholly ionic; they are intermetallic compounds containing partially metallic and partially ionic bonds. Oxides and chalcogenides All the alkali metals react vigorously with oxygen at standard conditions. They form various types of oxides, such as simple oxides (containing the O2− ion), peroxides (containing the [O2]2− ion, where there is a single bond between the two oxygen atoms), superoxides (containing the [O2]− ion), and many others. Lithium burns in air to form lithium oxide, but sodium reacts with oxygen to form a mixture of sodium oxide and sodium peroxide. Potassium forms a mixture of potassium peroxide and potassium superoxide, while rubidium and caesium form the superoxide exclusively. Their reactivity increases going down the group: while lithium, sodium and potassium merely burn in air, rubidium and caesium are pyrophoric (spontaneously catch fire in air). The smaller alkali metals tend to polarise the larger anions (the peroxide and superoxide) due to their small size. This attracts the electrons in the more complex anions towards one of the constituent oxygen atoms, forming an oxide ion and an oxygen atom. This causes lithium to form the oxide exclusively on reaction with oxygen at room temperature. This effect becomes drastically weaker for the larger sodium and potassium, allowing them to form the less stable peroxides. Rubidium and caesium, at the bottom of the group, are so large that even the least stable superoxides can form. Because the superoxide releases the most energy when formed, the superoxide is preferentially formed for the larger alkali metals, where the more complex anions are not polarised. The oxides and peroxides for these alkali metals do exist, but do not form upon direct reaction of the metal with oxygen at standard conditions. In addition, the small size of the Li+ and O2− ions contributes to their forming a stable ionic lattice structure. Under controlled conditions, however, all the alkali metals, with the exception of francium, are known to form their oxides, peroxides, and superoxides. The alkali metal peroxides and superoxides are powerful oxidising agents. Sodium peroxide and potassium superoxide react with carbon dioxide to form the alkali metal carbonate and oxygen gas, which allows them to be used in submarine air purifiers; the presence of water vapour, naturally present in breath, makes the removal of carbon dioxide by potassium superoxide even more efficient. All the stable alkali metals except lithium can form red ozonides (MO3) through low-temperature reaction of the powdered anhydrous hydroxide with ozone; the ozonides may then be extracted using liquid ammonia.
They slowly decompose at standard conditions to the superoxides and oxygen, and hydrolyse immediately to the hydroxides when in contact with water. Potassium, rubidium, and caesium also form sesquioxides M2O3, which may be better considered as peroxide disuperoxides, containing one peroxide and two superoxide anions per M4O6 formula unit. Rubidium and caesium can form a great variety of suboxides with the metals in formal oxidation states below +1. Rubidium can form Rb6O and Rb9O2 (copper-coloured) upon oxidation in air, while caesium forms an immense variety of oxides, such as the ozonide CsO3 and several brightly coloured suboxides, such as Cs7O (bronze), Cs4O (red-violet), Cs11O3 (violet), Cs3O (dark green), CsO, Cs3O2, as well as Cs7O2. The last of these may be heated under vacuum to generate Cs2O. The alkali metals can also react analogously with the heavier chalcogens (sulfur, selenium, tellurium, and polonium), and all the alkali metal chalcogenides are known (with the exception of francium's). Reaction with an excess of the chalcogen can similarly result in polychalcogenides, with chalcogenide ions containing chains of the chalcogen atoms in question. For example, sodium can react with sulfur to form the sulfide (Na2S) and various polysulfides with the formula Na2Sx (x from 2 to 6), containing the [Sx]2− ions. Due to the basicity of the Se2− and Te2− ions, the alkali metal selenides and tellurides are alkaline in solution; when the metals are reacted directly with selenium and tellurium, alkali metal polyselenides and polytellurides, which contain the [Sex]2− and [Tex]2− ions, are formed along with the selenides and tellurides. They may be obtained directly from the elements in liquid ammonia or when air is not present, and are colourless, water-soluble compounds that air oxidises quickly back to selenium or tellurium. The alkali metal polonides are all ionic compounds containing the Po2− ion; they are very chemically stable and can be produced by direct reaction of the elements at around 300–400 °C. Halides, hydrides, and pseudohalides The alkali metals are among the most electropositive elements on the periodic table and thus tend to bond ionically to the most electronegative elements on the periodic table, the halogens (fluorine, chlorine, bromine, iodine, and astatine), forming salts known as the alkali metal halides. The reaction is very vigorous and can sometimes result in explosions. All twenty stable alkali metal halides are known; the unstable ones are not known, with the exception of sodium astatide, because of the great instability and rarity of astatine and francium. The most well-known of the twenty is certainly sodium chloride, otherwise known as common salt. All of the stable alkali metal halides have the formula MX, where M is an alkali metal and X is a halogen. They are all white ionic crystalline solids that have high melting points. All the alkali metal halides are soluble in water except for lithium fluoride (LiF), which is insoluble in water due to its very high lattice enthalpy. The high lattice enthalpy of lithium fluoride is due to the small sizes of the Li+ and F− ions, causing the electrostatic interactions between them to be strong: a similar effect occurs for magnesium fluoride, consistent with the diagonal relationship between lithium and magnesium. The alkali metals also react similarly with hydrogen to form ionic alkali metal hydrides, where the hydride anion acts as a pseudohalide: these are often used as reducing agents, producing hydrides, complex metal hydrides, or hydrogen gas. Other pseudohalides are also known, notably the cyanides.
These are isostructural to the respective halides except for lithium cyanide, indicating that the cyanide ions may rotate freely. Ternary alkali metal halide oxides, such as Na3ClO, K3BrO (yellow), Na4Br2O, Na4I2O, and K4Br2O, are also known. The polyhalides are rather unstable, although those of rubidium and caesium are greatly stabilised by the feeble polarising power of these extremely large cations. Coordination complexes Alkali metal cations do not usually form coordination complexes with simple Lewis bases due to their low charge of just +1 and their relatively large size; thus the Li+ ion forms most complexes and the heavier alkali metal ions form less and less (though exceptions occur for weak complexes). Lithium in particular has a very rich coordination chemistry in which it exhibits coordination numbers from 1 to 12, although octahedral hexacoordination is its preferred mode. In aqueous solution, the alkali metal ions exist as octahedral hexahydrate complexes [M(H2O)6]+, with the exception of the lithium ion, which due to its small size forms tetrahedral tetrahydrate complexes [Li(H2O)4]+; the alkali metals form these complexes because their ions are attracted by electrostatic forces of attraction to the polar water molecules. Because of this, anhydrous salts containing alkali metal cations are often used as desiccants. Alkali metals also readily form complexes with crown ethers (e.g. 12-crown-4 for Li+, 15-crown-5 for Na+, 18-crown-6 for K+, and 21-crown-7 for Rb+) and cryptands due to electrostatic attraction. Ammonia solutions The alkali metals dissolve slowly in liquid ammonia, forming ammoniacal solutions of solvated metal cation M+ and solvated electron e−, which react to form hydrogen gas and the alkali metal amide (MNH2, where M represents an alkali metal): this was first noted by Humphry Davy in 1809 and rediscovered by W. Weyl in 1864. The process may be speeded up by a catalyst. Similar solutions are formed by the heavy divalent alkaline earth metals calcium, strontium, barium, as well as the divalent lanthanides, europium and ytterbium. The amide salt is quite insoluble and readily precipitates out of solution, leaving intensely coloured ammonia solutions of the alkali metals. In 1907, Charles A. Kraus identified the colour as being due to the presence of solvated electrons, which contribute to the high electrical conductivity of these solutions. At low concentrations (below 3 M), the solution is dark blue and has ten times the conductivity of aqueous sodium chloride; at higher concentrations (above 3 M), the solution is copper-coloured and has approximately the conductivity of liquid metals like mercury. In addition to the alkali metal amide salt and solvated electrons, such ammonia solutions also contain the alkali metal cation (M+), the neutral alkali metal atom (M), diatomic alkali metal molecules (M2) and alkali metal anions (M−). These are unstable and eventually become the more thermodynamically stable alkali metal amide and hydrogen gas. Solvated electrons are powerful reducing agents and are often used in chemical synthesis. Organometallic Organolithium Being the smallest alkali metal, lithium forms the widest variety of and most stable organometallic compounds, which are bonded covalently. Organolithium compounds are electrically non-conducting volatile solids or liquids that melt at low temperatures, and tend to form oligomers with the structure (RLi)x where R is the organic group. 
As the electropositive nature of lithium puts most of the charge density of the bond on the carbon atom, effectively creating a carbanion, organolithium compounds are extremely powerful bases and nucleophiles. For use as bases, butyllithiums are often used and are commercially available. An example of an organolithium compound is methyllithium ((CH3Li)x), which exists in tetrameric (x = 4, tetrahedral) and hexameric (x = 6, octahedral) forms. Organolithium compounds, especially n-butyllithium, are useful reagents in organic synthesis, as might be expected given lithium's diagonal relationship with magnesium, which plays an important role in the Grignard reaction. For example, alkyllithiums and aryllithiums may be used to synthesise aldehydes and ketones by reaction with metal carbonyls. The reaction with nickel tetracarbonyl, for example, proceeds through an unstable acyl nickel carbonyl complex which then undergoes electrophilic substitution to give the desired aldehyde (using H+ as the electrophile) or ketone (using an alkyl halide) product.
LiR + Ni(CO)4 → Li+[RCONi(CO)3]−
Li+[RCONi(CO)3]− + H+ → Li+ + RCHO + [(solvent)Ni(CO)3] (in solvent)
Li+[RCONi(CO)3]− + R′Br → Li+ + RR′CO + [(solvent)Ni(CO)3] (in solvent)
Alkyllithiums and aryllithiums may also react with N,N-disubstituted amides to give aldehydes and ketones, and symmetrical ketones by reacting with carbon monoxide. They thermally decompose to eliminate a β-hydrogen, producing alkenes and lithium hydride: another route is the reaction of ethers with alkyl- and aryllithiums that act as strong bases. In non-polar solvents, aryllithiums react as the carbanions they effectively are, turning carbon dioxide to aromatic carboxylic acids (ArCO2H) and aryl ketones to tertiary carbinols (Ar'2C(Ar)OH). Finally, they may be used to synthesise other organometallic compounds through metal-halogen exchange. Heavier alkali metals Unlike the organolithium compounds, the organometallic compounds of the heavier alkali metals are predominantly ionic. The application of organosodium compounds in chemistry is limited in part due to competition from organolithium compounds, which are commercially available and exhibit more convenient reactivity. The principal organosodium compound of commercial importance is sodium cyclopentadienide. Sodium tetraphenylborate can also be classified as an organosodium compound since in the solid state sodium is bound to the aryl groups. Organometallic compounds of the higher alkali metals are even more reactive than organosodium compounds and of limited utility. A notable reagent is Schlosser's base, a mixture of n-butyllithium and potassium tert-butoxide. This reagent reacts with propene to form the compound allylpotassium (KCH2CHCH2). cis-2-Butene and trans-2-butene equilibrate when in contact with alkali metals. Whereas isomerisation is fast with lithium and sodium, it is slow with the heavier alkali metals. The heavier alkali metals also favour the sterically congested conformation. Several crystal structures of organopotassium compounds have been reported, establishing that they, like the sodium compounds, are polymeric. Organosodium, organopotassium, organorubidium and organocaesium compounds are all mostly ionic and are insoluble (or nearly so) in nonpolar solvents. Alkyl and aryl derivatives of sodium and potassium tend to react with air. They cause the cleavage of ethers, generating alkoxides.
Unlike alkyllithium compounds, alkylsodiums and alkylpotassiums cannot be made by reacting the metals with alkyl halides because Wurtz coupling occurs:
RM + R'X → R–R' + MX
As such, they have to be made by reacting alkylmercury compounds with sodium or potassium metal in inert hydrocarbon solvents. While methylsodium forms tetramers like methyllithium, methylpotassium is more ionic and has the nickel arsenide structure with discrete methyl anions and potassium cations. The alkali metals and their hydrides react with acidic hydrocarbons, for example cyclopentadienes and terminal alkynes, to give salts. Liquid ammonia, ether, or hydrocarbon solvents are used, the most common of which is tetrahydrofuran. The most important of these compounds is sodium cyclopentadienide, NaC5H5, an important precursor to many transition metal cyclopentadienyl derivatives. Similarly, the alkali metals react with cyclooctatetraene in tetrahydrofuran to give alkali metal cyclooctatetraenides; for example, dipotassium cyclooctatetraenide (K2C8H8) is an important precursor to many metal cyclooctatetraenyl derivatives, such as uranocene. The large and very weakly polarising alkali metal cations can stabilise large, aromatic, polarisable radical anions, such as the dark-green sodium naphthalenide, Na+[C10H8•]−, a strong reducing agent. Representative reactions of alkali metals Reaction with oxygen Upon reacting with oxygen, alkali metals form oxides, peroxides, superoxides and suboxides. However, the first three are more common. The main combustion products, with the minor product given in brackets, are:
Li: Li2O (Li2O2)
Na: Na2O2 (Na2O)
K: KO2 (K2O2)
Rb: RbO2
Cs: CsO2
The alkali metal peroxides are ionic compounds that are unstable in water. The peroxide anion is weakly bound to the cation, and it is hydrolysed, forming stronger covalent bonds.
Na2O2 + 2 H2O → 2 NaOH + H2O2
The other oxygen compounds are also unstable in water.
2 KO2 + 2 H2O → 2 KOH + H2O2 + O2
Li2O + H2O → 2 LiOH
Reaction with sulfur With sulfur, they form sulfides and polysulfides.
2 Na + 1/8 S8 → Na2S
Na2S + (x − 1)/8 S8 → Na2Sx (x = 2 to 7)
Because alkali metal sulfides are essentially salts of a weak acid and a strong base, they form basic solutions.
S2− + H2O → HS− + OH−
HS− + H2O → H2S + OH−
Reaction with nitrogen Lithium is the only metal that combines directly with nitrogen at room temperature.
3 Li + 1/2 N2 → Li3N
Li3N can react with water to liberate ammonia.
Li3N + 3 H2O → 3 LiOH + NH3
Reaction with hydrogen With hydrogen, alkali metals form saline hydrides that hydrolyse in water.
2 Na + H2 → 2 NaH (on heating)
NaH + H2O → NaOH + H2↑
Reaction with carbon Lithium is the only metal that reacts directly with carbon to give dilithium acetylide. Na and K can react with acetylene to give acetylides.
2 Li + 2 C → Li2C2
2 Na + 2 C2H2 → 2 NaC2H + H2 (at 150 °C)
2 Na + 2 NaC2H → 2 Na2C2 + H2 (at 220 °C)
Reaction with water On reaction with water, they generate hydroxide ions and hydrogen gas. This reaction is vigorous and highly exothermic, and the hydrogen produced may ignite in air or even explode in the case of Rb and Cs.
Na + H2O → NaOH + 1/2 H2
Reaction with other salts The alkali metals are very good reducing agents. They can reduce metal cations that are less electropositive. Titanium is produced industrially by the reduction of titanium tetrachloride with Na at 400 °C (the Hunter process).
TiCl4 + 4 Na → 4 NaCl + Ti
Reaction with organohalide compounds Alkali metals react with halogen derivatives to generate hydrocarbons via the Wurtz reaction.
2 CH3Cl + 2 Na → H3C–CH3 + 2 NaCl
Alkali metals in liquid ammonia Alkali metals dissolve in liquid ammonia or other donor solvents like aliphatic amines or hexamethylphosphoramide to give blue solutions. These solutions are believed to contain free electrons.
Na + x NH3 → Na+ + e−(NH3)x
Due to the presence of solvated electrons, these solutions are very powerful reducing agents used in organic synthesis; the reduction of aromatic rings by such solutions is known as the Birch reduction. Other reductions that can be carried out by these solutions are:
S8 + 2 e− → [S8]2−
Fe(CO)5 + 2 e− → [Fe(CO)4]2− + CO
Extensions Although francium is the heaviest alkali metal that has been discovered, there has been some theoretical work predicting the physical and chemical characteristics of hypothetical heavier alkali metals. Being the first period 8 element, the undiscovered element ununennium (element 119) is predicted to be the next alkali metal after francium and to behave much like its lighter congeners; however, it is also predicted to differ from the lighter alkali metals in some properties. Its chemistry is predicted to be closer to that of potassium or rubidium than to that of caesium or francium. This is unusual, as periodic trends ignoring relativistic effects would predict ununennium to be even more reactive than caesium and francium. This lowered reactivity is due to the relativistic stabilisation of ununennium's valence electron, increasing ununennium's first ionisation energy and decreasing the metallic and ionic radii; this effect is already seen for francium. This assumes that ununennium will behave chemically as an alkali metal, which, although likely, may not be true due to relativistic effects. The relativistic stabilisation of the 8s orbital also increases ununennium's electron affinity far beyond that of caesium and francium; indeed, ununennium is expected to have an electron affinity higher than all the alkali metals lighter than it. Relativistic effects also cause a very large drop in the polarisability of ununennium. On the other hand, ununennium is predicted to continue the trend of melting points decreasing going down the group, being expected to have a melting point between 0 °C and 30 °C. The stabilisation of ununennium's valence electron and thus the contraction of the 8s orbital cause its atomic radius to be lowered to 240 pm, very close to that of rubidium (247 pm), so that the chemistry of ununennium in the +1 oxidation state should be more similar to the chemistry of rubidium than to that of francium. On the other hand, the ionic radius of the Uue+ ion is predicted to be larger than that of Rb+, because the 7p orbitals are destabilised and are thus larger than the p-orbitals of the lower shells. Ununennium may also show the +3 and +5 oxidation states, which are not seen in any other alkali metal, in addition to the +1 oxidation state that is characteristic of the other alkali metals and is also the main oxidation state of all the known alkali metals: this is because of the destabilisation and expansion of the 7p3/2 spinor, causing its outermost electrons to have a lower ionisation energy than what would otherwise be expected. Indeed, many ununennium compounds are expected to have a large covalent character, due to the involvement of the 7p3/2 electrons in the bonding. Not as much work has been done predicting the properties of the alkali metals beyond ununennium.
Although a simple extrapolation of the periodic table (by the Aufbau principle) would put element 169, unhexennium, under ununennium, Dirac-Fock calculations predict that the next element after ununennium with alkali-metal-like properties may be element 165, unhexpentium, which is predicted to have the electron configuration [Og] 5g18 6f14 7d10 8s2 8p1/22 9s1. This element would be intermediate in properties between an alkali metal and a group 11 element, and while its physical and atomic properties would be closer to the former, its chemistry may be closer to that of the latter. Further calculations show that unhexpentium would follow the trend of increasing ionisation energy beyond caesium, having an ionisation energy comparable to that of sodium, and that it should also continue the trend of decreasing atomic radii beyond caesium, having an atomic radius comparable to that of potassium. However, the 7d electrons of unhexpentium may also be able to participate in chemical reactions along with the 9s electron, possibly allowing oxidation states beyond +1, whence the likely transition metal behaviour of unhexpentium. Due to the alkali and alkaline earth metals both being s-block elements, these predictions for the trends and properties of ununennium and unhexpentium also mostly hold quite similarly for the corresponding alkaline earth metals unbinilium (Ubn) and unhexhexium (Uhh). Unsepttrium, element 173, may be an even better heavier homologue of ununennium; with a predicted electron configuration of [Usb] 6g1, it returns to the alkali-metal-like situation of having one easily removed electron far above a closed p-shell in energy, and is expected to be even more reactive than caesium. The probable properties of further alkali metals beyond unsepttrium have not been explored yet as of 2019, and they may or may not be able to exist. In periods 8 and above of the periodic table, relativistic and shell-structure effects become so strong that extrapolations from lighter congeners become completely inaccurate. In addition, the relativistic and shell-structure effects (which stabilise the s-orbitals and destabilise and expand the d-, f-, and g-orbitals of higher shells) have opposite effects, causing even larger difference between relativistic and non-relativistic calculations of the properties of elements with such high atomic numbers. Interest in the chemical properties of ununennium, unhexpentium, and unsepttrium stems from the fact that they are located close to the expected locations of islands of stability, centered at elements 122 (306Ubb) and 164 (482Uhq). Pseudo-alkali metals Many other substances are similar to the alkali metals in their tendency to form monopositive cations. Analogously to the pseudohalogens, they have sometimes been called "pseudo-alkali metals". These substances include some elements and many more polyatomic ions; the polyatomic ions are especially similar to the alkali metals in their large size and weak polarising power. Hydrogen The element hydrogen, with one electron per neutral atom, is usually placed at the top of Group 1 of the periodic table because of its electron configuration. But hydrogen is not normally considered to be an alkali metal. Metallic hydrogen, which only exists at very high pressures, is known for its electrical and magnetic properties, not its chemical properties. 
Under typical conditions, pure hydrogen exists as a diatomic gas consisting of two atoms per molecule (H2); however, the alkali metals form diatomic molecules (such as dilithium, Li2) only at high temperatures, when they are in the gaseous state. Hydrogen, like the alkali metals, has one valence electron and reacts easily with the halogens, but the similarities mostly end there because of the small size of a bare proton H+ compared to the alkali metal cations. Its placement above lithium is primarily due to its electron configuration. It is sometimes placed above fluorine due to their similar chemical properties, though the resemblance is likewise not absolute. The first ionisation energy of hydrogen (1312.0 kJ/mol) is much higher than that of the alkali metals. As only one additional electron is required to fill in the outermost shell of the hydrogen atom, hydrogen often behaves like a halogen, forming the negative hydride ion, and is very occasionally considered to be a halogen on that basis. (The alkali metals can also form negative ions, known as alkalides, but these are little more than laboratory curiosities, being unstable.) An argument against this placement is that formation of hydride from hydrogen is endothermic, unlike the exothermic formation of halides from halogens. The radius of the H− anion also does not fit the trend of increasing size going down the halogens: indeed, H− is very diffuse because its single proton cannot easily control both electrons. It was expected for some time that liquid hydrogen would show metallic properties; while this has been shown to not be the case, under extremely high pressures, such as those found at the cores of Jupiter and Saturn, hydrogen does become metallic and behaves like an alkali metal; in this phase, it is known as metallic hydrogen. The electrical resistivity of liquid metallic hydrogen at 3000 K is approximately equal to that of liquid rubidium and caesium at 2000 K at the respective pressures when they undergo a nonmetal-to-metal transition. The 1s1 electron configuration of hydrogen, while analogous to that of the alkali metals (ns1), is unique because there is no 1p subshell. Hence it can lose an electron to form the hydron H+, or gain one to form the hydride ion H−. In the former case it resembles superficially the alkali metals; in the latter case, the halogens, but the differences due to the lack of a 1p subshell are important enough that neither group fits the properties of hydrogen well. Group 14 is also a good fit in terms of thermodynamic properties such as ionisation energy and electron affinity, but hydrogen cannot be tetravalent. Thus none of the three placements are entirely satisfactory, although group 1 is the most common placement (if one is chosen) because of the electron configuration and the fact that the hydron is by far the most important of all monatomic hydrogen species, being the foundation of acid-base chemistry. As an example of hydrogen's unorthodox properties stemming from its unusual electron configuration and small size, the hydrogen ion is very small (radius around 150 fm compared to the 50–220 pm size of most other atoms and ions) and so is nonexistent in condensed systems other than in association with other atoms or molecules. Indeed, transferring of protons between chemicals is the basis of acid-base chemistry. Also unique is hydrogen's ability to form hydrogen bonds, which are an effect of charge-transfer, electrostatic, and electron correlative contributing phenomena. 
While analogous lithium bonds are also known, they are mostly electrostatic. Nevertheless, hydrogen can take on the same structural role as the alkali metals in some molecular crystals, and has a close relationship with the lightest alkali metals (especially lithium). Ammonium and derivatives The ammonium ion (NH4+) has very similar properties to the heavier alkali metals, acting as an alkali metal intermediate between potassium and rubidium, and is often considered a close relative. For example, most alkali metal salts are soluble in water, a property which ammonium salts share. Ammonium is expected to behave stably as a metal (NH4+ ions in a sea of delocalised electrons) at very high pressures (though less than the typical pressure of around 100 GPa at which transitions from insulating to metallic behaviour occur), and could possibly occur inside the ice giants Uranus and Neptune, which may have significant impacts on their interior magnetic fields. It has been estimated that the transition from a mixture of ammonia and dihydrogen molecules to metallic ammonium may occur at pressures just below 25 GPa. Under standard conditions, ammonium can form a metallic amalgam with mercury. Other "pseudo-alkali metals" include the alkylammonium cations, in which some of the hydrogen atoms in the ammonium cation are replaced by alkyl or aryl groups. In particular, the quaternary ammonium cations (NR4+) are very useful since they are permanently charged, and they are often used as an alternative to the expensive Cs+ to stabilise very large and very easily polarisable anions. Tetraalkylammonium hydroxides, like alkali metal hydroxides, are very strong bases that react with atmospheric carbon dioxide to form carbonates. Furthermore, the nitrogen atom may be replaced by a phosphorus, arsenic, or antimony atom (the heavier nonmetallic pnictogens), creating a phosphonium (PH4+) or arsonium (AsH4+) cation that can itself be substituted similarly; while stibonium (SbH4+) itself is not known, some of its organic derivatives are characterised. Cobaltocene and derivatives Cobaltocene, Co(C5H5)2, is a metallocene, the cobalt analogue of ferrocene. It is a dark purple solid. Cobaltocene has 19 valence electrons, one more than usually found in organotransition metal complexes, such as its very stable relative, ferrocene, in accordance with the 18-electron rule. This additional electron occupies an orbital that is antibonding with respect to the Co–C bonds. Consequently, many chemical reactions of Co(C5H5)2 are characterized by its tendency to lose this "extra" electron, yielding a very stable 18-electron cation known as cobaltocenium. Many cobaltocenium salts coprecipitate with caesium salts, and cobaltocenium hydroxide is a strong base that absorbs atmospheric carbon dioxide to form cobaltocenium carbonate. Like the alkali metals, cobaltocene is a strong reducing agent, and decamethylcobaltocene is stronger still due to the combined inductive effect of the ten methyl groups. Cobalt may be substituted by its heavier congener rhodium to give rhodocene, an even stronger reducing agent. Iridocene (involving iridium) would presumably be still more potent, but is not very well-studied due to its instability. Thallium Thallium is the heaviest stable element in group 13 of the periodic table.
At the bottom of the periodic table, the inert-pair effect is quite strong, because of the relativistic stabilisation of the 6s orbital and the decreasing bond energy as the atoms increase in size so that the amount of energy released in forming two more bonds is not worth the high ionisation energies of the 6s electrons. It displays the +1 oxidation state that all the known alkali metals display, and thallium compounds with thallium in its +1 oxidation state closely resemble the corresponding potassium or silver compounds stoichiometrically due to the similar ionic radii of the Tl+ (164 pm), K+ (152 pm) and Ag+ (129 pm) ions. It was sometimes considered an alkali metal in continental Europe (but not in England) in the years immediately following its discovery, and was placed just after caesium as the sixth alkali metal in Dmitri Mendeleev's 1869 periodic table and Julius Lothar Meyer's 1868 periodic table. Mendeleev's 1871 periodic table and Meyer's 1870 periodic table put thallium in its current position in the boron group and left the space below caesium blank. However, thallium also displays the oxidation state +3, which no known alkali metal displays (although ununennium, the undiscovered seventh alkali metal, is predicted to possibly display the +3 oxidation state). The sixth alkali metal is now considered to be francium. While Tl+ is stabilised by the inert-pair effect, this inert pair of 6s electrons is still able to participate chemically, so that these electrons are stereochemically active in aqueous solution. Additionally, the thallium halides (except TlF) are quite insoluble in water, and TlI has an unusual structure because of the presence of the stereochemically active inert pair in thallium. Copper, silver, and gold The group 11 metals (or coinage metals), copper, silver, and gold, are typically categorised as transition metals given they can form ions with incomplete d-shells. Physically, they have the relatively low melting points and high electronegativity values associated with post-transition metals. "The filled d subshell and free s electron of Cu, Ag, and Au contribute to their high electrical and thermal conductivity. Transition metals to the left of group 11 experience interactions between s electrons and the partially filled d subshell that lower electron mobility." Chemically, the group 11 metals behave like main-group metals in their +1 valence states, and are hence somewhat related to the alkali metals: this is one reason for their previously being labelled as "group IB", paralleling the alkali metals' "group IA". They are occasionally classified as post-transition metals. Their spectra are analogous to those of the alkali metals. Their monopositive ions are paramagnetic and contribute no colour to their salts, like those of the alkali metals. In Mendeleev's 1871 periodic table, copper, silver, and gold are listed twice, once under group VIII (with the iron triad and platinum group metals), and once under group IB. Group IB was nonetheless parenthesised to note that it was tentative. Mendeleev's main criterion for group assignment was the maximum oxidation state of an element: on that basis, the group 11 elements could not be classified in group IB, due to the existence of copper(II) and gold(III) compounds being known at that time. However, eliminating group IB would make group I the only main group (group VIII was labelled a transition group) to lack an A–B bifurcation. 
Soon afterward, a majority of chemists chose to classify these elements in group IB and remove them from group VIII for the resulting symmetry: this was the predominant classification until the rise of the modern medium-long 18-column periodic table, which separated the alkali metals and group 11 metals. The coinage metals were traditionally regarded as a subdivision of the alkali metal group, due to them sharing the characteristic s1 electron configuration of the alkali metals (group 1: p6s1; group 11: d10s1). However, the similarities are largely confined to the stoichiometries of the +1 compounds of both groups, and not their chemical properties. This stems from the filled d subshell providing a much weaker shielding effect on the outermost s electron than the filled p subshell, so that the coinage metals have much higher first ionisation energies and smaller ionic radii than do the corresponding alkali metals. Furthermore, they have higher melting points, hardnesses, and densities, and lower reactivities and solubilities in liquid ammonia, as well as having more covalent character in their compounds. Finally, the alkali metals are at the top of the electrochemical series, whereas the coinage metals are almost at the very bottom. The coinage metals' filled d shell is much more easily disrupted than the alkali metals' filled p shell, so that the second and third ionisation energies are lower, enabling higher oxidation states than +1 and a richer coordination chemistry, thus giving the group 11 metals clear transition metal character. Particularly noteworthy is gold forming ionic compounds with rubidium and caesium, in which it forms the auride ion (Au−) which also occurs in solvated form in liquid ammonia solution: here gold behaves as a pseudohalogen because its 5d106s1 configuration has one electron less than the quasi-closed shell 5d106s2 configuration of mercury. Production and isolation The production of pure alkali metals is somewhat complicated due to their extreme reactivity with commonly used substances, such as water. From their silicate ores, all the stable alkali metals may be obtained the same way: sulfuric acid is first used to dissolve the desired alkali metal ion and aluminium(III) ions from the ore (leaching), whereupon basic precipitation removes aluminium ions from the mixture by precipitating it as the hydroxide. The remaining insoluble alkali metal carbonate is then precipitated selectively; the salt is then dissolved in hydrochloric acid to produce the chloride. The result is then left to evaporate and the alkali metal can then be isolated. Lithium and sodium are typically isolated through electrolysis from their liquid chlorides, with calcium chloride typically added to lower the melting point of the mixture. The heavier alkali metals, however, are more typically isolated in a different way, where a reducing agent (typically sodium for potassium and magnesium or calcium for the heaviest alkali metals) is used to reduce the alkali metal chloride. The liquid or gaseous product (the alkali metal) then undergoes fractional distillation for purification. Most routes to the pure alkali metals require the use of electrolysis due to their high reactivity; one of the few which does not is the pyrolysis of the corresponding alkali metal azide, which yields the metal for sodium, potassium, rubidium, and caesium and the nitride for lithium. Lithium salts have to be extracted from the water of mineral springs, brine pools, and brine deposits. 
The metal is produced electrolytically from a mixture of fused lithium chloride and potassium chloride. Sodium occurs mostly in seawater and dried seabed, but is now produced through electrolysis of sodium chloride by lowering the melting point of the substance to below 700 °C through the use of a Downs cell. Extremely pure sodium can be produced through the thermal decomposition of sodium azide. Potassium occurs in many minerals, such as sylvite (potassium chloride). Previously, potassium was generally made from the electrolysis of potassium chloride or potassium hydroxide, found extensively in places such as Canada, Russia, Belarus, Germany, Israel, the United States, and Jordan, in a method similar to how sodium was produced in the late 1800s and early 1900s. It can also be produced from seawater. However, these methods are problematic because the potassium metal tends to dissolve in its molten chloride and vaporises significantly at the operating temperatures, potentially forming the explosive superoxide. As a result, pure potassium metal is now produced by reducing molten potassium chloride with sodium metal at 850 °C. Na (g) + KCl (l) ⇌ NaCl (l) + K (g) Although sodium is less reactive than potassium, this process works because at such high temperatures potassium is more volatile than sodium and can easily be distilled off, so that the equilibrium shifts towards the right to produce more potassium gas and proceeds almost to completion. Metals like sodium are obtained by electrolysis of molten salts, whereas rubidium and caesium are obtained mainly as by-products of lithium processing. To make pure caesium, ores of caesium and rubidium are crushed and heated to 650 °C with sodium metal, generating an alloy that can then be separated via a fractional distillation technique. Because metallic caesium is too reactive to handle, it is normally offered as caesium azide (CsN3). Caesium reacts vigorously with water and ice, forming caesium hydroxide (CsOH). Rubidium is the 16th most abundant element in the Earth's crust; however, it is widely dispersed and rarely concentrated. It occurs together with caesium in several potassium minerals (lepidolites, biotites, feldspar, carnallite) as well as in pollucite and leucite, in deposits in North America, South Africa, Russia, and Canada, and is also found in potassium rocks and brines, which are a commercial supply. The majority of rubidium is now obtained as a by-product of refining lithium, chiefly from lepidolite. Rubidium is used in vacuum tubes as a getter, a material that combines with and removes trace gases. For several years in the 1950s and 1960s, a by-product of potassium production called Alkarb was a main source of rubidium; Alkarb contained 21% rubidium, while the rest was potassium and a small fraction of caesium. Today the largest producers of caesium, for example the Tanco Mine in Manitoba, Canada, produce rubidium as a by-product from pollucite. A common method for separating rubidium from potassium and caesium is the fractional crystallisation of a rubidium and caesium alum, (Cs, Rb)Al(SO4)2·12H2O, which yields pure rubidium alum after approximately 30 recrystallisations. The limited applications and the lack of a mineral rich in rubidium limit the production of rubidium compounds to 2 to 4 tonnes per year. Caesium, however, is not produced from the above reaction.
Instead, the mining of pollucite ore is the main method of obtaining pure caesium, extracted from the ore mainly by three methods: acid digestion, alkaline decomposition, and direct reduction. Both metals are produced as by-products of lithium production: after 1958, when interest in lithium's thermonuclear properties increased sharply, the production of rubidium and caesium also increased correspondingly. Pure rubidium and caesium metals are produced by reducing their chlorides with calcium metal at 750 °C and low pressure. As a result of its extreme rarity in nature, most francium is synthesised in the nuclear reaction 197Au + 18O → 210Fr + 5 n, yielding francium-209, francium-210, and francium-211. The greatest quantity of francium ever assembled to date is about 300,000 neutral atoms, which were synthesised using the nuclear reaction given above. When the only natural isotope francium-223 is specifically required, it is produced as the alpha daughter of actinium-227, itself produced synthetically from the neutron irradiation of natural radium-226, one of the daughters of natural uranium-238. Applications Lithium, sodium, and potassium have many useful applications, while rubidium and caesium are very notable in academic contexts but do not have many applications yet. Lithium is the key ingredient for a range of lithium-based batteries, and lithium oxide can help process silica. Lithium stearate is a thickener and can be used to make lubricating greases; it is produced from lithium hydroxide, which is also used to absorb carbon dioxide in space capsules and submarines. Lithium chloride is used as a brazing alloy for aluminium parts. In medicine, some lithium salts are used as mood-stabilising pharmaceuticals. Metallic lithium is used in alloys with magnesium and aluminium to give very tough and light alloys. Sodium compounds have many applications, the most well-known being sodium chloride as table salt. Sodium salts of fatty acids are used as soap. Pure sodium metal also has many applications, including use in sodium-vapour lamps, which produce very efficient light compared to other types of lighting, and can help smooth the surface of other metals. Being a strong reducing agent, it is often used to reduce many other metals, such as titanium and zirconium, from their chlorides. Furthermore, it is very useful as a heat-exchange liquid in fast breeder nuclear reactors due to its low melting point, viscosity, and cross-section towards neutron absorption. Sodium-ion batteries may provide cheaper alternatives to their equivalent lithium-based cells. Both sodium and potassium are commonly used as GRAS counterions to create more water-soluble and hence more bioavailable salt forms of acidic pharmaceuticals. Potassium compounds are often used as fertilisers as potassium is an important element for plant nutrition. Potassium hydroxide is a very strong base, and is used to control the pH of various substances. Potassium nitrate and potassium permanganate are often used as powerful oxidising agents. Potassium superoxide is used in breathing masks, as it reacts with carbon dioxide to give potassium carbonate and oxygen gas. Pure potassium metal is not often used, but its alloys with sodium may substitute for pure sodium in fast breeder nuclear reactors. Rubidium and caesium are often used in atomic clocks. Caesium atomic clocks are extraordinarily accurate; if a clock had been made at the time of the dinosaurs, it would be off by less than four seconds (after 80 million years). 
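That figure corresponds to a fractional error of the order of 10−15, as the short check below shows; it simply divides four seconds by 80 million years expressed in seconds (the Julian year length used is an assumption of the sketch).

seconds_per_year = 365.25 * 24 * 3600    # ~3.156e7 s
elapsed = 80e6 * seconds_per_year        # 80 million years in seconds
fractional_error = 4 / elapsed
print(f"{fractional_error:.1e}")         # ~1.6e-15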
For that reason, caesium atoms are used as the definition of the second. Rubidium ions are often used in purple fireworks, and caesium is often used in drilling fluids in the petroleum industry. Francium has no commercial applications, but because of francium's relatively simple atomic structure, among other things, it has been used in spectroscopy experiments, leading to more information regarding energy levels and the coupling constants between subatomic particles. Studies on the light emitted by laser-trapped francium-210 ions have provided accurate data on transitions between atomic energy levels, similar to those predicted by quantum theory. Biological role and precautions Metals Pure alkali metals are dangerously reactive with air and water and must be kept away from heat, fire, oxidising agents, acids, most organic compounds, halocarbons, plastics, and moisture. They also react with carbon dioxide and carbon tetrachloride, so that normal fire extinguishers are counterproductive when used on alkali metal fires. Some Class D dry powder extinguishers designed for metal fires are effective, depriving the fire of oxygen and cooling the alkali metal. Experiments are usually conducted using only small quantities of a few grams in a fume hood. Small quantities of lithium may be disposed of by reaction with cool water, but the heavier alkali metals should be dissolved in the less reactive isopropanol. The alkali metals must be stored under mineral oil or an inert atmosphere. The inert atmosphere used may be argon or nitrogen gas, except for lithium, which reacts with nitrogen. Rubidium and caesium must be kept away from air, even under oil, because even a small amount of air diffused into the oil may trigger formation of the dangerously explosive peroxide; for the same reason, potassium should not be stored under oil in an oxygen-containing atmosphere for longer than 6 months. Ions The bioinorganic chemistry of the alkali metal ions has been extensively reviewed. Solid state crystal structures have been determined for many complexes of alkali metal ions in small peptides, nucleic acid constituents, carbohydrates and ionophore complexes. Lithium naturally only occurs in traces in biological systems and has no known biological role, but does have effects on the body when ingested. Lithium carbonate is used as a mood stabiliser in psychiatry to treat bipolar disorder (manic-depression) in daily doses of about 0.5 to 2 grams, although there are side-effects. Excessive ingestion of lithium causes drowsiness, slurred speech and vomiting, among other symptoms, and poisons the central nervous system, which is dangerous as the required dosage of lithium to treat bipolar disorder is only slightly lower than the toxic dosage. Its biochemistry, the way it is handled by the human body and studies using rats and goats suggest that it is an essential trace element, although the natural biological function of lithium in humans has yet to be identified. Sodium and potassium occur in all known biological systems, generally functioning as electrolytes inside and outside cells. Sodium is an essential nutrient that regulates blood volume, blood pressure, osmotic equilibrium and pH; the minimum physiological requirement for sodium is 500 milligrams per day. Sodium chloride (also known as common salt) is the principal source of sodium in the diet, and is used as seasoning and preservative, such as for pickling and jerky; most of it comes from processed foods. 
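Dietary figures such as those that follow refer to elemental sodium rather than to salt; because sodium makes up only about 39% of sodium chloride by mass, a sodium allowance corresponds to a noticeably larger mass of salt. A minimal conversion sketch (the gram values are simply examples matching the intakes discussed next):

M_NA, M_CL = 22.99, 35.45                # molar masses, g/mol
sodium_fraction = M_NA / (M_NA + M_CL)   # ~0.393, the mass fraction of Na in NaCl
for sodium_grams in (1.5, 2.3):
    salt_grams = sodium_grams / sodium_fraction
    print(f"{sodium_grams} g sodium ~ {salt_grams:.1f} g salt")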
The Dietary Reference Intake for sodium is 1.5 grams per day, but most people in the United States consume more than 2.3 grams per day, the minimum amount that promotes hypertension; this in turn causes 7.6 million premature deaths worldwide. Potassium is the major cation (positive ion) inside animal cells, while sodium is the major cation outside animal cells. The concentration differences of these charged particles causes a difference in electric potential between the inside and outside of cells, known as the membrane potential. The balance between potassium and sodium is maintained by ion transporter proteins in the cell membrane. The cell membrane potential created by potassium and sodium ions allows the cell to generate an action potential—a "spike" of electrical discharge. The ability of cells to produce electrical discharge is critical for body functions such as neurotransmission, muscle contraction, and heart function. Disruption of this balance may thus be fatal: for example, ingestion of large amounts of potassium compounds can lead to hyperkalemia strongly influencing the cardiovascular system. Potassium chloride is used in the United States for lethal injection executions. Due to their similar atomic radii, rubidium and caesium in the body mimic potassium and are taken up similarly. Rubidium has no known biological role, but may help stimulate metabolism, and, similarly to caesium, replace potassium in the body causing potassium deficiency. Partial substitution is quite possible and rather non-toxic: a 70 kg person contains on average 0.36 g of rubidium, and an increase in this value by 50 to 100 times did not show negative effects in test persons. Rats can survive up to 50% substitution of potassium by rubidium. Rubidium (and to a much lesser extent caesium) can function as temporary cures for hypokalemia; while rubidium can adequately physiologically substitute potassium in some systems, caesium is never able to do so. There is only very limited evidence in the form of deficiency symptoms for rubidium being possibly essential in goats; even if this is true, the trace amounts usually present in food are more than enough. Caesium compounds are rarely encountered by most people, but most caesium compounds are mildly toxic. Like rubidium, caesium tends to substitute potassium in the body, but is significantly larger and is therefore a poorer substitute. Excess caesium can lead to hypokalemia, arrhythmia, and acute cardiac arrest, but such amounts would not ordinarily be encountered in natural sources. As such, caesium is not a major chemical environmental pollutant. The median lethal dose (LD50) value for caesium chloride in mice is 2.3 g per kilogram, which is comparable to the LD50 values of potassium chloride and sodium chloride. Caesium chloride has been promoted as an alternative cancer therapy, but has been linked to the deaths of over 50 patients, on whom it was used as part of a scientifically unvalidated cancer treatment. Radioisotopes of caesium require special precautions: the improper handling of caesium-137 gamma ray sources can lead to release of this radioisotope and radiation injuries. Perhaps the best-known case is the Goiânia accident of 1987, in which an improperly-disposed-of radiation therapy system from an abandoned clinic in the city of Goiânia, Brazil, was scavenged from a junkyard, and the glowing caesium salt sold to curious, uneducated buyers. This led to four deaths and serious injuries from radiation exposure. 
Together with caesium-134, iodine-131, and strontium-90, caesium-137 was among the isotopes released by the Chernobyl disaster that pose the greatest risk to health. Radioisotopes of francium would presumably be dangerous as well because of their high decay energies and short half-lives, but none have been produced in large enough amounts to pose any serious risk.
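The potassium–sodium balance described above can be made quantitative with the Nernst equation, which gives the equilibrium potential that a single ion species would establish across the cell membrane. The short sketch below is a minimal illustration only: the concentrations are typical textbook mammalian values (roughly 5 mM K+ outside versus 140 mM inside, and 145 mM Na+ outside versus 12 mM inside, at 37 °C) and are assumptions rather than figures taken from this article.

import math

def nernst_potential_mV(conc_out_mM, conc_in_mM, temp_K=310.0, charge=1):
    """Equilibrium (Nernst) potential in millivolts for one ion species."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * (R * temp_K / (charge * F)) * math.log(conc_out_mM / conc_in_mM)

# Typical textbook concentrations (illustrative assumptions, not from the source text):
print(round(nernst_potential_mV(5, 140)))   # K+  equilibrium potential: about -89 mV
print(round(nernst_potential_mV(145, 12)))  # Na+ equilibrium potential: about +67 mV

Resting membrane potentials measured in real cells sit between these two extremes, commonly around -70 mV, because the membrane is permeable to several ion species at once.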
Physical sciences
Chemical element groups
null
673
https://en.wikipedia.org/wiki/Atomic%20number
Atomic number
The atomic number or nuclear charge number (symbol Z) of a chemical element is the charge number of its atomic nucleus. For ordinary nuclei composed of protons and neutrons, this is equal to the proton number (np) or the number of protons found in the nucleus of every atom of that element. The atomic number can be used to uniquely identify ordinary chemical elements. In an ordinary uncharged atom, the atomic number is also equal to the number of electrons. For an ordinary atom which contains protons, neutrons and electrons, the sum of the atomic number Z and the neutron number N gives the atom's atomic mass number A. Since protons and neutrons have approximately the same mass (and the mass of the electrons is negligible for many purposes) and the mass defect of the nucleon binding is always small compared to the nucleon mass, the atomic mass of any atom, when expressed in daltons (making a quantity called the "relative isotopic mass"), is within 1% of the whole number A. Atoms with the same atomic number but different neutron numbers, and hence different mass numbers, are known as isotopes. A little more than three-quarters of naturally occurring elements exist as a mixture of isotopes (see monoisotopic elements), and the average isotopic mass of an isotopic mixture for an element (called the relative atomic mass) in a defined environment on Earth determines the element's standard atomic weight. Historically, it was these atomic weights of elements (in comparison to hydrogen) that were the quantities measurable by chemists in the 19th century. The conventional symbol Z comes from the German word 'number', which, before the modern synthesis of ideas from chemistry and physics, merely denoted an element's numerical place in the periodic table, whose order was then approximately, but not completely, consistent with the order of the elements by atomic weights. Only after 1915, with the suggestion and evidence that this Z number was also the nuclear charge and a physical characteristic of atoms, did the word (and its English equivalent atomic number) come into common use in this context. The rules above do not always apply to exotic atoms which contain short-lived elementary particles other than protons, neutrons and electrons. History In the 19th century, the term "atomic number" typically meant the number of atoms in a given volume. Modern chemists prefer to use the concept of molar concentration. In 1913, Antonius van den Broek proposed that the electric charge of an atomic nucleus, expressed as a multiplier of the elementary charge, was equal to the element's sequential position on the periodic table. Ernest Rutherford, in various articles in which he discussed van den Broek's idea, used the term "atomic number" to refer to an element's position on the periodic table. No writer before Rutherford is known to have used the term "atomic number" in this way, so it was probably he who established this definition. After Rutherford deduced the existence of the proton in 1920, "atomic number" customarily referred to the proton number of an atom. In 1921, the German Atomic Weight Commission based its new periodic table on the nuclear charge number and in 1923 the International Committee on Chemical Elements followed suit. The periodic table and a natural number for each element The periodic table of elements creates an ordering of the elements, and so they can be numbered in order. 
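Before the historical account below, the bookkeeping in the lead (the mass number A = Z + N, the relative isotopic mass lying within 1% of A, and the standard atomic weight as an abundance-weighted average over isotopes) can be made concrete with a small sketch. The chlorine figures used here are standard reference values and are not taken from this article.

# Mass number A is the proton count Z plus the neutron count N; the standard
# atomic weight is the abundance-weighted mean of the isotopic masses.
chlorine_isotopes = [
    # (isotopic mass in daltons, natural abundance) -- standard reference values
    (34.9689, 0.7576),   # chlorine-35: Z = 17, N = 18, A = 35
    (36.9659, 0.2424),   # chlorine-37: Z = 17, N = 20, A = 37
]

Z, N = 17, 18
A = Z + N    # 35; the isotopic mass 34.9689 Da differs from it by well under 1%
standard_atomic_weight = sum(mass * abundance for mass, abundance in chlorine_isotopes)
print(A, round(standard_atomic_weight, 2))   # 35 35.45 (the tabulated value is about 35.45)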
Dmitri Mendeleev arranged his first periodic tables (first published on March 6, 1869) in order of atomic weight ("Atomgewicht"). However, in consideration of the elements' observed chemical properties, he changed the order slightly and placed tellurium (atomic weight 127.6) ahead of iodine (atomic weight 126.9). This placement is consistent with the modern practice of ordering the elements by proton number, Z, but that number was not known or suspected at the time. A simple numbering based on atomic weight position was never entirely satisfactory. In addition to the case of iodine and tellurium, several other pairs of elements (such as argon and potassium, cobalt and nickel) were later shown to have nearly identical or reversed atomic weights, thus requiring their placement in the periodic table to be determined by their chemical properties. However the gradual identification of more and more chemically similar lanthanide elements, whose atomic number was not obvious, led to inconsistency and uncertainty in the periodic numbering of elements at least from lutetium (element 71) onward (hafnium was not known at this time). The Rutherford-Bohr model and van den Broek In 1911, Ernest Rutherford gave a model of the atom in which a central nucleus held most of the atom's mass and a positive charge which, in units of the electron's charge, was to be approximately equal to half of the atom's atomic weight, expressed in numbers of hydrogen atoms. This central charge would thus be approximately half the atomic weight (though it was almost 25% different from the atomic number of gold , ), the single element from which Rutherford made his guess). Nevertheless, in spite of Rutherford's estimation that gold had a central charge of about 100 (but was element on the periodic table), a month after Rutherford's paper appeared, Antonius van den Broek first formally suggested that the central charge and number of electrons in an atom were exactly equal to its place in the periodic table (also known as element number, atomic number, and symbolized Z). This eventually proved to be the case. Moseley's 1913 experiment The experimental position improved dramatically after research by Henry Moseley in 1913. Moseley, after discussions with Bohr who was at the same lab (and who had used Van den Broek's hypothesis in his Bohr model of the atom), decided to test Van den Broek's and Bohr's hypothesis directly, by seeing if spectral lines emitted from excited atoms fitted the Bohr theory's postulation that the frequency of the spectral lines be proportional to the square of Z. To do this, Moseley measured the wavelengths of the innermost photon transitions (K and L lines) produced by the elements from aluminium (Z = 13) to gold (Z = 79) used as a series of movable anodic targets inside an x-ray tube. The square root of the frequency of these photons increased from one target to the next in an arithmetic progression. This led to the conclusion (Moseley's law) that the atomic number does closely correspond (with an offset of one unit for K-lines, in Moseley's work) to the calculated electric charge of the nucleus, i.e. the element number Z. Among other things, Moseley demonstrated that the lanthanide series (from lanthanum to lutetium inclusive) must have 15 members—no fewer and no more—which was far from obvious from known chemistry at that time. Missing elements After Moseley's death in 1915, the atomic numbers of all known elements from hydrogen to uranium (Z = 92) were examined by his method. 
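Moseley's regularity can be written compactly. In the standard textbook form (stated here for reference; the notation is not quoted from the article), the measured X-ray frequency satisfies

$$\sqrt{\nu} = k_1\,(Z - k_2), \qquad k_2 \approx 1 \text{ for the K-lines},$$

and the Bohr model reproduces the K-alpha case as

$$\nu_{K\alpha} \approx \tfrac{3}{4}\,R_\infty c\,(Z-1)^2 \approx 2.47\times 10^{15}\ \mathrm{Hz} \times (Z-1)^2,$$

so each unit step in Z shifts the square root of the frequency by the same fixed amount, which is exactly the arithmetic progression Moseley observed from one target to the next.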
There were seven elements (with Z < 92) which were not found and therefore identified as still undiscovered, corresponding to atomic numbers 43, 61, 72, 75, 85, 87 and 91. From 1918 to 1947, all seven of these missing elements were discovered. By this time, the first four transuranium elements had also been discovered, so that the periodic table was complete with no gaps as far as curium (Z = 96). The proton and the idea of nuclear electrons In 1915, the reason for nuclear charge being quantized in units of Z, which were now recognized to be the same as the element number, was not understood. An old idea called Prout's hypothesis had postulated that the elements were all made of residues (or "protyles") of the lightest element hydrogen, which in the Bohr-Rutherford model had a single electron and a nuclear charge of one. However, as early as 1907, Rutherford and Thomas Royds had shown that alpha particles, which had a charge of +2, were the nuclei of helium atoms, which had a mass four times that of hydrogen, not two times. If Prout's hypothesis were true, something had to be neutralizing some of the charge of the hydrogen nuclei present in the nuclei of heavier atoms. In 1917, Rutherford succeeded in generating hydrogen nuclei from a nuclear reaction between alpha particles and nitrogen gas, and believed he had proven Prout's law. He called the new heavy nuclear particles protons in 1920 (alternate names being proutons and protyles). It had been immediately apparent from the work of Moseley that the nuclei of heavy atoms have more than twice as much mass as would be expected from their being made of hydrogen nuclei, and thus there was required a hypothesis for the neutralization of the extra protons presumed present in all heavy nuclei. A helium nucleus was presumed to have four protons plus two "nuclear electrons" (electrons bound inside the nucleus) to cancel two charges. At the other end of the periodic table, a nucleus of gold with a mass 197 times that of hydrogen was thought to contain 118 nuclear electrons in the nucleus to give it a residual charge of +79, consistent with its atomic number. Discovery of the neutron makes Z the proton number All consideration of nuclear electrons ended with James Chadwick's discovery of the neutron in 1932. An atom of gold now was seen as containing 118 neutrons rather than 118 nuclear electrons, and its positive nuclear charge now was realized to come entirely from a content of 79 protons. Since Moseley had previously shown that the atomic number Z of an element equals this positive charge, it was now clear that Z is identical to the number of protons of its nuclei. Chemical properties Each element has a specific set of chemical properties as a consequence of the number of electrons present in the neutral atom, which is Z (the atomic number). The configuration of these electrons follows from the principles of quantum mechanics. The number of electrons in each element's electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. Hence, it is the atomic number alone that determines the chemical properties of an element; and it is for this reason that an element can be defined as consisting of any mixture of atoms with a given atomic number. New elements The quest for new elements is usually described using atomic numbers. As of , all elements with atomic numbers 1 to 118 have been observed. 
Synthesis of new elements is accomplished by bombarding target atoms of heavy elements with ions, such that the sum of the atomic numbers of the target and ion elements equals the atomic number of the element being created. In general, the half-life of a nuclide becomes shorter as atomic number increases, though undiscovered nuclides with certain "magic" numbers of protons and neutrons may have relatively longer half-lives and comprise an island of stability. A hypothetical element composed only of neutrons, neutronium, has also been proposed and would have atomic number 0, but has never been observed.
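As a concrete instance of this bookkeeping, the reaction reported for element 118 balances both the nuclear charges and the nucleon counts (the specific reaction is taken from the standard literature on superheavy-element synthesis rather than from this article):

$$^{48}_{20}\mathrm{Ca} + {}^{249}_{98}\mathrm{Cf} \;\rightarrow\; {}^{294}_{118}\mathrm{Og} + 3\,{}^{1}_{0}\mathrm{n},$$

with 20 + 98 = 118 for the atomic numbers and 48 + 249 = 294 + 3 for the mass numbers.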
Physical sciences
Basics_4
null
674
https://en.wikipedia.org/wiki/Anatomy
Anatomy
Anatomy () is the branch of morphology concerned with the study of the internal structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine, and is often studied alongside physiology. Anatomy is a complex and dynamic field that is constantly evolving as discoveries are made. In recent years, there has been a significant increase in the use of advanced imaging techniques, such as MRI and CT scans, which allow for more detailed and accurate visualizations of the body's structures. The discipline of anatomy is divided into macroscopic and microscopic parts. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells. The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th-century medical imaging techniques, including X-ray, ultrasound, and magnetic resonance imaging. Etymology and definition Derived from the Greek anatomē "dissection" (from anatémnō "I cut up, cut open" from ἀνά aná "up", and τέμνω témnō "I cut"), anatomy is the scientific study of the structure of organisms including their systems, organs and tissues. It includes the appearance and position of the various parts, the materials from which they are composed, and their relationships with other parts. Anatomy is quite distinct from physiology and biochemistry, which deal respectively with the functions of those parts and the chemical processes involved. For example, an anatomist is concerned with the shape, size, position, structure, blood supply and innervation of an organ such as the liver; while a physiologist is interested in the production of bile, the role of the liver in nutrition and the regulation of bodily functions. The discipline of anatomy can be subdivided into a number of branches, including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition). Regional anatomy is the study of the interrelationships of all of the structures in a specific body region, such as the abdomen. 
In contrast, systemic anatomy is the study of the structures that make up a discrete body system—that is, a group of structures that work together to perform a unique body function, such as the digestive system. Anatomy can be studied using both invasive and non-invasive methods with the goal of obtaining information about the structure and organization of organs and systems. Methods used include dissection, in which a body is opened and its organs studied, and endoscopy, in which a video camera-equipped instrument is inserted through a small incision in the body wall and used to explore the internal organs and other structures. Angiography using X-rays or magnetic resonance angiography are methods to visualize blood vessels. The term "anatomy" is commonly taken to refer to human anatomy. However, substantially similar structures and tissues are found throughout the rest of the animal kingdom, and the term also includes the anatomy of other animals. The term zootomy is also sometimes used to specifically refer to non-human animals. The structure and tissues of plants are of a dissimilar nature and they are studied in plant anatomy. Animal tissues The kingdom Animalia contains multicellular organisms that are heterotrophic and motile (although some have secondarily adopted a sessile lifestyle). Most animals have bodies differentiated into separate tissues and these animals are also known as eumetazoans. They have an internal digestive chamber, with one or two openings; the gametes are produced in multicellular sex organs, and the zygotes include a blastula stage in their embryonic development. Metazoans do not include the sponges, which have undifferentiated cells. Unlike plant cells, animal cells have neither a cell wall nor chloroplasts. Vacuoles, when present, are more in number and much smaller than those in the plant cell. The body tissues are composed of numerous types of cells, including those found in muscles, nerves and skin. Each typically has a cell membrane formed of phospholipids, cytoplasm and a nucleus. All of the different cells of an animal are derived from the embryonic germ layers. Those simpler invertebrates which are formed from two germ layers of ectoderm and endoderm are called diploblastic and the more developed animals whose structures and organs are formed from three germ layers are called triploblastic. All of a triploblastic animal's tissues and organs are derived from the three germ layers of the embryo, the ectoderm, mesoderm and endoderm. Animal tissues can be grouped into four basic types: connective, epithelial, muscle and nervous tissue. Connective tissue Connective tissues are fibrous and made up of cells scattered among inorganic material called the extracellular matrix. Often called fascia (from the Latin "fascia," meaning "band" or "bandage"), connective tissues give shape to organs and holds them in place. The main types are loose connective tissue, adipose tissue, fibrous connective tissue, cartilage and bone. The extracellular matrix contains proteins, the chief and most abundant of which is collagen. Collagen plays a major part in organizing and maintaining tissues. The matrix can be modified to form a skeleton to support or protect the body. An exoskeleton is a thickened, rigid cuticle which is stiffened by mineralization, as in crustaceans or by the cross-linking of its proteins as in insects. An endoskeleton is internal and present in all developed animals, as well as in many of those less developed. 
Epithelium Epithelial tissue is composed of closely packed cells, bound to each other by cell adhesion molecules, with little intercellular space. Epithelial cells can be squamous (flat), cuboidal or columnar and rest on a basal lamina, the upper layer of the basement membrane, the lower layer is the reticular lamina lying next to the connective tissue in the extracellular matrix secreted by the epithelial cells. There are many different types of epithelium, modified to suit a particular function. In the respiratory tract there is a type of ciliated epithelial lining; in the small intestine there are microvilli on the epithelial lining and in the large intestine there are intestinal villi. Skin consists of an outer layer of keratinized stratified squamous epithelium that covers the exterior of the vertebrate body. Keratinocytes make up to 95% of the cells in the skin. The epithelial cells on the external surface of the body typically secrete an extracellular matrix in the form of a cuticle. In simple animals this may just be a coat of glycoproteins. In more advanced animals, many glands are formed of epithelial cells. Muscle tissue Muscle cells (myocytes) form the active contractile tissue of the body. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs. Muscle is formed of contractile filaments and is separated into three main types; smooth muscle, skeletal muscle and cardiac muscle. Smooth muscle has no striations when examined microscopically. It contracts slowly but maintains contractibility over a wide range of stretch lengths. It is found in such organs as sea anemone tentacles and the body wall of sea cucumbers. Skeletal muscle contracts rapidly but has a limited range of extension. It is found in the movement of appendages and jaws. Obliquely striated muscle is intermediate between the other two. The filaments are staggered and this is the type of muscle found in earthworms that can extend slowly or make rapid contractions. In higher animals striated muscles occur in bundles attached to bone to provide movement and are often arranged in antagonistic sets. Smooth muscle is found in the walls of the uterus, bladder, intestines, stomach, oesophagus, respiratory airways, and blood vessels. Cardiac muscle is found only in the heart, allowing it to contract and pump blood round the body. Nervous tissue Nervous tissue is composed of many nerve cells known as neurons which transmit information. In some slow-moving radially symmetrical marine animals such as ctenophores and cnidarians (including sea anemones and jellyfish), the nerves form a nerve net, but in most animals they are organized longitudinally into bundles. In simple animals, receptor neurons in the body wall cause a local reaction to a stimulus. In more complex animals, specialized receptor cells such as chemoreceptors and photoreceptors are found in groups and send messages along neural networks to other parts of the organism. Neurons can be connected together in ganglia. In higher animals, specialized receptors are the basis of sense organs and there is a central nervous system (brain and spinal cord) and a peripheral nervous system. The latter consists of sensory nerves that transmit information from sense organs and motor nerves that influence target organs. 
The peripheral nervous system is divided into the somatic nervous system which conveys sensation and controls voluntary muscle, and the autonomic nervous system which involuntarily controls smooth muscle, certain glands and internal organs, including the stomach. Vertebrate anatomy All vertebrates have a similar basic body plan and at some point in their lives, mostly in the embryonic stage, share the major chordate characteristics: a stiffening rod, the notochord; a dorsal hollow tube of nervous material, the neural tube; pharyngeal arches; and a tail posterior to the anus. The spinal cord is protected by the vertebral column and is above the notochord, and the gastrointestinal tract is below it. Nervous tissue is derived from the ectoderm, connective tissues are derived from mesoderm, and gut is derived from the endoderm. At the posterior end is a tail which continues the spinal cord and vertebrae but not the gut. The mouth is found at the anterior end of the animal, and the anus at the base of the tail. The defining characteristic of a vertebrate is the vertebral column, formed in the development of the segmented series of vertebrae. In most vertebrates the notochord becomes the nucleus pulposus of the intervertebral discs. However, a few vertebrates, such as the sturgeon and the coelacanth, retain the notochord into adulthood. Jawed vertebrates are typified by paired appendages, fins or legs, which may be secondarily lost. The limbs of vertebrates are considered to be homologous because the same underlying skeletal structure was inherited from their last common ancestor. This is one of the arguments put forward by Charles Darwin to support his theory of evolution. Fish anatomy The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage, in cartilaginous fish, or bone in bony fish. The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays, which with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk. The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and on round the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, and these respond to nearby movements and to changes in water pressure. Sharks and rays are basal fish with numerous primitive anatomical features similar to those of ancient fish, including skeletons composed of cartilage. Their bodies tend to be dorso-ventrally flattened, they usually have five pairs of gill slits and a large mouth set on the underside of the head. The dermis is covered with separate dermal placoid scales. They have a cloaca into which the urinary and genital passages open, but not a swim bladder. Cartilaginous fish produce a small number of large, yolky eggs. 
Some species are ovoviviparous and the young develop internally but others are oviparous and the larvae develop externally in egg cases. The bony fish lineage shows more derived anatomical traits, often with major evolutionary changes from the features of ancient fish. They have a bony skeleton, are generally laterally flattened, have five pairs of gills protected by an operculum, and a mouth at or near the tip of the snout. The dermis is covered with overlapping scales. Bony fish have a swim bladder which helps them maintain a constant depth in the water column, but not a cloaca. They mostly spawn a large number of small eggs with little yolk which they broadcast into the water column. Amphibian anatomy Amphibians are a class of animals comprising frogs, salamanders and caecilians. They are tetrapods, but the caecilians and a few species of salamander have either no limbs or their limbs are much reduced in size. Their main bones are hollow and lightweight and are fully ossified and the vertebrae interlock with each other and have articular processes. Their ribs are usually short and may be fused to the vertebrae. Their skulls are mostly broad and short, and are often incompletely ossified. Their skin contains little keratin and lacks scales, but contains many mucous glands and in some species, poison glands. The hearts of amphibians have three chambers, two atria and one ventricle. They have a urinary bladder and nitrogenous waste products are excreted primarily as urea. Amphibians breathe by means of buccal pumping, a pump action in which air is first drawn into the buccopharyngeal region through the nostrils. These are then closed and the air is forced into the lungs by contraction of the throat. They supplement this with gas exchange through the skin which needs to be kept moist. In frogs the pelvic girdle is robust and the hind legs are much longer and stronger than the forelimbs. The feet have four or five digits and the toes are often webbed for swimming or have suction pads for climbing. Frogs have large eyes and no tail. Salamanders resemble lizards in appearance; their short legs project sideways, the belly is close to or in contact with the ground and they have a long tail. Caecilians superficially resemble earthworms and are limbless. They burrow by means of zones of muscle contractions which move along the body and they swim by undulating their body from side to side. Reptile anatomy Reptiles are a class of animals comprising turtles, tuataras, lizards, snakes and crocodiles. They are tetrapods, but the snakes and a few species of lizard either have no limbs or their limbs are much reduced in size. Their bones are better ossified and their skeletons stronger than those of amphibians. The teeth are conical and mostly uniform in size. The surface cells of the epidermis are modified into horny scales which create a waterproof layer. Reptiles are unable to use their skin for respiration as do amphibians and have a more efficient respiratory system drawing air into their lungs by expanding their chest walls. The heart resembles that of the amphibian but there is a septum which more completely separates the oxygenated and deoxygenated bloodstreams. The reproductive system has evolved for internal fertilization, with a copulatory organ present in most species. The eggs are surrounded by amniotic membranes which prevents them from drying out and are laid on land, or develop internally in some species. The bladder is small as nitrogenous waste is excreted as uric acid. 
Turtles are notable for their protective shells. They have an inflexible trunk encased in a horny carapace above and a plastron below. These are formed from bony plates embedded in the dermis which are overlain by horny ones and are partially fused with the ribs and spine. The neck is long and flexible and the head and the legs can be drawn back inside the shell. Turtles are vegetarians and the typical reptile teeth have been replaced by sharp, horny plates. In aquatic species, the front legs are modified into flippers. Tuataras superficially resemble lizards but the lineages diverged in the Triassic period. There is one living species, Sphenodon punctatus. The skull has two openings (fenestrae) on either side and the jaw is rigidly attached to the skull. There is one row of teeth in the lower jaw and this fits between the two rows in the upper jaw when the animal chews. The teeth are merely projections of bony material from the jaw and eventually wear down. The brain and heart are more primitive than those of other reptiles, and the lungs have a single chamber and lack bronchi. The tuatara has a well-developed parietal eye on its forehead. Lizards have skulls with only one fenestra on each side, the lower bar of bone below the second fenestra having been lost. This results in the jaws being less rigidly attached which allows the mouth to open wider. Lizards are mostly quadrupeds, with the trunk held off the ground by short, sideways-facing legs, but a few species have no limbs and resemble snakes. Lizards have moveable eyelids, eardrums are present and some species have a central parietal eye. Snakes are closely related to lizards, having branched off from a common ancestral lineage during the Cretaceous period, and they share many of the same features. The skeleton consists of a skull, a hyoid bone, spine and ribs though a few species retain a vestige of the pelvis and rear limbs in the form of pelvic spurs. The bar under the second fenestra has also been lost and the jaws have extreme flexibility allowing the snake to swallow its prey whole. Snakes lack moveable eyelids, the eyes being covered by transparent "spectacle" scales. They do not have eardrums but can detect ground vibrations through the bones of their skull. Their forked tongues are used as organs of taste and smell and some species have sensory pits on their heads enabling them to locate warm-blooded prey. Crocodilians are large, low-slung aquatic reptiles with long snouts and large numbers of teeth. The head and trunk are dorso-ventrally flattened and the tail is laterally compressed. It undulates from side to side to force the animal through the water when swimming. The tough keratinized scales provide body armour and some are fused to the skull. The nostrils, eyes and ears are elevated above the top of the flat head enabling them to remain above the surface of the water when the animal is floating. Valves seal the nostrils and ears when it is submerged. Unlike other reptiles, crocodilians have hearts with four chambers allowing complete separation of oxygenated and deoxygenated blood. Bird anatomy Birds are tetrapods but though their hind limbs are used for walking or hopping, their front limbs are wings covered with feathers and adapted for flight. Birds are endothermic, have a high metabolic rate, a light skeletal system and powerful muscles. The long bones are thin, hollow and very light. Air sac extensions from the lungs occupy the centre of some bones. 
The sternum is wide and usually has a keel and the caudal vertebrae are fused. There are no teeth and the narrow jaws are adapted into a horn-covered beak. The eyes are relatively large, particularly in nocturnal species such as owls. They face forwards in predators and sideways in ducks. The feathers are outgrowths of the epidermis and are found in localized bands from where they fan out over the skin. Large flight feathers are found on the wings and tail, contour feathers cover the bird's surface and fine down occurs on young birds and under the contour feathers of water birds. The only cutaneous gland is the single uropygial gland near the base of the tail. This produces an oily secretion that waterproofs the feathers when the bird preens. There are scales on the legs, feet and claws on the tips of the toes. Mammal anatomy Mammals are a diverse class of animals, mostly terrestrial but some are aquatic and others have evolved flapping or gliding flight. They mostly have four limbs, but some aquatic mammals have no limbs or limbs modified into fins, and the forelimbs of bats are modified into wings. The legs of most mammals are situated below the trunk, which is held well clear of the ground. The bones of mammals are well ossified and their teeth, which are usually differentiated, are coated in a layer of prismatic enamel. The teeth are shed once (milk teeth) during the animal's lifetime or not at all, as is the case in cetaceans. Mammals have three bones in the middle ear and a cochlea in the inner ear. They are clothed in hair and their skin contains glands which secrete sweat. Some of these glands are specialized as mammary glands, producing milk to feed the young. Mammals breathe with lungs and have a muscular diaphragm separating the thorax from the abdomen which helps them draw air into the lungs. The mammalian heart has four chambers, and oxygenated and deoxygenated blood are kept entirely separate. Nitrogenous waste is excreted primarily as urea. Mammals are amniotes, and most are viviparous, giving birth to live young. Exceptions to this are the egg-laying monotremes, the platypus and the echidnas of Australia. Most other mammals have a placenta through which the developing foetus obtains nourishment, but in marsupials, the foetal stage is very short and the immature young is born and finds its way to its mother's pouch where it latches on to a teat and completes its development. Human anatomy Humans have the overall body plan of a mammal. Humans have a head, neck, trunk (which includes the thorax and abdomen), two arms and hands, and two legs and feet. Generally, students of certain biological sciences, paramedics, prosthetists and orthotists, physiotherapists, occupational therapists, nurses, podiatrists, and medical students learn gross anatomy and microscopic anatomy from anatomical models, skeletons, textbooks, diagrams, photographs, lectures and tutorials and in addition, medical students generally also learn gross anatomy through practical experience of dissection and inspection of cadavers. The study of microscopic anatomy (or histology) can be aided by practical experience examining histological preparations (or slides) under a microscope. Human anatomy, physiology and biochemistry are complementary basic medical sciences, which are generally taught to medical students in their first year at medical school. 
Human anatomy can be taught regionally or systemically; that is, respectively, studying anatomy by bodily regions such as the head and chest, or studying by specific systems, such as the nervous or respiratory systems. The major anatomy textbook, Gray's Anatomy, has been reorganized from a systems format to a regional format, in line with modern teaching methods. A thorough working knowledge of anatomy is required by physicians, especially surgeons and doctors working in some diagnostic specialties, such as histopathology and radiology. Academic anatomists are usually employed by universities, medical schools or teaching hospitals. They are often involved in teaching anatomy, and research into certain systems, organs, tissues or cells. Invertebrate anatomy Invertebrates constitute a vast array of living organisms ranging from the simplest unicellular eukaryotes such as Paramecium to such complex multicellular animals as the octopus, lobster and dragonfly. They constitute about 95% of the animal species. By definition, none of these creatures has a backbone. The cells of single-cell protozoans have the same basic structure as those of multicellular animals but some parts are specialized into the equivalent of tissues and organs. Locomotion is often provided by cilia or flagella or may proceed via the advance of pseudopodia, food may be gathered by phagocytosis, energy needs may be supplied by photosynthesis and the cell may be supported by an endoskeleton or an exoskeleton. Some protozoans can form multicellular colonies. Metazoans are a multicellular organism, with different groups of cells serving different functions. The most basic types of metazoan tissues are epithelium and connective tissue, both of which are present in nearly all invertebrates. The outer surface of the epidermis is normally formed of epithelial cells and secretes an extracellular matrix which provides support to the organism. An endoskeleton derived from the mesoderm is present in echinoderms, sponges and some cephalopods. Exoskeletons are derived from the epidermis and is composed of chitin in arthropods (insects, spiders, ticks, shrimps, crabs, lobsters). Calcium carbonate constitutes the shells of molluscs, brachiopods and some tube-building polychaete worms and silica forms the exoskeleton of the microscopic diatoms and radiolaria. Other invertebrates may have no rigid structures but the epidermis may secrete a variety of surface coatings such as the pinacoderm of sponges, the gelatinous cuticle of cnidarians (polyps, sea anemones, jellyfish) and the collagenous cuticle of annelids. The outer epithelial layer may include cells of several types including sensory cells, gland cells and stinging cells. There may also be protrusions such as microvilli, cilia, bristles, spines and tubercles. Marcello Malpighi, the father of microscopical anatomy, discovered that plants had tubules similar to those he saw in insects like the silk worm. He observed that when a ring-like portion of bark was removed on a trunk a swelling occurred in the tissues above the ring, and he unmistakably interpreted this as growth stimulated by food coming down from the leaves, and being captured above the ring. Arthropod anatomy Arthropods comprise the largest phylum of invertebrates in the animal kingdom with over a million known species. Insects possess segmented bodies supported by a hard-jointed outer covering, the exoskeleton, made mostly of chitin. 
The segments of the body are organized into three distinct parts, a head, a thorax and an abdomen. The head typically bears a pair of sensory antennae, a pair of compound eyes, one to three simple eyes (ocelli) and three sets of modified appendages that form the mouthparts. The thorax has three pairs of segmented legs, one pair each for the three segments that compose the thorax and one or two pairs of wings. The abdomen is composed of eleven segments, some of which may be fused and houses the digestive, respiratory, excretory and reproductive systems. There is considerable variation between species and many adaptations to the body parts, especially wings, legs, antennae and mouthparts. Spiders a class of arachnids have four pairs of legs; a body of two segments—a cephalothorax and an abdomen. Spiders have no wings and no antennae. They have mouthparts called chelicerae which are often connected to venom glands as most spiders are venomous. They have a second pair of appendages called pedipalps attached to the cephalothorax. These have similar segmentation to the legs and function as taste and smell organs. At the end of each male pedipalp is a spoon-shaped cymbium that acts to support the copulatory organ. Other branches of anatomy Surface anatomy is important as the study of anatomical landmarks that can be readily seen from the exterior contours of the body. It enables medics and veterinarians to gauge the position and anatomy of the associated deeper structures. Superficial is a directional term that indicates that structures are located relatively close to the surface of the body. Comparative anatomy relates to the comparison of anatomical structures (both gross and microscopic) in different animals. Artistic anatomy relates to anatomic studies of body proportions for artistic reasons. History Ancient In 1600 BCE, the Edwin Smith Papyrus, an Ancient Egyptian medical text, described the heart and its vessels, as well as the brain and its meninges and cerebrospinal fluid, and the liver, spleen, kidneys, uterus and bladder. It showed the blood vessels diverging from the heart. The Ebers Papyrus () features a "treatise on the heart", with vessels carrying all the body's fluids to or from every member of the body. Ancient Greek anatomy and physiology underwent great changes and advances throughout the early medieval world. Over time, this medical practice expanded due to a continually developing understanding of the functions of organs and structures in the body. Phenomenal anatomical observations of the human body were made, which contributed to the understanding of the brain, eye, liver, reproductive organs, and nervous system. The Hellenistic Egyptian city of Alexandria was the stepping-stone for Greek anatomy and physiology. Alexandria not only housed the biggest library for medical records and books of the liberal arts in the world during the time of the Greeks but was also home to many medical practitioners and philosophers. Great patronage of the arts and sciences from the Ptolemaic dynasty of Egypt helped raise Alexandria up, further rivalling other Greek states' cultural and scientific achievements. Some of the most striking advances in early anatomy and physiology took place in Hellenistic Alexandria. Two of the most famous anatomists and physiologists of the third century were Herophilus and Erasistratus. 
These two physicians helped pioneer human dissection for medical research, using the cadavers of condemned criminals, which was considered taboo until the Renaissance—Herophilus was recognized as the first person to perform systematic dissections. Herophilus became known for his anatomical works, making impressive contributions to many branches of anatomy and many other aspects of medicine. Some of the works included classifying the system of the pulse, the discovery that human arteries had thicker walls than veins, and that the atria were parts of the heart. Herophilus's knowledge of the human body has provided vital input towards understanding the brain, eye, liver, reproductive organs, and nervous system and characterizing the course of the disease. Erasistratus accurately described the structure of the brain, including the cavities and membranes, and made a distinction between its cerebrum and cerebellum During his study in Alexandria, Erasistratus was particularly concerned with studies of the circulatory and nervous systems. He could distinguish the human body's sensory and motor nerves and believed air entered the lungs and heart, which was then carried throughout the body. His distinction between the arteries and veins—the arteries carrying the air through the body, while the veins carry the blood from the heart was a great anatomical discovery. Erasistratus was also responsible for naming and describing the function of the epiglottis and the heart's valves, including the tricuspid. During the third century, Greek physicians were able to differentiate nerves from blood vessels and tendons and to realize that the nerves convey neural impulses. It was Herophilus who made the point that damage to motor nerves induced paralysis. Herophilus named the meninges and ventricles in the brain, appreciated the division between cerebellum and cerebrum and recognized that the brain was the "seat of intellect" and not a "cooling chamber" as propounded by Aristotle Herophilus is also credited with describing the optic, oculomotor, motor division of the trigeminal, facial, vestibulocochlear and hypoglossal nerves. Incredible feats were made during the third century BCE in both the digestive and reproductive systems. Herophilus discovered and described not only the salivary glands but also the small intestine and liver. He showed that the uterus is a hollow organ and described the ovaries and uterine tubes. He recognized that spermatozoa were produced by the testes and was the first to identify the prostate gland. The anatomy of the muscles and skeleton is described in the Hippocratic Corpus, an Ancient Greek medical work written by unknown authors. Aristotle described vertebrate anatomy based on animal dissection. Praxagoras identified the difference between arteries and veins. Also in the 4th century BCE, Herophilos and Erasistratus produced more accurate anatomical descriptions based on vivisection of criminals in Alexandria during the Ptolemaic period. In the 2nd century, Galen of Pergamum, an anatomist, clinician, writer, and philosopher, wrote the final and highly influential anatomy treatise of ancient times. He compiled existing knowledge and studied anatomy through the dissection of animals. He was one of the first experimental physiologists through his vivisection experiments on animals. Galen's drawings, based mostly on dog anatomy, became effectively the only anatomical textbook for the next thousand years. 
His work was known to Renaissance doctors only through Islamic Golden Age medicine until it was translated from Greek sometime in the 15th century. Medieval to early modern Anatomy developed little from classical times until the sixteenth century; as the historian Marie Boas writes, "Progress in anatomy before the sixteenth century is as mysteriously slow as its development after 1500 is startlingly rapid". Between 1275 and 1326, the anatomists Mondino de Luzzi, Alessandro Achillini and Antonio Benivieni at Bologna carried out the first systematic human dissections since ancient times. Mondino's Anatomy of 1316 was the first textbook in the medieval rediscovery of human anatomy. It describes the body in the order followed in Mondino's dissections, starting with the abdomen, thorax, head, and limbs. It was the standard anatomy textbook for the next century. Leonardo da Vinci (1452–1519) was trained in anatomy by Andrea del Verrocchio. He made use of his anatomical knowledge in his artwork, making many sketches of skeletal structures, muscles and organs of humans and other vertebrates that he dissected. Andreas Vesalius (1514–1564), professor of anatomy at the University of Padua, is considered the founder of modern human anatomy. Originally from Brabant, Vesalius published the influential book De humani corporis fabrica ("the structure of the human body"), a large format book in seven volumes, in 1543. The accurate and intricately detailed illustrations, often in allegorical poses against Italianate landscapes, are thought to have been made by the artist Jan van Calcar, a pupil of Titian. In England, anatomy was the subject of the first public lectures given in any science; these were provided by the Company of Barbers and Surgeons in the 16th century, joined in 1583 by the Lumleian lectures in surgery at the Royal College of Physicians. Late modern Medical schools began to be set up in the United States towards the end of the 18th century. Classes in anatomy needed a continual stream of cadavers for dissection, and these were difficult to obtain. Philadelphia, Baltimore, and New York were all renowned for body snatching activity as criminals raided graveyards at night, removing newly buried corpses from their coffins. A similar problem existed in Britain where demand for bodies became so great that grave-raiding and even anatomy murder were practised to obtain cadavers. Some graveyards were, in consequence, protected with watchtowers. The practice was halted in Britain by the Anatomy Act of 1832, while in the United States, similar legislation was enacted after the physician William S. Forbes of Jefferson Medical College was found guilty in 1882 of "complicity with resurrectionists in the despoliation of graves in Lebanon Cemetery". The teaching of anatomy in Britain was transformed by Sir John Struthers, Regius Professor of Anatomy at the University of Aberdeen from 1863 to 1889. He was responsible for setting up the system of three years of "pre-clinical" academic teaching in the sciences underlying medicine, including especially anatomy. This system lasted until the reform of medical training in 1993 and 2003. As well as teaching, he collected many vertebrate skeletons for his museum of comparative anatomy, published over 70 research papers, and became famous for his public dissection of the Tay Whale. From 1822 the Royal College of Surgeons regulated the teaching of anatomy in medical schools. Medical museums provided examples in comparative anatomy, and were often used in teaching. 
Ignaz Semmelweis investigated puerperal fever and he discovered how it was caused. He noticed that the frequently fatal fever occurred more often in mothers examined by medical students than by midwives. The students went from the dissecting room to the hospital ward and examined women in childbirth. Semmelweis showed that when the trainees washed their hands in chlorinated lime before each clinical examination, the incidence of puerperal fever among the mothers could be reduced dramatically. Before the modern medical era, the primary means for studying the internal structures of the body were dissection of the dead and inspection, palpation, and auscultation of the living. The advent of microscopy opened up an understanding of the building blocks that constituted living tissues. Technical advances in the development of achromatic lenses increased the resolving power of the microscope, and around 1839, Matthias Jakob Schleiden and Theodor Schwann identified that cells were the fundamental unit of organization of all living things. The study of small structures involved passing light through them, and the microtome was invented to provide sufficiently thin slices of tissue to examine. Staining techniques using artificial dyes were established to help distinguish between different tissue types. Advances in the fields of histology and cytology began in the late 19th century along with advances in surgical techniques allowing for the painless and safe removal of biopsy specimens. The invention of the electron microscope brought a significant advance in resolution power and allowed research into the ultrastructure of cells and the organelles and other structures within them. About the same time, in the 1950s, the use of X-ray diffraction for studying the crystal structures of proteins, nucleic acids, and other biological molecules gave rise to a new field of molecular anatomy. Equally important advances have occurred in non-invasive techniques for examining the body's interior structures. X-rays can be passed through the body and used in medical radiography and fluoroscopy to differentiate interior structures that have varying degrees of opaqueness. Magnetic resonance imaging, computed tomography, and ultrasound imaging have all enabled the examination of internal structures in unprecedented detail to a degree far beyond the imagination of earlier generations.
Biology and health sciences
Biology
null
680
https://en.wikipedia.org/wiki/Aardvark
Aardvark
Aardvarks ( ; Orycteropus afer) are medium-sized, burrowing, nocturnal mammals native to Africa. Aardvarks are the only living species of the family Orycteropodidae and the order Tubulidentata. They have a long snout, similar to that of a pig, which is used to sniff out food. They are afrotheres, a clade that also includes elephants, manatees, and hyraxes. They are found over much of the southern two-thirds of the African continent, avoiding areas that are mainly rocky. Nocturnal feeders, aardvarks subsist on ants and termites by using their sharp claws and powerful legs to dig the insects out of their hills. Aardvarks also dig to create burrows in which to live and rear their young. Name and taxonomy Name The aardvark is sometimes colloquially called the "African ant bear", "anteater" (not to be confused with the South American anteaters), or the "Cape anteater" after the Cape of Good Hope. The name "aardvark" is Afrikaans () and comes from earlier Afrikaans . It means "earth pig" or "ground pig" (: , : ), because of its burrowing habits. The name Orycteropus means "burrowing foot", and the name afer refers to Africa. The name of the aardvark's order, Tubulidentata, comes from the tubule-style teeth. Taxonomy The aardvark is not closely related to the pig; rather, it is the sole extant representative of the obscure mammalian order Tubulidentata, in which it is usually considered to form one variable species of the genus Orycteropus, the sole surviving genus in the family Orycteropodidae. The aardvark is not closely related to the South American anteater, despite sharing some characteristics and a superficial resemblance. The similarities are the outcome of convergent evolution. The closest living relatives of the aardvark are the elephant shrews, Tenrecidae, and golden moles. Along with sirenians, hyraxes, elephants, and their extinct relatives, these animals form the superorder Afrotheria. Studies of the brain have shown the similarities with Condylarthra. Evolutionary history Based on his study of fossils, Bryan Patterson has concluded that early relatives of the aardvark appeared in Africa around the end of the Paleocene. The ptolemaiidans, a mysterious clade of mammals with uncertain affinities, may actually be stem-aardvarks, either as a sister clade to Tubulidentata or as a grade leading to true tubulidentates. The first unambiguous tubulidentate was probably Myorycteropus africanus from Kenyan Miocene deposits. The earliest example from the genus Orycteropus was Orycteropus mauritanicus, found in Algeria in deposits from the middle Miocene, with an equally old version found in Kenya. Fossils from the aardvark have been dated to 5 million years, and have been located throughout Europe and the Near East. The mysterious Pleistocene Plesiorycteropus from Madagascar was originally thought to be a tubulidentate that was descended from ancestors that entered the island during the Eocene. However, a number of subtle anatomical differences coupled with recent molecular evidence now lead researchers to believe that Plesiorycteropus is a relative of golden moles and tenrecs that achieved an aardvark-like appearance and ecological niche through convergent evolution. Subspecies The aardvark has seventeen poorly defined subspecies listed: Orycteropus afer afer (Southern aardvark) O. a. adametzi Grote, 1921 (Western aardvark) O. a. aethiopicus Sundevall, 1843 O. a. angolensis Zukowsky & Haltenorth, 1957 O. a. erikssoni Lönnberg, 1906 O. a. faradjius Hatt, 1932 O. a. haussanus Matschie, 1900 O. a. 
kordofanicus Rothschild, 1927 O. a. lademanni Grote, 1911 O. a. leptodon Hirst, 1906 O. a. matschiei Grote, 1921 O. a. observandus Grote, 1921 O. a. ruvanensis Grote, 1921 O. a. senegalensis Lesson, 1840 O. a. somalicus Lydekker, 1908 O. a. wardi Lydekker, 1908 O. a. wertheri Matschie, 1898 (Eastern aardvark) The 1911 Encyclopædia Britannica also mentions O. a. capensis or Cape ant-bear from South Africa. Description The aardvark is vaguely pig-like in appearance. Its body is stout with a prominently arched back and is sparsely covered with coarse hairs. The limbs are of moderate length, with the rear legs being longer than the forelegs. The front feet have lost the pollex (or 'thumb'), resulting in four toes, while the rear feet have all five toes. Each toe bears a large, robust nail which is somewhat flattened and shovel-like, and appears to be intermediate between a claw and a hoof. Whereas the aardvark is considered digitigrade, it appears at times to be plantigrade. This confusion happens because when it squats it stands on its soles. A contributing characteristic to the burrow digging capabilities of aardvarks is an endosteal tissue called compacted coarse cancellous bone (CCCB). The stress and strain resistance provided by CCCB allows aardvarks to create their burrows, ultimately leading to a favourable environment for plants and a variety of animals. Digging is also facilitated by its forearm's unusually stout ulna and radius.An aardvark's weight is typically between . An aardvark's length is usually between , and can reach lengths of when its tail (which can be up to ) is taken into account. It is tall at the shoulder, and has a girth of about . It does not exhibit sexual dimorphism. It is the largest member of the proposed clade Afroinsectiphilia. The aardvark is pale yellowish-grey in colour and often stained reddish-brown by soil. The aardvark's coat is thin, and the animal's primary protection is its tough skin. Its hair is short on its head and tail; however its legs tend to have longer hair. The hair on the majority of its body is grouped in clusters of three to four hairs. The hair surrounding its nostrils is dense to help filter particulate matter out as it digs. Its tail is very thick at the base and gradually tapers. Head The greatly elongated head is set on a short, thick neck, and the end of the snout bears a disc, which houses the nostrils. It contains a thin but complete zygomatic arch. The head of the aardvark contains many unique and different features. One of the most distinctive characteristics of the Tubulidentata is their teeth. Instead of having a pulp cavity, each tooth has a cluster of thin, hexagonal, upright, parallel tubes of vasodentin (a modified form of dentine), with individual pulp canals, held together by cementum. The number of columns is dependent on the size of the tooth, with the largest having about 1,500. The teeth have no enamel coating and are worn away and regrow continuously. The aardvark is born with conventional incisors and canines at the front of the jaw, which fall out and are not replaced. Adult aardvarks have only cheek teeth at the back of the jaw, and have a dental formula of: These remaining teeth are peg-like and rootless and are of unique composition. The teeth consist of 14 upper and 12 lower jaw molars. The nasal area of the aardvark is another unique area, as it contains ten nasal conchae, more than any other placental mammal. The sides of the nostrils are thick with hair. 
The tip of the snout is highly mobile and is moved by modified mimetic muscles. The fleshy dividing tissue between its nostrils probably has sensory functions, but it is uncertain whether they are olfactory or vibratory in nature. Its nose is made up of more turbinate bones than any other mammal, with between nine and 11, compared to dogs with four to five. With a large quantity of turbinate bones, the aardvark has more space for the moist epithelium, which is the location of the olfactory bulb. The nose contains nine olfactory bulbs, more than any other mammal. Its keen sense of smell is not just from the quantity of bulbs in the nose but also in the development of the brain, as its olfactory lobe is very developed. The snout resembles an elongated pig snout. The mouth is small and tubular, typical of species that feed on ants and termites. The aardvark has a long, thin, snakelike, protruding tongue (as much as long) and elaborate structures supporting a keen sense of smell. The ears, which are very effective, are disproportionately long, about long. The eyes are small for its head, and consist only of rods. Digestive system The aardvark's stomach has a muscular pyloric area that acts as a gizzard to grind swallowed food up, thereby rendering chewing unnecessary. Its cecum is large. Both sexes emit a strong smelling secretion from an anal gland. Its salivary glands are highly developed and almost completely ring the neck; their output is what causes the tongue to maintain its tackiness. The female has two pairs of teats in the inguinal region. Genetically speaking, the aardvark is a living fossil, as its chromosomes are highly conserved, reflecting much of the early eutherian arrangement before the divergence of the major modern taxa. Habitat and range Aardvarks are found in sub-Saharan Africa, where suitable habitat (savannas, grasslands, woodlands and bushland) and food (i.e., ants and termites) is available. They spend the daylight hours in dark burrows to avoid the heat of the day. The only major habitat that they are not present in is swamp forest, as the high water table precludes digging to a sufficient depth. They also avoid terrain rocky enough to cause problems with digging. They have been documented as high as in Ethiopia. They can be found throughout sub-Saharan Africa from Ethiopia all the way to Cape of Good Hope in South Africa with few exceptions including the coastal areas of Namibia, Ivory Coast, and Ghana. They are not found in Madagascar. Ecology and behaviour Aardvarks live for up to 23 years in captivity. Its keen hearing warns it of predators: lions, leopards, cheetahs, African wild dogs, hyenas, and pythons. Some humans also hunt aardvarks for meat. Aardvarks can dig fast or run in zigzag fashion to elude enemies, but if all else fails, they will strike with their claws, tail and shoulders, sometimes flipping onto their backs lying motionless except to lash out with all four feet. They are capable of causing substantial damage to unprotected areas of an attacker. They will also dig to escape as they can. Sometimes, when pressed, aardvarks can dig extremely quickly. Feeding The aardvark is nocturnal and is a solitary creature that feeds almost exclusively on ants and termites (myrmecophagy); studies in the Nama Karoo revealed that ants, especially Anoplolepis custodiens, were the predominant prey year-round, followed by termites like Trinervitermes trinervoides. 
In winter, when ant numbers declined, aardvarks relied more on termites, often feeding on epigeal mounds coinciding with the presence of alates, possibly to meet their nutritional needs. They avoid eating the African driver ant and red ants. Due to their stringent diet requirements, they require a large range to survive. The only fruit eaten by aardvarks is the aardvark cucumber. In fact, the cucumber and the aardvark have a symbiotic relationship as they eat the subterranean fruit, then defecate the seeds near their burrows, which then grow rapidly due to the loose soil and fertile nature of the area. The time spent in the intestine of the aardvark helps the fertility of the seed, and the fruit provides needed moisture for the aardvark. An aardvark emerges from its burrow in the late afternoon or shortly after sunset, and forages over a considerable home range encompassing . While foraging for food, the aardvark will keep its nose to the ground and its ears pointed forward, which indicates that both smell and hearing are involved in the search for food. They zig-zag as they forage and will usually not repeat a route for five to eight days as they appear to allow time for the termite nests to recover before feeding on it again. During a foraging period, they will stop to dig a V-shaped trench with their forefeet and then sniff it profusely as a means to explore their location. When a concentration of ants or termites is detected, the aardvark digs into it with its powerful front legs, keeping its long ears upright to listen for predators, and takes up an astonishing number of insects with its long, sticky tongue—as many as 50,000 in one night have been recorded. Its claws enable it to dig through the extremely hard crust of a termite or ant mound quickly. It avoids inhaling the dust by sealing the nostrils. When successful, the aardvark's long (up to ) tongue licks up the insects; the termites' biting, or the ants' stinging attacks are rendered futile by the tough skin. After an aardvark visit at a termite mound, other animals will visit to pick up all the leftovers. Termite mounds alone do not provide enough food for the aardvark, so they look for termites that are on the move. When these insects move, they can form columns long and these tend to provide easy pickings with little effort exerted by the aardvark. These columns are more common in areas of livestock or other hoofed animals. The trampled grass and dung attract termites from the Odontotermes, Microtermes, and Pseudacanthotermes genera. On a nightly basis they tend to be more active during the first portion of night (roughly the four hours between 8:00p.m. and 12:00a.m.); however, they do not seem to prefer bright or dark nights over the other. During adverse weather or if disturbed they will retreat to their burrow systems. They cover between per night; however, some studies have shown that they may traverse as far as in a night. Aardvarks shift their circadian rhythms to more diurnal activity patterns in response to a reduced food supply. This survival tactic may signify an increased risk of imminent mortality. Vocalisation The aardvark is a rather quiet animal. However, it does make soft grunting sounds as it forages and loud grunts as it makes for its tunnel entrance. It makes a bleating sound if frightened. When it is threatened it will make for one of its burrows. If one is not close it will dig a new one rapidly. This new one will be short and require the aardvark to back out when the coast is clear. 
Movement The aardvark is known to be a good swimmer and has been witnessed successfully swimming in strong currents. It can dig a yard of tunnel in about five minutes, but otherwise moves fairly slowly. When leaving the burrow at night, they pause at the entrance for about ten minutes, sniffing and listening. After this period of watchfulness, it will bound out and within seconds it will be away. It will then pause, prick its ears, twisting its head to listen, then jump and move off to start foraging. Aside from digging out ants and termites, the aardvark also excavates burrows in which to live, which generally fall into one of three categories: burrows made while foraging, refuge and resting location, and permanent homes. Temporary sites are scattered around the home range and are used as refuges, while the main burrow is also used for breeding. Main burrows can be deep and extensive, have several entrances and can be as long as . These burrows can be large enough for a person to enter. The aardvark changes the layout of its home burrow regularly, and periodically moves on and makes a new one. The old burrows are an important part of the African wildlife scene. As they are vacated, then they are inhabited by smaller animals like the African wild dog, ant-eating chat, Nycteris thebaica and warthogs. Other animals that use them are hares, mongooses, hyenas, owls, pythons, and lizards. Without these refuges many animals would die during wildfire season. Only mothers and young share burrows; however, the aardvark is known to live in small family groups or as a solitary creature. If attacked in the tunnel, it will escape by digging out of the tunnel thereby placing the fresh fill between it and its predator, or if it decides to fight it will roll onto its back, and attack with its claws. The aardvark has been known to sleep in a recently excavated ant nest, which also serves as protection from its predators. Reproduction It is believed to exhibit polygamous breeding behavior. During mating, the male secures himself to the female's back using his claws, which can occasionally result in noticeable scratches. Males play no role on parental care. Aardvarks pair only during the breeding season; after a gestation period of seven months, one cub weighing around is born during May–July. When born, the young has flaccid ears and many wrinkles. When nursing, it will nurse off each teat in succession. After two weeks, the folds of skin disappear and after three, the ears can be held upright. After 5–6 weeks, body hair starts growing. It is able to leave the burrow to accompany its mother after only two weeks and eats termites at nine weeks, and is weaned between three months and 16 weeks. At six months of age, it is able to dig its own burrows, but it will often remain with the mother until the next mating season, and is sexually mature from approximately two years of age. Conservation Aardvarks were thought to have declining numbers, however, this is possibly because they are not readily seen. There are no definitive counts because of their nocturnal and secretive habits; however, their numbers seem to be stable overall. They are not considered common anywhere in Africa, but due to their large range, they maintain sufficient numbers. There may be a slight decrease in numbers in eastern, northern, and western Africa. Southern African numbers are not decreasing. It has received an official designation from the IUCN as least concern. 
However, they are a species in a precarious situation, as they are so dependent on such specific food; therefore, if a problem arises with the abundance of termites, the species as a whole would be affected drastically. Recent research suggests that aardvarks may be particularly vulnerable to alterations in temperature caused by climate change. Droughts negatively impact the availability of termites and ants, which comprise the bulk of an aardvark's diet. Nocturnal species faced with resource scarcity may increase their diurnal activity to spare the energy costs of staying warm at night, but this comes at the cost of withstanding high temperatures during the day. A study of aardvarks in the Kalahari Desert found that five of the six aardvarks being studied perished following a drought. Aardvarks that survive droughts can take long periods of time to regain health and optimal thermoregulatory physiology, reducing the reproductive potential of the species. Aardvarks handle captivity well. The first zoo to have one was London Zoo in 1869, which had an animal from South Africa. Mythology and popular culture In African folklore, the aardvark is much admired because of its diligent quest for food and its fearless response to soldier ants. Hausa magicians make a charm from the heart, skin, forehead, and nails of the aardvark, which they then proceed to pound together with the root of a certain tree. Wrapped in a piece of skin and worn on the chest, the charm is said to give the owner the ability to pass through walls or roofs at night. The charm is said to be used by burglars and those seeking to visit young girls without their parents' permission. Also, some tribes, such as the Margbetu, Ayanda, and Logo, will use aardvark teeth to make bracelets, which are regarded as good luck charms. The meat, which has a resemblance to pork, is eaten in certain cultures. In the mythology of the Dagbon people of Ghana, the aardvark is believed to possess superpowers; the Dagombas believe this animal can transfigure into humans and interact with them. The ancient Egyptian god Set is usually depicted with the head of an unidentified animal, whose similarity to an aardvark has been noted in scholarship. The titular character and his family from Arthur, an animated television series for children based on a book series and produced by WGBH, shown in more than 180 countries, are aardvarks. In the first book of the series, Arthur's Nose (1976), he has a long, aardvark-like nose, but in later books, his face becomes more rounded. Otis the Aardvark was a puppet character used on Children's BBC programming. An aardvark features as the antagonist in the cartoon The Ant and the Aardvark as well as in the Canadian animated series The Raccoons. The supersonic fighter-bomber F-111/FB-111 was nicknamed the Aardvark because of its long nose resembling the animal; the name also suited its nocturnal missions, flown at very low level and employing ordnance that could penetrate deep into the ground. In the US Navy, the squadron VF-114 was nicknamed the Aardvarks, flying F-4s and then F-14s. The squadron mascot was adapted from the animal in the comic strip B.C., which the F-4 was said to resemble. Cerebus the Aardvark is a 300-issue comic book series by Dave Sim.
Biology and health sciences
Mammals
null
681
https://en.wikipedia.org/wiki/Aardwolf
Aardwolf
The aardwolf (Proteles cristatus) is an insectivorous hyaenid species, native to East and Southern Africa. Its name means "earth-wolf" in Afrikaans and Dutch. It is also called the maanhaar-jackal (Afrikaans for "mane-jackal"), termite-eating hyena and civet hyena, based on its habit of secreting substances from its anal gland, a characteristic shared with the African civet. Unlike many of its relatives in the order Carnivora, the aardwolf does not hunt large animals. It eats insects and their larvae, mainly termites; one aardwolf can lap up as many as 300,000 termites during a single night using its long, sticky tongue. The aardwolf's tongue has adapted to be tough enough to withstand the strong bite of termites. The aardwolf lives in the shrublands of eastern and southern Africa – open lands covered with stunted trees and shrubs. It is nocturnal, resting in burrows during the day and emerging at night to seek food. Taxonomy The aardwolf is generally classified as part of the hyena family Hyaenidae. However, it was formerly placed in its own family Protelidae. Early on, scientists felt that it was merely mimicking the striped hyena, which subsequently led to the creation of Protelidae. Recent studies have suggested that the aardwolf probably diverged from other hyaenids early on; how early is still unclear, as the fossil record and genetic studies disagree by 10 million years. The aardwolf is the only surviving species in the subfamily Protelinae. There is disagreement as to whether the species is monotypic, or can be divided into subspecies. A 2021 study found the genetic differences in eastern and southern aardwolves may be pronounced enough to categorize them as species. A 2006 molecular analysis indicates it is phylogenetically the most basal of the four extant hyaenidae species. Etymology The generic name proteles comes from two words both of Greek origin, protos and teleos which combined means "complete in front" based on the fact that they have five toes on their front feet and four on the rear. The specific name, cristatus, comes from Latin and means "provided with a comb", relating to their mane. Description The aardwolf resembles a much smaller and thinner striped hyena, with a more slender muzzle, black vertical stripes on a coat of yellowish fur, and a long, distinct mane down the midline of the neck and back. It also has one or two diagonal stripes down the fore and hindquarters and several stripes on its legs. The mane is raised during confrontations to make the aardwolf appear larger. It is missing the throat spot that others in the family have. Its lower leg (from the knee down) is all black, and its tail is bushy with a black tip. The aardwolf is about long, excluding its bushy tail, which is about long, and stands about tall at the shoulders. An adult aardwolf weighs approximately , sometimes reaching . The aardwolves in the south of the continent tend to be smaller (about ) than the eastern version (around ). This makes the aardwolf the smallest extant member of the Hyaenidae family. The front feet have five toes each, unlike the four-toed hyena. The skull is similar in shape to those of other hyenas, though much smaller, and its cheek teeth are specialised for eating insects. It still has canines, but unlike other hyenas, these teeth are used primarily for fighting and defense. Its ears, which are large, are very similar to those of the striped hyena. 
As an aardwolf ages, it will typically lose some of its teeth, though this has little impact on its feeding habits due to the softness of the insects that it eats. Distribution and habitat Aardwolves live in open, dry plains and bushland, avoiding mountainous areas. Due to their specific food requirements, they are found only in regions where termites of the family Hodotermitidae occur. Termites of this family depend on dead and withered grass and are most populous in heavily grazed grasslands and savannahs, including farmland. For most of the year, aardwolves spend time in shared territories consisting of up to a dozen dens, which are occupied for six weeks at a time. There are two distinct populations: one in Southern Africa, and another in East and Northeast Africa. The species does not occur in the intermediary miombo forests. An adult pair, along with their most-recent offspring, occupies a territory of . Behavior and ecology Aardwolves are shy and nocturnal, sleeping in burrows by day. They will, on occasion during the winter, become diurnal feeders. This happens during the coldest periods as they then stay in at night to conserve heat. They are primarily solitary animals, though during mating season they form monogamous pairs which occupy a territory with their young. If their territory is infringed upon by another aardwolf, they will chase the intruder away for up to or to the border. If the intruder is caught, which rarely happens, a fight will occur, which is accompanied by soft clucking, hoarse barking, and a type of roar. The majority of incursions occur during mating season, when they can occur once or twice per week. When food is scarce, the stringent territorial system may be abandoned and as many as three pairs may occupy a single territory. The territory is marked by both sexes, as they both have developed anal glands from which they extrude a black substance that is smeared on rocks or grass stalks in -long streaks. Aardwolves also have scent glands on the forefoot and penile pad. They often mark near termite mounds within their territory every 20 minutes or so. If they are patrolling their territorial boundaries, the marking frequency increases drastically, to once every . At this rate, an individual may mark 60 marks per hour, and upwards of 200 per night. An aardwolf pair's territory may have up to 10 dens, and numerous middens where they dig small holes and bury their feces with sand. Their dens are usually abandoned aardvark, springhare, or porcupine dens, or on occasion they are crevices in rocks. They will also dig their own dens, or enlarge dens started by springhares. They typically will only use one or two dens at a time, rotating through all of their dens every six months. During the summer, they may rest outside their den during the night and sleep underground during the heat of the day. Aardwolves are not fast runners nor are they particularly adept at fighting off predators. Therefore, when threatened, the aardwolf may attempt to mislead its foe by doubling back on its tracks. If confronted, it may raise its mane in an attempt to appear more menacing. It also emits a foul-smelling liquid from its anal glands. Feeding The aardwolf feeds primarily on termites and more specifically on Trinervitermes. This genus of termites has different species throughout the aardwolf's range. In East Africa, they eat Trinervitermes bettonianus, in central Africa, they eat Trinervitermes rhodesiensis, and in southern Africa, they eat T. trinervoides. 
Their technique consists of licking them off the ground as opposed to the aardvark, which digs into the mound. They locate their food by sound and also from the scent secreted by the soldier termites. An aardwolf may consume up to 250,000 termites per night using its long, broad, sticky tongue. They do not destroy the termite mound or consume the entire colony, thus ensuring that the termites can rebuild and provide a continuous supply of food. They often memorize the location of such nests and return to them every few months. During certain seasonal events, such as the onset of the rainy season and the cold of midwinter, the primary termites become scarce, so the need for other foods becomes pronounced. During these times, the southern aardwolf will seek out Hodotermes mossambicus, a type of harvester termite active in the afternoon, which explains some of their diurnal behavior in the winter. The eastern aardwolf, during the rainy season, subsists on termites from the genera Odontotermes and Macrotermes. They are also known to feed on other insects and larvae, and, some sources mention, occasionally eggs, small mammals and birds, but these constitute a very small percentage of their total diet. They use their wide tongues to lap surface foraging termites off of the ground and consume large quantities of sand in the process, which aids in digestion in the absence of teeth to break down their food. Unlike other hyenas, aardwolves do not scavenge or kill larger animals. Contrary to popular myths, aardwolves do not eat carrion, and if they are seen eating while hunched over a dead carcass, they are actually eating larvae and beetles. Also, contrary to some sources, they do not like meat, unless it is finely ground or cooked for them. The adult aardwolf was formerly assumed to forage in small groups, but more recent research has shown that they are primarily solitary foragers, necessary because of the scarcity of their insect prey. Their primary source, Trinervitermes, forages in small but dense patches of . While foraging, the aardwolf can cover about per hour, which translates to per summer night and per winter night. Breeding The breeding season varies depending on location, but normally takes place during autumn or spring. In South Africa, breeding occurs in early July. During the breeding season, unpaired male aardwolves search their own territory, as well as others, for a female to mate with. Dominant males also mate opportunistically with the females of less dominant neighboring aardwolves, which can result in conflict between rival males. Dominant males even go a step further and as the breeding season approaches, they make increasingly greater and greater incursions onto weaker males' territories. As the female comes into oestrus, they add pasting to their tricks inside of the other territories, sometimes doing so more in rivals' territories than their own. Females will also, when given the opportunity, mate with the dominant male, which increases the chances of the dominant male guarding "his" cubs with her. Copulation lasts between 1 and 4.5 hours. Gestation lasts between 89 and 92 days, producing two to five cubs (most often two or three) during the rainy season (October–December), when termites are more active. They are born with their eyes open, but initially are helpless, and weigh around . The first six to eight weeks are spent in the den with their parents. The male may spend up to six hours a night watching over the cubs while the mother is out looking for food. 
After three months, they begin supervised foraging, and by four months are normally independent, though they often share a den with their mother until the next breeding season. By the time the next set of cubs is born, the older cubs have moved on. Aardwolves generally achieve sexual maturity at one and a half to two years of age. Conservation The aardwolf has not seen decreasing numbers and is relatively widespread throughout eastern Africa. They are not common throughout their range, as they maintain a density of no more than 1 per square kilometer, if food is abundant. Because of these factors, the IUCN has rated the aardwolf as least concern. In some areas, they are persecuted because of the mistaken belief that they prey on livestock; however, they are actually beneficial to the farmers because they eat termites that are detrimental. In other areas, the farmers have recognized this, but they are still killed, on occasion, for their fur. Dogs and insecticides are also common killers of the aardwolf. In captivity Frankfurt Zoo in Germany was home to the oldest recorded aardwolf in captivity at 18 years and 11 months.
Biology and health sciences
Other carnivora
Animals
682
https://en.wikipedia.org/wiki/Adobe
Adobe
Adobe ( ; ) is a building material made from earth and organic materials. is Spanish for mudbrick. In some English-speaking regions of Spanish heritage, such as the Southwestern United States, the term is used to refer to any kind of earthen construction, or various architectural styles like Pueblo Revival or Territorial Revival. Most adobe buildings are similar in appearance to cob and rammed earth buildings. Adobe is among the earliest building materials, and is used throughout the world. Adobe architecture has been dated to before 5,100 BP. Description Adobe bricks are rectangular prisms small enough that they can quickly air dry individually without cracking. They can be subsequently assembled, with the application of adobe mud to bond the individual bricks into a structure. There is no standard size, with substantial variations over the years and in different regions. In some areas a popular size measured weighing about ; in other contexts the size is weighing about . The maximum sizes can reach up to ; above this weight it becomes difficult to move the pieces, and it is preferred to ram the mud in situ, resulting in a different typology known as rammed earth. Strength In dry climates, adobe structures are extremely durable, and account for some of the oldest existing buildings in the world. Adobe buildings offer significant advantages due to their greater thermal mass, but they are known to be particularly susceptible to earthquake damage if they are not reinforced. Cases where adobe structures were widely damaged during earthquakes include the 1976 Guatemala earthquake, the 2003 Bam earthquake, and the 2010 Chile earthquake. Distribution Buildings made of sun-dried earth are common throughout the world (Middle East, Western Asia, North Africa, West Africa, South America, Southwestern North America, Southwestern and Eastern Europe.). Adobe had been in use by indigenous peoples of the Americas in the Southwestern United States, Mesoamerica, and the Andes for several thousand years. Puebloan peoples built their adobe structures with handsful or basketsful of adobe, until the Spanish introduced them to making bricks. Adobe bricks were used in Spain from the Late Bronze and Iron Ages (eighth century BCE onwards). Its wide use can be attributed to its simplicity of design and manufacture, and economics. Etymology The word adobe has existed for around 4,000 years with relatively little change in either pronunciation or meaning. The word can be traced from the Middle Egyptian () word ḏbt "mud brick" (with vowels unwritten). Middle Egyptian evolved into Late Egyptian and finally to Coptic (), where it appeared as ⲧⲱⲃⲉ tōbə. This was adopted into Arabic as aṭ-ṭawbu or aṭ-ṭūbu, with the definite article al- attached to the root tuba. This was assimilated into the Old Spanish language as adobe , probably via Mozarabic. English borrowed the word from Spanish in the early 18th century, still referring to mudbrick construction. In more modern English usage, the term adobe has come to include a style of architecture popular in the desert climates of North America, especially in New Mexico, regardless of the construction method. Composition An adobe brick is a composite material made of earth mixed with water and an organic material such as straw or dung. The soil composition typically contains sand, silt and clay. Straw is useful in binding the brick together and allowing the brick to dry evenly, thereby preventing cracking due to uneven shrinkage rates through the brick. 
Dung offers the same advantage. The most desirable soil texture for producing the mud of adobe is 15% clay, 10–30% silt, and 55–75% fine sand. Another source quotes 15–25% clay and the remainder sand and coarser particles up to cobbles , with no deleterious effect. Modern adobe is stabilized with either emulsified asphalt or Portland cement up to 10% by weight. No more than half the clay content should be expansive clays, with the remainder non-expansive illite or kaolinite. Too much expansive clay results in uneven drying through the brick, resulting in cracking, while too much kaolinite will make a weak brick. Typically the soils of the Southwest United States, where such construction has been widely used, are an adequate composition. Material properties Adobe walls are load bearing, i.e. they carry their own weight into the foundation rather than by another structure, hence the adobe must have sufficient compressive strength. In the United States, most building codes call for a minimum compressive strength of for the adobe block. Adobe construction should be designed so as to avoid lateral structural loads that would cause bending loads. The building codes require the building sustain a lateral acceleration earthquake load. Such an acceleration will cause lateral loads on the walls, resulting in shear and bending and inducing tensile stresses. To withstand such loads, the codes typically call for a tensile modulus of rupture strength of at least for the finished block. In addition to being an inexpensive material with a small resource cost, adobe can serve as a significant heat reservoir due to the thermal properties inherent in the massive walls typical in adobe construction. In climates typified by hot days and cool nights, the high thermal mass of adobe mediates the high and low temperatures of the day, moderating the temperature of the living space. The massive walls require a large and relatively long input of heat from the sun (radiation) and from the surrounding air (convection) before they warm through to the interior. After the sun sets and the temperature drops, the warm wall will continue to transfer heat to the interior for several hours due to the time-lag effect. Thus, a well-planned adobe wall of the appropriate thickness is very effective at controlling inside temperature through the wide daily fluctuations typical of desert climates, a factor which has contributed to its longevity as a building material. Thermodynamic material properties have significant variation in the literature. Some experiments suggest that the standard consideration of conductivity is not adequate for this material, as its main thermodynamic property is inertia, and conclude that experimental tests should be performed over a longer period of time than usual – preferably with changing thermal jumps. There is an effective R-value for a north facing wall of R0=10 hr ft2 °F/Btu, which corresponds to thermal conductivity k=10 in x 1 ft/12 in /R0=0.33 Btu/(hr ft °F) or 0.57 W/(m K) in agreement with the thermal conductivity reported from another source. To determine the total R-value of a wall, scale R0 by the thickness of the wall in inches. The thermal resistance of adobe is also stated as an R-value for a wall R0=4.1 hr ft2 °F/Btu. Another source provides the following properties: conductivity 0.30 Btu/(hr ft °F) or 0.52 W/(m K); specific heat capacity 0.24 Btu/(lb °F) or 1 kJ/(kg K) and density , giving heat capacity 25.4 Btu/(ft3 °F) or 1700 kJ/(m3 K). 
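The quantities quoted above are tied together by two standard relations: the conductive R-value of a layer is its thickness divided by its conductivity (R = L/k), and the thermal diffusivity is the conductivity divided by the volumetric heat capacity (α = k/(ρc)). The short sketch below is purely illustrative: the conductivity and heat capacity are the SI figures quoted in this passage, while the 0.25 m (roughly 10 in) wall thickness is an assumed example value, not a number from the text.

```python
# Illustrative sketch only. The conductivity and volumetric heat capacity are
# the SI figures quoted in the passage above; the wall thickness is an assumed
# example value (roughly a 10 in / 250 mm wall), not a value from the text.

def r_value(thickness_m: float, conductivity_w_per_mk: float) -> float:
    """Conductive R-value of a single layer, R = L / k, in m^2*K/W."""
    return thickness_m / conductivity_w_per_mk

def thermal_diffusivity(conductivity_w_per_mk: float,
                        volumetric_heat_capacity_j_per_m3k: float) -> float:
    """Thermal diffusivity alpha = k / (rho * c_p), in m^2/s."""
    return conductivity_w_per_mk / volumetric_heat_capacity_j_per_m3k

if __name__ == "__main__":
    k = 0.55        # W/(m K): average of the 0.57 and 0.52 values quoted above
    c_vol = 1.7e6   # J/(m^3 K): the 1700 kJ/(m^3 K) heat capacity quoted above
    wall = 0.25     # m: assumed wall thickness for illustration only

    print(f"R-value of a {wall:.2f} m adobe wall: {r_value(wall, k):.2f} m^2*K/W")
    print(f"Thermal diffusivity: {thermal_diffusivity(k, c_vol):.2e} m^2/s")
```

With these inputs the diffusivity comes out near 3 × 10⁻⁷ m²/s; this is a ballpark figure for orientation only, and the averaging described in the next sentence is the same arithmetic.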
Using the average value of the thermal conductivity as k = 0.32 Btu/(hr ft °F) or 0.55 W/(m K), the thermal diffusivity is calculated to be . Uses Poured and puddled adobe walls Poured and puddled adobe (puddled clay, piled earth), today called cob, is made by placing soft adobe in layers, rather than by making individual dried bricks or using a form. "Puddle" is a general term for a clay or clay and sand-based material worked into a dense, plastic state. These were the oldest methods of building with adobe in the Americas, used until holes in the ground were employed as forms and, later, wooden forms for making individual bricks were introduced by the Spanish. Adobe bricks Bricks made from adobe are usually made by pressing the mud mixture into an open timber frame. In North America, the brick is typically about in size. The mixture is molded into the frame, which is removed after initial setting. After drying for a few hours, the bricks are turned on edge to finish drying. Slow drying in shade reduces cracking. The same mixture, without straw, is used to make mortar and often plaster on interior and exterior walls. Some cultures used lime-based cement for the plaster to protect against rain damage. Depending on the form into which the mixture is pressed, adobe can encompass nearly any shape or size, provided drying is even and the mixture includes reinforcement for larger bricks. Reinforcement can include manure, straw, cement, rebar, or wooden posts. Straw, cement, or manure added to a standard adobe mixture can produce a stronger, more crack-resistant brick. A test is done on the soil content first. To do so, a sample of the soil is mixed into a clear container with some water, creating an almost completely saturated liquid. The container is shaken vigorously for one minute. It is then allowed to settle for a day until the soil has settled into layers. Heavier particles settle out first, sand above, silt above that, and very fine clay and organic matter will stay in suspension for days. After the water has cleared, percentages of the various particles can be determined (a short sketch of this arithmetic appears at the end of this entry). Fifty to 60 percent sand and 35 to 40 percent clay will yield strong bricks. The Cooperative State Research, Education, and Extension Service at New Mexico State University recommends a mix of not more than clay, not less than sand, and never more than silt. During the Great Depression, designer and builder Hugh W. Comstock used cheaper materials and made a specialized adobe brick called "Bitudobe." His first adobe house was built in 1936. In 1948, he published the book Post-Adobe; Simplified Adobe Construction Combining A Rugged Timber Frame And Modern Stabilized Adobe, which described his method of construction, including how to make "Bitudobe." In 1938, he served as an adviser to the architects Franklin & Kump Associates, who built the Carmel High School, which used his Post-adobe system. Adobe wall construction The ground supporting an adobe structure should be compressed, as the weight of an adobe wall is significant and foundation settling may cause cracking of the wall. Footing depth is to be below the ground frost level. The footing and stem wall are commonly thick, respectively. Modern construction codes call for the use of reinforcing steel in the footing and stem wall. Adobe bricks are laid by course. Adobe walls rarely rise above two stories, as they are load bearing and adobe has low structural strength. When creating window and door openings, a lintel is placed on top of the opening to support the bricks above.
Atop the last courses of brick, bond beams made of heavy wood beams or modern reinforced concrete are laid to provide a horizontal bearing plate for the roof beams and to redistribute lateral earthquake loads to shear walls more able to carry the forces. To protect the interior and exterior adobe walls, finishes such as mud plaster, whitewash or stucco can be applied. These protect the adobe wall from water damage, but need to be reapplied periodically. Alternatively, the walls can be finished with other nontraditional plasters that provide longer protection. Bricks made with stabilized adobe generally do not need protection of plasters. Adobe roof The traditional adobe roof has been constructed using a mixture of soil/clay, water, sand and organic materials. The mixture was then formed and pressed into wood forms, producing rows of dried earth bricks that would then be laid across a support structure of wood and plastered into place with more adobe. Depending on the materials available, a roof may be assembled using wood or metal beams to create a framework to begin layering adobe bricks. Depending on the thickness of the adobe bricks, the framework has been preformed using a steel framing and a layering of a metal fencing or wiring over the framework to allow an even load as masses of adobe are spread across the metal fencing like cob and allowed to air dry accordingly. This method was demonstrated with an adobe blend heavily impregnated with cement to allow even drying and prevent cracking. The more traditional flat adobe roofs are functional only in dry climates that are not exposed to snow loads. The heaviest wooden beams, called vigas, lie atop the wall. Across the vigas lie smaller members called latillas and upon those brush is then laid. Finally, the adobe layer is applied. To construct a flat adobe roof, beams of wood were laid to span the building, the ends of which were attached to the tops of the walls. Once the vigas, latillas and brush are laid, adobe bricks are placed. An adobe roof is often laid with bricks slightly larger in width to ensure a greater expanse is covered when placing the bricks onto the roof. Following each individual brick should be a layer of adobe mortar, recommended to be at least thick to make certain there is ample strength between the brick's edges and also to provide a relative moisture barrier during rain. Roof design evolved around 1850 in the American Southwest. of adobe mud was applied on top of the latillas, then of dry adobe dirt applied to the roof. The dirt was contoured into a low slope to a downspout aka a 'canal'. When moisture was applied to the roof the clay particles expanded to create a waterproof membrane. Once a year it was necessary to pull the weeds from the roof and re-slope the dirt as needed. Depending on the materials, adobe roofs can be inherently fire-proof. The construction of a chimney can greatly influence the construction of the roof supports, creating an extra need for care in choosing the materials. The builders can make an adobe chimney by stacking simple adobe bricks in a similar fashion as the surrounding walls. In 1927, the Uniform Building Code (UBC) was adopted in the United States. Local ordinances, referencing the UBC added requirements to building with adobe. 
These included: restriction of building height of adobe structures to 1-story, requirements for adobe mix (compressive and shear strength) and new requirements which stated that every building shall be designed to withstand seismic activity, specifically lateral forces. By the 1980s however, seismic related changes in the California Building Code effectively ended solid wall adobe construction in California; however Post-and-Beam adobe and veneers are still being used. Adobe around the world The largest structure ever made from adobe is the Arg-é Bam built by the Achaemenid Empire. Other large adobe structures are the Huaca del Sol in Peru, with 100 million signed bricks and the ciudellas of Chan Chan and Tambo Colorado, both in Peru.
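As a closing illustration for this entry, the jar test described under "Adobe bricks" above reduces to simple proportions of the settled layers. The sketch below is a hedged example: the layer readings are hypothetical, and the acceptance ranges are just the 50–60% sand and 35–40% clay figures quoted in that passage.

```python
# Hedged sketch of the jar-test arithmetic from the "Adobe bricks" passage.
# The layer thicknesses are hypothetical readings; the acceptance ranges are
# the 50-60% sand and 35-40% clay figures quoted in the text.

def layer_fractions(sand_mm: float, silt_mm: float, clay_mm: float) -> dict:
    """Convert settled layer thicknesses into fractions of the soil column."""
    total = sand_mm + silt_mm + clay_mm
    return {"sand": sand_mm / total, "silt": silt_mm / total, "clay": clay_mm / total}

def likely_strong_brick(fractions: dict) -> bool:
    """Check the fractions against the quoted 50-60% sand / 35-40% clay guidance."""
    return 0.50 <= fractions["sand"] <= 0.60 and 0.35 <= fractions["clay"] <= 0.40

if __name__ == "__main__":
    # Hypothetical jar reading after settling: 55 mm sand, 8 mm silt, 37 mm clay.
    f = layer_fractions(55, 8, 37)
    print({name: round(v, 2) for name, v in f.items()},
          "-> suitable:", likely_strong_brick(f))
```

A real mix would of course also be judged against local practice and the stabilizers discussed above, not against these two ranges alone.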
Technology
Building materials
null
713
https://en.wikipedia.org/wiki/Android%20%28robot%29
Android (robot)
An android is a humanoid robot or other artificial being, often made from a flesh-like material. Historically, androids existed only in the domain of science fiction and were frequently seen in film and television, but advances in robot technology have allowed the design of functional and realistic humanoid robots. Terminology The Oxford English Dictionary traces the earliest use (as "Androides") to Ephraim Chambers' 1728 Cyclopaedia, in reference to an automaton that St. Albertus Magnus allegedly created. By the late 1700s, "androides", elaborate mechanical devices resembling humans performing human activities, were displayed in exhibit halls. The term "android" appears in US patents as early as 1863 in reference to miniature human-like toy automatons. The term android was used in a more modern sense by the French author Auguste Villiers de l'Isle-Adam in his work Tomorrow's Eve (1886), featuring an artificial humanoid robot named Hadaly. The term made an impact into English pulp science fiction starting from Jack Williamson's The Cometeers (1936) and the distinction between mechanical robots and fleshy androids was popularized by Edmond Hamilton's Captain Future stories (1940–1944). Although Karel Čapek's robots in R.U.R. (Rossum's Universal Robots) (1921)—the play that introduced the word robot to the world—were organic artificial humans, the word "robot" has come to primarily refer to mechanical humans, animals, and other beings. The term "android" can mean either one of these, while a cyborg ("cybernetic organism" or "bionic man") would be a creature that is a combination of organic and mechanical parts. The term "droid", popularized by George Lucas in the original Star Wars film and now used widely within science fiction, originated as an abridgment of "android", but has been used by Lucas and others to mean any robot, including distinctly non-human form machines like R2-D2. The word "android" was used in Star Trek: The Original Series episode "What Are Little Girls Made Of?" The abbreviation "andy", coined as a pejorative by writer Philip K. Dick in his novel Do Androids Dream of Electric Sheep?, has seen some further usage, such as within the TV series Total Recall 2070. While the term "android" is used in reference to human-looking robots in general (not necessarily male-looking humanoid robots), a robot with a female appearance can also be referred to as a gynoid. Besides one can refer to robots without alluding to their sexual appearance by calling them anthrobots (a portmanteau of anthrōpos and robot; see anthrobotics) or anthropoids (short for anthropoid robots; the term humanoids is not appropriate because it is already commonly used to refer to human-like organic species in the context of science fiction, futurism and speculative astrobiology). Authors have used the term android in more diverse ways than robot or cyborg. In some fictional works, the difference between a robot and android is only superficial, with androids being made to look like humans on the outside but with robot-like internal mechanics. In other stories, authors have used the word "android" to mean a wholly organic, yet artificial, creation. Other fictional depictions of androids fall somewhere in between. Eric G. 
Wilson, who defines an android as a "synthetic human being", distinguishes between three types of android, based on their body's composition: the mummy type – made of "dead things" or "stiff, inanimate, natural material", such as mummies, puppets, dolls and statues the golem type – made from flexible, possibly organic material, including golems and homunculi the automaton type – made from a mix of dead and living parts, including automatons and robots Although human morphology is not necessarily the ideal form for working robots, the fascination in developing robots that can mimic it can be found historically in the assimilation of two concepts: simulacra (devices that exhibit likeness) and automata (devices that have independence). Projects Several projects aiming to create androids that look, and, to a certain degree, speak or act like a human being have been launched or are underway. Japan Japanese robotics have been leading the field since the 1970s. Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the first android, a full-scale humanoid intelligent robot. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears. And its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth. In 1984, WABOT-2 was revealed, and made a number of improvements. It was capable of playing the organ. Wabot-2 had ten fingers and two feet, and was able to read a score of music. It was also able to accompany a person. In 1986, Honda began its humanoid research and development program, to create humanoid robots capable of interacting successfully with humans. The Intelligent Robotics Lab, directed by Hiroshi Ishiguro at Osaka University, and the Kokoro company demonstrated the Actroid at Expo 2005 in Aichi Prefecture, Japan and released the Telenoid R1 in 2010. In 2006, Kokoro developed a new DER 2 android. The height of the human body part of DER2 is 165 cm. There are 47 mobile points. DER2 can not only change its expression but also move its hands and feet and twist its body. The "air servosystem" which Kokoro developed originally is used for the actuator. As a result of having an actuator controlled precisely with air pressure via a servosystem, the movement is very fluid and there is very little noise. DER2 realized a slimmer body than that of the former version by using a smaller cylinder. Outwardly DER2 has a more beautiful proportion. Compared to the previous model, DER2 has thinner arms and a wider repertoire of expressions. Once programmed, it is able to choreograph its motions and gestures with its voice. The Intelligent Mechatronics Lab, directed by Hiroshi Kobayashi at the Tokyo University of Science, has developed an android head called Saya, which was exhibited at Robodex 2002 in Yokohama, Japan. There are several other initiatives around the world involving humanoid research and development at this time, which will hopefully introduce a broader spectrum of realized technology in the near future. Now Saya is working at the Science University of Tokyo as a guide. The Waseda University (Japan) and NTT docomo's manufacturers have succeeded in creating a shape-shifting robot WD-2. It is capable of changing its face. 
At first, the creators decided the positions of the necessary points to express the outline, eyes, nose, and so on of a certain person. The robot expresses its face by moving all points to the decided positions, they say. The first version of the robot was first developed back in 2003. After that, a year later, they made a couple of major improvements to the design. The robot features an elastic mask made from the average head dummy. It uses a driving system with a 3DOF unit. The WD-2 robot can change its facial features by activating specific facial points on a mask, with each point possessing three degrees of freedom. This one has 17 facial points, for a total of 56 degrees of freedom. As for the materials they used, the WD-2's mask is fabricated with a highly elastic material called Septom, with bits of steel wool mixed in for added strength. Other technical features reveal a shaft driven behind the mask at the desired facial point, driven by a DC motor with a simple pulley and a slide screw. Apparently, the researchers can also modify the shape of the mask based on actual human faces. To "copy" a face, they need only a 3D scanner to determine the locations of an individual's 17 facial points. After that, they are then driven into position using a laptop and 56 motor control boards. In addition, the researchers also mention that the shifting robot can even display an individual's hair style and skin color if a photo of their face is projected onto the 3D Mask. Singapore Prof Nadia Thalmann, a Nanyang Technological University scientist, directed efforts of the Institute for Media Innovation along with the School of Computer Engineering in the development of a social robot, Nadine. Nadine is powered by software similar to Apple's Siri or Microsoft's Cortana. Nadine may become a personal assistant in offices and homes in future, or she may become a companion for the young and the elderly. Assoc Prof Gerald Seet from the School of Mechanical & Aerospace Engineering and the BeingThere Centre led a three-year R&D development in tele-presence robotics, creating EDGAR. A remote user can control EDGAR with the user's face and expressions displayed on the robot's face in real time. The robot also mimics their upper body movements. South Korea KITECH researched and developed EveR-1, an android interpersonal communications model capable of emulating human emotional expression via facial "musculature" and capable of rudimentary conversation, having a vocabulary of around 400 words. She is tall and weighs , matching the average figure of a Korean woman in her twenties. EveR-1's name derives from the Biblical Eve, plus the letter r for robot. EveR-1's advanced computing processing power enables speech recognition and vocal synthesis, at the same time processing lip synchronization and visual recognition by 90-degree micro-CCD cameras with face recognition technology. An independent microchip inside her artificial brain handles gesture expression, body coordination, and emotion expression. Her whole body is made of highly advanced synthetic jelly silicon and with 60 artificial joints in her face, neck, and lower body; she is able to demonstrate realistic facial expressions and sing while simultaneously dancing. In South Korea, the Ministry of Information and Communication had an ambitious plan to put a robot in every household by 2020. 
Several robot cities have been planned for the country: the first will be built in 2016 at a cost of 500 billion won (US$440 million), of which 50 billion is direct government investment. The new robot city will feature research and development centers for manufacturers and part suppliers, as well as exhibition halls and a stadium for robot competitions. The country's new Robotics Ethics Charter will establish ground rules and laws for human interaction with robots in the future, setting standards for robotics users and manufacturers, as well as guidelines on ethical standards to be programmed into robots to prevent human abuse of robots and vice versa. United States Walt Disney and a staff of Imagineers created Great Moments with Mr. Lincoln that debuted at the 1964 New York World's Fair. Dr. William Barry, an Education Futurist and former visiting West Point Professor of Philosophy and Ethical Reasoning at the United States Military Academy, created an AI android character named "Maria Bot". This Interface AI android was named after the infamous fictional robot Maria in the 1927 film Metropolis, as a well-behaved distant relative. Maria Bot is the first AI Android Teaching Assistant at the university level. Maria Bot has appeared as a keynote speaker as a duo with Barry for a TEDx talk in Everett, Washington in February 2020. Resembling a human from the shoulders up, Maria Bot is a virtual being android that has complex facial expressions and head movement and engages in conversation about a variety of subjects. She uses AI to process and synthesize information to make her own decisions on how to talk and engage. She collects data through conversations, direct data inputs such as books or articles, and through internet sources. Maria Bot was built by an international high-tech company for Barry to help improve education quality and eliminate education poverty. Maria Bot is designed to create new ways for students to engage and discuss ethical issues raised by the increasing presence of robots and artificial intelligence. Barry also uses Maria Bot to demonstrate that programming a robot with life-affirming, ethical framework makes them more likely to help humans to do the same. Maria Bot is an ambassador robot for good and ethical AI technology. Hanson Robotics, Inc., of Texas and KAIST produced an android portrait of Albert Einstein, using Hanson's facial android technology mounted on KAIST's life-size walking bipedal robot body. This Einstein android, also called "Albert Hubo", thus represents the first full-body walking android in history. Hanson Robotics, the FedEx Institute of Technology, and the University of Texas at Arlington also developed the android portrait of sci-fi author Philip K. Dick (creator of Do Androids Dream of Electric Sheep?, the basis for the film Blade Runner), with full conversational capabilities that incorporated thousands of pages of the author's works. In 2005, the PKD android won a first-place artificial intelligence award from AAAI. Use in fiction Androids are a staple of science fiction. Isaac Asimov pioneered the fictionalization of the science of robotics and artificial intelligence, notably in his 1950s series I, Robot. One thing common to most fictional androids is that the real-life technological challenges associated with creating thoroughly human-like robots — such as the creation of strong artificial intelligence—are assumed to have been solved. 
Fictional androids are often depicted as mentally and physically equal or superior to humans—moving, thinking and speaking as fluidly as them. The tension between the nonhuman substance and the human appearance—or even human ambitions—of androids is the dramatic impetus behind most of their fictional depictions. Some android heroes seek, like Pinocchio, to become human, as in the film Bicentennial Man, or Data in Star Trek: The Next Generation. Others, as in the film Westworld, rebel against abuse by careless humans. Android hunter Deckard in Do Androids Dream of Electric Sheep? and its film adaptation Blade Runner discovers that his targets appear to be, in some ways, more "human" than he is. The sequel Blade Runner 2049 involves android hunter K, himself an android, discovering the same thing. Android stories, therefore, are not essentially stories "about" androids; they are stories about the human condition and what it means to be human. One aspect of writing about the meaning of humanity is to use discrimination against androids as a mechanism for exploring racism in society, as in Blade Runner. Perhaps the clearest example of this is John Brunner's 1968 novel Into the Slave Nebula, where the blue-skinned android slaves are explicitly shown to be fully human. More recently, the androids Bishop and Annalee Call in the films Aliens and Alien Resurrection are used as vehicles for exploring how humans deal with the presence of an "Other". The 2018 video game Detroit: Become Human also explores how androids are treated as second class citizens in a near future society. Female androids, or "gynoids", are often seen in science fiction, and can be viewed as a continuation of the long tradition of men attempting to create the stereotypical "perfect woman". Examples include the Greek myth of Pygmalion and the female robot Maria in Fritz Lang's Metropolis. Some gynoids, like Pris in Blade Runner, are designed as sex-objects, with the intent of "pleasing men's violent sexual desires", or as submissive, servile companions, such as in The Stepford Wives. Fiction about gynoids has therefore been described as reinforcing "essentialist ideas of femininity", although others have suggested that the treatment of androids is a way of exploring racism and misogyny in society. The 2015 Japanese film Sayonara, starring Geminoid F, was promoted as "the first movie to feature an android performing opposite a human actor".
Technology
Machinery and tools: General
null
734
https://en.wikipedia.org/wiki/Actinopterygii
Actinopterygii
Actinopterygii, members of which are known as ray-finned fish or actinopterygians, is a class of bony fish that comprise over 50% of living vertebrate species. They are so called because of their lightly built fins made of webbings of skin supported by radially extended thin bony spines called lepidotrichia, as opposed to the bulkier, fleshy lobed fins of the sister class Sarcopterygii (lobe-finned fish). Resembling folding fans, the actinopterygian fins can easily change shape and wetted area, providing superior thrust-to-weight ratios per movement compared to sarcopterygian and chondrichthyian fins. The fin rays attach directly to the proximal or basal skeletal elements, the radials, which represent the articulation between these fins and the internal skeleton (e.g., pelvic and pectoral girdles). The vast majority of actinopterygians are teleosts. By species count, they dominate the subphylum Vertebrata, and constitute nearly 99% of the over 30,000 extant species of fish. They are the most abundant nektonic aquatic animals and are ubiquitous throughout freshwater and marine environments from the deep sea to subterranean waters to the highest mountain streams. Extant species can range in size from Paedocypris, at ; to the massive ocean sunfish, at ; and to the giant oarfish, at . The largest ever known ray-finned fish, the extinct Leedsichthys from the Jurassic, has been estimated to have grown to . Characteristics Ray-finned fishes occur in many variant forms. The main features of typical ray-finned fish are shown in the adjacent diagram. The swim bladder is a more derived structure and is used for buoyancy. Except in the bichirs, whose swim bladder, like the lungs of lobe-finned fish, retains the ancestral condition of budding ventrally from the foregut, the swim bladder in ray-finned fishes derives from a dorsal bud above the foregut. In early forms the swim bladder could still be used for breathing, a trait still present in Holostei (bowfins and gars). In some fish, such as the arapaima, the swim bladder has been modified for breathing air again, and in other lineages it has been completely lost. The teleosts have urinary and reproductive tracts that are fully separated, while the Chondrostei have common urogenital ducts, and partially connected ducts are found in Cladistia and Holostei. Ray-finned fishes have many different types of scales, but all teleosts have leptoid scales. The outer part of these scales fans out with bony ridges, while the inner part is crossed with fibrous connective tissue. Leptoid scales are thinner and more transparent than other types of scales, and lack the hardened enamel- or dentine-like layers found in the scales of many other fish. Unlike ganoid scales, which are found in non-teleost actinopterygians, new scales are added in concentric layers as the fish grows. Teleosts and chondrosteans (sturgeons and paddlefish) also differ from the bichirs and holosteans (bowfin and gars) in having gone through a whole-genome duplication (paleopolyploidy). The WGD is estimated to have happened about 320 million years ago in the teleosts, which on average have retained about 17% of the gene duplicates, and around 180 (124–225) million years ago in the chondrosteans. It has since happened again in some teleost lineages, like Salmonidae (80–100 million years ago) and several times independently within the Cyprinidae (in goldfish and common carp as recently as 14 million years ago).
Body shapes and fin arrangements Ray-finned fish vary in size and shape, in their feeding specializations, and in the number and arrangement of their ray-fins. Reproduction In nearly all ray-finned fish, the sexes are separate, and in most species the females spawn eggs that are fertilized externally, typically with the male inseminating the eggs after they are laid. Development then proceeds with a free-swimming larval stage. However, other patterns of ontogeny exist, with one of the commonest being sequential hermaphroditism. In most cases this involves protogyny, fish starting life as females and converting to males at some stage, triggered by some internal or external factor. Protandry, where a fish converts from male to female, is much less common than protogyny. Most families use external rather than internal fertilization. Of the oviparous teleosts, most (79%) do not provide parental care. Viviparity, ovoviviparity, or some form of parental care for eggs, whether by the male, the female, or both parents, is seen in a significant fraction (21%) of the 422 teleost families; no care is likely the ancestral condition. The oldest case of viviparity in ray-finned fish is found in Middle Triassic species of Saurichthys. Viviparity is relatively rare and is found in about 6% of living teleost species; male care is far more common than female care. Male territoriality "preadapts" a species for evolving male parental care. There are a few examples of fish that self-fertilise. The mangrove rivulus is an amphibious, simultaneous hermaphrodite, producing both eggs and sperm and having internal fertilisation. This mode of reproduction may be related to the fish's habit of spending long periods out of water in the mangrove forests it inhabits. Males are occasionally produced at temperatures below and can fertilise eggs that are then spawned by the female. This maintains genetic variability in a species that is otherwise highly inbred. Classification and fossil record Actinopterygii is divided into the subclasses Cladistia, Chondrostei and Neopterygii. The Neopterygii, in turn, is divided into the infraclasses Holostei and Teleostei. During the Mesozoic (Triassic, Jurassic, Cretaceous) and Cenozoic the teleosts in particular diversified widely. As a result, 96% of living fish species are teleosts (40% of all fish species belong to the teleost subgroup Acanthomorpha), while all other groups of actinopterygians represent depauperate lineages. The classification of ray-finned fishes can be summarized as follows: Cladistia, which include the bichirs and reedfish; and Actinopteri, which include the Chondrostei (comprising the Acipenseriformes, the paddlefishes and sturgeons) and the Neopterygii, which in turn include the Teleostei (most living fishes) and the Holostei (the Lepisosteiformes, or gars, and the Amiiformes, or bowfin). The cladogram below shows the main clades of living actinopterygians and their evolutionary relationships to other extant groups of fishes and the four-limbed vertebrates (tetrapods). The latter include mostly terrestrial species but also groups that became secondarily aquatic (e.g. whales and dolphins). Tetrapods evolved from a group of bony fish during the Devonian period. Approximate divergence dates for the different actinopterygian clades (in millions of years, mya) are from Near et al., 2012. 
The polypterids (bichirs and reedfish) are the sister lineage of all other actinopterygians, the Acipenseriformes (sturgeons and paddlefishes) are the sister lineage of Neopterygii, and Holostei (bowfin and gars) are the sister lineage of teleosts. The Elopomorpha (eels and tarpons) appear to be the most basal teleosts. The earliest known fossil actinopterygian is Andreolepis hedei, dating back 420 million years (Late Silurian), remains of which have been found in Russia, Sweden, and Estonia. Crown group actinopterygians most likely originated near the Devonian-Carboniferous boundary. The earliest fossil relatives of modern teleosts are from the Triassic period (Prohalecites, Pholidophorus), although it is suspected that teleosts originated already during the Paleozoic Era. Taxonomy The listing below is a summary of all extinct (indicated by a dagger, †) and living groups of Actinopterygii with their respective taxonomic rank. The taxonomy follows Phylogenetic Classification of Bony Fishes with notes when this differs from Nelson, ITIS and FishBase and extinct groups from Van der Laan 2016 and Xu 2021. Order †?Asarotiformes Schaeffer 1968 Order †?Discordichthyiformes Minikh 1998 Order †?Paphosisciformes Grogan & Lund 2015 Order †?Scanilepiformes Selezneya 1985 Order †Cheirolepidiformes Kazantseva-Selezneva 1977 Order †Paramblypteriformes Heyler 1969 Order †Rhadinichthyiformes Order †Palaeonisciformes Hay 1902 Order †Tarrasiiformes sensu Lund & Poplin 2002 Order †Ptycholepiformes Andrews et al. 1967 Order †Haplolepidiformes Westoll 1944 Order †Aeduelliformes Heyler 1969 Order †Platysomiformes Aldinger 1937 Order †Dorypteriformes Cope 1871 Order †Eurynotiformes Sallan & Coates 2013 Subclass Cladistia Pander 1860 Order †Guildayichthyiformes Lund 2000 Order Polypteriformes Bleeker 1859 (bichirs and reedfishes) Subclass Actinopteri Cope 1972 s.s. 
Order †Elonichthyiformes Kazantseva-Selezneva 1977 Order †Phanerorhynchiformes Order †Bobasatraniiformes Berg 1940 Order †Saurichthyiformes Aldinger 1937 Subclass Chondrostei Müller, 1844 Order †Birgeriiformes Heyler 1969 Order †Chondrosteiformes Aldinger, 1937 Order Acipenseriformes Berg 1940 (includes sturgeons and paddlefishes) Subclass Neopterygii Regan 1923 sensu Xu & Wu 2012 Order †Pholidopleuriformes Berg 1937 Order †Redfieldiiformes Berg 1940 Order †Platysiagiformes Brough 1939 Order †Polzbergiiformes Griffith 1977 Order †Perleidiformes Berg 1937 Order †Louwoichthyiformes Xu 2021 Order †Peltopleuriformes Lehman 1966 Order †Luganoiiformes Lehman 1958 Order †Pycnodontiformes Berg 1937 Infraclass Holostei Müller 1844 Division Halecomorphi Cope 1872 sensu Grande & Bemis 1998 Order †Parasemionotiformes Lehman 1966 Order †Ionoscopiformes Grande & Bemis 1998 Order Amiiformes Huxley 1861 sensu Grande & Bemis 1998 (bowfins) Division Ginglymodi Cope 1871 Order †Dapediiformes Thies & Waschkewitz 2015 Order †Semionotiformes Arambourg & Bertin 1958 Order Lepisosteiformes Hay 1929 (gars) Clade Teleosteomorpha Arratia 2000 sensu Arratia 2013 Order †Prohaleciteiformes Arratia 2017 Division Aspidorhynchei Nelson, Grand & Wilson 2016 Order †Aspidorhynchiformes Bleeker 1859 Order †Pachycormiformes Berg 1937 Infraclass Teleostei Müller 1844 sensu Arratia 2013 Order †?Araripichthyiformes Order †?Ligulelliiformes Taverne 2011 Order †?Tselfatiiformes Nelson 1994 Order †Pholidophoriformes Berg 1940 Order †Dorsetichthyiformes Nelson, Grand & Wilson 2016 Order †Leptolepidiformes Order †Crossognathiformes Taverne 1989 Order †Ichthyodectiformes Bardeck & Sprinkle 1969 Teleocephala de Pinna 1996 s.s. Megacohort Elopocephalai Patterson 1977 sensu Arratia 1999 (Elopomorpha Greenwood et al. 1966) Order Elopiformes Gosline 1960 (ladyfishes and tarpon) Order Albuliformes Greenwood et al. 1966 sensu Forey et al. 1996 (bonefishes) Order Notacanthiformes Goodrich 1909 (halosaurs and spiny eels) Order Anguilliformes Jarocki 1822 sensu Goodrich 1909 (true eels) Megacohort Osteoglossocephalai sensu Arratia 1999 Supercohort Osteoglossocephala sensu Arratia 1999 (Osteoglossomorpha Greenwood et al. 1966) Order †Lycopteriformes Chang & Chou 1977 Order Hiodontiformes McAllister 1968 sensu Taverne 1979 (mooneye and goldeye) Order Osteoglossiformes Regan 1909 sensu Zhang 2004 (bony-tongued fishes) Supercohort Clupeocephala Patterson & Rosen 1977 sensu Arratia 2010 Cohort Otomorpha Wiley & Johnson 2010 (Otocephala; Ostarioclupeomorpha) Subcohort Clupei Wiley & Johnson 2010 (Clupeomorpha Greenwood et al. 1966) Order †Ellimmichthyiformes Grande 1982 Order Clupeiformes Bleeker 1859 (herrings and anchovies) Subcohort Alepocephali Order Alepocephaliformes Marshall 1962 Subcohort Ostariophysi Sagemehl 1885 Section Anotophysa (Rosen & Greenwood 1970) Sagemehl 1885 Order †Sorbininardiformes Taverne 1999 Order Gonorynchiformes Regan 1909 (milkfishes) Section Otophysa Garstang 1931 Order Cypriniformes Bleeker 1859 sensu Goodrich 1909 (barbs, carp, danios, goldfishes, loaches, minnows, rasboras) Order Characiformes Goodrich 1909 (characins, pencilfishes, hatchetfishes, piranhas, tetras, dourado / golden (genus Salminus) and pacu) Order Gymnotiformes Berg 1940 (electric eels and knifefishes) Order Siluriformes Cuvier 1817 sensu Hay 1929 (catfishes) Cohort Euteleosteomorpha (Greenwood et al. 1966) (Euteleostei Greenwood 1967 sensu Johnson & Patterson 1996) Subcohort Lepidogalaxii Order Lepidogalaxiiformes Betancur-Rodriguez et al. 
2013 (salamanderfish) Subcohort Protacanthopterygii Greenwood et al. 1966 sensu Johnson & Patterson 1996 Order Argentiniformes (barreleyes and slickheads) (formerly in Osmeriformes) Order Galaxiiformes Order Salmoniformes Bleeker 1859 sensu Nelson 1994 (salmon and trout) Order Esociformes Bleeker 1859 (pike) Subcohort Stomiati Order Osmeriformes (smelts) Order Stomiiformes Regan 1909 (bristlemouths and marine hatchetfishes) Subcohort Neoteleostei Nelson 1969 Infracohort Ateleopodia Order Ateleopodiformes (jellynose fish) Infracohort Eurypterygia Rosen 1973 Section Aulopa [Cyclosquamata Rosen 1973] Order Aulopiformes Rosen 1973 (Bombay duck and lancetfishes) Section Ctenosquamata Rosen 1973 Subsection Myctophata [Scopelomorpha] Order Myctophiformes Regan 1911 (lanternfishes) Subsection Acanthomorpha Betancur-Rodriguez et al. 2013 Division Lampridacea Betancur-Rodriguez et al. 2013 [Lampridomorpha; Lampripterygii] Order Lampriformes Regan 1909 (oarfish, opah and ribbonfishes) Division Paracanthomorphacea sensu Grande et al. 2013 (Paracanthopterygii Greenwood 1937) Order Percopsiformes Berg 1937 (cavefishes and trout-perches) Order †Sphenocephaliformes Rosen & Patterson 1969 Order Zeiformes Regan 1909 (dories) Order Stylephoriformes Miya et al. 2007 Order Gadiformes Goodrich 1909 (cods) Division Polymixiacea Betancur-Rodriguez et al. 2013 (Polymyxiomorpha; Polymixiipterygii) Order †Pattersonichthyiformes Gaudant 1976 Order †Ctenothrissiformes Berg 1937 Order Polymixiiformes Lowe 1838 (beardfishes) Division Euacanthomorphacea Betancur-Rodriguez et al. 2013 (Euacanthomorpha sensu Johnson & Patterson 1993; Acanthopterygii Gouan 1770 sensu]) Subdivision Berycimorphaceae Betancur-Rodriguez et al. 2013 Order Beryciformes (fangtooths and pineconefishes) (incl. Stephanoberyciformes; Cetomimiformes) Subdivision Holocentrimorphaceae Betancur-Rodriguez et al. 2013 Order Holocentriformes (Soldierfishes) Subdivision Percomorphaceae Betancur-Rodriguez et al. 2013 (Percomorpha sensu Miya et al. 2003; Acanthopteri) Series Ophidiimopharia Betancur-Rodriguez et al. 2013 Order Ophidiiformes (pearlfishes) Series Batrachoidimopharia Betancur-Rodriguez et al. 2013 Order Batrachoidiformes (toadfishes) Series Gobiomopharia Betancur-Rodriguez et al. 2013 Order Kurtiformes(Nurseryfishes and cardinalfishes) Order Gobiiformes(Sleepers and gobies) Series Scombrimopharia Betancur-Rodriguez et al. 2013 Order Syngnathiformes (seahorses, pipefishes, sea moths, cornetfishes and flying gurnards) Order Scombriformes (Tunas and (mackerels) Series Carangimopharia Betancur-Rodriguez et al. 2013 Subseries Anabantaria Betancur-Rodriguez et al. 2014 Order Synbranchiformes (swamp eels) Order Anabantiformes (Labyrinthici) (gouramies, snakeheads, ) Subseries Carangaria Betancur-Rodriguez et al. 2014 Carangaria incertae sedis Order Istiophoriformes Betancur-Rodriguez 2013 (Marlins, swordfishes, billfishes) Order Carangiformes (Jack mackerels, pompanos) Order Pleuronectiformes Bleeker 1859 (flatfishes) Subseries Ovalentaria Smith & Near 2012 (Stiassnyiformes sensu Li et al. 2009) Ovalentaria incertae sedis Order Cichliformes Betancur-Rodriguez et al. 
2013 (Cichlids, Convict blenny, leaf fishes) Order Atheriniformes Rosen 1964 (silversides and rainbowfishes) Order Cyprinodontiformes Berg 1940 (livebearers, killifishes) Order Beloniformes Berg 1940 (flyingfishes and ricefishes) Order Mugiliformes Berg 1940 (mullets) Order Blenniiformes Springer 1993 (Blennies) Order Gobiesociformes Gill 1872 (Clingfishes) Series Eupercaria Betancur-Rodriguez et al. 2014 (Percomorpharia Betancur-Rodriguez et al. 2013) Eupercaria incertae sedis Order Gerreiformes (Mojarras) Order Labriformes (Wrasses and Parrotfishes) Order Caproiformes (Boarfishes) Order Lophiiformes Garman 1899 (Anglerfishes) Order Tetraodontiformes Regan 1929 (Filefishes and pufferfish) Order Centrarchiformes Bleeker 1859 (Sunfishes and mandarin fishes) Order Gasterosteiformes (Sticklebacks and relatives) Order Scorpaeniformes (Lionfishes and relatives) Order Perciformes Bleeker 1859
Biology and health sciences
Fishes
null
748
https://en.wikipedia.org/wiki/Amateur%20astronomy
Amateur astronomy
Amateur astronomy is a hobby in which participants enjoy observing or imaging celestial objects in the sky using the unaided eye, binoculars, or telescopes. Even though scientific research may not be their primary goal, some amateur astronomers contribute to citizen science, such as by monitoring variable stars, double stars, sunspots, or occultations of stars by the Moon or asteroids, or by discovering transient astronomical events, such as comets, galactic novae or supernovae in other galaxies. Amateur astronomers do not use the field of astronomy as their primary source of income or support, and usually have no professional degree in astrophysics or advanced academic training in the subject. Most amateurs are hobbyists, while others have a high degree of experience in astronomy and may often assist and work alongside professional astronomers. Many astronomers have studied the sky throughout history in an amateur framework; however, since the beginning of the twentieth century, professional astronomy has become an activity clearly distinguished from amateur astronomy and associated activities. Amateur astronomers typically view the sky at night, when most celestial objects and astronomical events are visible, but others observe during the daytime by viewing the Sun and solar eclipses. Some just look at the sky using nothing more than their eyes or binoculars, but more dedicated amateurs often use portable telescopes or telescopes situated in their private or club observatories. Amateurs also join amateur astronomical societies, which can advise, educate or guide them towards ways of finding and observing celestial objects. They also promote the science of astronomy among the general public. Objectives Collectively, amateur astronomers observe a variety of celestial objects and phenomena. Common targets of amateur astronomers include the Sun, the Moon, planets, stars, comets, meteor showers, and a variety of deep sky objects such as star clusters, galaxies, and nebulae. Many amateurs like to specialise in observing particular objects, types of objects, or types of events which interest them. One branch of amateur astronomy, amateur astrophotography, involves the taking of photos of the night sky. Astrophotography has become more popular with the introduction of far easier-to-use equipment, including digital cameras, DSLR cameras, and relatively sophisticated, purpose-built, high-quality CCD and CMOS cameras. Most amateur astronomers work at visible wavelengths, but a small minority experiment with wavelengths outside the visible spectrum. An early pioneer of radio astronomy was Grote Reber, an amateur astronomer who constructed the first purpose-built radio telescope in the late 1930s to follow up on the discovery of radio wavelength emissions from space by Karl Jansky. Non-visual amateur astronomy includes the use of infrared filters on conventional telescopes, and also the use of radio telescopes. Some amateur astronomers use home-made radio telescopes, while others use radio telescopes that were originally built for astronomical research but have since been made available for use by amateurs. The One-Mile Telescope is one such example. Common tools Amateur astronomers use a range of instruments to study the sky, depending on a combination of their interests and resources. 
Methods include simply looking at the night sky with the naked eye, using binoculars, and using a variety of optical telescopes of varying power and quality, as well as additional sophisticated equipment, such as cameras, to study light from the sky in both the visual and non-visual parts of the spectrum. To improve observations in both the visual and non-visual parts of the spectrum, amateur astronomers often go to rural areas to get away from light pollution. Commercial telescopes are available, new and used, but it is also common for amateur astronomers to build (or commission the building of) their own custom telescopes. Some people even focus on amateur telescope making as their primary interest within the hobby of amateur astronomy. Although specialized and experienced amateur astronomers tend to acquire more specialized and more powerful equipment over time, relatively simple equipment is often preferred for certain tasks. Binoculars, for instance, although generally of lower power than the majority of telescopes, also tend to provide a wider field of view, which is preferable for looking at some objects in the night sky. Recent iPhone models also include a "night mode" option for taking pictures, which lengthens the exposure time (the period over which light is collected for the image); this makes the camera better at capturing faint light, which is why the mode is used primarily at night. Amateur astronomers also use star charts that, depending on experience and intentions, may range from simple planispheres through to star atlases with detailed charts of the entire night sky. A range of astronomy software is also available and used by amateur astronomers, including software that generates maps of the sky, software to assist with astrophotography, observation scheduling software, and software to perform various calculations pertaining to astronomical phenomena. Amateur astronomers often like to keep records of their observations, which usually take the form of an observing log. Observing logs typically record details about which objects were observed and when, as well as describing the details that were seen. Sketching is sometimes used within logs, and photographic records of observations have also been used in recent times. The information gathered supports studies and interactions between amateur astronomers at yearly gatherings. Although not professionally vetted, it is a way for hobbyists to share their new sightings and experiences. The popularity of imaging among amateurs has led to large numbers of web sites being written by individuals about their images and equipment. Much of the social interaction of amateur astronomy occurs on mailing lists or discussion groups. Discussion group servers host numerous astronomy lists. A great deal of the commerce of amateur astronomy, the buying and selling of equipment, occurs online. Many amateurs use online tools to plan their nightly observing sessions, using tools such as the Clear Sky Chart. Common techniques While a number of interesting celestial objects are readily identified by the naked eye, sometimes with the aid of a star chart, many others are so faint or inconspicuous that technical means are necessary to locate them. Although many methods are used in amateur astronomy, most are variations of a few specific techniques. Star hopping Star hopping is a method often used by amateur astronomers with low-tech equipment such as binoculars or a manually driven telescope. 
It involves the use of maps (or memory) to locate known landmark stars, and "hopping" between them, often with the aid of a finderscope. Because of its simplicity, star hopping is a very common method for finding objects that are close to naked-eye stars. More advanced methods of locating objects in the sky include telescope mounts with setting circles, which allow pointing to targets in the sky using celestial coordinates, and GOTO telescopes, which are fully automated telescopes that are capable of locating objects on demand (having first been calibrated). Mobile apps The advent of mobile applications for smartphones has led to the creation of many dedicated astronomy apps. These apps allow any user to easily locate celestial objects of interest by simply pointing the smartphone in the object's direction in the sky. These apps make use of the phone's built-in hardware, such as its GPS receiver and gyroscope. Useful information about the object being pointed at, such as its celestial coordinates, name, and constellation, is provided for quick reference. Some paid versions give more information. These apps are gradually coming into regular use during observing, for example in the telescope alignment process. Setting circles Setting circles are angular measurement scales that can be placed on the two main rotation axes of some telescopes. Since the widespread adoption of digital setting circles, any classical engraved setting circle is now specifically identified as an "analog setting circle" (ASC). By knowing the coordinates of an object (usually given in equatorial coordinates), the telescope user can use the setting circle to align (i.e., point) the telescope in the appropriate direction before looking through its eyepiece. A computerized setting circle is called a "digital setting circle" (DSC). Although digital setting circles can be used to display a telescope's RA and Dec coordinates, they are not simply a digital read-out of what can be seen on the telescope's analog setting circles. As with go-to telescopes, digital setting circle computers (commercial names include Argo Navis, Sky Commander, and NGC Max) contain databases of tens of thousands of celestial objects and projections of planet positions. To find a celestial object in a telescope equipped with a DSC computer, one does not need to look up the specific RA and Dec coordinates in a book or other resource, and then adjust the telescope to those numerical readings. Rather, the object is chosen from the electronic database, which causes distance values and arrow markers to appear in the display that indicate the distance and direction to move the telescope. The telescope is moved until the two angular distance values reach zero, indicating that the telescope is properly aligned. When both the RA and Dec axes are thus "zeroed out", the object should be in the eyepiece. Many DSCs, like go-to systems, can also work in conjunction with laptop sky programs. Computerized systems provide the further advantage of computing coordinate precession. Traditional printed sources are subtitled by the epoch year, which refers to the positions of celestial objects at a given time to the nearest year (e.g., J2005, J2007). Most such printed sources have been updated for intervals of only about every fifty years (e.g., J1900, J1950, J2000). Computerized sources, on the other hand, are able to calculate the right ascension and declination of the "epoch of date" to the exact instant of observation. 
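The "zeroed out" workflow of a digital setting circle described above can be sketched in a few lines. The following is a minimal, illustrative Python example, not the firmware of any real DSC product; the tiny catalog, the coordinate values, and the function names are all hypothetical and chosen only to show the offset arithmetic.

```python
# Minimal sketch of the digital-setting-circle workflow: pick a target from a
# small catalog, then report how far (and in which direction) to move the
# telescope until both offsets read zero. Catalog values are illustrative only.

CATALOG = {
    "M13": (250.423, 36.460),   # (right ascension, declination) in degrees
    "M31": (10.685, 41.269),
    "M42": (83.822, -5.391),
}

def offsets(target_ra, target_dec, scope_ra, scope_dec):
    """Signed offsets (degrees) from the telescope's current position to the target."""
    d_ra = (target_ra - scope_ra + 180.0) % 360.0 - 180.0   # wrap RA into [-180, 180)
    d_dec = target_dec - scope_dec
    return d_ra, d_dec

def arrows(d_ra, d_dec):
    """Direction hints of the kind a DSC display shows next to the distance values."""
    ew = "E" if d_ra > 0 else "W" if d_ra < 0 else ""
    ns = "N" if d_dec > 0 else "S" if d_dec < 0 else ""
    return ew, ns

# Example: telescope currently pointed near (248.0, 35.0); the user selects M13.
target_ra, target_dec = CATALOG["M13"]
d_ra, d_dec = offsets(target_ra, target_dec, scope_ra=248.0, scope_dec=35.0)
ew, ns = arrows(d_ra, d_dec)
print(f"move {abs(d_ra):.2f} deg {ew} in RA and {abs(d_dec):.2f} deg {ns} in Dec")
# The user slews until both numbers reach zero; the object should then be in the eyepiece.
```

A real unit would read the mount's axis encoders continuously and refresh these numbers in real time, and, as noted above, a computerized system would also precess the catalog coordinates to the epoch of date before computing the offsets.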
GoTo telescopes GOTO telescopes have become more popular since the 1980s as technology has improved and prices have been reduced. With these computer-driven telescopes, the user typically enters the name of the item of interest and the mechanics of the telescope point the telescope towards that item automatically. They have several notable advantages for amateur astronomers intent on research. For example, GOTO telescopes tend to be faster for locating items of interest than star hopping, allowing more time for studying of the object. GOTO also allows manufacturers to add equatorial tracking to mechanically simpler alt-azimuth telescope mounts, allowing them to produce an overall less expensive product. GOTO telescopes usually have to be calibrated using alignment stars to provide accurate tracking and positioning. However, several telescope manufacturers have recently developed telescope systems that are calibrated with the use of built-in GPS, decreasing the time it takes to set up a telescope at the start of an observing session. Remote-controlled telescopes With the development of fast internet in the last part of the 20th century along with advances in computer controlled telescope mounts and CCD cameras, "remote telescope" astronomy is now a viable means for amateur astronomers not aligned with major telescope facilities to partake in research and deep sky imaging. This enables anyone to control a telescope a great distance away in a dark location. The observer can image through the telescope using CCD cameras. The digital data collected by the telescope is then transmitted and displayed to the user by means of the Internet. An example of a digital remote telescope operation for public use via the Internet is the Bareket observatory, and there are telescope farms in New Mexico, Australia and Atacama in Chile. Imaging techniques Amateur astronomers engage in many imaging techniques including film, DSLR, LRGB, and CCD astrophotography. Because CCD imagers are linear, image processing may be used to subtract away the effects of light pollution, which has increased the popularity of astrophotography in urban areas. Narrowband filters may also be used to minimize light pollution. Scientific research Scientific research is most often not the main goal for many amateur astronomers, unlike professional astronomers. Work of scientific merit is possible, however, and many amateurs successfully contribute to the knowledge base of professional astronomers. Astronomy is sometimes promoted as one of the few remaining sciences for which amateurs can still contribute useful data. To recognize this, the Astronomical Society of the Pacific annually gives Amateur Achievement Awards for significant contributions to astronomy by amateurs. The majority of scientific contributions by amateur astronomers are in the area of data collection. In particular, this applies where large numbers of amateur astronomers with small telescopes are more effective than the relatively small number of large telescopes that are available to professional astronomers. Several organizations, such as the American Association of Variable Star Observers and the British Astronomical Association, exist to help coordinate these contributions. 
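The imaging-techniques paragraph above notes that CCD data are linear, which is what makes light-pollution removal a matter of simple frame arithmetic. Below is a minimal NumPy sketch under that assumption; the synthetic image and the single-valued sky estimate are purely illustrative, not a production astrophotography pipeline.

```python
import numpy as np

# Because a CCD responds linearly, a roughly uniform sky-glow pedestal adds the
# same offset to every pixel and can simply be subtracted. Synthetic data only.

rng = np.random.default_rng(0)
image = rng.poisson(lam=200.0, size=(512, 512)).astype(float)  # sky background ~200 counts
image[250:260, 250:260] += 500.0                               # a faint "star"

sky_level = np.median(image)               # robust estimate of the light-pollution pedestal
background_subtracted = image - sky_level  # linear data: subtraction preserves source flux

print(f"estimated sky level: {sky_level:.1f} counts")
print(f"peak after subtraction: {background_subtracted.max():.1f} counts")
```

In practice, amateurs typically subtract dark frames and divide by flat fields before background removal, and fit a gradient rather than a single median when the sky glow varies across the field; narrowband filters, mentioned above, reduce the pedestal before it ever reaches the sensor.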
Amateur astronomers often contribute toward activities such as monitoring the changes in brightness of variable stars and supernovae, helping to track asteroids, and observing occultations to determine both the shape of asteroids and the shape of the terrain on the apparent edge of the Moon as seen from Earth. With more advanced equipment, but still inexpensive in comparison to professional setups, amateur astronomers can measure the light spectrum emitted from astronomical objects, which can yield high-quality scientific data if the measurements are performed with due care. A relatively recent role for amateur astronomers is searching for overlooked phenomena (e.g., Kreutz Sungrazers) in the vast libraries of digital images and other data captured by Earth- and space-based observatories, much of which is available over the Internet. In the past and present, amateur astronomers have played a major role in discovering new comets. Recently, however, funding of projects such as the Lincoln Near-Earth Asteroid Research and Near Earth Asteroid Tracking projects has meant that most comets are now discovered by automated systems long before it is possible for amateurs to see them. Societies There are a large number of amateur astronomical societies around the world that serve as meeting points for those interested in amateur astronomy. Members range from active observers with their own equipment to "armchair astronomers" who are simply interested in the topic. Societies range widely in their goals and activities, which may depend on a variety of factors such as geographic spread, local circumstances, size, and membership. For example, a small local society located in dark countryside may focus on practical observing and star parties, whereas a large one based in a major city might have numerous members but be limited by light pollution and thus hold regular indoor meetings with guest speakers instead. Major national or international societies generally publish their own academic journal or newsletter, and some hold large multi-day meetings akin to a scientific conference or convention. They may also have sections devoted to particular topics, such as lunar observation or amateur telescope making. Notable contributions by amateur astronomers There have been many significant scientific, technological, and cultural contributions made by amateur astronomers: George Alcock, one of the most successful visual discoverers of comets and novae. John E. Bortle, who authored "Comet Digest" in Sky and Telescope magazine and the monthly AAVSO circular for the American Association of Variable Star Observers, and created the Bortle scale to quantify the darkness of the night sky. Robert Burnham Jr. (1931–1993), author of the Celestial Handbook. Andrew Ainslie Common (1841–1903), who built his own very large reflecting telescopes and demonstrated that photography could record astronomical features invisible to the human eye. Robert E. Cox (1917–1989), who conducted the "Gleanings for ATMs" column in Sky & Telescope magazine for 21 years. John Dobson (1915–2014), promoter of astronomy whose name is associated with the Dobsonian telescope. Robert Owen Evans (1937–2022), an amateur astronomer who holds the all-time record for visual discoveries of supernovae. Giuseppe Donatiello, who discovered eleven nearby dwarf galaxies in the Local Volume, including the first galaxy to be named after its non-professional discoverer. Will Hay, the famous comedian and actor, who discovered a white spot on Saturn. 
Walter Scott Houston (1912–1993), who wrote the "Deep-Sky Wonders" column in Sky & Telescope magazine for almost 50 years. Albert G. Ingalls (1888–1958), editor of Amateur Telescope Making, Vols. 1–3 and "The Amateur Scientist". David H. Levy, who discovered or co-discovered 22 comets, including Comet Shoemaker-Levy 9, the most for any individual. Sir Patrick Moore (1923–2012), presenter of the BBC's long-running The Sky at Night and author of many books on astronomy. Russell W. Porter (1871–1949), who founded Stellafane and has been referred to as a "founder" of amateur telescope making. Grote Reber (1911–2002), pioneer of radio astronomy, who constructed the first purpose-built radio telescope and conducted the first sky survey at radio frequencies. Citizen science projects Amateur astronomers and other non-professionals make contributions through ongoing citizen science projects: XO Project, an international team of amateur and professional astronomers tasked with identifying extrasolar planets. Many amateur astronomers contribute to scientific discoveries as part of the citizen science Zooniverse project. Prizes recognizing amateur astronomers Amateur Achievement Award of the Astronomical Society of the Pacific Chambliss Amateur Achievement Award
Physical sciences
Astronomy basics
Astronomy
765
https://en.wikipedia.org/wiki/Abortion
Abortion
Abortion is the termination of a pregnancy by removal or expulsion of an embryo or fetus. An abortion that occurs without intervention is known as a miscarriage or "spontaneous abortion"; these occur in approximately 30% to 40% of all pregnancies. When deliberate steps are taken to end a pregnancy, it is called an induced abortion, or less frequently "induced miscarriage". The unmodified word abortion generally refers to an induced abortion. The most common reasons given for having an abortion are for birth-timing and limiting family size. Other reasons reported include maternal health, an inability to afford a child, domestic violence, lack of support, feeling they are too young, wishing to complete education or advance a career, and not being able or willing to raise a child conceived as a result of rape or incest. When done legally in industrialized societies, induced abortion is one of the safest procedures in medicine. Unsafe abortions—those performed by people lacking the necessary skills, or in inadequately resourced settings—are responsible for between 5–13% of maternal deaths, especially in the developing world. However, medication abortions that are self-managed are highly effective and safe throughout the first trimester. Public health data show that making safe abortion legal and accessible reduces maternal deaths. Modern methods use medication or surgery for abortions. The drug mifepristone (aka RU-486) in combination with prostaglandin appears to be as safe and effective as surgery during the first and second trimesters of pregnancy. The most common surgical technique involves dilating the cervix and using a suction device. Birth control, such as the pill or intrauterine devices, can be used immediately following abortion. When performed legally and safely on a woman who desires it, an induced abortion does not increase the risk of long-term mental or physical problems. In contrast, unsafe abortions performed by unskilled individuals, with hazardous equipment, or in unsanitary facilities cause between 22,000 and 44,000 deaths and 6.9 million hospital admissions each year. The World Health Organization states that "access to legal, safe and comprehensive abortion care, including post-abortion care, is essential for the attainment of the highest possible level of sexual and reproductive health". Historically, abortions have been attempted using herbal medicines, sharp tools, forceful massage, or other traditional methods. Around 73 million abortions are performed each year in the world, with about 45% done unsafely. Abortion rates changed little between 2003 and 2008, before which they decreased for at least two decades as access to family planning and birth control increased. , 37% of the world's women had access to legal abortions without limits as to reason. Countries that permit abortions have different limits on how late in pregnancy abortion is allowed. Abortion rates are similar between countries that restrict abortion and countries that broadly allow it, though this is partly because countries which restrict abortion tend to have higher unintended pregnancy rates. Globally, there has been a widespread trend towards greater legal access to abortion since 1973, but there remains debate with regard to moral, religious, ethical, and legal issues. Those who oppose abortion often argue that an embryo or fetus is a person with a right to life, and thus equate abortion with murder. Those who support abortion's legality often argue that it is a woman's reproductive right. 
Others favor legal and accessible abortion as a public health measure. Abortion laws and views of the procedure are different around the world. In some countries abortion is legal and women have the right to make the choice about abortion. In some areas, abortion is legal only in specific cases such as rape, incest, fetal defects, poverty, and risk to a woman's health. Types Induced An induced abortion is a medical procedure to end a pregnancy. In present-day English, the term abortion, when used without further qualification, generally refers to induced abortion. A pregnancy can be intentionally aborted in several ways. The abortion method depends upon the gestational age of the embryo or fetus, which gains mass as the pregnancy progresses. Abortion laws, regional availability, and the personal preference of the woman and her doctor may inform the woman's choice of a specific abortion procedure. Abortions can be characterized as either therapeutic or elective. When an abortion is performed for medical reasons, the procedure is referred to as a therapeutic abortion. Medical reasons for therapeutic abortion include saving the life of the pregnant woman, preventing harm to the woman's physical or mental health, preventing the birth of a child who will have a significantly increased chance of mortality or morbidity, and reducing the number of fetuses to lessen health risks associated with multiple pregnancy. An abortion is referred to as elective or voluntary when it is performed at the request of the woman for non-medical reasons. Confusion sometimes arises over the term elective because "elective surgery" generally refers to all scheduled surgery, whether medically necessary or not. About one in five pregnancies worldwide ends with an induced abortion. Most abortions result from unintended pregnancies. In the United Kingdom, 1 to 2% of abortions are done because of genetic problems in the fetus. Spontaneous Miscarriage, also known as spontaneous abortion, is the unintentional expulsion of an embryo or fetus before the 24th week of gestation. A pregnancy that ends before 37 weeks of gestation resulting in a live-born infant is a "premature birth" or a "preterm birth". When a fetus dies in utero after viability, or during delivery, it is usually termed "stillborn". Premature births and stillbirths are generally not considered to be miscarriages, although usage of these terms can sometimes overlap. Studies of pregnant women in the US and China have shown that between 40% and 60% of embryos do not progress to birth. The vast majority of miscarriages occur before the woman is aware that she is pregnant, and many pregnancies spontaneously abort before medical practitioners can detect an embryo. Between 15% and 30% of known pregnancies end in clinically apparent miscarriage, depending upon the age and health of the pregnant woman. 80% of these spontaneous abortions happen in the first trimester. The most common cause of spontaneous abortion during the first trimester is chromosomal abnormalities of the embryo or fetus, accounting for at least 50% of sampled early pregnancy losses. Other causes include vascular disease (such as lupus), diabetes, other hormonal problems, infection, and abnormalities of the uterus. Advancing maternal age and a woman's history of previous spontaneous abortions are the two leading factors associated with a greater risk of spontaneous abortion. 
A spontaneous abortion can also be caused by accidental trauma; intentional trauma or stress to cause miscarriage is considered induced abortion or feticide. Methods Medical Medical abortions are those induced by abortifacient pharmaceuticals. Medical abortion became an alternative method of abortion with the availability of prostaglandin analogs in the 1970s and the antiprogestogen mifepristone (also known as RU-486) in the 1980s. The most common early first trimester medical abortion regimens use mifepristone in combination with misoprostol (or sometimes another prostaglandin analog, gemeprost) up to 10 weeks (70 days) gestational age, methotrexate in combination with a prostaglandin analog up to 7 weeks gestation, or a prostaglandin analog alone. Mifepristone–misoprostol combination regimens work faster and are more effective at later gestational ages than methotrexate–misoprostol combination regimens, and combination regimens are more effective than misoprostol alone, particularly in the second trimester. Medical abortion regimens involving mifepristone followed by misoprostol in the cheek between 24 and 48 hours later are effective when performed before 70 days' gestation. In very early abortions, up to 7 weeks gestation, medical abortion using a mifepristone–misoprostol combination regimen is considered to be more effective than surgical abortion (vacuum aspiration), especially when clinical practice does not include detailed inspection of aspirated tissue. Early medical abortion regimens using mifepristone, followed 24–48 hours later by buccal or vaginal misoprostol are 98% effective up to 9 weeks gestational age; from 9 to 10 weeks efficacy decreases modestly to 94%. If medical abortion fails, surgical abortion must be used to complete the procedure. Early medical abortions account for the majority of abortions before 9 weeks gestation in Britain, France, Switzerland, United States, and the Nordic countries. Medical abortion regimens using mifepristone in combination with a prostaglandin analog are the most common methods used for second trimester abortions in Canada, most of Europe, China and India, in contrast to the United States where 96% of second trimester abortions are performed surgically by dilation and evacuation. A 2020 Cochrane Systematic Review concluded that providing women with medications to take home to complete the second stage of the procedure for an early medical abortion results in an effective abortion. Further research is required to determine if self-administered medical abortion is as safe as provider-administered medical abortion, where a health care professional is present to help manage the medical abortion. Safely permitting women to self-administer abortion medication has the potential to improve access to abortion. The review also noted a research gap concerning methods to support women who take medication at home for a self-administered abortion. Surgical Up to 15 weeks' gestation, suction-aspiration or vacuum aspiration are the most common surgical methods of induced abortion. Manual vacuum aspiration (MVA) consists of removing the fetus or embryo, placenta, and membranes by suction using a manual syringe, while electric vacuum aspiration (EVA) uses an electric pump. Both techniques can be used very early in pregnancy. MVA can be used up to 14 weeks but is more often used earlier in the U.S. EVA can be used later. 
MVA, also known as "mini-suction" and "menstrual extraction", or EVA can be used in very early pregnancy when cervical dilation may not be required. Dilation and curettage (D&C) refers to opening the cervix (dilation) and removing tissue (curettage) via suction or sharp instruments. D&C is a standard gynecological procedure performed for a variety of reasons, including examination of the uterine lining for possible malignancy, investigation of abnormal bleeding, and abortion. The World Health Organization recommends sharp curettage only when suction aspiration is unavailable. Dilation and evacuation (D&E), used after 12 to 16 weeks, consists of opening the cervix and emptying the uterus using surgical instruments and suction. D&E is performed vaginally and does not require an incision. Intact dilation and extraction (D&X) refers to a variant of D&E sometimes used after 18 to 20 weeks when removal of an intact fetus improves surgical safety or for other reasons. Abortion may also be performed surgically by hysterotomy or gravid hysterectomy. Hysterotomy abortion is a procedure similar to a caesarean section and is performed under general anesthesia. It requires a smaller incision than a caesarean section and can be used during later stages of pregnancy. Gravid hysterectomy refers to removal of the whole uterus while still containing the pregnancy. Hysterotomy and hysterectomy are associated with much higher rates of maternal morbidity and mortality than D&E or induction abortion. First trimester procedures can generally be performed using local anesthesia, while second trimester methods may require deep sedation or general anesthesia. Labor induction abortion In places lacking the necessary medical skill for dilation and extraction, or when preferred by practitioners, an abortion can be induced by first inducing labor and then inducing fetal demise if necessary. This is sometimes called "induced miscarriage". This procedure may be performed from 13 weeks gestation to the third trimester. Although it is very uncommon in the United States, more than 80% of induced abortions throughout the second trimester are labor-induced abortions in Sweden and other nearby countries. Only limited data are available comparing labor-induced abortion with the dilation and extraction method. Unlike D&E, labor-induced abortions after 18 weeks may be complicated by the occurrence of brief fetal survival, which may be legally characterized as live birth. For this reason, labor-induced abortion is legally risky in the United States. Other methods Historically, a number of herbs reputed to possess abortifacient properties have been used in folk medicine. Such herbs include tansy, pennyroyal, black cohosh, and the now-extinct silphium. In 1978, one woman in Colorado died and another developed organ damage when they attempted to terminate their pregnancies by taking pennyroyal oil. Because the indiscriminant use of herbs as abortifacients can cause serious—even lethal—side effects, such as multiple organ failure, such use is not recommended by physicians. Abortion is sometimes attempted by causing trauma to the abdomen. The degree of force, if severe, can cause serious internal injuries without necessarily succeeding in inducing miscarriage. In Southeast Asia, there is an ancient tradition of attempting abortion through forceful abdominal massage. One of the bas reliefs decorating the temple of Angkor Wat in Cambodia depicts a demon performing such an abortion upon a woman who has been sent to the underworld. 
Reported methods of unsafe, self-induced abortion include misuse of misoprostol and insertion of non-surgical implements such as knitting needles and clothes hangers into the uterus. These and other methods to terminate pregnancy may be called "induced miscarriage". Such methods are rarely used in countries where surgical abortion is legal and available. Safety The health risks of abortion depend principally on how, and under what conditions, the procedure is performed. The World Health Organization (WHO) defines unsafe abortions as those performed by unskilled individuals, with hazardous equipment, or in unsanitary facilities. Legal abortions performed in the developed world are among the safest procedures in medicine. According to a 2012 study in Obstetrics & Gynecology, in the United States the risk of maternal mortality is 14 times lower after induced abortion than after childbirth. The CDC estimated in 2019 that US pregnancy-related mortality was 17.2 maternal deaths per 100,000 live births, while the US abortion mortality rate was 0.43 maternal deaths per 100,000 procedures. In the UK, guidelines of the Royal College of Obstetricians and Gynaecologists state that "Women should be advised that abortion is generally safer than continuing a pregnancy to term." Worldwide, on average, abortion is safer than carrying a pregnancy to term. A 2007 study reported that "26% of all pregnancies worldwide are terminated by induced abortion," whereas "deaths from improperly performed [abortion] procedures constitute 13% of maternal mortality globally." In Indonesia in 2000 it was estimated that 2 million pregnancies ended in abortion, 4.5 million pregnancies were carried to term, and 14–16 percent of maternal deaths resulted from abortion. In the US from 2000 to 2009, abortion had a mortality rate lower than that of plastic surgery, lower than or similar to that of running a marathon, and about equivalent to that of traveling in a passenger car. Five years after seeking abortion services, women who gave birth after being denied an abortion reported worse health than women who had either first or second trimester abortions. The risk of abortion-related mortality increases with gestational age, but remains lower than that of childbirth. Outpatient abortion is as safe from 64 to 70 days' gestation as it is before 63 days. Safety of abortion methods There is little difference in terms of safety and efficacy between medical abortion using a combined regimen of mifepristone and misoprostol and surgical abortion (vacuum aspiration) in early first trimester abortions up to 10 weeks gestation. Medical abortion using the prostaglandin analog misoprostol alone is less effective and more painful than medical abortion using a combined regimen of mifepristone and misoprostol or surgical abortion. Safety and gestational age Vacuum aspiration in the first trimester is the safest method of surgical abortion, and can be performed in a primary care office, abortion clinic, or hospital. Complications, which are rare, can include uterine perforation, pelvic infection, and retained products of conception requiring a second procedure to evacuate. Infections account for one-third of abortion-related deaths in the United States. The rate of complications of vacuum aspiration abortion in the first trimester is similar regardless of whether the procedure is performed in a hospital, surgical center, or office. 
Preventive antibiotics (such as doxycycline or metronidazole) are typically given before abortion procedures, as they are believed to substantially reduce the risk of postoperative uterine infection; however, antibiotics are not routinely given with abortion pills. The rate of failed procedures does not appear to vary significantly depending on whether the abortion is performed by a doctor or a mid-level practitioner. Complications after second trimester abortion are similar to those after first trimester abortion, and depend somewhat on the method chosen. The risk of death from abortion approaches roughly half the risk of death from childbirth the farther along a woman is in pregnancy; from one in a million before 9 weeks gestation to nearly one in ten thousand at 21 weeks or more (as measured from the last menstrual period). It appears that having had a prior surgical uterine evacuation (whether because of induced abortion or treatment of miscarriage) correlates with a small increase in the risk of preterm birth in future pregnancies. The studies supporting this did not control for factors not related to abortion or miscarriage, and hence the causes of this correlation have not been determined, although multiple possibilities have been suggested. Mental health Current evidence finds no relationship between most induced abortions and mental health problems other than those expected for any unwanted pregnancy. A report by the American Psychological Association concluded that a woman's first abortion is not a threat to mental health when carried out in the first trimester, with such women no more likely to have mental-health problems than those carrying an unwanted pregnancy to term; the mental-health outcome of a woman's second or greater abortion is less certain. Some older reviews concluded that abortion was associated with an increased risk of psychological problems; however, later reviews of the medical literature found that previous reviews did not use an appropriate control group. When a control group is utilized, receiving abortion is not associated with adverse psychological outcomes. However, women seeking abortion who are denied access to abortion have an increase in anxiety after the denial. Although some studies show negative mental-health outcomes in women who choose abortions after the first trimester because of fetal abnormalities, more rigorous research would be needed to show this conclusively. Some proposed negative psychological effects of abortion have been referred to by anti-abortion advocates as a separate condition called "post-abortion syndrome", but this is not recognized by medical or psychological professionals in the United States. A 2020 long term-study among US women found that about 99% of women felt that they made the right decision five years after they had an abortion. Relief was the primary emotion with few women feeling sadness or guilt. Social stigma was a main factor predicting negative emotions and regret years later. The researchers also stated: "These results add to the scientific evidence that emotions about an abortion are associated with personal and social context, and are not a product of the abortion procedure itself." Safety in the abortion debate Some purported risks of abortion are promoted primarily by anti-abortion groups, but lack scientific support. For example, the question of a link between induced abortion and breast cancer has been investigated extensively. 
Major medical and scientific bodies (including the WHO, National Cancer Institute, American Cancer Society, Royal College of OBGYN and American Congress of OBGYN) have concluded that abortion does not cause breast cancer. In the past even illegality has not automatically meant that the abortions were unsafe. Referring to the U.S., historian Linda Gordon states: "In fact, illegal abortions in this country have an impressive safety record." According to Rickie Solinger, A 1940s American physician spoke of his pride in having performed 13,844 illegal abortions without any fatalities. In 1870s New York City, the abortionist/midwife Madame Restell (Anna Trow Lohman) is said to have lost very few women among her more than 100,000 patients—a lower mortality rate than the childbirth mortality rate at the time. In 1936, obstetrics and gynecology professor Frederick J. Taussig wrote that a cause of increasing mortality during the years of illegality in the U.S. was that Unsafe abortion Women seeking an abortion may use unsafe methods, especially when abortion is legally restricted. They may attempt self-induced abortion or seek the help of a person without proper medical training or facilities. This can lead to severe complications, such as incomplete abortion, sepsis, hemorrhage, and damage to internal organs. Unsafe abortions are a major cause of injury and death among women worldwide. Although data are imprecise, it is estimated that approximately 20 million unsafe abortions are performed annually, with 97% taking place in developing countries. Unsafe abortions are believed to result in millions of injuries. Estimates of deaths vary according to methodology, and have ranged from 37,000 to 70,000 in the past decade; deaths from unsafe abortion account for around 13% of all maternal deaths. The World Health Organization believes that mortality has fallen since the 1990s. To reduce the number of unsafe abortions, public health organizations have generally advocated emphasizing the legalization of abortion, training of medical personnel, and ensuring access to reproductive-health services. A major factor in whether abortions are performed safely or not is the legal standing of abortion. Countries with restrictive abortion laws have higher rates of unsafe abortion and similar overall abortion rates compared to countries where abortion is legal and available. For example, the 1996 legalization of abortion in South Africa led to an immediate reduction in abortion-related complications, with abortion-related deaths dropping by more than 90%. Similar reductions in maternal mortality have been observed after other countries have liberalized their abortion laws, such as Romania and Nepal. A 2011 study concluded that in the United States, some state-level anti-abortion laws are correlated with lower rates of abortion in that state. The analysis, however, did not take into account travel to other states without such laws to obtain an abortion. In addition, a lack of access to effective contraception contributes to unsafe abortion. It has been estimated that the incidence of unsafe abortion could be reduced by up to 75% (from 20 million to 5 million annually) if modern family planning and maternal health services were readily available globally. Rates of such abortions may be difficult to measure because they can be reported variously as miscarriage, "induced miscarriage", "menstrual regulation", "mini-abortion", and "regulation of a delayed/suspended menstruation". 
Forty percent of the world's women are able to access therapeutic and elective abortions within gestational limits, while an additional 35 percent have access to legal abortion if they meet certain physical, mental, or socioeconomic criteria. While maternal mortality seldom results from safe abortions, unsafe abortions result in 70,000 deaths and 5 million disabilities per year. Complications of unsafe abortion account for approximately an eighth of maternal mortalities worldwide, though this varies by region. Secondary infertility caused by an unsafe abortion affects an estimated 24 million women. The rate of unsafe abortions has increased from 44% to 49% between 1995 and 2008. Health education, access to family planning, and improvements in health care during and after abortion have been proposed to address consequences of unsafe abortion. Incidence There are two commonly used methods of measuring the incidence of abortion: Abortion rate – number of abortions annually per 1,000 women between 15 and 44 years of age; some sources use a range of 15–49. Abortion percentage – number of abortions out of 100 known pregnancies; pregnancies include live births, abortions, and miscarriages. In many places, where abortion is illegal or carries a heavy social stigma, medical reporting of abortion is not reliable. For this reason, estimates of the incidence of abortion must be made without determining certainty related to standard error. The number of abortions performed worldwide was characterized as stable in the early 2000s, with 41.6 million having been performed in 2003 and 43.8 million having been performed in 2008. The abortion rate worldwide was 28 per 1000 women per year, though it was 24 per 1000 women per year for developed countries and 29 per 1000 women per year for developing countries. The same 2012 study indicated that in 2008, the estimated abortion percentage of known pregnancies was at 21% worldwide, with 26% in developed countries and 20% in developing countries. On average, the incidence of abortion is similar in countries with restrictive abortion laws and those with more liberal access to abortion. Restrictive abortion laws are associated with increases in the percentage of abortions performed unsafely. The unsafe abortion rate in developing countries is partly attributable to lack of access to modern contraceptives; according to the Guttmacher Institute, providing access to contraceptives would result in about 14.5 million fewer unsafe abortions and 38,000 fewer deaths from unsafe abortion annually worldwide. The rate of legal, induced abortion varies extensively worldwide. According to the report of employees of Guttmacher Institute it ranged from 7 per 1000 women per year (Germany and Switzerland) to 30 per 1000 women per year (Estonia) in countries with complete statistics in 2008. The proportion of pregnancies that ended in induced abortion ranged from about 10% (Israel, the Netherlands and Switzerland) to 30% (Estonia) in the same group, though it might be as high as 36% in Hungary and Romania, whose statistics were deemed incomplete. An American study in 2002 concluded that about half of women having abortions were using a form of contraception at the time of becoming pregnant. Inconsistent use was reported by half of those using condoms and three-quarters of those using the birth control pill; 42% of those using condoms reported failure through slipping or breakage. 
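As a worked illustration of the two incidence measures defined earlier in this section (the abortion rate and the abortion percentage), the short sketch below runs the arithmetic on made-up counts; none of the figures are real data, and the variable names are chosen only for readability.

```python
# Illustrative arithmetic only: every count below is hypothetical.
abortions     = 42_000      # induced abortions in one year
women_15_44   = 1_500_000   # women aged 15-44 in the same population
live_births   = 150_000
miscarriages  = 20_000      # clinically recognized miscarriages

known_pregnancies = live_births + abortions + miscarriages

abortion_rate       = abortions / women_15_44 * 1_000      # per 1,000 women per year
abortion_percentage = abortions / known_pregnancies * 100  # per 100 known pregnancies

print(f"abortion rate: {abortion_rate:.0f} per 1,000 women aged 15-44")        # 28
print(f"abortion percentage: {abortion_percentage:.0f}% of known pregnancies")  # 20
```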
Of the other half of women, who were not using contraception at the time of becoming pregnant, the vast majority had used contraception at some point in the past, indicating some level of dissatisfaction with the contraceptive options available to them. Indeed, 32% of these contraceptive nonusers cited concerns about contraceptive methods as their reason for nonuse, and a more recent study found similar results. Taken together, these statistics suggest that new contraceptive methods, such as non-hormonal contraceptives or male contraceptives, could reduce unintended pregnancy and abortion rates. The Guttmacher Institute has found that "most abortions in the United States are obtained by minority women" because minority women "have much higher rates of unintended pregnancy". In a 2022 analysis by the Kaiser Family Foundation, while people of color comprise 44% of the population in Mississippi, 59% of the population in Texas, 42% of the population in Louisiana, and 35% of the population in Alabama, they comprise 80%, 74%, 72%, and 70%, respectively, of those receiving abortions. Gestational age and method Abortion rates vary depending on the stage of pregnancy and the method practiced. In 2003, the Centers for Disease Control and Prevention (CDC) reported that 26% of reported legal induced abortions in the United States were known to have been obtained at the end of 6 weeks of gestation or less, 18% at 7 weeks, 15% at 8 weeks, 18% at 9 through 10 weeks, 10% at 11 through 12 weeks, 6% at 13 through 15 weeks, 4% at 16 through 20 weeks and 1% at more than 21 weeks. 91% of these were classified as having been done by "curettage" (suction-aspiration, dilation and curettage, dilation and evacuation), 8% by "medical" means (mifepristone), >1% by "intrauterine instillation" (saline or prostaglandin), and 1% by "other" (including hysterotomy and hysterectomy). According to the CDC, due to data collection difficulties the data must be viewed as tentative and some fetal deaths reported beyond 20 weeks may be natural deaths erroneously classified as abortions if the removal of the dead fetus is accomplished by the same procedure as an induced abortion. The Guttmacher Institute estimated there were 2,200 intact dilation and extraction procedures in the US during 2000; this accounts for <0.2% of the total number of abortions performed that year. Similarly, in England and Wales in 2006, 89% of terminations occurred at or under 12 weeks, 9% between 13 and 19 weeks, and 2% at or over 20 weeks. 64% of those reported were by vacuum aspiration, 6% by D&E, and 30% were medical. There are more second trimester abortions in developing countries such as China, India and Vietnam than in developed countries. There are both medical and non-medical reasons to have an abortion later in pregnancy (after 20 weeks). A study was conducted from 2008 to 2010 at the University of California San Francisco where more than 440 women were asked about why they experienced delays in obtaining abortion care, if there were any. This study found that almost half of individuals who obtained an abortion after 20 weeks did not suspect that they were pregnant until later in their pregnancy. Other barriers to abortion care found in the study included lack of information about where to access an abortion, difficulties with transportation, lack of insurance coverage, and inability to pay for the abortion procedure. Medical reasons for seeking an abortion later in pregnancy include fetal anomalies and health risk to the pregnant person. 
There are prenatal tests that can diagnose Down Syndrome or cystic fibrosis as early as 10 weeks into gestation, but structural fetal anomalies are often detected much later in pregnancy. A proportion of structural fetal anomalies are lethal, which means that the fetus will almost certainly die before or shortly after birth. Life-threatening conditions may also develop later in pregnancy, such as early severe preeclampsia, newly diagnosed cancer in need of urgent treatment, and intrauterine infection (chorioamnionitis), which often occurs along with premature rupture of the amniotic sac (PPROM). If serious medical conditions such as these arise before the fetus is viable, the person carrying the pregnancy may pursue an abortion to preserve their own health. Motivation Personal The reasons why women have abortions are diverse and vary across the world. Some of the reasons may include an inability to afford a child, domestic violence, lack of support, feeling they are too young, and the wish to complete education or advance a career. Additional reasons include not being able or willing to raise a child conceived as a result of rape or incest. Societal Some abortions are undergone as the result of societal pressures. These might include the preference for children of a specific sex or race, disapproval of single or early motherhood, stigmatization of people with disabilities, insufficient economic support for families, lack of access to or rejection of contraceptive methods, or efforts toward population control (such as China's one-child policy). These factors can sometimes result in compulsory abortion or sex-selective abortion. In cultures where there is a preference for male children, some women have sex selective abortions, which have partially replaced the earlier practice of female infanticide. Maternal health Some abortions are performed due to concerns over maternal health. In 1990s, women cited maternal health as their main motivating factor in about a third of abortions in three of 27 countries analyzed. In seven additional countries, about 7% of abortions were maternal health related. In the U.S., the Supreme Court decisions in Roe v. Wade and Doe v. Bolton: "ruled that the state's interest in the life of the fetus became compelling only at the point of viability, defined as the point at which the fetus can survive independently of its mother. Even after the point of viability, the state cannot favor the life of the fetus over the life or health of the pregnant woman. Under the right of privacy, physicians must be free to use their "medical judgment for the preservation of the life or health of the mother." On the same day that the Court decided Roe, it also decided Doe v. Bolton, in which the Court defined health very broadly: "The medical judgment may be exercised in the light of all factors—physical, emotional, psychological, familial, and the woman's age—relevant to the well-being of the patient. All these factors may relate to health. This allows the attending physician the room he needs to make his best medical judgment." Cancer The rate of cancer during pregnancy is 0.02–1%, and in many cases, cancer of the mother leads to consideration of abortion to protect the life of the mother, or in response to the potential damage that may occur to the fetus during treatment. 
This is particularly true for cervical cancer, the most common type of which occurs in 1 of every 2,000–13,000 pregnancies, for which initiation of treatment "cannot co-exist with preservation of fetal life (unless neoadjuvant chemotherapy is chosen)". Very early stage cervical cancers (I and IIa) may be treated by radical hysterectomy and pelvic lymph node dissection, radiation therapy, or both, while later stages are treated by radiotherapy. Chemotherapy may be used simultaneously. Treatment of breast cancer during pregnancy also involves fetal considerations, because lumpectomy is discouraged in favor of modified radical mastectomy unless late-term pregnancy allows follow-up radiation therapy to be administered after the birth. Exposure to a single chemotherapy drug is estimated to cause a 7.5–17% risk of teratogenic effects on the fetus, with higher risks for multiple drug treatments. Treatment with more than 40 Gy of radiation usually causes spontaneous abortion. Exposure to much lower doses during the first trimester, especially 8 to 15 weeks of development, can cause intellectual disability or microcephaly, and exposure at this or subsequent stages can cause reduced intrauterine growth and birth weight. Exposures above 0.005–0.025 Gy cause a dose-dependent reduction in IQ. It is possible to greatly reduce exposure to radiation with abdominal shielding, depending on how far the area to be irradiated is from the fetus. The process of birth itself may also put the mother at risk. According to Li et al., "[v]aginal delivery may result in dissemination of neoplastic cells into lymphovascular channels, haemorrhage, cervical laceration and implantation of malignant cells in the episiotomy site, while abdominal delivery may delay the initiation of non-surgical treatment." Fetal health Congenital disorders, revealed by prenatal screening, motivate some women to seek abortions. Health outcomes of preterm births include a significant probability of long-term neurodevelopmental impairment before gestational age of 29 weeks, with a higher probability with decreasing gestational age. In the United States, public opinion shifted after television personality Sherri Finkbine's was exposed to thalidomide, a teratogen, in her fifth month of pregnancy. Unable to obtain a legal abortion in the United States, Finkbine traveled to Sweden. From 1962 to 1965, an outbreak of German measles left 15,000 babies with severe birth defects. In 1967, the American Medical Association publicly supported liberalization of abortion laws. A National Opinion Research Center poll in 1965 showed 73% supported abortion when the mother's life was at risk, 57% when birth defects were present and 59% for pregnancies resulting from rape or incest. History and religion Since ancient times, abortions have been done using a number of methods, including herbal medicines acting as abortifacients, sharp tools through the use of force, or through other traditional medicine methods. Induced abortion has a long history and can be traced back to civilizations as varied as ancient China (abortifacient knowledge is often attributed to the mythological ruler Shennong), ancient India since its Vedic age, ancient Egypt with its Ebers Papyrus (), and the Roman Empire in the time of Juvenal (). One of the earliest known artistic representations of abortion is in a bas relief at Angkor Wat (). Found in a series of friezes that represent judgment after death in Hindu and Buddhist culture, it depicts the technique of abdominal abortion. 
In Judaism (Genesis 2:7), the fetus is not considered to have a human soul until it is safely outside of the woman, is viable, and has taken its first breath. The fetus is considered valuable property of the woman and not a human life while in the womb (Exodus 21:22-23). While Judaism encourages people to be fruitful and multiply by having children, abortion is allowed and is deemed necessary when a pregnant woman's life is in danger. Several religions, including Judaism, which disagree that human life begins at conception, support the legality of abortion on religious freedom grounds. In Islam, abortion is traditionally permitted until a point in time when Muslims believe the soul enters the fetus, considered by various theologians to be at conception, 40 days after conception, 120 days after conception, or at quickening. Abortion is heavily restricted or forbidden in areas of high Islamic faith such as the Middle East and North Africa. Some medical scholars and abortion opponents have suggested that the Hippocratic Oath forbade physicians in Ancient Greece from performing abortions; other scholars disagree with this interpretation, and state that the medical texts of the Hippocratic Corpus contain descriptions of abortive techniques right alongside the Oath. The physician Scribonius Largus wrote in 43 CE that the Hippocratic Oath prohibits abortion, as did Soranus of Ephesus, although apparently not all doctors adhered to it strictly at the time. According to Soranus' 1st or 2nd century CE work Gynaecology, one party of medical practitioners banished all abortives as required by the Hippocratic Oath; the other party, to which he belonged, was willing to prescribe abortions only for the sake of the mother's health. In Politics (350 BCE), Aristotle condemned infanticide as a means of population control. He preferred abortion in such cases, with the restriction that it "must be practised on it before it has developed sensation and life; for the line between lawful and unlawful abortion will be marked by the fact of having sensation and being alive." In the Catholic Church, opinion was divided on how serious abortion was in comparison with such acts as contraception and oral or anal sex. The Catholic Church did not begin vigorously opposing abortion until the 19th century. As early as ~100 CE, the Didache taught that abortion was sinful. Several historians argue that prior to the 19th century most Catholic authors did not regard termination of pregnancy before quickening or ensoulment as an abortion. Among these authors were the Doctors of the Church, such as St. Augustine, St. Thomas Aquinas, and St. Alphonsus Liguori. In 1588, Pope Sixtus V (reigned 1585–1590) was the only Pope before Pope Pius IX (in his 1869 bull, Apostolicae Sedis) to institute a Church policy labeling all abortion as homicide and condemning abortion regardless of the stage of pregnancy. Sixtus V's pronouncement was reversed in 1591 by Pope Gregory XIV. In the recodification of the 1917 Code of Canon Law, Apostolicae Sedis was strengthened, in part to remove a possible reading that excluded excommunication of the mother. The Catechism of the Catholic Church, the codified summary of the Church's teachings, considers abortion from the moment of conception to be homicide and calls for the end of legal abortion. Denominations that support abortion rights with some limits include the United Methodist Church, Episcopal Church, Evangelical Lutheran Church in America and Presbyterian Church USA. 
A 2014 Guttmacher survey of abortion patients in the United States found that many reported a religious affiliation: 24% were Catholic while 30% were Protestant. A 1995 survey reported that Catholic women are as likely as the general population to terminate a pregnancy, Protestants are less likely to do so, and evangelical Christians are the least likely to do so. A 2019 Pew Research Center study found that most Christian denominations were against overturning Roe v. Wade, which in the United States legalized abortion, at around 70%, except White Evangelicals at 35%. Abortion has been a fairly common practice, and was not always illegal or controversial until the 19th century. Under common law, including early English common law dating back to Edward Coke in 1648, abortion was generally permitted before quickening (14–26 weeks after conception, or between the fourth and sixth month), and at women's discretion; it was whether abortion was performed after quickening that determined if it was a crime. In Europe and North America, abortion techniques advanced starting in the 17th century; the conservatism of most in the medical profession with regards to sexual matters prevented the wide expansion of abortion techniques. Other medical practitioners in addition to some physicians advertised their services, and they were not widely regulated until the 19th century when the practice, sometimes called restellism, was banned in both the United States and the United Kingdom. Some 19th-century physicians, one of the most famous and consequential being the American Horatio Storer, argued for anti-abortion laws on racist and misogynist as well as moral grounds. Church groups were also highly influential in anti-abortion movements, and religious groups more so since the 20th century. Some of the early anti-abortion laws punished only the doctor or abortionist, and while women could be criminally tried for a self-induced abortion, they were rarely prosecuted in general. In the United States, some argued that abortion was more dangerous than childbirth until about 1930 when incremental improvements in abortion procedures relative to childbirth made abortion safer. Others maintain that in the 19th century early abortions under the hygienic conditions in which midwives usually worked were relatively safe. Several scholars argue that, despite improved medical procedures, the period from the 1930s until the 1970s saw more zealous enforcement of anti-abortion laws, alongside an increasing control of abortion providers by organized crime. In 1920, Soviet Russia became the first country to legalize abortion after Lenin insisted that no woman be forced to give birth. Iceland (1935) and Sweden (1938) would follow suit to legalize certain or all forms of abortion. In Nazi Germany (1935), a law permitted abortions for those deemed "hereditarily ill", while women considered of German stock were specifically prohibited from having abortions. Beginning in the second half of the 20th century, abortion was legalized in a greater number of countries. In Japan, abortion was first legalized by the 1948 "Eugenics Protection Law" meant to prevent the births of "inferior" humans. , due to Japan's continuing strongly patriarchal culture and traditional views on women's societal roles, women who want an abortion must normally get written permission from their partner. Society and culture Abortion debate Induced abortion has long been the source of considerable debate. 
Ethical, moral, philosophical, biological, religious and legal issues surrounding abortion are related to value systems. Opinions of abortion may be about fetal rights, governmental authority, and women's rights. In both public and private debate, arguments presented in favor of or against abortion access focus on either the moral permissibility of an induced abortion, or the justification of laws permitting or restricting abortion. The World Medical Association Declaration on Therapeutic Abortion notes, "circumstances bringing the interests of a mother into conflict with the interests of her unborn child create a dilemma and raise the question as to whether or not the pregnancy should be deliberately terminated." Abortion debates, especially pertaining to abortion laws, are often spearheaded by groups advocating one of these two positions. Groups who favor greater legal restrictions on abortion, including complete prohibition, most often describe themselves as "pro-life" while groups who are against such legal restrictions describe themselves as "pro-choice". Modern abortion law Current laws pertaining to abortion are diverse. Religious, moral, and cultural factors continue to influence abortion laws throughout the world. The right to life, the right to liberty, the right to security of person, and the right to reproductive health are major issues of human rights that sometimes constitute the basis for the existence or absence of abortion laws. In jurisdictions where abortion is legal, certain requirements must often be met before a woman may obtain a legal abortion (an abortion performed without the woman's consent is considered feticide and is generally illegal). These requirements usually depend on the age of the fetus, often using a trimester-based system to regulate the window of legality, or as in the U.S., on a doctor's evaluation of the fetus' viability. Some jurisdictions require a waiting period before the procedure, prescribe the distribution of information on fetal development, or require that parents be contacted if their minor daughter requests an abortion. Other jurisdictions may require that a woman obtain the consent of the fetus' father before aborting the fetus, that abortion providers inform women of health risks of the procedure—sometimes including "risks" not supported by the medical literature—and that multiple medical authorities certify that the abortion is either medically or socially necessary. Many restrictions are waived in emergency situations. China, which has ended their one-child policy, and now has a three-child policy, has at times incorporated mandatory abortions as part of their population control strategy. Other jurisdictions ban abortion almost entirely. Many, but not all, of these allow legal abortions in a variety of circumstances. These circumstances vary based on jurisdiction, but may include whether the pregnancy is a result of rape or incest, the fetus' development is impaired, the woman's physical or mental well-being is endangered, or socioeconomic considerations make childbirth a hardship. In countries where abortion is banned entirely, such as Nicaragua, medical authorities have recorded rises in maternal death directly and indirectly due to pregnancy as well as deaths due to doctors' fears of prosecution if they treat other gynecological emergencies. Some countries, such as Bangladesh, that nominally ban abortion, may also support clinics that perform abortions under the guise of menstrual hygiene. 
This is also a terminology in traditional medicine. In places where abortion is illegal or carries heavy social stigma, pregnant women may engage in medical tourism and travel to countries where they can terminate their pregnancies. Women without the means to travel can resort to providers of illegal abortions or attempt to perform an abortion by themselves. The organization Women on Waves has been providing education about medical abortions since 1999. The NGO created a mobile medical clinic inside a shipping container, which then travels on rented ships to countries with restrictive abortion laws. Because the ships are registered in the Netherlands, Dutch law prevails when the ship is in international waters. While in port, the organization provides free workshops and education; while in international waters, medical personnel are legally able to prescribe medical abortion drugs and counseling. Sex-selective abortion Sonography and amniocentesis allow parents to determine sex before childbirth. The development of this technology has led to sex-selective abortion, or the termination of a fetus based on its sex. The selective termination of a female fetus is most common. Sex-selective abortion is partially responsible for the noticeable disparities between the birth rates of male and female children in some countries. The preference for male children is reported in many areas of Asia, and abortion used to limit female births has been reported in Taiwan, South Korea, India, and China. This deviation from the standard birth rates of males and females occurs despite the fact that the country in question may have officially banned sex-selective abortion or even sex-screening. In China, a historical preference for a male child has been exacerbated by the one-child policy, which was enacted in 1979. Many countries have taken legislative steps to reduce the incidence of sex-selective abortion. At the International Conference on Population and Development in 1994 over 180 states agreed to eliminate "all forms of discrimination against the girl child and the root causes of son preference", conditions also condemned by a PACE resolution in 2011. The World Health Organization and UNICEF, along with other United Nations agencies, have found that measures to restrict access to abortion in an effort to reduce sex-selective abortions have unintended negative consequences, largely stemming from the fact that women may seek or be coerced into seeking unsafe, extralegal abortions. On the other hand, measures to reduce gender inequality can reduce the prevalence of such abortions without attendant negative consequences. Anti-abortion violence Abortion providers and facilities have been subjected to violence, including murder, assault, arson, and bombing. Some scholars consider anti-abortion violence to be within the definition of terrorism, a view shared by some governments. In the U.S. and Canada, over 8,000 incidents of violence, trespassing, and death threats have been recorded by providers since 1977, including over 200 bombings/arsons and hundreds of assaults. Abortion clinics have also been targeted by acid attacks, invasions, and vandalism The majority of abortion opponents have not been involved in violent acts. Physicians and other abortion clinic staff have been murdered by abortion opponents. 
In the United States, at least four physicians have been murdered in connection with their work at abortion clinics, including David Gunn (1993), John Britton (1994), Barnett Slepian (1998), and George Tiller (2009). In Canada, gynecologist Garson Romalis survived murder attempts in both 1994 and 2000. Besides physicians, killings have targeted other clinic staff, such as John Salvi's 1994 murder of two receptionists in a Massachusetts clinic and Peter Knight's 2001 murder of a security guard in a Melbourne clinic. Notable perpetrators of anti-abortion violence include Eric Rudolph, Scott Roeder, Shelley Shannon, and Paul Hill, the first person to be executed in the United States for murdering an abortion provider. Some countries have laws protecting access to abortion. Such laws prevent abortion opponents from interfering with access to legal abortion services. For example, the American Freedom of Access to Clinic Entrances Act bars the use of threats or violence to interfere with abortion access. Abortion access laws may also establish safe access zones around abortion clinics, with limits on protests and enhanced penalties for anti-abortion violence. Psychological pressure may also be used to limit abortion access. In 2003, Chris Danze organized anti-abortion organizations throughout Texas to prevent the construction of a Planned Parenthood facility in Austin. The organizations released the personal information online of those involved with construction, sent them up to 1200 phone calls a day and contacted their churches. Some protestors record women entering clinics on camera. Non-human examples Spontaneous abortion occurs in various animals. For example, in sheep it may be caused by stress or physical exertion, such as crowding through doors or being chased by dogs. In cows, abortion may be caused by contagious disease, such as brucellosis or Campylobacter, but can often be controlled by vaccination. Eating pine needles can also induce abortions in cows. Several plants, including broomweed, skunk cabbage, poison hemlock, and tree tobacco, are known to cause fetal deformities and abortion in cattle and in sheep and goats. In horses, a fetus may be aborted or reabsorbed if it has lethal white syndrome. Foal embryos that are homozygous for the dominant white gene (WW) are theorized to also be aborted or resorbed before birth. In many species of sharks and rays, stress-induced abortions occur frequently on capture. Viral infection can cause abortion in dogs. Cats can experience spontaneous abortion for many reasons, including hormonal imbalance. A combined abortion and spaying is performed on pregnant cats, especially in trap–neuter–return programs, to prevent unwanted kittens from being born. Female rodents may terminate a pregnancy when exposed to the smell of a male not responsible for the pregnancy, known as the Bruce effect. Abortion may also be induced in animals, in the context of animal husbandry. For example, abortion may be induced in mares that have been mated improperly, or that have been purchased by owners who did not realize the mares were pregnant, or that are pregnant with twin foals. Feticide can occur in horses and zebras due to male harassment of pregnant mares or forced copulation, although the frequency in the wild has been questioned. Male gray langur monkeys may attack females following male takeover, causing miscarriage.
Biology and health sciences
Health, fitness, and medicine
null
772
https://en.wikipedia.org/wiki/Ampere
Ampere
The ampere (symbol: A), often shortened to amp, is the unit of electric current in the International System of Units (SI). One ampere is equal to 1 coulomb (C) moving past a point per second. It is named after French mathematician and physicist André-Marie Ampère (1775–1836), considered the father of electromagnetism along with Danish physicist Hans Christian Ørsted. As of the 2019 revision of the SI, the ampere is defined by fixing the elementary charge to be exactly 1.602176634×10⁻¹⁹ C, which means an ampere is an electric current equivalent to 10¹⁹ elementary charges moving every 1.602176634 seconds, or about 6.241509×10¹⁸ elementary charges moving in a second. Prior to the redefinition the ampere was defined as the current passing through two parallel wires 1 metre apart that produces a magnetic force of 2×10⁻⁷ newtons per metre. The earlier CGS system has two units of current, one structured similarly to the SI's and the other using Coulomb's law as a fundamental relationship, with the CGS unit of charge defined by measuring the force between two charged metal plates. The CGS unit of current is then defined as one unit of charge per second. History The ampere is named for French physicist and mathematician André-Marie Ampère (1775–1836), who studied electromagnetism and laid the foundation of electrodynamics. In recognition of Ampère's contributions to the creation of modern electrical science, an international convention, signed at the 1881 International Exposition of Electricity, established the ampere as a standard unit of electrical measurement for electric current. The ampere was originally defined as one tenth of the unit of electric current in the centimetre–gram–second system of units. That unit, now known as the abampere, was defined as the amount of current that generates a force of two dynes per centimetre of length between two wires one centimetre apart. The size of the unit was chosen so that the units derived from it in the MKSA system would be conveniently sized. The "international ampere" was an early realization of the ampere, defined as the current that would deposit 1.118 milligrams of silver per second from a silver nitrate solution. Later, more accurate measurements revealed that this current is 0.99985 A. Since power is defined as the product of current and voltage, the ampere can alternatively be expressed in terms of the other units using the relationship P = IV, and thus 1 A = 1 W/V. Current can be measured by a multimeter, a device that can measure electrical voltage, current, and resistance. Former definition in the SI Until 2019, the SI defined the ampere as follows: The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one metre apart in vacuum, would produce between these conductors a force equal to 2×10⁻⁷ newtons per metre of length. Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current. This force is used in the formal definition of the ampere. The SI unit of charge, the coulomb, was then defined as "the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second, 1 A = 1 C/s. In general, charge Q was determined by steady current I flowing for a time t as Q = It. 
This definition of the ampere was most accurately realised using a Kibble balance, but in practice the unit was maintained via Ohm's law from the units of electromotive force and resistance, the volt and the ohm, since the latter two could be tied to physical phenomena that are relatively easy to reproduce, the Josephson effect and the quantum Hall effect, respectively. Techniques to establish the realisation of an ampere had a relative uncertainty of approximately a few parts in 10⁷, and involved realisations of the watt, the ohm and the volt. Present definition The 2019 revision of the SI defined the ampere by taking the fixed numerical value of the elementary charge to be 1.602176634×10⁻¹⁹ when expressed in the unit C, which is equal to A⋅s, where the second is defined in terms of ΔνCs, the unperturbed ground state hyperfine transition frequency of the caesium-133 atom. The SI unit of charge, the coulomb, "is the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second, 1 A = 1 C/s. In general, charge Q is determined by steady current I flowing for a time t as Q = It. Constant, instantaneous and average current are expressed in amperes (as in "the charging current is 1.2 A") and the charge accumulated (or passed through a circuit) over a period of time is expressed in coulombs (as in "the battery charge is "). The relation of the ampere (C/s) to the coulomb is the same as that of the watt (J/s) to the joule. Units derived from the ampere The international system of units (SI) is based on seven SI base units (the second, metre, kilogram, kelvin, ampere, mole, and candela) representing seven fundamental types of physical quantity, or "dimensions" (time, length, mass, temperature, electric current, amount of substance, and luminous intensity respectively), with all other SI units being defined using these. These SI derived units can either be given special names, e.g. watt, volt, lux, or defined in terms of others, e.g. metre per second. The units with special names derived from the ampere include the coulomb, volt, ohm, siemens, farad, henry, weber, and tesla. There are also some SI units that are frequently used in the context of electrical engineering and electrical appliances, but are defined independently of the ampere, notably the hertz, joule, watt, candela, lumen, and lux. SI prefixes Like other SI units, the ampere can be modified by adding a prefix that multiplies it by a power of 10.
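The defining relations above lend themselves to a short numerical check. The sketch below is only an illustration, not material from the article; the function names are invented for this example. It uses the exact elementary charge fixed by the 2019 revision to express a current as elementary charges per second, and applies Q = I·t and I = P/V.

```python
# Minimal numerical sketch of the ampere's defining relations (illustrative only).

E_CHARGE = 1.602176634e-19  # elementary charge in coulombs, exact since the 2019 SI revision

def charges_per_second(current_amperes: float) -> float:
    """Number of elementary charges passing a point each second at the given current."""
    return current_amperes / E_CHARGE

def charge_coulombs(current_amperes: float, time_seconds: float) -> float:
    """Accumulated charge Q = I * t for a steady current."""
    return current_amperes * time_seconds

def current_from_power(power_watts: float, voltage_volts: float) -> float:
    """Current from the relation P = I * V, i.e. 1 A = 1 W/V."""
    return power_watts / voltage_volts

if __name__ == "__main__":
    print(f"1 A ~ {charges_per_second(1.0):.6e} elementary charges per second")  # ~6.241509e18
    print(f"Charge after 1.2 A for 60 s: {charge_coulombs(1.2, 60):.1f} C")      # 72.0 C
    print(f"A 60 W load at 230 V draws {current_from_power(60, 230):.3f} A")     # ~0.261 A
```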
Physical sciences
Electromagnetism
null
775
https://en.wikipedia.org/wiki/Algorithm
Algorithm
In mathematics and computer science, an algorithm () is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning). In contrast, a heuristic is an approach to solving problems that do not have well-defined correct or optimal results. For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Etymology Around 825 AD, Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). In the early 12th century, Latin translations of said al-Khwarizmi texts involving the Hindu–Arabic numeral system and arithmetic appeared, for example Liber Alghoarismi de practica arismetrice, attributed to John of Seville, and Liber Algorismi de numero Indorum, attributed to Adelard of Bath. Hereby, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi, or "Thus spoke Al-Khwarizmi". Around 1230, the English word algorism is attested and then by Chaucer in 1391, English adopted the French term. In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus. Definition One informal definition is "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and any prescribed bureaucratic procedure or cook-book recipe. In general, a program is an algorithm only if it stops eventually—even though infinite loops may sometimes prove desirable. define an algorithm to be an explicit set of instructions for determining an output, that can be followed by a computing machine or a human who could only carry out specific elementary operations on symbols. Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or a mechanical device. History Ancient algorithms Step-by-step procedures for solving mathematical problems have been recorded since antiquity. 
This includes in Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later), the Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC), Chinese mathematics (around 200 BC and later), and Arabic mathematics (around 800 AD). The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to describes the earliest division algorithm. During the Hammurabi dynasty , Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events. Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus . Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements ().Examples of ancient Indian mathematics included the Shulba Sutras, the Kerala School, and the Brāhmasphuṭasiddhānta. The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm. Computers Weight-driven clocks Bolter credits the invention of the weight-driven clock as "the key invention [of Europe in the Middle Ages]," specifically the verge escapement mechanism producing the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" in the 13th century and "computational machines"—the difference and analytical engines of Charles Babbage and Ada Lovelace in the mid-19th century. Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a real Turing-complete computer instead of just a calculator. Although a full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer". Electromechanical relay Bell and Newell (1971) write that the Jacquard loom, a precursor to Hollerith cards (punch cards), and "telephone switching technologies" led to the development of the first computers. By the mid-19th century, the telegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, the ticker tape () was in use, as were Hollerith cards (c. 1890). Then came the teleprinter () with its punched-paper use of Baudot code on tape. Telephone-switching networks of electromechanical relays were invented in 1835. These led to the invention of the digital adding device by George Stibitz in 1937. While working in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device". Formalization In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. 
Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. Representations Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form, but are also used to define or document algorithms. Turing machines There are many possible representations and Turing machine programs can be expressed as a sequence of machine tables (see finite-state machine, state-transition table, and control table for more), as flowcharts and drakon-charts (see state diagram for more), as a form of rudimentary machine code or assembly code called "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description. A high-level description describes qualities of the algorithm itself, ignoring how it is implemented on the Turing machine. An implementation description describes the general manner in which the machine moves its head and stores data in order to carry out the algorithm, but does not give exact states. In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine. Flowchart representation The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. Algorithmic analysis It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of n numbers would have a time requirement of , using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of , otherwise is required. Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost ) outperforms a sequential search (cost ) when used for table lookups on sorted lists or arrays. Formal versus empirical The analysis, and study of algorithms is a discipline of computer science. Algorithms are often studied abstractly, without referencing any specific programming language or implementation. 
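To make the cost comparison in the analysis discussion above concrete, here is a small, self-contained sketch; it is not from the source, and the comparison-counting wrappers are invented purely for illustration. It counts the comparisons made by a sequential search and a binary search on the same sorted list, showing the O(n) versus O(log n) behaviour described in the text.

```python
# Illustrative comparison-count sketch: sequential (linear) search vs binary search
# on a sorted list. The counters exist only to demonstrate the cost difference.

def sequential_search(items, target):
    """Scan left to right; returns (index or -1, number of comparisons)."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

def binary_search(items, target):
    """Repeatedly halve the search interval; requires a sorted list."""
    low, high, comparisons = 0, len(items) - 1, 0
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if items[mid] == target:
            return mid, comparisons
        if items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, comparisons

if __name__ == "__main__":
    data = list(range(0, 2_000_000, 2))     # one million sorted even numbers
    target = 1_999_998                      # near-worst case: the last element
    print(sequential_search(data, target))  # about 1,000,000 comparisons
    print(binary_search(data, target))      # about 20 comparisons
```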
Algorithm analysis resembles other mathematical disciplines as it focuses on the algorithm's properties, not implementation. Pseudocode is typical for analysis as it is a simple and general representation. Most algorithms are implemented on particular hardware/software platforms and their algorithmic efficiency is tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems but it may be critical for algorithms designed for fast interactive, commercial or long life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign. Empirical testing is useful for uncovering unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization. Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly. Execution efficiency To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power. Design Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operation research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe e.g., an algorithm's run-time growth as the size of its input increases. Structured programming Per the Church–Turing thesis, any algorithm can be computed by any Turing complete model. Turing completeness only requires four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction. Legal status By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson). However practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. 
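As a small illustration of the structured-programming point above, the Euclidean algorithm mentioned earlier in the article can be written using nothing more than the canonical SEQUENCE, IF-THEN-ELSE, and WHILE-DO structures, with no GOTO. This is a sketch for illustration only, not code taken from the source.

```python
# Euclid's algorithm expressed with only structured control flow:
# sequence, a while-loop, and (in the caller) an if-else. No GOTO needed.

def gcd(a: int, b: int) -> int:
    """Greatest common divisor via repeated remainders (Euclidean algorithm)."""
    while b != 0:          # WHILE-DO: loop until the remainder is zero
        a, b = b, a % b    # SEQUENCE: replace (a, b) with (b, a mod b)
    return a

if __name__ == "__main__":
    if gcd(1071, 462) == 21:    # IF-THEN-ELSE: the selection structure
        print("gcd(1071, 462) = 21")
    else:
        print("unexpected result")
```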
The patenting of software is controversial, and there are criticized patents involving algorithms, especially data compression algorithms, such as Unisys's LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography). Classification By implementation Recursion A recursive algorithm invokes itself repeatedly until meeting a termination condition, and is a common functional programming method. Iterative algorithms use repetitions such as loops or data structures like stacks to solve problems. Problems may be suited for one implementation or the other. The Tower of Hanoi is a puzzle commonly solved using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa. Serial, parallel or distributed Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time on serial computers. Serial algorithms are designed for these environments, unlike parallel or distributed algorithms. Parallel algorithms take advantage of computer architectures where multiple processors can work on a problem at the same time. Distributed algorithms use multiple machines connected via a computer network. Parallel and distributed algorithms divide the problem into subproblems and collect the results back together. Resource consumption in these algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems. Deterministic or non-deterministic Deterministic algorithms solve the problem with exact decision at every step; whereas non-deterministic algorithms solve problems via guessing. Guesses are typically made more accurate through the use of heuristics. Exact or approximate While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Such algorithms have practical value for many hard problems. For example, the Knapsack problem, where there is a set of items and the goal is to pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that can be carried is no more than some fixed number X. So, the solution must consider weights of items as well as their value. Quantum algorithm Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms which seem inherently quantum or use some essential feature of Quantum computing such as quantum superposition or quantum entanglement. By design paradigm Another way of classifying algorithms is by their design methodology or paradigm. Some common paradigms are: Brute-force or exhaustive search Brute force is a problem-solving method of systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, testing every possible combination of variables. It is often used when other methods are unavailable or too complex. Brute force can solve a variety of problems, including finding the shortest path between two points and cracking passwords. 
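The Tower of Hanoi mentioned under recursion above has a very short recursive solution. The sketch below is illustrative only and not drawn from the source; it prints the moves for n disks, and, as the text notes, an equivalent iterative version could be written with an explicit stack.

```python
# Recursive Tower of Hanoi: move n disks from peg 'source' to peg 'target'
# using 'spare' as the auxiliary peg. Terminates because n decreases on every call.

def hanoi(n: int, source: str = "A", target: str = "C", spare: str = "B") -> None:
    if n == 0:                            # termination condition
        return
    hanoi(n - 1, source, spare, target)   # park n-1 disks on the spare peg
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)   # bring the n-1 disks onto the target peg

if __name__ == "__main__":
    hanoi(3)   # prints the 2**3 - 1 = 7 moves
```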
Divide and conquer A divide-and-conquer algorithm repeatedly reduces a problem to one or more smaller instances of itself (usually recursively) until the instances are small enough to solve easily. Merge sorting is an example of divide and conquer, where an unordered list can be divided into segments containing one item and sorting of entire list can be obtained by merging the segments. A simpler variant of divide and conquer is called a decrease-and-conquer algorithm, which solves one smaller instance of itself, and uses the solution to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems and so the conquer stage is more complex than decrease and conquer algorithms. An example of a decrease and conquer algorithm is the binary search algorithm. Search and enumeration Many problems (such as playing chess) can be modelled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking. Randomized algorithm Such algorithms make some choices randomly (or pseudo-randomly). They find approximate solutions when finding exact solutions may be impractical (see heuristic method below). For some problems the fastest approximations must involve some randomness. Whether randomized algorithms with polynomial time complexity can be the fastest algorithm for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms: Monte Carlo algorithms return a correct answer with high probability. E.g. RP is the subclass of these that run in polynomial time. Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bound, e.g. ZPP. Reduction of complexity This technique transforms difficult problems into better-known problems solvable with (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithms. For example, one selection algorithm finds the median of an unsorted list by first sorting the list (the expensive portion), then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer. Back tracking In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution. Optimization problems For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following: Linear programming When searching for optimal solutions to a linear function bound by linear equality and inequality constraints, the constraints can be used directly to produce optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem also requires that any of the unknowns be integers, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. 
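The merge-sort example of divide and conquer described above can be stated compactly. The following sketch is an illustration under the usual textbook formulation, not the source's own code: it splits the list, sorts each half recursively, and merges the two sorted halves.

```python
# Divide-and-conquer merge sort: split, recursively sort, then merge.

def merge_sort(items):
    if len(items) <= 1:                      # a list of 0 or 1 items is already sorted
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])           # divide: sort each half independently
    right = merge_sort(items[mid:])
    return merge(left, right)                # combine the sorted halves

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # repeatedly take the smaller front element
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])                  # one of these slices is empty;
    merged.extend(right[j:])                 # the other holds the remaining sorted tail
    return merged

if __name__ == "__main__":
    print(merge_sort([5, 2, 9, 1, 5, 6]))    # [1, 2, 5, 5, 6, 9]
```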
In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem. Dynamic programming When a problem shows optimal substructures—meaning the optimal solution can be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions. For example, Floyd–Warshall algorithm, the shortest path between a start and goal vertex in a weighted graph can be found using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. Unlike divide and conquer, dynamic programming subproblems often overlap. The difference between dynamic programming and simple recursion is the caching or memoization of recursive calls. When subproblems are independent and do not repeat, memoization does not help; hence dynamic programming is not applicable to all complex problems. Using memoization dynamic programming reduces the complexity of many problems from exponential to polynomial. The greedy method Greedy algorithms, similarly to a dynamic programming, work by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution and improve it by making small modifications. For some problems they always find the optimal solution but for others they may stop at local optima. The most popular use of greedy algorithms is finding minimal spanning trees of graphs without negative cycles. Huffman Tree, Kruskal, Prim, Sollin are greedy algorithms that can solve this optimization problem. The heuristic method In optimization problems, heuristic algorithms find solutions close to the optimal solution when finding the optimal solution is impractical. These algorithms get closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. They can ideally find a solution very close to the optimal solution in a relatively short time. These algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm. Examples One of the simplest algorithms finds the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be described in plain English as: High-level description: If a set of numbers is empty, then there is no highest number. Assume the first number in the set is the largest. For each remaining number in the set: if this number is greater than the current largest, it becomes the new largest. When there are no unchecked numbers left in the set, consider the current largest number to be the largest in the set. (Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code: Input: A list of numbers L. Output: The largest number in the list L. if L.size = 0 return null largest ← L[0] for each item in L, do if item > largest, then largest ← item return largest
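The pseudocode above translates almost line for line into a runnable function. The rendering below is ours (the function name is invented); it adds nothing beyond the high-level description already given.

```python
# Direct translation of the "largest number in a list" pseudocode above.

def find_largest(numbers):
    """Return the largest number in the list, or None if the list is empty."""
    if len(numbers) == 0:          # if L.size = 0 return null
        return None
    largest = numbers[0]           # largest <- L[0]
    for item in numbers:           # for each item in L
        if item > largest:         # if item > largest
            largest = item         #     largest <- item
    return largest                 # return largest

if __name__ == "__main__":
    print(find_largest([3, 41, 7, 2, 19]))  # 41
    print(find_largest([]))                 # None
```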
Mathematics
Mathematics: General
null
782
https://en.wikipedia.org/wiki/Mouthwash
Mouthwash
Mouthwash, mouth rinse, oral rinse, or mouth bath is a liquid which is held in the mouth passively or swirled around the mouth by contraction of the perioral muscles and/or movement of the head, and may be gargled, where the head is tilted back and the liquid bubbled at the back of the mouth. Usually mouthwashes are antiseptic solutions intended to reduce the microbial load in the mouth, although other mouthwashes might be given for other reasons such as for their analgesic, anti-inflammatory or anti-fungal action. Additionally, some rinses act as saliva substitutes to neutralize acid and keep the mouth moist in xerostomia (dry mouth). Cosmetic mouthrinses temporarily control or reduce bad breath and leave the mouth with a pleasant taste. Rinsing with water or mouthwash after brushing with a fluoride toothpaste can reduce the availability of salivary fluoride. This can lower the anti-cavity re-mineralization and antibacterial effects of fluoride. Fluoridated mouthwash may mitigate this effect or, in high concentrations, increase available fluoride, but is not as cost-effective as leaving the fluoride toothpaste on the teeth after brushing. A group of experts discussing post-brushing rinsing in 2012 found that although there was clear guidance given in many public health advice publications to "spit, avoid rinsing with water/excessive rinsing with water", they believed there was a limited evidence base for best practice. Use Common use involves rinsing the mouth with a small amount of mouthwash. The wash is typically swished or gargled for about half a minute and then spat out. Most companies suggest not drinking water immediately after using mouthwash. In some brands, the expectorate is stained, so that one can see the bacteria and debris. Mouthwash should not be used immediately after brushing the teeth so as not to wash away the beneficial fluoride residue left from the toothpaste. Similarly, the mouth should not be rinsed out with water after brushing. Patients were told to "spit don't rinse" after toothbrushing as part of a National Health Service campaign in the UK. A fluoride mouthrinse can be used at a different time of day from brushing. Gargling is where the head is tilted back, allowing the mouthwash to sit in the back of the mouth while exhaling, causing the liquid to bubble. Gargling is practiced in Japan for perceived prevention of viral infection, commonly with infusions or tea. In some cultures, gargling is usually done in private, typically in a bathroom at a sink so the liquid can be rinsed away. Dangerous misuse Serious harm and even death can quickly result from ingestion, due to the high alcohol content and other harmful substances present in some brands of mouthwash. Zero-percent-alcohol mouthwashes do exist, as well as many other formulations for different needs (covered in the sections below). These risks may be higher in toddlers and young children if they are allowed to use toothpaste and/or mouthwash unsupervised, where they may swallow it. Misuse in this way can be avoided with parental supervision and by using child-safe forms or a children's brand of mouthwash. Surrogate alcohol use, such as ingestion of mouthwash, is a common cause of death among homeless people during winter months, because a person can feel warmer after drinking it. Effects The most commonly used mouthwashes are commercial antiseptics, which are used at home as part of an oral hygiene routine. Mouthwashes combine ingredients to treat a variety of oral conditions. 
Variations are common, and mouthwash has no standard formulation, so its use and recommendation involves concerns about patient safety. Some manufacturers of mouthwash state that their antiseptic and antiplaque mouthwashes kill the bacterial plaque that causes cavities, gingivitis, and bad breath. It is, however, generally agreed that the use of mouthwash does not eliminate the need for both brushing and flossing. The American Dental Association asserts that regular brushing and proper flossing are enough in most cases, in addition to regular dental check-ups, although they approve many mouthwashes. For many patients, however, the mechanical methods could be tedious and time-consuming, and, additionally, some local conditions may render them especially difficult. Chemotherapeutic agents, including mouthwashes, could have a key role as adjuncts to daily home care, preventing and controlling supragingival plaque, gingivitis and oral malodor. Minor and transient side effects of mouthwashes are very common, such as taste disturbance, tooth staining, sensation of a dry mouth, etc. Alcohol-containing mouthwashes may make dry mouth and halitosis worse, as they dry out the mouth. Soreness, ulceration and redness may sometimes occur (e.g., aphthous stomatitis or allergic contact stomatitis) if the person is allergic or sensitive to mouthwash ingredients, such as preservatives, coloring, flavors and fragrances. Such effects might be reduced or eliminated by diluting the mouthwash with water, using a different mouthwash (e.g. saltwater), or foregoing mouthwash entirely. Prescription mouthwashes are used prior to and after oral surgery procedures, such as tooth extraction, or to treat the pain associated with mucositis caused by radiation therapy or chemotherapy. They are also prescribed for aphthous ulcers, other oral ulcers, and other mouth pain. "Magic mouthwashes" are prescription mouthwashes compounded in a pharmacy from a list of ingredients specified by a doctor. Despite a lack of evidence that prescription mouthwashes are more effective in decreasing the pain of oral lesions, many patients and prescribers continue to use them. There has been only one controlled study to evaluate the efficacy of magic mouthwash; it shows no difference in efficacy between the most common magic-mouthwash formulation, on the one hand, and commercial mouthwashes (such as chlorhexidine) or a saline/baking soda solution, on the other. Current guidelines suggest that saline solution is just as effective as magic mouthwash in pain relief and in shortening the healing time of oral mucositis from cancer therapies. History The first known references to mouth rinsing is in Ayurveda for treatment of gingivitis. Later, in the Greek and Roman periods, mouth rinsing following mechanical cleansing became common among the upper classes, and Hippocrates recommended a mixture of salt, alum, and vinegar. The Jewish Talmud, dating back about 1,800 years, suggests a cure for gum ailments containing "dough water" and olive oil. The ancient Chinese had also gargled salt water, tea and wine as a form of mouthwash after meals, due to the antiseptic properties of those liquids. Before Europeans came to the Americas, Native North American and Mesoamerican cultures used mouthwashes, often made from plants such as Coptis trifolia. Peoples of the Americas used salt water mouthwashes for sore throats, and other mouthwashes for problems such as teething and mouth ulcers. 
Anton van Leeuwenhoek, the famous 17th century microscopist, discovered living organisms (living, because they were mobile) in deposits on the teeth (what we now call dental plaque). He also found organisms in water from the canal next to his home in Delft. He experimented with samples by adding vinegar or brandy and found that this resulted in the immediate immobilization or killing of the organisms suspended in water. Next he tried rinsing the mouth of himself and somebody else with a mouthwash containing vinegar or brandy and found that living organisms remained in the dental plaque. He concluded—correctly—that the mouthwash either did not reach, or was not present long enough, to kill the plaque organisms. In 1892, German Richard Seifert invented mouthwash product Odol, which was produced by company founder Karl August Lingner (1861–1916) in Dresden. That remained the state of affairs until the late 1960s when Harald Loe (at the time a professor at the Royal Dental College in Aarhus, Denmark) demonstrated that a chlorhexidine compound could prevent the build-up of dental plaque. The reason for chlorhexidine's effectiveness is that it strongly adheres to surfaces in the mouth and thus remains present in effective concentrations for many hours. Since then commercial interest in mouthwashes has been intense and several newer products claim effectiveness in reducing the build-up in dental plaque and the associated severity of gingivitis, in addition to fighting bad breath. Many of these solutions aim to control the volatile sulfur compound–creating anaerobic bacteria that live in the mouth and excrete substances that lead to bad breath and unpleasant mouth taste. For example, the number of mouthwash variants in the United States of America has grown from 15 (1970) to 66 (1998) to 113 (2012). Research Research in the field of microbiotas shows that only a limited set of microbes cause tooth decay, with most of the bacteria in the human mouth being harmless. Focused attention on cavity-causing bacteria such as Streptococcus mutans has led research into new mouthwash treatments that prevent these bacteria from initially growing. While current mouthwash treatments must be used with a degree of frequency to prevent this bacteria from regrowing, future treatments could provide a viable long-term solution. A clinical trial and laboratory studies have shown that alcohol-containing mouthwash could reduce the growth of Neisseria gonorrhoeae in the pharynx. However, subsequent trials have found that there was no difference in gonorrhoea cases among men using daily mouthwash compared to those who did not use mouthwash for 12 weeks. Ingredients Alcohol Alcohol is added to mouthwash not to destroy bacteria but to act as a carrier agent for essential active ingredients such as menthol, eucalyptol and thymol, which help to penetrate plaque. Sometimes a significant amount of alcohol (up to 27% vol) is added, as a carrier for the flavor, to provide "bite". Because of the alcohol content, it is possible to fail a breathalyzer test after rinsing, although breath alcohol levels return to normal after 10 minutes. In addition, alcohol is a drying agent, which encourages bacterial activity in the mouth, releasing more malodorous volatile sulfur compounds. Therefore, alcohol-containing mouthwash may temporarily worsen halitosis in those who already have it, or, indeed, be the sole cause of halitosis in other individuals. Alcohol in mouthwashes may act as a carcinogen (cancer-inducing agent) in some cases . 
Many newer brands of mouthwash are alcohol-free, not just in response to consumer concerns about oral cancer, but also to cater for religious groups who abstain from alcohol consumption. Benzydamine (analgesic) In painful oral conditions such as aphthous stomatitis, analgesic mouthrinses (e.g. benzydamine mouthwash, or "Difflam") are sometimes used to ease pain, commonly used before meals to reduce discomfort while eating. Benzoic acid Benzoic acid acts as a buffer. Betamethasone Betamethasone is sometimes used as an anti-inflammatory, corticosteroid mouthwash. It may be used for severe inflammatory conditions of the oral mucosa such as the severe forms of aphthous stomatitis. Cetylpyridinium chloride (antiseptic, antimalodor) Cetylpyridinium chloride containing mouthwash (e.g. 0.05%) is used in some specialized mouthwashes for halitosis. Cetylpyridinium chloride mouthwash has less anti-plaque effect than chlorhexidine and may cause staining of teeth, or sometimes an oral burning sensation or ulceration. Chlorhexidine digluconate and hexetidine (antiseptic) Chlorhexidine digluconate is a chemical antiseptic and is used in a 0.05–0.2% solution as a mouthwash. There is no evidence to support that higher concentrations are more effective in controlling dental plaque and gingivitis. A randomized clinical trial conducted in Rabat University in Morocco found better results in plaque inhibition when chlorohexidine with alcohol base 0.12% was used, when compared to an alcohol-free 0.1% chlorhexidine mouthrinse. Chlorhexidine has good substantivity (the ability of a mouthwash to bind to hard and soft tissues in the mouth). It has anti-plaque action, and also some anti-fungal action. It is especially effective against Gram-negative rods. The proportion of Gram-negative rods increase as gingivitis develops, so it is also used to reduce gingivitis. It is sometimes used as an adjunct to prevent dental caries and to treat periodontal disease, although it does not penetrate into periodontal pockets well. Chlorhexidine mouthwash alone is unable to prevent plaque, so it is not a substitute for regular toothbrushing and flossing. Instead, chlorhexidine mouthwash is more effective when used as an adjunctive treatment with toothbrushing and flossing. In the short term, if toothbrushing is impossible due to pain, as may occur in primary herpetic gingivostomatitis, chlorhexidine mouthwash is used as a temporary substitute for other oral hygiene measures. It is not suited for use in acute necrotizing ulcerative gingivitis, however. Rinsing with chlorhexidine mouthwash before and after a tooth extraction may reduce the risk of a dry socket. Other uses of chlorhexidine mouthwash include prevention of oral candidiasis in immunocompromised persons, treatment of denture-related stomatitis, mucosal ulceration/erosions and oral mucosal lesions, general burning sensation and many other uses. Chlorhexidine mouthwash is known to have minor adverse effects. Chlorhexidine binds to tannins, meaning that prolonged use in persons who consume coffee, tea or red wine is associated with extrinsic staining (i.e. removable staining) of teeth. A systematic review of commercial chlorhexidine products with anti-discoloration systems (ADSs) found that the ADSs were able to reduce tooth staining without affecting the beneficial effects of chlorhexidine. Chlorhexidine mouthwash can also cause taste disturbance or alteration. 
Chlorhexidine is rarely associated with other issues like overgrowth of enterobacteria in persons with leukemia, desquamation, irritation, and stomatitis of oral mucosa, salivary gland pain and swelling, and hypersensitivity reactions including anaphylaxis. Hexetidine also has anti-plaque, analgesic, astringent and anti-malodor properties, but is considered an inferior alternative to chlorhexidine. Chlorine dioxide In dilute concentrations, chlorine dioxide is an ingredient that acts as an antiseptic agent in some mouthwashes. Edible oils In traditional Ayurvedic medicine, the use of oil mouthwashes is called "Kavala" ("oil swishing") or "Gandusha", and this practice has more recently been re-marketed by the complementary and alternative medicine industry as "oil pulling". Its promoters claim it works by "pulling out" "toxins", which are known as ama in Ayurvedic medicine, and thereby reducing inflammation. Ayurvedic literature claims that oil pulling is capable of improving oral and systemic health, including a benefit in conditions such as headaches, migraines, diabetes mellitus, asthma, and acne, as well as whitening teeth. Oil pulling has received little study and there is little evidence to support claims made by the technique's advocates. When compared with chlorhexidine in one small study, it was found to be less effective at reducing oral bacterial load, and the other health claims of oil pulling have failed scientific verification or have not been investigated. There is a report of lipid pneumonia caused by accidental inhalation of the oil during oil pulling. The mouth is rinsed with approximately one tablespoon of oil for 10–20 minutes then spat out. Sesame oil, coconut oil and ghee are traditionally used, but newer oils such as sunflower oil are also used. Essential oils Phenolic compounds and monoterpenes include essential oil constituents that have some antibacterial properties, such as eucalyptol, eugenol, hinokitiol, menthol, phenol, or thymol. Essential oils are oils which have been extracted from plants. Mouthwashes based on essential oils could be more effective than traditional mouthcare as anti-gingival treatments. They have been found effective in reducing halitosis, and are being used in several commercial mouthwashes. Fluoride (anticavity) Anti-cavity mouthwashes contain fluoride compounds (such as sodium fluoride, stannous fluoride, or sodium monofluorophosphate) to protect against tooth decay. Fluoride-containing mouthwashes are used as prevention for dental caries for individuals who are considered at higher risk for tooth decay, whether due to xerostomia related to salivary dysfunction or side effects of medication, to not drinking fluoridated water, or to being physically unable to care for their oral needs (brushing and flossing), and as treatment for those with dentinal hypersensitivity, gingival recession/ root exposure. Flavoring agents and xylitol Flavoring agents include sweeteners such as sorbitol, sucralose, sodium saccharin, and xylitol, which stimulate salivary function due to their sweetness and taste and helps restore the mouth to a neutral level of acidity. Xylitol rinses double as a bacterial inhibitor, and have been used as substitute for alcohol to avoid dryness of mouth associated with alcohol. Hydrogen peroxide Hydrogen peroxide can be used as an oxidizing mouthwash (e.g. Peroxyl, 1.5%). It kills anaerobic bacteria, and also has a mechanical cleansing action when it froths as it comes into contact with debris in mouth. 
It is often used in the short term to treat acute necrotising ulcerative gingivitis. Side effects can occur with prolonged use, including hypertrophy of the lingual papillae. Lactoperoxidase (saliva substitute) Enzymes and non-enzymatic proteins, such as lactoperoxidase, lysozyme, and lactoferrin, have been used in mouthwashes (e.g., Biotene) to reduce levels of oral bacteria, and, hence, of the acids produced by these bacteria. Lidocaine/xylocaine Oral lidocaine is useful for the treatment of mucositis symptoms (inflammation of mucous membranes) induced by radiation or chemotherapy. There is evidence that lidocaine anesthetic mouthwash has the potential to be systemically absorbed, when it was tested in patients with oral mucositis who underwent a bone marrow transplant. Methyl salicylate Methyl salicylate functions as an antiseptic, antiinflammatory, and analgesic agent, a flavoring, and a fragrance. Methyl salicylate has some anti-plaque action, but less than chlorhexidine. Methyl salicylate does not stain teeth. Nystatin Nystatin suspension is an antifungal ingredient used for the treatment of oral candidiasis. Potassium oxalate A randomized clinical trial found promising results in controlling and reducing dentine hypersensitivity when potassium oxalate mouthwash was used in conjugation with toothbrushing. Povidone/iodine (PVP-I) A 2005 study found that gargling three times a day with simple water or with a povidone-iodine solution was effective in preventing upper respiratory infection and decreasing the severity of symptoms if contracted. Other sources attribute the benefit to a simple placebo effect. PVP-I in general covers "a wider virucidal spectrum, covering both enveloped and nonenveloped viruses, than the other commercially available antiseptics", which also includes the novel SARS-CoV-2 virus. Sanguinarine Sanguinarine-containing mouthwashes are marketed as anti-plaque and anti-malodor treatments. Sanguinarine is a toxic alkaloid herbal extract, obtained from plants such as Sanguinaria canadensis (bloodroot), Argemone mexicana (Mexican prickly poppy), and others. However, its use is strongly associated with the development of leukoplakia (a white patch in the mouth), usually in the buccal sulcus. This type of leukoplakia has been termed "sanguinaria-associated keratosis", and more than 80% of people with leukoplakia in the vestibule of the mouth have used this substance. Upon stopping contact with the causative substance, the lesions may persist for years. Although this type of leukoplakia may show dysplasia, the potential for malignant transformation is unknown. Ironically, elements within the complementary and alternative medicine industry promote the use of sanguinaria as a therapy for cancer. Sodium bicarbonate (baking soda) Sodium bicarbonate is sometimes combined with salt to make a simple homemade mouthwash, indicated for any of the reasons that a saltwater mouthwash might be used. Pre-mixed mouthwashes of 1% sodium bicarbonate and 1.5% sodium chloride in aqueous solution are marketed, although pharmacists will easily be able to produce such a formulation from the base ingredients when required. Sodium bicarbonate mouthwash is sometimes used to remove viscous saliva and to aid visualization of the oral tissues during examination of the mouth. Sodium chloride (salt) Saline has a mechanical cleansing action and an antiseptic action, as it is a hypertonic solution in relation to bacteria, which undergo lysis. 
The heat of the solution produces a therapeutic increase in blood flow (hyperemia) to the surgical site, promoting healing. Hot saltwater mouthwashes also encourage the draining of pus from dental abscesses. In contrast, if heat is applied on the side of the face (e.g., a hot water bottle) rather than inside the mouth, it may cause a dental abscess to drain extra-orally, which is later associated with an area of fibrosis on the face. Saltwater mouthwashes are also routinely used after oral surgery, to keep food debris out of healing wounds and to prevent infection. Some oral surgeons consider saltwater mouthwashes the mainstay of wound cleanliness after surgery. Hot saltwater mouth baths should start about 24 hours after a dental extraction. The term mouth bath implies that the liquid is passively held in the mouth, rather than vigorously swilled around (which could dislodge a blood clot). Once the blood clot has stabilized, the mouthwash can be used more vigorously. These mouthwashes tend to be advised for use about 6 times per day, especially after meals (to remove food from the socket). Sodium lauryl sulfate (foaming agent) Sodium lauryl sulfate (SLS) is used as a foaming agent in many oral hygiene products, including many mouthwashes. It may be advisable to use mouthwash at least an hour after brushing with a toothpaste that contains SLS, since the anionic compounds in the SLS toothpaste can deactivate cationic agents present in the mouthwash. Sucralfate Sucralfate is a mucosal coating agent, composed of an aluminum salt of sulfated sucrose. It is not recommended for use in the prevention of oral mucositis in head and neck cancer patients receiving radiotherapy or chemoradiation, due to a lack of efficacy found in a well-designed, randomized controlled trial. Tetracycline (antibiotic) Tetracycline is an antibiotic which may sometimes be used as a mouthwash in adults (it causes red staining of teeth in children). It is sometimes used for herpetiform ulceration (an uncommon type of aphthous stomatitis), but prolonged use may lead to oral candidiasis, as the fungal population of the mouth overgrows in the absence of enough competing bacteria. Similarly, minocycline mouthwashes of 0.5% concentration can relieve symptoms of recurrent aphthous stomatitis. Erythromycin is similar. Tranexamic acid A 4.8% tranexamic acid solution is sometimes used as an antifibrinolytic mouthwash to prevent bleeding during and after oral surgery in persons with coagulopathies (clotting disorders) or who are taking anticoagulants (blood thinners such as warfarin). Triclosan Triclosan is a non-ionic chlorinated bisphenol antiseptic found in some mouthwashes. When used in mouthwash (e.g. 0.03%), there is moderate substantivity, broad-spectrum anti-bacterial action, some anti-fungal action, and a significant anti-plaque effect, especially when combined with a copolymer or zinc citrate. Triclosan does not cause staining of the teeth. The safety of triclosan has been questioned. Zinc Astringents like zinc chloride provide a pleasant-tasting sensation and shrink tissues. Zinc, when used in combination with other antiseptic agents, can limit the buildup of tartar.
Biology and health sciences
Hygiene products
Health
786
https://en.wikipedia.org/wiki/Asparagales
Asparagales
Asparagales (asparagoid lilies) are a diverse order of flowering plants in the monocots. Under the APG IV system of flowering plant classification, Asparagales are the largest order of monocots with 14 families, 1,122 genera, and about 36,000 species, with members as varied as asparagus, orchids, yuccas, irises, onions, garlic, leeks, and other Alliums, daffodils, snowdrops, amaryllis, agaves, butcher's broom, Agapanthus, Solomon's seal, hyacinths, bluebells, spider plants, grasstrees, aloe, freesias, gladioli, crocuses, and saffron. Most species of Asparagales are herbaceous perennials, although some are climbers and some are trees or shrubs. The order also contains many geophytes (bulbs, corms, and various kinds of tuber). The leaves of almost all species form a tight rosette, either at the base of the plant or at the end of the stem, but occasionally along the stem. The flowers are not particularly distinctive, being 'lily type', with six tepals and up to six stamina. One of the defining characteristics (synapomorphies) of the order is the presence of phytomelanin, a black pigment present in the seed coat, creating a dark crust. Phytomelanin is found in most families of the Asparagales (although not in Orchidaceae, thought to be the sister-group of the rest of the order). The order Asparagales takes its name from the type family Asparagaceae and has only recently been recognized in classification systems. The order is clearly circumscribed on the basis of molecular phylogenetics, but it is difficult to define morphologically since its members are structurally diverse. The order was first put forward by Huber in 1977 and later taken up in the Dahlgren system of 1985 and then the Angiosperm Phylogeny Group systems. Before this, many of its families were assigned to the old order Liliales, which was redistributed over three orders, Liliales, Asparagales, and Dioscoreales, based on molecular phylogenetics. The boundaries of the Asparagales and of its families have undergone a series of changes in recent years; future research may lead to further changes and ultimately greater stability. The order is thought to have first diverged from other related monocots some 120–130 million years ago (early in the Cretaceous period), although given the difficulty in classifying the families involved, estimates are likely to be uncertain. From an economic point of view, the order Asparagales is second in importance within the monocots only to the order Poales (which includes grasses and cereals). Species are used as food and flavourings (e.g. onion, garlic, leek, asparagus, vanilla, saffron), in medicinal or cosmetic applications (Aloe), as cut flowers (e.g. freesia, gladiolus, iris, orchids), and as garden ornamentals (e.g. day lilies, lily of the valley, Agapanthus). Description Although most species in the order are herbaceous, some no more than 15 cm high, there are a number of climbers (e.g., some species of Asparagus), as well as several genera forming trees (e.g. Agave, Cordyline, Yucca, Dracaena, Aloidendron ), which can exceed 10 m in height. Succulent genera occur in several families (e.g. Aloe). Almost all species have a tight cluster of leaves (a rosette), either at the base of the plant or at the end of a more-or-less woody stem as with Yucca. In some cases, the leaves are produced along the stem. The flowers are in the main not particularly distinctive, being of a general 'lily type', with six tepals, either free or fused from the base and up to six stamina. 
They are frequently clustered at the end of the plant stem. The Asparagales are generally distinguished from the Liliales by the lack of markings on the tepals, the presence of septal nectaries in the ovaries, rather than the bases of the tepals or stamen filaments, and the presence of secondary growth. They are generally geophytes, but with linear leaves, and a lack of fine reticular venation. The seeds characteristically have the external epidermis either obliterated (in most species bearing fleshy fruit), or if present, have a layer of black carbonaceous phytomelanin in species with dry fruits (nuts). The inner part of the seed coat is generally collapsed, in contrast to Liliales whose seeds have a well developed outer epidermis, lack phytomelanin, and usually display a cellular inner layer. The orders which have been separated from the old Liliales are difficult to characterize. No single morphological character appears to be diagnostic of the order Asparagales. The flowers of Asparagales are of a general type among the lilioid monocots. Compared to Liliales, they usually have plain tepals without markings in the form of dots. If nectaries are present, they are in the septa of the ovaries rather than at the base of the tepals or stamens. Those species which have relatively large dry seeds have a dark, crust-like (crustose) outer layer containing the pigment phytomelan. However, some species with hairy seeds (e.g. Eriospermum, family Asparagaceae s.l.), berries (e.g. Maianthemum, family Asparagaceae s.l.), or highly reduced seeds (e.g. orchids) lack this dark pigment in their seed coats. Phytomelan is not unique to Asparagales (i.e. it is not a synapomorphy) but it is common within the order and rare outside it. The inner portion of the seed coat is usually completely collapsed. In contrast, the morphologically similar seeds of Liliales have no phytomelan, and usually retain a cellular structure in the inner portion of the seed coat. Most monocots are unable to thicken their stems once they have formed, since they lack the cylindrical meristem present in other angiosperm groups. Asparagales have a method of secondary thickening which is otherwise only found in Dioscorea (in the monocot order Disoscoreales). In a process called 'anomalous secondary growth', they are able to create new vascular bundles around which thickening growth occurs. Agave, Yucca, Aloidendron, Dracaena, Nolina and Cordyline can become massive trees, albeit not of the height of the tallest dicots, and with less branching. Other genera in the order, such as Lomandra and Aphyllanthes, have the same type of secondary growth but confined to their underground stems. Microsporogenesis (part of pollen formation) distinguishes some members of Asparagales from Liliales. Microsporogenesis involves a cell dividing twice (meiotically) to form four daughter cells. There are two kinds of microsporogenesis: successive and simultaneous (although intermediates exist). In successive microsporogenesis, walls are laid down separating the daughter cells after each division. In simultaneous microsporogenesis, there is no wall formation until all four cell nuclei are present. Liliales all have successive microsporogenesis, which is thought to be the primitive condition in monocots. It seems that when the Asparagales first diverged they developed simultaneous microsporogenesis, which the 'lower' Asparagales families retain. However, the 'core' Asparagales (see Phylogenetics ) have reverted to successive microsporogenesis. 
The Asparagales appear to be unified by a mutation affecting their telomeres (a region of repetitive DNA at the end of a chromosome). The typical 'Arabidopsis-type' sequence of bases has been fully or partially replaced by other sequences, with the 'human-type' predominating. Other apomorphic characters of the order according to Stevens are: the presence of chelidonic acid, anthers longer than wide, tapetal cells bi- to tetra-nuclear, tegmen not persistent, endosperm helobial, and loss of mitochondrial gene sdh3. According to telomere sequence, at least two evolutionary switch-points happened within the order. The basal sequence is formed by TTTAGGG like in the majority of higher plants. Basal motif was changed to vertebrate-like TTAGGG and finally, the most divergent motif CTCGGTTATGGG appears in Allium. Taxonomy As circumscribed within the Angiosperm Phylogeny Group system Asparagales is the largest order within the monocotyledons, with 14 families, 1,122 genera and about 25,000–42,000 species, thus accounting for about 50% of all monocots and 10–15% of the flowering plants (angiosperms). The attribution of botanical authority for the name Asparagales belongs to Johann Heinrich Friedrich Link (1767–1851) who coined the word 'Asparaginae' in 1829 for a higher order taxon that included Asparagus although Adanson and Jussieau had also done so earlier (see History). Earlier circumscriptions of Asparagales attributed the name to Bromhead (1838), who had been the first to use the term 'Asparagales'. History Pre-Darwinian The type genus, Asparagus, from which the name of the order is derived, was described by Carl Linnaeus in 1753, with ten species. He placed Asparagus within the Hexandria Monogynia (six stamens, one carpel) in his sexual classification in the Species Plantarum. The majority of taxa now considered to constitute Asparagales have historically been placed within the very large and diverse family, Liliaceae. The family Liliaceae was first described by Michel Adanson in 1763, and in his taxonomic scheme he created eight sections within it, including the Asparagi with Asparagus and three other genera. The system of organising genera into families is generally credited to Antoine Laurent de Jussieu who formally described both the Liliaceae and the type family of Asparagales, the Asparagaceae, as Lilia and Asparagi, respectively, in 1789. Jussieu established the hierarchical system of taxonomy (phylogeny), placing Asparagus and related genera within a division of Monocotyledons, a class (III) of Stamina Perigynia and 'order' Asparagi, divided into three subfamilies. The use of the term Ordo (order) at that time was closer to what we now understand as Family, rather than Order. In creating his scheme he used a modified form of Linnaeus' sexual classification but using the respective topography of stamens to carpels rather than just their numbers. While De Jussieu's Stamina Perigynia also included a number of 'orders' that would eventually form families within the Asparagales such as the Asphodeli (Asphodelaceae), Narcissi (Amaryllidaceae) and Irides (Iridaceae), the remainder are now allocated to other orders. Jussieu's Asparagi soon came to be referred to as Asparagacées in the French literature (Latin: Asparagaceae). Meanwhile, the 'Narcissi' had been renamed as the 'Amaryllidées' (Amaryllideae) in 1805, by Jean Henri Jaume Saint-Hilaire, using Amaryllis as the type species rather than Narcissus, and thus has the authority attribution for Amaryllidaceae. 
In 1810, Brown proposed that a subgroup of Liliaceae be distinguished on the basis of the position of the ovaries and be referred to as Amaryllideae and in 1813 de Candolle described Liliacées Juss. and Amaryllidées Brown as two quite separate families. The literature on the organisation of genera into families and higher ranks became available in the English language with Samuel Frederick Gray's A natural arrangement of British plants (1821). Gray used a combination of Linnaeus' sexual classification and Jussieu's natural classification to group together a number of families having in common six equal stamens, a single style and a perianth that was simple and petaloid, but did not use formal names for these higher ranks. Within the grouping he separated families by the characteristics of their fruit and seed. He treated groups of genera with these characteristics as separate families, such as Amaryllideae, Liliaceae, Asphodeleae and Asparageae. The circumscription of Asparagales has been a source of difficulty for many botanists from the time of John Lindley (1846), the other important British taxonomist of the early nineteenth century. In his first taxonomic work, An Introduction to the Natural System of Botany (1830) he partly followed Jussieu by describing a subclass he called Endogenae, or Monocotyledonous Plants (preserving de Candolle's Endogenæ phanerogamæ) divided into two tribes, the Petaloidea and Glumaceae. He divided the former, often referred to as petaloid monocots, into 32 orders, including the Liliaceae (defined narrowly), but also most of the families considered to make up the Asparagales today, including the Amaryllideae. By 1846, in his final scheme Lindley had greatly expanded and refined the treatment of the monocots, introducing both an intermediate ranking (Alliances) and tribes within orders (i.e. families). Lindley placed the Liliaceae within the Liliales, but saw it as a paraphyletic ("catch-all") family, being all Liliales not included in the other orders, but hoped that the future would reveal some characteristic that would group them better. The order Liliales was very large and included almost all monocotyledons with colourful tepals and without starch in their endosperm (the lilioid monocots). The Liliales was difficult to divide into families because morphological characters were not present in patterns that clearly demarcated groups. This kept the Liliaceae separate from the Amaryllidaceae (Narcissales). Of these, Liliaceae was divided into eleven tribes (with 133 genera) and Amaryllidaceae into four tribes (with 68 genera), yet both contained many genera that would eventually segregate to each other's contemporary orders (Liliales and Asparagales respectively). The Liliaceae would be reduced to a small 'core' represented by the tribe Tulipae, while large groups such Scilleae and Asparagae would become part of Asparagales either as part of the Amaryllidaceae or as separate families. While of the Amaryllidaceae, the Agaveae would be part of Asparagaceae but the Alstroemeriae would become a family within the Liliales. The number of known genera (and species) continued to grow and by the time of the next major British classification, that of the Bentham & Hooker system in 1883 (published in Latin) several of Lindley's other families had been absorbed into the Liliaceae. They used the term 'series' to indicate suprafamilial rank, with seven series of monocotyledons (including Glumaceae), but did not use Lindley's terms for these. 
However, they did place the Liliaceous and Amaryllidaceous genera into separate series. The Liliaceae were placed in series Coronariae, while the Amaryllideae were placed in series Epigynae. The Liliaceae now consisted of twenty tribes (including Tulipeae, Scilleae and Asparageae), and the Amaryllideae of five (including Agaveae and Alstroemerieae). An important addition to the treatment of the Liliaceae was the recognition of the Allieae as a distinct tribe that would eventually find its way to the Asparagales as the subfamily Allioideae of the Amaryllidaceae. Post-Darwinian The appearance of Charles Darwin's Origin of Species in 1859 changed the way that taxonomists considered plant classification, incorporating evolutionary information into their schemata. The Darwinian approach led to the concept of phylogeny (tree-like structure) in assembling classification systems, starting with Eichler. Eichler, having established a hierarchical system in which the flowering plants (angiosperms) were divided into monocotyledons and dicotyledons, further divided into former into seven orders. Within the Liliiflorae were seven families, including Liliaceae and Amaryllidaceae. Liliaceae included Allium and Ornithogalum (modern Allioideae) and Asparagus. Engler, in his system developed Eichler's ideas into a much more elaborate scheme which he treated in a number of works including Die Natürlichen Pflanzenfamilien (Engler and Prantl 1888) and Syllabus der Pflanzenfamilien (1892–1924). In his treatment of Liliiflorae the Liliineae were a suborder which included both families Liliaceae and Amaryllidaceae. The Liliaceae had eight subfamilies and the Amaryllidaceae four. In this rearrangement of Liliaceae, with fewer subdivisions, the core Liliales were represented as subfamily Lilioideae (with Tulipae and Scilleae as tribes), the Asparagae were represented as Asparagoideae and the Allioideae was preserved, representing the alliaceous genera. Allieae, Agapantheae and Gilliesieae were the three tribes within this subfamily. In the Amaryllidaceae, there was little change from the Bentham & Hooker. A similar approach was adopted by Wettstein. Twentieth century In the twentieth century the Wettstein system (1901–1935) placed many of the taxa in an order called 'Liliiflorae'. Next Johannes Paulus Lotsy (1911) proposed dividing the Liliiflorae into a number of smaller families including Asparagaceae. Then Herbert Huber (1969, 1977), following Lotsy's example, proposed that the Liliiflorae be split into four groups including the 'Asparagoid' Liliiflorae. The widely used Cronquist system (1968–1988) used the very broadly defined order Liliales. These various proposals to separate small groups of genera into more homogeneous families made little impact till that of Dahlgren (1985) incorporating new information including synapomorphy. Dahlgren developed Huber's ideas further and popularised them, with a major deconstruction of existing families into smaller units. They created a new order, calling it Asparagales. This was one of five orders within the superorder Liliiflorae. Where Cronquist saw one family, Dahlgren saw forty distributed over three orders (predominantly Liliales and Asparagales). Over the 1980s, in the context of a more general review of the classification of angiosperms, the Liliaceae were subjected to more intense scrutiny. 
By the end of that decade, the Royal Botanic Gardens at Kew, the British Museum of Natural History and the Edinburgh Botanical Gardens formed a committee to examine the possibility of separating the family at least for the organization of their herbaria. That committee finally recommended that 24 new families be created in the place of the original broad Liliaceae, largely by elevating subfamilies to the rank of separate families. Phylogenetics The order Asparagales as currently circumscribed has only recently been recognized in classification systems, through the advent of phylogenetics. The 1990s saw considerable progress in plant phylogeny and phylogenetic theory, enabling a phylogenetic tree to be constructed for all of the flowering plants. The establishment of major new clades necessitated a departure from the older but widely used classifications such as Cronquist and Thorne based largely on morphology rather than genetic data. This complicated the discussion about plant evolution and necessitated a major restructuring. rbcL gene sequencing and cladistic analysis of monocots had redefined the Liliales in 1995. from four morphological orders sensu Dahlgren. The largest clade representing the Liliaceae, all previously included in Liliales, but including both the Calochortaceae and Liliaceae sensu Tamura. This redefined family, that became referred to as core Liliales, but corresponded to the emerging circumscription of the Angiosperm Phylogeny Group (1998). Phylogeny and APG system The 2009 revision of the Angiosperm Phylogeny Group system, APG III, places the order in the clade monocots. From the Dahlgren system of 1985 onwards, studies based mainly on morphology had identified the Asparagales as a distinct group, but had also included groups now located in Liliales, Pandanales and Zingiberales. Research in the 21st century has supported the monophyly of Asparagales, based on morphology, 18S rDNA, and other DNA sequences, although some phylogenetic reconstructions based on molecular data have suggested that Asparagales may be paraphyletic, with Orchidaceae separated from the rest. Within the monocots, Asparagales is the sister group of the commelinid clade. This cladogram shows the placement of Asparagales within the orders of Lilianae sensu Chase & Reveal (monocots) based on molecular phylogenetic evidence. The lilioid monocot orders are bracketed, namely Petrosaviales, Dioscoreales, Pandanales, Liliales and Asparagales. These constitute a paraphyletic assemblage, that is groups with a common ancestor that do not include all direct descendants (in this case commelinids as the sister group to Asparagales); to form a clade, all the groups joined by thick lines would need to be included. While Acorales and Alismatales have been collectively referred to as "alismatid monocots" (basal or early branching monocots), the remaining clades (lilioid and commelinid monocots) have been referred to as the "core monocots". The relationship between the orders (with the exception of the two sister orders) is pectinate, that is diverging in succession from the line that leads to the commelinids. Numbers indicate crown group (most recent common ancestor of the sampled species of the clade of interest) divergence times in mya (million years ago). Subdivision A phylogenetic tree for the Asparagales, generally to family level, but including groups which were recently and widely treated as families but which are now reduced to subfamily rank, is shown below. 
The tree shown above can be divided into a basal paraphyletic group, the 'lower Asparagales (asparagoids)', from Orchidaceae to Asphodelaceae, and a well-supported monophyletic group of 'core Asparagales' (higher asparagoids), comprising the two largest families, Amaryllidaceae sensu lato and Asparagaceae sensu lato. Two differences between these two groups (although with exceptions) are: the mode of microsporogenesis and the position of the ovary. The 'lower Asparagales' typically have simultaneous microsporogenesis (i.e. cell walls develop only after both meiotic divisions), which appears to be an apomorphy within the monocots, whereas the 'core Asparagales' have reverted to successive microsporogenesis (i.e. cell walls develop after each division). The 'lower Asparagales' typically have an inferior ovary, whereas the 'core Asparagales' have reverted to a superior ovary. A 2002 morphological study by Rudall treated possessing an inferior ovary as a synapomorphy of the Asparagales, stating that reversions to a superior ovary in the 'core Asparagales' could be associated with the presence of nectaries below the ovaries. However, Stevens notes that superior ovaries are distributed among the 'lower Asparagales' in such a way that it is not clear where to place the evolution of different ovary morphologies. The position of the ovary seems a much more flexible character (here and in other angiosperms) than previously thought. Changes to family structure in APG III The APG III system when it was published in 2009, greatly expanded the families Xanthorrhoeaceae, Amaryllidaceae, and Asparagaceae. Thirteen of the families of the earlier APG II system were thereby reduced to subfamilies within these three families. The expanded Xanthorrhoeaceae is now called "Asphodelaceae". The APG II families (left) and their equivalent APG III subfamilies (right) are as follows: Structure of Asparagales Orchid clade Orchidaceae is possibly the largest family of all angiosperms (only Asteraceae might – or might not – be more speciose) and hence by far the largest in the order. The Dahlgren system recognized three families of orchids, but DNA sequence analysis later showed that these families are polyphyletic and so should be combined. Several studies suggest (with high bootstrap support) that Orchidaceae is the sister of the rest of the Asparagales. Other studies have placed the orchids differently in the phylogenetic tree, generally among the Boryaceae-Hypoxidaceae clade. The position of Orchidaceae shown above seems the best current hypothesis, but cannot be taken as confirmed. Orchids have simultaneous microsporogenesis and inferior ovaries, two characters that are typical of the 'lower Asparagales'. However, their nectaries are rarely in the septa of the ovaries, and most orchids have dust-like seeds, atypical of the rest of the order. (Some members of Vanilloideae and Cypripedioideae have crustose seeds, probably associated with dispersal by birds and mammals that are attracted by fermenting fleshy fruit releasing fragrant compounds, e.g. vanilla.) In terms of the number of species, Orchidaceae diversification is remarkable, with recent estimations suggesting that despite the old origin of the family dating back to the late cretaceous, modern orchid diversity originated mostly during the last 5 million years. However, although the other Asparagales may be less rich in species, they are more variable morphologically, including tree-like forms. 
Boryaceae to Hypoxidaceae The four families excluding Boryaceae form a well-supported clade in studies based on DNA sequence analysis. All four contain relatively few species, and it has been suggested that they be combined into one family under the name Hypoxidaceae sensu lato. The relationship between Boryaceae (which includes only two genera, Borya and Alania), and other Asparagales has remained unclear for a long time. The Boryaceae are mycorrhizal, but not in the same way as orchids. Morphological studies have suggested a close relationship between Boryaceae and Blandfordiaceae. There is relatively low support for the position of Boryaceae in the tree shown above. Ixioliriaceae to Xeronemataceae The relationship shown between Ixioliriaceae and Tecophilaeaceae is still unclear. Some studies have supported a clade of these two families, others have not. The position of Doryanthaceae has also varied, with support for the position shown above, but also support for other positions. The clade from Iridaceae upwards appears to have stronger support. All have some genetic characteristics in common, having lost Arabidopsis-type telomeres. Iridaceae is distinctive among the Asparagales in the unique structure of the inflorescence (a rhipidium), the combination of an inferior ovary and three stamens, and the common occurrence of unifacial leaves whereas bifacial leaves are the norm in other Asparagales. Members of the clade from Iridaceae upwards have infra-locular septal nectaries, which Rudall interpreted as a driver towards secondarily superior ovaries. Asphodelaceae + 'core Asparagales' The next node in the tree (Xanthorrhoeaceae sensu lato + the 'core Asparagales') has strong support. 'Anomalous' secondary thickening occurs among this clade, e.g. in Xanthorrhoea (family Asphodelaceae) and Dracaena (family Asparagaceae sensu lato), with species reaching tree-like proportions. The 'core Asparagales', comprising Amaryllidaceae sensu lato and Asparagaceae sensu lato, are a strongly supported clade, as are clades for each of the families. Relationships within these broadly defined families appear less clear, particularly within the Asparagaceae sensu lato. Stevens notes that most of its subfamilies are difficult to recognize, and that significantly different divisions have been used in the past, so that the use of a broadly defined family to refer to the entire clade is justified. Thus the relationships among subfamilies shown above, based on APWeb , is somewhat uncertain. Evolution Several studies have attempted to date the evolution of the Asparagales, based on phylogenetic evidence. Earlier studies generally give younger dates than more recent studies, which have been preferred in the table below. A 2009 study suggests that the Asparagales have the highest diversification rate in the monocots, about the same as the order Poales, although in both orders the rate is little over half that of the eudicot order Lamiales, the clade with the highest rate. Comparison of family structures The taxonomic diversity of the monocotyledons is described in detail by Kubitzki. Up-to-date information on the Asparagales can be found on the Angiosperm Phylogeny Website. The APG III system's family circumscriptions are being used as the basis of the Kew-hosted World Checklist of Selected Plant Families. With this circumscription, the order consists of 14 families (Dahlgren had 31) with approximately 1120 genera and 26000 species. Order Asparagales Link Family Amaryllidaceae J.St.-Hil. 
(including Agapanthaceae F.Voigt, Alliaceae Borkh.)
Family Asparagaceae Juss. (including Agavaceae Dumort. [which includes Anemarrhenaceae, Anthericaceae, Behniaceae and Herreriaceae], Aphyllanthaceae Burnett, Hesperocallidaceae Traub, Hyacinthaceae Batsch ex Borkh., Laxmanniaceae Bubani, Ruscaceae M.Roem. [which includes Convallariaceae] and Themidaceae Salisb.)
Family Asteliaceae Dumort.
Family Blandfordiaceae R.Dahlgren & Clifford
Family Boryaceae M.W. Chase, Rudall & Conran
Family Doryanthaceae R.Dahlgren & Clifford
Family Hypoxidaceae R.Br.
Family Iridaceae Juss.
Family Ixioliriaceae Nakai
Family Lanariaceae R.Dahlgren & A.E.van Wyk
Family Orchidaceae Juss.
Family Tecophilaeaceae Leyb.
Family Xanthorrhoeaceae Dumort. (including Asphodelaceae Juss. and Hemerocallidaceae R.Br.), now Asphodelaceae Juss.
Family Xeronemataceae M.W.Chase, Rudall & M.F.Fay
The earlier 2003 version, APG II, allowed 'bracketed' families, i.e. families which could either be segregated from more comprehensive families or could be included in them. These are the families given under "including" in the list above. APG III does not allow bracketed families, requiring the use of the more comprehensive family; otherwise the circumscription of the Asparagales is unchanged. A separate paper accompanying the publication of the 2009 APG III system provided subfamilies to accommodate the families which were discontinued. The first APG system of 1998 contained some extra families, included in square brackets in the list above. Two older systems which use the order Asparagales are the Dahlgren system and the Kubitzki system. The families included in the circumscriptions of the order in these two systems are shown in the first and second columns of the table below. The equivalent family in the modern APG III system (see below) is shown in the third column. Note that although these systems may use the same name for a family, the genera which it includes may be different, so the equivalence between systems is only approximate in some cases. Uses The Asparagales include many important crop plants and ornamental plants. Crops include Allium, Asparagus and Vanilla, while ornamentals include irises, hyacinths and orchids.
Biology and health sciences
Asparagales
Plants
789
https://en.wikipedia.org/wiki/Asterales
Asterales
Asterales ( ) is an order of dicotyledonous flowering plants that includes the large family Asteraceae (or Compositae) known for composite flowers made of florets, and ten families related to the Asteraceae. While asterids in general are characterized by fused petals, composite flowers consisting of many florets create the false appearance of separate petals (as found in the rosids). The order is cosmopolitan (plants found throughout most of the world including desert and frigid zones), and includes mostly herbaceous species, although a small number of trees (such as the Lobelia deckenii, the giant lobelia, and Dendrosenecio, giant groundsels) and shrubs are also present. Asterales are organisms that seem to have evolved from one common ancestor. Asterales share characteristics on morphological and biochemical levels. Synapomorphies (a character that is shared by two or more groups through evolutionary development) include the presence in the plants of oligosaccharide inulin, a nutrient storage molecule used instead of starch; and unique stamen morphology. The stamens are usually found around the style, either aggregated densely or fused into a tube, probably an adaptation in association with the plunger (brush; or secondary) pollination that is common among the families of the order, wherein pollen is collected and stored on the length of the pistil. Taxonomy The name and order Asterales is botanically venerable, dating back to at least 1926 in the Hutchinson system of plant taxonomy when it contained only five families, of which only two are retained in the APG III classification. Under the Cronquist system of taxonomic classification of flowering plants, Asteraceae was the only family in the group, but newer systems (such as APG II and APG III) have expanded it to 11. In the classification system of Rolf Dahlgren the Asterales were in the superorder Asteriflorae (also called Asteranae). The order Asterales currently includes 11 families, the largest of which are the Asteraceae, with about 25,000 species, and the Campanulaceae (bellflowers), with about 2,000 species. The remaining families count together for less than 1500 species. The two large families are cosmopolitan, with many of their species found in the Northern Hemisphere, and the smaller families are usually confined to Australia and the adjacent areas, or sometimes South America. Only the Asteraceae have composite flower heads; the other families do not, but share other characteristics such as storage of inulin that define the 11 families as more closely related to each other than to other plant families or orders such as the rosids. The phylogenetic tree according to APG III for the Campanulid clade is as below. Phylogeny Although most extant species of Asteraceae are herbaceous, the examination of the basal members in the family suggests that the common ancestor of the family was an arborescent plant, a tree or shrub, perhaps adapted to dry conditions, radiating from South America. Less can be said about the Asterales themselves with certainty, although since several families in Asterales contain trees, the ancestral member is most likely to have been a tree or shrub. Because all clades are represented in the Southern Hemisphere but many not in the Northern Hemisphere, it is natural to conjecture that there is a common southern origin to them. Asterales belong to angiosperms or flowering plants, a clade that appeared about 140 million years ago. 
The Asterales order probably originated in the Cretaceous (145 – 66 Mya) on the supercontinent Gondwana, which broke up from 184 – 80 Mya, forming the area that is now Australia, South America, Africa, India and Antarctica. Asterales contain about 14% of eudicot diversity. From an analysis of relationships and diversities within the Asterales and with their superorders, estimates of the age of the beginning of the Asterales have been made, which range from 116 to 82 Mya. However, few fossils have been found; fossils of the Menyanthaceae-Asteraceae clade date to the Oligocene, about 29 Mya. Fossil evidence of the Asterales is rare and belongs to rather recent epochs, so the precise estimation of the order's age is quite difficult. Oligocene (34 – 23 Mya) pollen is known for Asteraceae and Goodeniaceae, and seeds from the Oligocene and Miocene (23 – 5.3 Mya) are known for Menyanthaceae and Campanulaceae respectively. According to molecular clock calculations, the lineage that led to Asterales split from other plants about 112 million years ago, or about 94 million years ago according to other estimates. Biogeography The core Asterales are Stylidiaceae (six genera), the APA clade (Alseuosmiaceae, Phellinaceae and Argophyllaceae, together seven genera), the MGCA clade (Menyanthaceae, Goodeniaceae, Calyceraceae, in total twenty genera), and Asteraceae (about sixteen hundred genera). Other Asterales are Rousseaceae (four genera), Campanulaceae (eighty-four genera) and Pentaphragmataceae (one genus). All Asterales families are represented in the Southern Hemisphere; however, Asteraceae and Campanulaceae are cosmopolitan and Menyanthaceae nearly so. Uses The Asterales, being a superset of the family Asteraceae, include some species grown for food, including the sunflower (Helianthus annuus), lettuce (Lactuca sativa) and chicory (Cichorium). Many are also used as spices and traditional medicines. Asterales are common plants and have many known uses. For example, pyrethrum (derived from Old World members of the genus Chrysanthemum) is a natural insecticide with minimal environmental impact. Wormwood, derived from a genus that includes the sagebrush, is used as a source of flavoring for absinthe, a bitter classical liquor of European origin.
Biology and health sciences
Asterales
Plants
791
https://en.wikipedia.org/wiki/Asteroid
Asteroid
An asteroid is a minor planet—an object that is neither a true planet nor an identified comet— that orbits within the inner Solar System. They are rocky, metallic, or icy bodies with no atmosphere, classified as C-type (carbonaceous), M-type (metallic), or S-type (silicaceous). The size and shape of asteroids vary significantly, ranging from small rubble piles under a kilometer across and larger than meteoroids, to Ceres, a dwarf planet almost 1000 km in diameter. A body is classified as a comet, not an asteroid, if it shows a coma (tail) when warmed by solar radiation, although recent observations suggest a continuum between these types of bodies. Of the roughly one million known asteroids, the greatest number are located between the orbits of Mars and Jupiter, approximately 2 to 4 AU from the Sun, in a region known as the main asteroid belt. The total mass of all the asteroids combined is only 3% that of Earth's Moon. The majority of main belt asteroids follow slightly elliptical, stable orbits, revolving in the same direction as the Earth and taking from three to six years to complete a full circuit of the Sun. Asteroids have historically been observed from Earth. The first close-up observation of an asteroid was made by the Galileo spacecraft. Several dedicated missions to asteroids were subsequently launched by NASA and JAXA, with plans for other missions in progress. NASA's NEAR Shoemaker studied Eros, and Dawn observed Vesta and Ceres. JAXA's missions Hayabusa and Hayabusa2 studied and returned samples of Itokawa and Ryugu, respectively. OSIRIS-REx studied Bennu, collecting a sample in 2020 which was delivered back to Earth in 2023. NASA's Lucy, launched in 2021, is tasked with studying ten different asteroids, two from the main belt and eight Jupiter trojans. Psyche, launched October 2023, aims to study the metallic asteroid Psyche. Near-Earth asteroids have the potential for catastrophic consequences if they strike Earth, with a notable example being the Chicxulub impact, widely thought to have induced the Cretaceous–Paleogene mass extinction. As an experiment to meet this danger, in September 2022 the Double Asteroid Redirection Test spacecraft successfully altered the orbit of the non-threatening asteroid Dimorphos by crashing into it. Terminology In 2006, the International Astronomical Union (IAU) introduced the currently preferred broad term small Solar System body, defined as an object in the Solar System that is neither a planet, a dwarf planet, nor a natural satellite; this includes asteroids, comets, and more recently discovered classes. According to IAU, "the term 'minor planet' may still be used, but generally, 'Small Solar System Body' will be preferred." Historically, the first discovered asteroid, Ceres, was at first considered a new planet. It was followed by the discovery of other similar bodies, which with the equipment of the time appeared to be points of light like stars, showing little or no planetary disc, though readily distinguishable from stars due to their apparent motions. This prompted the astronomer Sir William Herschel to propose the term asteroid, coined in Greek as ἀστεροειδής, or asteroeidēs, meaning 'star-like, star-shaped', and derived from the Ancient Greek astēr 'star, planet'. In the early second half of the 19th century, the terms asteroid and planet (not always qualified as "minor") were still used interchangeably. 
Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. The term asteroid, never officially defined, can be informally used to mean "an irregularly shaped rocky body orbiting the Sun that does not qualify as a planet or a dwarf planet under the IAU definitions". The main difference between an asteroid and a comet is that a comet shows a coma (tail) due to sublimation of its near-surface ices by solar radiation. A few objects were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like. A further distinction is that comets typically have more eccentric orbits than most asteroids; highly eccentric asteroids are probably dormant or extinct comets. The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term asteroid to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects. For almost two centuries after the discovery of Ceres in 1801, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few, such as 944 Hidalgo, ventured farther for part of their orbit. Starting in 1977 with 2060 Chiron, astronomers discovered small bodies that permanently resided further out than Jupiter, now called centaurs. In 1992, 15760 Albion was discovered, the first object beyond the orbit of Neptune (other than Pluto); soon large numbers of similar objects were observed, now called trans-Neptunian objects. Further out are Kuiper-belt objects, scattered-disc objects, and the much more distant Oort cloud, hypothesized to be the main reservoir of dormant comets. They inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies exhibit little cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets. The Kuiper-belt bodies are called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line. In 2006, the IAU created the class of dwarf planets for the largest minor planets—those massive enough to have become ellipsoidal under their own gravity. Only the largest object in the asteroid belt has been placed in this category: Ceres, which is almost 1,000 km across. History of observations Despite their large numbers, asteroids are a relatively recent discovery, with the first one—Ceres—only being identified in 1801. Only one asteroid, 4 Vesta, which has a relatively reflective surface, is normally visible to the naked eye in dark skies when it is favorably positioned. 
Rarely, small asteroids passing close to Earth may be briefly visible to the naked eye. As of a recent count, the Minor Planet Center had data on 1,199,224 minor planets in the inner and outer Solar System, of which about 614,690 had enough information to be given numbered designations. Discovery of Ceres In 1772, German astronomer Johann Elert Bode, citing Johann Daniel Titius, published a numerical progression known as the Titius–Bode law (now discredited). Except for an unexplained gap between Mars and Jupiter, Bode's formula seemed to predict the orbits of the known planets. He wrote the following explanation for the existence of a "missing planet": This latter point seems in particular to follow from the astonishing relation which the known six planets observe in their distances from the Sun. Let the distance from the Sun to Saturn be taken as 100, then Mercury is separated by 4 such parts from the Sun. Venus is 4 + 3 = 7. The Earth 4 + 6 = 10. Mars 4 + 12 = 16. Now comes a gap in this so orderly progression. After Mars there follows a space of 4 + 24 = 28 parts, in which no planet has yet been seen. Can one believe that the Founder of the universe had left this space empty? Certainly not. From here we come to the distance of Jupiter by 4 + 48 = 52 parts, and finally to that of Saturn by 4 + 96 = 100 parts. Bode's formula predicted another planet would be found with an orbital radius near 2.8 astronomical units (AU), or 420 million km, from the Sun. The Titius–Bode law gained credibility with William Herschel's discovery of Uranus near the predicted distance for a planet beyond Saturn. In 1800, a group headed by Franz Xaver von Zach, editor of the German astronomical journal Monatliche Correspondenz (Monthly Correspondence), sent requests to 24 experienced astronomers (whom he dubbed the "celestial police"), asking that they combine their efforts and begin a methodical search for the expected planet. Although they did not discover Ceres, they later found the asteroids 2 Pallas, 3 Juno and 4 Vesta. One of the astronomers selected for the search was Giuseppe Piazzi, a Catholic priest at the Academy of Palermo, Sicily. Before receiving his invitation to join the group, Piazzi discovered Ceres on 1 January 1801. He was searching for "the 87th [star] of the Catalogue of the Zodiacal stars of Mr la Caille", but found that "it was preceded by another". Instead of a star, Piazzi had found a moving star-like object, which he first thought was a comet: The light was a little faint, and of the colour of Jupiter, but similar to many others which generally are reckoned of the eighth magnitude. Therefore I had no doubt of its being any other than a fixed star. [...] The evening of the third, my suspicion was converted into certainty, being assured it was not a fixed star. Nevertheless before I made it known, I waited till the evening of the fourth, when I had the satisfaction to see it had moved at the same rate as on the preceding days. Piazzi observed Ceres a total of 24 times, the final time on 11 February 1801, when illness interrupted his work. He announced his discovery on 24 January 1801 in letters to only two fellow astronomers, his compatriot Barnaba Oriani of Milan and Bode in Berlin. He reported it as a comet but "since its movement is so slow and rather uniform, it has occurred to me several times that it might be something better than a comet". In April, Piazzi sent his complete observations to Oriani, Bode, and French astronomer Jérôme Lalande. 
The information was published in the September 1801 issue of the Monatliche Correspondenz. By this time, the apparent position of Ceres had changed (mostly due to Earth's motion around the Sun), and was too close to the Sun's glare for other astronomers to confirm Piazzi's observations. Toward the end of the year, Ceres should have been visible again, but after such a long time it was difficult to predict its exact position. To recover Ceres, mathematician Carl Friedrich Gauss, then 24 years old, developed an efficient method of orbit determination. In a few weeks, he predicted the path of Ceres and sent his results to von Zach. On 31 December 1801, von Zach and fellow celestial policeman Heinrich W. M. Olbers found Ceres near the predicted position and thus recovered it. At 2.8 AU from the Sun, Ceres appeared to fit the Titius–Bode law almost perfectly; however, Neptune, once discovered in 1846, was 8 AU closer than predicted, leading most astronomers to conclude that the law was a coincidence. Piazzi named the newly discovered object Ceres Ferdinandea, "in honor of the patron goddess of Sicily and of King Ferdinand of Bourbon". Further search Three other asteroids (2 Pallas, 3 Juno, and 4 Vesta) were discovered by von Zach's group over the next few years, with Vesta found in 1807. No new asteroids were discovered until 1845. Amateur astronomer Karl Ludwig Hencke started his searches of new asteroids in 1830, and fifteen years later, while looking for Vesta, he found the asteroid later named 5 Astraea. It was the first new asteroid discovery in 38 years. Carl Friedrich Gauss was given the honor of naming the asteroid. After this, other astronomers joined; 15 asteroids were found by the end of 1851. In 1868, when James Craig Watson discovered the 100th asteroid, the French Academy of Sciences engraved the faces of Karl Theodor Robert Luther, John Russell Hind, and Hermann Goldschmidt, the three most successful asteroid-hunters at that time, on a commemorative medallion marking the event. In 1891, Max Wolf pioneered the use of astrophotography to detect asteroids, which appeared as short streaks on long-exposure photographic plates. This dramatically increased the rate of detection compared with earlier visual methods: Wolf alone discovered 248 asteroids, beginning with 323 Brucia, whereas only slightly more than 300 had been discovered up to that point. It was known that there were many more, but most astronomers did not bother with them, some calling them "vermin of the skies", a phrase variously attributed to Eduard Suess and Edmund Weiss. Even a century later, only a few thousand asteroids were identified, numbered and named. 19th and 20th centuries In the past, asteroids were discovered by a four-step process. First, a region of the sky was photographed by a wide-field telescope or astrograph. Pairs of photographs were taken, typically one hour apart. Multiple pairs could be taken over a series of days. Second, the two films or plates of the same region were viewed under a stereoscope. A body in orbit around the Sun would move slightly between the pair of films. Under the stereoscope, the image of the body would seem to float slightly above the background of stars. Third, once a moving body was identified, its location would be measured precisely using a digitizing microscope. The location would be measured relative to known star locations. 
These first three steps do not constitute asteroid discovery: the observer has only found an apparition, which gets a provisional designation, made up of the year of discovery, a letter representing the half-month of discovery, and finally a letter and a number indicating the discovery's sequential number (example: ). The last step is sending the locations and time of observations to the Minor Planet Center, where computer programs determine whether an apparition ties together earlier apparitions into a single orbit. If so, the object receives a catalogue number and the observer of the first apparition with a calculated orbit is declared the discoverer, and granted the honor of naming the object subject to the approval of the International Astronomical Union. Naming By 1851, the Royal Astronomical Society decided that asteroids were being discovered at such a rapid rate that a different system was needed to categorize or name asteroids. In 1852, when de Gasparis discovered the twentieth asteroid, Benjamin Valz gave it a name and a number designating its rank among asteroid discoveries, 20 Massalia. Sometimes asteroids were discovered and not seen again. So, starting in 1892, new asteroids were listed by the year and a capital letter indicating the order in which the asteroid's orbit was calculated and registered within that specific year. For example, the first two asteroids discovered in 1892 were labeled 1892A and 1892B. However, there were not enough letters in the alphabet for all of the asteroids discovered in 1893, so 1893Z was followed by 1893AA. A number of variations of these methods were tried, including designations that included year plus a Greek letter in 1914. A simple chronological numbering system was established in 1925. Currently all newly discovered asteroids receive a provisional designation (such as ) consisting of the year of discovery and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name (e.g. ). The formal naming convention uses parentheses around the number—e.g. (433) Eros—but dropping the parentheses is quite common. Informally, it is also common to drop the number altogether, or to drop it after the first mention when a name is repeated in running text. In addition, names can be proposed by the asteroid's discoverer, within guidelines established by the International Astronomical Union. Symbols The first asteroids to be discovered were assigned iconic symbols like the ones traditionally used to designate the planets. By 1852 there were two dozen asteroid symbols, which often occurred in multiple variants. In 1851, after the fifteenth asteroid, Eunomia, had been discovered, Johann Franz Encke made a major change in the upcoming 1854 edition of the Berliner Astronomisches Jahrbuch (BAJ, Berlin Astronomical Yearbook). He introduced a disk (circle), a traditional symbol for a star, as the generic symbol for an asteroid. The circle was then numbered in order of discovery to indicate a specific asteroid. The numbered-circle convention was quickly adopted by astronomers, and the next asteroid to be discovered (16 Psyche, in 1852) was the first to be designated in that way at the time of its discovery. However, Psyche was given an iconic symbol as well, as were a few other asteroids discovered over the next few years. 
20 Massalia was the first asteroid that was not assigned an iconic symbol, and no iconic symbols were created after the 1855 discovery of 37 Fides. Formation Many asteroids are the shattered remnants of planetesimals, bodies within the young Sun's solar nebula that never grew large enough to become planets. It is thought that planetesimals in the asteroid belt evolved much like the rest of the objects in the solar nebula until Jupiter neared its current mass, at which point excitation from orbital resonances with Jupiter ejected over 99% of planetesimals in the belt. Simulations and a discontinuity in spin rate and spectral properties suggest that asteroids larger than approximately 120 km in diameter accreted during that early era, whereas smaller bodies are fragments from collisions between asteroids during or after the Jovian disruption. Ceres and Vesta grew large enough to melt and differentiate, with heavy metallic elements sinking to the core, leaving rocky minerals in the crust. In the Nice model, many Kuiper-belt objects were captured in the outer asteroid belt, at distances greater than 2.6 AU. Most were later ejected by Jupiter, but those that remained may be the D-type asteroids, and possibly include Ceres. Distribution within the Solar System Various dynamical groups of asteroids have been discovered orbiting in the inner Solar System. Their orbits are perturbed by the gravity of other bodies in the Solar System and by the Yarkovsky effect. Significant populations include: Asteroid belt The majority of known asteroids orbit within the asteroid belt between the orbits of Mars and Jupiter, generally in relatively low-eccentricity (i.e. not very elongated) orbits. This belt is estimated to contain between 1.1 and 1.9 million asteroids larger than 1 km in diameter, and millions of smaller ones. These asteroids may be remnants of the protoplanetary disk, and in this region the accretion of planetesimals into planets during the formative period of the Solar System was prevented by large gravitational perturbations by Jupiter. Contrary to popular imagery, the asteroid belt is mostly empty. The asteroids are spread over such a large volume that reaching an asteroid without aiming carefully would be improbable. Nonetheless, hundreds of thousands of asteroids are currently known, and the total number ranges in the millions or more, depending on the lower size cutoff. Over 200 asteroids are known to be larger than 100 km, and a survey in the infrared wavelengths has shown that the asteroid belt has between 700,000 and 1.7 million asteroids with a diameter of 1 km or more. The absolute magnitudes of most of the known asteroids are between 11 and 19, with the median at about 16. The total mass of the asteroid belt is estimated to be just 3% of the mass of the Moon; the mass of the Kuiper Belt and Scattered Disk is over 100 times as large. The four largest objects, Ceres, Vesta, Pallas, and Hygiea, account for perhaps 62% of the belt's total mass, with 39% accounted for by Ceres alone. Trojans Trojans are populations that share an orbit with a larger planet or moon, but do not collide with it because they orbit in one of the two Lagrangian points of stability, L4 and L5, which lie 60° ahead of and behind the larger body. In the Solar System, most known trojans share the orbit of Jupiter. They are divided into the Greek camp at L4 (ahead of Jupiter) and the Trojan camp at L5 (trailing Jupiter). 
More than a million Jupiter trojans larger than one kilometer are thought to exist, of which more than 7,000 are currently catalogued. In other planetary orbits, only nine Mars trojans, 28 Neptune trojans, two Uranus trojans, and two Earth trojans have been found to date. A temporary Venus trojan is also known. Numerical orbital dynamics stability simulations indicate that Saturn and Uranus probably do not have any primordial trojans. Near-Earth asteroids Near-Earth asteroids, or NEAs, are asteroids that have orbits that pass close to that of Earth. Asteroids that actually cross Earth's orbital path are known as Earth-crossers. As of a recent count, a total of 28,772 near-Earth asteroids were known; 878 have a diameter of one kilometer or larger. A small number of NEAs are extinct comets that have lost their volatile surface materials, although having a faint or intermittent comet-like tail does not necessarily result in a classification as a near-Earth comet, making the boundaries somewhat fuzzy. The rest of the near-Earth asteroids are driven out of the asteroid belt by gravitational interactions with Jupiter. Many asteroids have natural satellites (minor-planet moons). As of a recent count, 85 NEAs were known to have at least one moon, including three known to have two moons. The asteroid 3122 Florence, one of the largest potentially hazardous asteroids, has two small moons, which were discovered by radar imaging during the asteroid's 2017 approach to Earth. Near-Earth asteroids are divided into groups based on their semi-major axis (a), perihelion distance (q), and aphelion distance (Q): The Atiras or Apoheles have orbits strictly inside Earth's orbit: an Atira asteroid's aphelion distance (Q) is smaller than Earth's perihelion distance (0.983 AU). That is, Q < 0.983 AU, which implies that the asteroid's semi-major axis is also less than 0.983 AU. The Atens have a semi-major axis of less than 1 AU and cross Earth's orbit. Mathematically, a < 1.0 AU and Q > 0.983 AU. (0.983 AU is Earth's perihelion distance.) The Apollos have a semi-major axis of more than 1 AU and cross Earth's orbit. Mathematically, a > 1.0 AU and q < 1.017 AU. (1.017 AU is Earth's aphelion distance.) The Amors have orbits strictly outside Earth's orbit: an Amor asteroid's perihelion distance (q) is greater than Earth's aphelion distance (1.017 AU). Amor asteroids are also near-Earth objects, so q < 1.3 AU. In summary, 1.017 AU < q < 1.3 AU. (This implies that the asteroid's semi-major axis (a) is also larger than 1.017 AU.) Some Amor asteroid orbits cross the orbit of Mars. Martian moons It is unclear whether the Martian moons Phobos and Deimos are captured asteroids or were formed due to an impact event on Mars. Phobos and Deimos both have much in common with carbonaceous C-type asteroids, with spectra, albedo, and density very similar to those of C- or D-type asteroids. Based on their similarity, one hypothesis is that both moons may be captured main-belt asteroids. Both moons have very circular orbits which lie almost exactly in Mars's equatorial plane, and hence a capture origin requires a mechanism for circularizing the initially highly eccentric orbit, and adjusting its inclination into the equatorial plane, most probably by a combination of atmospheric drag and tidal forces, although it is not clear whether sufficient time was available for this to occur for Deimos. Capture also requires dissipation of energy. The current Martian atmosphere is too thin to capture a Phobos-sized object by atmospheric braking. Geoffrey A. 
Landis has pointed out that the capture could have occurred if the original body was a binary asteroid that separated under tidal forces. Phobos could be a second-generation Solar System object that coalesced in orbit after Mars formed, rather than forming concurrently out of the same birth cloud as Mars. Another hypothesis is that Mars was once surrounded by many Phobos- and Deimos-sized bodies, perhaps ejected into orbit around it by a collision with a large planetesimal. The high porosity of the interior of Phobos (based on the density of 1.88 g/cm3, voids are estimated to comprise 25 to 35 percent of Phobos's volume) is inconsistent with an asteroidal origin. Observations of Phobos in the thermal infrared suggest a composition containing mainly phyllosilicates, which are well known from the surface of Mars. The spectra are distinct from those of all classes of chondrite meteorites, again pointing away from an asteroidal origin. Both sets of findings support an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's moon. Characteristics Size distribution Asteroids vary greatly in size, from almost 1,000 km for the largest down to rocks just 1 meter across, below which an object is classified as a meteoroid. The three largest are very much like miniature planets: they are roughly spherical, have at least partly differentiated interiors, and are thought to be surviving protoplanets. The vast majority, however, are much smaller and are irregularly shaped; they are thought to be either battered planetesimals or fragments of larger bodies. The dwarf planet Ceres is by far the largest asteroid, with a diameter of about 940 km. The next largest are 4 Vesta and 2 Pallas, both with diameters of just over 500 km. Vesta is the brightest of the four main-belt asteroids that can, on occasion, be visible to the naked eye. On some rare occasions, a near-Earth asteroid may briefly become visible without technical aid; see 99942 Apophis. The mass of all the objects of the asteroid belt, lying between the orbits of Mars and Jupiter, is estimated to be about 3.25% of the mass of the Moon. Of this, Ceres comprises about 40% of the total. Adding in the next three most massive objects, Vesta (11%), Pallas (8.5%), and Hygiea (3–4%), brings this figure up to a bit over 60%, whereas the next seven most-massive asteroids bring the total up to 70%. The number of asteroids increases rapidly as their size and individual masses decrease. Although the size distribution generally follows a power law, there are 'bumps' at certain diameters where more asteroids than expected from such a curve are found. Most asteroids larger than approximately 120 km in diameter are primordial (surviving from the accretion epoch), whereas most smaller asteroids are products of fragmentation of primordial asteroids. The primordial population of the main belt was probably 200 times what it is today. Largest asteroids The three largest objects in the asteroid belt, Ceres, Vesta, and Pallas, are intact protoplanets that share many characteristics common to planets, and are atypical compared to the majority of irregularly shaped asteroids. The fourth-largest asteroid, Hygiea, appears nearly spherical although it may have an undifferentiated interior, like the majority of asteroids. The four largest asteroids constitute half the mass of the asteroid belt. 
Ceres is the only asteroid that appears to have a plastic shape under its own gravity and hence the only one that is a dwarf planet. Its absolute magnitude, around 3.32, is brighter than that of almost every other asteroid, and it may possess a surface layer of ice. Like the planets, Ceres is differentiated: it has a crust, a mantle and a core. No meteorites from Ceres have been found on Earth. Vesta, too, has a differentiated interior, though it formed inside the Solar System's frost line, and so is devoid of water; its composition is mainly basaltic rock with minerals such as olivine. Aside from the large crater Rheasilvia at its southern pole, Vesta has an ellipsoidal shape. Vesta is the parent body of the Vestian family and other V-type asteroids, and is the source of the HED meteorites, which constitute 5% of all meteorites on Earth. Pallas is unusual in that, like Uranus, it rotates on its side, with its axis of rotation tilted at a high angle to its orbital plane. Its composition is similar to that of Ceres: high in carbon and silicon, and perhaps partially differentiated. Pallas is the parent body of the Palladian family of asteroids. Hygiea is the largest carbonaceous asteroid and, unlike the other largest asteroids, lies relatively close to the plane of the ecliptic. It is the largest member and presumed parent body of the Hygiean family of asteroids. Because there is no sufficiently large crater on the surface to be the source of that family, as there is on Vesta, it is thought that Hygiea may have been completely disrupted in the collision that formed the Hygiean family and recoalesced after losing a bit less than 2% of its mass. Observations taken with the Very Large Telescope's SPHERE imager in 2017 and 2018 revealed that Hygiea has a nearly spherical shape, which is consistent with its being in hydrostatic equilibrium (or formerly having been in hydrostatic equilibrium), or with its having been disrupted and recoalesced. Internal differentiation of large asteroids is possibly related to their lack of natural satellites, as satellites of main belt asteroids are mostly believed to form from collisional disruption, creating a rubble pile structure. Rotation Measurements of the rotation rates of large asteroids in the asteroid belt show that there is an upper limit: very few asteroids with a diameter larger than 100 meters have a rotation period shorter than 2.2 hours. For asteroids rotating faster than approximately this rate, the inertial force at the surface is greater than the gravitational force, so any loose surface material would be flung out. However, a solid object should be able to rotate much more rapidly. This suggests that most asteroids with a diameter over 100 meters are rubble piles formed through the accumulation of debris after collisions between asteroids. Color Asteroids become darker and redder with age due to space weathering. However, evidence suggests most of the color change occurs rapidly, in the first hundred thousand years, limiting the usefulness of spectral measurement for determining the age of asteroids. Surface features Except for the "big four" (Ceres, Pallas, Vesta, and Hygiea), asteroids are likely to be broadly similar in appearance, if irregular in shape. 253 Mathilde is a rubble pile saturated with craters whose diameters are comparable to the asteroid's radius. Earth-based observations of 511 Davida, one of the largest asteroids after the big four, reveal a similarly angular profile, suggesting it is also saturated with radius-size craters. 
Medium-sized asteroids such as Mathilde and 243 Ida, which have been observed up close, also reveal a deep regolith covering the surface. Of the big four, Pallas and Hygiea are practically unknown. Vesta has compression fractures encircling a radius-size crater at its south pole but is otherwise a spheroid. The Dawn spacecraft revealed that Ceres has a heavily cratered surface, but with fewer large craters than expected. Models based on the formation of the current asteroid belt had suggested Ceres should possess 10 to 15 very large craters. The largest confirmed crater on Ceres is the Kerwan Basin. The most likely reason for this is viscous relaxation of the crust slowly flattening out larger impacts. Composition Asteroids are classified by their characteristic reflectance spectra, with the majority falling into three main groups: C-type, M-type, and S-type. These describe carbonaceous (carbon-rich), metallic, and silicaceous (stony) compositions, respectively. The physical composition of asteroids is varied and in most cases poorly understood. Ceres appears to be composed of a rocky core covered by an icy mantle; Vesta is thought to have a nickel-iron core, olivine mantle, and basaltic crust. Thought to be the largest undifferentiated asteroid, 10 Hygiea seems to have a uniformly primitive composition of carbonaceous chondrite, but it may actually be a differentiated asteroid that was globally disrupted by an impact and then reassembled. Other asteroids appear to be the remnant cores or mantles of proto-planets, high in rock and metal. Most small asteroids are believed to be piles of rubble held together loosely by gravity, although the largest are probably solid. Some asteroids have moons or are co-orbiting binaries: rubble piles, moons, binaries, and scattered asteroid families are thought to be the results of collisions that disrupted a parent asteroid, or possibly a planet. In the main asteroid belt, there appear to be two primary populations of asteroid: a dark, volatile-rich population, consisting of the C-type and P-type asteroids, with albedos less than 0.10 and relatively low densities, and a dense, volatile-poor population, consisting of the S-type and M-type asteroids, with albedos over 0.15 and densities greater than 2.7 g/cm3. Within these populations, larger asteroids are denser, presumably due to compression. There appears to be minimal macro-porosity (interstitial vacuum) in the score of asteroids with the greatest masses. Composition is calculated from three primary sources: albedo, surface spectrum, and density. The last can only be determined accurately by observing the orbits of moons the asteroid might have. So far, every asteroid with moons has turned out to be a rubble pile, a loose conglomeration of rock and metal that may be half empty space by volume. The investigated asteroids are as large as 280 km in diameter, and include 121 Hermione (268×186×183 km) and 87 Sylvia (384×262×232 km). Few asteroids are larger than 87 Sylvia, and none of them has moons. The fact that such large asteroids as Sylvia may be rubble piles, presumably due to disruptive impacts, has important consequences for the formation of the Solar System: computer simulations of collisions involving solid bodies show them destroying each other as often as merging, but colliding rubble piles are more likely to merge. This means that the cores of the planets could have formed relatively quickly. 
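The paragraph above notes that density, the last of the three quantities listed, can normally be pinned down only when an asteroid has a moon whose orbit can be tracked. The sketch below shows the underlying calculation under simple assumptions (the moon's mass is negligible and the primary is treated as a sphere); the input values describe a hypothetical asteroid chosen for illustration, not measurements reported in this article.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def primary_mass_from_moon(a_m: float, period_s: float) -> float:
    """Kepler's third law, M = 4*pi^2*a^3 / (G*T^2), assuming the moon's
    mass is negligible compared to the primary's."""
    return 4.0 * math.pi**2 * a_m**3 / (G * period_s**2)

def bulk_density(mass_kg: float, diameter_m: float) -> float:
    """Bulk density assuming a spherical body of the given diameter."""
    volume = (4.0 / 3.0) * math.pi * (diameter_m / 2.0)**3
    return mass_kg / volume

# Hypothetical 100 km asteroid with a small moon orbiting 500 km away
# once every 3.1 days (illustrative numbers only).
a = 5.0e5          # moon's orbital semi-major axis, m
T = 3.1 * 86400.0  # moon's orbital period, s
D = 100e3          # primary's estimated diameter, m

mass = primary_mass_from_moon(a, T)
rho = bulk_density(mass, D)
print(f"mass ~ {mass:.2e} kg, bulk density ~ {rho:.0f} kg/m^3")
```

With these placeholder inputs the bulk density comes out near 2,000 kg/m3 (2 g/cm3); comparing such a figure against the grain density of an assumed meteorite analogue is what underlies the porosity estimates discussed under space probe missions below.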
Water Scientists hypothesize that some of the first water brought to Earth was delivered by asteroid impacts after the collision that produced the Moon. In 2009, the presence of water ice was confirmed on the surface of 24 Themis using NASA's Infrared Telescope Facility. The surface of the asteroid appears completely covered in ice. As this ice layer is sublimating, it may be getting replenished by a reservoir of ice under the surface. Organic compounds were also detected on the surface. The presence of ice on 24 Themis makes the initial theory plausible. In October 2013, water was detected on an extrasolar body for the first time, on an asteroid orbiting the white dwarf GD 61. On 22 January 2014, European Space Agency (ESA) scientists reported the detection, for the first definitive time, of water vapor on Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." Findings have shown that solar winds can react with the oxygen in the upper layer of the asteroids and create water. It has been estimated that "every cubic metre of irradiated rock could contain up to 20 litres"; study was conducted using an atom probe tomography, numbers are given for the Itokawa S-type asteroid. Acfer 049, a meteorite discovered in Algeria in 1990, was shown in 2019 to have an ultraporous lithology (UPL): porous texture that could be formed by removal of ice that filled these pores, this suggests that UPL "represent fossils of primordial ice". Organic compounds Asteroids contain traces of amino acids and other organic compounds, and some speculate that asteroid impacts may have seeded the early Earth with the chemicals necessary to initiate life, or may have even brought life itself to Earth (an event called "panspermia"). In August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine and related organic molecules) may have been formed on asteroids and comets in outer space. In November 2019, scientists reported detecting, for the first time, sugar molecules, including ribose, in meteorites, suggesting that chemical processes on asteroids can produce some fundamentally essential bio-ingredients important to life, and supporting the notion of an RNA world prior to a DNA-based origin of life on Earth, and possibly, as well, the notion of panspermia. Classification Asteroids are commonly categorized according to two criteria: the characteristics of their orbits, and features of their reflectance spectrum. Orbital classification Many asteroids have been placed in groups and families based on their orbital characteristics. Apart from the broadest divisions, it is customary to name a group of asteroids after the first member of that group to be discovered. Groups are relatively loose dynamical associations, whereas families are tighter and result from the catastrophic break-up of a large parent asteroid sometime in the past. Families are more common and easier to identify within the main asteroid belt, but several small families have been reported among the Jupiter trojans. Main belt families were first recognized by Kiyotsugu Hirayama in 1918 and are often called Hirayama families in his honor. 
About 30–35% of the bodies in the asteroid belt belong to dynamical families, each thought to have a common origin in a past collision between asteroids. A family has also been associated with the plutoid dwarf planet . Some asteroids have unusual horseshoe orbits that are co-orbital with Earth or another planet. Examples are 3753 Cruithne and . The first instance of this type of orbital arrangement was discovered between Saturn's moons Epimetheus and Janus. Sometimes these horseshoe objects temporarily become quasi-satellites for a few decades or a few hundred years, before returning to their earlier status. Both Earth and Venus are known to have quasi-satellites. Such objects, if associated with Earth or Venus or even hypothetically Mercury, are a special class of Aten asteroids. However, such objects could be associated with the outer planets as well. Spectral classification In 1975, an asteroid taxonomic system based on color, albedo, and spectral shape was developed by Chapman, Morrison, and Zellner. These properties are thought to correspond to the composition of the asteroid's surface material. The original classification system had three categories: C-types for dark carbonaceous objects (75% of known asteroids), S-types for stony (silicaceous) objects (17% of known asteroids) and U for those that did not fit into either C or S. This classification has since been expanded to include many other asteroid types. The number of types continues to grow as more asteroids are studied. The two most widely used taxonomies now used are the Tholen classification and SMASS classification. The former was proposed in 1984 by David J. Tholen, and was based on data collected from an eight-color asteroid survey performed in the 1980s. This resulted in 14 asteroid categories. In 2002, the Small Main-Belt Asteroid Spectroscopic Survey resulted in a modified version of the Tholen taxonomy with 24 different types. Both systems have three broad categories of C, S, and X asteroids, where X consists of mostly metallic asteroids, such as the M-type. There are also several smaller classes. The proportion of known asteroids falling into the various spectral types does not necessarily reflect the proportion of all asteroids that are of that type; some types are easier to detect than others, biasing the totals. Problems Originally, spectral designations were based on inferences of an asteroid's composition. However, the correspondence between spectral class and composition is not always very good, and a variety of classifications are in use. This has led to significant confusion. Although asteroids of different spectral classifications are likely to be composed of different materials, there are no assurances that asteroids within the same taxonomic class are composed of the same (or similar) materials. Active asteroids Active asteroids are objects that have asteroid-like orbits but show comet-like visual characteristics. That is, they show comae, tails, or other visual evidence of mass-loss (like a comet), but their orbit remains within Jupiter's orbit (like an asteroid). These bodies were originally designated main-belt comets (MBCs) in 2006 by astronomers David Jewitt and Henry Hsieh, but this name implies they are necessarily icy in composition like a comet and that they only exist within the main-belt, whereas the growing population of active asteroids shows that this is not always the case. The first active asteroid discovered is 7968 Elst–Pizarro. 
It was discovered (as an asteroid) in 1979; Eric Elst and Guido Pizarro found that it had developed a tail in 1996, and it was given the cometary designation 133P/Elst-Pizarro. Another notable object is 311P/PanSTARRS: observations made by the Hubble Space Telescope revealed that it had six comet-like tails. The tails are suspected to be streams of material ejected as a result of the rubble-pile asteroid spinning fast enough to shed material from its surface. By smashing into the asteroid Dimorphos, NASA's Double Asteroid Redirection Test spacecraft made it an active asteroid. Scientists had proposed that some active asteroids are the result of impact events, but no one had ever observed the activation of an asteroid. The DART mission activated Dimorphos under precisely known and carefully observed impact conditions, enabling the detailed study of the formation of an active asteroid for the first time. Observations show that Dimorphos lost approximately 1 million kilograms after the collision. The impact produced a dust plume that temporarily brightened the Didymos system and developed a long dust tail that persisted for several months. Observation and exploration Until the age of space travel, objects in the asteroid belt could only be observed with large telescopes, their shapes and terrain remaining a mystery. The best modern ground-based telescopes and the Earth-orbiting Hubble Space Telescope can only resolve a small amount of detail on the surfaces of the largest asteroids. Limited information about the shapes and compositions of asteroids can be inferred from their light curves (variation in brightness during rotation) and their spectral properties. Sizes can be estimated by timing the lengths of star occultations (when an asteroid passes directly in front of a star). Radar imaging can yield good information about asteroid shapes and orbital and rotational parameters, especially for near-Earth asteroids. Spacecraft flybys can provide much more data than any ground or space-based observations; sample-return missions give insights into regolith composition. Ground-based observations As asteroids are rather small and faint objects, the data that can be obtained from ground-based observations (GBO) are limited. Ground-based optical telescopes can measure an asteroid's visual magnitude; when converted into an absolute magnitude, this gives a rough estimate of the asteroid's size. Light-curve measurements can also be made by GBO; when collected over a long period of time, they allow an estimate of the rotational period, the pole orientation (sometimes), and a rough estimate of the asteroid's shape. Spectral data (both visible-light and near-infrared spectroscopy) provide information about the object's composition, which is used to classify the observed asteroids. Such observations are limited as they provide information about only the thin layer on the surface (up to several micrometers). As planetologist Patrick Michel writes: Mid- to thermal-infrared observations, along with polarimetry measurements, are probably the only data that give some indication of actual physical properties. Measuring the heat flux of an asteroid at a single wavelength gives an estimate of the dimensions of the object; these measurements have lower uncertainty than measurements of the reflected sunlight in the visible-light spectral region. 
If the two measurements can be combined, both the effective diameter and the geometric albedo—the latter being a measure of the brightness at zero phase angle, that is, when illumination comes from directly behind the observer—can be derived. In addition, thermal measurements at two or more wavelengths, plus the brightness in the visible-light region, give information on the thermal properties. The thermal inertia (a measure of how fast a material heats up or cools off) of most observed asteroids is lower than the bare-rock reference value but greater than that of the lunar regolith; this observation indicates the presence of an insulating layer of granular material on their surfaces. Moreover, there seems to be a trend, perhaps related to the gravitational environment, that smaller objects (with lower gravity) have a thin regolith layer consisting of coarse grains, while larger objects have a thicker regolith layer consisting of fine grains. However, the detailed properties of this regolith layer are poorly known from remote observations, and the relation between thermal inertia and surface roughness is not straightforward, so one needs to interpret the thermal inertia with caution. Near-Earth asteroids that come into the close vicinity of the planet can be studied in more detail with radar, which provides information about the asteroid's surface (for example, it can reveal the presence of craters and boulders). Such observations were conducted by the Arecibo Observatory in Puerto Rico (305-meter dish) and the Goldstone Observatory in California (70-meter dish). Radar observations can also be used for accurate determination of the orbital and rotational dynamics of observed objects. Space-based observations Both space-based and ground-based observatories have conducted asteroid search programs; the space-based searches are expected to detect more objects because there is no atmosphere to interfere and because they can observe larger portions of the sky. NEOWISE observed more than 100,000 asteroids of the main belt, and the Spitzer Space Telescope observed more than 700 near-Earth asteroids. These observations determined rough sizes of the majority of observed objects, but provided limited detail about surface properties (such as regolith depth and composition, angle of repose, cohesion, and porosity). Asteroids have also been studied by the Hubble Space Telescope, for example by tracking colliding asteroids in the main belt, observing the break-up of an asteroid, observing an active asteroid with six comet-like tails, and observing asteroids that were chosen as targets of dedicated missions. Space probe missions According to Patrick Michel: The internal structure of asteroids is inferred only from indirect evidence: bulk densities measured by spacecraft, the orbits of natural satellites in the case of asteroid binaries, and the drift of an asteroid's orbit due to the Yarkovsky thermal effect. A spacecraft near an asteroid is perturbed enough by the asteroid's gravity to allow an estimate of the asteroid's mass. The volume is then estimated using a model of the asteroid's shape. Mass and volume allow the derivation of the bulk density, whose uncertainty is usually dominated by the errors made on the volume estimate. The internal porosity of asteroids can be inferred by comparing their bulk density with that of their assumed meteorite analogues; dark asteroids seem to be more porous (>40%) than bright ones. The nature of this porosity is unclear. 
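As a rough companion to the porosity inference just described, the sketch below compares a measured bulk density with assumed meteorite-analogue grain densities. The analogue values are typical published figures chosen only for illustration (they are not taken from this article), and the simple ratio ignores the distinction between micro-porosity and macro-porosity.

```python
def porosity(bulk_density: float, grain_density: float) -> float:
    """Fraction of the volume that is empty space, assuming the solid
    component has the grain density of the analogue meteorite."""
    return 1.0 - bulk_density / grain_density

# Illustrative analogue grain densities in g/cm^3; real values vary by meteorite class.
analogues = {
    "carbonaceous chondrite (C-type analogue)": 2.9,
    "ordinary chondrite (S-type analogue)": 3.4,
    "iron meteorite (M-type analogue)": 7.5,
}

bulk = 1.9  # example measured bulk density, g/cm^3
for name, grain in analogues.items():
    print(f"{name}: implied porosity ~ {porosity(bulk, grain):.0%}")
```

A bulk density of 1.9 g/cm3 set against a carbonaceous-chondrite analogue implies that roughly a third of the volume is void, which is the sense in which the dark asteroids mentioned above appear to be more porous than the bright ones.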
Dedicated missions The first asteroid to be photographed in close-up was 951 Gaspra in 1991, followed in 1993 by 243 Ida and its moon Dactyl, all of which were imaged by the Galileo probe en route to Jupiter. Other asteroids briefly visited by spacecraft en route to other destinations include 9969 Braille (by Deep Space 1 in 1999), 5535 Annefrank (by Stardust in 2002), 2867 Šteins and 21 Lutetia (by the Rosetta probe in 2008 and 2010, respectively), and 4179 Toutatis (by China's lunar orbiter Chang'e 2, which flew past it in 2012). The first dedicated asteroid probe was NASA's NEAR Shoemaker, which photographed 253 Mathilde in 1997, before entering into orbit around 433 Eros and finally landing on its surface in 2001. It was the first spacecraft to successfully orbit and land on an asteroid. From September to November 2005, the Japanese Hayabusa probe studied 25143 Itokawa in detail and returned samples of its surface to Earth on 13 June 2010, the first asteroid sample-return mission. In 2007, NASA launched the Dawn spacecraft, which orbited 4 Vesta for a year and observed the dwarf planet Ceres for three years. Hayabusa2, a probe launched by JAXA in 2014, orbited its target asteroid 162173 Ryugu for more than a year and took samples that were delivered to Earth in 2020. The spacecraft is now on an extended mission and is expected to arrive at a new target in 2031. NASA launched OSIRIS-REx, a sample-return mission to the asteroid 101955 Bennu, in 2016. In 2021, the probe departed the asteroid with a sample from its surface, which was delivered to Earth in September 2023. The spacecraft continues its extended mission, designated OSIRIS-APEX, to explore the near-Earth asteroid Apophis in 2029. In 2021, NASA launched the Double Asteroid Redirection Test (DART), a mission to test technology for defending Earth against potentially hazardous objects. DART deliberately crashed into Dimorphos, the minor-planet moon of the double asteroid Didymos, in September 2022 to assess the potential of a spacecraft impact to deflect an asteroid from a collision course with Earth. In October 2022, NASA declared DART a success, confirming it had shortened Dimorphos' orbital period around Didymos by about 32 minutes. NASA's Lucy, launched in 2021, is a multiple-asteroid flyby probe focused on flying by seven Jupiter trojans of varying types. Although it will not reach its first main target, 3548 Eurybates, until 2027, it has already made a flyby of the main-belt asteroid 152830 Dinkinesh and is set to fly by another main-belt asteroid, 52246 Donaldjohanson, in 2025. Planned missions NASA's Psyche, launched in October 2023, is intended to study the large metallic asteroid of the same name, and is on track to arrive there in 2029. ESA's Hera, launched in October 2024, is intended to study the results of the DART impact. It is expected to measure the size and morphology of the crater, and the momentum transmitted by the impact, in order to determine the efficiency of the deflection produced by DART. JAXA's DESTINY+ is a mission for a flyby of the Geminids meteor shower parent body 3200 Phaethon, as well as various minor bodies. Its launch is planned for 2024. CNSA's Tianwen-2 is planned to launch in 2025. If all goes as planned, it will use solar electric propulsion to explore the co-orbital near-Earth asteroid 469219 Kamoʻoalewa and the active asteroid 311P/PanSTARRS. The spacecraft is tasked with collecting samples of the regolith of Kamoʻoalewa. Asteroid mining The concept of asteroid mining was proposed in the 1970s. 
Matt Anderson defines successful asteroid mining as "the development of a mining program that is both financially self-sustaining and profitable to its investors". It has been suggested that asteroids might be used as a source of materials that may be rare or exhausted on Earth, or materials for constructing space habitats. Materials that are heavy and expensive to launch from Earth may someday be mined from asteroids and used for space manufacturing and construction. As resource depletion on Earth becomes a more pressing concern, the idea of extracting valuable elements from asteroids and returning these to Earth for profit, or using space-based resources to build solar-power satellites and space habitats, becomes more attractive. Hypothetically, water processed from ice could refuel orbiting propellant depots. From the astrobiological perspective, asteroid prospecting could provide scientific data for the search for extraterrestrial intelligence (SETI). Some astrophysicists have suggested that if advanced extraterrestrial civilizations employed asteroid mining long ago, the hallmarks of these activities might be detectable. Threats to Earth There is increasing interest in identifying asteroids whose orbits cross Earth's, and that could, given enough time, collide with Earth. The three most important groups of near-Earth asteroids are the Apollos, Amors, and Atens. The near-Earth asteroid 433 Eros had been discovered as long ago as 1898, and the 1930s brought a flurry of similar objects. In order of discovery, these were: 1221 Amor, 1862 Apollo, 2101 Adonis, and finally 69230 Hermes, which approached within 0.005 AU of Earth in 1937. Astronomers began to realize the possibility of an Earth impact. Two events in later decades increased the alarm: the increasing acceptance of the Alvarez hypothesis that an impact event resulted in the Cretaceous–Paleogene extinction, and the 1994 observation of Comet Shoemaker-Levy 9 crashing into Jupiter. The U.S. military also declassified the information that its military satellites, built to detect nuclear explosions, had detected hundreds of upper-atmosphere impacts by objects ranging from one to ten meters across. All of these considerations helped spur the launch of highly efficient surveys, consisting of charge-coupled device (CCD) cameras and computers directly connected to telescopes. It has been estimated that 89% to 96% of near-Earth asteroids one kilometer or larger in diameter have been discovered. By one recent count, the LINEAR system alone had discovered 147,132 asteroids. Among the surveys, 19,266 near-Earth asteroids have been discovered, including almost 900 larger than one kilometer in diameter. In June 2018, the National Science and Technology Council warned that the United States is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched. Asteroid deflection strategies Various collision avoidance techniques have different trade-offs with respect to metrics such as overall performance, cost, failure risks, operations, and technology readiness. There are various methods for changing the course of an asteroid/comet. 
These can be differentiated by various types of attributes such as the type of mitigation (deflection or fragmentation), energy source (kinetic, electromagnetic, gravitational, solar/thermal, or nuclear), and approach strategy (interception, rendezvous, or remote station). Strategies fall into two basic sets: fragmentation and delay. Fragmentation concentrates on rendering the impactor harmless by fragmenting it and scattering the fragments so that they miss the Earth or are small enough to burn up in the atmosphere. Delay exploits the fact that both the Earth and the impactor are in orbit. An impact occurs when both reach the same point in space at the same time, or more correctly when some point on Earth's surface intersects the impactor's orbit when the impactor arrives. Since the Earth is approximately 12,750 km in diameter and moves at approx. 30 km per second in its orbit, it travels a distance of one planetary diameter in about 425 seconds, or slightly over seven minutes. Delaying, or advancing the impactor's arrival by times of this magnitude can, depending on the exact geometry of the impact, cause it to miss the Earth. "Project Icarus" was one of the first projects designed in 1967 as a contingency plan in case of collision with 1566 Icarus. The plan relied on the new Saturn V rocket, which did not make its first flight until after the report had been completed. Six Saturn V rockets would be used, each launched at variable intervals from months to hours away from impact. Each rocket was to be fitted with a single 100-megaton nuclear warhead as well as a modified Apollo Service Module and uncrewed Apollo Command Module for guidance to the target. The warheads would be detonated 30 meters from the surface, deflecting or partially destroying the asteroid. Depending on the subsequent impacts on the course or the destruction of the asteroid, later missions would be modified or cancelled as needed. The "last-ditch" launch of the sixth rocket would be 18 hours prior to impact. Fiction Asteroids and the asteroid belt are a staple of science fiction stories. Asteroids play several potential roles in science fiction: as places human beings might colonize, resources for extracting minerals, hazards encountered by spacecraft traveling between two other points, and as a threat to life on Earth or other inhabited planets, dwarf planets, and natural satellites by potential impact.
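The timing figures quoted in the delay-strategy discussion above can be checked with a short calculation. The sketch below is illustrative only: it simply re-derives the roughly seven-minute window from the Earth diameter and orbital speed given in the text.

```python
# Sanity check of the delay-strategy arithmetic quoted above:
# Earth travels one planetary diameter in roughly 425 s (just over 7 minutes).

EARTH_DIAMETER_KM = 12_750    # approximate diameter used in the text
ORBITAL_SPEED_KM_S = 30       # approximate mean orbital speed of Earth

transit_time_s = EARTH_DIAMETER_KM / ORBITAL_SPEED_KM_S
print(f"Time to travel one Earth diameter: {transit_time_s:.0f} s "
      f"({transit_time_s / 60:.1f} minutes)")
# -> 425 s, about 7.1 minutes, matching the figure given above.
```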
Physical sciences
Astronomy
null
798
https://en.wikipedia.org/wiki/Aries%20%28constellation%29
Aries (constellation)
Aries is one of the constellations of the zodiac. It is located in the Northern celestial hemisphere between Pisces to the west and Taurus to the east. The name Aries is Latin for ram. Its old astronomical symbol is (♈︎). It is one of the 48 constellations described by the 2nd century astronomer Ptolemy, and remains one of the 88 modern constellations. It is a mid-sized constellation ranking 39th in overall size, with an area of 441 square degrees (1.1% of the celestial sphere). Aries has represented a ram since late Babylonian times. Before that, the stars of Aries formed a farmhand. Different cultures have incorporated the stars of Aries into different constellations including twin inspectors in China and a porpoise in the Marshall Islands. Aries is a relatively dim constellation, possessing only four bright stars: Hamal (Alpha Arietis, second magnitude), Sheratan (Beta Arietis, third magnitude), Mesarthim (Gamma Arietis, fourth magnitude), and 41 Arietis (also fourth magnitude). The few deep-sky objects within the constellation are quite faint and include several pairs of interacting galaxies. Several meteor showers appear to radiate from Aries, including the Daytime Arietids and the Epsilon Arietids. History and mythology Aries is now recognized as an official constellation, albeit as a specific region of the sky, by the International Astronomical Union. It was originally defined in ancient texts as a specific pattern of stars, and has remained a constellation since ancient times; it now includes the ancient pattern and the surrounding stars. In the description of the Babylonian zodiac given in the clay tablets known as the MUL.APIN, the constellation, now known as Aries, was the final station along the ecliptic. The MUL.APIN was a comprehensive table of the rising and settings of stars, which likely served as an agricultural calendar. Modern-day Aries was known as , "The Agrarian Worker" or "The Hired Man". Although likely compiled in the 12th or 11th century BC, the MUL.APIN reflects a tradition that marks the Pleiades as the vernal equinox, which was the case with some precision at the beginning of the Middle Bronze Age. The earliest identifiable reference to Aries as a distinct constellation comes from the boundary stones that date from 1350 to 1000 BC. On several boundary stones, a zodiacal ram figure is distinct from the other characters. The shift in identification from the constellation as the Agrarian Worker to the Ram likely occurred in later Babylonian tradition because of its growing association with Dumuzi the Shepherd. By the time the MUL.APIN was created—in 1000 BC—modern Aries was identified with both Dumuzi's ram and a hired labourer. The exact timing of this shift is difficult to determine due to the lack of images of Aries or other ram figures. In ancient Egyptian astronomy, Aries was associated with the god Amun-Ra, who was depicted as a man with a ram's head and represented fertility and creativity. Because it was the location of the vernal equinox, it was called the "Indicator of the Reborn Sun". During the times of the year when Aries was prominent, priests would process statues of Amon-Ra to temples, a practice that was modified by Persian astronomers centuries later. Aries acquired the title of "Lord of the Head" in Egypt, referring to its symbolic and mythological importance. Aries was not fully accepted as a constellation until classical times. 
In Hellenistic astrology, the constellation of Aries is associated with the golden ram of Greek mythology that rescued Phrixus and Helle on orders from Hermes, taking Phrixus to the land of Colchis. Phrixus and Helle were the son and daughter of King Athamas and his first wife Nephele. The king's second wife, Ino, was jealous and wished to kill his children. To accomplish this, she induced famine in Boeotia, then falsified a message from the Oracle of Delphi that said Phrixus must be sacrificed to end the famine. Athamas was about to sacrifice his son atop Mount Laphystium when Aries, sent by Nephele, arrived. Helle fell off of Aries's back in flight and drowned in the Dardanelles, also called the Hellespont in her honour. Historically, Aries has been depicted as a crouched, wingless ram with its head turned towards Taurus. Ptolemy asserted in his Almagest that Hipparchus depicted Alpha Arietis as the ram's muzzle, though Ptolemy did not include it in his constellation figure. Instead, it was listed as an "unformed star", and denoted as "the star over the head". John Flamsteed, in his Atlas Coelestis, followed Ptolemy's description by mapping it above the figure's head. Flamsteed followed the general convention of maps by depicting Aries lying down. Astrologically, Aries has been associated with the head and its humors. It was strongly associated with Mars, both the planet and the god. It was considered to govern Western Europe and Syria and to indicate a strong temper in a person. The First Point of Aries, the location of the vernal equinox, is named for the constellation. This is because the Sun crossed the celestial equator from south to north in Aries more than two millennia ago. Hipparchus defined it in 130 BC. as a point south of Gamma Arietis. Because of the precession of the equinoxes, the First Point of Aries has since moved into Pisces and will move into Aquarius by around 2600 AD. The Sun now appears in Aries from late April through mid-May, though the constellation is still associated with the beginning of spring. Medieval Muslim astronomers depicted Aries in various ways. Astronomers like al-Sufi saw the constellation as a ram, modelled on the precedent of Ptolemy. However, some Islamic celestial globes depicted Aries as a nondescript four-legged animal with what may be antlers instead of horns. Some early Bedouin observers saw a ram elsewhere in the sky; this constellation featured the Pleiades as the ram's tail. The generally accepted Arabic formation of Aries consisted of thirteen stars in a figure along with five "unformed" stars, four of which were over the animal's hindquarters and one of which was the disputed star over Aries's head. Al-Sufi's depiction differed from both other Arab astronomers' and Flamsteed's, in that his Aries was running and looking behind itself. The obsolete constellations Apes, Vespa, Lilium, and Musca Borealis all centred on the same four stars, now known as 33, 35, 39, and 41 Arietis. In 1612, Petrus Plancius introduced Apes, a constellation representing a bee. In 1624, the same stars were used by Jakob Bartsch for Vespa, representing a wasp. In 1679, Augustin Royer used these stars for his constellation Lilium, representing the fleur-de-lis. None of these constellations became widely accepted. Johann Hevelius renamed the constellation "Musca" in 1690 in his Firmamentum Sobiescianum. 
To differentiate it from Musca, the southern fly, it was later renamed Musca Borealis but it did not gain acceptance and its stars were ultimately officially reabsorbed into Aries. In 1922, the International Astronomical Union defined its recommended three-letter abbreviation, "Ari". The official boundaries of Aries were defined in 1930 by Eugène Delporte as a polygon of 12 segments. Its right ascension is between 1h 46.4m and 3h 29.4m and its declination is between 10.36° and 31.22° in the equatorial coordinate system. In non-Western astronomy In traditional Chinese astronomy, stars from Aries were used in several constellations. The brightest stars—Alpha, Beta, and Gamma Arietis—formed a constellation called 'Lou',variously translated as "bond" or "lasso" also "sickle", which was associated with the ritual sacrifice of cattle. This name was shared by the 16th lunar mansion, the location of the full moon closest to the autumnal equinox. This constellation has also been associated with harvest-time as it could represent a woman carrying a basket of food on her head. 35, 39, and 41 Arietis were part of a constellation called Wei (胃), which represented a fat abdomen and was the namesake of the 17th lunar mansion, which represented granaries. Delta and Zeta Arietis were a part of the constellation Tianyin (天陰), thought to represent the Emperor's hunting partner. Zuogeng (左更), a constellation depicting a marsh and pond inspector, was composed of Mu, Nu, Omicron, Pi, and Sigma Arietis. He was accompanied by Yeou-kang, a constellation depicting an official in charge of pasture distribution. In a similar system to the Chinese, the first lunar mansion in Hindu astronomy was called "Aswini", after the traditional names for Beta and Gamma Arietis, the Aswins. Because the Hindu new year began with the vernal equinox, the Rig Veda contains over 50 new-year's related hymns to the twins, making them some of the most prominent characters in the work. Aries itself was known as "Aja" and "Mesha". In Hebrew astronomy Aries was named "Taleh"; it signified either Simeon or Gad, and generally symbolizes the "Lamb of the World". The neighboring Syrians named the constellation "Amru", and the bordering Turks named it "Kuzi". Half a world away, in the Marshall Islands, several stars from Aries were incorporated into a constellation depicting a porpoise, along with stars from Cassiopeia, Andromeda, and Triangulum. Alpha, Beta, and Gamma Arietis formed the head of the porpoise, while stars from Andromeda formed the body and the bright stars of Cassiopeia formed the tail. Other Polynesian peoples recognized Aries as a constellation. The Marquesas islanders called it Na-pai-ka; the Māori constellation Pipiri may correspond to modern Aries as well. In indigenous Peruvian astronomy, a constellation with most of the same stars as Aries existed. It was called the "Market Moon" and the "Kneeling Terrace", as a reminder of when to hold the annual harvest festival, Ayri Huay. Features Stars Bright stars Aries has three prominent stars forming an asterism, designated Alpha, Beta, and Gamma Arietis by Johann Bayer. Alpha (Hamal) and Beta (Sheratan) are commonly used for navigation. There is also one other star above the fourth magnitude, 41 Arietis (Bharani). α Arietis, called Hamal, is the brightest star in Aries. Its traditional name is derived from the Arabic word for "lamb" or "head of the ram" (ras al-hamal), which references Aries's mythological background. 
With a spectral class of K2 and a luminosity class of III, it is an orange giant with an apparent visual magnitude of 2.00, which lies 66 light-years from Earth. Hamal has a luminosity of and its absolute magnitude is −0.1. β Arietis, also known as Sheratan, is a blue-white star with an apparent visual magnitude of 2.64. Its traditional name is derived from "sharatayn", the Arabic word for "the two signs", referring to both Beta and Gamma Arietis in their position as heralds of the vernal equinox. The two stars were known to the Bedouin as "qarna al-hamal", "horns of the ram". It is 59 light-years from Earth. It has a luminosity of and its absolute magnitude is 2.1. It is a spectroscopic binary star, one in which the companion star is only known through analysis of the spectra. The spectral class of the primary is A5. Hermann Carl Vogel determined that Sheratan was a spectroscopic binary in 1903; its orbit was determined by Hans Ludendorff in 1907. It has since been studied for its eccentric orbit. γ Arietis, with a common name of Mesarthim, is a binary star with two white-hued components, located in a rich field of magnitude 8–12 stars. Its traditional name has conflicting derivations. It may be derived from a corruption of "al-sharatan", the Arabic word meaning "pair" or a word for "fat ram". However, it may also come from the Sanskrit for "first star of Aries" or the Hebrew for "ministerial servants", both of which are unusual languages of origin for star names. Along with Beta Arietis, it was known to the Bedouin as "qarna al-hamal". The primary is of magnitude 4.59 and the secondary is of magnitude 4.68. The system is 164 light-years from Earth. The two components are separated by 7.8 arcseconds, and the system as a whole has an apparent magnitude of 3.9. The primary has a luminosity of and the secondary has a luminosity of ; the primary is an A-type star with an absolute magnitude of 0.2 and the secondary is a B9-type star with an absolute magnitude of 0.4. The angle between the two components is 1°. Mesarthim was discovered to be a double star by Robert Hooke in 1664, one of the earliest such telescopic discoveries. The primary, γ1 Arietis, is an Alpha² Canum Venaticorum variable star that has a range of 0.02 magnitudes and a period of 2.607 days. It is unusual because of its strong silicon emission lines. The constellation is home to several double stars, including Epsilon, Lambda, and Pi Arietis. ε Arietis is a binary star with two white components. The primary is of magnitude 5.2 and the secondary is of magnitude 5.5. The system is 290 light-years from Earth. Its overall magnitude is 4.63, and the primary has an absolute magnitude of 1.4. Its spectral class is A2. The two components are separated by 1.5 arcseconds. λ Arietis is a wide double star with a white-hued primary and a yellow-hued secondary. The primary is of magnitude 4.8 and the secondary is of magnitude 7.3. The primary is 129 light-years from Earth. It has an absolute magnitude of 1.7 and a spectral class of F0. The two components are separated by 36 arcseconds at an angle of 50°; the two stars are located 0.5° east of 7 Arietis. π Arietis is a close binary star with a blue-white primary and a white secondary. The primary is of magnitude 5.3 and the secondary is of magnitude 8.5. The primary is 776 light-years from Earth. The primary itself is a wide double star with a separation of 25.2 arcseconds; the tertiary has a magnitude of 10.8. The primary and secondary are separated by 3.2 arcseconds. 
Most of the other stars in Aries visible to the naked eye have magnitudes between 3 and 5. δ Ari, called Boteïn, is a star of magnitude 4.35, 170 light-years away. It has an absolute magnitude of −0.1 and a spectral class of K2. ζ Arietis is a star of magnitude 4.89, 263 light-years away. Its spectral class is A0 and its absolute magnitude is 0.0. 14 Arietis is a star of magnitude 4.98, 288 light-years away. Its spectral class is F2 and its absolute magnitude is 0.6. 39 Arietis (Lilii Borea) is a similar star of magnitude 4.51, 172 light-years away. Its spectral class is K1 and its absolute magnitude is 0.0. 35 Arietis is a dim star of magnitude 4.55, 343 light-years away. Its spectral class is B3 and its absolute magnitude is −1.7. 41 Arietis, known both as c Arietis and Nair al Butain, is a brighter star of magnitude 3.63, 165 light-years away. Its spectral class is B8 and it has a luminosity of . Its absolute magnitude is −0.2. 53 Arietis is a runaway star of magnitude 6.09, 815 light-years away. Its spectral class is B2. It was likely ejected from the Orion Nebula approximately five million years ago, possibly due to supernovae. Finally, Teegarden's Star is the closest star to Earth in Aries. It is a red dwarf of magnitude 15.14 and spectral class M6.5V. With a proper motion of 5.1 arcseconds per year, it is the 24th closest star to Earth overall. Variable stars Aries has its share of variable stars, including R and U Arietis, Mira-type variable stars, and T Arietis, a semi-regular variable star. R Arietis is a Mira variable star that ranges in magnitude from a minimum of 13.7 to a maximum of 7.4 with a period of 186.8 days. It is 4,080 light-years away. U Arietis is another Mira variable star that ranges in magnitude from a minimum of 15.2 to a maximum of 7.2 with a period of 371.1 days. T Arietis is a semiregular variable star that ranges in magnitude from a minimum of 11.3 to a maximum of 7.5 with a period of 317 days. It is 1,630 light-years away. One particularly interesting variable in Aries is SX Arietis, a rotating variable star considered to be the prototype of its class, helium variable stars. SX Arietis stars have very prominent emission lines of Helium I and Silicon III. They are normally main-sequence B0p—B9p stars, and their variations are not usually visible to the naked eye. Therefore, they are observed photometrically, usually having periods that fit in the course of one night. Similar to α2s, SX Arietis stars have periodic changes in their light and magnetic field, which correspond to the periodic rotation; they differ from the α2 Canum Venaticorum variables in their higher temperature. There are between 39 and 49 SX Arietis variable stars currently known; ten are noted as being "uncertain" in the General Catalog of Variable Stars. Deep sky objects NGC 772 is a spiral galaxy with an integrated magnitude of 10.3, located southeast of β Arietis and 15 arcminutes west of 15 Arietis. It is a relatively bright galaxy and shows obvious nebulosity and ellipticity in an amateur telescope. It is 7.2 by 4.2 arcminutes, meaning that its surface brightness, magnitude 13.6, is significantly lower than its integrated magnitude. NGC 772 is a class SA(s)b galaxy, which means that it is an unbarred spiral galaxy without a ring that possesses a somewhat prominent bulge and spiral arms that are wound somewhat tightly. The main arm, on the northwest side of the galaxy, is home to many star forming regions; this is due to previous gravitational interactions with other galaxies. 
NGC 772 has a small companion galaxy, NGC 770, that is about 113,000 light-years away from the larger galaxy. The two galaxies together are also classified as Arp 78 in the Arp peculiar galaxy catalog. NGC 772 has a diameter of 240,000 light-years and the system is 114 million light-years from Earth. Another spiral galaxy in Aries is NGC 673, a face-on class SAB(s)c galaxy. It is a weakly barred spiral galaxy with loosely wound arms. It has no ring and a faint bulge and is 2.5 by 1.9 arcminutes. It has two primary arms with fragments located farther from the core. 171,000 light-years in diameter, NGC 673 is 235 million light-years from Earth. NGC 678 and NGC 680 are a pair of galaxies in Aries that are only about 200,000 light-years apart. Part of the NGC 691 group of galaxies, both are at a distance of approximately 130 million light-years. NGC 678 is an edge-on spiral galaxy that is 4.5 by 0.8 arcminutes. NGC 680, an elliptical galaxy with an asymmetrical boundary, is the brighter of the two at magnitude 12.9; NGC 678 has a magnitude of 13.35. Both galaxies have bright cores, but NGC 678 is the larger galaxy at a diameter of 171,000 light-years; NGC 680 has a diameter of 72,000 light-years. NGC 678 is further distinguished by its prominent dust lane. NGC 691 itself is a spiral galaxy slightly inclined to our line of sight. It has multiple spiral arms and a bright core. Because it is so diffuse, it has a low surface brightness. It has a diameter of 126,000 light-years and is 124 million light-years away. NGC 877 is the brightest member of an 8-galaxy group that also includes NGC 870, NGC 871, and NGC 876, with a magnitude of 12.53. It is 2.4 by 1.8 arcminutes and is 178 million light-years away with a diameter of 124,000 light-years. Its companion is NGC 876, which is about 103,000 light-years from the core of NGC 877. They are interacting gravitationally, as they are connected by a faint stream of gas and dust. Arp 276 is a different pair of interacting galaxies in Aries, consisting of NGC 935 and IC 1801. NGC 821 is an E6 elliptical galaxy. It is unusual because it has hints of an early spiral structure, which is normally only found in lenticular and spiral galaxies. NGC 821 is 2.6 by 2.0 arcminutes and has a visual magnitude of 11.3. Its diameter is 61,000 light-years and it is 80 million light-years away. Another unusual galaxy in Aries is Segue 2, a dwarf and satellite galaxy of the Milky Way, recently discovered to be a potential relic of the epoch of reionization. Meteor showers Aries is home to several meteor showers. The Daytime Arietid meteor shower is one of the strongest meteor showers that occurs during the day, lasting from 22 May to 2 July. It is an annual shower associated with the Marsden group of comets that peaks on 7 June with a maximum zenithal hourly rate of 54 meteors. Its parent body may be the asteroid Icarus. The meteors are sometimes visible before dawn, because the radiant is 32 degrees away from the Sun. They usually appear at a rate of 1–2 per hour as "earthgrazers", meteors that last several seconds and often begin at the horizon. Because most of the Daytime Arietids are not visible to the naked eye, they are observed in the radio spectrum. This is possible because of the ionized gas they leave in their wake. Other meteor showers radiate from Aries during the day; these include the Daytime Epsilon Arietids and the Northern and Southern Daytime May Arietids. The Jodrell Bank Observatory discovered the Daytime Arietids in 1947 when James Hey and G. S. 
Stewart adapted the World War II-era radar systems for meteor observations. The Delta Arietids are another meteor shower radiating from Aries. Peaking on 9 December with a low peak rate, the shower lasts from 8 December to 14 January, with the highest rates visible from 8 to 14 December. The average Delta Arietid meteor is very slow, with an average velocity of per second. However, this shower sometimes produces bright fireballs. This meteor shower has northern and southern components, both of which are likely associated with 1990 HA, a near-Earth asteroid. The Autumn Arietids also radiate from Aries. The shower lasts from 7 September to 27 October and peaks on 9 October. Its peak rate is low. The Epsilon Arietids appear from 12 to 23 October. Other meteor showers radiating from Aries include the October Delta Arietids, Daytime Epsilon Arietids, Daytime May Arietids, Sigma Arietids, Nu Arietids, and Beta Arietids. The Sigma Arietids, a class IV meteor shower, are visible from 12 to 19 October, with a maximum zenithal hourly rate of less than two meteors per hour on 19 October. Planetary systems Aries contains several stars with extrasolar planets. HIP 14810, a G5 type star, is orbited by three giant planets (those more than ten times the mass of Earth). HD 12661, like HIP 14810, is a G-type main sequence star, slightly larger than the Sun, with two orbiting planets. One planet is 2.3 times the mass of Jupiter, and the other is 1.57 times the mass of Jupiter. HD 20367 is a G0 type star, approximately the size of the Sun, with one orbiting planet. The planet, discovered in 2002, has a mass 1.07 times that of Jupiter and orbits every 500 days. In 2019, scientists conducting the CARMENES survey at the Calar Alto Observatory announced evidence of two Earth-mass exoplanets orbiting Teegarden's star, located in Aries, within its habitable zone. The star is a small red dwarf with only around a tenth of the mass and radius of the Sun. It has a large radial velocity.
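As a rough illustration of how the quoted orbital periods relate to orbital distances, the sketch below applies Kepler's third law to HD 20367 b, assuming (as the text suggests) a host star of approximately one solar mass. The resulting semi-major axis is an estimate for illustration, not a catalogued value.

```python
# Rough Kepler's-third-law estimate for HD 20367 b (illustrative only).
# Assumption: the host star has about one solar mass, as the text describes
# it as approximately the size of the Sun.

period_days = 500                     # orbital period quoted above
period_years = period_days / 365.25
stellar_mass_solar = 1.0              # assumed roughly Sun-like

# Kepler's third law in solar units: a^3 (AU) = M (solar masses) * P^2 (years)
semi_major_axis_au = (stellar_mass_solar * period_years**2) ** (1 / 3)
print(f"Estimated semi-major axis: {semi_major_axis_au:.2f} AU")
# -> about 1.23 AU, i.e. an orbit somewhat wider than Earth's.
```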
Physical sciences
Zodiac
Astronomy
799
https://en.wikipedia.org/wiki/Aquarius%20%28constellation%29
Aquarius (constellation)
Aquarius is an equatorial constellation of the zodiac, between Capricornus and Pisces. Its name is Latin for "water-carrier" or "cup-carrier", and its old astronomical symbol is (♒︎), a representation of water. Aquarius is one of the oldest of the recognized constellations along the zodiac (the Sun's apparent path). It was one of the 48 constellations listed by the 2nd century astronomer Ptolemy, and it remains one of the 88 modern constellations. It is found in a region often called the Sea due to its profusion of constellations with watery associations such as Cetus the whale, Pisces the fish, and Eridanus the river. At apparent magnitude 2.9, Beta Aquarii is the brightest star in the constellation. History and mythology Aquarius is identified as "The Great One" in the Babylonian star catalogues and represents the god Ea himself, who is commonly depicted holding an overflowing vase. The Babylonian star-figure appears on entitlement stones and cylinder seals from the second millennium BC. It contained the winter solstice in the Early Bronze Age. In Old Babylonian astronomy, Ea was the ruler of the southernmost quarter of the Sun's path, the "Way of Ea", corresponding to the period of 45 days on either side of winter solstice. Aquarius was also associated with the destructive floods that the Babylonians regularly experienced, and thus carried negative connotations. In ancient Egyptian astronomy, Aquarius was associated with the annual flood of the Nile; the banks were said to flood when Aquarius put his jar into the river, beginning spring. In the Greek tradition, the constellation came to be represented simply as a single vase from which a stream poured down to Piscis Austrinus. The name in the Hindu zodiac is likewise kumbha "water-pitcher". In Greek mythology, Aquarius is sometimes associated with Deucalion, the son of Prometheus who built a ship with his wife Pyrrha to survive an imminent flood. They sailed for nine days before washing ashore on Mount Parnassus. Aquarius is also sometimes identified with beautiful Ganymede, a youth in Greek mythology and the son of Trojan king Tros, who was taken to Mount Olympus by Zeus to act as cup-carrier to the gods. Neighboring Aquila represents the eagle, under Zeus' command, that snatched the young boy; some versions of the myth indicate that the eagle was in fact Zeus transformed. One tradition states that he was carried off by Eos. Yet another figure associated with the water bearer is Cecrops I, a king of Athens who sacrificed water instead of wine to the gods. Depictions In the second century, Ptolemy's Almagest established the common Western depiction of Aquarius. His water jar, an asterism itself, consists of Gamma, Pi, Eta, and Zeta Aquarii; it pours water in a stream of more than 20 stars terminating with Fomalhaut, now assigned solely to Piscis Austrinus. The water bearer's head is represented by 5th magnitude 25 Aquarii while his left shoulder is Beta Aquarii; his right shoulder and forearm are represented by Alpha and Gamma Aquarii respectively. In Eastern astronomy In Chinese astronomy, the stream of water flowing from the Water Jar was depicted as the "Army of Yu-Lin" (Yu-lim-kiun or Yulinjun, Hanzi: 羽林君). The name "Yu-lin" means "feathers and forests", referring to the numerous light-footed soldiers from the northern reaches of the empire represented by these faint stars. The constellation's stars were the most numerous of any Chinese constellation, numbering 45, the majority of which were located in modern Aquarius. 
The celestial army was protected by the wall Leibizhen (垒壁阵), which counted Iota, Lambda, Phi, and Sigma Aquarii among its 12 stars. 88, 89, and 98 Aquarii represent Fou-youe, the axes used as weapons and for hostage executions. Also in Aquarius is Loui-pi-tchin, the ramparts that stretch from 29 and 27 Piscium and 33 and 30 Aquarii through Phi, Lambda, Sigma, and Iota Aquarii to Delta, Gamma, Kappa, and Epsilon Capricorni. Similarly in the Hindu calendar Aquarius is depicted as Kumbha, and Kumbha, which means a pot or a jug, stands for the zodiac sign of Aquarius. Near the border with Cetus, the axe Fuyue was represented by three stars; its position is disputed and may have instead been located in Sculptor. Tienliecheng also has a disputed position; the 13-star castle replete with ramparts may have possessed Nu and Xi Aquarii but may instead have been located south in Piscis Austrinus. The Water Jar asterism was seen to the ancient Chinese as the tomb, Fenmu. Nearby, the emperors' mausoleum Xiuliang stood, demarcated by Kappa Aquarii and three other collinear stars. Ku ("crying") and Qi ("weeping"), each composed of two stars, were located in the same region. Three of the Chinese lunar mansions shared their name with constellations. Nu, also the name for the 10th lunar mansion, was a handmaiden represented by Epsilon, Mu, 3, and 4 Aquarii. The 11th lunar mansion shared its name with the constellation Xu ("emptiness"), formed by Beta Aquarii and Alpha Equulei; it represented a bleak place associated with death and funerals. Wei, the rooftop and 12th lunar mansion, was a V-shaped constellation formed by Alpha Aquarii, Theta Pegasi, and Epsilon Pegasi; it shared its name with two other Chinese constellations, in modern-day Scorpius and Aries. Features Stars Despite both its prominent position on the zodiac and its large size, Aquarius has no particularly bright stars, its four brightest stars being less bright than (The Apparent Magnitude scale is reverse logarithmic, with increasingly bright objects having lower and lower (more negative) magnitudes.) Recent research has shown that there are several stars lying within its borders that possess planetary systems. The two brightest stars, α Aquarii and β Aquarii, are luminous yellow supergiants, of spectral types G0Ib and G2Ib respectively, that were once hot blue-white B-class main sequence stars 5 to 9 times as massive as the Sun. The two are also moving through space perpendicular to the plane of the Milky Way. β Aquarii is the brightest star in Aquarius with apparent — only slightly brighter than α Aquarii. It also has the proper name of Sadalsuud. Having cooled and swollen to around 50 times the Sun's diameter, it is around 2200 times as luminous as the Sun. It is around 6.4 times as massive as the Sun and around 56 million years old. Sadalsuud is from Earth. α Aquarii, also known as Sadalmelik, has apparent It is distant from Earth, and is around 6.5 times as massive as the Sun, and 3000 times as luminous. It is 53 million years old. γ Aquarii, also called Sadachbia, is a white main sequence star of spectral type A0V that is between 158 and 315 million years old and is around 2.5 times the Sun's mass and double its radius. Its magnitude is 3.85, and it is away, hence its luminosity is . The name Sadachbia comes from the Arabic for "lucky stars of the tents", sa'd al-akhbiya. δ Aquarii, also known as Skat or Scheat, is a blue-white spectral type A2 star with apparent magnitude 3.27 and luminosity . 
ε Aquarii, also known as Albali, is a blue-white spectral type A1 star with apparent magnitude 3.77, absolute magnitude 1.2, and a luminosity of . ζ Aquarii is a spectral type F2 double star; both stars are white. In combination, they appear to be magnitude 3.6 with luminosity . The primary has magnitude 4.53 and the secondary's magnitude is 4.31. The system's orbital period is 760 years; currently the two components are moving farther apart. θ Aquarii, sometimes called Ancha, is spectral type G8 with apparent magnitude 4.16. κ Aquarii is also called Situla. λ Aquarii, also called Hudoor or Ekchusis, is spectral type M2 with magnitude 3.74 and luminosity . ξ Aquarii, also called Bunda, is spectral type A7 with an apparent magnitude of 4.69. π Aquarii, also called Seat, is spectral type B0 with apparent magnitude 4.66. Planetary systems Twelve exoplanet systems have been found in Aquarius as of 2013. Gliese 876, one of the nearest stars to Earth at a distance of 15 light-years, was the first red dwarf star to be found to possess a planetary system. It is orbited by four planets, including one terrestrial planet 6.6 times the mass of Earth. The planets vary in orbital period from 2 days to 124 days. 91 Aquarii is an orange giant star orbited by one planet, 91 Aquarii b. The planet's mass is 2.9 times the mass of Jupiter, and its orbital period is 182 days. Gliese 849 is a red dwarf star orbited by the first known long-period Jupiter-like planet, Gliese 849 b. The planet's mass is 0.99 times that of Jupiter and its orbital period is 1,852 days. There are also less-prominent systems in Aquarius. WASP-6, a type G8 star of magnitude 12.4, is host to one exoplanet, WASP-6 b. The star is 307 parsecs from Earth and has a mass of 0.888 solar masses and a radius of 0.87 solar radii. WASP-6 b was discovered in 2008 by the transit method. It orbits its parent star every 3.36 days at a distance of 0.042 astronomical units (AU). It is 0.503 Jupiter masses but has a proportionally larger radius of 1.224 Jupiter radii. HD 206610, a K0 star located 194 parsecs from Earth, is host to one planet, HD 206610 b. The host star is larger than the Sun; more massive at 1.56 solar masses and larger at 6.1 solar radii. The planet was discovered by the radial velocity method in 2010 and has a mass of 2.2 Jupiter masses. It orbits every 610 days at a distance of 1.68 AU. Much closer to its sun is WASP-47 b, which orbits every 4.15 days only 0.052 AU from its sun, yellow dwarf (G9V) WASP-47. WASP-47 is close in size to the Sun, having a radius of 1.15 solar radii and a mass even closer at 1.08 solar masses. WASP-47 b was discovered in 2011 by the transit method, like WASP-6 b. It is slightly larger than Jupiter with a mass of 1.14 Jupiter masses and a radius of 1.15 Jupiter radii. There are several more single-planet systems in Aquarius. HD 210277, a magnitude 6.63 yellow star located 21.29 parsecs from Earth, is host to one known planet: HD 210277 b. The 1.23 Jupiter mass planet orbits at nearly the same distance as Earth orbits the Sun (1.1 AU), though its orbital period is significantly longer at around 442 days. HD 210277 b was discovered earlier than most of the other planets in Aquarius, detected by the radial velocity method in 1998. The star it orbits resembles the Sun in more than spectral class; it has a radius of 1.1 solar radii and a mass of 1.09 solar masses. 
HD 212771 b, a larger planet at 2.3 Jupiter masses, orbits host star HD 212771 at a distance of 1.22 AU. The star itself, barely below the threshold of naked-eye visibility at magnitude 7.6, is a G8IV (yellow subgiant) star located 131 parsecs from Earth. Though it has a similar mass to the Sun (1.15 solar masses), it is significantly less dense, with a radius of 5 solar radii. Its lone planet was discovered in 2010 by the radial velocity method, like several other exoplanets in the constellation. As of 2013, there were only two known multiple-planet systems within the bounds of Aquarius: the Gliese 876 and HD 215152 systems. The former is quite prominent; the latter has only two planets and has a host star farther away at 21.5 parsecs. The HD 215152 system consists of the planets HD 215152 b and HD 215152 c orbiting their K0-type, magnitude 8.13 sun. Both discovered in 2011 by the radial velocity method, the two tiny planets orbit very close to their host star. HD 215152 c is the larger at 0.0097 Jupiter masses (still significantly larger than the Earth, which weighs in at 0.00315 Jupiter masses); its smaller sibling is barely smaller at 0.0087 Jupiter masses. The error in the mass measurements (0.0032 and respectively) is large enough to make this discrepancy statistically insignificant. HD 215152 c also orbits further from the star than HD 215152 b, 0.0852 AU compared to 0.0652. On 23 February 2017, NASA announced that ultracool dwarf star TRAPPIST-1 in Aquarius has seven Earth-like rocky planets. Of these, as many as four may lie within the system's habitable zone, and may have liquid water on their surfaces. The discovery of the TRAPPIST-1 system is seen by astronomers as a significant step toward finding life beyond Earth. Deep sky objects Because of its position away from the galactic plane, the majority of deep-sky objects in Aquarius are galaxies, globular clusters, and planetary nebulae. Aquarius contains three deep sky objects that are in the Messier catalog: the globular clusters Messier 2 and Messier 72, and the asterism Messier 73. While M73 was originally catalogued as a sparsely populated open cluster, modern analysis indicates the 6 main stars are not close enough together to fit this definition, reclassifying M73 as an asterism. Two well-known planetary nebulae are also located in Aquarius: the Saturn Nebula (NGC 7009), to the southeast of μ Aquarii; and the famous Helix Nebula (NGC 7293), southwest of δ Aquarii. M2, also catalogued as NGC 7089, is a rich globular cluster located approximately 37,000 light-years from Earth. At magnitude 6.5, it is viewable in small-aperture instruments, but a 100 mm aperture telescope is needed to resolve any stars. M72, also catalogued as NGC 6981, is a small 9th magnitude globular cluster located approximately 56,000 light-years from Earth. M73, also catalogued as NGC 6994, is an open cluster with highly disputed status. Aquarius is also home to several planetary nebulae. NGC 7009, also known as the Saturn Nebula, is an 8th magnitude planetary nebula located 3,000 light-years from Earth. It was given its moniker by the 19th century astronomer Lord Rosse for its resemblance to the planet Saturn in a telescope; it has faint protrusions on either side that resemble Saturn's rings. It appears blue-green in a telescope and has a central star of magnitude 11.3. Compared to the Helix Nebula, another planetary nebula in Aquarius, it is quite small. 
NGC 7293, also known as the Helix Nebula, is the closest planetary nebula to Earth at a distance of 650 light-years. It covers 0.25 square degrees, making it also the largest planetary nebula as seen from Earth. However, because it is so large, it is only viewable as a very faint object, though it has a fairly high integrated magnitude of 6.0. One of the visible galaxies in Aquarius is NGC 7727, of particular interest for amateur astronomers who wish to discover or observe supernovae. A spiral galaxy (type S), it has an integrated magnitude of 10.7 and is 3 by 3 arcseconds. NGC 7252 is a tangle of stars resulting from the collision of two large galaxies and is known as the Atoms-for-Peace galaxy because of its resemblance to a cartoon atom. Meteor showers There are three major meteor showers with radiants in Aquarius: the Eta Aquariids, the Delta Aquariids, and the Iota Aquariids. The Eta Aquariids are the strongest meteor shower radiating from Aquarius, peaking between 5 and 6 May with a rate of approximately 35 meteors per hour. Originally discovered by Chinese astronomers in AD 401, Eta Aquariids can be seen coming from the Water Jar beginning on 21 April and as late as 12 May. The parent body of the shower is Halley's Comet, a periodic comet. Fireballs are common shortly after the peak, approximately between 9 May and 11 May. The normal meteors appear to have yellow trails. The Delta Aquariids are a double-radiant meteor shower that peaks first on 29 July and second on 6 August. The first radiant is located in the south of the constellation, while the second radiant is located in the northern circlet of Pisces asterism. The southern radiant's peak rate is about 20 meteors per hour, while the northern radiant's peak rate is about 10 meteors per hour. The Iota Aquariids are a fairly weak meteor shower that peaks on 6 August, with a rate of approximately 8 meteors per hour. Astrology The Sun appears in the constellation Aquarius from 16 February to 12 March. In tropical astrology, the Sun is considered to be in the sign Aquarius from 20 January to 19 February, and in sidereal astrology, from 15 February to 14 March. Aquarius is also associated with the Age of Aquarius, a concept popular in 1960s counterculture and medieval alchemy. The date of the start of the Age of Aquarius is a topic of much debate.
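Since apparent and integrated magnitudes appear throughout this article, a short worked example may help: the magnitude scale is logarithmic, with a difference of 5 magnitudes corresponding to a factor of 100 in brightness. The sketch below compares the integrated magnitudes quoted above for the Helix Nebula (6.0) and NGC 7727 (10.7); it is a generic illustration of the standard formula, not a statement about surface brightness or what is visible in a telescope.

```python
import math

def brightness_ratio(m_bright: float, m_faint: float) -> float:
    """Flux ratio implied by two apparent (or integrated) magnitudes.

    A difference of 5 magnitudes corresponds to a factor of 100 in flux,
    so the ratio is 100 ** (delta_m / 5), i.e. 10 ** (0.4 * delta_m).
    """
    return 10 ** (0.4 * (m_faint - m_bright))

# Integrated magnitudes quoted in the text:
helix_nebula = 6.0   # NGC 7293
ngc_7727 = 10.7

print(f"The Helix Nebula is about {brightness_ratio(helix_nebula, ngc_7727):.0f} "
      "times brighter in total flux than NGC 7727.")
# -> roughly 76 times brighter.
```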
Physical sciences
Zodiac
Astronomy
809
https://en.wikipedia.org/wiki/Anaconda
Anaconda
Anacondas or water boas are a group of large boas of the genus Eunectes. They are a semiaquatic group of snakes found in tropical South America. Three to five extant and one extinct species are currently recognized, including one of the largest snakes in the world, E. murinus, the green anaconda. Description Although the name applies to a group of snakes, it is often used to refer only to one species, in particular, the common or green anaconda (Eunectes murinus), which is the largest snake in the world by weight, and the second longest after the reticulated python. Origin The recent fossil record of Eunectes is relatively sparse compared to other vertebrates and other genera of snakes. The fossil record of this group is affected by an artifact called the Pull of the Recent. Fossils of recent ancestors are not known, so the living species 'pull' the historical range of the genus to the present. Etymology The name Eunectes is derived from the Ancient Greek for "good swimmer". The South American names anacauchoa and anacaona were suggested in an account by Peter Martyr d'Anghiera. The idea of a South American origin was questioned by Henry Walter Bates who, in his travels in South America, failed to find any similar name in use. The word anaconda is derived from the name of a snake from Ceylon (Sri Lanka) that John Ray described in Latin in 1693. Ray used a catalogue of snakes from the Leyden museum supplied by Dr. Tancred Robinson. The description of its habit was based on Andreas Cleyer, who in 1684 described a gigantic snake that crushed large animals by coiling around their bodies and crushing their bones. Henry Yule, in his 1886 work Hobson-Jobson, notes that the word became more popular due to a piece of fiction published in 1768 in the Scots Magazine by a certain R. Edwin. Edwin described a 'tiger' being crushed to death by an anaconda, even though there were never any tigers in Sri Lanka. Yule and Frank Wall noted that the snake was a python and suggested a Tamil origin meaning elephant killer. A Sinhalese origin was also suggested by Donald Ferguson, who pointed out that a Sinhalese word (combining elements meaning lightning/large and stem/trunk) was used in Sri Lanka for the small whip snake (Ahaetulla pulverulenta) and somehow got misapplied to the python before myths were created. The name commonly used for the anaconda in Brazil is sucuri, sucuriju or sucuriuba. Distribution and habitat Anacondas are found in tropical South America, from Ecuador, Brazil, Colombia, and Venezuela south to Argentina. Feeding All five species are aquatic snakes that prey on other aquatic animals, including fish, river fowl, and caiman. Videos exist of anacondas preying on domestic animals such as goats and sometimes even jaguars that venture too close to the water. Relationship with humans While encounters between people and anacondas may be dangerous, they do not regularly hunt humans. Nevertheless, the threat from anacondas is a familiar trope in comics, movies, and adventure stories (often published in pulp magazines or adventure magazines) set in the Amazon jungle. Local communities and some European explorers have given accounts of giant anacondas, legendary snakes of much greater proportion than any confirmed specimen. Although anacondas are charismatic animals, little is known about their biology in the wild. Most of our knowledge comes from the work of Dr. Jesús A. Rivas and his team working in the Venezuelan Llanos. Species Rivas et al. revised the taxonomy of Eunectes, describing a new species of green anaconda (Eunectes akayima) and merging E. deschauenseei and E. beniensis with E. 
notaeus, which resulted in the recognition of only three species of anaconda. In a response paper, Dubois et al. questioned the results of their mtDNA analysis and the validity of Eunectes akayima. The name of the new species was considered a nomen nudum. Mating system The mating season in Eunectes varies both between and within species depending on locality, although it generally falls in the dry season. The green anaconda (E. murinus) is the most well-studied species of Eunectes in terms of its mating system, followed by the yellow anaconda (E. notaeus); unfortunately E. deschauenseei and E. beniensis are much less common, making the specific details of their mating systems less well understood. Sexual dimorphism Sexual size dimorphism in Eunectes is the opposite of that in most other vertebrates. Females are larger than males in most snakes, and green anacondas (E. murinus) have one of the most extreme size differences, with females averaging far larger than males. This size difference has several benefits for both sexes. Large size in females leads to higher fecundity and larger offspring; as a result male mate choice favours larger females. Large size is also favoured in males because larger males tend to be more successful at reproducing, both because of their size advantage in endurance rivalry and their advantage in sperm competition because larger males are able to produce more sperm. One reason that males are so much smaller in Eunectes is that large males can be confused for females, which interferes with their ability to mate when smaller males mistakenly coil them in breeding balls; as a result, there is an optimum size for males where they are large enough to successfully compete, but not large enough to risk other males trying to mate with them. Breeding balls During the mating season female anacondas release pheromones to attract males for breeding, which can result in polyandrous breeding balls; these breeding balls have been observed in E. murinus, E. notaeus, and E. deschauenseei, and likely also occur in E. beniensis. In the green anaconda (E. murinus), up to 13 males have been observed in a breeding ball, which have been recorded to last two weeks on average. In anaconda breeding balls, several males coil around one female and attempt to position themselves as close to her cloaca as possible where they use their pelvic spurs to "tickle" and encourage her to allow penetration. Since there are often many males present and only one male can mate with the female at a time, the success of a male often depends on his persistence and endurance, because physical combat is not a part of the Eunectes mating ritual, apart from firmly pushing against other males in an attempt to secure the best position on the female. Sexual cannibalism Cannibalism is relatively easy for anacondas, since females are so much larger than males, but sexual cannibalism has only been confirmed in E. murinus. Females gain the direct benefit of a post-copulatory high-protein meal when they consume their mates, along with the indirect benefit of additional resources to use for the formation of offspring; cannibalism in general (outside of the breeding season) has been confirmed in all but E. deschauenseei, although it is likely that it occurs in all Eunectes species. Asexual reproduction Although sexual reproduction is by far the most common mode of reproduction in Eunectes, E. murinus has been observed to undergo facultative parthenogenesis. 
In both cases, the females had lived in isolation from other anacondas for over eight years, and DNA analysis showed that the few fully formed offspring were genetically identical to the mothers; although this is not commonly observed, it is likely possible in all species of Eunectes and several other species of Boidae. Indigenous mythology According to the founding myth of the Huni Kuin, a man named Yube fell in love with an anaconda woman and was turned into an anaconda as well. He began to live with her in the deep world of waters. In this world, Yube discovered a hallucinogenic drink with healing powers and access to knowledge. One day, without telling his anaconda wife, Yube decided to return to the land of men and resume his old human form. The myth also explains the origin of cipó or ayahuasca — a hallucinogenic drink taken ritualistically by the Huni Kuin.
Biology and health sciences
Snakes
Animals
840
https://en.wikipedia.org/wiki/Axiom%20of%20choice
Axiom of choice
In mathematics, the axiom of choice, abbreviated AC or AoC, is an axiom of set theory equivalent to the statement that a Cartesian product of a collection of non-empty sets is non-empty. Informally put, the axiom of choice says that given any collection of sets, each containing at least one element, it is possible to construct a new set by choosing one element from each set, even if the collection is infinite. Formally, it states that for every indexed family (S_i) of nonempty sets, indexed by a set I, there exists an indexed family (x_i) of elements such that x_i ∈ S_i for every i ∈ I. The axiom of choice was formulated in 1904 by Ernst Zermelo in order to formalize his proof of the well-ordering theorem. The axiom of choice is equivalent to the statement that every partition has a transversal. In many cases, a set created by choosing elements can be made without invoking the axiom of choice, particularly if the number of sets from which to choose the elements is finite, or if a canonical rule on how to choose the elements is available — some distinguishing property that happens to hold for exactly one element in each set. An illustrative example is sets picked from the natural numbers. From such sets, one may always select the smallest number, e.g. given the sets {{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}}, the set containing each smallest element is {4, 10, 1}. In this case, "select the smallest number" is a choice function. Even if infinitely many sets are collected from the natural numbers, it will always be possible to choose the smallest element from each set to produce a set. That is, the choice function provides the set of chosen elements. But no definite choice function is known for the collection of all non-empty subsets of the real numbers. In that case, the axiom of choice must be invoked. Bertrand Russell coined an analogy: for any (even infinite) collection of pairs of shoes, one can pick out the left shoe from each pair to obtain an appropriate collection (i.e. set) of shoes; this makes it possible to define a choice function directly. For an infinite collection of pairs of socks (assumed to have no distinguishing features such as being a left sock rather than a right sock), there is no obvious way to make a function that forms a set out of selecting one sock from each pair without invoking the axiom of choice. Although originally controversial, the axiom of choice is now used without reservation by most mathematicians, and is included in the standard form of axiomatic set theory, Zermelo–Fraenkel set theory with the axiom of choice (ZFC). One motivation for this is that a number of generally accepted mathematical results, such as Tychonoff's theorem, require the axiom of choice for their proofs. Contemporary set theorists also study axioms that are not compatible with the axiom of choice, such as the axiom of determinacy. The axiom of choice is avoided in some varieties of constructive mathematics, although there are varieties of constructive mathematics in which the axiom of choice is embraced. Statement A choice function (also called selector or selection) is a function f, defined on a collection X of nonempty sets, such that for every set A in X, f(A) is an element of A. With this concept, the axiom can be stated: For any collection X of nonempty sets, there exists a choice function f defined on X. Formally, this may be expressed as follows: ∀X [∅ ∉ X ⟹ ∃f: X → ⋃X, ∀A ∈ X (f(A) ∈ A)]. Thus, the negation of the axiom may be expressed as the existence of a collection of nonempty sets which has no choice function. Formally, this may be derived making use of the logical equivalence of ¬∀X P(X) to ∃X ¬P(X). 
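For finite collections like the one in the example above, a choice function can be written down explicitly, which is exactly why the axiom of choice is not needed there. The sketch below is an illustration only, not part of the formal treatment: it implements the "select the smallest number" rule for sets of natural numbers.

```python
# Explicit choice function for a finite collection of nonempty sets of naturals:
# the canonical rule "select the smallest number" requires no axiom of choice.

def least_element_choice(collection):
    """Map each nonempty set in the collection to its smallest element."""
    return {frozenset(s): min(s) for s in collection}

X = [{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}]
f = least_element_choice(X)

print(sorted(f.values()))   # [1, 4, 10] -- the chosen elements {4, 10, 1}
# For arbitrary nonempty sets of reals no such canonical rule is available,
# which is where the axiom of choice comes in.
```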
Each choice function on a collection X of nonempty sets is an element of the Cartesian product of the sets in X. This is not the most general situation of a Cartesian product of a family of sets, where a given set can occur more than once as a factor; however, one can focus on elements of such a product that select the same element every time a given set appears as factor, and such elements correspond to an element of the Cartesian product of all distinct sets in the family. The axiom of choice asserts the existence of such elements; it is therefore equivalent to: Given any family of nonempty sets, their Cartesian product is a nonempty set. Nomenclature In this article and other discussions of the Axiom of Choice the following abbreviations are common: AC – the Axiom of Choice. More rarely, AoC is used. ZF – Zermelo–Fraenkel set theory omitting the Axiom of Choice. ZFC – Zermelo–Fraenkel set theory, extended to include the Axiom of Choice. Variants There are many other equivalent statements of the axiom of choice. These are equivalent in the sense that, in the presence of other basic axioms of set theory, they imply the axiom of choice and are implied by it. One variation avoids the use of choice functions by, in effect, replacing each choice function with its range: Given any set X, if the empty set is not an element of X and the elements of X are pairwise disjoint, then there exists a set C such that its intersection with any of the elements of X contains exactly one element. This can be formalized in first-order logic as: ∀x ( ∃o (o ∈ x ∧ ¬∃n (n ∈ o)) ∨ ∃a ∃b ∃c (a ∈ x ∧ b ∈ x ∧ c ∈ a ∧ c ∈ b ∧ ¬(a = b)) ∨ ∃c ∀e (e ∈ x → ∃a (a ∈ e ∧ a ∈ c ∧ ∀b ((b ∈ e ∧ b ∈ c) → a = b)))) Note that P ∨ Q ∨ R is logically equivalent to (¬P ∧ ¬Q) → R. In English, this first-order sentence reads: Given any set X, X contains the empty set as an element or the elements of X are not pairwise disjoint or there exists a set C such that its intersection with any of the elements of X contains exactly one element. This guarantees for any partition of a set X the existence of a subset C of X containing exactly one element from each part of the partition. Another equivalent axiom only considers collections X that are essentially powersets of other sets: For any set A, the power set of A (with the empty set removed) has a choice function. Authors who use this formulation often speak of the choice function on A, but this is a slightly different notion of choice function. Its domain is the power set of A (with the empty set removed), and so makes sense for any set A, whereas with the definition used elsewhere in this article, the domain of a choice function on a collection of sets is that collection, and so only makes sense for sets of sets. With this alternate notion of choice function, the axiom of choice can be compactly stated as Every set has a choice function. which is equivalent to For any set A there is a function f such that for any non-empty subset B of A, f(B) lies in B. The negation of the axiom can thus be expressed as: There is a set A such that for all functions f (on the set of non-empty subsets of A), there is a B such that f(B) does not lie in B. Restriction to finite sets The usual statement of the axiom of choice does not specify whether the collection of nonempty sets is finite or infinite, and thus implies that every finite collection of nonempty sets has a choice function. 
However, that particular case is a theorem of the Zermelo–Fraenkel set theory without the axiom of choice (ZF); it is easily proved by the principle of finite induction. In the even simpler case of a collection of one set, a choice function just corresponds to an element, so this instance of the axiom of choice says that every nonempty set has an element; this holds trivially. The axiom of choice can be seen as asserting the generalization of this property, already evident for finite collections, to arbitrary collections. Usage Until the late 19th century, the axiom of choice was often used implicitly, although it had not yet been formally stated. For example, after having established that the set X contains only non-empty sets, a mathematician might have said "let F(s) be one of the members of s for all s in X" to define a function F. In general, it is impossible to prove that F exists without the axiom of choice, but this seems to have gone unnoticed until Zermelo. Examples The nature of the individual nonempty sets in the collection may make it possible to avoid the axiom of choice even for certain infinite collections. For example, suppose that each member of the collection X is a nonempty subset of the natural numbers. Every such subset has a smallest element, so to specify our choice function we can simply say that it maps each set to the least element of that set. This gives us a definite choice of an element from each set, and makes it unnecessary to add the axiom of choice to our axioms of set theory. The difficulty appears when there is no natural choice of elements from each set. If we cannot make explicit choices, how do we know that our selection forms a legitimate set (as defined by the other ZF axioms of set theory)? For example, suppose that X is the set of all non-empty subsets of the real numbers. First we might try to proceed as if X were finite. If we try to choose an element from each set, then, because X is infinite, our choice procedure will never come to an end, and consequently we shall never be able to produce a choice function for all of X. Next we might try specifying the least element from each set. But some subsets of the real numbers do not have least elements. For example, the open interval (0,1) does not have a least element: if x is in (0,1), then so is x/2, and x/2 is always strictly smaller than x. So this attempt also fails. Additionally, consider for instance the unit circle S, and the action on S by a group G consisting of all rational rotations, that is, rotations by angles which are rational multiples of π. Here G is countable while S is uncountable. Hence S breaks up into uncountably many orbits under G. Using the axiom of choice, we could pick a single point from each orbit, obtaining an uncountable subset X of S with the property that all of its translates by G are disjoint from X. The set of those translates partitions the circle into a countable collection of pairwise disjoint sets, which are all pairwise congruent. Since X is not measurable for any rotation-invariant countably additive finite measure on S, finding an algorithm to form a set from selecting a point in each orbit requires that one add the axiom of choice to our axioms of set theory. See non-measurable set for more details. In classical arithmetic, the natural numbers are well-ordered: for every nonempty subset of the natural numbers, there is a unique least element under the natural ordering. In this way, one may specify a set from any given subset. 
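The claim above that the chosen set X of orbit representatives cannot be measurable follows from a short counting argument. The following is a sketch of that standard argument, assuming a rotation-invariant, countably additive measure μ on the circle S normalised so that μ(S) = 1.

```latex
% Sketch: the set X of orbit representatives cannot be measurable.
% The rational rotations G are countable, and the rotates gX partition S, so
S=\bigsqcup_{g\in G} gX
\quad\Longrightarrow\quad
1=\mu(S)=\sum_{g\in G}\mu(gX)=\sum_{g\in G}\mu(X).
% If \mu(X)=0 the right-hand side is 0; if \mu(X)>0 it is infinite.
% Either way the equality fails, so no such \mu can assign X a measure.
```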
One might say, "Even though the usual ordering of the real numbers does not work, it may be possible to find a different ordering of the real numbers which is a well-ordering. Then our choice function can choose the least element of every set under our unusual ordering." The problem then becomes that of constructing a well-ordering, which turns out to require the axiom of choice for its existence; every set can be well-ordered if and only if the axiom of choice holds. Criticism and acceptance A proof requiring the axiom of choice may establish the existence of an object without explicitly defining the object in the language of set theory. For example, while the axiom of choice implies that there is a well-ordering of the real numbers, there are models of set theory with the axiom of choice in which no individual well-ordering of the reals is definable. Similarly, although a subset of the real numbers that is not Lebesgue measurable can be proved to exist using the axiom of choice, it is consistent that no such set is definable. The axiom of choice asserts the existence of these intangibles (objects that are proved to exist, but which cannot be explicitly constructed), which may conflict with some philosophical principles. Because there is no canonical well-ordering of all sets, a construction that relies on a well-ordering may not produce a canonical result, even if a canonical result is desired (as is often the case in category theory). This has been used as an argument against the use of the axiom of choice. Another argument against the axiom of choice is that it implies the existence of objects that may seem counterintuitive. One example is the Banach–Tarski paradox, which says that it is possible to decompose the 3-dimensional solid unit ball into finitely many pieces and, using only rotations and translations, reassemble the pieces into two solid balls each with the same volume as the original. The pieces in this decomposition, constructed using the axiom of choice, are non-measurable sets. Moreover, paradoxical consequences of the axiom of choice for the no-signaling principle in physics have recently been pointed out. Despite these seemingly paradoxical results, most mathematicians accept the axiom of choice as a valid principle for proving new results in mathematics. But the debate is interesting enough that it is considered notable when a theorem in ZFC (ZF plus AC) is logically equivalent (with just the ZF axioms) to the axiom of choice, and mathematicians look for results that require the axiom of choice to be false, though this type of deduction is less common than the type that requires the axiom of choice to be true. Theorems of ZF hold true in any model of that theory, regardless of the truth or falsity of the axiom of choice in that particular model. The implications of choice below, including weaker versions of the axiom itself, are listed because they are not theorems of ZF. The Banach–Tarski paradox, for example, is neither provable nor disprovable from ZF alone: it is impossible to construct the required decomposition of the unit ball in ZF, but also impossible to prove there is no such decomposition. Such statements can be rephrased as conditional statements—for example, "If AC holds, then the decomposition in the Banach–Tarski paradox exists." Such conditional statements are provable in ZF when the original statements are provable from ZF and the axiom of choice. 
In constructive mathematics As discussed above, in the classical theory of ZFC, the axiom of choice enables nonconstructive proofs in which the existence of a type of object is proved without an explicit instance being constructed. In fact, in set theory and topos theory, Diaconescu's theorem shows that the axiom of choice implies the law of excluded middle (a sketch of the argument is given at the end of this passage). The principle is thus not available in constructive set theory, where non-classical logic is employed. The situation is different when the principle is formulated in Martin-Löf type theory. There, and in higher-order Heyting arithmetic, the appropriate statement of the axiom of choice is (depending on the approach) included as an axiom or provable as a theorem. A cause for this difference is that the axiom of choice in type theory does not have the extensionality properties that the axiom of choice in constructive set theory does. The type-theoretical context is discussed further below. Different choice principles have been thoroughly studied in constructive contexts, and their status varies between different schools and varieties of constructive mathematics. Some results in constructive set theory use the axiom of countable choice or the axiom of dependent choice, which do not imply the law of the excluded middle. Errett Bishop, who is notable for developing a framework for constructive analysis, argued that an axiom of choice was constructively acceptable, since in his view a choice is already implied by the very meaning of existence. Although the axiom of countable choice in particular is commonly used in constructive mathematics, its use has also been questioned. Independence It has been known since as early as 1922 that the axiom of choice may fail in a variant of ZF with urelements, through the technique of permutation models introduced by Abraham Fraenkel and developed further by Andrzej Mostowski. The basic technique can be illustrated as follows: Let xn and yn be distinct urelements for n = 1, 2, 3, ..., and build a model where each set is symmetric under the interchange xn ↔ yn for all but a finite number of n. Then the set X = {{x1, y1}, {x2, y2}, {x3, y3}, ...} can be in the model, but sets such as {x1, x2, x3, ...} cannot, and thus X cannot have a choice function. In 1938, Kurt Gödel showed that the negation of the axiom of choice is not a theorem of ZF by constructing an inner model (the constructible universe) that satisfies ZFC, thus showing that ZFC is consistent if ZF itself is consistent. In 1963, Paul Cohen employed the technique of forcing, developed for this purpose, to show that, assuming ZF is consistent, the axiom of choice itself is not a theorem of ZF. He did this by constructing a much more complex model that satisfies ZF¬C (ZF with the negation of AC added as axiom) and thus showing that ZF¬C is consistent. Cohen's model is a symmetric model, which is similar to permutation models, but uses "generic" subsets of the natural numbers (justified by forcing) in place of urelements. Together these results establish that the axiom of choice is logically independent of ZF. The assumption that ZF is consistent is harmless because adding another axiom to an already inconsistent system cannot make the situation worse. Because of independence, the decision whether to use the axiom of choice (or its negation) in a proof cannot be made by appeal to other axioms of set theory. It must be made on other grounds. One argument in favor of using the axiom of choice is that it is convenient because it allows one to prove some simplifying propositions that otherwise could not be proved. 
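A sketch of the Diaconescu argument mentioned at the start of this passage, showing how a choice function decides an arbitrary proposition P. This is an informal outline of the standard argument, not a formal proof.

```latex
% Informal outline of Diaconescu's argument (choice implies excluded middle).
% Let P be any proposition and consider two inhabited subsets of \{0,1\}:
U=\{x\in\{0,1\} : x=0 \lor P\},\qquad V=\{x\in\{0,1\} : x=1 \lor P\}.
% A choice function f on \{U,V\} satisfies f(U)\in U and f(V)\in V.
% If f(U)=1 or f(V)=0, then by the definitions of U and V, P holds.
% Otherwise f(U)=0\neq 1=f(V); but if P held, then U=V=\{0,1\}, and
% extensionality would force f(U)=f(V). Hence \lnot P. In all cases, P\lor\lnot P.
```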
Many theorems provable using choice are of an elegant general character: the cardinalities of any two sets are comparable, every nontrivial ring with unity has a maximal ideal, every vector space has a basis, every connected graph has a spanning tree, and every product of compact spaces is compact, among many others. Frequently, the axiom of choice allows generalizing a theorem to "larger" objects. For example, it is provable without the axiom of choice that every vector space of finite dimension has a basis, but the generalization to all vector spaces requires the axiom of choice. Likewise, a finite product of compact spaces can be proven to be compact without the axiom of choice, but the generalization to infinite products (Tychonoff's theorem) requires the axiom of choice. The proof of the independence result also shows that a wide class of mathematical statements, including all statements that can be phrased in the language of Peano arithmetic, are provable in ZF if and only if they are provable in ZFC. Statements in this class include the statement that P = NP, the Riemann hypothesis, and many other unsolved mathematical problems. When attempting to solve problems in this class, it makes no difference whether ZF or ZFC is employed if the only question is the existence of a proof. It is possible, however, that there is a shorter proof of a theorem from ZFC than from ZF. The axiom of choice is not the only significant statement that is independent of ZF. For example, the generalized continuum hypothesis (GCH) is not only independent of ZF, but also independent of ZFC. However, ZF plus GCH implies AC, making GCH a strictly stronger claim than AC, even though they are both independent of ZF. Stronger axioms The axiom of constructibility and the generalized continuum hypothesis each imply the axiom of choice and so are strictly stronger than it. In class theories such as Von Neumann–Bernays–Gödel set theory and Morse–Kelley set theory, there is an axiom called the axiom of global choice that is stronger than the axiom of choice for sets because it also applies to proper classes. The axiom of global choice follows from the axiom of limitation of size. Tarski's axiom, which is used in Tarski–Grothendieck set theory and states (in the vernacular) that every set belongs to Grothendieck universe, is stronger than the axiom of choice. Equivalents There are important statements that, assuming the axioms of ZF but neither AC nor ¬AC, are equivalent to the axiom of choice. The most important among them are Zorn's lemma and the well-ordering theorem. In fact, Zermelo initially introduced the axiom of choice in order to formalize his proof of the well-ordering theorem. Set theory Tarski's theorem about choice: For every infinite set A, there is a bijective map between the sets A and A×A. Trichotomy: If two sets are given, then either they have the same cardinality, or one has a smaller cardinality than the other. Given two non-empty sets, one has a surjection to the other. Every surjective function has a right inverse. The Cartesian product of any family of nonempty sets is nonempty. In other words, every family of nonempty sets has a choice function (i.e. a function which maps each of the nonempty sets to one of its elements). König's theorem: Colloquially, the sum of a sequence of cardinals is strictly less than the product of a sequence of larger cardinals. 
(The reason for the term "colloquially" is that the sum or product of a "sequence" of cardinals cannot itself be defined without some aspect of the axiom of choice.) Well-ordering theorem: Every set can be well-ordered. Consequently, every cardinal has an initial ordinal. Zorn's lemma: Every non-empty partially ordered set in which every chain (i.e., totally ordered subset) has an upper bound contains at least one maximal element. Hausdorff maximal principle: Every partially ordered set has a maximal chain. Equivalently, in any partially ordered set, every chain can be extended to a maximal chain. Tukey's lemma: Every non-empty collection of finite character has a maximal element with respect to inclusion. Antichain principle: Every partially ordered set has a maximal antichain. Equivalently, in any partially ordered set, every antichain can be extended to a maximal antichain. The powerset of any ordinal can be well-ordered. Abstract algebra Every vector space has a basis (i.e., a linearly independent spanning subset). In other words, vector spaces are equivalent to free modules. Krull's theorem: Every unital ring (other than the trivial ring) contains a maximal ideal. Equivalently, in any nontrivial unital ring, every ideal can be extended to a maximal ideal. For every non-empty set S there is a binary operation defined on S that gives it a group structure. (A cancellative binary operation is enough, see group structure and the axiom of choice.) Every free abelian group is projective. Baer's criterion: Every divisible abelian group is injective. Every set is a projective object in the category Set of sets. Functional analysis The closed unit ball of the dual of a normed vector space over the reals has an extreme point. Point-set topology The Cartesian product of any family of connected topological spaces is connected. Tychonoff's theorem: The Cartesian product of any family of compact topological spaces is compact. In the product topology, the closure of a product of subsets is equal to the product of the closures. Mathematical logic If S is a set of sentences of first-order logic and B is a consistent subset of S, then B is included in a set that is maximal among consistent subsets of S. The special case where S is the set of all first-order sentences in a given signature is weaker, equivalent to the Boolean prime ideal theorem; see the section "Weaker forms" below. Graph theory Every connected graph has a spanning tree. Equivalently, every nonempty graph has a spanning forest. Category theory Several results in category theory invoke the axiom of choice for their proof. These results might be weaker than, equivalent to, or stronger than the axiom of choice, depending on the strength of the technical foundations. For example, if one defines categories in terms of sets, that is, as sets of objects and morphisms (usually called a small category), then there is no category of all sets, and so it is difficult for a category-theoretic formulation to apply to all sets. On the other hand, other foundational descriptions of category theory are considerably stronger, and an identical category-theoretic statement of choice may be stronger than the standard formulation, à la class theory, mentioned above. Examples of category-theoretic statements which require choice include: Every small category has a skeleton. If two small categories are weakly equivalent, then they are equivalent. 
Every continuous functor on a small-complete category which satisfies the appropriate solution set condition has a left-adjoint (the Freyd adjoint functor theorem). Weaker forms There are several weaker statements that are not equivalent to the axiom of choice but are closely related. One example is the axiom of dependent choice (DC). A still weaker example is the axiom of countable choice (ACω or CC), which states that a choice function exists for any countable set of nonempty sets. These axioms are sufficient for many proofs in elementary mathematical analysis, and are consistent with some principles, such as the Lebesgue measurability of all sets of reals, that are disprovable from the full axiom of choice. Given an ordinal parameter α ≥ ω+2 — for every set S with rank less than α, S is well-orderable. Given an ordinal parameter α ≥ 1 — for every set S with Hartogs number less than ωα, S is well-orderable. As the ordinal parameter is increased, these approximate the full axiom of choice more and more closely. Other choice axioms weaker than axiom of choice include the Boolean prime ideal theorem and the axiom of uniformization. The former is equivalent in ZF to Tarski's 1930 ultrafilter lemma: every filter is a subset of some ultrafilter. Results requiring AC (or weaker forms) but weaker than it One of the most interesting aspects of the axiom of choice is the large number of places in mathematics where it shows up. Here are some statements that require the axiom of choice in the sense that they are not provable from ZF but are provable from ZFC (ZF plus AC). Equivalently, these statements are true in all models of ZFC but false in some models of ZF. Set theory The ultrafilter lemma (with ZF) can be used to prove the Axiom of choice for finite sets: Given and a collection of non-empty sets, their product is not empty. The union of any countable family of countable sets is countable (this requires countable choice but not the full axiom of choice). If the set A is infinite, then there exists an injection from the natural numbers N to A (see Dedekind infinite). Eight definitions of a finite set are equivalent. Every infinite game in which is a Borel subset of Baire space is determined. Every infinite cardinal κ satisfies 2×κ = κ. Measure theory The Vitali theorem on the existence of non-measurable sets, which states that there exists a subset of the real numbers that is not Lebesgue measurable. There exist Lebesgue-measurable subsets of the real numbers that are not Borel sets. That is, the Borel σ-algebra on the real numbers (which is generated by all real intervals) is strictly included the Lebesgue-measure σ-algebra on the real numbers. The Hausdorff paradox. The Banach–Tarski paradox. Algebra Every field has an algebraic closure. Every field extension has a transcendence basis. Every infinite-dimensional vector space contains an infinite linearly independent subset (this requires dependent choice, but not the full axiom of choice). Stone's representation theorem for Boolean algebras needs the Boolean prime ideal theorem. The Nielsen–Schreier theorem, that every subgroup of a free group is free. The additive groups of R and C are isomorphic. Functional analysis The Hahn–Banach theorem in functional analysis, allowing the extension of linear functionals. The theorem that every Hilbert space has an orthonormal basis. The Banach–Alaoglu theorem about compactness of sets of functionals. 
The Baire category theorem about complete metric spaces, and its consequences, such as the open mapping theorem and the closed graph theorem. On every infinite-dimensional topological vector space there is a discontinuous linear map. General topology A uniform space is compact if and only if it is complete and totally bounded. Every Tychonoff space has a Stone–Čech compactification. Mathematical logic Gödel's completeness theorem for first-order logic: every consistent set of first-order sentences has a completion. That is, every consistent set of first-order sentences can be extended to a maximal consistent set. The compactness theorem: If is a set of first-order (or alternatively, zero-order) sentences such that every finite subset of has a model, then has a model. Possibly equivalent implications of AC There are several historically important set-theoretic statements implied by AC whose equivalence to AC is open. Zermelo cited the partition principle, which was formulated before AC itself, as a justification for believing AC. In 1906, Russell declared PP to be equivalent, but whether the partition principle implies AC is the oldest open problem in set theory, and the equivalences of the other statements are similarly hard old open problems. In every known model of ZF where choice fails, these statements fail too, but it is unknown whether they can hold without choice. Set theory Partition principle: if there is a surjection from A to B, there is an injection from B to A. Equivalently, every partition P of a set S is less than or equal to S in size. Converse Schröder–Bernstein theorem: if two sets have surjections to each other, they are equinumerous. Weak partition principle: if there is an injection and a surjection from A to B, then A and B are equinumerous. Equivalently, a partition of a set S cannot be strictly larger than S. If WPP holds, this already implies the existence of a non-measurable set. Each of the previous three statements is implied by the preceding one, but it is unknown if any of these implications can be reversed. There is no infinite decreasing sequence of cardinals. The equivalence was conjectured by Schoenflies in 1905. Abstract algebra Hahn embedding theorem: Every ordered abelian group G order-embeds as a subgroup of the additive group endowed with a lexicographical order, where Ω is the set of Archimedean equivalence classes of G. This equivalence was conjectured by Hahn in 1907. Stronger forms of the negation of AC If we abbreviate by BP the claim that every set of real numbers has the property of Baire, then BP is stronger than ¬AC, which asserts the nonexistence of any choice function on perhaps only a single set of nonempty sets. Strengthened negations may be compatible with weakened forms of AC. For example, ZF + DC + BP is consistent, if ZF is. It is also consistent with ZF + DC that every set of reals is Lebesgue measurable, but this consistency result, due to Robert M. Solovay, cannot be proved in ZFC itself, but requires a mild large cardinal assumption (the existence of an inaccessible cardinal). The much stronger axiom of determinacy, or AD, implies that every set of reals is Lebesgue measurable, has the property of Baire, and has the perfect set property (all three of these results are refuted by AC itself). ZF + DC + AD is consistent provided that a sufficiently strong large cardinal axiom is consistent (the existence of infinitely many Woodin cardinals). 
Quine's system of axiomatic set theory, New Foundations (NF), takes its name from the title ("New Foundations for Mathematical Logic") of the 1937 article that introduced it. In the NF axiomatic system, the axiom of choice can be disproved. Statements implying the negation of AC There are models of Zermelo-Fraenkel set theory in which the axiom of choice is false. We shall abbreviate "Zermelo-Fraenkel set theory plus the negation of the axiom of choice" by ZF¬C. For certain models of ZF¬C, it is possible to validate the negation of some standard ZFC theorems. As any model of ZF¬C is also a model of ZF, it is the case that for each of the following statements, there exists a model of ZF in which that statement is true. The negation of the weak partition principle: There is a set that can be partitioned into strictly more equivalence classes than the original set has elements, and a function whose domain is strictly smaller than its range. In fact, this is the case in all known models. There is a function f from the real numbers to the real numbers such that f is not continuous at a, but f is sequentially continuous at a, i.e., for any sequence {xn} converging to a, limn f(xn)=f(a). There is an infinite set of real numbers without a countably infinite subset. The real numbers are a countable union of countable sets. This does not imply that the real numbers are countable: As pointed out above, to show that a countable union of countable sets is itself countable requires the Axiom of countable choice. There is a field with no algebraic closure. In all models of ZF¬C there is a vector space with no basis. There is a vector space with two bases of different cardinalities. There is a free complete Boolean algebra on countably many generators. There is a set that cannot be linearly ordered. There exists a model of ZF¬C in which every set in Rn is measurable. Thus it is possible to exclude counterintuitive results like the Banach–Tarski paradox which are provable in ZFC. Furthermore, this is possible whilst assuming the Axiom of dependent choice, which is weaker than AC but sufficient to develop most of real analysis. In all models of ZF¬C, the generalized continuum hypothesis does not hold. For proofs, see . Additionally, by imposing definability conditions on sets (in the sense of descriptive set theory) one can often prove restricted versions of the axiom of choice from axioms incompatible with general choice. This appears, for example, in the Moschovakis coding lemma. Axiom of choice in type theory In type theory, a different kind of statement is known as the axiom of choice. This form begins with two types, σ and τ, and a relation R between objects of type σ and objects of type τ. The axiom of choice states that if for each x of type σ there exists a y of type τ such that R(x,y), then there is a function f from objects of type σ to objects of type τ such that R(x,f(x)) holds for all x of type σ: Unlike in set theory, the axiom of choice in type theory is typically stated as an axiom scheme, in which R varies over all formulas or over all formulas of a particular logical form.
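In symbols, the type-theoretic statement just described is commonly rendered along the following lines (one standard formulation; the exact notation varies between systems):

```latex
% One common rendering of the type-theoretic axiom of choice for types \sigma and \tau
% and a relation R between them (notation varies between systems):
\bigl(\forall x^{\sigma}\,\exists y^{\tau}\,R(x,y)\bigr)
\;\longrightarrow\;
\exists f^{\sigma\to\tau}\,\forall x^{\sigma}\,R\bigl(x,f(x)\bigr)
```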
Mathematics
Discrete mathematics
null
849
https://en.wikipedia.org/wiki/Aircraft
Aircraft
An aircraft (plural: aircraft) is a vehicle that is able to fly by gaining support from the air. It counters the force of gravity by using either static lift or the dynamic lift of an airfoil, or, in a few cases, direct downward thrust from its engines. Common examples of aircraft include airplanes, rotorcraft, helicopters, airships (including blimps), gliders, paramotors, and hot air balloons. Part 1 (Definitions and Abbreviations) of Subchapter A of Chapter I of Title 14 of the U.S. Code of Federal Regulations states that aircraft "means a device that is used or intended to be used for flight in the air." The human activity that surrounds aircraft is called aviation. The science of aviation, including designing and building aircraft, is called aeronautics. Crewed aircraft are flown by an onboard pilot, whereas unmanned aerial vehicles may be remotely controlled or self-controlled by onboard computers. Aircraft may be classified by different criteria, such as lift type, aircraft propulsion (if any), usage and others. History Flying model craft and stories of manned flight go back many centuries; however, the first manned ascent — and safe descent — in modern times took place in large hot-air balloons developed in the 18th century. Each of the two World Wars led to great technical advances. Consequently, the history of aircraft can be divided into five eras: Pioneers of flight, from the earliest experiments to 1914 First World War, 1914 to 1918 Aviation in the interwar period, 1918 to 1939 Second World War, 1939 to 1945 Postwar era, also called the Jet Age, 1945 to the present day Methods of lift Lighter-than-air Lighter-than-air aircraft or aerostats use buoyancy to float in the air in much the same way that ships float on the water. They are characterized by one or more large cells or canopies, filled with a lifting gas such as helium, hydrogen or hot air, which is less dense than the surrounding air. When the weight of the lifting gas is added to the weight of the aircraft itself, the total is the same as or less than the weight of the air that the craft displaces. Small hot-air balloons, called sky lanterns, were first invented in ancient China prior to the 3rd century BC and used primarily in cultural celebrations, and were only the second type of aircraft to fly, the first being kites, which were also first invented in ancient China over two thousand years ago (see Han Dynasty). A balloon was originally any aerostat, while the term airship was used for large, powered aircraft designs — usually fixed-wing — though none had yet been built. In 1919, Frederick Handley Page was reported as referring to "ships of the air," with smaller passenger types as "Air yachts." In the 1930s, large intercontinental flying boats were also sometimes referred to as "ships of the air" or "flying-ships". The advent of powered balloons, called dirigible balloons, and later of rigid hulls allowing a great increase in size, began to change the way these words were used. Huge powered aerostats, characterized by a rigid outer framework and separate aerodynamic skin surrounding the gas bags, were produced, the Zeppelins being the largest and most famous. There were still no fixed-wing aircraft or non-rigid balloons large enough to be called airships, so "airship" came to be synonymous with these aircraft. Then several accidents, such as the Hindenburg disaster in 1937, led to the demise of these airships. Nowadays a "balloon" is an unpowered aerostat and an "airship" is a powered one. 
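The buoyancy condition stated above can be put in rough numbers. The short Python sketch below compares the weight of displaced air with the weight of the lifting gas for a helium envelope; the densities are typical sea-level values and the envelope volume is an invented example, not a figure from this article.

```python
# Rough illustration of the lighter-than-air condition: the buoyant force equals
# the weight of displaced air minus the weight of the lifting gas, and the craft
# floats only if structure plus payload weigh no more than that margin.
RHO_AIR = 1.225      # kg/m^3, typical sea-level air density
RHO_HELIUM = 0.179   # kg/m^3, helium at the same conditions
G = 9.81             # m/s^2

def gross_lift_newtons(volume_m3, rho_gas):
    """Net upward force available to carry the craft's own weight and payload."""
    return (RHO_AIR - rho_gas) * volume_m3 * G

volume = 1000.0  # m^3, an arbitrary example envelope
lift = gross_lift_newtons(volume, RHO_HELIUM)
print(f"Gross lift: {lift/1000:.1f} kN (supports about {lift/G:.0f} kg)")
```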
A powered, steerable aerostat is called a dirigible. Sometimes this term is applied only to non-rigid balloons, and sometimes dirigible balloon is regarded as the definition of an airship (which may then be rigid or non-rigid). Non-rigid dirigibles are characterized by a moderately aerodynamic gasbag with stabilizing fins at the back. These soon became known as blimps. During World War II, this shape was widely adopted for tethered balloons; in windy weather, this both reduces the strain on the tether and stabilizes the balloon. The nickname blimp was adopted along with the shape. In modern times, any small dirigible or airship is called a blimp, though a blimp may be unpowered as well as powered. Heavier-than-air Heavier-than-air aircraft or aerodynes are denser than air and thus must find some way to obtain enough lift that can overcome the aircraft's weight. There are two ways to produce dynamic upthrust — aerodynamic lift by having air flowing past an aerofoil (such dynamic interaction of aerofoils with air is the origin of the term "aerodyne"), or powered lift in the form of reactional lift from downward engine thrust. Aerodynamic lift involving wings is the most common, and can be achieved via two methods. Fixed-wing aircraft (airplanes and gliders) achieve airflow past the wings by having the entire aircraft moving forward through the air, while rotorcraft (helicopters and autogyros) do so by having mobile, elongated wings spinning rapidly around a mast in an assembly known as the rotor. As aerofoils, there must be air flowing over the wing to create pressure difference between above and below, thus generating upward lift over the entire wetted area of the wing. A flexible wing is a wing made of fabric or thin sheet material, often stretched over a rigid frame, similar to the flight membranes on many flying and gliding animals. A kite is tethered to the ground and relies on the speed of the wind over its wings, which may be flexible or rigid, fixed, or rotary. With powered lift, the aircraft directs its engine thrust vertically downward. V/STOL aircraft, such as the Harrier jump jet and Lockheed Martin F-35B take off and land vertically using powered lift and transfer to aerodynamic lift in steady flight. A pure rocket is not usually regarded as an aerodyne because its flight does not depend on interaction with the air at all (and thus can even fly in the vacuum of outer space); however, many aerodynamic lift vehicles have been powered or assisted by rocket motors. Rocket-powered missiles that obtain aerodynamic lift at very high speed due to airflow over their bodies are a marginal case. Fixed-wing The forerunner of the fixed-wing aircraft is the kite. Whereas a fixed-wing aircraft relies on its forward speed to create airflow over the wings, a kite is tethered to the ground and relies on the wind blowing over its wings to provide lift. Kites were the first kind of aircraft to fly and were invented in China around 500 BC. Much aerodynamic research was done with kites before test aircraft, wind tunnels, and computer modelling programs became available. The first heavier-than-air craft capable of controlled free-flight were gliders. A glider designed by George Cayley carried out the first true manned, controlled flight in 1853. The first powered and controllable fixed-wing aircraft (the airplane or aeroplane) was invented by Wilbur and Orville Wright. Besides the method of propulsion (if any), fixed-wing aircraft are in general characterized by their wing configuration. 
The most important wing characteristics are: Number of wings – Monoplane, biplane, triplane, or multiplane. Wing support – Braced or cantilever, rigid or flexible. Wing planform – including aspect ratio, angle of sweep, and any variations along the span (including the important class of delta wings). Location of the horizontal stabilizer, if any. Dihedral angle – positive, zero, or negative (anhedral). A variable geometry aircraft can change its wing configuration during flight. A flying wing has no fuselage, though it may have small blisters or pods. The opposite of this is a lifting body, which has no wings, though it may have small stabilizing and control surfaces. Wing-in-ground-effect vehicles are generally not considered aircraft. They "fly" efficiently close to the surface of the ground or water, like conventional aircraft during takeoff. An example is the Russian ekranoplan nicknamed the "Caspian Sea Monster". Man-powered aircraft also rely on ground effect to remain airborne with minimal pilot power, but this is only because they are so underpowered—in fact, the airframe is capable of flying higher. Rotorcraft Rotorcraft, or rotary-wing aircraft, use a spinning rotor with aerofoil cross-section blades (a rotary wing) to provide lift. Types include helicopters, autogyros, and various hybrids such as gyrodynes and compound rotorcraft. Helicopters have a rotor turned by an engine-driven shaft. The rotor pushes air downward to create lift. By tilting the rotor forward, the downward flow is tilted backward, producing thrust for forward flight. Some helicopters have more than one rotor and a few have rotors turned by gas jets at the tips. Some have a tail rotor to counteract the rotation of the main rotor, and to aid directional control. Autogyros have unpowered rotors, with a separate power plant to provide thrust. The rotor is tilted backward. As the autogyro moves forward, air blows upward across the rotor, making it spin. This spinning increases the speed of airflow over the rotor, to provide lift. Rotor kites are unpowered autogyros, which are towed to give them forward speed or tethered to a static anchor in high-wind for kited flight. Compound rotorcraft have wings that provide some or all of the lift in forward flight. They are nowadays classified as powered lift types and not as rotorcraft. Tiltrotor aircraft (such as the Bell Boeing V-22 Osprey), tiltwing, tail-sitter, and coleopter aircraft have their rotors/propellers horizontal for vertical flight and vertical for forward flight. Other methods of lift A lifting body is an aircraft body shaped to produce lift. If there are any wings, they are too small to provide significant lift and are used only for stability and control. Lifting bodies are not efficient: they suffer from high drag, and must also travel at high speed to generate enough lift to fly. Many of the research prototypes, such as the Martin Marietta X-24, which led up to the Space Shuttle, were lifting bodies, though the Space Shuttle is not, and some supersonic missiles obtain lift from the airflow over a tubular body. Powered lift types rely on engine-derived lift for vertical takeoff and landing (VTOL). Most types transition to fixed-wing lift for horizontal flight. Classes of powered lift types include VTOL jet aircraft (such as the Harrier jump jet) and tiltrotors, such as the Bell Boeing V-22 Osprey, among others. 
A few experimental designs rely entirely on engine thrust to provide lift throughout the whole flight, including personal fan-lift hover platforms and jetpacks. VTOL research designs include the Rolls-Royce Thrust Measuring Rig. Some rotor wings employ horizontal-axis wings, in which airflow across a spinning rotor generates lift. The Flettner airplane uses a rotating cylinder, obtaining lift from the Magnus effect. The FanWing uses a cross-flow fan, while the mechanically more complex cyclogyro comprises multiple wings which rotate together around a central axis. The ornithopter obtains thrust by flapping its wings. Size and speed extremes Size The smallest aircraft are toys/recreational items, and nano aircraft. The largest aircraft by dimensions and volume (as of 2016) is the long British Airlander 10, a hybrid blimp, with helicopter and fixed-wing features, and reportedly capable of speeds up to , and an airborne endurance of two weeks with a payload of up to . The largest aircraft by weight and largest regular fixed-wing aircraft ever built, , was the Antonov An-225 Mriya. That Soviet-built (Ukrainian SSR) six-engine transport of the 1980s was long, with an wingspan. It holds the world payload record, after transporting of goods, and has flown loads commercially. With a maximum loaded weight of , it was also the heaviest aircraft built to date. It could cruise at . The aircraft was destroyed during the Russo-Ukrainian War. The largest military airplanes are the Ukrainian Antonov An-124 Ruslan (world's second-largest airplane, also used as a civilian transport), and American Lockheed C-5 Galaxy transport, weighing, loaded, over . The 8-engine, piston/propeller Hughes H-4 Hercules "Spruce Goose" — an American World War II wooden flying boat transport with a greater wingspan (94m/260 ft) than any current aircraft and a tail height equal to the tallest (Airbus A380-800 at 24.1m/78 ft) — flew only one short hop in the late 1940s and never flew out of ground effect. The largest civilian airplanes, apart from the above-noted An-225 and An-124, are the Airbus Beluga cargo transport derivative of the Airbus A300 jet airliner, the Boeing Dreamlifter cargo transport derivative of the Boeing 747 jet airliner/transport (the 747-200B was, at its creation in the 1960s, the heaviest aircraft ever built, with a maximum weight of over ), and the double-decker Airbus A380 "super-jumbo" jet airliner (the world's largest passenger airliner). Speeds The fastest fixed-wing aircraft and fastest glider, is the Space Shuttle, which re-entered the atmosphere at nearly Mach 25 or The fastest recorded powered aircraft flight and fastest recorded aircraft flight of an air-breathing powered aircraft was of the NASA X-43A Pegasus, a scramjet-powered, hypersonic, lifting body experimental research aircraft, at Mach 9.68 or on 16 November 2004. Prior to the X-43A, the fastest recorded powered airplane flight, and still the record for the fastest manned powered airplane, was the North American X-15, rocket-powered airplane at Mach 6.7 or 7,274 km/h (4,520 mph) on 3 October 1967. The fastest manned, air-breathing powered airplane is the Lockheed SR-71 Blackbird, a U.S. reconnaissance jet fixed-wing aircraft, having reached on 28 July 1976. Propulsion Unpowered aircraft Gliders are heavier-than-air aircraft that do not employ propulsion once airborne. 
Take-off may be by launching forward and downward from a high location, or by pulling into the air on a tow-line, either by a ground-based winch or vehicle, or by a powered "tug" aircraft. For a glider to maintain its forward air speed and lift, it must descend in relation to the air (but not necessarily in relation to the ground). Many gliders can "soar", i.e., gain height from updrafts such as thermal currents. The first practical, controllable example was designed and built by the British scientist and pioneer George Cayley, whom many recognise as the first aeronautical engineer. Common examples of gliders are sailplanes, hang gliders and paragliders. Balloons drift with the wind, though normally the pilot can control the altitude, either by heating the air or by releasing ballast, giving some directional control (since the wind direction changes with altitude). A wing-shaped hybrid balloon can glide directionally when rising or falling; but a spherically shaped balloon does not have such directional control. Kites are aircraft that are tethered to the ground or other object (fixed or mobile) that maintains tension in the tether or kite line; they rely on virtual or real wind blowing over and under them to generate lift and drag. Kytoons are balloon-kite hybrids that are shaped and tethered to obtain kiting deflections, and can be lighter-than-air, neutrally buoyant, or heavier-than-air. Powered aircraft Powered aircraft have one or more onboard sources of mechanical power, typically aircraft engines although rubber and manpower have also been used. Most aircraft engines are either lightweight reciprocating engines or gas turbines. Engine fuel is stored in tanks, usually in the wings but larger aircraft also have additional fuel tanks in the fuselage. Propeller aircraft Propeller aircraft use one or more propellers (airscrews) to create thrust in a forward direction. The propeller is usually mounted in front of the power source in tractor configuration but can be mounted behind in pusher configuration. Variations of propeller layout include contra-rotating propellers and ducted fans. Many kinds of power plant have been used to drive propellers. Early airships used man power or steam engines. The more practical internal combustion piston engine was used for virtually all fixed-wing aircraft until World War II and is still used in many smaller aircraft. Some types use turbine engines to drive a propeller in the form of a turboprop or propfan. Human-powered flight has been achieved, but has not become a practical means of transport. Unmanned aircraft and models have also used power sources such as electric motors and rubber bands. Jet aircraft Jet aircraft use airbreathing jet engines, which take in air, burn fuel with it in a combustion chamber, and accelerate the exhaust rearwards to provide thrust. Different jet engine configurations include the turbojet and turbofan, sometimes with the addition of an afterburner. Those with no rotating turbomachinery include the pulsejet and ramjet. These mechanically simple engines produce no thrust when stationary, so the aircraft must be launched to flying speed using a catapult, like the V-1 flying bomb, or a rocket, for example. Other engine types include the motorjet and the dual-cycle Pratt & Whitney J58. Compared to engines using propellers, jet engines can provide much higher thrust, higher speeds and, above about , greater efficiency. They are also much more fuel-efficient than rockets. 
As a consequence nearly all large, high-speed or high-altitude aircraft use jet engines. Rotorcraft Some rotorcraft, such as helicopters, have a powered rotary wing or rotor, where the rotor disc can be angled slightly forward so that a proportion of its lift is directed forwards. The rotor may, like a propeller, be powered by a variety of methods such as a piston engine or turbine. Experiments have also used jet nozzles at the rotor blade tips. Other types of powered aircraft Rocket-powered aircraft have occasionally been experimented with, and the Messerschmitt Me 163 Komet fighter even saw action in the Second World War. Since then, they have been restricted to research aircraft, such as the North American X-15, which traveled up into space where air-breathing engines cannot work (rockets carry their own oxidant). Rockets have more often been used as a supplement to the main power plant, typically for the rocket-assisted take off of heavily loaded aircraft, but also to provide high-speed dash capability in some hybrid designs such as the Saunders-Roe SR.53. The ornithopter obtains thrust by flapping its wings. Design and construction Aircraft are designed according to many factors such as customer and manufacturer demand, safety protocols and physical and economic constraints. For many types of aircraft the design process is regulated by national airworthiness authorities. The key parts of an aircraft are generally divided into three categories: The structure ("airframe") comprises the main load-bearing elements and associated equipment, as well as flight controls. The propulsion system ("powerplant") (if it is powered) comprises the power source and associated equipment, as described above. The avionics comprise the electrical and electronic control, navigation and communication systems. Structure The approach to structural design varies widely between different types of aircraft. Some, such as paragliders, comprise only flexible materials that act in tension and rely on aerodynamic pressure to hold their shape. A balloon similarly relies on internal gas pressure, but may have a rigid basket or gondola slung below it to carry its payload. Early aircraft, including airships, often employed flexible doped aircraft fabric covering to give a reasonably smooth aeroshell stretched over a rigid frame. Later aircraft employed semi-monocoque techniques, where the skin of the aircraft is stiff enough to share much of the flight loads. In a true monocoque design there is no internal structure left. The key structural parts of an aircraft depend on what type it is. Aerostats Lighter-than-air types are characterised by one or more gasbags, typically with a supporting structure of flexible cables or a rigid framework called its hull. Other elements such as engines or a gondola may also be attached to the supporting structure. Aerodynes Heavier-than-air types are characterised by one or more wings and a central fuselage. The fuselage typically also carries a tail or empennage for stability and control, and an undercarriage for takeoff and landing. Engines may be located on the fuselage or wings. On a fixed-wing aircraft the wings are rigidly attached to the fuselage, while on a rotorcraft the wings are attached to a rotating vertical shaft. Smaller designs sometimes use flexible materials for part or all of the structure, held in place either by a rigid frame or by air pressure. The fixed parts of the structure comprise the airframe. 
Power The source of motive power for an aircraft is normally called the powerplant, and includes engine or motor, propeller or rotor, (if any), jet nozzles and thrust reversers (if any), and accessories essential to the functioning of the engine or motor (e.g.: starter, ignition system, intake system, exhaust system, fuel system, lubrication system, engine cooling system, and engine controls). Powered aircraft are typically powered by internal combustion engines (piston or turbine) burning fossil fuels—typically gasoline (avgas) or jet fuel. A very few are powered by rocket power, ramjet propulsion, or by electric motors, or by internal combustion engines of other types, or using other fuels. A very few have been powered, for short flights, by human muscle energy (e.g.: Gossamer Condor). Avionics The avionics comprise any electronic aircraft flight control systems and related equipment, including electronic cockpit instrumentation, navigation, radar, monitoring, and communications systems. Flight characteristics Flight envelope The flight envelope of an aircraft refers to its approved design capabilities in terms of airspeed, load factor and altitude. The term can also refer to other assessments of aircraft performance such as maneuverability. When an aircraft is abused, for instance by diving it at too-high a speed, it is said to be flown outside the envelope, something considered foolhardy since it has been taken beyond the design limits which have been established by the manufacturer. Going beyond the envelope may have a known outcome such as flutter or entry to a non-recoverable spin (possible reasons for the boundary). Range The range is the distance an aircraft can fly between takeoff and landing, as limited by the time it can remain airborne. For a powered aircraft the time limit is determined by the fuel load and rate of consumption. For an unpowered aircraft, the maximum flight time is limited by factors such as weather conditions and pilot endurance. Many aircraft types are restricted to daylight hours, while balloons are limited by their supply of lifting gas. The range can be seen as the average ground speed multiplied by the maximum time in the air. The Airbus A350-900ULR is among the longest range airliners. Flight dynamics Flight dynamics is the science of air vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation around three axes which pass through the vehicle's center of gravity, known as pitch, roll, and yaw. Roll is a rotation about the longitudinal axis (equivalent to the rolling or heeling of a ship) giving an up-down movement of the wing tips measured by the roll or bank angle. Pitch is a rotation about the sideways horizontal axis giving an up-down movement of the aircraft nose measured by the angle of attack. Yaw is a rotation about the vertical axis giving a side-to-side movement of the nose known as sideslip. Flight dynamics is concerned with the stability and control of an aircraft's rotation about each of these axes. Stability An aircraft that is unstable tends to diverge from its intended flight path and so is difficult to fly. A very stable aircraft tends to stay on its flight path and is difficult to maneuver. Therefore, it is important for any design to achieve the desired degree of stability. Since the widespread use of digital computers, it is increasingly common for designs to be inherently unstable and rely on computerised control systems to provide artificial stability. 
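The reliance of inherently unstable designs on computerised control, mentioned just above, can be illustrated with a toy feedback loop. The model below is entirely hypothetical (the divergence rate and gains are chosen only for illustration, and it describes no real flight control law): an unstable pitch dynamic is damped out by simple proportional-derivative feedback.

```python
# Toy sketch of "artificial stability": a hypothetical open-loop-unstable pitch
# dynamic (disturbances grow if uncorrected) stabilised by proportional-derivative
# feedback. The airframe model and the gains are illustrative, not real values.
DT = 0.01          # integration step, s
DIVERGENCE = 2.0   # 1/s^2, made-up instability of the bare airframe
KP, KD = 8.0, 3.0  # made-up feedback gains

theta, theta_dot = 0.05, 0.0   # initial pitch disturbance (rad) and pitch rate
for _ in range(500):           # simulate 5 seconds
    accel = DIVERGENCE * theta               # unstable airframe response
    accel -= KP * theta + KD * theta_dot     # corrective control moment
    theta_dot += accel * DT
    theta += theta_dot * DT

print(f"Pitch after 5 s with feedback: {theta:.4f} rad")  # decays toward zero
```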
A fixed wing is typically unstable in pitch, roll, and yaw. Pitch and yaw stabilities of conventional fixed wing designs require horizontal and vertical stabilisers, which act similarly to the feathers on an arrow. These stabilizing surfaces allow equilibrium of aerodynamic forces and to stabilise the flight dynamics of pitch and yaw. They are usually mounted on the tail section (empennage), although in the canard layout, the main aft wing replaces the canard foreplane as pitch stabilizer. Tandem wing and tailless aircraft rely on the same general rule to achieve stability, the aft surface being the stabilising one. A rotary wing is typically unstable in yaw, requiring a vertical stabiliser. A balloon is typically very stable in pitch and roll due to the way the payload is slung underneath the center of lift. Control Flight control surfaces enable the pilot to control an aircraft's flight attitude and are usually part of the wing or mounted on, or integral with, the associated stabilizing surface. Their development was a critical advance in the history of aircraft, which had until that point been uncontrollable in flight. Aerospace engineers develop control systems for a vehicle's orientation (attitude) about its center of mass. The control systems include actuators, which exert forces in various directions, and generate rotational forces or moments about the aerodynamic center of the aircraft, and thus rotate the aircraft in pitch, roll, or yaw. For example, a pitching moment is a vertical force applied at a distance forward or aft from the aerodynamic center of the aircraft, causing the aircraft to pitch up or down. Control systems are also sometimes used to increase or decrease drag, for example to slow the aircraft to a safe speed for landing. The two main aerodynamic forces acting on any aircraft are lift supporting it in the air and drag opposing its motion. Control surfaces or other techniques may also be used to affect these forces directly, without inducing any rotation. Environmental impact Aircraft permit long distance, high speed travel and may be a more fuel efficient mode of transportation in some circumstances. Aircraft have environmental and climate impacts beyond fuel efficiency considerations, however. They are also relatively noisy compared to other forms of travel and high altitude aircraft generate contrails, which experimental evidence suggests may alter weather patterns. Uses for aircraft Aircraft are produced in several different types optimized for various uses; military aircraft, which includes not just combat types but many types of supporting aircraft, and civil aircraft, which include all non-military types, experimental and model. Military A military aircraft is any aircraft that is operated by a legal or insurrectionary armed service of any type. Military aircraft can be either combat or non-combat: Combat aircraft are aircraft designed to destroy enemy equipment using its own armament. Combat aircraft divide broadly into fighters and bombers, with several in-between types, such as fighter-bombers and attack aircraft, including attack helicopters. Non-combat aircraft are not designed for combat as their primary function, but may carry weapons for self-defense. Non-combat roles include search and rescue, reconnaissance, observation, transport, training, and aerial refueling. These aircraft are often variants of civil aircraft. Most military aircraft are powered heavier-than-air types. 
Other types, such as gliders and balloons, have also been used as military aircraft; for example, balloons were used for observation during the American Civil War and World War I, and military gliders were used during World War II to land troops. Civil Civil aircraft divide into commercial and general types, however there are some overlaps. Commercial aircraft include types designed for scheduled and charter airline flights, carrying passengers, mail and other cargo. The larger passenger-carrying types are the airliners, the largest of which are wide-body aircraft. Some of the smaller types are also used in general aviation, and some of the larger types are used as VIP aircraft. General aviation is a catch-all covering other kinds of private (where the pilot is not paid for time or expenses) and commercial use, and involving a wide range of aircraft types such as business jets (bizjets), trainers, homebuilt, gliders, warbirds and hot air balloons to name a few. The vast majority of aircraft today are general aviation types. Experimental An experimental aircraft is one that has not been fully proven in flight, or that carries a Special Airworthiness Certificate, called an Experimental Certificate in United States parlance. This often implies that the aircraft is testing new aerospace technologies, though the term also refers to amateur-built and kit-built aircraft, many of which are based on proven designs. Model A model aircraft is a small unmanned type made to fly for fun, for static display, for aerodynamic research or for other purposes. A scale model is a replica of some larger design.
Technology
Transportation
null
896
https://en.wikipedia.org/wiki/Argon
Argon
Argon is a chemical element; it has symbol Ar and atomic number 18. It is in group 18 of the periodic table and is a noble gas. Argon is the third most abundant gas in Earth's atmosphere, at 0.934% (9340 ppmv). It is more than twice as abundant as water vapor (which averages about 4000 ppmv, but varies greatly), 23 times as abundant as carbon dioxide (400 ppmv), and more than 500 times as abundant as neon (18 ppmv). Argon is the most abundant noble gas in Earth's crust, comprising 0.00015% of the crust. Nearly all argon in Earth's atmosphere is radiogenic argon-40, derived from the decay of potassium-40 in Earth's crust. In the universe, argon-36 is by far the most common argon isotope, as it is the most easily produced by stellar nucleosynthesis in supernovas. The name "argon" is derived from the Greek word ἀργόν (argon), the neuter singular form of ἀργός (argos), meaning 'lazy' or 'inactive', as a reference to the fact that the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990. Argon is extracted industrially by the fractional distillation of liquid air. It is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily unreactive substances become reactive; for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. It is also used in incandescent and fluorescent lighting, and other gas-discharge tubes. It makes a distinctive blue-green gas laser. It is also used in fluorescent glow starters. Characteristics Argon has approximately the same solubility in water as oxygen and is 2.5 times more soluble in water than nitrogen. Argon is colorless, odorless, nonflammable and nontoxic as a solid, liquid or gas. Argon is chemically inert under most conditions and forms no confirmed stable compounds at room temperature. Although argon is a noble gas, it can form some compounds under various extreme conditions. Argon fluorohydride (HArF), a compound of argon with fluorine and hydrogen that is stable below 17 K (−256 °C), has been demonstrated. Although the neutral ground-state chemical compounds of argon are presently limited to HArF, argon can form clathrates with water when atoms of argon are trapped in a lattice of water molecules. Ions, such as ArH+, and excited-state complexes, such as ArF, have been demonstrated. Theoretical calculations predict several more argon compounds that should be stable but have not yet been synthesized. History Argon (Greek ἀργόν, neuter singular form of ἀργός, meaning "lazy" or "inactive") is named in reference to its chemical inactivity. This property of the first noble gas to be discovered impressed the namers. An unreactive gas was suspected to be a component of air by Henry Cavendish in 1785. Argon was first isolated from air in 1894 by Lord Rayleigh and Sir William Ramsay at University College London by removing oxygen, carbon dioxide, water, and nitrogen from a sample of clean air. They first accomplished this by replicating an experiment of Henry Cavendish's. 
They trapped a mixture of atmospheric air with additional oxygen in a test-tube (A) upside-down over a large quantity of dilute alkali solution (B), which in Cavendish's original experiment was potassium hydroxide, and conveyed a current through wires insulated by U-shaped glass tubes (CC) which sealed around the platinum wire electrodes, leaving the ends of the wires (DD) exposed to the gas and insulated from the alkali solution. The arc was powered by a battery of five Grove cells and a Ruhmkorff coil of medium size. The alkali absorbed the oxides of nitrogen produced by the arc and also carbon dioxide. They operated the arc until no more reduction of volume of the gas could be seen for at least an hour or two and the spectral lines of nitrogen disappeared when the gas was examined. The remaining oxygen was reacted with alkaline pyrogallate to leave behind an apparently non-reactive gas which they called argon. Before isolating the gas, they had determined that nitrogen produced from chemical compounds was 0.5% lighter than nitrogen from the atmosphere. The difference was slight, but it was important enough to attract their attention for many months. They concluded that there was another gas in the air mixed in with the nitrogen. Argon was also encountered in 1882 through the independent research of H. F. Newall and W. N. Hartley. Each observed new lines in the emission spectrum of air that did not match known elements. Prior to 1957, the symbol for argon was "A". This was changed to Ar after the International Union of Pure and Applied Chemistry published the work Nomenclature of Inorganic Chemistry in 1957. Occurrence Argon constitutes 0.934% by volume and 1.288% by mass of Earth's atmosphere. Air is the primary industrial source of purified argon products. Argon is isolated from air by fractionation, most commonly by cryogenic fractional distillation, a process that also produces purified nitrogen, oxygen, neon, krypton and xenon. Earth's crust and seawater contain 1.2 ppm and 0.45 ppm of argon, respectively. Isotopes The main isotopes of argon found on Earth are 40Ar (99.6%), 36Ar (0.34%), and 38Ar (0.06%). Naturally occurring 40K, with a half-life of 1.25 billion years, decays to stable 40Ar (11.2%) by electron capture or positron emission, and also to stable 40Ca (88.8%) by beta decay. These properties and ratios are used to determine the age of rocks by K–Ar dating. In Earth's atmosphere, 39Ar is made by cosmic ray activity, primarily by neutron capture of 40Ar followed by two-neutron emission. In the subsurface environment, it is also produced through neutron capture by 39K, followed by proton emission. 37Ar is created from neutron capture by 40Ca followed by emission of an alpha particle, as a result of subsurface nuclear explosions. It has a half-life of 35 days. Between locations in the Solar System, the isotopic composition of argon varies greatly. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope, as it is on Earth. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. This contrasts with the low abundance of primordial 36Ar in Earth's atmosphere, which is only 31.5 ppmv (= 9340 ppmv × 0.337%), comparable with that of neon (18.18 ppmv) on Earth and with interplanetary gases, measured by probes. 
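As a worked illustration of the K–Ar dating relation mentioned above, the age t of a rock follows from the measured radiogenic argon-to-potassium ratio together with the decay data quoted in this section (the half-life of 1.25 billion years and the 11.2% electron-capture branch); the age equation itself is a standard geochronology formula supplied here for clarity, not taken from this article:

\[
t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{\lambda}{\lambda_{\mathrm{EC}}}\,\frac{{}^{40}\mathrm{Ar}^{*}}{{}^{40}\mathrm{K}}\right),
\qquad
\lambda = \frac{\ln 2}{1.25\times 10^{9}\ \mathrm{yr}},
\qquad
\frac{\lambda_{\mathrm{EC}}}{\lambda}\approx 0.112,
\]

where 40Ar* is the radiogenic argon accumulated in the rock and λ is the total decay constant of 40K.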
The atmospheres of Mars, Mercury and Titan (the largest moon of Saturn) contain argon, predominantly as 40Ar. The predominance of radiogenic 40Ar is the reason the standard atomic weight of terrestrial argon is greater than that of the next element, potassium, a fact that was puzzling when argon was discovered. Mendeleev positioned the elements on his periodic table in order of atomic weight, but the inertness of argon suggested a placement before the reactive alkali metal. Henry Moseley later solved this problem by showing that the periodic table is actually arranged in order of atomic number (see History of the periodic table). Compounds Argon's complete octet of electrons indicates full s and p subshells. This full valence shell makes argon very stable and extremely resistant to bonding with other elements. Before 1962, argon and the other noble gases were considered to be chemically inert and unable to form compounds; however, compounds of the heavier noble gases have since been synthesized. The first argon compound with tungsten pentacarbonyl, W(CO)5Ar, was isolated in 1975. However, it was not widely recognised at that time. In August 2000, another argon compound, argon fluorohydride (HArF), was formed by researchers at the University of Helsinki, by shining ultraviolet light onto frozen argon containing a small amount of hydrogen fluoride with caesium iodide. This discovery caused the recognition that argon could form weakly bound compounds, even though it was not the first. It is stable up to 17 kelvins (−256 °C). The metastable dication ArCF22+, which is valence-isoelectronic with carbonyl fluoride and phosgene, was observed in 2010. Argon-36, in the form of argon hydride (argonium) ions, has been detected in the interstellar medium associated with the Crab Nebula supernova; this was the first noble-gas molecule detected in outer space. Solid argon hydride (Ar(H2)2) has the same crystal structure as the MgZn2 Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the H2 molecules in Ar(H2)2 dissociate above 175 GPa. Production Argon is extracted industrially by the fractional distillation of liquid air in a cryogenic air separation unit, a process that separates liquid nitrogen, which boils at 77.3 K, from argon, which boils at 87.3 K, and liquid oxygen, which boils at 90.2 K. About 700,000 tonnes of argon are produced worldwide every year. Applications Argon has several desirable properties: Argon is a chemically inert gas. Argon is the cheapest alternative when nitrogen is not sufficiently inert. Argon has low thermal conductivity. Argon has electronic properties (ionization and/or the emission spectrum) desirable for some applications. Other noble gases would be equally suitable for most of these applications, but argon is by far the cheapest. It is inexpensive, since it occurs naturally in air and is readily obtained as a byproduct of cryogenic air separation in the production of liquid oxygen and liquid nitrogen: the primary constituents of air are used on a large industrial scale. The other noble gases (except helium) are produced this way as well, but argon is the most plentiful by far. The bulk of its applications arise simply because it is inert and relatively cheap. Industrial processes Argon is used in some high-temperature industrial processes where ordinarily non-reactive substances become reactive. For example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. 
For some of these processes, the presence of nitrogen or oxygen gases might cause defects within the material. Argon is used in some types of arc welding such as gas metal arc welding and gas tungsten arc welding, as well as in the processing of titanium and other reactive elements. An argon atmosphere is also used for growing crystals of silicon and germanium. Argon is used in the poultry industry to asphyxiate birds, either for mass culling following disease outbreaks, or as a means of slaughter more humane than electric stunning. Argon is denser than air and displaces oxygen close to the ground during inert gas asphyxiation. Its non-reactive nature makes it suitable in a food product, and since it replaces oxygen within the dead bird, argon also enhances shelf life. Argon is sometimes used for extinguishing fires where valuable equipment may be damaged by water or foam. Scientific research Liquid argon is used as the target for neutrino experiments and direct dark matter searches. The interaction between the hypothetical WIMPs and an argon nucleus produces scintillation light that is detected by photomultiplier tubes. Two-phase detectors containing argon gas are used to detect the ionized electrons produced during the WIMP–nucleus scattering. As with most other liquefied noble gases, argon has a high scintillation light yield (about 51 photons/keV), is transparent to its own scintillation light, and is relatively easy to purify. Compared to xenon, argon is cheaper and has a distinct scintillation time profile, which allows the separation of electronic recoils from nuclear recoils. On the other hand, its intrinsic beta-ray background is larger due to 39Ar contamination, unless one uses argon from underground sources, which has much less 39Ar contamination. Most of the argon in Earth's atmosphere was produced by electron capture of long-lived 40K (40K + e− → 40Ar + ν) present in natural potassium within Earth. The 39Ar activity in the atmosphere is maintained by cosmogenic production through the knockout reaction 40Ar(n,2n)39Ar and similar reactions. The half-life of 39Ar is only 269 years. As a result, the underground Ar, shielded by rock and water, has much less 39Ar contamination. Dark-matter detectors currently operating with liquid argon include DarkSide, WArP, ArDM, microCLEAN and DEAP. Neutrino experiments include ICARUS and MicroBooNE, both of which use high-purity liquid argon in a time projection chamber for fine-grained three-dimensional imaging of neutrino interactions. At Linköping University, Sweden, the inert gas is being utilized in a vacuum chamber in which plasma is introduced to ionize metallic films. This process results in a film usable for manufacturing computer processors. The new process would eliminate the need for chemical baths and use of expensive, dangerous and rare materials. Preservative Argon is used to displace oxygen- and moisture-containing air in packaging material to extend the shelf-lives of the contents (argon has the European food additive code E938). Aerial oxidation, hydrolysis, and other chemical reactions that degrade the products are retarded or prevented entirely. High-purity chemicals and pharmaceuticals are sometimes packed and sealed in argon. In winemaking, argon is used in a variety of activities to provide a barrier against oxygen at the liquid surface, which can spoil wine by fueling both microbial metabolism (as with acetic acid bacteria) and standard redox chemistry. Argon is sometimes used as the propellant in aerosol cans. 
Argon is also used as a preservative for such products as varnish, polyurethane, and paint, by displacing air to prepare a container for storage. Since 2002, the American National Archives stores important national documents such as the Declaration of Independence and the Constitution within argon-filled cases to inhibit their degradation. Argon is preferable to the helium that had been used in the preceding five decades, because helium gas escapes through the intermolecular pores in most containers and must be regularly replaced. Laboratory equipment Argon may be used as the inert gas within Schlenk lines and gloveboxes. Argon is preferred to less expensive nitrogen in cases where nitrogen may react with the reagents or apparatus. Argon may be used as the carrier gas in gas chromatography and in electrospray ionization mass spectrometry; it is the gas of choice for the plasma used in ICP spectroscopy. Argon is preferred for the sputter coating of specimens for scanning electron microscopy. Argon gas is also commonly used for sputter deposition of thin films as in microelectronics and for wafer cleaning in microfabrication. Medical use Cryosurgery procedures such as cryoablation use liquid argon to destroy tissue such as cancer cells. It is used in a procedure called "argon-enhanced coagulation", a form of argon plasma beam electrosurgery. The procedure carries a risk of producing gas embolism and has resulted in the death of at least one patient. Blue argon lasers are used in surgery to weld arteries, destroy tumors, and correct eye defects. Argon has also been used experimentally to replace nitrogen in the breathing or decompression mix known as Argox, to speed the elimination of dissolved nitrogen from the blood. Lighting Incandescent lights are filled with argon, to preserve the filaments at high temperature from oxidation. It is used for the specific way it ionizes and emits light, such as in plasma globes and calorimetry in experimental particle physics. Gas-discharge lamps filled with pure argon provide lilac/violet light; with argon and some mercury, blue light. Argon is also used for blue and green argon-ion lasers. Miscellaneous uses Argon is used for thermal insulation in energy-efficient windows. Argon is also used in technical scuba diving to inflate a dry suit because it is inert and has low thermal conductivity. Argon is used as a propellant in the development of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR). Compressed argon gas is allowed to expand, to cool the seeker heads of some versions of the AIM-9 Sidewinder missile and other missiles that use cooled thermal seeker heads. The gas is stored at high pressure. Argon-39, with a half-life of 269 years, has been used for a number of applications, primarily ice core and ground water dating. Also, potassium–argon dating and related argon-argon dating are used to date sedimentary, metamorphic, and igneous rocks. Argon has been used by athletes as a doping agent to simulate hypoxic conditions. In 2014, the World Anti-Doping Agency (WADA) added argon and xenon to the list of prohibited substances and methods, although at this time there is no reliable test for abuse. Safety Although argon is non-toxic, it is 38% more dense than air and therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because it is colorless, odorless, and tasteless. 
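The "38% more dense than air" figure quoted above follows directly from molar masses, treating both gases as ideal at the same temperature and pressure; the molar mass of dry air (about 28.96 g/mol) is a standard value assumed here rather than stated in the text:

\[
\frac{\rho_{\mathrm{Ar}}}{\rho_{\mathrm{air}}} \approx \frac{M_{\mathrm{Ar}}}{M_{\mathrm{air}}} = \frac{39.95\ \mathrm{g/mol}}{28.96\ \mathrm{g/mol}} \approx 1.38.
\]

This density excess is why leaking argon pools in low-lying, poorly ventilated spaces and displaces breathable air there.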
A 1994 incident, in which a man was asphyxiated after entering an argon-filled section of oil pipe under construction in Alaska, highlights the dangers of argon tank leakage in confined spaces and emphasizes the need for proper use, storage and handling.
Physical sciences
Chemical elements_2
null
897
https://en.wikipedia.org/wiki/Arsenic
Arsenic
Arsenic is a chemical element with the symbol As and the atomic number 33. It is a metalloid and one of the pnictogens, and therefore shares many properties with its group 15 neighbors phosphorus and antimony. Arsenic is a notoriously toxic heavy metal. It occurs naturally in many minerals, usually in combination with sulfur and metals, but also as a pure elemental crystal. It has various allotropes, but only the grey form, which has a metallic appearance, is important to industry. The primary use of arsenic is in alloys of lead (for example, in car batteries and ammunition). Arsenic is also a common n-type dopant in semiconductor electronic devices, and a component of the III–V compound semiconductor gallium arsenide. Arsenic and its compounds, especially the trioxide, are used in the production of pesticides, treated wood products, herbicides, and insecticides. These applications are declining with the increasing recognition of the toxicity of arsenic and its compounds. Arsenic has been known since ancient times to be poisonous to humans. However, a few species of bacteria are able to use arsenic compounds as respiratory metabolites. Trace quantities of arsenic have been proposed to be an essential dietary element in rats, hamsters, goats, and chickens. Research has not been conducted to determine whether small amounts of arsenic may play a role in human metabolism. However, arsenic poisoning occurs in multicellular life if quantities are larger than needed. Arsenic contamination of groundwater is a problem that affects millions of people across the world. The United States' Environmental Protection Agency states that all forms of arsenic are a serious risk to human health. The United States' Agency for Toxic Substances and Disease Registry ranked arsenic number 1 in its 2001 prioritized list of hazardous substances at Superfund sites. Arsenic is classified as a Group-A carcinogen. Characteristics Physical characteristics The three most common arsenic allotropes are grey, yellow, and black arsenic, with grey being the most common. Grey arsenic (α-As, space group R-3m, No. 166) adopts a double-layered structure consisting of many interlocked, ruffled, six-membered rings. Because of weak bonding between the layers, grey arsenic is brittle and has a relatively low Mohs hardness of 3.5. Nearest and next-nearest neighbors form a distorted octahedral complex, with the three atoms in the same double-layer being slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 5.73 g/cm3. Grey arsenic is a semimetal, but becomes a semiconductor with a bandgap of 1.2–1.4 eV if amorphized. Grey arsenic is also the most stable form. Yellow arsenic is soft and waxy, and somewhat similar to tetraphosphorus (P4). Both have four atoms arranged in a tetrahedral structure in which each atom is bound to each of the other three atoms by a single bond. This unstable allotrope, being molecular, is the most volatile, least dense, and most toxic. Solid yellow arsenic is produced by rapid cooling of arsenic vapor, As4. It is rapidly transformed into grey arsenic by light. The yellow form has a density of 1.97 g/cm3. Black arsenic is similar in structure to black phosphorus. Black arsenic can also be formed by cooling vapor at around 100–220 °C and by crystallization of amorphous arsenic in the presence of mercury vapors. It is glassy and brittle. Black arsenic is also a poor electrical conductor. 
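For a rough sense of what the 1.2–1.4 eV bandgap of amorphized grey arsenic means optically, the corresponding absorption-edge wavelength can be estimated with the standard photon-energy relation; the conversion constant hc ≈ 1240 eV·nm is supplied here and is not taken from the article:

\[
\lambda = \frac{hc}{E_g} \approx \frac{1240\ \mathrm{eV\,nm}}{1.2\text{–}1.4\ \mathrm{eV}} \approx 890\text{–}1030\ \mathrm{nm},
\]

i.e. the absorption edge of the amorphized material would lie in the near-infrared.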
Arsenic sublimes upon heating at atmospheric pressure, converting directly to a gaseous form without an intervening liquid state at 887 K (615 °C). The triple point is at 3.63 MPa and 1,090 K (817 °C). Isotopes Arsenic occurs in nature as one stable isotope, 75As, and is therefore called a monoisotopic element. As of 2024, at least 32 radioisotopes have also been synthesized, ranging in atomic mass from 64 to 95. The most stable of these is 73As with a half-life of 80.30 days. All other isotopes have half-lives of under one day, with the exception of 71As (t1/2=65.30 hours), 72As (t1/2=26.0 hours), 74As (t1/2=17.77 days), 76As (t1/2=26.26 hours), and 77As (t1/2=38.83 hours). Isotopes that are lighter than the stable 75As tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. At least 10 nuclear isomers have been described, ranging in atomic mass from 66 to 84. The most stable of arsenic's isomers is 68mAs with a half-life of 111 seconds. Chemistry Arsenic has electronegativity and ionization energies similar to those of its lighter pnictogen congener phosphorus and therefore readily forms covalent molecules with most of the nonmetals. Though stable in dry air, arsenic forms a golden-bronze tarnish upon exposure to humidity which eventually becomes a black surface layer. When heated in air, arsenic oxidizes to arsenic trioxide; the fumes from this reaction have an odor resembling garlic. This odor can be detected on striking arsenide minerals such as arsenopyrite with a hammer. It burns in oxygen to form arsenic trioxide and arsenic pentoxide, which have the same structure as the more well-known phosphorus compounds, and in fluorine to give arsenic pentafluoride. Arsenic makes arsenic acid with concentrated nitric acid, arsenous acid with dilute nitric acid, and arsenic trioxide with concentrated sulfuric acid; however, it does not react with water, alkalis, or non-oxidising acids. Arsenic reacts with metals to form arsenides, though these are not ionic compounds containing the As3− ion as the formation of such an anion would be highly endothermic and even the group 1 arsenides have properties of intermetallic compounds. Like germanium, selenium, and bromine, which like arsenic succeed the 3d transition series, arsenic is much less stable in the +5 oxidation state than its vertical neighbors phosphorus and antimony, and hence arsenic pentoxide and arsenic acid are potent oxidizers. Compounds Compounds of arsenic resemble, in some respects, those of phosphorus, which occupies the same group (column) of the periodic table. The most common oxidation states for arsenic are: −3 in the arsenides, which are alloy-like intermetallic compounds, +3 in the arsenites, and +5 in the arsenates and most organoarsenic compounds. Arsenic also bonds readily to itself, as seen in the square As44− ions in the mineral skutterudite. In the +3 oxidation state, arsenic is typically pyramidal owing to the influence of the lone pair of electrons. Inorganic compounds One of the simplest arsenic compounds is the trihydride, the highly toxic, flammable, pyrophoric arsine (AsH3). This compound is generally regarded as stable, since at room temperature it decomposes only slowly. At temperatures of 250–300 °C decomposition to arsenic and hydrogen is rapid. Several factors, such as humidity, presence of light and certain catalysts (namely aluminium), facilitate the decomposition. 
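The thermal decomposition of arsine described above can be summarized by the following balanced equation, a standard stoichiometric restatement of "decomposition to arsenic and hydrogen" added here for clarity:

\[
2\,\mathrm{AsH_3} \;\longrightarrow\; 2\,\mathrm{As} + 3\,\mathrm{H_2} \qquad (250\text{–}300\ ^\circ\mathrm{C}).
\]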
Arsine oxidises readily in air to form arsenic trioxide and water, and analogous reactions take place with sulfur and selenium instead of oxygen. Arsenic forms colorless, odorless, crystalline oxides As2O3 ("white arsenic") and As2O5 which are hygroscopic and readily soluble in water to form acidic solutions. Arsenic(V) acid is a weak acid and its salts, known as arsenates, are a major source of arsenic contamination of groundwater in regions with high levels of naturally-occurring arsenic minerals. Synthetic arsenates include Scheele's Green (cupric hydrogen arsenate, acidic copper arsenate), calcium arsenate, and lead hydrogen arsenate. These three have been used as agricultural insecticides and poisons. The protonation steps between the arsenate and arsenic acid are similar to those between phosphate and phosphoric acid. Unlike phosphorous acid, arsenous acid is genuinely tribasic, with the formula As(OH)3. A broad variety of sulfur compounds of arsenic are known. Orpiment (As2S3) and realgar (As4S4) are somewhat abundant and were formerly used as painting pigments. In As4S4, arsenic has a formal oxidation state of +2; the molecule features As-As bonds, so the total covalency of As is still 3. Both orpiment and realgar, as well as As4S3, have selenium analogs; the analogous As2Te3 is known as the mineral kalgoorlieite, and the anion As2Te− is known as a ligand in cobalt complexes. All trihalides of arsenic(III) are well known except the astatide, which is unknown. Arsenic pentafluoride (AsF5) is the only important pentahalide, reflecting the lower stability of the +5 oxidation state; even so, it is a very strong fluorinating and oxidizing agent. (The pentachloride is stable only below −50 °C, at which temperature it decomposes to the trichloride, releasing chlorine gas.) Alloys Arsenic is used as the group 5 element in the III-V semiconductors gallium arsenide, indium arsenide, and aluminium arsenide. The valence electron count of GaAs is the same as a pair of Si atoms, but the band structure is completely different which results in distinct bulk properties. Other arsenic alloys include the II-V semiconductor cadmium arsenide. Organoarsenic compounds A large variety of organoarsenic compounds are known. Several were developed as chemical warfare agents during World War I, including vesicants such as lewisite and vomiting agents such as adamsite. Cacodylic acid, which is of historic and practical interest, arises from the methylation of arsenic trioxide, a reaction that has no analogy in phosphorus chemistry. Cacodyl was the first organometallic compound known (even though arsenic is not a true metal) and was named from the Greek κακωδία "stink" for its offensive, garlic-like odor; it is very toxic. Occurrence and production Arsenic is the 53rd most abundant element in the Earth's crust, comprising about 1.5 parts per million (0.00015%). Typical background concentrations of arsenic do not exceed 3 ng/m3 in the atmosphere; 100 mg/kg in soil; 400 μg/kg in vegetation; 10 μg/L in freshwater and 1.5 μg/L in seawater. Arsenic is the 22nd most abundant element in seawater and ranks 41st in abundance in the universe. Minerals with the formula MAsS and MAs2 (M = Fe, Ni, Co) are the dominant commercial sources of arsenic, together with realgar (an arsenic sulfide mineral) and native (elemental) arsenic. An illustrative mineral is arsenopyrite (FeAsS), which is structurally related to iron pyrite. Many minor As-containing minerals are known. 
Arsenic also occurs in various organic forms in the environment. In 2014, China was the top producer of white arsenic with almost 70% world share, followed by Morocco, Russia, and Belgium, according to the British Geological Survey and the United States Geological Survey. Most arsenic refinement operations in the US and Europe have closed over environmental concerns. Arsenic is found in the smelter dust from copper, gold, and lead smelters, and is recovered primarily from copper refinement dust. On roasting arsenopyrite in air, arsenic sublimes as arsenic(III) oxide leaving iron oxides, while roasting without air results in the production of gray arsenic. Further purification from sulfur and other chalcogens is achieved by sublimation in vacuum, in a hydrogen atmosphere, or by distillation from a molten lead-arsenic mixture. History The word arsenic has its origin in the Syriac word zarnika, from Arabic al-zarnīḵ 'the orpiment', based on Persian zar ("gold") from the word zarnikh, meaning "yellow" (literally "gold-colored") and hence "(yellow) orpiment". It was adopted into Greek (using folk etymology) as arsenikon (ἀρσενικόν) – a neuter form of the Greek adjective arsenikos (ἀρσενικός), meaning "male", "virile". Latin-speakers adopted the Greek term as arsenicum, which in French ultimately became arsenic, whence the English word "arsenic". Arsenic sulfides (orpiment, realgar) and oxides have been known and used since ancient times. Zosimos (c. 300 AD) describes roasting sandarach (realgar) to obtain a cloud of arsenic (arsenic trioxide), which he then reduces to gray arsenic. As the symptoms of arsenic poisoning are not very specific, the substance was frequently used for murder until the advent in the 1830s of the Marsh test, a sensitive chemical test for its presence. (Another less sensitive but more general test is the Reinsch test.) Owing to its use by the ruling class to murder one another and its potency and discreetness, arsenic has been called the "poison of kings" and the "king of poisons". Arsenic became known as "the inheritance powder" due to its use in killing family members in the Renaissance era. During the Bronze Age, arsenic was melted with copper to make arsenical bronze. Jabir ibn Hayyan described the isolation of arsenic before 815 AD. Albertus Magnus (Albert the Great, 1193–1280) later isolated the element from a compound in 1250, by heating soap together with arsenic trisulfide. In 1649, Johann Schröder published two ways of preparing arsenic. Crystals of elemental (native) arsenic are found in nature, although rarely. Cadet's fuming liquid (impure cacodyl), often claimed as the first synthetic organometallic compound, was synthesized in 1760 by Louis Claude Cadet de Gassicourt through the reaction of potassium acetate with arsenic trioxide. In the Victorian era, women would eat "arsenic" ("white arsenic" or arsenic trioxide) mixed with vinegar and chalk to improve the complexion of their faces, making their skin paler (to show they did not work in the fields). The accidental use of arsenic in the adulteration of foodstuffs led to the Bradford sweet poisoning in 1858, which resulted in 21 deaths. From the late 18th century, wallpaper production began to use dyes made from arsenic, which was thought to increase the pigment's brightness. One account of the illness and 1821 death of Napoleon I implicates arsenic poisoning involving wallpaper. Two arsenic pigments have been widely used since their discovery – Paris Green in 1814 and Scheele's Green in 1775. 
After the toxicity of arsenic became widely known, these chemicals were used less often as pigments and more often as insecticides. In the 1860s, an arsenic byproduct of dye production, London Purple, was widely used. This was a solid mixture of arsenic trioxide, aniline, lime, and ferrous oxide, insoluble in water and very toxic by inhalation or ingestion, but it was later replaced with Paris Green, another arsenic-based dye. With better understanding of the toxicological mechanism, two other compounds were used starting in the 1890s. Arsenite of lime and arsenate of lead were used widely as insecticides until the discovery of DDT in 1942. In small doses, soluble arsenic compounds act as stimulants, and were once popular as a medicine in the mid-18th to 19th centuries; this use was especially prevalent for sport animals such as race horses or work dogs and continued into the 20th century. A 2006 study of the remains of the Australian racehorse Phar Lap determined that its 1932 death was caused by a massive overdose of arsenic. Sydney veterinarian Percy Sykes stated, "In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution) ... It was so common that I'd reckon 90 per cent of the horses had arsenic in their system." Applications Agricultural The toxicity of arsenic to insects, bacteria, and fungi led to its use as a wood preservative. In the 1930s, a process of treating wood with chromated copper arsenate (also known as CCA or Tanalith) was invented, and for decades, this treatment was the most extensive industrial use of arsenic. An increased appreciation of the toxicity of arsenic led to a ban of CCA in consumer products in 2004, initiated by the European Union and United States. However, CCA remains in heavy use in other countries (such as on Malaysian rubber plantations). Arsenic was also used in various agricultural insecticides and poisons. For example, lead hydrogen arsenate was a common insecticide on fruit trees, but contact with the compound sometimes resulted in brain damage among those working the sprayers. In the second half of the 20th century, monosodium methyl arsenate (MSMA) and disodium methyl arsenate (DSMA) – less toxic organic forms of arsenic – replaced lead arsenate in agriculture. These organic arsenicals were in turn phased out in the United States by 2013 in all agricultural activities except cotton farming. The biogeochemistry of arsenic is complex and includes various adsorption and desorption processes. The toxicity of arsenic is connected to its solubility and is affected by pH. Arsenite (AsO33−) is more soluble than arsenate (AsO43−) and is more toxic; however, at a lower pH, arsenate becomes more mobile and toxic. It was found that addition of sulfur, phosphorus, and iron oxides to high-arsenite soils greatly reduces arsenic phytotoxicity. Arsenic is used as a feed additive in poultry and swine production; in particular, it was used in the U.S. until 2015 to increase weight gain, improve feed efficiency, and prevent disease. An example is roxarsone, which had been used as a broiler starter by about 70% of U.S. broiler growers. In 2011, Alpharma, a subsidiary of Pfizer Inc., which produces roxarsone, voluntarily suspended sales of the drug in response to studies showing elevated levels of inorganic arsenic, a carcinogen, in treated chickens. A successor to Alpharma, Zoetis, continued to sell nitarsone until 2015, primarily for use in turkeys. 
Medical use During the 17th, 18th, and 19th centuries, a number of arsenic compounds were used as medicines, including arsphenamine (by Paul Ehrlich) and arsenic trioxide (by Thomas Fowler), for treating diseases such as cancer or psoriasis. Arsphenamine, as well as neosalvarsan, was indicated for syphilis, but has been superseded by modern antibiotics. However, arsenicals such as melarsoprol are still used for the treatment of trypanosomiasis in spite of their severe toxicity, since the disease is almost uniformly fatal if untreated. In 2000, the US Food and Drug Administration approved arsenic trioxide for the treatment of patients with acute promyelocytic leukemia that is resistant to all-trans retinoic acid. A 2008 paper reports success in locating tumors using arsenic-74 (a positron emitter). This isotope produces clearer PET scan images than the previous radioactive agent, iodine-124, because the body tends to transport iodine to the thyroid gland producing signal noise. Nanoparticles of arsenic have shown the ability to kill cancer cells with less cytotoxicity than other arsenic formulations. Alloys The main use of arsenic is in alloying with lead. Lead components in car batteries are strengthened by the presence of a very small percentage of arsenic. Dezincification of brass (a copper-zinc alloy) is greatly reduced by the addition of arsenic. "Phosphorus Deoxidized Arsenical Copper" with an arsenic content of 0.3% has an increased corrosion stability in certain environments. Gallium arsenide is an important semiconductor material, used in integrated circuits. Circuits made from GaAs are much faster (but also much more expensive) than those made from silicon. Unlike silicon, GaAs has a direct bandgap, and can be used in laser diodes and LEDs to convert electrical energy directly into light. Military After World War I, the United States built a stockpile of 20,000 tons of weaponized lewisite (ClCH=CHAsCl2), an organoarsenic vesicant (blister agent) and lung irritant. The stockpile was neutralized with bleach and dumped into the Gulf of Mexico in the 1950s. During the Vietnam War, the United States used Agent Blue, a mixture of sodium cacodylate and its acid form, as one of the rainbow herbicides to deprive North Vietnamese soldiers of foliage cover and rice. Other uses Copper acetoarsenite was used as a green pigment known under many names, including Paris Green and Emerald Green. It caused numerous arsenic poisonings. Scheele's Green, a copper arsenate, was used in the 19th century as a coloring agent in sweets. Arsenic is used in bronzing. As much as 2% of produced arsenic is used in lead alloys for lead shot and bullets. Arsenic is added in small quantities to alpha-brass to make it dezincification-resistant. This grade of brass is used in plumbing fittings and other wet environments. Arsenic is also used for taxonomic sample preservation. It was also used in embalming fluids historically. Arsenic was used in the taxidermy process up until the 1980s. Arsenic was used as an opacifier in ceramics, creating white glazes. Until recently, arsenic was used in optical glass. Modern glass manufacturers have ceased using both arsenic and lead. Biological role Bacteria Some species of bacteria obtain their energy in the absence of oxygen by oxidizing various fuels while reducing arsenate to arsenite; the enzymes involved in this reduction are known as arsenate reductases (Arr). Under oxidative environmental conditions some bacteria use arsenite as fuel, which they oxidize to arsenate. 
In 2008, bacteria were discovered that employ a version of photosynthesis in the absence of oxygen with arsenites as electron donors, producing arsenates (just as ordinary photosynthesis uses water as electron donor, producing molecular oxygen). Researchers conjecture that, over the course of history, these photosynthesizing organisms produced the arsenates that allowed the arsenate-reducing bacteria to thrive. One strain, PHS-1, has been isolated and is related to the gammaproteobacterium Ectothiorhodospira shaposhnikovii. The mechanism is unknown, but an encoded Arr enzyme may function in reverse to its known homologues. In 2011, it was postulated that the Halomonadaceae strain GFAJ-1 could be grown in the absence of phosphorus if that element were substituted with arsenic, exploiting the fact that the arsenate and phosphate anions are similar structurally. The study was widely criticised and subsequently refuted by independent research groups. Potential role in higher animals Arsenic may be an essential trace mineral in birds, involved in the synthesis of methionine metabolites. However, the role of arsenic in bird nutrition is disputed, as other authors state that arsenic is toxic in small amounts. Some evidence indicates that arsenic is an essential trace mineral in mammals. Heredity Arsenic has been linked to epigenetic changes, heritable changes in gene expression that occur without changes in DNA sequence. These include DNA methylation, histone modification, and RNA interference. Toxic levels of arsenic cause significant DNA hypermethylation of tumor suppressor genes p16 and p53, thus increasing risk of carcinogenesis. These epigenetic events have been studied in vitro using human kidney cells and in vivo using rat liver cells and peripheral blood leukocytes in humans. Inductively coupled plasma mass spectrometry (ICP-MS) is used to detect precise levels of intracellular arsenic and other arsenic bases involved in epigenetic modification of DNA. Studies investigating arsenic as an epigenetic factor can be used to develop precise biomarkers of exposure and susceptibility. The Chinese brake fern (Pteris vittata) hyperaccumulates arsenic from the soil into its leaves and has a proposed use in phytoremediation. Biomethylation Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolized through a process of methylation. For example, the mold Scopulariopsis brevicaulis produces trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also in mushrooms in larger concentrations. The average person's intake is about 10–50 μg/day. Values of about 1000 μg are not unusual following consumption of fish or mushrooms, but there is little danger in eating fish because this arsenic compound is nearly non-toxic. Environmental issues Exposure Naturally occurring sources of human exposure include volcanic ash, weathering of minerals and ores, and mineralized groundwater. Arsenic is also found in food, water, soil, and air. Arsenic is absorbed by all plants, but is more concentrated in leafy vegetables, rice, apple and grape juice, and seafood. An additional route of exposure is inhalation of atmospheric gases and dusts. During the Victorian era, arsenic was widely used in home decor, especially wallpapers. In Europe, an analysis based on 20,000 soil samples across all 28 countries shows that 98% of sampled soils have concentrations of less than 20 mg kg−1. 
In addition, the As hotspots are related to frequent fertilization and close distance to mining activities. Occurrence in drinking water Extensive arsenic contamination of groundwater has led to widespread arsenic poisoning in Bangladesh and neighboring countries. It is estimated that approximately 57 million people in the Bengal basin are drinking groundwater with arsenic concentrations elevated above the World Health Organization's standard of 10 parts per billion (ppb). However, a study of cancer rates in Taiwan suggested that significant increases in cancer mortality appear only at levels above 150 ppb. The arsenic in the groundwater is of natural origin, and is released from the sediment into the groundwater, caused by the anoxic conditions of the subsurface. This groundwater was used after local and western NGOs and the Bangladeshi government undertook a massive shallow tube well drinking-water program in the late twentieth century. This program was designed to prevent drinking of bacteria-contaminated surface waters, but failed to test for arsenic in the groundwater. Many other countries and districts in Southeast Asia, such as Vietnam and Cambodia, have geological environments that produce groundwater with a high arsenic content. Arsenicosis was reported in Nakhon Si Thammarat, Thailand, in 1987, and the Chao Phraya River probably contains high levels of naturally occurring dissolved arsenic without being a public health problem because much of the public uses bottled water. In Pakistan, more than 60 million people are exposed to arsenic-polluted drinking water, as indicated by a 2017 report in Science. Podgorski's team investigated more than 1200 samples, and more than 66% exceeded the WHO contamination limit. Since the 1980s, residents of the Ba Men region of Inner Mongolia, China, have been chronically exposed to arsenic through drinking water from contaminated wells. A 2009 research study observed an elevated presence of skin lesions among residents with well water arsenic concentrations between 5 and 10 μg/L, suggesting that arsenic-induced toxicity may occur at relatively low concentrations with chronic exposure. Overall, 20 of China's 34 provinces have high arsenic concentrations in the groundwater supply, potentially exposing 19 million people to hazardous drinking water. A study by IIT Kharagpur found high levels of arsenic in the groundwater of 20% of India's land area, exposing more than 250 million people. States such as Punjab, Bihar, West Bengal, Assam, Haryana, Uttar Pradesh, and Gujarat have the highest land area exposed to arsenic. In the United States, arsenic is most commonly found in the ground waters of the southwest. Parts of New England, Michigan, Wisconsin, Minnesota and the Dakotas are also known to have significant concentrations of arsenic in ground water. Increased levels of skin cancer have been associated with arsenic exposure in Wisconsin, even at levels below the 10 ppb drinking water standard. According to a recent film funded by the US Superfund, millions of private wells have unknown arsenic levels, and in some areas of the US, more than 20% of the wells may contain levels that exceed established limits. Low-level exposure to arsenic at concentrations of 100 ppb (i.e., above the 10 ppb drinking water standard) compromises the initial immune response to H1N1 or swine flu infection, according to NIEHS-supported scientists. 
The study, conducted in laboratory mice, suggests that people exposed to arsenic in their drinking water may be at increased risk for more serious illness or death from the virus. Some Canadians are drinking water that contains inorganic arsenic. Water from privately dug wells is most at risk of containing inorganic arsenic. Preliminary well water analysis typically does not test for arsenic. Researchers at the Geological Survey of Canada have modeled relative variation in natural arsenic hazard potential for the province of New Brunswick. This study has important implications for potable water and health concerns relating to inorganic arsenic. Epidemiological evidence from Chile shows a dose-dependent connection between chronic arsenic exposure and various forms of cancer, in particular when other risk factors, such as cigarette smoking, are present. These effects have been demonstrated at contamination levels of less than 50 ppb. Arsenic is itself a constituent of tobacco smoke. Analyzing multiple epidemiological studies on inorganic arsenic exposure suggests a small but measurable increase in risk for bladder cancer at 10 ppb. According to Peter Ravenscroft of the Department of Geography at the University of Cambridge, roughly 80 million people worldwide consume between 10 and 50 ppb arsenic in their drinking water. If they all consumed exactly 10 ppb arsenic in their drinking water, the previously cited multiple epidemiological study analysis would predict an additional 2,000 cases of bladder cancer alone. This represents a clear underestimate of the overall impact, since it does not include lung or skin cancer, and explicitly underestimates the exposure. Those exposed to levels of arsenic above the current WHO standard should weigh the costs and benefits of arsenic remediation. Early (1973) evaluations of the processes for removing dissolved arsenic from drinking water demonstrated the efficacy of co-precipitation with either iron or aluminium oxides. In particular, iron as a coagulant was found to remove arsenic with an efficacy exceeding 90%. Several adsorptive media systems have been approved for use at point-of-service in a study funded by the United States Environmental Protection Agency (US EPA) and the National Science Foundation (NSF). A team of European and Indian scientists and engineers have set up six arsenic treatment plants in West Bengal based on an in-situ remediation method (SAR Technology). This technology does not use any chemicals and arsenic is left in an insoluble form (+5 state) in the subterranean zone by recharging aerated water into the aquifer and developing an oxidation zone that supports arsenic-oxidizing micro-organisms. This process does not produce any waste stream or sludge and is relatively cheap. Another effective and inexpensive method to avoid arsenic contamination is to sink wells 500 feet or deeper to reach purer waters. A 2011 study funded by the US National Institute of Environmental Health Sciences' Superfund Research Program shows that deep sediments can remove arsenic and take it out of circulation. In this process, called adsorption, arsenic sticks to the surfaces of deep sediment particles and is naturally removed from the ground water. Magnetic separations of arsenic at very low magnetic field gradients with high-surface-area and monodisperse magnetite (Fe3O4) nanocrystals have been demonstrated in point-of-use water purification. 
The high specific surface area of Fe3O4 nanocrystals dramatically reduces the mass of waste associated with arsenic removal from water. Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of all leading causes of mortality. The literature indicates that arsenic exposure is causative in the pathogenesis of diabetes. Chaff-based filters have recently been shown to reduce the arsenic content of water to 3 μg/L. This may find applications in areas where the potable water is extracted from underground aquifers. San Pedro de Atacama For several centuries, the people of San Pedro de Atacama in Chile have been drinking water that is contaminated with arsenic, and some evidence suggests they have developed some immunity. Hazard maps for contaminated groundwater Around one-third of the world's population drinks water from groundwater resources. Of this, about 10 percent, approximately 300 million people, obtain water from groundwater resources that are contaminated with unhealthy levels of arsenic or fluoride. These trace elements derive mainly from minerals and ions in the ground. Redox transformation of arsenic in natural waters Arsenic is unique among the trace metalloids and oxyanion-forming trace metals (e.g. As, Se, Sb, Mo, V, Cr, U, Re). It is sensitive to mobilization at pH values typical of natural waters (pH 6.5–8.5) under both oxidizing and reducing conditions. Arsenic can occur in the environment in several oxidation states (−3, 0, +3 and +5), but in natural waters it is mostly found in inorganic forms as oxyanions of trivalent arsenite [As(III)] or pentavalent arsenate [As(V)]. Organic forms of arsenic are produced by biological activity, mostly in surface waters, but are rarely quantitatively important. Organic arsenic compounds may, however, occur where waters are significantly impacted by industrial pollution. Arsenic may be solubilized by various processes. When pH is high, arsenic may be released from surface binding sites that lose their positive charge. When water level drops and sulfide minerals are exposed to air, arsenic trapped in sulfide minerals can be released into water. When organic carbon is present in water, bacteria are fed by directly reducing As(V) to As(III) or by reducing the element at the binding site, releasing inorganic arsenic. The aquatic transformations of arsenic are affected by pH, reduction-oxidation potential, organic matter concentration and the concentrations and forms of other elements, especially iron and manganese. The main factors are pH and the redox potential. Generally, the main forms of arsenic under oxic conditions are H3AsO4, H2AsO4−, HAsO42−, and AsO43− at pH <2, 2–7, 7–11 and >11, respectively. Under reducing conditions, H3AsO3 is predominant at pH 2–9. Oxidation and reduction affect the migration of arsenic in subsurface environments. Arsenite is the most stable soluble form of arsenic in reducing environments and arsenate, which is less mobile than arsenite, is dominant in oxidizing environments at neutral pH. Therefore, arsenic may be more mobile under reducing conditions. The reducing environment is also rich in organic matter which may enhance the solubility of arsenic compounds. As a result, the adsorption of arsenic is reduced and dissolved arsenic accumulates in groundwater. That is why the arsenic content is higher in reducing environments than in oxidizing environments. 
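The pH ranges quoted above for the oxic species follow from the successive acid dissociation constants of arsenic acid. As a hedged illustration, taking pKa2 ≈ 6.9 (an assumed literature value, not given in the text), the Henderson–Hasselbalch relation shows why H2AsO4− and HAsO42− trade dominance near neutral pH:

\[
\frac{[\mathrm{HAsO_4^{2-}}]}{[\mathrm{H_2AsO_4^{-}}]} = 10^{\,\mathrm{pH}-\mathrm{p}K_{a2}}
\;\approx\; 1 \ \text{at pH} \approx 6.9,
\qquad
\approx 10 \ \text{at pH} \approx 7.9.
\]

The same reasoning, applied with the first and third dissociation constants, reproduces the crossovers near pH 2 and pH 11.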
The presence of sulfur is another factor that affects the transformation of arsenic in natural water. Arsenic can precipitate when metal sulfides form. In this way, arsenic is removed from the water and its mobility decreases. When oxygen is present, bacteria oxidize reduced sulfur to generate energy, potentially releasing bound arsenic. Redox reactions involving Fe also appear to be essential factors in the fate of arsenic in aquatic systems. The reduction of iron oxyhydroxides plays a key role in the release of arsenic to water, so arsenic can be enriched in water with elevated Fe concentrations. Under oxidizing conditions, arsenic can be mobilized from pyrite or iron oxides especially at elevated pH. Under reducing conditions, arsenic can be mobilized by reductive desorption or dissolution when associated with iron oxides. The reductive desorption occurs under two circumstances. One is when arsenate is reduced to arsenite, which adsorbs to iron oxides less strongly. The other results from a change in the charge on the mineral surface, which leads to the desorption of bound arsenic. Some species of bacteria catalyze redox transformations of arsenic. Dissimilatory arsenate-respiring prokaryotes (DARP) speed up the reduction of As(V) to As(III). DARP use As(V) as the electron acceptor of anaerobic respiration and obtain energy to survive. Other organic and inorganic substances can be oxidized in this process. Chemoautotrophic arsenite oxidizers (CAO) and heterotrophic arsenite oxidizers (HAO) convert As(III) into As(V). CAO combine the oxidation of As(III) with the reduction of oxygen or nitrate. They use the energy obtained to fix CO2, producing organic carbon. HAO cannot obtain energy from As(III) oxidation. This process may be an arsenic detoxification mechanism for the bacteria. Equilibrium thermodynamic calculations predict that As(V) concentrations should be greater than As(III) concentrations in all but strongly reducing conditions, i.e. where sulfate reduction is occurring. However, abiotic redox reactions of arsenic are slow. Oxidation of As(III) by dissolved O2 is a particularly slow reaction. For example, Johnson and Pilson (1975) gave half-lives for the oxygenation of As(III) in seawater ranging from several months to a year. In other studies, As(V)/As(III) ratios were stable over periods of days or weeks during water sampling when no particular care was taken to prevent oxidation, again suggesting relatively slow oxidation rates. Cherry found from experimental studies that the As(V)/As(III) ratios were stable in anoxic solutions for up to 3 weeks but that gradual changes occurred over longer timescales. Sterile water samples have been observed to be less susceptible to speciation changes than non-sterile samples. Oremland found that the reduction of As(V) to As(III) in Mono Lake was rapidly catalyzed by bacteria with rate constants ranging from 0.02 to 0.3 day−1. Wood preservation in the US As of 2002, US-based industries consumed 19,600 metric tons of arsenic. Ninety percent of this was used for treatment of wood with chromated copper arsenate (CCA). In 2007, 50% of the 5,280 metric tons of consumption was still used for this purpose. In the United States, the voluntary phasing-out of arsenic in production of consumer products and residential and general consumer construction products began on 31 December 2003, and alternative chemicals are now used, such as Alkaline Copper Quaternary, borates, copper azole, cyproconazole, and propiconazole. 
Although discontinued, this application is also one of the most concerning to the general public. The vast majority of older pressure-treated wood was treated with CCA. CCA lumber is still in widespread use in many countries, and was heavily used during the latter half of the 20th century as a structural and outdoor building material. Although the use of CCA lumber was banned in many areas after studies showed that arsenic could leach out of the wood into the surrounding soil (from playground equipment, for instance), a risk is also presented by the burning of older CCA timber. The direct or indirect ingestion of wood ash from burnt CCA lumber has caused fatalities in animals and serious poisonings in humans; the lethal human dose is approximately 20 grams of ash. Scrap CCA lumber from construction and demolition sites may be inadvertently used in commercial and domestic fires. Protocols for safe disposal of CCA lumber are not consistent throughout the world. Widespread landfill disposal of such timber raises some concern, but other studies have shown no arsenic contamination in the groundwater. Mapping of industrial releases in the US One tool that maps the location (and other information) of arsenic releases in the United States is TOXMAP. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) funded by the US Federal Government. With marked-up maps of the United States, TOXMAP enables users to visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and from other authoritative sources. Bioremediation Physical, chemical, and biological methods have been used to remediate arsenic-contaminated water. Bioremediation is said to be cost-effective and environmentally friendly. Bioremediation of ground water contaminated with arsenic aims to convert arsenite, the toxic form of arsenic to humans, to arsenate. Arsenate (+5 oxidation state) is the dominant form of arsenic in surface water, while arsenite (+3 oxidation state) is the dominant form in hypoxic to anoxic environments. Arsenite is more soluble and mobile than arsenate. Many species of bacteria can transform arsenite to arsenate in anoxic conditions by using arsenite as an electron donor. This is a useful method in ground water remediation. Another bioremediation strategy is to use plants that accumulate arsenic in their tissues via phytoremediation, but the disposal of contaminated plant material needs to be considered. Bioremediation requires careful evaluation and design in accordance with existing conditions. Some sites may require the addition of an electron acceptor while others require microbe supplementation (bioaugmentation). Regardless of the method used, only constant monitoring can prevent future contamination. Arsenic removal Coagulation and flocculation are closely related processes common in arsenate removal from water. Because of the net negative charge carried by arsenate ions, they settle slowly or not at all, owing to charge repulsion. In coagulation, a positively charged coagulant such as iron or aluminum (commonly used salts: FeCl3, Fe2(SO4)3, Al2(SO4)3) neutralizes the negatively charged arsenate, enabling it to settle. 
Flocculation follows, where a flocculant bridges smaller particles and allows the aggregate to precipitate out of the water. However, such methods may not be efficient on arsenite, as As(III) exists as uncharged arsenious acid, H3AsO3, at near-neutral pH. The major drawbacks of coagulation and flocculation are the costly disposal of arsenate-concentrated sludge and possible secondary contamination of the environment. Moreover, coagulants such as iron may produce ion contamination that exceeds safety levels. Toxicity and precautions Arsenic and many of its compounds are especially potent poisons (e.g. arsine). Small amounts of arsenic can be detected by pharmacopoeial methods, in which the arsenic is reduced to arsine with the help of zinc and then confirmed with mercuric chloride paper. Classification Elemental arsenic and arsenic sulfate and trioxide compounds are classified as "toxic" and "dangerous for the environment" in the European Union under directive 67/548/EEC. The International Agency for Research on Cancer (IARC) recognizes arsenic and inorganic arsenic compounds as group 1 carcinogens, and the EU lists arsenic trioxide, arsenic pentoxide, and arsenate salts as category 1 carcinogens. Arsenic is known to cause arsenicosis when present in drinking water, "the most common species being arsenate [As(V)] and arsenite [As(III)]". Legal limits, food, and drink In the United States since 2006, the maximum concentration in drinking water allowed by the Environmental Protection Agency (EPA) is 10 ppb, and the FDA set the same standard in 2005 for bottled water. The Department of Environmental Protection for New Jersey set a drinking water limit of 5 ppb in 2006. The IDLH (immediately dangerous to life and health) value for arsenic metal and inorganic arsenic compounds is 5 mg/m3. The Occupational Safety and Health Administration has set the permissible exposure limit (PEL) to a time-weighted average (TWA) of 0.01 mg/m3, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) to a 15-minute constant exposure of 0.002 mg/m3. The PEL for organic arsenic compounds is a TWA of 0.5 mg/m3. In 2008, based on its ongoing testing of a wide variety of American foods for toxic chemicals, the U.S. Food and Drug Administration set the "level of concern" for inorganic arsenic in apple and pear juices at 23 ppb, based on non-carcinogenic effects, and began blocking importation of products in excess of this level; it also required recalls for non-conforming domestic products. In 2011, the national Dr. Oz television show broadcast a program highlighting tests performed by an independent lab hired by the producers. Though the methodology was disputed (it did not distinguish between organic and inorganic arsenic), the tests showed levels of arsenic up to 36 ppb. In response, the FDA tested the worst brand from the Dr. Oz show and found much lower levels. Ongoing testing found 95% of the apple juice samples were below the level of concern. Later testing by Consumer Reports showed inorganic arsenic at levels slightly above 10 ppb, and the organization urged parents to reduce consumption. In July 2013, in consideration of consumption by children, chronic exposure, and carcinogenic effect, the FDA established an "action level" of 10 ppb for apple juice, the same as the drinking water standard.
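Note that the water limits above are stated in ppb, while the occupational limits are mass concentrations in air (mg/m3); the two are not interchangeable. For dilute aqueous solutions, a concentration in μg/L is numerically about equal to ppb by mass, since a litre of water has a mass of roughly 1 kg. The Python sketch below illustrates that relationship; the sample values are hypothetical.

```python
# Minimal sketch of the ppb arithmetic behind the drinking-water limits above.
# For dilute aqueous solutions, 1 L of water weighs about 1 kg, so ug/L is
# numerically the same as ppb by mass. (Air limits in mg/m3 are a different
# quantity and are not converted here.)

def ug_per_l_to_ppb(concentration_ug_per_l: float) -> float:
    """Convert an aqueous concentration in micrograms per litre to ppb (mass basis)."""
    return concentration_ug_per_l  # 1 ug per kg of water = 1 part per billion

EPA_DRINKING_WATER_LIMIT_PPB = 10   # US EPA, 2006
NJ_DRINKING_WATER_LIMIT_PPB = 5     # New Jersey, 2006

for measured_ug_per_l in (3.0, 8.0, 12.0):   # hypothetical sample values
    ppb = ug_per_l_to_ppb(measured_ug_per_l)
    status = "exceeds EPA limit" if ppb > EPA_DRINKING_WATER_LIMIT_PPB else "within EPA limit"
    print(measured_ug_per_l, "ug/L =", ppb, "ppb;", status)
```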
Concern about arsenic in rice in Bangladesh was raised in 2002, but at the time only Australia had a legal limit for food (one milligram per kilogram, or 1000 ppb). Concern was raised about people who were eating U.S. rice exceeding WHO standards for personal arsenic intake in 2005. In 2011, the People's Republic of China set a food standard of 150 ppb for arsenic. In the United States in 2012, testing by separate groups of researchers at the Children's Environmental Health and Disease Prevention Research Center at Dartmouth College (early in the year, focusing on urinary levels in children) and Consumer Reports (in November) found levels of arsenic in rice that resulted in calls for the FDA to set limits. The FDA released some testing results in September 2012, and as of July 2013, is still collecting data in support of a new potential regulation. It has not recommended any changes in consumer behavior. Consumer Reports recommended: That the EPA and FDA eliminate arsenic-containing fertilizer, drugs, and pesticides in food production; That the FDA establish a legal limit for food; That industry change production practices to lower arsenic levels, especially in food for children; and That consumers test home water supplies, eat a varied diet, and cook rice with excess water, then draining it off (reducing inorganic arsenic by about one third along with a slight reduction in vitamin content). Evidence-based public health advocates also recommend that, given the lack of regulation or labeling for arsenic in the U.S., children should eat no more than 1.5 servings per week of rice and should not drink rice milk as part of their daily diet before age 5. They also offer recommendations for adults and infants on how to limit arsenic exposure from rice, drinking water, and fruit juice. A 2014 World Health Organization advisory conference was scheduled to consider limits of 200–300 ppb for rice. Reducing arsenic content in rice In 2020, scientists assessed multiple preparation procedures of rice for their capacity to reduce arsenic content and preserve nutrients, recommending a procedure involving parboiling and water-absorption. Occupational exposure limits Ecotoxicity Arsenic is bioaccumulative in many organisms, marine species in particular, but it does not appear to biomagnify significantly in food webs. In polluted areas, plant growth may be affected by root uptake of arsenate, which is a phosphate analog and therefore readily transported in plant tissues and cells. In polluted areas, uptake of the more toxic arsenite ion (found more particularly in reducing conditions) is likely in poorly-drained soils. Toxicity in animals Biological mechanism Arsenic's toxicity comes from the affinity of arsenic(III) oxides for thiols. Thiols, in the form of cysteine residues and cofactors such as lipoic acid and coenzyme A, are situated at the active sites of many important enzymes. Arsenic disrupts ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits lipoic acid, which is a cofactor for pyruvate dehydrogenase. By competing with phosphate, arsenate uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration and ATP synthesis. Hydrogen peroxide production is also increased, which, it is speculated, has potential to form reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure. 
The organ failure is presumed to be from necrotic cell death, not apoptosis, since energy reserves have been too depleted for apoptosis to occur. Exposure risks and remediation Occupational exposure and arsenic poisoning may occur in persons working in industries involving the use of inorganic arsenic and its compounds, such as wood preservation, glass production, nonferrous metal alloys, and electronic semiconductor manufacturing. Inorganic arsenic is also found in coke oven emissions associated with the smelter industry. The conversion between As(III) and As(V) is a large factor in arsenic environmental contamination. According to Croal, Gralnick, Malasarn and Newman, "[the] understanding [of] what stimulates As(III) oxidation and/or limits As(V) reduction is relevant for bioremediation of contaminated sites" (Croal). The study of chemolithoautotrophic As(III) oxidizers and heterotrophic As(V) reducers can help the understanding of the oxidation and/or reduction of arsenic. Treatment Treatment of chronic arsenic poisoning is possible. British anti-lewisite (dimercaprol) is prescribed in doses of 5 mg/kg up to 300 mg every 4 hours for the first day, then every 6 hours for the second day, and finally every 8 hours for 8 additional days. However, the US Agency for Toxic Substances and Disease Registry (ATSDR) states that the long-term effects of arsenic exposure cannot be predicted. Blood, urine, hair, and nails may be tested for arsenic; however, these tests cannot foresee possible health outcomes from the exposure. Long-term exposure and consequent excretion through urine have been linked to bladder and kidney cancer, in addition to cancer of the liver, prostate, skin, lungs, and nasal cavity.
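The dimercaprol schedule quoted above can be laid out as simple arithmetic. The sketch below is illustrative only and not dosing guidance; the 70 kg body weight in the example call is a hypothetical assumption.

```python
# Illustrative arithmetic only (not clinical guidance): lays out the dimercaprol
# schedule described above -- 5 mg/kg capped at 300 mg, every 4 h on day 1,
# every 6 h on day 2, then every 8 h for 8 further days.

def dimercaprol_schedule(body_mass_kg: float):
    per_dose_mg = min(5.0 * body_mass_kg, 300.0)           # 5 mg/kg, capped at 300 mg
    doses_per_day = [24 // 4] + [24 // 6] + [24 // 8] * 8   # day 1, day 2, days 3-10
    total_doses = sum(doses_per_day)
    return per_dose_mg, total_doses, per_dose_mg * total_doses

per_dose, n_doses, total_mg = dimercaprol_schedule(70.0)    # hypothetical 70 kg adult
print(per_dose, n_doses, total_mg)  # 300 mg per dose, 34 doses, 10,200 mg in total
```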
Physical sciences
Chemical elements_2
null
898
https://en.wikipedia.org/wiki/Antimony
Antimony
Antimony is a chemical element; it has symbol Sb and atomic number 51. A lustrous grey metal or metalloid, it is found in nature mainly as the sulfide mineral stibnite (Sb2S3). Antimony compounds have been known since ancient times and were powdered for use as medicine and cosmetics, often known by the Arabic name kohl. The earliest known description of this metalloid in the West was written in 1540 by Vannoccio Biringuccio. China is the largest producer of antimony and its compounds, with most production coming from the Xikuangshan Mine in Hunan. The industrial methods for refining antimony from stibnite are roasting followed by reduction with carbon, or direct reduction of stibnite with iron. The most common applications for metallic antimony are in alloys with lead and tin, which have improved properties for solders, bullets, and plain bearings. It improves the rigidity of lead-alloy plates in lead–acid batteries. Antimony trioxide is a prominent additive for halogen-containing flame retardants. Antimony is used as a dopant in semiconductor devices. Characteristics Properties Antimony is a member of group 15 of the periodic table, one of the elements called pnictogens, and has an electronegativity of 2.05. In accordance with periodic trends, it is more electronegative than tin or bismuth, and less electronegative than tellurium or arsenic. Antimony is stable in air at room temperature but, if heated, it reacts with oxygen to produce antimony trioxide, Sb2O3. Antimony is a silvery, lustrous gray metalloid with a Mohs scale hardness of 3, which is too soft to mark hard objects. Coins of antimony were issued in China's Guizhou in 1931; their durability was poor, and minting was soon discontinued because of the metal's softness and toxicity. Antimony is resistant to attack by acids. The only stable allotrope of antimony under standard conditions is metallic, brittle, silver-white, and shiny. It crystallises in a trigonal cell, isomorphic with bismuth and the gray allotrope of arsenic, and is formed when molten antimony is cooled slowly. Amorphous black antimony is formed upon rapid cooling of antimony vapor, and is only stable as a thin film (thickness in nanometres); thicker samples spontaneously transform into the metallic form. It oxidizes in air and may ignite spontaneously. At 100 °C, it gradually transforms into the stable form. The supposed yellow allotrope of antimony, generated only by oxidation of stibine (SbH3) at −90 °C, is also impure and not a true allotrope; above this temperature and in ambient light, it transforms into the more stable black allotrope. A rare explosive form of antimony can be formed from the electrolysis of antimony trichloride, but it always contains appreciable chlorine and is not really an antimony allotrope. When scratched with a sharp implement, an exothermic reaction occurs and white fumes are given off as metallic antimony forms; when rubbed with a pestle in a mortar, a strong detonation occurs. Elemental antimony adopts a layered structure (space group R3m, No. 166) whose layers consist of fused, ruffled, six-membered rings. The nearest and next-nearest neighbors form an irregular octahedral complex, with the three atoms in each double layer slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 6.697 g/cm3, but the weak bonding between the layers leads to the low hardness and brittleness of antimony.
Isotopes Antimony has two stable isotopes: 121Sb with a natural abundance of 57.36% and 123Sb with a natural abundance of 42.64%. It also has 35 radioisotopes, of which the longest-lived is 125Sb with a half-life of 2.75 years. In addition, 29 metastable states have been characterized. The most stable of these is 120m1Sb with a half-life of 5.76 days. Isotopes that are lighter than the stable 123Sb tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. Antimony is the lightest element to have an isotope with an alpha decay branch, excluding 8Be and other light nuclides with beta-delayed alpha emission. Occurrence The abundance of antimony in the Earth's crust is estimated at 0.2 parts per million, comparable to thallium at 0.5 ppm and silver at 0.07 ppm. It is the 63rd most abundant element in the crust. Even though this element is not abundant, it is found in more than 100 mineral species. Antimony is sometimes found natively (e.g. on Antimony Peak), but more frequently it is found in the sulfide stibnite (Sb2S3), which is the predominant ore mineral. Compounds Antimony compounds are often classified according to their oxidation state: Sb(III) and Sb(V). The +3 oxidation state is the more common. Oxides and hydroxides Antimony trioxide is formed when antimony is burnt in air. In the gas phase, the molecule of the compound is Sb4O6, but it polymerizes upon condensing. Antimony pentoxide (Sb2O5) can be formed only by oxidation with concentrated nitric acid. Antimony also forms a mixed-valence oxide, antimony tetroxide (Sb2O4), which features both Sb(III) and Sb(V). Unlike oxides of phosphorus and arsenic, these oxides are amphoteric, do not form well-defined oxoacids, and react with acids to form antimony salts. Antimonous acid is unknown, but the conjugate base sodium antimonite forms upon fusing sodium oxide and antimony(III) oxide. Transition metal antimonites are also known. Antimonic acid exists only as the hydrate Sb2O5·nH2O, forming salts containing the antimonate anion Sb(OH)6−. When a solution containing this anion is dehydrated, the precipitate contains mixed oxides. The most important antimony ore is stibnite (Sb2S3). Other sulfide minerals include pyrargyrite (Ag3SbS3), zinkenite, jamesonite, and boulangerite. Antimony pentasulfide is non-stoichiometric and features antimony in the +3 oxidation state and S–S bonds. Several thioantimonides are also known. Halides Antimony forms two series of halides: SbX3 and SbX5. The trihalides SbF3, SbCl3, SbBr3, and SbI3 are all molecular compounds having trigonal pyramidal molecular geometry. The trifluoride is prepared by the reaction of Sb2O3 with HF: Sb2O3 + 6 HF → 2 SbF3 + 3 H2O It is Lewis acidic and readily accepts fluoride ions to form complex anions such as SbF4−. Molten SbF3 is a weak electrical conductor. The trichloride is prepared by dissolving Sb2S3 in hydrochloric acid: Sb2S3 + 6 HCl → 2 SbCl3 + 3 H2S Arsenic sulfides are not readily attacked by the hydrochloric acid, so this method offers a route to As-free Sb. The pentahalides SbF5 and SbCl5 have trigonal bipyramidal molecular geometry in the gas phase, but in the liquid phase, SbF5 is polymeric, whereas SbCl5 is monomeric. SbF5 is a powerful Lewis acid used to make the superacid fluoroantimonic acid ("H2SbF7"). Oxyhalides are more common for antimony than for arsenic and phosphorus. Antimony trioxide dissolves in concentrated acid to form oxoantimonyl compounds such as SbOCl. Antimonides, hydrides, and organoantimony compounds Compounds in this class generally are described as derivatives of Sb3−.
Antimony forms antimonides with metals, such as indium antimonide (InSb) and silver antimonide (Ag3Sb). The alkali metal and zinc antimonides, such as Na3Sb and Zn3Sb2, are more reactive. Treating these antimonides with acid produces the highly unstable gas stibine, SbH3: Sb3− + 3 H+ → SbH3 Stibine can also be produced by treating Sb3+ salts with hydride reagents such as sodium borohydride. Stibine decomposes spontaneously at room temperature. Because stibine has a positive heat of formation, it is thermodynamically unstable and thus antimony does not react with hydrogen directly. Organoantimony compounds are typically prepared by alkylation of antimony halides with Grignard reagents. A large variety of compounds are known with both Sb(III) and Sb(V) centers, including mixed chloro-organic derivatives, anions, and cations. Examples include triphenylstibine (Sb(C6H5)3) and pentaphenylantimony (Sb(C6H5)5). History Antimony(III) sulfide, Sb2S3, was recognized in predynastic Egypt as an eye cosmetic (kohl) as early as about 3100 BC, when the cosmetic palette was invented. An artifact, said to be part of a vase, made of antimony dating to about 3000 BC was found at Telloh, Chaldea (part of present-day Iraq), and a copper object plated with antimony dating between 2500 BC and 2200 BC has been found in Egypt. Austen, at a lecture by Herbert Gladstone in 1892, commented that "we only know of antimony at the present day as a highly brittle and crystalline metal, which could hardly be fashioned into a useful vase, and therefore this remarkable 'find' (artifact mentioned above) must represent the lost art of rendering antimony malleable." The British archaeologist Roger Moorey was unconvinced the artifact was indeed a vase, mentioning that Selimkhanov, after his analysis of the Tello object (published in 1975), "attempted to relate the metal to Transcaucasian natural antimony" (i.e. native metal) and that "the antimony objects from Transcaucasia are all small personal ornaments." This weakens the evidence for a lost art "of rendering antimony malleable". The Roman scholar Pliny the Elder described several ways of preparing antimony sulfide for medical purposes in his treatise Natural History, around 77 AD. Pliny the Elder also made a distinction between "male" and "female" forms of antimony; the male form is probably the sulfide, while the female form, which is superior, heavier, and less friable, has been suspected to be native metallic antimony. The Greek naturalist Pedanius Dioscorides mentioned that antimony sulfide could be roasted by heating in a current of air. It is thought that this produced metallic antimony. Antimony was frequently described in alchemical manuscripts, including the Summa Perfectionis of Pseudo-Geber, written around the 14th century. A description of a procedure for isolating antimony is later given in the 1540 book De la pirotechnia by Vannoccio Biringuccio, predating the more famous 1556 book by Agricola, De re metallica. In this context Agricola has often been incorrectly credited with the discovery of metallic antimony. The book Currus Triumphalis Antimonii (The Triumphal Chariot of Antimony), describing the preparation of metallic antimony, was published in Germany in 1604. It was purported to have been written by a Benedictine monk, writing under the name Basilius Valentinus in the 15th century; if it were authentic, which it is not, it would predate Biringuccio.
The metal antimony was known to German chemist Andreas Libavius in 1615, who obtained it by adding iron to a molten mixture of antimony sulfide, salt and potassium tartrate. This procedure produced antimony with a crystalline or starred surface. With the advent of challenges to phlogiston theory, it was recognized that antimony is an element forming sulfides, oxides, and other compounds, as do other metals. The first discovery of naturally occurring pure antimony in the Earth's crust was described by the Swedish scientist and local mine district engineer Anton von Swab in 1783; the type-sample was collected from the Sala Silver Mine in the Bergslagen mining district of Sala, Västmanland, Sweden. Etymology The medieval Latin form, from which the modern languages and late Byzantine Greek take their names for antimony, is antimonium. The origin of that is uncertain, and all suggestions have some difficulty either of form or interpretation. The popular etymology, from ἀντίμοναχός anti-monachos or French antimoine, would mean "monk-killer", which is explained by the fact that many early alchemists were monks, and some antimony compounds were poisonous. Another popular etymology is the hypothetical Greek word ἀντίμονος antimonos, "against aloneness", explained as "not found as metal", or "not found unalloyed". However, ancient Greek would more naturally express the pure negative as α- ("not"). Edmund Oscar von Lippmann conjectured a hypothetical Greek word ἀνθημόνιον anthemonion, which would mean "floret", and cites several examples of related Greek words (but not that one) which describe chemical or biological efflorescence. The early uses of antimonium include the translations, in 1050–1100, by Constantine the African of Arabic medical treatises. Several authorities believe antimonium is a scribal corruption of some Arabic form; Meyerhof derives it from ithmid; other possibilities include athimar, the Arabic name of the metalloid, and a hypothetical as-stimmi, derived from or parallel to the Greek. The standard chemical symbol for antimony (Sb) is credited to Jöns Jakob Berzelius, who derived the abbreviation from stibium. The ancient words for antimony mostly have, as their chief meaning, kohl, the sulfide of antimony. The Egyptians called antimony mśdmt or stm. The Arabic word for the substance, as opposed to the cosmetic, can appear as ithmid, athmoud, othmod, or uthmod. Littré suggests the first form, which is the earliest, derives from stimmida, an accusative for stimmi. The Greek word στίμμι (stimmi) is used by Attic tragic poets of the 5th century BC, and is possibly a loan word from Arabic or from Egyptian stm. Production Process The extraction of antimony from ores depends on the quality and composition of the ore. Most antimony is mined as the sulfide; lower-grade ores are concentrated by froth flotation, while higher-grade ores are heated to 500–600 °C, the temperature at which stibnite melts and separates from the gangue minerals. Antimony can be isolated from the crude antimony sulfide by reduction with scrap iron: Sb2S3 + 3 Fe → 2 Sb + 3 FeS The sulfide is converted to an oxide by roasting. The product is further purified by vaporizing the volatile antimony(III) oxide, which is recovered. This sublimate is often used directly for the main applications, the impurities being arsenic and sulfide. Antimony is isolated from the oxide by a carbothermal reduction: 2 Sb2O3 + 3 C → 4 Sb + 3 CO2 The lower-grade ores are reduced in blast furnaces while the higher-grade ores are reduced in reverberatory furnaces.
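A worked mass balance for the iron-reduction route above (Sb2S3 + 3 Fe → 2 Sb + 3 FeS) gives a feel for the theoretical metal yield. The Python sketch below is illustrative; the molar masses are standard values supplied here rather than figures from this article, and it assumes pure stibnite.

```python
# Sketch of the mass balance for the iron-reduction route above:
#   Sb2S3 + 3 Fe -> 2 Sb + 3 FeS
# Molar masses (g/mol) are standard values supplied for illustration.

M_SB, M_S = 121.76, 32.06

def antimony_yield(stibnite_kg: float = 1000.0) -> float:
    """Theoretical antimony mass (kg) obtainable from a given mass of pure Sb2S3."""
    m_sb2s3 = 2 * M_SB + 3 * M_S          # ~339.7 g/mol
    mass_fraction_sb = (2 * M_SB) / m_sb2s3
    return stibnite_kg * mass_fraction_sb

print(round(antimony_yield(), 1))  # ~716.9 kg of Sb per tonne of pure stibnite
```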
Top producers and production volumes In 2022, according to the US Geological Survey, China accounted for 54.5% of total antimony production, followed in second place by Russia with 18.2% and Tajikistan with 15.5%. Chinese production of antimony is expected to decline in the future as mines and smelters are closed down by the government as part of pollution control. Especially due to an environmental protection law having gone into effect in January 2015 and revised "Emission Standards of Pollutants for Stanum, Antimony, and Mercury" having gone into effect, hurdles for economic production are higher. Reported production of antimony in China has fallen and is unlikely to increase in the coming years, according to the Roskill report. No significant antimony deposits in China have been developed for about ten years, and the remaining economic reserves are being rapidly depleted. Reserves Supply risk For antimony-importing regions, such as Europe and the U.S., antimony is considered to be a critical mineral for industrial manufacturing that is at risk of supply chain disruption. With global production coming mainly from China (74%), Tajikistan (8%), and Russia (4%), these sources are critical to supply. European Union: Antimony is considered a critical raw material for defense, automotive, construction and textiles. The E.U. sources are 100% imported, coming mainly from Turkey (62%), Bolivia (20%) and Guatemala (7%). United Kingdom: The British Geological Survey's 2015 risk list ranks antimony second highest (after rare earth elements) on the relative supply risk index. United States: Antimony is a mineral commodity considered critical to the economic and national security. In 2022, no antimony was mined in the U.S. Applications Approximately 48% of antimony is consumed in flame retardants, 33% in lead–acid batteries, and 8% in plastics. Flame retardants Antimony is mainly used as the trioxide for flame-proofing compounds, always in combination with halogenated flame retardants except in halogen-containing polymers. The flame retarding effect of antimony trioxide is produced by the formation of halogenated antimony compounds, which react with hydrogen atoms, and probably also with oxygen atoms and OH radicals, thus inhibiting fire. Markets for these flame-retardants include children's clothing, toys, aircraft, and automobile seat covers. They are also added to polyester resins in fiberglass composites for such items as light aircraft engine covers. The resin will burn in the presence of an externally generated flame, but will extinguish when the external flame is removed. Alloys Antimony forms a highly useful alloy with lead, increasing its hardness and mechanical strength. When casting it increases fluidity of the melt and reduces shrinkage during cooling. For most applications involving lead, varying amounts of antimony are used as alloying metal. In lead–acid batteries, this addition improves plate strength and charging characteristics. For sailboats, lead keels are used to provide righting moment, ranging from 600 lbs to over 200 tons for the largest sailing superyachts; to improve hardness and tensile strength of the lead keel, antimony is mixed with lead between 2% and 5% by volume. 
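The "between 2% and 5% by volume" figure above can be translated into mass terms using the antimony density quoted earlier in this article (6.697 g/cm3) together with a standard handbook density for lead (11.34 g/cm3, supplied here as an assumption). The sketch below is illustrative only.

```python
# Rough sketch of what "2-5% antimony by volume" means in mass terms for a
# lead keel. Sb density is quoted earlier in the article; the Pb density is a
# standard handbook value supplied here for illustration.

RHO_PB, RHO_SB = 11.34, 6.697   # g/cm3

def antimony_mass_fraction(volume_fraction_sb: float) -> float:
    """Mass fraction of antimony for a given volume fraction in a Pb-Sb mixture."""
    m_sb = volume_fraction_sb * RHO_SB
    m_pb = (1.0 - volume_fraction_sb) * RHO_PB
    return m_sb / (m_sb + m_pb)

for v in (0.02, 0.05):
    print(f"{v:.0%} by volume -> {antimony_mass_fraction(v):.1%} by mass")
# prints roughly 1.2% and 3.0% by mass, respectively
```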
Antimony is used in antifriction alloys (such as Babbitt metal), in bullets and lead shot, electrical cable sheathing, type metal (for example, for linotype printing machines), solder (some "lead-free" solders contain 5% Sb), in pewter, and in hardening alloys with low tin content in the manufacturing of organ pipes. Other applications Three other applications consume nearly all the rest of the world's supply. One application is as a stabilizer and catalyst for the production of polyethylene terephthalate. Another is as a fining agent to remove microscopic bubbles in glass, mostly for TV screens; antimony ions interact with oxygen, suppressing the tendency of the latter to form bubbles. The third application is pigments. In the 1990s antimony was increasingly being used in semiconductors as a dopant in n-type silicon wafers for diodes, infrared detectors, and Hall-effect devices. In the 1950s, the emitters and collectors of n-p-n alloy junction transistors were doped with tiny beads of a lead-antimony alloy. Indium antimonide (InSb) is used as a material for mid-infrared detectors. The material Ge2Sb2Te5 is used in phase-change memory, a type of computer memory. Biology and medicine have few uses for antimony. Treatments containing antimony, known as antimonials, are used as emetics. Antimony compounds are used as antiprotozoan drugs. Potassium antimonyl tartrate, or tartar emetic, was used as an anti-schistosomal drug from 1919 on. It was subsequently replaced by praziquantel. Antimony and its compounds are used in several veterinary preparations, such as anthiomaline and lithium antimony thiomalate, as a skin conditioner in ruminants. Antimony has a nourishing or conditioning effect on keratinized tissues in animals. Antimony-based drugs, such as meglumine antimoniate, are also considered the drugs of choice for treatment of leishmaniasis. Early treatments used antimony(III) species (trivalent antimonials), but in 1922 Upendranath Brahmachari invented a much safer antimony(V) drug, and since then so-called pentavalent antimonials have been the standard first-line treatment. However, Leishmania strains in Bihar and neighboring regions have developed resistance to antimony. Elemental antimony as an antimony pill was once used as a medicine. It could be reused by others after ingestion and elimination. Antimony(III) sulfide is used in the heads of some safety matches. Antimony sulfides help to stabilize the friction coefficient in automotive brake pad materials. Antimony is used in bullets, bullet tracers, paint, glass art, and as an opacifier in enamel. Antimony-124 is used together with beryllium in neutron sources; the gamma rays emitted by antimony-124 initiate the photodisintegration of beryllium. The emitted neutrons have an average energy of 24 keV. Natural antimony is used in startup neutron sources. The powder derived from crushed antimony sulfide (kohl) has been used for millennia as an eye cosmetic. Historically it was applied to the eyes with a metal rod and with one's spittle, and was thought by the ancients to aid in curing eye infections. The practice is still seen in Yemen and in other Muslim countries. Precautions Antimony and many of its compounds are toxic, and the effects of antimony poisoning are similar to arsenic poisoning. The toxicity of antimony is far lower than that of arsenic; this might be caused by the significant differences in uptake, metabolism and excretion between arsenic and antimony.
The uptake of antimony(III) or antimony(V) in the gastrointestinal tract is at most 20%. Antimony(V) is not quantitatively reduced to antimony(III) in the cell (in fact antimony(III) is oxidised to antimony(V) instead). Since methylation of antimony does not occur, the excretion of antimony(V) in urine is the main route of elimination. As with arsenic, the most serious effect of acute antimony poisoning is cardiotoxicity and the resulting myocarditis; however, antimony poisoning can also manifest as Adams–Stokes syndrome, which arsenic poisoning does not. A reported case of intoxication by an amount of antimony equivalent to 90 mg of antimony potassium tartrate dissolved from enamel showed only short-term effects. An intoxication with 6 g of antimony potassium tartrate was reported to result in death after three days. Inhalation of antimony dust is harmful and in certain cases may be fatal; in small doses, antimony causes headaches, dizziness, and depression. Larger doses, or prolonged skin contact, may cause dermatitis or damage the kidneys and the liver, causing violent and frequent vomiting and leading to death in a few days. Antimony is incompatible with strong oxidizing agents, strong acids, halogen acids, chlorine, or fluorine. It should be kept away from heat. Antimony leaches from polyethylene terephthalate (PET) bottles into liquids. While levels observed for bottled water are below drinking water guidelines, fruit juice concentrates (for which no guidelines are established) produced in the UK were found to contain up to 44.7 μg/L of antimony, well above the EU limit for tap water of 5 μg/L. The guidelines are: World Health Organization: 20 μg/L Japan: 15 μg/L United States Environmental Protection Agency, Health Canada and the Ontario Ministry of Environment: 6 μg/L EU and German Federal Ministry of Environment: 5 μg/L The tolerable daily intake (TDI) proposed by WHO is 6 μg of antimony per kilogram of body weight. The immediately dangerous to life or health (IDLH) value for antimony is 50 mg/m3. Toxicity Certain compounds of antimony appear to be toxic, particularly antimony trioxide and antimony potassium tartrate. Effects may be similar to arsenic poisoning. Occupational exposure may cause respiratory irritation, pneumoconiosis, antimony spots on the skin, gastrointestinal symptoms, and cardiac arrhythmias. In addition, antimony trioxide is potentially carcinogenic to humans. Adverse health effects have been observed in humans and animals following inhalation, oral, or dermal exposure to antimony and antimony compounds. Antimony toxicity typically occurs either due to occupational exposure, during therapy, or from accidental ingestion. It is unclear whether antimony can enter the body through the skin. The presence of low levels of antimony in saliva may also be associated with dental decay.
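The WHO tolerable daily intake and the concentrations quoted above can be combined into a rough intake estimate. The sketch below is illustrative only; the body weights are assumptions, and no allowance is made for other exposure routes.

```python
# Sketch relating the WHO tolerable daily intake (6 ug of Sb per kg body weight)
# to the antimony concentrations quoted above. Body weights are illustrative.

TDI_UG_PER_KG = 6.0

def litres_to_reach_tdi(body_mass_kg: float, concentration_ug_per_l: float) -> float:
    """Daily volume (litres) at which antimony intake would reach the WHO TDI."""
    return (TDI_UG_PER_KG * body_mass_kg) / concentration_ug_per_l

print(litres_to_reach_tdi(10, 44.7))  # ~1.3 L/day for a 10 kg child at 44.7 ug/L
print(litres_to_reach_tdi(70, 5.0))   # ~84 L/day for a 70 kg adult at the 5 ug/L limit
```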
Physical sciences
Chemical elements_2
null
899
https://en.wikipedia.org/wiki/Actinium
Actinium
Actinium is a chemical element; it has symbol Ac and atomic number 89. It was first isolated by Friedrich Oskar Giesel in 1902, who gave it the name emanium; the element got its name by being wrongly identified with a substance André-Louis Debierne found in 1899 and called actinium. The actinide series, a set of 15 elements between actinium and lawrencium in the periodic table, are named for actinium. Together with polonium, radium, and radon, actinium was one of the first non-primordial radioactive elements to be isolated. A soft, silvery-white radioactive metal, actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that prevents further oxidation. As with most lanthanides and many actinides, actinium assumes oxidation state +3 in nearly all its chemical compounds. Actinium is found only in traces in uranium and thorium ores as the isotope 227Ac, which decays with a half-life of 21.772 years, predominantly emitting beta and sometimes alpha particles, and 228Ac, which is beta active with a half-life of 6.15 hours. One tonne of natural uranium in ore contains about 0.2 milligrams of actinium-227, and one tonne of thorium contains about 5 nanograms of actinium-228. The close similarity of physical and chemical properties of actinium and lanthanum makes separation of actinium from the ore impractical. Instead, the element is prepared, in milligram amounts, by the neutron irradiation of in a nuclear reactor. Owing to its scarcity, high price and radioactivity, actinium has no significant industrial use. Its current applications include a neutron source and an agent for radiation therapy. History André-Louis Debierne, a French chemist, announced the discovery of a new element in 1899. He separated it from pitchblende residues left by Marie and Pierre Curie after they had extracted radium. In 1899, Debierne described the substance as similar to titanium and (in 1900) as similar to thorium. Friedrich Oskar Giesel found in 1902 a substance similar to lanthanum and called it "emanium" in 1904. After a comparison of the substances' half-lives determined by Debierne, Harriet Brooks in 1904, and Otto Hahn and Otto Sackur in 1905, Debierne's chosen name for the new element was retained because it had seniority, despite the contradicting chemical properties he claimed for the element at different times. Articles published in the 1970s and later suggest that Debierne's results published in 1904 conflict with those reported in 1899 and 1900. Furthermore, the now-known chemistry of actinium precludes its presence as anything other than a minor constituent of Debierne's 1899 and 1900 results; in fact, the chemical properties he reported make it likely that he had, instead, accidentally identified protactinium, which would not be discovered for another fourteen years, only to have it disappear due to its hydrolysis and adsorption onto his laboratory equipment. This has led some authors to advocate that Giesel alone should be credited with the discovery. A less confrontational vision of scientific discovery is proposed by Adloff. He suggests that hindsight criticism of the early publications should be mitigated by the then nascent state of radiochemistry: highlighting the prudence of Debierne's claims in the original papers, he notes that nobody can contend that Debierne's substance did not contain actinium. Debierne, who is now considered by the vast majority of historians as the discoverer, lost interest in the element and left the topic. 
Giesel, on the other hand, can rightfully be credited with the first preparation of radiochemically pure actinium and with the identification of its atomic number 89. The name actinium originates from the Ancient Greek aktis, aktinos (ακτίς, ακτίνος), meaning beam or ray. Its symbol Ac is also used in abbreviations of other compounds that have nothing to do with actinium, such as acetyl, acetate and sometimes acetaldehyde. Properties Actinium is a soft, silvery-white, radioactive, metallic element. Its estimated shear modulus is similar to that of lead. Owing to its strong radioactivity, actinium glows in the dark with a pale blue light, which originates from the surrounding air ionized by the emitted energetic particles. Actinium has similar chemical properties to lanthanum and other lanthanides, and therefore these elements are difficult to separate when extracting from uranium ores. Solvent extraction and ion chromatography are commonly used for the separation. The first element of the actinides, actinium gave the set its name, much as lanthanum had done for the lanthanides. The actinides are much more diverse than the lanthanides and therefore it was not until 1945 that the most significant change to Dmitri Mendeleev's periodic table since the recognition of the lanthanides, the introduction of the actinides, was generally accepted after Glenn T. Seaborg's research on the transuranium elements (although it had been proposed as early as 1892 by British chemist Henry Bassett). Actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that impedes further oxidation. As with most lanthanides and actinides, actinium exists in the oxidation state +3, and the Ac3+ ions are colorless in solutions. The oxidation state +3 originates from the [Rn] 6d17s2 electronic configuration of actinium, with three valence electrons that are easily donated to give the stable closed-shell structure of the noble gas radon. Although the 5f orbitals are unoccupied in an actinium atom, it can be used as a valence orbital in actinium complexes and hence it is generally considered the first 5f element by authors working on it. Ac3+ is the largest of all known tripositive ions and its first coordination sphere contains approximately 10.9 ± 0.5 water molecules. Chemical compounds Due to actinium's intense radioactivity, only a limited number of actinium compounds are known. These include: AcF3, AcCl3, AcBr3, AcOF, AcOCl, AcOBr, Ac2S3, Ac2O3, AcPO4 and Ac(NO3)3. They all contain actinium in the oxidation state +3. In particular, the lattice constants of the analogous lanthanum and actinium compounds differ by only a few percent. Here a, b and c are lattice constants, No is space group number and Z is the number of formula units per unit cell. Density was not measured directly but calculated from the lattice parameters. Oxides Actinium oxide (Ac2O3) can be obtained by heating the hydroxide at or the oxalate at , in vacuum. Its crystal lattice is isotypic with the oxides of most trivalent rare-earth metals. Halides Actinium trifluoride can be produced either in solution or in solid reaction. The former reaction is carried out at room temperature, by adding hydrofluoric acid to a solution containing actinium ions. In the latter method, actinium metal is treated with hydrogen fluoride vapors at in an all-platinum setup. Treating actinium trifluoride with ammonium hydroxide at yields oxyfluoride AcOF. 
Whereas lanthanum oxyfluoride can be easily obtained by burning lanthanum trifluoride in air for an hour, similar treatment of actinium trifluoride yields no AcOF and only results in melting of the initial product. AcF3 + 2 NH3 + H2O → AcOF + 2 NH4F Actinium trichloride is obtained by reacting actinium hydroxide or oxalate with carbon tetrachloride vapors at high temperatures. Similarly to the oxyfluoride, actinium oxychloride can be prepared by hydrolyzing actinium trichloride with ammonium hydroxide. However, in contrast to the oxyfluoride, the oxychloride could well be synthesized by igniting a solution of actinium trichloride in hydrochloric acid with ammonia. Reaction of aluminium bromide and actinium oxide yields actinium tribromide: Ac2O3 + 2 AlBr3 → 2 AcBr3 + Al2O3 and treating it with ammonium hydroxide results in the oxybromide AcOBr. Other compounds Actinium hydride was obtained by reduction of actinium trichloride with potassium, and its structure was deduced by analogy with the corresponding LaH2 hydride. The source of hydrogen in the reaction was uncertain. Mixing monosodium phosphate (NaH2PO4) with a solution of actinium in hydrochloric acid yields white-colored actinium phosphate hemihydrate (AcPO4·0.5H2O), and heating actinium oxalate with hydrogen sulfide vapors for a few minutes results in a black actinium sulfide, Ac2S3. It may possibly be produced by acting with a mixture of hydrogen sulfide and carbon disulfide on actinium oxide. Isotopes Naturally occurring actinium is principally composed of two radioactive isotopes: 227Ac (from the radioactive family of 235U) and 228Ac (a granddaughter of 232Th). 227Ac decays mainly as a beta emitter with a very small energy, but in 1.38% of cases it emits an alpha particle, so it can readily be identified through alpha spectrometry. Thirty-three radioisotopes have been identified, the most stable being 227Ac with a half-life of 21.772 years, 225Ac with a half-life of 10.0 days and 226Ac with a half-life of 29.37 hours. All remaining radioactive isotopes have half-lives that are less than 10 hours and the majority of them have half-lives shorter than one minute. The shortest-lived known isotope of actinium has a half-life of 69 nanoseconds and decays through alpha decay. Actinium also has two known metastable states. The most significant isotopes for chemistry are 225Ac, 227Ac, and 228Ac. Purified 227Ac comes into equilibrium with its decay products after about half a year. It decays according to its 21.772-year half-life emitting mostly beta (98.62%) and some alpha particles (1.38%); the successive decay products are part of the actinium series. Owing to the low available amounts, the low energy of its beta particles (maximum 44.8 keV) and the low intensity of its alpha radiation, 227Ac is difficult to detect directly by its emission and is therefore traced via its decay products. The isotopes of actinium range in atomic weight from 203 u (203Ac) to 236 u (236Ac). Occurrence and synthesis Actinium is found only in traces in uranium ores – one tonne of uranium in ore contains about 0.2 milligrams of 227Ac – and in thorium ores, which contain about 5 nanograms of 228Ac per one tonne of thorium. The actinium isotope 227Ac is a transient member of the uranium-actinium series decay chain, which begins with the parent isotope 235U (or 239Pu) and ends with the stable lead isotope 207Pb. The isotope 228Ac is a transient member of the thorium series decay chain, which begins with the parent isotope 232Th and ends with the stable lead isotope 208Pb.
Another actinium isotope (225Ac) is transiently present in the neptunium series decay chain, beginning with 237Np (or 233U) and ending with thallium (205Tl) and near-stable bismuth (209Bi); even though all primordial 237Np has decayed away, it is continuously produced by neutron knock-out reactions on natural 238U. The low natural concentration, and the close similarity of physical and chemical properties to those of lanthanum and other lanthanides, which are always abundant in actinium-bearing ores, render separation of actinium from the ore impractical. The most concentrated actinium sample prepared from raw material consisted of 7 micrograms of 227Ac in less than 0.1 milligrams of La2O3, and complete separation was never achieved. Instead, actinium is prepared, in milligram amounts, by the neutron irradiation of in a nuclear reactor. ^{226}_{88}Ra + ^{1}_{0}n -> ^{227}_{88}Ra ->[\beta^-][42.2 \ \ce{min}] ^{227}_{89}Ac The reaction yield is about 2% of the radium weight. 227Ac can further capture neutrons resulting in small amounts of 228Ac. After the synthesis, actinium is separated from radium and from the products of decay and nuclear fusion, such as thorium, polonium, lead and bismuth. The extraction can be performed with thenoyltrifluoroacetone-benzene solution from an aqueous solution of the radiation products, and the selectivity to a certain element is achieved by adjusting the pH (to about 6.0 for actinium). An alternative procedure is anion exchange with an appropriate resin in nitric acid, which can result in a separation factor of 1,000,000 for radium and actinium vs. thorium in a two-stage process. Actinium can then be separated from radium, with a ratio of about 100, using a low cross-linking cation exchange resin and nitric acid as eluant. 225Ac was first produced artificially at the Institute for Transuranium Elements (ITU) in Germany using a cyclotron and at St George Hospital in Sydney using a linac in 2000. This rare isotope has potential applications in radiation therapy and is most efficiently produced by bombarding a radium-226 target with 20–30 MeV deuterium ions. This reaction also yields 226Ac which however decays with a half-life of 29 hours and thus does not contaminate 225Ac. Actinium metal has been prepared by the reduction of actinium fluoride with lithium vapor in vacuum at a temperature between . Higher temperatures resulted in evaporation of the product and lower ones lead to an incomplete transformation. Lithium was chosen among other alkali metals because its fluoride is most volatile. Applications Owing to its scarcity, high price and radioactivity, 227Ac currently has no significant industrial use, but 225Ac is currently being studied for use in cancer treatments such as targeted alpha therapies. 227Ac is highly radioactive and was therefore studied for use as an active element of radioisotope thermoelectric generators, for example in spacecraft. The oxide of 227Ac pressed with beryllium is also an efficient neutron source with the activity exceeding that of the standard americium-beryllium and radium-beryllium pairs. In all those applications, 227Ac (a beta source) is merely a progenitor which generates alpha-emitting isotopes upon its decay. 
Beryllium captures alpha particles and emits neutrons owing to its large cross-section for the (α,n) nuclear reaction: ^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma The 227AcBe neutron sources can be applied in a neutron probe – a standard device for measuring the quantity of water present in soil, as well as moisture/density for quality control in highway construction. Such probes are also used in well logging applications, in neutron radiography, tomography and other radiochemical investigations. 225Ac is applied in medicine to produce 213Bi in a reusable generator or can be used alone as an agent for radiation therapy, in particular targeted alpha therapy (TAT). This isotope has a half-life of 10 days, making it much more suitable for radiation therapy than 213Bi (half-life 46 minutes). Additionally, 225Ac decays to nontoxic 209Bi rather than toxic lead, which is the final product in the decay chains of several other candidate isotopes, namely 227Th, 228Th, and 230U. Not only 225Ac itself, but also its daughters, emit alpha particles which kill cancer cells in the body. The major difficulty with application of 225Ac was that intravenous injection of simple actinium complexes resulted in their accumulation in the bones and liver for a period of tens of years. As a result, after the cancer cells were quickly killed by alpha particles from 225Ac, the radiation from the actinium and its daughters might induce new mutations. To solve this problem, 225Ac was bound to a chelating agent, such as citrate, ethylenediaminetetraacetic acid (EDTA) or diethylene triamine pentaacetic acid (DTPA). This reduced actinium accumulation in the bones, but the excretion from the body remained slow. Much better results were obtained with chelating agents such as HEHA or DOTA (1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid) coupled to trastuzumab, a monoclonal antibody that interferes with the HER2/neu receptor. The latter delivery combination was tested on mice and proved to be effective against leukemia, lymphoma, breast, ovarian, neuroblastoma and prostate cancers. The intermediate half-life of 227Ac (21.77 years) makes it a very convenient radioactive isotope for modeling the slow vertical mixing of oceanic waters. The associated processes cannot be studied with the required accuracy by direct measurements of current velocities (of the order of 50 meters per year). However, evaluation of the concentration depth-profiles for different isotopes allows estimating the mixing rates. The physics behind this method is as follows: oceanic waters contain homogeneously dispersed 235U. Its decay product, 231Pa, gradually precipitates to the bottom, so that its concentration first increases with depth and then stays nearly constant. 231Pa decays to 227Ac; however, the concentration of the latter isotope does not follow the 231Pa depth profile, but instead increases toward the sea bottom. This occurs because of the mixing processes which raise some additional 227Ac from the sea bottom. Thus analysis of both 231Pa and 227Ac depth profiles allows researchers to model the mixing behavior. There are theoretical predictions that AcHx hydrides (at very high pressures) are candidates for near-room-temperature superconductivity, as they have a Tc significantly higher than that of H3S, possibly near 250 K. Precautions 227Ac is highly radioactive and experiments with it are carried out in a specially designed laboratory equipped with a tight glove box.
When actinium trichloride is administered intravenously to rats, about 33% of the actinium is deposited in the bones and 50% in the liver. Its toxicity is comparable to, but slightly lower than, that of americium and plutonium. For trace quantities, fume hoods with good aeration suffice; for gram amounts, hot cells with shielding from the intense gamma radiation emitted by 227Ac are necessary.
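The half-life and branching fractions quoted in the isotopes section above lend themselves to a simple first-order decay calculation. The sketch below is illustrative; the atom count in the example call is arbitrary, and only decays of 227Ac itself (not its daughters) are counted.

```python
# Sketch of first-order decay for 227Ac using values quoted in the isotopes
# section above: half-life 21.772 years, branching 98.62% beta / 1.38% alpha.

import math

HALF_LIFE_YR = 21.772
LAMBDA = math.log(2) / HALF_LIFE_YR          # decay constant, 1/yr

def fraction_remaining(years: float) -> float:
    return math.exp(-LAMBDA * years)

def decays_by_mode(n_atoms: float, years: float):
    """Approximate numbers of beta and alpha decays of 227Ac itself over a period."""
    decayed = n_atoms * (1.0 - fraction_remaining(years))
    return decayed * 0.9862, decayed * 0.0138

print(fraction_remaining(21.772))            # ~0.5 after one half-life
print(decays_by_mode(1e20, 10.0))            # beta vs alpha decays over 10 years
```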
Physical sciences
Chemical elements_2
null
900
https://en.wikipedia.org/wiki/Americium
Americium
Americium is a synthetic chemical element; it has symbol Am and atomic number 95. It is radioactive and a transuranic member of the actinide series in the periodic table, located under the lanthanide element europium and was thus named after the Americas by analogy. Americium was first produced in 1944 by the group of Glenn T. Seaborg from Berkeley, California, at the Metallurgical Laboratory of the University of Chicago, as part of the Manhattan Project. Although it is the third element in the transuranic series, it was discovered fourth, after the heavier curium. The discovery was kept secret and only released to the public in November 1945. Most americium is produced by uranium or plutonium being bombarded with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains about 100 grams of americium. It is widely used in commercial ionization chamber smoke detectors, as well as in neutron sources and industrial gauges. Several unusual applications, such as nuclear batteries or fuel for space ships with nuclear propulsion, have been proposed for the isotope 242mAm, but they are as yet hindered by the scarcity and high price of this nuclear isomer. Americium is a relatively soft radioactive metal with a silvery appearance. Its most common isotopes are 241Am and 243Am. In chemical compounds, americium usually assumes the oxidation state +3, especially in solutions. Several other oxidation states are known, ranging from +2 to +7, and can be identified by their characteristic optical absorption spectra. The crystal lattices of solid americium and its compounds contain small intrinsic radiogenic defects, due to metamictization induced by self-irradiation with alpha particles, which accumulates with time; this can cause a drift of some material properties over time, more noticeable in older samples. History Although americium was likely produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in late autumn 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Leon O. Morgan, Ralph A. James, and Albert Ghiorso. They used a 60-inch cyclotron at the University of California, Berkeley. The element was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory) of the University of Chicago. Following the lighter neptunium, plutonium, and heavier curium, americium was the fourth transuranium element to be discovered. At the time, the periodic table had been restructured by Seaborg to its present layout, containing the actinide row below the lanthanide one. This led to americium being located right below its twin lanthanide element europium; it was thus by analogy named after the Americas: "The name americium (after the Americas) and the symbol Am are suggested for the element on the basis of its position as the sixth member of the actinide rare-earth series, analogous to europium, Eu, of the lanthanide series." The new element was isolated from its oxides in a complex, multi-step process. First plutonium-239 nitrate (239PuNO3) solution was coated on a platinum foil of about 0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium dioxide (PuO2) by calcining. After cyclotron irradiation, the coating was dissolved with nitric acid, and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid. Further separation was carried out by ion exchange, yielding a certain isotope of curium. 
The separation of curium and americium was so painstaking that those elements were initially called by the Berkeley group as pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness). Initial experiments yielded four americium isotopes: 241Am, 242Am, 239Am and 238Am. Americium-241 was directly obtained from plutonium upon absorption of two neutrons. It decays by emission of a α-particle to 237Np; the half-life of this decay was first determined as years but then corrected to 432.2 years. The times are half-lives The second isotope 242Am was produced upon neutron bombardment of the already-created 241Am. Upon rapid β-decay, 242Am converts into the isotope of curium 242Cm (which had been discovered previously). The half-life of this decay was initially determined at 17 hours, which was close to the presently accepted value of 16.02 h. The discovery of americium and curium in 1944 was closely related to the Manhattan Project; the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children Quiz Kids five days before the official presentation at an American Chemical Society meeting on 11 November 1945, when one of the listeners asked whether any new transuranium element besides plutonium and neptunium had been discovered during the war. After the discovery of americium isotopes 241Am and 242Am, their production and compounds were patented listing only Seaborg as the inventor. The initial americium samples weighed a few micrograms; they were barely visible and were identified by their radioactivity. The first substantial amounts of metallic americium weighing 40–200 micrograms were not prepared until 1951 by reduction of americium(III) fluoride with barium metal in high vacuum at 1100 °C. Occurrence The longest-lived and most common isotopes of americium, 241Am and 243Am, have half-lives of 432.2 and 7,370 years, respectively. Therefore, any primordial americium (americium that was present on Earth during its formation) should have decayed by now. Trace amounts of americium probably occur naturally in uranium minerals as a result of neutron capture and beta decay (238U → 239Pu → 240Pu → 241Am), though the quantities would be tiny and this has not been confirmed. Extraterrestrial long-lived 247Cm is probably also deposited on Earth and has 243Am as one of its intermediate decay products, but again this has not been confirmed. Existing americium is concentrated in the areas used for the atmospheric nuclear weapons tests conducted between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster. For example, the analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike, (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides including americium; but due to military secrecy, this result was not published until later, in 1956. Trinitite, the glassy residue left on the desert floor near Alamogordo, New Mexico, after the plutonium-based Trinity nuclear bomb test on 16 July 1945, contains traces of americium-241. Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber aircraft, which carried four hydrogen bombs, in 1968 in Greenland. In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries per gram (0.37 mBq/g). 
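The conversion in the last figure above (0.01 picocuries per gram, or 0.37 mBq/g) follows directly from the definition 1 Ci = 3.7 × 10^10 Bq, as the short check below shows; the function name is chosen here for illustration.

```python
# Check of the unit conversion quoted above: 0.01 pCi/g should equal 0.37 mBq/g.
# Uses the definition 1 Ci = 3.7e10 Bq.

BQ_PER_CI = 3.7e10

def pci_per_g_to_mbq_per_g(pci_per_g: float) -> float:
    bq_per_g = pci_per_g * 1e-12 * BQ_PER_CI   # pCi -> Ci -> Bq
    return bq_per_g * 1e3                      # Bq -> mBq

print(pci_per_g_to_mbq_per_g(0.01))            # 0.37 mBq/g, matching the text
```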
Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed an americium concentration inside sandy soil particles about 1,900 times higher than in the water present in the soil pores; an even higher ratio was measured in loam soils. Americium is produced mostly artificially, in small quantities, for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am. Their prolonged radioactivity is undesirable for disposal, and therefore americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, in which americium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure is known as nuclear transmutation, but it is still under development for americium. The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Americium is also one of the elements that may have been detected in Przybylski's Star. Synthesis and extraction Isotope nucleosynthesis Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, its price per gram of 241Am has remained almost unchanged, owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts; it is thus more difficult to separate, resulting in a considerably higher cost per gram. Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, by neutron capture and two successive beta decays: 238U + n → 239U + γ, followed by 239U → 239Np (β−, half-life 23.5 min) and 239Np → 239Pu (β−, half-life 2.3565 d). The capture of two further neutrons by 239Pu (so-called (n,γ) reactions), followed by a β-decay, results in 241Am: 239Pu + 2n → 241Pu, followed by 241Pu → 241Am (β−, half-life 14.35 yr). The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it beta-decays to 241Am, 241Pu can be extracted and may be used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the 241Am amount reaches a maximum after 70 years. The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm. Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux: 239Pu + 4n → 243Pu, followed by 243Pu → 243Am (β−, half-life 4.956 h). Metal generation Most synthesis routines yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. 
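The statement that half of the 241Pu has become 241Am after about 15 years, with the 241Am inventory peaking after roughly 70 years, follows from the standard two-member Bateman decay solution. The sketch below is an added illustration under the simplifying assumption of a pure 241Pu sample at time zero with no further irradiation.

```python
# Added sketch (assumes a pure Pu-241 sample at t = 0 with no further irradiation):
# in-growth of Am-241 from Pu-241 beta decay via the two-member Bateman solution.
import math

T_PU241 = 14.35    # years, half-life of Pu-241
T_AM241 = 432.2    # years, half-life of Am-241
lam_pu = math.log(2) / T_PU241
lam_am = math.log(2) / T_AM241

def am241_fraction(t_years: float) -> float:
    """Fraction of the initial Pu-241 atoms present as Am-241 after t years."""
    return lam_pu / (lam_am - lam_pu) * (math.exp(-lam_pu * t_years) - math.exp(-lam_am * t_years))

t_peak = math.log(lam_pu / lam_am) / (lam_pu - lam_am)   # time of maximum Am-241
print(f"Am-241 fraction after 15 years: {am241_fraction(15):.2f}")   # ~0.5
print(f"Am-241 inventory peaks after about {t_peak:.0f} years")      # ~73 years
```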
The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction, to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents. A bis-triazinyl bipyridine complex was proposed in 2009 as such a reagent is highly selective to americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone, at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes which can be washed away. Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose. The reaction was conducted using elemental barium as reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten. An alternative is the reduction of americium dioxide by metallic lanthanum or thorium: Physical properties In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but then slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3); but has a higher density than europium (5.264 g/cm3)—mostly because of its higher atomic mass. Americium is relatively soft and easily deformable and has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than that of plutonium (639 °C) and europium (826 °C), but lower than for curium (1340 °C). At ambient conditions, americium is present in its most stable α form which has a hexagonal crystal symmetry, and a space group P63/mmc with cell parameters a = 346.8 pm and c = 1124 pm, and four atoms per unit cell. The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature. When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has a face-centered cubic (fcc) symmetry, space group Fmm and lattice constant a = 489 pm. This fcc structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium. There are no further transitions observed up to 52 GPa, except for an appearance of a monoclinic phase at pressures between 10 and 15 GPa. There is no consistency on the status of this phase in the literature, which also sometimes lists the α, β and γ phases as I, II and III. The β-γ transition is accompanied by a 6% decrease in the crystal volume; although theory also predicts a significant volume change for the α-β transition, it is not observed experimentally. 
The pressure of the α-β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, at 770 °C it changes into an fcc phase which is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure-temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium. As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. It is especially noticeable at low temperatures, where the mobility of the produced structural defects is relatively low, and manifests itself as a broadening of X-ray diffraction peaks. This effect introduces some uncertainty into the measured properties of americium, such as its electrical resistivity, and into their temperature dependence. For americium-241, for example, the resistivity at 4.2 K increases with time from about 2 μOhm·cm to 10 μOhm·cm after 40 hours, and saturates at about 16 μOhm·cm after 140 hours. This effect is less pronounced at room temperature, owing to annihilation of the radiation defects; likewise, warming a sample that has been kept for hours at low temperature back to room temperature restores its resistivity. In fresh samples, the resistivity gradually increases with temperature from about 2 μOhm·cm at liquid-helium temperature to 69 μOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but differs from that of plutonium and curium, which show a rapid rise up to 60 K followed by saturation. The room-temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than for uranium, thorium and protactinium. Americium is paramagnetic over a wide temperature range, from that of liquid helium to room temperature and above. This behavior is markedly different from that of its neighbor curium, which exhibits an antiferromagnetic transition at 52 K. The thermal expansion coefficient of americium is slightly anisotropic, differing between the shorter a axis and the longer c hexagonal axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions has been measured, and from it the standard enthalpy of formation (ΔfH°) of the aqueous Am3+ ion and the standard potential of the Am3+/Am0 couple have been derived. Chemical properties Americium metal readily reacts with oxygen and dissolves in aqueous acids. The most stable oxidation state for americium is +3. The chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in oxidation states +2, +4, +5, +6 and +7 have also been studied; this is the widest range that has been observed among the actinide elements. The colors of americium ions in aqueous solution are as follows: Am3+ and Am4+ are yellow-reddish, the americium(V) dioxo cation AmO2+ is yellow, the americium(VI) cation AmO22+ is brown, and the americium(VII) species is dark green. The absorption spectra have sharp peaks due to f-f transitions in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm. Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion (MnO4−) in acidic solutions. Whereas Am4+ ions are unstable in solution and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state. 
The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the AmO2+ ion is unstable with respect to disproportionation; a typical reaction is 3 AmO2+ + 4 H+ → 2 AmO22+ + Am3+ + 2 H2O. The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states. In particular, americium(V) and americium(VI) oxo compounds are comparable to uranates, and the AmO22+ ion is comparable to the uranyl ion, UO22+. Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate. Chemical compounds Oxygen compounds Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2). Americium(II) oxide has been prepared only in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium and is used in nearly all its applications. Like most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure. The oxalate of americium(III), vacuum dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C; the decomposition is complete at about 470 °C. The initial oxalate dissolves in nitric acid with a maximum solubility of 0.25 g/L. Halides Halides of americium are known for the oxidation states +2, +3 and +4, of which +3 is the most stable, especially in solution. Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2. They are very sensitive to oxygen and oxidize in water, releasing hydrogen and converting back to the Am(III) state. AmCl2 adopts an orthorhombic and AmBr2 a tetragonal crystal structure, and their lattice constants have been determined. These dihalides can also be prepared by reacting metallic americium with an appropriate mercury halide HgX2, where X = Cl, Br or I, at 400–500 °C: Am + HgX2 → AmX2 + Hg. Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon reaction of Am3+ and fluoride ions in weakly acidic solutions: Am3+ + 3 F− → AmF3↓. The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine: 2 AmF3 + F2 → 2 AmF4. Another known form of solid tetravalent americium fluoride is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15 M NH4F at an americium concentration of 0.01 M. The resulting reddish solution had a characteristic optical absorption spectrum which is similar to that of AmF4 but differs from those of the other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not cause disproportionation or reduction; however, a slow reduction to Am(III) was observed over time and attributed to self-irradiation of americium by alpha particles. Most americium(III) halides form hexagonal crystals, with slight variations of color and exact structure between the halogens. Thus the chloride (AmCl3) is reddish and has a structure isotypic with uranium(III) chloride (space group P63/m) and a melting point of 715 °C. The fluoride is isotypic with LaF3 (space group P63/mmc) and the iodide with BiI3 (space group R3̄). The bromide is an exception, with the orthorhombic PuBr3-type structure and space group Cmcm. 
Crystals of americium(III) chloride hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid. Those crystals are hygroscopic and have yellow-reddish color and a monoclinic crystal structure. Oxyhalides of americium in the form AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be obtained by reacting the corresponding americium halide with oxygen or Sb2O3, and AmOCl can also be produced by vapor phase hydrolysis: AmCl3 + H2O -> AmOCl + 2HCl Chalcogenides and pnictides The known chalcogenides of americium include the sulfide AmS2, selenides AmSe2 and Am3Se4, and tellurides Am2Te3 and AmTe2. The pnictides of americium (243Am) of the AmX type are known for the elements phosphorus, arsenic, antimony and bismuth. They crystallize in the rock-salt lattice. Silicides and borides Americium monosilicide (AmSi) and "disilicide" (nominally AmSix with: 1.87 < x < 2.0) were obtained by reduction of americium(III) fluoride with elementary silicon in vacuum at 1050 °C (AmSi) and 1150−1200 °C (AmSix). AmSi is a black solid isomorphic with LaSi, it has an orthorhombic crystal symmetry. AmSix has a bright silvery lustre and a tetragonal crystal lattice (space group I41/amd), it is isomorphic with PuSi2 and ThSi2. Borides of americium include AmB4 and AmB6. The tetraboride can be obtained by heating an oxide or halide of americium with magnesium diboride in vacuum or inert atmosphere. Organoamericium compounds Analogous to uranocene, americium is predicted to form the organometallic compound amerocene with two cyclooctatetraene ligands, with the chemical formula (η8-C8H8)2Am. A cyclopentadienyl complex is also known that is likely to be stoichiometrically AmCp3. Formation of the complexes of the type Am(n-C3H7-BTP)3, where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Am3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with americium and therefore are useful in its selective separation from lanthanides and another actinides. Biological aspects Americium is an artificial element of recent origin, and thus does not have a biological requirement. It is harmful to life. It has been proposed to use bacteria for removal of americium and other heavy metals from rivers and streams. Thus, Enterobacteriaceae of the genus Citrobacter precipitate americium ions from aqueous solutions, binding them into a metal-phosphate complex at their cell walls. Several studies have been reported on the biosorption and bioaccumulation of americium by bacteria and fungi. In the laboratory, both americium and curium were found to support the growth of methylotrophs. Fission The isotope 242mAm (half-life 141 years) has the largest cross sections for absorption of thermal neutrons (5,700 barns), that results in a small critical mass for a sustained nuclear chain reaction. The critical mass for a bare 242mAm sphere is about 9–14 kg (the uncertainty results from insufficient knowledge of its material properties). It can be lowered to 3–5 kg with a metal reflector and should become even smaller with a water reflector. Such small critical mass is favorable for portable nuclear weapons, but those based on 242mAm are not known yet, probably because of its scarcity and high price. The critical masses of the two readily available isotopes, 241Am and 243Am, are relatively high – 57.6 to 75.6 kg for 241Am and 209 kg for 243Am. 
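To give a feel for how compact the quoted 9–14 kg bare critical mass of 242mAm is, the rough sketch below converts it into the diameter of a solid metal sphere, using the approximate 12 g/cm3 density quoted earlier in this article. Both inputs carry significant uncertainty, so this is only an order-of-magnitude illustration added here, not a figure from the source.

```python
# Back-of-the-envelope sketch: diameter of a bare critical sphere of Am-242m,
# using the 9-14 kg critical-mass range from the text and the ~12 g/cm3 density
# quoted earlier in the article (both values are uncertain).
import math

DENSITY = 12.0   # g/cm3, as quoted for americium metal

def sphere_diameter_cm(mass_kg: float, density_g_cm3: float = DENSITY) -> float:
    """Diameter of a solid sphere of the given mass and density."""
    volume = mass_kg * 1000.0 / density_g_cm3                 # cm3
    radius = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)  # cm
    return 2.0 * radius

for mass_kg in (9, 14):
    print(f"{mass_kg} kg -> sphere about {sphere_diameter_cm(mass_kg):.0f} cm across")
# Roughly 11-13 cm across: a grapefruit-sized bare sphere, before any reflector.
```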
Scarcity and high price yet hinder application of americium as a nuclear fuel in nuclear reactors. There are proposals of very compact 10-kW high-flux reactors using as little as 20 grams of 242mAm. Such low-power reactors would be relatively safe to use as neutron sources for radiation therapy in hospitals. Isotopes About 18 isotopes and 11 nuclear isomers are known for americium, having mass numbers 229, 230, and 232 through 247. There are two long-lived alpha-emitters; 243Am has a half-life of 7,370 years and is the most stable isotope, and 241Am has a half-life of 432.2 years. The most stable nuclear isomer is 242m1Am; it has a long half-life of 141 years. The half-lives of other isotopes and isomers range from 0.64 microseconds for 245m1Am to 50.8 hours for 240Am. As with most other actinides, the isotopes of americium with odd number of neutrons have relatively high rate of nuclear fission and low critical mass. Americium-241 decays to 237Np emitting alpha particles of 5 different energies, mostly at 5.486 MeV (85.2%) and 5.443 MeV (12.8%). Because many of the resulting states are metastable, they also emit gamma rays with the discrete energies between 26.3 and 158.5 keV. Americium-242 is a short-lived isotope with a half-life of 16.02 h. It mostly (82.7%) converts by β-decay to 242Cm, but also by electron capture to 242Pu (17.3%). Both 242Cm and 242Pu transform via nearly the same decay chain through 238Pu down to 234U. Nearly all (99.541%) of 242m1Am decays by internal conversion to 242Am and the remaining 0.459% by α-decay to 238Np. The latter subsequently decays to 238Pu and then to 234U. Americium-243 transforms by α-emission into 239Np, which converts by β-decay to 239Pu, and the 239Pu changes into 235U by emitting an α-particle. Applications Ionization-type smoke detector Americium is used in the most common type of household smoke detector, which uses 241Am in the form of americium dioxide as its source of ionizing radiation. This isotope is preferred over 226Ra because it emits 5 times more alpha particles and relatively little harmful gamma radiation. The amount of americium in a typical new smoke detector is 1 microcurie (37 kBq) or 0.29 microgram. This amount declines slowly as the americium decays into neptunium-237, a different transuranic element with a much longer half-life (about 2.14 million years). With its half-life of 432.2 years, the americium in a smoke detector includes about 3% neptunium after 19 years, and about 5% after 32 years. The radiation passes through an ionization chamber, an air-filled space between two electrodes, and permits a small, constant current between the electrodes. Any smoke that enters the chamber absorbs the alpha particles, which reduces the ionization and affects this current, triggering the alarm. Compared to the alternative optical smoke detector, the ionization smoke detector is cheaper and can detect particles which are too small to produce significant light scattering; however, it is more prone to false alarms. Radionuclide As 241Am has a roughly similar half-life to 238Pu (432.2 years vs. 87 years), it has been proposed as an active element of radioisotope thermoelectric generators, for example in spacecraft. Although americium produces less heat and electricity – the power yield is 114.7 mW/g for 241Am and 6.31 mW/g for 243Am (cf. 390 mW/g for 238Pu) – and its radiation poses more threat to humans owing to neutron emission, the European Space Agency is considering using americium for its space probes. 
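Two of the smoke-detector figures given above can be reproduced with a few lines of arithmetic: the mass of a 1-microcurie 241Am source, and the stated 3% and 5% neptunium in-growth after 19 and 32 years. The sketch below is an added cross-check assuming a molar mass of 241 g/mol; it is not part of the source.

```python
# Added cross-checks: mass of the 1-microcurie Am-241 source in a smoke detector,
# and the neptunium-237 fraction that grows in after 19 and 32 years of decay.
import math

N_A = 6.022e23
half_life_y = 432.2
decay_const_s = math.log(2) / (half_life_y * 365.25 * 24 * 3600)
bq_per_gram = decay_const_s * N_A / 241          # assuming 241 g/mol

source_bq = 1e-6 * 3.7e10                        # 1 microcurie = 37 kBq
source_mass_ug = source_bq / bq_per_gram * 1e6
print(f"1 uCi source mass: about {source_mass_ug:.2f} micrograms")   # ~0.29 ug

for years in (19, 32):
    np_fraction = 1 - math.exp(-math.log(2) * years / half_life_y)
    print(f"after {years} years: about {np_fraction:.0%} Np-237")     # ~3% and ~5%
```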
Another proposed space-related application of americium is a fuel for space ships with nuclear propulsion. It relies on the very high rate of nuclear fission of 242mAm, which can be maintained even in a micrometer-thick foil. Small thickness avoids the problem of self-absorption of emitted radiation. This problem is pertinent to uranium or plutonium rods, in which only surface layers provide alpha-particles. The fission products of 242mAm can either directly propel the spaceship or they can heat a thrusting gas. They can also transfer their energy to a fluid and generate electricity through a magnetohydrodynamic generator. One more proposal which utilizes the high nuclear fission rate of 242mAm is a nuclear battery. Its design relies not on the energy of the emitted by americium alpha particles, but on their charge, that is the americium acts as the self-sustaining "cathode". A single 3.2 kg 242mAm charge of such battery could provide about 140 kW of power over a period of 80 days. Even with all the potential benefits, the current applications of 242mAm are as yet hindered by the scarcity and high price of this particular nuclear isomer. In 2019, researchers at the UK National Nuclear Laboratory and the University of Leicester demonstrated the use of heat generated by americium to illuminate a small light bulb. This technology could lead to systems to power missions with durations up to 400 years into interstellar space, where solar panels do not function. Neutron source The oxide of 241Am pressed with beryllium is an efficient neutron source. Here americium acts as the alpha source, and beryllium produces neutrons owing to its large cross-section for the (α,n) nuclear reaction: ^{241}_{95}Am -> ^{237}_{93}Np + ^{4}_{2}He + \gamma ^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma The most widespread use of 241AmBe neutron sources is a neutron probe – a device used to measure the quantity of water present in soil, as well as moisture/density for quality control in highway construction. 241Am neutron sources are also used in well logging applications, as well as in neutron radiography, tomography and other radiochemical investigations. Production of other elements Americium is a starting material for the production of other transuranic elements and transactinides – for example, 82.7% of 242Am decays to 242Cm and 17.3% to 242Pu. In the nuclear reactor, 242Am is also up-converted by neutron capture to 243Am and 244Am, which transforms by β-decay to 244Cm: ^{243}_{95}Am ->[\ce{(n,\gamma)}] ^{244}_{95}Am ->[\beta^-][10.1 \ \ce{h}] ^{244}_{96}Cm Irradiation of 241Am by 12C or 22Ne ions yields the isotopes 247Es (einsteinium) or 260Db (dubnium), respectively. Furthermore, the element berkelium (243Bk isotope) had been first intentionally produced and identified by bombarding 241Am with alpha particles, in 1949, by the same Berkeley group, using the same 60-inch cyclotron. Similarly, nobelium was produced at the Joint Institute for Nuclear Research, Dubna, Russia, in 1965 in several reactions, one of which included irradiation of 243Am with 15N ions. Besides, one of the synthesis reactions for lawrencium, discovered by scientists at Berkeley and Dubna, included bombardment of 243Am with 18O. Spectrometer Americium-241 has been used as a portable source of both gamma rays and alpha particles for a number of medical and industrial uses. 
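As a plausibility check on the proposed 3.2 kg 242mAm battery delivering about 140 kW for 80 days, the sketch below compares the delivered energy with the total fission energy stored in the charge, assuming a typical release of roughly 200 MeV per fission. The 200 MeV figure is an assumption not stated in the source, and conversion losses are ignored.

```python
# Plausibility sketch for the proposed 242mAm "nuclear battery": how much of a
# 3.2 kg charge would have to fission to deliver 140 kW for 80 days?
# Assumes ~200 MeV released per fission (typical actinide value, not from the text).
MEV_PER_FISSION = 200.0
J_PER_MEV = 1.602e-13
N_A = 6.022e23

charge_atoms = 3200.0 / 242 * N_A                                # atoms in 3.2 kg
energy_available = charge_atoms * MEV_PER_FISSION * J_PER_MEV    # J, if all fissioned
energy_delivered = 140e3 * 80 * 86400                            # 140 kW for 80 days

print(f"energy delivered : {energy_delivered:.2e} J")
print(f"burn-up required : {energy_delivered / energy_available * 100:.1f} %")
# Well under 1% of the charge would need to fission, ignoring conversion losses.
```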
The 59.5409 keV gamma ray emissions from 241Am in such sources can be used for indirect analysis of materials in radiography and X-ray fluorescence spectroscopy, as well as for quality control in fixed nuclear density gauges and nuclear densometers. For example, the element has been employed to gauge glass thickness to help create flat glass. Americium-241 is also suitable for calibration of gamma-ray spectrometers in the low-energy range, since its spectrum consists of nearly a single peak and negligible Compton continuum (at least three orders of magnitude lower intensity). Americium-241 gamma rays were also used to provide passive diagnosis of thyroid function. This medical application is however obsolete. Health concerns As a highly radioactive element, americium and its compounds must be handled only in an appropriate laboratory under special arrangements. Although most americium isotopes predominantly emit alpha particles which can be blocked by thin layers of common materials, many of the daughter products emit gamma-rays and neutrons which have a long penetration depth. If consumed, most of the americium is excreted within a few days, with only 0.05% absorbed in the blood, of which roughly 45% goes to the liver and 45% to the bones, and the remaining 10% is excreted. The uptake to the liver depends on the individual and increases with age. In the bones, americium is first deposited over cortical and trabecular surfaces and slowly redistributes over the bone with time. The biological half-life of 241Am is 50 years in the bones and 20 years in the liver, whereas in the gonads (testicles and ovaries) it remains permanently; in all these organs, americium promotes formation of cancer cells as a result of its radioactivity. Americium often enters landfills from discarded smoke detectors. The rules associated with the disposal of smoke detectors are relaxed in most jurisdictions. In 1994, 17-year-old David Hahn extracted the americium from about 100 smoke detectors in an attempt to build a breeder nuclear reactor. There have been a few cases of exposure to americium, the worst case being that of chemical operations technician Harold McCluskey, who at the age of 64 was exposed to 500 times the occupational standard for americium-241 as a result of an explosion in his lab. McCluskey died at the age of 75 of unrelated pre-existing disease.
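The retention figures above can be combined with the 432.2-year radioactive half-life using the standard effective half-life relation 1/T_eff = 1/T_physical + 1/T_biological. This formula is a textbook relation assumed here, not stated in the source; the sketch below applies it to the bone and liver values quoted above.

```python
# Effective half-life of Am-241 in an organ, combining radioactive decay with
# biological elimination: 1/T_eff = 1/T_phys + 1/T_bio (standard formula, assumed here).
T_PHYSICAL = 432.2   # years, radioactive half-life of Am-241

def effective_half_life(t_biological_years: float) -> float:
    return 1.0 / (1.0 / T_PHYSICAL + 1.0 / t_biological_years)

for organ, t_bio in (("bone", 50.0), ("liver", 20.0)):
    print(f"{organ}: effective half-life about {effective_half_life(t_bio):.0f} years")
# ~45 years in bone and ~19 years in liver: biological elimination, not decay, dominates.
```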
https://en.wikipedia.org/wiki/Astatine
Astatine
Astatine is a chemical element; it has symbol At and atomic number 85. It is the rarest naturally occurring element in the Earth's crust, occurring only as the decay product of various heavier elements. All of astatine's isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. Consequently, a solid sample of the element has never been seen, because any macroscopic specimen would be immediately vaporized by the heat of its radioactivity. The bulk properties of astatine are not known with certainty. Many of them have been estimated from its position on the periodic table as a heavier analog of fluorine, chlorine, bromine, and iodine, the four stable halogens. However, astatine also falls roughly along the dividing line between metals and nonmetals, and some metallic behavior has also been observed and predicted for it. Astatine is likely to have a dark or lustrous appearance and may be a semiconductor or possibly a metal. Chemically, several anionic species of astatine are known and most of its compounds resemble those of iodine, but it also sometimes displays metallic characteristics and shows some similarities to silver. The first synthesis of astatine was in 1940 by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio G. Segrè at the University of California, Berkeley. They named it from the Ancient Greek () 'unstable'. Four isotopes of astatine were subsequently found to be naturally occurring, although much less than one gram is present at any given time in the Earth's crust. Neither the most stable isotope, astatine-210, nor the medically useful astatine-211 occur naturally; they are usually produced by bombarding bismuth-209 with alpha particles. Characteristics Astatine is an extremely radioactive element; all its isotopes have half-lives of 8.1 hours or less, decaying into other astatine isotopes, bismuth, polonium, or radon. Most of its isotopes are very unstable, with half-lives of seconds or less. Of the first 101 elements in the periodic table, only francium is less stable, and all the astatine isotopes more stable than the longest-lived francium isotopes (205–211At) are in any case synthetic and do not occur in nature. The bulk properties of astatine are not known with any certainty. Research is limited by its short half-life, which prevents the creation of weighable quantities. A visible piece of astatine would immediately vaporize itself because of the heat generated by its intense radioactivity. It remains to be seen if, with sufficient cooling, a macroscopic quantity of astatine could be deposited as a thin film. Astatine is usually classified as either a nonmetal or a metalloid; metal formation has also been predicted. Physical Most of the physical properties of astatine have been estimated (by interpolation or extrapolation), using theoretically or empirically derived methods. For example, halogens get darker with increasing atomic weight – fluorine is nearly colorless, chlorine is yellow-green, bromine is red-brown, and iodine is dark gray/violet. Astatine is sometimes described as probably being a black solid (assuming it follows this trend), or as having a metallic appearance (if it is a metalloid or a metal). Astatine sublimes less readily than iodine, having a lower vapor pressure. Even so, half of a given quantity of astatine will vaporize in approximately an hour if put on a clean glass surface at room temperature. 
The absorption spectrum of astatine in the middle ultraviolet region has lines at 224.401 and 216.225 nm, suggestive of 6p to 7s transitions. The structure of solid astatine is unknown. As an analog of iodine it may have an orthorhombic crystalline structure composed of diatomic astatine molecules, and be a semiconductor (with a band gap of 0.7 eV). Alternatively, if condensed astatine forms a metallic phase, as has been predicted, it may have a monatomic face-centered cubic structure; in this structure, it may well be a superconductor, like the similar high-pressure phase of iodine. Metallic astatine is expected to have a density of 8.91–8.95 g/cm3. Evidence for (or against) the existence of diatomic astatine (At2) is sparse and inconclusive. Some sources state that it does not exist, or at least has never been observed, while other sources assert or imply its existence. Despite this controversy, many properties of diatomic astatine have been predicted; for example, its bond length would be , dissociation energy <, and heat of vaporization (∆Hvap) 54.39 kJ/mol. Many values have been predicted for the melting and boiling points of astatine, but only for At2. Chemical The chemistry of astatine is "clouded by the extremely low concentrations at which astatine experiments have been conducted, and the possibility of reactions with impurities, walls and filters, or radioactivity by-products, and other unwanted nano-scale interactions". Many of its apparent chemical properties have been observed using tracer studies on extremely dilute astatine solutions, typically less than 10−10 mol·L−1. Some properties, such as anion formation, align with other halogens. Astatine has some metallic characteristics as well, such as plating onto a cathode, and coprecipitating with metal sulfides in hydrochloric acid. It forms complexes with EDTA, a metal chelating agent, and is capable of acting as a metal in antibody radiolabeling; in some respects, astatine in the +1 state is akin to silver in the same state. Most of the organic chemistry of astatine is, however, analogous to that of iodine. It has been suggested that astatine can form a stable monatomic cation in aqueous solution. Astatine has an electronegativity of 2.2 on the revised Pauling scale – lower than that of iodine (2.66) and the same as hydrogen. In hydrogen astatide (HAt), the negative charge is predicted to be on the hydrogen atom, implying that this compound could be referred to as astatine hydride according to certain nomenclatures. That would be consistent with the electronegativity of astatine on the Allred–Rochow scale (1.9) being less than that of hydrogen (2.2). However, official IUPAC stoichiometric nomenclature is based on an idealized convention of determining the relative electronegativities of the elements by the mere virtue of their position within the periodic table. According to this convention, astatine is handled as though it is more electronegative than hydrogen, irrespective of its true electronegativity. The electron affinity of astatine, at 233 kJ mol−1, is 21% less than that of iodine. In comparison, the value of Cl (349) is 6.4% higher than F (328); Br (325) is 6.9% less than Cl; and I (295) is 9.2% less than Br. The marked reduction for At was predicted as being due to spin–orbit interactions. The first ionization energy of astatine is about 899 kJ mol−1, which continues the trend of decreasing first ionization energies down the halogen group (fluorine, 1681; chlorine, 1251; bromine, 1140; iodine, 1008). 
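Two of the numerical claims in this section are easy to reproduce: the photon energies corresponding to the two mid-ultraviolet absorption lines, and the percentage steps in electron affinity down the halogen group. The sketch below is an added illustration using hc ≈ 1239.84 eV·nm and the affinity values quoted in the text.

```python
# Added illustration: photon energies of the quoted mid-UV absorption lines and the
# step changes in electron affinity along the halogens, using hc = 1239.84 eV*nm.
HC_EV_NM = 1239.84

for wavelength_nm in (224.401, 216.225):
    print(f"{wavelength_nm} nm -> {HC_EV_NM / wavelength_nm:.2f} eV")
# About 5.53 eV and 5.73 eV for the suggested 6p -> 7s transitions.

electron_affinity = {"F": 328, "Cl": 349, "Br": 325, "I": 295, "At": 233}  # kJ/mol
for later, earlier in (("Cl", "F"), ("Br", "Cl"), ("I", "Br"), ("At", "I")):
    change = (electron_affinity[later] - electron_affinity[earlier]) / electron_affinity[earlier]
    print(f"{later} relative to {earlier}: {change:+.1%}")
# Cl is ~6.4% above F; Br, I and At are ~6.9%, ~9.2% and ~21% below their predecessors.
```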
Compounds Less reactive than iodine, astatine is the least reactive of the halogens; the chemical properties of tennessine, the next-heavier group 17 element, have not yet been investigated, however. Astatine compounds have been synthesized in nano-scale amounts and studied as intensively as possible before their radioactive disintegration. The reactions involved have been typically tested with dilute solutions of astatine mixed with larger amounts of iodine. Acting as a carrier, the iodine ensures there is sufficient material for laboratory techniques (such as filtration and precipitation) to work. Like iodine, astatine has been shown to adopt odd-numbered oxidation states ranging from −1 to +7. Only a few compounds with metals have been reported, in the form of astatides of sodium, palladium, silver, thallium, and lead. Some characteristic properties of silver and sodium astatide, and the other hypothetical alkali and alkaline earth astatides, have been estimated by extrapolation from other metal halides. The formation of an astatine compound with hydrogen – usually referred to as hydrogen astatide – was noted by the pioneers of astatine chemistry. As mentioned, there are grounds for instead referring to this compound as astatine hydride. It is easily oxidized; acidification by dilute nitric acid gives the At0 or At+ forms, and the subsequent addition of silver(I) may only partially, at best, precipitate astatine as silver(I) astatide (AgAt). Iodine, in contrast, is not oxidized, and precipitates readily as silver(I) iodide. Astatine is known to bind to boron, carbon, and nitrogen. Various boron cage compounds have been prepared with At–B bonds, these being more stable than At–C bonds. Astatine can replace a hydrogen atom in benzene to form astatobenzene C6H5At; this may be oxidized to C6H5AtCl2 by chlorine. By treating this compound with an alkaline solution of hypochlorite, C6H5AtO2 can be produced. The dipyridine-astatine(I) cation, [At(C5H5N)2]+, forms ionic compounds with perchlorate (a non-coordinating anion) and with nitrate, [At(C5H5N)2]NO3. This cation exists as a coordination complex in which two dative covalent bonds separately link the astatine(I) centre with each of the pyridine rings via their nitrogen atoms. With oxygen, there is evidence of the species AtO− and AtO+ in aqueous solution, formed by the reaction of astatine with an oxidant such as elemental bromine or (in the last case) by sodium persulfate in a solution of perchloric acid. The species previously thought to be has since been determined to be , a hydrolysis product of AtO+ (another such hydrolysis product being AtOOH). The well characterized anion can be obtained by, for example, the oxidation of astatine with potassium hypochlorite in a solution of potassium hydroxide. Preparation of lanthanum triastatate La(AtO3)3, following the oxidation of astatine by a hot Na2S2O8 solution, has been reported. Further oxidation of , such as by xenon difluoride (in a hot alkaline solution) or periodate (in a neutral or alkaline solution), yields the perastatate ion ; this is only stable in neutral or alkaline solutions. Astatine is also thought to be capable of forming cations in salts with oxyanions such as iodate or dichromate; this is based on the observation that, in acidic solutions, monovalent or intermediate positive states of astatine coprecipitate with the insoluble salts of metal cations such as silver(I) iodate or thallium(I) dichromate. 
Astatine may form bonds to the other chalcogens; these include S7At+ and with sulfur, a coordination selenourea compound with selenium, and an astatine–tellurium colloid with tellurium. Astatine is known to react with its lighter homologs iodine, bromine, and chlorine in the vapor state; these reactions produce diatomic interhalogen compounds with formulas AtI, AtBr, and AtCl. The first two compounds may also be produced in water – astatine reacts with iodine/iodide solution to form AtI, whereas AtBr requires (aside from astatine) an iodine/iodine monobromide/bromide solution. The excess of iodides or bromides may lead to and ions, or in a chloride solution, they may produce species like or via equilibrium reactions with the chlorides. Oxidation of the element with dichromate (in nitric acid solution) showed that adding chloride turned the astatine into a molecule likely to be either AtCl or AtOCl. Similarly, or may be produced. The polyhalides PdAtI2, CsAtI2, TlAtI2, and PbAtI are known or presumed to have been precipitated. In a plasma ion source mass spectrometer, the ions [AtI]+, [AtBr]+, and [AtCl]+ have been formed by introducing lighter halogen vapors into a helium-filled cell containing astatine, supporting the existence of stable neutral molecules in the plasma ion state. No astatine fluorides have been discovered yet. Their absence has been speculatively attributed to the extreme reactivity of such compounds, including the reaction of an initially formed fluoride with the walls of the glass container to form a non-volatile product. Thus, although the synthesis of an astatine fluoride is thought to be possible, it may require a liquid halogen fluoride solvent, as has already been used for the characterization of radon fluoride. History In 1869, when Dmitri Mendeleev published his periodic table, the space under iodine was empty; after Niels Bohr established the physical basis of the classification of chemical elements, it was suggested that the fifth halogen belonged there. Before its officially recognized discovery, it was called "eka-iodine" (from Sanskrit eka – "one") to imply it was one space under iodine (in the same manner as eka-silicon, eka-boron, and others). Scientists tried to find it in nature; given its extreme rarity, these attempts resulted in several false discoveries. The first claimed discovery of eka-iodine was made by Fred Allison and his associates at the Alabama Polytechnic Institute (now Auburn University) in 1931. The discoverers named element 85 "alabamine", and assigned it the symbol Ab, designations that were used for a few years. In 1934, H. G. MacPherson of University of California, Berkeley disproved Allison's method and the validity of his discovery. There was another claim in 1937, by the chemist Rajendralal De. Working in Dacca in British India (now Dhaka in Bangladesh), he chose the name "dakin" for element 85, which he claimed to have isolated as the thorium series equivalent of radium F (polonium-210) in the radium series. The properties he reported for dakin do not correspond to those of astatine, and astatine's radioactivity would have prevented him from handling it in the quantities he claimed. Moreover, astatine is not found in the thorium series, and the true identity of dakin is not known. In 1936, the team of Romanian physicist Horia Hulubei and French physicist Yvette Cauchois claimed to have discovered element 85 by observing its X-ray emission lines. In 1939, they published another paper which supported and extended previous data. 
In 1944, Hulubei published a summary of data he had obtained up to that time, claiming it was supported by the work of other researchers. He chose the name "dor", presumably from the Romanian for "longing" [for peace], as World War II had started five years earlier. As Hulubei was writing in French, a language which does not accommodate the "ine" suffix, dor would likely have been rendered in English as "dorine", had it been adopted. In 1947, Hulubei's claim was effectively rejected by the Austrian chemist Friedrich Paneth, who would later chair the IUPAC committee responsible for recognition of new elements. Even though Hulubei's samples did contain astatine-218, his means to detect it were too weak, by current standards, to enable correct identification; moreover, he could not perform chemical tests on the element. He had also been involved in an earlier false claim as to the discovery of element 87 (francium) and this is thought to have caused other researchers to downplay his work. In 1940, the Swiss chemist Walter Minder announced the discovery of element 85 as the beta decay product of radium A (polonium-218), choosing the name "helvetium" (from , the Latin name of Switzerland). Berta Karlik and Traude Bernert were unsuccessful in reproducing his experiments, and subsequently attributed Minder's results to contamination of his radon stream (radon-222 is the parent isotope of polonium-218). In 1942, Minder, in collaboration with the English scientist Alice Leigh-Smith, announced the discovery of another isotope of element 85, presumed to be the product of thorium A (polonium-216) beta decay. They named this substance "anglo-helvetium", but Karlik and Bernert were again unable to reproduce these results. Later in 1940, Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè isolated the element at the University of California, Berkeley. Instead of searching for the element in nature, the scientists created it by bombarding bismuth-209 with alpha particles in a cyclotron (particle accelerator) to produce, after emission of two neutrons, astatine-211. The discoverers, however, did not immediately suggest a name for the element. The reason for this was that at the time, an element created synthetically in "invisible quantities" that had not yet been discovered in nature was not seen as a completely valid one; in addition, chemists were reluctant to recognize radioactive isotopes as legitimately as stable ones. In 1943, astatine was found as a product of two naturally occurring decay chains by Berta Karlik and Traude Bernert, first in the so-called uranium series, and then in the actinium series. (Since then, astatine was also found in a third decay chain, the neptunium series.) Friedrich Paneth in 1946 called to finally recognize synthetic elements, quoting, among other reasons, recent confirmation of their natural occurrence, and proposed that the discoverers of the newly discovered unnamed elements name these elements. In early 1947, Nature published the discoverers' suggestions; a letter from Corson, MacKenzie, and Segrè suggested the name "astatine" coming from the Ancient Greek () meaning , because of its propensity for radioactive decay, with the ending "-ine", found in the names of the four previously discovered halogens. The name was also chosen to continue the tradition of the four stable halogens, where the name referred to a property of the element. Corson and his colleagues classified astatine as a metal on the basis of its analytical chemistry. 
Subsequent investigators reported iodine-like, cationic, or amphoteric behavior. In a 2003 retrospective, Corson wrote that "some of the properties [of astatine] are similar to iodine ... it also exhibits metallic properties, more like its metallic neighbors Po and Bi." Isotopes There are 41 known isotopes of astatine, with mass numbers of 188 and 190–229. Theoretical modeling suggests that about 37 more isotopes could exist. No stable or long-lived astatine isotope has been observed, nor is one expected to exist. Astatine's alpha decay energies follow the same trend as for other heavy elements. Lighter astatine isotopes have quite high energies of alpha decay, which become lower as the nuclei become heavier. Astatine-211 has a significantly higher energy than the previous isotope, because it has a nucleus with 126 neutrons, and 126 is a magic number corresponding to a filled neutron shell. Despite having a similar half-life to the previous isotope (8.1 hours for astatine-210 and 7.2 hours for astatine-211), the alpha decay probability is much higher for the latter: 41.81% against only 0.18%. The two following isotopes release even more energy, with astatine-213 releasing the most energy. For this reason, it is the shortest-lived astatine isotope. Even though heavier astatine isotopes release less energy, no long-lived astatine isotope exists, because of the increasing role of beta decay (electron emission). This decay mode is especially important for astatine; as early as 1950 it was postulated that all isotopes of the element undergo beta decay, though nuclear mass measurements indicate that 215At is in fact beta-stable, as it has the lowest mass of all isobars with A = 215. Astatine-210 and most of the lighter isotopes exhibit beta plus decay (positron emission), astatine-217 and heavier isotopes except astatine-218 exhibit beta minus decay, while astatine-211 undergoes electron capture. The most stable isotope is astatine-210, which has a half-life of 8.1 hours. The primary decay mode is beta plus, to the relatively long-lived (in comparison to astatine isotopes) alpha emitter polonium-210. In total, only five isotopes have half-lives exceeding one hour (astatine-207 to -211). The least stable ground state isotope is astatine-213, with a half-life of 125 nanoseconds. It undergoes alpha decay to the extremely long-lived bismuth-209. Astatine has 24 known nuclear isomers, which are nuclei with one or more nucleons (protons or neutrons) in an excited state. A nuclear isomer may also be called a "meta-state", meaning the system has more internal energy than the "ground state" (the state with the lowest possible internal energy), making the former likely to decay into the latter. There may be more than one isomer for each isotope. The most stable of these nuclear isomers is astatine-202m1, which has a half-life of about 3 minutes, longer than those of all the ground states bar those of isotopes 203–211 and 220. The least stable is astatine-213m1; its half-life of 110 nanoseconds is shorter than 125 nanoseconds for astatine-213, the shortest-lived ground state. Natural occurrence Astatine is the rarest naturally occurring element. The total amount of astatine in the Earth's crust (quoted mass 2.36 × 1025 grams) is estimated by some to be less than one gram at any given time. Other sources estimate the amount of ephemeral astatine, present on earth at any given moment, to be up to one ounce (about 28 grams). 
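The remark that astatine-211 is far more likely to alpha-decay than astatine-210, despite their similar half-lives, can be made concrete through partial half-lives, i.e. the half-life each alpha branch would have on its own. The sketch below uses the branching fractions quoted above; it is an added illustration, not part of the source.

```python
# Partial alpha half-lives of At-210 and At-211, i.e. the half-life each isotope
# would show if alpha emission were its only decay mode: T_alpha = T_total / branch.
isotopes = {
    "At-210": {"half_life_h": 8.1, "alpha_branch": 0.0018},   # 0.18 % alpha
    "At-211": {"half_life_h": 7.2, "alpha_branch": 0.4181},   # 41.81 % alpha
}

for name, data in isotopes.items():
    t_alpha_h = data["half_life_h"] / data["alpha_branch"]
    print(f"{name}: partial alpha half-life about {t_alpha_h:,.0f} hours")
# At-210 comes out near 4,500 h (about 190 days) versus roughly 17 h for At-211,
# so the alpha branch of At-211 is around 260 times faster despite similar totals.
```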
Any astatine present at the formation of the Earth has long since disappeared; the four naturally occurring isotopes (astatine-215, -217, -218 and -219) are instead continuously produced as a result of the decay of radioactive thorium and uranium ores, and trace quantities of neptunium-237. The landmass of North and South America combined, to a depth of 16 kilometers (10 miles), contains only about one trillion astatine-215 atoms at any given time (around 3.5 × 10−10 grams). Astatine-217 is produced via the radioactive decay of neptunium-237. Primordial remnants of the latter isotope, owing to its relatively short half-life of 2.14 million years, are no longer present on Earth; however, trace amounts occur naturally as a product of transmutation reactions in uranium ores. Astatine-218 was the first astatine isotope discovered in nature. Astatine-219, with a half-life of 56 seconds, is the longest lived of the naturally occurring isotopes. Isotopes of astatine are sometimes not listed as naturally occurring because of misconceptions that there are no such isotopes, or because of discrepancies in the literature. Astatine-216 has been counted as a naturally occurring isotope, but reports of its observation (which were described as doubtful) have not been confirmed. Synthesis Formation Astatine was first produced by bombarding bismuth-209 with energetic alpha particles, and this is still the major route used to create the relatively long-lived isotopes astatine-209 through astatine-211. Astatine is only produced in minuscule quantities, with modern techniques allowing production runs of up to 6.6 gigabecquerels (about 86 nanograms, or 2.47 × 1014 atoms). Synthesis of greater quantities of astatine using this method is constrained by the limited availability of suitable cyclotrons and the prospect of melting the target. Solvent radiolysis due to the cumulative effect of astatine decay is a related problem. With cryogenic technology, microgram quantities of astatine might be generated via proton irradiation of thorium or uranium to yield radon-211, which in turn decays to astatine-211. Contamination with astatine-210 is expected to be a drawback of this method. The most important isotope is astatine-211, the only one in commercial use. To produce the bismuth target, the metal is sputtered onto a gold, copper, or aluminium surface at 50 to 100 milligrams per square centimeter. Bismuth oxide can be used instead; this is forcibly fused with a copper plate. The target is kept under a chemically neutral nitrogen atmosphere and is cooled with water to prevent premature astatine vaporization. In a particle accelerator, such as a cyclotron, alpha particles are collided with the bismuth. Even though only one bismuth isotope is used (bismuth-209), the reaction may occur in three possible ways, producing astatine-209, astatine-210, or astatine-211. Although higher beam energies can produce more astatine-211, they also produce unwanted astatine-210, which decays to toxic polonium-210. Instead, the maximum energy of the particle accelerator is set below, or only slightly above, the threshold of astatine-210 production, in order to maximize the yield of astatine-211 while keeping the amount of astatine-210 at an acceptable level. Separation methods Since astatine is the main product of the synthesis, after its formation it need only be separated from the target and any significant contaminants. 
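The production figures quoted above (6.6 GBq corresponding to about 86 nanograms of astatine-211) can be reproduced from the 7.2-hour half-life via N = A/λ. The sketch below is an added cross-check assuming a molar mass of 211 g/mol.

```python
# Cross-check of the production figures quoted above: how many At-211 atoms and
# what mass correspond to a 6.6 GBq batch? (half-life 7.2 h, ~211 g/mol assumed)
import math

activity_bq = 6.6e9
half_life_s = 7.2 * 3600
decay_const = math.log(2) / half_life_s       # 1/s

atoms = activity_bq / decay_const             # N = A / lambda
mass_ng = atoms * 211 / 6.022e23 * 1e9

print(f"atoms: {atoms:.2e}")                  # ~2.5e14 atoms
print(f"mass : {mass_ng:.0f} ng")             # ~86 ng, as quoted in the text
```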
Several methods are available, "but they generally follow one of two approaches—dry distillation or [wet] acid treatment of the target followed by solvent extraction." The methods summarized below are modern adaptations of older procedures, as reviewed by Kugler and Keller. Pre-1985 techniques more often addressed the elimination of co-produced toxic polonium; this requirement is now mitigated by capping the energy of the cyclotron irradiation beam. Dry The astatine-containing cyclotron target is heated to a temperature of around 650 °C. The astatine volatilizes and is condensed in (typically) a cold trap. Higher temperatures of up to around 850 °C may increase the yield, at the risk of bismuth contamination from concurrent volatilization. Redistilling the condensate may be required to minimize the presence of bismuth (as bismuth can interfere with astatine labeling reactions). The astatine is recovered from the trap using one or more low concentration solvents such as sodium hydroxide, methanol or chloroform. Astatine yields of up to around 80% may be achieved. Dry separation is the method most commonly used to produce a chemically useful form of astatine. Wet The irradiated bismuth (or sometimes bismuth trioxide) target is first dissolved in, for example, concentrated nitric or perchloric acid. Following this first step, the acid can be distilled away to leave behind a white residue that contains both bismuth and the desired astatine product. This residue is then dissolved in a concentrated acid, such as hydrochloric acid. Astatine is extracted from this acid using an organic solvent such as dibutyl ether, diisopropyl ether (DIPE), or thiosemicarbazide. Using liquid-liquid extraction, the astatine product can be repeatedly washed with an acid, such as HCl, and extracted into the organic solvent layer. A separation yield of 93% using nitric acid has been reported, falling to 72% by the time purification procedures were completed (distillation of nitric acid, purging residual nitrogen oxides, and redissolving bismuth nitrate to enable liquid–liquid extraction). Wet methods involve "multiple radioactivity handling steps" and have not been considered well suited for isolating larger quantities of astatine. However, wet extraction methods are being examined for use in production of larger quantities of astatine-211, as it is thought that wet extraction methods can provide more consistency. They can enable the production of astatine in a specific oxidation state and may have greater applicability in experimental radiochemistry. Uses and precautions Newly formed astatine-211 is the subject of ongoing research in nuclear medicine. It must be used quickly as it decays with a half-life of 7.2 hours; this is long enough to permit multistep labeling strategies. Astatine-211 has potential for targeted alpha-particle therapy, since it decays either via emission of an alpha particle (to bismuth-207), or via electron capture (to an extremely short-lived nuclide, polonium-211, which undergoes further alpha decay), very quickly reaching its stable granddaughter lead-207. Polonium X-rays emitted as a result of the electron capture branch, in the range of 77–92 keV, enable the tracking of astatine in animals and patients. Although astatine-210 has a slightly longer half-life, it is wholly unsuitable because it usually undergoes beta plus decay to the extremely toxic polonium-210. 
The principal medicinal difference between astatine-211 and iodine-131 (a radioactive iodine isotope also used in medicine) is that iodine-131 emits high-energy beta particles, and astatine does not. Beta particles have much greater penetrating power through tissues than do the much heavier alpha particles. An average alpha particle released by astatine-211 can travel up to 70 μm through surrounding tissues; an average-energy beta particle emitted by iodine-131 can travel nearly 30 times as far, to about 2 mm. The short half-life and limited penetrating power of alpha radiation through tissues offers advantages in situations where the "tumor burden is low and/or malignant cell populations are located in close proximity to essential normal tissues." Significant morbidity in cell culture models of human cancers has been achieved with from one to ten astatine-211 atoms bound per cell. Several obstacles have been encountered in the development of astatine-based radiopharmaceuticals for cancer treatment. World War II delayed research for close to a decade. Results of early experiments indicated that a cancer-selective carrier would need to be developed and it was not until the 1970s that monoclonal antibodies became available for this purpose. Unlike iodine, astatine shows a tendency to dehalogenate from molecular carriers such as these, particularly at sp3 carbon sites (less so from sp2 sites). Given the toxicity of astatine accumulated and retained in the body, this emphasized the need to ensure it remained attached to its host molecule. While astatine carriers that are slowly metabolized can be assessed for their efficacy, more rapidly metabolized carriers remain a significant obstacle to the evaluation of astatine in nuclear medicine. Mitigating the effects of astatine-induced radiolysis of labeling chemistry and carrier molecules is another area requiring further development. A practical application for astatine as a cancer treatment would potentially be suitable for a "staggering" number of patients; production of astatine in the quantities that would be required remains an issue. Animal studies show that astatine, similarly to iodine—although to a lesser extent, perhaps because of its slightly more metallic nature—is preferentially (and dangerously) concentrated in the thyroid gland. Unlike iodine, astatine also shows a tendency to be taken up by the lungs and spleen, possibly because of in-body oxidation of At– to At+. If administered in the form of a radiocolloid it tends to concentrate in the liver. Experiments in rats and monkeys suggest that astatine-211 causes much greater damage to the thyroid gland than does iodine-131, with repetitive injection of the nuclide resulting in necrosis and cell dysplasia within the gland. Early research suggested that injection of astatine into female rodents caused morphological changes in breast tissue; this conclusion remained controversial for many years. General agreement was later reached that this was likely caused by the effect of breast tissue irradiation combined with hormonal changes due to irradiation of the ovaries. Trace amounts of astatine can be handled safely in fume hoods if they are well-aerated; biological uptake of the element must be avoided.
Physical sciences
Chemical elements_2
null
902
https://en.wikipedia.org/wiki/Atom
Atom
Atoms are the basic particles of the chemical elements. An atom consists of a nucleus of protons and generally neutrons, surrounded by an electromagnetically bound swarm of electrons. The chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. Atoms with the same number of protons but a different number of neutrons are called isotopes of the same element. Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. Atoms are smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. They are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects. More than 99.9994% of an atom's mass is in the nucleus. Protons have a positive electric charge and neutrons have no charge, so the nucleus is positively charged. The electrons are negatively charged, and this opposing charge is what binds them to the nucleus. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral as a whole. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation). The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules or crystals. The ability of atoms to attach and detach from each other is responsible for most of the physical changes observed in nature. Chemistry is the science that studies these changes. History of atomic theory In philosophy The basic idea that matter is made up of tiny indivisible particles is an old idea that appeared in many ancient cultures. The word atom is derived from the ancient Greek word atomos, which means "uncuttable". But this ancient idea was based in philosophical reasoning rather than scientific reasoning. Modern atomic theory is not based on these old concepts. In the early 19th century, the scientist John Dalton found evidence that matter really is composed of discrete units, and so applied the word atom to those units. Dalton's law of multiple proportions In the early 1800s, John Dalton compiled experimental data gathered by him and other scientists and discovered a pattern now known as the "law of multiple proportions". He noticed that in any group of chemical compounds which all contain two particular chemical elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. This pattern suggested that each element combines with other elements in multiples of a basic unit of weight, with each element having a unit of unique weight. Dalton decided to call these units "atoms". 
For example, there are two types of tin oxide: one is a grey powder that is 88.1% tin and 11.9% oxygen, and the other is a white powder that is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. Dalton concluded that in the grey oxide there is one atom of oxygen for every atom of tin, and in the white oxide there are two atoms of oxygen for every atom of tin (SnO and SnO2). Dalton also analyzed iron oxides. There is one type of iron oxide that is a black powder which is 78.1% iron and 21.9% oxygen; and there is another iron oxide that is a red powder which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. Dalton concluded that in these oxides, for every two atoms of iron, there are two or three atoms of oxygen respectively (Fe2O2 and Fe2O3). As a final example: nitrous oxide is 63.3% nitrogen and 36.7% oxygen, nitric oxide is 44.05% nitrogen and 55.95% oxygen, and nitrogen dioxide is 29.5% nitrogen and 70.5% oxygen. Adjusting these figures, in nitrous oxide there is 80 g of oxygen for every 140 g of nitrogen, in nitric oxide there is about 160 g of oxygen for every 140 g of nitrogen, and in nitrogen dioxide there is 320 g of oxygen for every 140 g of nitrogen. 80, 160, and 320 form a ratio of 1:2:4. The respective formulas for these oxides are N2O, NO, and NO2. Discovery of the electron In 1897, J. J. Thomson discovered that cathode rays can be deflected by electric and magnetic fields, which meant that cathode rays are not a form of light but made of electrically charged particles, and their charge was negative given the direction the particles were deflected in. He measured these particles to be 1,700 times lighter than hydrogen (the lightest atom). He called these new particles corpuscles but they were later renamed electrons since these are the particles that carry electricity. Thomson also showed that electrons were identical to particles given off by photoelectric and radioactive materials. Thomson explained that an electric current is the passing of electrons from one atom to the next, and when there was no current the electrons embedded themselves in the atoms. This in turn meant that atoms were not indivisible as scientists thought. The atom was composed of electrons whose negative charge was balanced out by some source of positive charge to create an electrically neutral atom. Ions, Thomson explained, must be atoms which have an excess or shortage of electrons. Discovery of the nucleus The electrons in the atom logically had to be balanced out by a commensurate amount of positive charge, but Thomson had no idea where this positive charge came from, so he tentatively proposed that it was everywhere in the atom, the atom being in the shape of a sphere. This was the mathematically simplest hypothesis to fit the available evidence, or lack thereof. Following from this, Thomson imagined that the balance of electrostatic forces would distribute the electrons throughout the sphere in a more or less even manner. Thomson's model is popularly known as the plum pudding model, though neither Thomson nor his colleagues used this analogy. 
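The whole-number ratios Dalton relied on can be recovered from the mass percentages quoted earlier for the tin and iron oxides. The sketch below is a minimal illustration using only those figures; it normalizes each oxide to grams of oxygen per 100 g of metal.

```python
def oxygen_per_100g(pct_metal: float, pct_oxygen: float) -> float:
    """Grams of oxygen combined with 100 g of the metal, from mass percentages."""
    return 100.0 * pct_oxygen / pct_metal

# Mass percentages quoted in the text: (percent metal, percent oxygen)
oxides = {
    "grey tin oxide":   (88.1, 11.9),
    "white tin oxide":  (78.7, 21.3),
    "black iron oxide": (78.1, 21.9),
    "red iron oxide":   (70.4, 29.6),
}

for name, (pct_m, pct_o) in oxides.items():
    print(f"{name}: {oxygen_per_100g(pct_m, pct_o):.1f} g oxygen per 100 g metal")

# The tin oxides give about 13.5 g and 27 g (a 1:2 ratio);
# the iron oxides give about 28 g and 42 g (a 2:3 ratio).
```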
Thomson's model was incomplete, it was unable to predict any other properties of the elements such as emission spectra and valencies. It was soon rendered obsolete by the discovery of the atomic nucleus. Between 1908 and 1913, Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden performed a series of experiments in which they bombarded thin foils of metal with a beam of alpha particles. They did this to measure the scattering patterns of the alpha particles. They spotted a small number of alpha particles being deflected by angles greater than 90°. This shouldn't have been possible according to the Thomson model of the atom, whose charges were too diffuse to produce a sufficiently strong electric field. The deflections should have all been negligible. Rutherford proposed that the positive charge of the atom is concentrated in a tiny volume at the center of the atom and that the electrons surround this nucleus in a diffuse cloud. This nucleus carried almost all of the atom's mass, the electrons being so very light. Only such an intense concentration of charge, anchored by its high mass, could produce an electric field that could deflect the alpha particles so strongly. Bohr model A problem in classical mechanics is that an accelerating charged particle radiates electromagnetic radiation, causing the particle to lose kinetic energy. Circular motion counts as acceleration, which means that an electron orbiting a central charge should spiral down into that nucleus as it loses speed. In 1913, the physicist Niels Bohr proposed a new model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. This quantization was used to explain why the electrons' orbits are stable and why elements absorb and emit electromagnetic radiation in discrete spectra. Bohr's model could only predict the emission spectra of hydrogen, not atoms with more than one electron. Discovery of protons and neutrons Back in 1815, William Prout observed that the atomic weights of many elements were multiples of hydrogen's atomic weight, which is in fact true for all of them if one takes isotopes into account. In 1898, J. J. Thomson found that the positive charge of a hydrogen ion is equal to the negative charge of an electron, and these were then the smallest known charged particles. Thomson later found that the positive charge in an atom is a positive multiple of an electron's negative charge. In 1913, Henry Moseley discovered that the frequencies of X-ray emissions from an excited atom were a mathematical function of its atomic number and hydrogen's nuclear charge. In 1919 Rutherford bombarded nitrogen gas with alpha particles and detected hydrogen ions being emitted from the gas, and concluded that they were produced by alpha particles hitting and splitting the nuclei of the nitrogen atoms. These observations led Rutherford to conclude that the hydrogen nucleus is a singular particle with a positive charge equal to the electron's negative charge. He named this particle "proton" in 1920. The number of protons in an atom (which Rutherford called the "atomic number") was found to be equal to the element's ordinal number on the periodic table and therefore provided a simple and clear-cut way of distinguishing the elements from each other. 
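Rutherford's inference that the positive charge occupies a tiny central volume can be made quantitative with the classic distance-of-closest-approach estimate: a head-on alpha particle stops where its kinetic energy equals the Coulomb potential energy. The sketch below is a rough illustration; the 5 MeV alpha energy and the gold target (Z = 79) are assumed values typical of such experiments, not figures given in this text.

```python
COULOMB_CONST_MEV_FM = 1.44  # e^2 / (4*pi*eps0) expressed in MeV*fm

def closest_approach_fm(z_target: int, alpha_energy_mev: float) -> float:
    """Head-on distance at which an alpha particle's kinetic energy equals the Coulomb energy."""
    # The alpha particle carries charge 2e; the target nucleus carries Z*e.
    return 2 * z_target * COULOMB_CONST_MEV_FM / alpha_energy_mev

# A ~5 MeV alpha on gold (Z = 79) gets no closer than roughly 45 fm, thousands of
# times smaller than the atom itself, so the deflecting charge must be highly concentrated.
print(f"closest approach: {closest_approach_fm(79, 5.0):.1f} fm")
```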
The atomic weight of each element is higher than its proton number, so Rutherford hypothesized that the surplus weight was carried by unknown particles with no electric charge and a mass equal to that of the proton. In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick now claimed these particles as Rutherford's neutrons. The current consensus model In 1925, Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, Louis de Broglie had proposed that all particles behave like waves to some extent, and in 1926 Erwin Schroedinger used this idea to develop the Schroedinger equation, which describes electrons as three-dimensional waveforms rather than points in space. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time. This became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be found. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Structure Subatomic particles Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron. The electron is the least massive of these particles by four orders of magnitude at , with a negative electrical charge and a size that is too small to be measured using available techniques. It was the lightest particle with a positive rest mass measured, until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details. Protons have a positive charge and a mass of . The number of protons in an atom is called its atomic number. 
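The uncertainty principle described above already rules out a classical picture of an electron sitting at rest in an atom: confining its position to atomic dimensions forces a large spread in momentum. The sketch below is a rough order-of-magnitude illustration using the relation delta_x * delta_p >= hbar/2; the 0.1 nm confinement length is an assumed typical atomic radius, not a figure from this text.

```python
HBAR = 1.055e-34   # reduced Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg

def min_speed_spread_m_s(confinement_m: float) -> float:
    """Minimum velocity spread implied by delta_x * delta_p >= hbar / 2."""
    delta_p = HBAR / (2.0 * confinement_m)
    return delta_p / M_E

# Squeezing an electron into ~0.1 nm (an assumed atomic radius) forces a spread
# of several hundred kilometres per second, so it cannot sit still beside the nucleus.
print(f"minimum speed spread: {min_speed_spread_m_s(1.0e-10):.2e} m/s")
```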
Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton. Neutrons have no electrical charge and have a mass of about 1.675×10^-27 kg, roughly 1,839 times that of the electron. Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5×10^-15 m—although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick. In the Standard Model of physics, electrons are truly elementary particles with no internal structure, whereas protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3 e) and one down quark (with a charge of −1/3 e). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles. The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces. Nucleus All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07·A^(1/3) femtometres, where A is the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10^5 fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other. Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay. The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. A nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. As a result, atoms with matching numbers of protons and neutrons are more stable against decay, but with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus.
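As a quick numerical illustration of the radius formula and the quark-charge bookkeeping above, the sketch below (a minimal illustration; the choice of nuclides is arbitrary) estimates nuclear radii for a few mass numbers and checks that the quark charges sum to +1 for the proton and 0 for the neutron.

```python
R0_FM = 1.07  # femtometres, the constant in the radius formula quoted above

def nuclear_radius_fm(mass_number: int) -> float:
    """Approximate nuclear radius r = r0 * A**(1/3) in femtometres."""
    return R0_FM * mass_number ** (1.0 / 3.0)

for label, a in (("hydrogen-1", 1), ("carbon-12", 12), ("iron-56", 56), ("lead-208", 208)):
    print(f"{label:>10}: about {nuclear_radius_fm(a):.1f} fm")

# Quark charges in units of the elementary charge e
UP, DOWN = +2 / 3, -1 / 3
print("proton charge :", round(2 * UP + DOWN, 10))   # two up + one down -> +1
print("neutron charge:", round(UP + 2 * DOWN, 10))   # one up + two down -> 0
```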
The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3 to 10 keV to overcome their mutual repulsion—the coulomb barrier—and fuse together into a single nucleus. Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element. If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, E=mc2, where m is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate. The fusion of two nuclei that create larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon begins to decrease. That means that a fusion process producing a nucleus that has an atomic number higher than about 26, and a mass number higher than about 60, is an endothermic process. Thus, more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star. Electron cloud The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations. Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured. Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation. Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. 
Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines. The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 million eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals. Properties Nuclear properties By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. For example, all hydrogen atoms admit exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form, also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element oganesson. All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible. About 339 nuclides occur naturally on Earth, of which 251 (about 74%) have not been observed to decay, and are referred to as "stable isotopes". Only 90 nuclides are stable theoretically, while another 161 (bringing the total to 251) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 35 radioactive nuclides have half-lives longer than 100 million years, and are long-lived enough to have been present since the birth of the Solar System. This collection of 286 nuclides are known as primordial nuclides. Finally, an additional 53 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14). For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.1 stable isotopes per element. Twenty-six "monoisotopic elements" have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes. Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confers unusual stability on the nuclide. 
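The comparison above between 13.6 eV and 2.23 MeV spans the two energy scales of the atom: electron binding versus nuclear binding. The sketch below is a minimal illustration; the Bohr-level formula E_n = -13.6 eV / n^2 and the deuteron mass defect of about 0.00239 u are standard textbook values assumed here, not quantities stated in the text.

```python
# Electron side: Bohr-model hydrogen levels (textbook formula, assumed here)
GROUND_STATE_EV = 13.6   # hydrogen ionization energy in eV, as quoted above
HC_EV_NM = 1239.8        # Planck constant times c, in eV*nm

def hydrogen_level_ev(n: int) -> float:
    """Energy of hydrogen level n in the Bohr model (negative = bound)."""
    return -GROUND_STATE_EV / n**2

h_alpha_ev = hydrogen_level_ev(3) - hydrogen_level_ev(2)  # energy released dropping n=3 -> n=2
print(f"H-alpha photon: {h_alpha_ev:.2f} eV, wavelength {HC_EV_NM / h_alpha_ev:.0f} nm")

# Nuclear side: deuteron binding energy from its mass defect via E = m*c^2
C = 2.998e8               # speed of light, m/s
U_TO_KG = 1.66054e-27     # kilograms per atomic mass unit
J_PER_MEV = 1.602e-13
DEUTERON_DEFECT_U = 0.00239  # assumed literature value of the deuteron mass defect

binding_mev = DEUTERON_DEFECT_U * U_TO_KG * C**2 / J_PER_MEV
print(f"deuteron binding energy: {binding_mev:.2f} MeV")  # ~2.2 MeV, as quoted above
print(f"nuclear / electronic energy scale: {binding_mev * 1e6 / GROUND_STATE_EV:,.0f}x")
```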
Of the 251 known stable nuclides, only four have both an odd number of protons and odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10, and nitrogen-14. (Tantalum-180m is odd-odd and observationally stable, but is predicted to decay with a very long half-life.) Also, only four naturally occurring, radioactive odd-odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138, and lutetium-176. Most odd-odd nuclei are highly unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects. Mass The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons). The actual mass of an atom at rest is often expressed in daltons (Da), also called the unified atomic mass unit (u). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66×10^-27 kg. Hydrogen-1 (the lightest isotope of hydrogen, which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 Da. The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 atom is roughly 14 Da), but this number will not be exactly an integer except (by definition) in the case of carbon-12. The heaviest stable atom is lead-208, with a mass of about 207.977 Da. As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022×10^23, the Avogadro constant). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 Da, and so a mole of carbon-12 atoms weighs exactly 0.012 kg. Shape and size Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus. This assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin. On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right). Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm. When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations.
Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have been shown to occur for sulfur ions and chalcogen ions in pyrite-type compounds. Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope, although individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width. A single drop of water contains about 2 sextillion (2×10^21) atoms of oxygen, and twice the number of hydrogen atoms. A single carat diamond with a mass of 2×10^-4 kg (0.2 g) contains about 10 sextillion (10^22) atoms of carbon. If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple. Radioactive decay Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm. The most common forms of radioactive decay are: Alpha decay: this process is caused when the nucleus emits an alpha particle, which is a helium nucleus consisting of two protons and two neutrons. The result of the emission is a new element with a lower atomic number. Beta decay (and electron capture): these processes are regulated by the weak force, and result from a transformation of a neutron into a proton, or a proton into a neutron. The neutron to proton transition is accompanied by the emission of an electron and an antineutrino, while the proton to neutron transition (except in electron capture) causes the emission of a positron and a neutrino. The electron or positron emissions are called beta particles. Beta decay either increases or decreases the atomic number of the nucleus by one. Electron capture is more common than positron emission, because it requires less energy. In this type of decay, an electron is absorbed by the nucleus, rather than a positron emitted from the nucleus. A neutrino is still emitted in this process, and a proton changes to a neutron. Gamma decay: this process results from a change in the energy level of the nucleus to a lower state, resulting in the emission of electromagnetic radiation. The excited state of a nucleus which results in gamma emission usually occurs following the emission of an alpha or a beta particle. Thus, gamma decay usually follows alpha or beta decay. Other, rarer types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission that allows excited nuclei to lose energy in a different way is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission. Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay.
This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth. Magnetic moment Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin 1/2 ħ, or "spin-1/2". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin. The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field, but the most dominant contribution comes from electron spin. Because electrons obey the Pauli exclusion principle, in which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin up state and the other in the opposite, spin down state. Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with an even number of electrons. In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field. The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. Normally nuclei with spin are aligned in random directions because of thermal equilibrium, but for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging. Energy levels The potential energy of an electron in an atom is negative relative to its value when the electron is infinitely far from the nucleus; it varies roughly in inverse proportion to the distance and reaches its minimum inside the nucleus. In the quantum-mechanical model, a bound electron can occupy only a set of states centered on the nucleus, and each state corresponds to a specific energy level; see time-independent Schrödinger equation for a theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, i.e. a stationary state, while an electron transition to a higher level results in an excited state.
The electron's energy increases along with n because the (average) distance to the nucleus increases. Dependence of the energy on is caused not by the electrostatic potential of the nucleus, but by interaction between electrons. For an electron to transition between two different states, e.g. ground state to first excited state, it must absorb or emit a photon at an energy matching the difference in the potential energy of those levels, according to the Niels Bohr model, what can be precisely calculated by the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties. The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum. Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors. When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a view that does not include the continuous spectrum in the background, instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined. Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron. When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines. The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect. If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band. Valence and bonding behavior Valency is the combining power of an element. 
It is determined by the number of bonds it can form to other atoms or groups. The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells. For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one-electron more than a filled shell, and others that are one-electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. Many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds. The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases. States Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas. Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond. Gaseous allotropes exist as well, such as dioxygen and ozone. At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale. This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior. Identification While atoms are too small to be seen, devices such as the scanning tunneling microscope (STM) enable their visualization at the surfaces of solids. The microscope uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would be insurmountable in the classical perspective. Electrons tunnel through the vacuum between two biased electrodes, providing a tunneling current that is exponentially dependent on their separation. One electrode is a sharp tip ideally ending with a single atom. At each point of the scan of the surface the tip's height is adjusted so as to keep the tunneling current at a set value. How much the tip moves to and away from the surface is interpreted as the height profile. For low bias, the microscope images the averaged electron orbitals across closely packed energy levels—the local density of the electronic states near the Fermi level. Because of the distances involved, both electrodes need to be extremely stable; only then periodicities can be observed that correspond to individual atoms. The method alone is not chemically specific, and cannot identify the atomic species present at the surface. 
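The exponential dependence of the tunneling current on the tip–surface gap, noted above, is what gives the STM its single-atom height sensitivity. The sketch below is a rough illustration under the standard one-dimensional barrier approximation I proportional to exp(-2*kappa*d), with kappa = sqrt(2*m*phi)/hbar; neither the formula nor the assumed 4.5 eV work function comes from this text.

```python
import math

HBAR = 1.055e-34   # J*s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # joules per electronvolt

def decay_constant_per_m(work_function_ev: float) -> float:
    """Inverse decay length kappa for tunneling through a barrier of the given height."""
    return math.sqrt(2.0 * M_E * work_function_ev * EV) / HBAR

def current_ratio(gap_increase_m: float, work_function_ev: float = 4.5) -> float:
    """Factor by which the tunneling current falls when the gap widens by gap_increase_m."""
    return math.exp(-2.0 * decay_constant_per_m(work_function_ev) * gap_increase_m)

# Widening the gap by 0.1 nm (about one atomic radius) cuts the current by roughly
# an order of magnitude, which is why the feedback loop resolves single-atom steps.
print(f"current ratio for a +0.1 nm gap change: {current_ratio(0.1e-9):.2f}")
```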
Atoms can be easily identified by their mass. If an atom is ionized by removing one of its electrons, its trajectory when it passes through a magnetic field will bend. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis. The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry. Electron emission techniques such as X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES), which measure the binding energies of the core electrons, are used to identify the atomic species present in a sample in a non-destructive way. With proper focusing both can be made area-specific. Another such method is electron energy loss spectroscopy (EELS), which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample. Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. These colors can be replicated using a gas-discharge lamp containing the same element. Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth. Origin and current state Baryonic matter forms about 4% of the total energy density of the observable universe, with an average density of about 0.25 particles/m3 (mostly protons and electrons). Within a galaxy such as the Milky Way, particles have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10^5 to 10^9 atoms/m3. The Sun is believed to be inside the Local Bubble, so the density in the solar neighborhood is only about 10^3 atoms/m3. Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's baryonic matter is concentrated inside stars, where conditions are unfavorable for atomic matter. The total baryonic mass is about 10% of the mass of the galaxy; the remainder of the mass is an unknown dark matter. High temperature inside stars makes most "atoms" fully ionized, that is, separates all electrons from the nuclei. In stellar remnants—with the exception of their surface layers—immense pressure makes electron shells impossible. Formation Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. In about three minutes Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron. The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons.
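Returning to the mass spectrometer described at the start of this identification passage: balancing the magnetic force against the centripetal force gives r = m·v/(q·B) for a singly charged ion, so heavier isotopes follow wider arcs. The sketch below is a minimal illustration; the field strength and ion speed are assumed values, not figures from the text.

```python
E_CHARGE = 1.602e-19    # elementary charge, C
U_TO_KG = 1.66054e-27   # kilograms per atomic mass unit

def bend_radius_m(mass_u: float, speed_m_s: float, field_t: float) -> float:
    """Radius of circular motion of a singly charged ion: r = m*v / (q*B)."""
    return mass_u * U_TO_KG * speed_m_s / (E_CHARGE * field_t)

SPEED = 1.0e5   # m/s, assumed ion speed after acceleration
FIELD = 0.5     # tesla, assumed magnetic field strength

for label, mass_u in (("carbon-12", 12.0), ("carbon-13", 13.0)):
    print(f"{label}: r = {bend_radius_m(mass_u, SPEED, FIELD) * 100:.2f} cm")
```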
Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly, bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei. Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple-alpha process) the sequence of elements from carbon up to iron; see stellar nucleosynthesis for details. Isotopes such as lithium-6, as well as some beryllium and boron, are generated in space through cosmic ray spallation. This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected. Elements heavier than iron were produced in supernovae and colliding neutron stars through the r-process, and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei. Elements such as lead formed largely through the radioactive decay of heavier elements. Earth Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating. Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay. There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are they results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere. Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions. Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth. Transuranic elements have radioactive lifetimes shorter than the current age of the Earth and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust. Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore. The Earth contains approximately 1.33×10^50 atoms. Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals. This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter. Rare and theoretical forms Superheavy elements All nuclides with atomic numbers higher than 82 (lead) are known to be radioactive.
No nuclide with an atomic number exceeding 92 (uranium) exists on Earth as a primordial nuclide, and heavier elements generally have shorter half-lives. Nevertheless, an "island of stability" encompassing relatively long-lived isotopes of superheavy elements with atomic numbers 110 to 114 might exist. Predictions for the half-life of the most stable nuclide on the island range from a few minutes to millions of years. In any case, superheavy elements (with Z > 104) would not exist due to increasing Coulomb repulsion (which results in spontaneous fission with increasingly short half-lives) in the absence of any stabilizing effects. Exotic matter Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The first causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature. In 1996, the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva. Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test fundamental predictions of physics.
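One consequence of replacing the electron with a muon, as in the muonic atom mentioned above, is that the orbital radius shrinks roughly in proportion to the orbiting particle's reduced mass. The sketch below is a rough illustration using the Bohr-model scaling (radius inversely proportional to reduced mass); the muon and proton mass ratios are standard values assumed here, not figures from the text.

```python
BOHR_RADIUS_PM = 52.9    # ordinary hydrogen Bohr radius, picometres
M_MU_OVER_M_E = 206.8    # muon mass in electron masses (assumed standard value)
M_P_OVER_M_E = 1836.2    # proton mass in electron masses (assumed standard value)

def reduced_mass(m_orbiter: float, m_nucleus: float) -> float:
    """Reduced mass of a two-body system, in units of the electron mass."""
    return m_orbiter * m_nucleus / (m_orbiter + m_nucleus)

# The Bohr radius scales as 1 / (reduced mass), so muonic hydrogen is ~186 times smaller.
scale = reduced_mass(1.0, M_P_OVER_M_E) / reduced_mass(M_MU_OVER_M_E, M_P_OVER_M_E)
radius_pm = BOHR_RADIUS_PM * scale
print(f"muonic hydrogen radius: about {radius_pm:.3f} pm ({radius_pm * 1000:.0f} fm)")
```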
Physical sciences
Science and medicine
null
903
https://en.wikipedia.org/wiki/Arable%20land
Arable land
Arable land (from the Latin arabilis, "able to be ploughed") is any land capable of being ploughed and used to grow crops. Alternatively, for the purposes of agricultural statistics, the term often has a more precise definition. A definition appearing in the Eurostat glossary refers to actual rather than potential uses: "land worked (ploughed or tilled) regularly, generally under a system of crop rotation". In Britain, arable land has traditionally been contrasted with pasturable land such as heaths, which could be used for sheep-rearing but not as farmland. Arable land is vulnerable to land degradation, and some types of un-arable land can be enriched to create useful land. Climate change and biodiversity loss are driving pressure on arable land. By country According to the Food and Agriculture Organization of the United Nations, in 2013, the world's arable land amounted to 1.407 billion hectares, out of a total of 4.924 billion hectares of land used for agriculture. Arable land (hectares per person) Non-arable land Agricultural land that is not arable according to the FAO definition above includes: Meadows and pastures: land used as pasture and grazed range, and those natural grasslands and sedge meadows that are used for hay production in some regions. Permanent cropland that produces crops from woody vegetation, e.g. orchard land, vineyards, coffee plantations, rubber plantations, and land producing nut trees. Other non-arable land includes land that is not suitable for any agricultural use. Land that is not arable, in the sense of lacking capability or suitability for cultivation for crop production, has one or more limitations: a lack of sufficient freshwater for irrigation, stoniness, steepness, adverse climate, excessive wetness with the impracticality of drainage, excessive salts, or a combination of these, among others. Although such limitations may preclude cultivation, and some will in some cases preclude any agricultural use, large areas unsuitable for cultivation may still be agriculturally productive. For example, United States NRCS statistics indicate that about 59 percent of US non-federal pasture and unforested rangeland is unsuitable for cultivation, yet such land has value for grazing of livestock. In British Columbia, Canada, 41 percent of the provincial Agricultural Land Reserve area is unsuitable for the production of cultivated crops, but is suitable for uncultivated production of forage usable by grazing livestock. Similar examples can be found in many rangeland areas elsewhere. Changes in arability Land conversion Land incapable of being cultivated for the production of crops can sometimes be converted to arable land. New arable land makes more food and can reduce starvation. This outcome also makes a country more self-sufficient and politically independent, because food importation is reduced. Making non-arable land arable often involves digging new irrigation canals and new wells, aqueducts, desalination plants, planting trees for shade in the desert, hydroponics, fertilizer, nitrogen fertilizer, pesticides, reverse osmosis water processors, PET film insulation or other insulation against heat and cold, digging ditches and hills for protection against the wind, and installing greenhouses with internal light and heat for protection against the cold outside and to provide light in cloudy areas. Such modifications are often prohibitively expensive.
An alternative is the seawater greenhouse, which desalinates water through evaporation and condensation using solar energy as the only energy input. This technology is optimized to grow crops on desert land close to the sea. Such artificial measures do not, strictly speaking, make the land arable: rock remains rock, and soil too shallow to be turned over still cannot be tilled. In effect, these measures amount to open-air hydroponics with non-recycled water; they are limited in scale and duration, and they tend to accumulate trace materials in the soil that cause deoxygenation either there or elsewhere. The use of vast amounts of fertilizer may have unintended consequences for the environment by devastating rivers, waterways, and river endings through the accumulation of non-degradable toxins and nitrogen-bearing molecules that remove oxygen and cause non-aerobic processes to form. Examples of infertile non-arable land being turned into fertile arable land include: Aran Islands: These islands off the west coast of Ireland (not to be confused with the Isle of Arran in Scotland's Firth of Clyde) were unsuitable for arable farming because they were too rocky. The people covered the islands with a shallow layer of seaweed and sand from the ocean. Today, crops are grown there, even though the islands are still considered non-arable. Israel: The construction of desalination plants along Israel's coast allowed agriculture in some areas that were formerly desert. The desalination plants, which remove the salt from ocean water, have produced a new source of water for farming, drinking, and washing. Slash and burn agriculture: this uses nutrients in wood ash, but the gain in fertility is exhausted within a few years. Terra preta: fertile tropical soils produced by adding charcoal. Land degradation Examples Examples of fertile arable land being turned into infertile land include: Droughts such as the "Dust Bowl" of the Great Depression in the US turned farmland into desert. Each year, arable land is lost due to desertification and human-induced erosion. Improper irrigation of farmland can wick the sodium, calcium, and magnesium from the soil and water to the surface. This process steadily concentrates salt in the root zone, decreasing productivity for crops that are not salt-tolerant. Rainforest deforestation: The fertile tropical forests are converted into infertile desert land. For example, Madagascar's central highland plateau has become virtually barren (about ten percent of the country) as a result of slash-and-burn deforestation, an element of shifting cultivation practiced by many natives.
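The FAO totals quoted earlier in this article reduce to a per-person figure once a world population is assumed. The sketch below is a minimal illustration; the 2013 world population of roughly 7.2 billion is an assumption, not a number from the text.

```python
ARABLE_HA = 1.407e9        # world arable land in 2013, hectares (from the text)
AGRICULTURAL_HA = 4.924e9  # total agricultural land in 2013, hectares (from the text)
POPULATION_2013 = 7.2e9    # assumed world population in 2013

print(f"arable share of agricultural land: {ARABLE_HA / AGRICULTURAL_HA:.1%}")
print(f"arable land per person: {ARABLE_HA / POPULATION_2013:.2f} ha")
```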
Technology
Basics_2
null
904
https://en.wikipedia.org/wiki/Aluminium
Aluminium
Aluminium (or aluminum in North American English) is a chemical element; it has symbol Al and atomic number 13. Aluminium has a density lower than that of other common metals, about one-third that of steel. It has a great affinity towards oxygen, forming a protective layer of oxide on the surface when exposed to air. Aluminium visually resembles silver, both in its color and in its great ability to reflect light. It is soft, nonmagnetic, and ductile. It has one stable isotope, 27Al, which is highly abundant, making aluminium the twelfth-most common element in the universe. The radioactivity of 26Al leads to it being used in radiometric dating. Chemically, aluminium is a post-transition metal in the boron group; as is common for the group, aluminium forms compounds primarily in the +3 oxidation state. The aluminium cation Al3+ is small and highly charged; as such, it has more polarizing power, and bonds formed by aluminium have a more covalent character. The strong affinity of aluminium for oxygen leads to the common occurrence of its oxides in nature. Aluminium is found on Earth primarily in rocks in the crust, where it is the third-most abundant element, after oxygen and silicon, rather than in the mantle, and virtually never as the free metal. It is obtained industrially by mining bauxite, a sedimentary rock rich in aluminium minerals. The discovery of aluminium was announced in 1825 by Danish physicist Hans Christian Ørsted. The first industrial production of aluminium was initiated by French chemist Henri Étienne Sainte-Claire Deville in 1856. Aluminium became much more available to the public with the Hall–Héroult process developed independently by French engineer Paul Héroult and American engineer Charles Martin Hall in 1886, and the mass production of aluminium led to its extensive use in industry and everyday life. In the First and Second World Wars, aluminium was a crucial strategic resource for aviation. In 1954, aluminium became the most produced non-ferrous metal, surpassing copper. In the 21st century, most aluminium was consumed in transportation, engineering, construction, and packaging in the United States, Western Europe, and Japan. Despite its prevalence in the environment, no living organism is known to metabolize aluminium salts, but aluminium is well tolerated by plants and animals. Because of the abundance of these salts, the potential for a biological role for them is of interest, and studies are ongoing. Physical characteristics Isotopes Of aluminium isotopes, only 27Al is stable. This situation is common for elements with an odd atomic number. It is the only primordial aluminium isotope, i.e. the only one that has existed on Earth in its current form since the formation of the planet. It is therefore a mononuclidic element and its standard atomic weight is virtually the same as that of the isotope. This makes aluminium very useful in nuclear magnetic resonance (NMR), as its single stable isotope has a high NMR sensitivity. The standard atomic weight of aluminium is low in comparison with many other metals. All other isotopes of aluminium are radioactive. The most stable of these is 26Al: while it was present along with stable 27Al in the interstellar medium from which the Solar System formed, having been produced by stellar nucleosynthesis as well, its half-life is only 717,000 years and therefore a detectable amount has not survived since the formation of the planet. 
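To make that last point concrete, a back-of-the-envelope estimate (using the half-life quoted above and an age of roughly 4.5 billion years for the Solar System, a figure cited later in this article) gives the surviving fraction of any primordial 26Al:

$$\frac{N}{N_0} = \left(\frac{1}{2}\right)^{t/t_{1/2}} = \left(\frac{1}{2}\right)^{4.5\times 10^{9}\,\text{yr}\;/\;7.17\times 10^{5}\,\text{yr}} \approx \left(\frac{1}{2}\right)^{6300} \approx 10^{-1900},$$

which is effectively zero, consistent with the statement that no detectable primordial amount remains.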
However, minute traces of 26Al are produced from argon in the atmosphere by spallation caused by cosmic ray protons. The ratio of 26Al to 10Be has been used for radiodating of geological processes over 10^5 to 10^6 year time scales, in particular transport, deposition, sediment storage, burial times, and erosion. Most meteorite scientists believe that the energy released by the decay of 26Al was responsible for the melting and differentiation of some asteroids after their formation 4.55 billion years ago. The remaining isotopes of aluminium, with mass numbers ranging from 21 to 43, all have half-lives well under an hour. Three metastable states are known, all with half-lives under a minute. Electron shell An aluminium atom has 13 electrons, arranged in an electron configuration of [Ne] 3s2 3p1, with three electrons beyond a stable noble gas configuration. Accordingly, the combined first three ionization energies of aluminium are far lower than the fourth ionization energy alone. Such an electron configuration is shared with the other well-characterized members of its group, boron, gallium, indium, and thallium; it is also expected for nihonium. Aluminium can surrender its three outermost electrons in many chemical reactions (see below). The electronegativity of aluminium is 1.61 (Pauling scale). A free aluminium atom has a radius of 143 pm. With the three outermost electrons removed, the radius shrinks to 39 pm for a 4-coordinated atom or 53.5 pm for a 6-coordinated atom. At standard temperature and pressure, aluminium atoms (when not affected by atoms of other elements) form a face-centered cubic crystal system bound by metallic bonding provided by atoms' outermost electrons; hence aluminium (at these conditions) is a metal. This crystal system is shared by many other metals, such as lead and copper; the size of a unit cell of aluminium is comparable to that of those other metals. The system, however, is not shared by the other members of its group: boron has ionization energies too high to allow metallization, thallium has a hexagonal close-packed structure, and gallium and indium have unusual structures that are not close-packed like those of aluminium and thallium. The few electrons that are available for metallic bonding in aluminium are a probable cause for it being soft with a low melting point and low electrical resistivity. Bulk Aluminium metal has an appearance ranging from silvery white to dull gray depending on its surface roughness. Aluminium mirrors are the most reflective of all metal mirrors for near ultraviolet and far infrared light. It is also one of the most reflective for light in the visible spectrum, nearly on par with silver in this respect, and the two therefore look similar. Aluminium is also good at reflecting solar radiation, although prolonged exposure to sunlight in air adds wear to the surface of the metal; this may be prevented if aluminium is anodized, which adds a protective layer of oxide on the surface. The density of aluminium is 2.70 g/cm3, about 1/3 that of steel, much lower than other commonly encountered metals, making aluminium parts easily identifiable through their lightness. Aluminium's low density compared to most other metals arises from the fact that its nuclei are much lighter, while difference in the unit cell size does not compensate for this difference. The only lighter metals are the metals of groups 1 and 2, which apart from beryllium and magnesium are too reactive for structural use (and beryllium is very toxic). 
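Returning to the electron-shell discussion above, the claim that the combined first three ionization energies are far lower than the fourth can be illustrated with approximate literature values (roughly 578, 1817, 2745 and 11,600 kJ/mol; these figures are not given in the text and are quoted here only as illustrative assumptions):

$$\mathrm{IE}_1 + \mathrm{IE}_2 + \mathrm{IE}_3 \approx 578 + 1817 + 2745 = 5140\ \mathrm{kJ/mol} \;\ll\; \mathrm{IE}_4 \approx 11{,}600\ \mathrm{kJ/mol},$$

which is why aluminium readily gives up three electrons to form Al3+ but never a fourth.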
Aluminium is not as strong or stiff as steel, but the low density makes up for this in the aerospace industry and for many other applications where light weight and relatively high strength are crucial. Pure aluminium is quite soft and lacking in strength. In most applications various aluminium alloys are used instead because of their higher strength and hardness. The yield strength of pure aluminium is 7–11 MPa, while aluminium alloys have yield strengths ranging from 200 MPa to 600 MPa. Aluminium is ductile, with a percent elongation of 50–70%, and malleable allowing it to be easily drawn and extruded. It is also easily machined and cast. Aluminium is an excellent thermal and electrical conductor, having around 60% the conductivity of copper, both thermal and electrical, while having only 30% of copper's density. Aluminium is capable of superconductivity, with a superconducting critical temperature of 1.2 kelvin and a critical magnetic field of about 100 gauss (10 milliteslas). It is paramagnetic and thus essentially unaffected by static magnetic fields. The high electrical conductivity, however, means that it is strongly affected by alternating magnetic fields through the induction of eddy currents. Chemistry Aluminium combines characteristics of pre- and post-transition metals. Since it has few available electrons for metallic bonding, like its heavier group 13 congeners, it has the characteristic physical properties of a post-transition metal, with longer-than-expected interatomic distances. Furthermore, as Al3+ is a small and highly charged cation, it is strongly polarizing and bonding in aluminium compounds tends towards covalency; this behavior is similar to that of beryllium (Be2+), and the two display an example of a diagonal relationship. The underlying core under aluminium's valence shell is that of the preceding noble gas, whereas those of its heavier congeners gallium, indium, thallium, and nihonium also include a filled d-subshell and in some cases a filled f-subshell. Hence, the inner electrons of aluminium shield the valence electrons almost completely, unlike those of aluminium's heavier congeners. As such, aluminium is the most electropositive metal in its group, and its hydroxide is in fact more basic than that of gallium. Aluminium also bears minor similarities to the metalloid boron in the same group: AlX3 compounds are valence isoelectronic to BX3 compounds (they have the same valence electronic structure), and both behave as Lewis acids and readily form adducts. Additionally, one of the main motifs of boron chemistry is regular icosahedral structures, and aluminium forms an important part of many icosahedral quasicrystal alloys, including the Al–Zn–Mg class. Aluminium has a high chemical affinity to oxygen, which renders it suitable for use as a reducing agent in the thermite reaction. A fine powder of aluminium reacts explosively on contact with liquid oxygen; under normal conditions, however, aluminium forms a thin oxide layer (~5 nm at room temperature) that protects the metal from further corrosion by oxygen, water, or dilute acid, a process termed passivation. Because of its general resistance to corrosion, aluminium is one of the few metals that retains silvery reflectance in finely powdered form, making it an important component of silver-colored paints. Aluminium is not attacked by oxidizing acids because of its passivation. This allows aluminium to be used to store reagents such as nitric acid, concentrated sulfuric acid, and some organic acids. 
Aluminium reacts in hot concentrated hydrochloric acid with evolution of hydrogen, and in aqueous sodium hydroxide or potassium hydroxide at room temperature to form aluminates—protective passivation under these conditions is negligible. Aqua regia also dissolves aluminium. Aluminium is corroded by dissolved chlorides, such as common sodium chloride, which is why household plumbing is never made from aluminium. The oxide layer on aluminium is also destroyed by contact with mercury due to amalgamation or with salts of some electropositive metals. As such, the strongest aluminium alloys are less corrosion-resistant due to galvanic reactions with alloyed copper, and aluminium's corrosion resistance is greatly reduced by aqueous salts, particularly in the presence of dissimilar metals. Aluminium reacts with most nonmetals upon heating, forming compounds such as aluminium nitride (AlN), aluminium sulfide (Al2S3), and the aluminium halides (AlX3). It also forms a wide range of intermetallic compounds involving metals from every group on the periodic table. Inorganic compounds The vast majority of compounds, including all aluminium-containing minerals and all commercially significant aluminium compounds, feature aluminium in the oxidation state 3+. The coordination number of such compounds varies, but generally Al3+ is either six- or four-coordinate. Almost all compounds of aluminium(III) are colorless. In aqueous solution, Al3+ exists as the hexaaqua cation [Al(H2O)6]3+, which has an approximate Ka of 10^-5. Such solutions are acidic as this cation can act as a proton donor and progressively hydrolyze until a precipitate of aluminium hydroxide, Al(OH)3, forms. This is useful for clarification of water, as the precipitate nucleates on suspended particles in the water, hence removing them. Increasing the pH even further leads to the hydroxide dissolving again as aluminate, [Al(H2O)2(OH)4]−, is formed. Aluminium hydroxide forms both salts and aluminates and dissolves in acid and alkali, as well as on fusion with acidic and basic oxides. This behavior of Al(OH)3 is termed amphoterism and is characteristic of weakly basic cations that form insoluble hydroxides and whose hydrated species can also donate their protons. One effect of this is that aluminium salts with weak acids are hydrolyzed in water to the aquated hydroxide and the corresponding nonmetal hydride: for example, aluminium sulfide yields hydrogen sulfide. However, some salts like aluminium carbonate exist in aqueous solution but are unstable as such; and only incomplete hydrolysis takes place for salts with strong acids, such as the halides, nitrate, and sulfate. For similar reasons, anhydrous aluminium salts cannot be made by heating their "hydrates": hydrated aluminium chloride is in fact not AlCl3·6H2O but [Al(H2O)6]Cl3, and the Al–O bonds are so strong that heating is not sufficient to break them and form Al–Cl bonds instead: 2 [Al(H2O)6]Cl3 → Al2O3 + 6 HCl + 9 H2O All four trihalides are well known. Unlike the structures of the three heavier trihalides, aluminium fluoride (AlF3) features six-coordinate aluminium, which explains its involatility and insolubility as well as high heat of formation. Each aluminium atom is surrounded by six fluorine atoms in a distorted octahedral arrangement, with each fluorine atom being shared between the corners of two octahedra. Such {AlF6} units also exist in complex fluorides such as cryolite, Na3AlF6. AlF3 melts at a very high temperature and is made by reaction of aluminium oxide with hydrogen fluoride gas at elevated temperature. 
With heavier halides, the coordination numbers are lower. The other trihalides are dimeric or polymeric with tetrahedral four-coordinate aluminium centers. Aluminium trichloride (AlCl3) has a layered polymeric structure below its melting point but transforms on melting to Al2Cl6 dimers. At higher temperatures those increasingly dissociate into trigonal planar AlCl3 monomers similar to the structure of BCl3. Aluminium tribromide and aluminium triiodide form Al2X6 dimers in all three phases and hence do not show such significant changes of properties upon phase change. These materials are prepared by treating aluminium with the halogen. The aluminium trihalides form many addition compounds or complexes; their Lewis acidic nature makes them useful as catalysts for the Friedel–Crafts reactions. Aluminium trichloride has major industrial uses involving this reaction, such as in the manufacture of anthraquinones and styrene; it is also often used as the precursor for many other aluminium compounds and as a reagent for converting nonmetal fluorides into the corresponding chlorides (a transhalogenation reaction). Aluminium forms one stable oxide with the chemical formula Al2O3, commonly called alumina. It can be found in nature in the mineral corundum, α-alumina; there is also a γ-alumina phase. Its crystalline form, corundum, is very hard (Mohs hardness 9), has a high melting point and very low volatility, is chemically inert, and is a good electrical insulator; it is often used in abrasives (such as toothpaste), as a refractory material, and in ceramics, as well as being the starting material for the electrolytic production of aluminium. Sapphire and ruby are impure corundum contaminated with trace amounts of other metals. The two main oxide-hydroxides, AlO(OH), are boehmite and diaspore. There are three main trihydroxides: bayerite, gibbsite, and nordstrandite, which differ in their crystalline structure (polymorphs). Many other intermediate and related structures are also known. Most are produced from ores by a variety of wet processes using acid and base. Heating the hydroxides leads to formation of corundum. These materials are of central importance to the production of aluminium and are themselves extremely useful. Some mixed oxide phases are also very useful, such as spinel (MgAl2O4), Na-β-alumina (NaAl11O17), and tricalcium aluminate (Ca3Al2O6, an important mineral phase in Portland cement). The only stable chalcogenides under normal conditions are aluminium sulfide (Al2S3), selenide (Al2Se3), and telluride (Al2Te3). All three are prepared by direct reaction of their elements at elevated temperature and quickly hydrolyze completely in water to yield aluminium hydroxide and the respective hydrogen chalcogenide. As aluminium is a small atom relative to these chalcogens, these have four-coordinate tetrahedral aluminium with various polymorphs having structures related to wurtzite, with two-thirds of the possible metal sites occupied either in an orderly (α) or random (β) fashion; the sulfide also has a γ form related to γ-alumina, and an unusual high-temperature hexagonal form where half the aluminium atoms have tetrahedral four-coordination and the other half have trigonal bipyramidal five-coordination. Four pnictides – aluminium nitride (AlN), aluminium phosphide (AlP), aluminium arsenide (AlAs), and aluminium antimonide (AlSb) – are known. They are all III-V semiconductors isoelectronic to silicon and germanium, all of which but AlN have the zinc blende structure. 
All four can be made by high-temperature (and possibly high-pressure) direct reaction of their component elements. Aluminium alloys well with most other metals (with the exception of most alkali metals and group 13 metals) and over 150 intermetallics with other metals are known. Preparation involves heating fixed metals together in certain proportion, followed by gradual cooling and annealing. Bonding in them is predominantly metallic and the crystal structure primarily depends on efficiency of packing. There are few compounds with lower oxidation states. A few aluminium(I) compounds exist: AlF, AlCl, AlBr, and AlI exist in the gaseous phase when the respective trihalide is heated with aluminium, and at cryogenic temperatures. A stable derivative of aluminium monoiodide is the cyclic adduct formed with triethylamine, Al4I4(NEt3)4. Al2O and Al2S also exist but are very unstable. Very simple aluminium(II) compounds are invoked or observed in the reactions of Al metal with oxidants. For example, aluminium monoxide, AlO, has been detected in the gas phase after explosion and in stellar absorption spectra. More thoroughly investigated are compounds of the formula R4Al2 which contain an Al–Al bond and where R is a large organic ligand. Organoaluminium compounds and related hydrides A variety of compounds of empirical formula AlR3 and AlR1.5Cl1.5 exist. The aluminium trialkyls and triaryls are reactive, volatile, and colorless liquids or low-melting solids. They catch fire spontaneously in air and react with water, thus necessitating precautions when handling them. They often form dimers, unlike their boron analogues, but this tendency diminishes for branched-chain alkyls (e.g. Pri, Bui, Me3CCH2); for example, triisobutylaluminium exists as an equilibrium mixture of the monomer and dimer. These dimers, such as trimethylaluminium (Al2Me6), usually feature tetrahedral Al centers formed by dimerization with some alkyl group bridging between both aluminium atoms. They are hard acids and react readily with ligands, forming adducts. In industry, they are mostly used in alkene insertion reactions, as discovered by Karl Ziegler, most importantly in "growth reactions" that form long-chain unbranched primary alkenes and alcohols, and in the low-pressure polymerization of ethene and propene. There are also some heterocyclic and cluster organoaluminium compounds involving Al–N bonds. The industrially most important aluminium hydride is lithium aluminium hydride (LiAlH4), which is used as a reducing agent in organic chemistry. It can be produced from lithium hydride and aluminium trichloride. The simplest hydride, aluminium hydride or alane, is not as important. It is a polymer with the formula (AlH3)n, in contrast to the corresponding boron hydride that is a dimer with the formula (BH3)2. Natural occurrence Space Aluminium's per-particle abundance in the Solar System is 3.15 ppm (parts per million). It is the twelfth most abundant of all elements and third most abundant among the elements that have odd atomic numbers, after hydrogen and nitrogen. The only stable isotope of aluminium, 27Al, is the eighteenth most abundant nucleus in the universe. It is created almost entirely after fusion of carbon in massive stars that will later become Type II supernovas: this fusion creates 26Mg, which upon capturing free protons and neutrons, becomes aluminium. Some smaller quantities of 27Al are created in hydrogen burning shells of evolved stars, where 26Mg can capture free protons. 
Essentially all aluminium now in existence is 27Al. 26Al was present in the early Solar System with abundance of 0.005% relative to 27Al but its half-life of 728,000 years is too short for any original nuclei to survive; 26Al is therefore extinct. Unlike for 27Al, hydrogen burning is the primary source of 26Al, with the nuclide emerging after a nucleus of 25Mg catches a free proton. However, the trace quantities of 26Al that do exist are the most common gamma ray emitter in the interstellar gas; if the original 26Al were still present, gamma ray maps of the Milky Way would be brighter. Earth Overall, the Earth is about 1.59% aluminium by mass (seventh in abundance by mass). Aluminium occurs in greater proportion in the Earth's crust than in the universe at large. This is because aluminium easily forms the oxide and becomes bound into rocks and stays in the Earth's crust, while less reactive metals sink to the core. In the Earth's crust, aluminium is the most abundant metallic element (8.23% by mass) and the third most abundant of all elements (after oxygen and silicon). A large number of silicates in the Earth's crust contain aluminium. In contrast, the Earth's mantle is only 2.38% aluminium by mass. Aluminium also occurs in seawater at a concentration of 0.41 µg/kg. Because of its strong affinity for oxygen, aluminium is almost never found in the elemental state; instead it is found in oxides or silicates. Feldspars, the most common group of minerals in the Earth's crust, are aluminosilicates. Aluminium also occurs in the minerals beryl, cryolite, garnet, spinel, and turquoise. Impurities in Al2O3, such as chromium and iron, yield the gemstones ruby and sapphire, respectively. Native aluminium metal is extremely rare and can only be found as a minor phase in low oxygen fugacity environments, such as the interiors of certain volcanoes. Native aluminium has been reported in cold seeps in the northeastern continental slope of the South China Sea. It is possible that these deposits resulted from bacterial reduction of tetrahydroxoaluminate Al(OH)4−. Although aluminium is a common and widespread element, not all aluminium minerals are economically viable sources of the metal. Almost all metallic aluminium is produced from the ore bauxite (AlOx(OH)3–2x). Bauxite occurs as a weathering product of low iron and silica bedrock in tropical climatic conditions. In 2017, most bauxite was mined in Australia, China, Guinea, and India. History The history of aluminium has been shaped by usage of alum. The first written record of alum, made by Greek historian Herodotus, dates back to the 5th century BCE. The ancients are known to have used alum as a dyeing mordant and for city defense. After the Crusades, alum, an indispensable good in the European fabric industry, was a subject of international commerce; it was imported to Europe from the eastern Mediterranean until the mid-15th century. The nature of alum remained unknown. Around 1530, Swiss physician Paracelsus suggested alum was a salt of an earth of alum. In 1595, German doctor and chemist Andreas Libavius experimentally confirmed this. In 1722, German chemist Friedrich Hoffmann announced his belief that the base of alum was a distinct earth. In 1754, German chemist Andreas Sigismund Marggraf synthesized alumina by boiling clay in sulfuric acid and subsequently adding potash. Attempts to produce aluminium date back to 1760. The first successful attempt, however, was completed in 1824 by Danish physicist and chemist Hans Christian Ørsted. 
He reacted anhydrous aluminium chloride with potassium amalgam, yielding a lump of metal looking similar to tin. He presented his results and demonstrated a sample of the new metal in 1825. In 1827, German chemist Friedrich Wöhler repeated Ørsted's experiments but did not identify any aluminium. (The reason for this inconsistency was only discovered in 1921.) He conducted a similar experiment in the same year by mixing anhydrous aluminium chloride with potassium (the Wöhler process) and produced a powder of aluminium. In 1845, he was able to produce small pieces of the metal and described some physical properties of this metal. For many years thereafter, Wöhler was credited as the discoverer of aluminium. As Wöhler's method could not yield great quantities of aluminium, the metal remained rare; its cost exceeded that of gold. The first industrial production of aluminium was established in 1856 by French chemist Henri Etienne Sainte-Claire Deville and companions. Deville had discovered that aluminium trichloride could be reduced by sodium, which was more convenient and less expensive than potassium, which Wöhler had used. Even then, aluminium was still not of great purity and produced aluminium differed in properties by sample. Because of its electricity-conducting capacity, aluminium was used as the cap of the Washington Monument, completed in 1885, the tallest building in the world at the time. The non-corroding metal cap was intended to serve as a lightning rod peak. The first industrial large-scale production method was independently developed in 1886 by French engineer Paul Héroult and American engineer Charles Martin Hall; it is now known as the Hall–Héroult process. The Hall–Héroult process converts alumina into metal. Austrian chemist Carl Joseph Bayer discovered a way of purifying bauxite to yield alumina, now known as the Bayer process, in 1889. Modern production of aluminium is based on the Bayer and Hall–Héroult processes. As large-scale production caused aluminium prices to drop, the metal became widely used in jewelry, eyeglass frames, optical instruments, tableware, and foil, and other everyday items in the 1890s and early 20th century. Aluminium's ability to form hard yet light alloys with other metals provided the metal with many uses at the time. During World War I, major governments demanded large shipments of aluminium for light strong airframes; during World War II, demand by major governments for aviation was even higher. By the mid-20th century, aluminium had become a part of everyday life and an essential component of housewares. In 1954, production of aluminium surpassed that of copper, historically second in production only to iron, making it the most produced non-ferrous metal. During the mid-20th century, aluminium emerged as a civil engineering material, with building applications in both basic construction and interior finish work, and increasingly being used in military engineering, for both airplanes and land armor vehicle engines. Earth's first artificial satellite, launched in 1957, consisted of two separate aluminium semi-spheres joined and all subsequent space vehicles have used aluminium to some extent. The aluminium can was invented in 1956 and employed as a storage for drinks in 1958. Throughout the 20th century, the production of aluminium rose rapidly: while the world production of aluminium in 1900 was 6,800 metric tons, the annual production first exceeded 100,000 metric tons in 1916; 1,000,000 tons in 1941; 10,000,000 tons in 1971. 
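Those production figures imply a remarkably steady expansion; a rough calculation using only the 1900 and 1971 totals quoted above gives the average annual growth factor:

$$\left(\frac{10{,}000{,}000\ \text{t}}{6{,}800\ \text{t}}\right)^{1/(1971-1900)} \approx 1470^{1/71} \approx 1.11,$$

that is, production grew by roughly 11% per year, sustained over about seven decades.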
In the 1970s, the increased demand for aluminium made it an exchange commodity; it entered the London Metal Exchange, the oldest industrial metal exchange in the world, in 1978. The output continued to grow: the annual production of aluminium exceeded 50,000,000 metric tons in 2013. The real price for aluminium declined from $14,000 per metric ton in 1900 to $2,340 in 1948 (in 1998 United States dollars). Extraction and processing costs were lowered through technological progress and economies of scale. However, the need to exploit lower-grade, poorer-quality deposits and rapidly increasing input costs (above all, energy) increased the net cost of aluminium; the real price began to grow in the 1970s with the rise of energy cost. Production moved from the industrialized countries to countries where production was cheaper. Production costs in the late 20th century changed because of advances in technology, lower energy prices, exchange rates of the United States dollar, and alumina prices. The BRIC countries' combined share in primary production and primary consumption grew substantially in the first decade of the 21st century. China is accumulating an especially large share of the world's production thanks to an abundance of resources, cheap energy, and governmental stimuli; it also increased its consumption share from 2% in 1972 to 40% in 2010. In the United States, Western Europe, and Japan, most aluminium was consumed in transportation, engineering, construction, and packaging. In 2021, prices for industrial metals such as aluminium soared to near-record levels as energy shortages in China drove up costs for electricity. Etymology The names aluminium and aluminum are derived from the word alumine, an obsolete term for alumina, the primary naturally occurring oxide of aluminium. Alumine was borrowed from French, which in turn derived it from alumen, the classical Latin name for alum, the mineral from which it was collected. The Latin word alumen stems from the Proto-Indo-European root *alu- meaning "bitter" or "beer". Origins British chemist Humphry Davy, who performed a number of experiments aimed at isolating the metal, is credited as the person who named the element. The first name proposed for the metal to be isolated from alum was alumium, which Davy suggested in an 1808 article on his electrochemical research, published in Philosophical Transactions of the Royal Society. It appeared that the name was created from the English word alum and the Latin suffix -ium; but it was customary then to give elements names originating in Latin, so this name was not adopted universally. This name was criticized by contemporary chemists from France, Germany, and Sweden, who insisted the metal should be named for the oxide, alumina, from which it would be isolated. The English name alum does not come directly from Latin, whereas alumine/alumina comes from the Latin word alumen (upon declension, alumen changes to alumin-). One example was Essai sur la Nomenclature chimique (July 1811), written in French by a Swedish chemist, Jöns Jacob Berzelius, in which the name aluminium is given to the element that would be synthesized from alum. (Another article in the same journal issue also refers to the metal whose oxide is the basis of sapphire, i.e. the same metal, as to aluminium.) A January 1811 summary of one of Davy's lectures at the Royal Society mentioned the name aluminium as a possibility. The next year, Davy published a chemistry textbook in which he used the spelling aluminum. 
Both spellings have coexisted since. Their usage is currently regional: aluminum dominates in the United States and Canada; aluminium is prevalent in the rest of the English-speaking world. Spelling In 1812, British scientist Thomas Young wrote an anonymous review of Davy's book, in which he proposed the name aluminium instead of aluminum, which he thought had a "less classical sound". This name persisted: although the aluminum spelling was occasionally used in Britain, the American scientific language used aluminum from the start. Ludwig Wilhelm Gilbert had proposed Thonerde-metall, after the German "Thonerde" for alumina, in his Annalen der Physik but that name never caught on at all even in Germany. Joseph W. Richards in 1891 found just one occurrence of argillium in Swedish, from the French "argille" for clay. The French themselves had used aluminium from the start. However, in England and Germany Davy's spelling aluminum was initially used until German chemist Friedrich Wöhler published his account of the Wöhler process in 1827 in which he used the spelling aluminium, which caused that spelling's largely wholesale adoption in England and Germany, with the exception of a small number of what Richards characterized as "patriotic" English chemists who were "averse to foreign innovations" and occasionally still used aluminum. Most scientists throughout the world used aluminium in the 19th century; and it was entrenched in several other European languages, such as French, German, and Dutch. In 1828, an American lexicographer, Noah Webster, entered only the aluminum spelling in his American Dictionary of the English Language. In the 1830s, the aluminum spelling gained usage in the United States; by the 1860s, it had become the more common spelling there outside science. In 1892, Hall used the aluminum spelling in his advertising handbill for his new electrolytic method of producing the metal, despite his constant use of the aluminium spelling in all the patents he filed between 1886 and 1903. It is unknown whether this spelling was introduced by mistake or intentionally, but Hall preferred aluminum since its introduction because it resembled platinum, the name of a prestigious metal. By 1890, both spellings had been common in the United States, the aluminium spelling being slightly more common; by 1895, the situation had reversed; by 1900, aluminum had become twice as common as aluminium; in the next decade, the aluminum spelling dominated American usage. In 1925, the American Chemical Society adopted this spelling. The International Union of Pure and Applied Chemistry (IUPAC) adopted aluminium as the standard international name for the element in 1990. In 1993, they recognized aluminum as an acceptable variant; the most recent 2005 edition of the IUPAC nomenclature of inorganic chemistry also acknowledges this spelling. IUPAC official publications use the aluminium spelling as primary, and they list both where it is appropriate. Production and refinement The production of aluminium starts with the extraction of bauxite rock from the ground. The bauxite is processed and transformed using the Bayer process into alumina, which is then processed using the Hall–Héroult process, resulting in the final aluminium. Aluminium production is highly energy-consuming, and so the producers tend to locate smelters in places where electric power is both plentiful and inexpensive. Production of one kilogram of aluminium requires 7 kilograms of oil energy equivalent, as compared to 1.5 kilograms for steel and 2 kilograms for plastic. 
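To put that energy figure in more familiar units, a rough conversion using the standard definition of a kilogram of oil equivalent (about 41.87 MJ, a conversion factor not stated in the text) gives:

$$7\ \mathrm{kg_{oe}} \times 41.87\ \mathrm{MJ/kg_{oe}} \approx 293\ \mathrm{MJ} \approx \frac{293\ \mathrm{MJ}}{3.6\ \mathrm{MJ/kWh}} \approx 81\ \mathrm{kWh}$$

of primary energy per kilogram of aluminium produced.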
As of 2023, the world's largest producers of aluminium were China, Russia, India, Canada, and the United Arab Emirates, with China by far the top producer at a world share of over 55%. According to the International Resource Panel's Metal Stocks in Society report, the per capita stock of aluminium in use in society (i.e. in cars, buildings, electronics, etc.) is much higher in more-developed countries than in less-developed countries. Bayer process Bauxite is converted to alumina by the Bayer process. Bauxite is blended for uniform composition and then is ground. The resulting slurry is mixed with a hot solution of sodium hydroxide; the mixture is then treated in a digester vessel at a pressure well above atmospheric, dissolving the aluminium hydroxide in bauxite while converting impurities into relatively insoluble compounds. After this reaction, the slurry is at a temperature above its atmospheric boiling point. It is cooled by removing steam as pressure is reduced. The bauxite residue is separated from the solution and discarded. The solution, free of solids, is seeded with small crystals of aluminium hydroxide; this causes decomposition of the [Al(OH)4]− ions to aluminium hydroxide. After about half of aluminium has precipitated, the mixture is sent to classifiers. Small crystals of aluminium hydroxide are collected to serve as seeding agents; coarse particles are converted to alumina by heating; the excess solution is removed by evaporation, purified (if needed), and recycled. Hall–Héroult process The conversion of alumina to aluminium is achieved by the Hall–Héroult process. In this energy-intensive process, a solution of alumina in a molten mixture of cryolite (Na3AlF6) with calcium fluoride is electrolyzed to produce metallic aluminium. The liquid aluminium sinks to the bottom of the solution and is tapped off, and usually cast into large blocks called aluminium billets for further processing. Anodes of the electrolysis cell are made of carbon—the most resistant material against fluoride corrosion—and are either baked in the process or prebaked. The former, also called Söderberg anodes, are less power-efficient and fumes released during baking are costly to collect, which is why they are being replaced by prebaked anodes even though they save the power, energy, and labor needed to prebake the anodes. Carbon for anodes should preferably be pure so that neither aluminium nor the electrolyte is contaminated with ash. Despite carbon's resistance to corrosion, it is still consumed at a rate of 0.4–0.5 kg per kilogram of produced aluminium. Cathodes are made of anthracite; high purity for them is not required because impurities leach only very slowly. The cathode is consumed at a rate of 0.02–0.04 kg per kilogram of produced aluminium. A cell is usually terminated after 2–6 years following a failure of the cathode. The Hall–Héroult process produces aluminium with a purity of above 99%. Further purification can be done by the Hoopes process. This process involves the electrolysis of molten aluminium with a sodium, barium, and aluminium fluoride electrolyte. The resulting aluminium has a purity of 99.99%. Electric power represents about 20 to 40% of the cost of producing aluminium, depending on the location of the smelter. Aluminium production consumes roughly 5% of electricity generated in the United States. 
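A minimal sketch of why electric power weighs so heavily in smelter economics; the specific energy use and the prices below are illustrative assumptions, not figures from this article:

```python
# Rough estimate of the electricity share in the cost of primary aluminium.
# All three input values are assumptions chosen only for illustration.

SPECIFIC_ENERGY_KWH_PER_KG = 14.0     # assumed electrolysis energy use per kg of metal
ELECTRICITY_PRICE_USD_PER_KWH = 0.05  # assumed industrial electricity price
ALUMINIUM_PRICE_USD_PER_KG = 2.30     # assumed selling price of the metal

power_cost_per_kg = SPECIFIC_ENERGY_KWH_PER_KG * ELECTRICITY_PRICE_USD_PER_KWH
share_of_price = power_cost_per_kg / ALUMINIUM_PRICE_USD_PER_KG

print(f"Electricity cost: ${power_cost_per_kg:.2f} per kg of aluminium")
print(f"Share of metal price: {share_of_price:.0%}")  # about 30%, inside the 20-40% range quoted above
```

Shifting the assumed power price within a realistic range moves the share across roughly the 20 to 40% band mentioned above, which is why producers cluster smelters around cheap electricity.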
Because of this, alternatives to the Hall–Héroult process have been researched, but none has turned out to be economically feasible. Recycling Recovery of the metal through recycling has become an important task of the aluminium industry. Recycling was a low-profile activity until the late 1960s, when the growing use of aluminium beverage cans brought it to public awareness. Recycling involves melting the scrap, a process that requires only 5% of the energy used to produce aluminium from ore, though a significant part (up to 15% of the input material) is lost as dross (ash-like oxide). An aluminium stack melter produces significantly less dross, with values reported below 1%. White dross from primary aluminium production and from secondary recycling operations still contains useful quantities of aluminium that can be extracted industrially. The process produces aluminium billets, together with a highly complex waste material. This waste is difficult to manage. It reacts with water, releasing a mixture of gases including, among others, acetylene, hydrogen sulfide and significant amounts of ammonia. Despite these difficulties, the waste is used as a filler in asphalt and concrete. Its potential for hydrogen production has also been considered and researched. Applications Metal The global production of aluminium in 2016 was 58.8 million metric tons. It exceeded that of any other metal except iron (1,231 million metric tons). Aluminium is almost always alloyed, which markedly improves its mechanical properties, especially when tempered. For example, the common aluminium foils and beverage cans are alloys of 92% to 99% aluminium. The main alloying agents are copper, zinc, magnesium, manganese, and silicon (e.g., duralumin) with the levels of other metals in a few percent by weight. Aluminium, both wrought and cast, has been alloyed with: manganese, silicon, magnesium, copper and zinc among others. The major uses for aluminium are in: Transportation (automobiles, aircraft, trucks, railway cars, marine vessels, bicycles, spacecraft, etc.). Aluminium is used because of its low density; Packaging (cans, foil, frame, etc.). Aluminium is used because it is non-toxic (see below), non-adsorptive, and splinter-proof; Building and construction (windows, doors, siding, building wire, sheathing, roofing, etc.). Since steel is cheaper, aluminium is used when lightness, corrosion resistance, or engineering features are important; Electricity-related uses (conductor alloys, motors, and generators, transformers, capacitors, etc.). Aluminium is used because it is relatively cheap, highly conductive, has adequate mechanical strength and low density, and resists corrosion; A wide range of household items, from cooking utensils to furniture. Low density, good appearance, ease of fabrication, and durability are the key factors of aluminium usage; Machinery and equipment (processing equipment, pipes, tools). Aluminium is used because of its corrosion resistance, non-pyrophoricity, and mechanical strength. Compounds The great majority (about 90%) of aluminium oxide is converted to metallic aluminium. Being a very hard material (Mohs hardness 9), alumina is widely used as an abrasive; being extraordinarily chemically inert, it is useful in highly reactive environments such as high pressure sodium lamps. Aluminium oxide is commonly used as a catalyst for industrial processes; e.g. the Claus process to convert hydrogen sulfide to sulfur in refineries and to alkylate amines. 
Many industrial catalysts are supported by alumina, meaning that the expensive catalyst material is dispersed over a surface of the inert alumina. Another principal use is as a drying agent or absorbent. Several sulfates of aluminium have industrial and commercial application. Aluminium sulfate (in its hydrate form) is produced on the annual scale of several millions of metric tons. About two-thirds is consumed in water treatment. The next major application is in the manufacture of paper. It is also used as a mordant in dyeing, in pickling seeds, deodorizing of mineral oils, in leather tanning, and in production of other aluminium compounds. Two kinds of alum, ammonium alum and potassium alum, were formerly used as mordants and in leather tanning, but their use has significantly declined following availability of high-purity aluminium sulfate. Anhydrous aluminium chloride is used as a catalyst in chemical and petrochemical industries, the dyeing industry, and in synthesis of various inorganic and organic compounds. Aluminium hydroxychlorides are used in purifying water, in the paper industry, and as antiperspirants. Sodium aluminate is used in treating water and as an accelerator of solidification of cement. Many aluminium compounds have niche applications, for example: Aluminium acetate in solution is used as an astringent. Aluminium phosphate is used in the manufacture of glass, ceramic, pulp and paper products, cosmetics, paints, varnishes, and in dental cement. Aluminium hydroxide is used as an antacid and as a mordant; it is used also in water purification, the manufacture of glass and ceramics, and in the waterproofing of fabrics. Lithium aluminium hydride is a powerful reducing agent used in organic chemistry. Organoaluminiums are used as Lewis acids and co-catalysts. Methylaluminoxane is a co-catalyst for Ziegler–Natta olefin polymerization to produce vinyl polymers such as polyethene. Aqueous aluminium ions (such as aqueous aluminium sulfate) are used to treat against fish parasites such as Gyrodactylus salaris. In many vaccines, certain aluminium salts serve as an immune adjuvant (immune response booster) to allow the protein in the vaccine to achieve sufficient potency as an immune stimulant. Until 2004, most of the adjuvants used in vaccines were aluminium salts. Biology Despite its widespread occurrence in the Earth's crust, aluminium has no known function in biology. At pH 6–9 (relevant for most natural waters), aluminium precipitates out of water as the hydroxide and is hence not available; most elements behaving this way have no biological role or are toxic. Aluminium sulfate has an LD50 of 6207 mg/kg (oral, mouse), which corresponds to about 435 grams (about one pound) when the same per-kilogram dose is scaled to a 70 kg adult (6207 mg/kg × 70 kg ≈ 434,000 mg ≈ 435 g). Toxicity Aluminium is classified as a non-carcinogen by the United States Department of Health and Human Services. A review published in 1988 said that there was little evidence that normal exposure to aluminium presents a risk to healthy adults, and a 2014 multi-element toxicology review was unable to find deleterious effects of aluminium consumed in amounts not greater than 40 mg/day per kg of body mass. Most aluminium consumed will leave the body in feces; most of the small part of it that enters the bloodstream will be excreted via urine; nevertheless some aluminium does pass the blood-brain barrier and is lodged preferentially in the brains of Alzheimer's patients. 
Evidence published in 1989 indicates that, for Alzheimer's patients, aluminium may act by electrostatically crosslinking proteins, thus down-regulating genes in the superior temporal gyrus. Effects Although rarely, aluminium can cause vitamin D-resistant osteomalacia, erythropoietin-resistant microcytic anemia, and central nervous system alterations. People with kidney insufficiency are especially at risk. Chronic ingestion of hydrated aluminium silicates (for excess gastric acidity control) may result in aluminium binding to intestinal contents and increased elimination of other metals, such as iron or zinc; sufficiently high doses (>50 g/day) can cause anemia. During the 1988 Camelford water pollution incident, people in Camelford had their drinking water contaminated with aluminium sulfate for several weeks. A final report into the incident in 2013 concluded it was unlikely that this had caused long-term health problems. Aluminium has been suspected of being a possible cause of Alzheimer's disease, but research into this over more than 40 years has found no good evidence of a causal effect. Aluminium increases estrogen-related gene expression in human breast cancer cells cultured in the laboratory. In very high doses, aluminium is associated with altered function of the blood–brain barrier. A small percentage of people have contact allergies to aluminium and experience itchy red rashes, headache, muscle pain, joint pain, poor memory, insomnia, depression, asthma, irritable bowel syndrome, or other symptoms upon contact with products containing aluminium. Exposure to powdered aluminium or aluminium welding fumes can cause pulmonary fibrosis. Fine aluminium powder can ignite or explode, posing another workplace hazard. Exposure routes Food is the main source of aluminium. Drinking water contains more aluminium than solid food; however, aluminium in food may be absorbed more than aluminium from water. Major sources of human oral exposure to aluminium include food (due to its use in food additives, food and beverage packaging, and cooking utensils), drinking water (due to its use in municipal water treatment), and aluminium-containing medications (particularly antacid/antiulcer and buffered aspirin formulations). Dietary exposure in Europeans averages 0.2–1.5 mg/kg/week but can be as high as 2.3 mg/kg/week. Higher exposure levels of aluminium are mostly limited to miners, aluminium production workers, and dialysis patients. Consumption of antacids, antiperspirants, vaccines, and cosmetics provides possible routes of exposure. Consumption of acidic foods or liquids with aluminium enhances aluminium absorption, and maltol has been shown to increase the accumulation of aluminium in nerve and bone tissues. Treatment In case of suspected sudden intake of a large amount of aluminium, the only treatment is deferoxamine mesylate, which may be given to help eliminate aluminium from the body by chelation therapy. However, this should be applied with caution as this reduces not only aluminium body levels, but also those of other metals such as copper or iron. Environmental effects High levels of aluminium occur near mining sites; small amounts of aluminium are released to the environment at coal-fired power plants or incinerators. Aluminium in the air is washed out by the rain or normally settles down, but small particles of aluminium remain in the air for a long time. 
Acidic precipitation is the main natural factor to mobilize aluminium from natural sources and the main reason for the environmental effects of aluminium; however, the main source of aluminium in salt water and freshwater is the industrial processes that also release aluminium into the air. In water, aluminium acts as a toxic agent on gill-breathing animals such as fish when the water is acidic, in which case aluminium may precipitate on the gills, which causes loss of plasma- and hemolymph ions leading to osmoregulatory failure. Organic complexes of aluminium may be easily absorbed and interfere with metabolism in mammals and birds, even though this rarely happens in practice. Aluminium is foremost among the factors that reduce plant growth on acidic soils. Although it is generally harmless to plant growth in pH-neutral soils, in acid soils the concentration of toxic Al3+ cations increases and disturbs root growth and function. Wheat has developed a tolerance to aluminium, releasing organic compounds that bind to harmful aluminium cations. Sorghum is believed to have the same tolerance mechanism. Aluminium production poses its own environmental challenges at each step of the production process. The major challenge is greenhouse gas emissions. These gases result from the electricity consumption of the smelters and the byproducts of processing. The most potent of these gases are perfluorocarbons from the smelting process. Released sulfur dioxide is one of the primary precursors of acid rain. Biodegradation of metallic aluminium is extremely rare; most aluminium-corroding organisms do not directly attack or consume the aluminium, but instead produce corrosive wastes. The fungus Geotrichum candidum can consume the aluminium in compact discs. The bacterium Pseudomonas aeruginosa and the fungus Cladosporium resinae are commonly detected in aircraft fuel tanks that use kerosene-based fuels (not avgas), and laboratory cultures can degrade aluminium.
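As a closing arithmetic note on the exposure figures quoted earlier in this article, the average European dietary range of 0.2–1.5 mg/kg/week translates, for an assumed 70 kg adult (the body mass is an assumption, not a figure from the source), to:

$$\frac{(0.2\ \text{to}\ 1.5)\ \mathrm{mg/kg/week} \times 70\ \mathrm{kg}}{7\ \mathrm{days/week}} \approx 2\ \text{to}\ 15\ \mathrm{mg/day},$$

which, on a per-kilogram basis, is far below the 40 mg/day per kg of body mass threshold cited in the toxicology review mentioned above.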
Physical sciences
Chemistry
null
911
https://en.wikipedia.org/wiki/Archipelago
Archipelago
An archipelago ( ), sometimes called an island group or island chain, is a chain, cluster, or collection of islands, or a sea containing a small number of scattered islands. An archipelago may be on a lake, river, or an ocean. The list of archipelagos includes the Canadian Arctic Archipelago, the Stockholm Archipelago, the Malay Archipelago (which includes the Indonesian and Philippine Archipelagos), the Bahamian Archipelago, the nation of Japan and the state of Hawaii. Etymology The word archipelago is derived from the Italian arcipelago, used as a proper name for the Aegean Sea, itself perhaps a deformation of the Greek Αιγαίον Πέλαγος. Later, usage shifted to refer to the Aegean Islands (since the sea has a large number of islands). The erudite paretymology deriving the word from Ancient Greek ἄρχι-(arkhi-, "chief") and πέλαγος (pélagos, "sea"), proposed by Buondelmonti, can still be found here and there. Geographic types Archipelagos may be found isolated in large amounts of water or neighboring a large land mass. For example, Scotland has more than 700 islands surrounding its mainland, which form an archipelago. Depending on their geological origin, islands forming archipelagos can be referred to as oceanic islands, continental fragments, or continental islands. Oceanic islands Oceanic islands are formed by volcanoes erupting from the ocean floor. The Hawaiian Islands and Galapagos Islands in the Pacific, and Mascarene Islands in the south Indian Ocean are examples. Continental fragments Continental fragments are islands that were once part of a continent, and became separated due to natural disasters. The fragments may also be formed by moving glaciers which cut out land, which then fills with water. The Farallon Islands off the coast of California are examples of continental islands. Continental Islands Continental islands are islands that were once part of a continent and still sit on the continental shelf, which is the edge of a continent that lies under the ocean. The islands of the Inside Passage off the coast of British Columbia and the Canadian Arctic Archipelago are examples. Artificial archipelagos Artificial archipelagos have been created in various countries for different purposes. Palm Islands and The World Islands in Dubai were or are being created for leisure and tourism purposes. Marker Wadden in the Netherlands is being built as a conservation area for birds and other wildlife. Superlatives The largest archipelago in the world by number of islands is the Archipelago Sea, which is part of Finland. There are approximately 40,000 islands, mostly uninhabited. The largest archipelagic state in the world by area, and by population, is Indonesia.
Physical sciences
Oceanic and coastal landforms
null
928
https://en.wikipedia.org/wiki/Axiom
Axiom
An axiom, postulate, or assumption is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Ancient Greek word (), meaning 'that which is thought worthy or fit' or 'that which commends itself as evident'. The precise definition varies across fields of study. In classic philosophy, an axiom is a statement that is so evident or well-established, that it is accepted without controversy or question. In modern logic, an axiom is a premise or starting point for reasoning. In mathematics, an axiom may be a "logical axiom" or a "non-logical axiom". Logical axioms are taken to be true within the system of logic they define and are often shown in symbolic form (e.g., (A and B) implies A), while non-logical axioms are substantive assertions about the elements of the domain of a specific mathematical theory, for example a + 0 = a in integer arithmetic. Non-logical axioms may also be called "postulates", "assumptions" or "proper axioms". In most cases, a non-logical axiom is simply a formal logical expression used in deduction to build a mathematical theory, and might or might not be self-evident in nature (e.g., the parallel postulate in Euclidean geometry). To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms), and there are typically many ways to axiomatize a given mathematical domain. Any axiom is a statement that serves as a starting point from which other statements are logically derived. Whether it is meaningful (and, if so, what it means) for an axiom to be "true" is a subject of debate in the philosophy of mathematics. Etymology The word axiom comes from the Greek word (axíōma), a verbal noun from the verb (axioein), meaning "to deem worthy", but also "to require", which in turn comes from (áxios), meaning "being in balance", and hence "having (the same) value (as)", "worthy", "proper". Among the ancient Greek philosophers and mathematicians, axioms were taken to be immediately evident propositions, foundational and common to many fields of investigation, and self-evidently true without any further argument or proof. The root meaning of the word postulate is to "demand"; for instance, Euclid demands that one agree that some things can be done (e.g., any two points can be joined by a straight line). Ancient geometers maintained some distinction between axioms and postulates. While commenting on Euclid's books, Proclus remarks that "Geminus held that this [4th] Postulate should not be classed as a postulate but as an axiom, since it does not, like the first three Postulates, assert the possibility of some construction but expresses an essential property." Boethius translated 'postulate' as petitio and called the axioms notiones communes but in later manuscripts this usage was not always strictly kept. Historical development Early Greeks The logico-deductive method whereby conclusions (new knowledge) follow from premises (old knowledge) through the application of sound arguments (syllogisms, rules of inference) was developed by the ancient Greeks, and has become the core principle of modern mathematics. Tautologies excluded, nothing can be deduced if nothing is assumed. Axioms and postulates are thus the basic assumptions underlying a given body of deductive knowledge. They are accepted without demonstration. All other assertions (theorems, in the case of mathematics) must be proven with the aid of these basic assumptions. 
However, the interpretation of mathematical knowledge has changed from ancient times to the modern, and consequently the terms axiom and postulate hold a slightly different meaning for the present day mathematician, than they did for Aristotle and Euclid. The ancient Greeks considered geometry as just one of several sciences, and held the theorems of geometry on par with scientific facts. As such, they developed and used the logico-deductive method as a means of avoiding error, and for structuring and communicating knowledge. Aristotle's posterior analytics is a definitive exposition of the classical view. An "axiom", in classical terminology, referred to a self-evident assumption common to many branches of science. A good example would be the assertion that: When an equal amount is taken from equals, an equal amount results. At the foundation of the various sciences lay certain additional hypotheses that were accepted without proof. Such a hypothesis was termed a postulate. While the axioms were common to many sciences, the postulates of each particular science were different. Their validity had to be established by means of real-world experience. Aristotle warns that the content of a science cannot be successfully communicated if the learner is in doubt about the truth of the postulates. The classical approach is well-illustrated by Euclid's Elements, where a list of postulates is given (common-sensical geometric facts drawn from our experience), followed by a list of "common notions" (very basic, self-evident assertions). Postulates It is possible to draw a straight line from any point to any other point. It is possible to extend a line segment continuously in both directions. It is possible to describe a circle with any center and any radius. It is true that all right angles are equal to one another. ("Parallel postulate") It is true that, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, intersect on that side on which are the angles less than the two right angles. Common notions Things which are equal to the same thing are also equal to one another. If equals are added to equals, the wholes are equal. If equals are subtracted from equals, the remainders are equal. Things which coincide with one another are equal to one another. The whole is greater than the part. Modern development A lesson learned by mathematics in the last 150 years is that it is useful to strip the meaning away from the mathematical assertions (axioms, postulates, propositions, theorems) and definitions. One must concede the need for primitive notions, or undefined terms or concepts, in any study. Such abstraction or formalization makes mathematical knowledge more general, capable of multiple different meanings, and therefore useful in multiple contexts. Alessandro Padoa, Mario Pieri, and Giuseppe Peano were pioneers in this movement. Structuralist mathematics goes further, and develops theories and axioms (e.g. field theory, group theory, topology, vector spaces) without any particular application in mind. The distinction between an "axiom" and a "postulate" disappears. The postulates of Euclid are profitably motivated by saying that they lead to a great wealth of geometric facts. The truth of these complicated facts rests on the acceptance of the basic hypotheses. 
However, by throwing out Euclid's fifth postulate, one can get theories that have meaning in wider contexts (e.g., hyperbolic geometry). As such, one must simply be prepared to use labels such as "line" and "parallel" with greater flexibility. The development of hyperbolic geometry taught mathematicians that it is useful to regard postulates as purely formal statements, and not as facts based on experience. When mathematicians employ the field axioms, the intentions are even more abstract. The propositions of field theory do not concern any one particular application; the mathematician now works in complete abstraction. There are many examples of fields; field theory gives correct knowledge about them all. It is not correct to say that the axioms of field theory are "propositions that are regarded as true without proof." Rather, the field axioms are a set of constraints. If any given system of addition and multiplication satisfies these constraints, then one is in a position to instantly know a great deal of extra information about this system. Modern mathematics formalizes its foundations to such an extent that mathematical theories can be regarded as mathematical objects, and mathematics itself can be regarded as a branch of logic. Frege, Russell, Poincaré, Hilbert, and Gödel are some of the key figures in this development. Another lesson learned in modern mathematics is to examine purported proofs carefully for hidden assumptions. In the modern understanding, a set of axioms is any collection of formally stated assertions from which other formally stated assertions follow – by the application of certain well-defined rules. In this view, logic becomes just another formal system. A set of axioms should be consistent; it should be impossible to derive a contradiction from the axioms. A set of axioms should also be non-redundant; an assertion that can be deduced from other axioms need not be regarded as an axiom. It was the early hope of modern logicians that various branches of mathematics, perhaps all of mathematics, could be derived from a consistent collection of basic axioms. An early success of the formalist program was Hilbert's formalization of Euclidean geometry, and the related demonstration of the consistency of those axioms. In a wider context, there was an attempt to base all of mathematics on Cantor's set theory. Here, the emergence of Russell's paradox and similar antinomies of naïve set theory raised the possibility that any such system could turn out to be inconsistent. The formalist project suffered a setback a century ago, when Gödel showed that it is possible, for any sufficiently large set of axioms (Peano's axioms, for example) to construct a statement whose truth is independent of that set of axioms. As a corollary, Gödel proved that the consistency of a theory like Peano arithmetic is an unprovable assertion within the scope of that theory. It is reasonable to believe in the consistency of Peano arithmetic because it is satisfied by the system of natural numbers, an infinite but intuitively accessible formal system. However, at present, there is no known way of demonstrating the consistency of the modern Zermelo–Fraenkel axioms for set theory. Furthermore, using techniques of forcing (Cohen) one can show that the continuum hypothesis (Cantor) is independent of the Zermelo–Fraenkel axioms. Thus, even this very general set of axioms cannot be regarded as the definitive foundation for mathematics. 
Other sciences Experimental sciences - as opposed to mathematics and logic - also have general founding assertions from which deductive reasoning can be built so as to express propositions that predict properties - either still general or much more specialized to a specific experimental context. For instance, Newton's laws in classical mechanics, Maxwell's equations in classical electromagnetism, Einstein's equation in general relativity, Mendel's laws of genetics, Darwin's law of natural selection, etc. These founding assertions are usually called principles or postulates so as to distinguish them from mathematical axioms. In fact, the role of axioms in mathematics and of postulates in experimental sciences is different. In mathematics one neither "proves" nor "disproves" an axiom. A set of mathematical axioms gives a set of rules that fix a conceptual realm, in which the theorems logically follow. In contrast, in experimental sciences, a set of postulates must allow the deduction of results that either match or do not match experimental results. If the postulates do not allow the deduction of experimental predictions, they do not set a scientific conceptual framework and have to be completed or made more accurate. If the postulates do allow the deduction of predictions of experimental results, the comparison with experiments allows the theory that the postulates establish to be falsified. A theory is considered valid as long as it has not been falsified. The transition between mathematical axioms and scientific postulates is always slightly blurred, especially in physics. This is due to the heavy use of mathematical tools to support the physical theories. For instance, the introduction of Newton's laws rarely establishes as a prerequisite either the Euclidean geometry or the differential calculus that they imply. This became more apparent when Albert Einstein first introduced special relativity, where the invariant quantity is no longer the Euclidean length $l$ (defined as $l^2 = x^2 + y^2 + z^2$) but the Minkowski spacetime interval $s$ (defined as $s^2 = c^2 t^2 - x^2 - y^2 - z^2$), and then general relativity, where flat Minkowskian geometry is replaced with pseudo-Riemannian geometry on curved manifolds. In quantum physics, two sets of postulates have coexisted for some time, and they provide a very nice example of falsification. The 'Copenhagen school' (Niels Bohr, Werner Heisenberg, Max Born) developed an operational approach with a complete mathematical formalism that involves the description of a quantum system by vectors ('states') in a separable Hilbert space, and physical quantities as linear operators that act in this Hilbert space. This approach is fully falsifiable and has so far produced the most accurate predictions in physics. But it has the unsatisfactory aspect of not allowing answers to questions one would naturally ask. For this reason, another 'hidden variables' approach was developed for some time by Albert Einstein, Erwin Schrödinger, and David Bohm. It was created so as to try to give a deterministic explanation of phenomena such as entanglement. This approach assumed that the Copenhagen school description was not complete, and postulated that some yet unknown variable was to be added to the theory so as to allow answering some of the questions it does not answer (the founding elements of which were discussed as the EPR paradox in 1935). Taking this idea seriously, John Bell derived in 1964 a prediction that would lead to different experimental results (Bell's inequalities) in the Copenhagen and the hidden-variable cases. 
The experiment was conducted first by Alain Aspect in the early 1980s, and the result excluded the simple hidden variable approach (sophisticated hidden variables could still exist but their properties would still be more disturbing than the problems they try to solve). This does not mean that the conceptual framework of quantum physics can be considered as complete now, since some open questions still exist (the limit between the quantum and classical realms, what happens during a quantum measurement, what happens in a completely closed quantum system such as the universe itself, etc.). Mathematical logic In the field of mathematical logic, a clear distinction is made between two notions of axioms: logical and non-logical (somewhat similar to the ancient distinction between "axioms" and "postulates" respectively). Logical axioms These are certain formulas in a formal language that are universally valid, that is, formulas that are satisfied by every assignment of values. Usually one takes as logical axioms at least some minimal set of tautologies that is sufficient for proving all tautologies in the language; in the case of predicate logic more logical axioms than that are required, in order to prove logical truths that are not tautologies in the strict sense. Examples Propositional logic In propositional logic, it is common to take as logical axioms all formulae of the following forms, where , , and can be any formulae of the language and where the included primitive connectives are only "" for negation of the immediately following proposition and "" for implication from antecedent to consequent propositions: Each of these patterns is an axiom schema, a rule for generating an infinite number of axioms. For example, if , , and are propositional variables, then and are both instances of axiom schema 1, and hence are axioms. It can be shown that with only these three axiom schemata and modus ponens, one can prove all tautologies of the propositional calculus. It can also be shown that no pair of these schemata is sufficient for proving all tautologies with modus ponens. Other axiom schemata involving the same or different sets of primitive connectives can be alternatively constructed. These axiom schemata are also used in the predicate calculus, but additional logical axioms are needed to include a quantifier in the calculus. First-order logic Axiom of Equality.Let be a first-order language. For each variable , the below formula is universally valid. This means that, for any variable symbol , the formula can be regarded as an axiom. Additionally, in this example, for this not to fall into vagueness and a never-ending series of "primitive notions", either a precise notion of what we mean by (or, for that matter, "to be equal") has to be well established first, or a purely formal and syntactical usage of the symbol has to be enforced, only regarding it as a string and only a string of symbols, and mathematical logic does indeed do that. Another, more interesting example axiom scheme, is that which provides us with what is known as Universal Instantiation: Axiom scheme for Universal Instantiation.Given a formula in a first-order language , a variable and a term that is substitutable for in , the below formula is universally valid. Where the symbol stands for the formula with the term substituted for . (See Substitution of variables.) 
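For reference, the schemata described in this subsection are commonly written out as follows, with φ, ψ, χ the usual placeholders for formulae, x a variable, and t a term substitutable for x. This is a standard textbook rendering offered as an illustrative sketch rather than a definitive statement:

```latex
% Propositional axiom schemata (primitive connectives \lnot and \to),
% used together with modus ponens:
\begin{align*}
\text{(1)}\quad & \varphi \to (\psi \to \varphi)\\
\text{(2)}\quad & \bigl(\varphi \to (\psi \to \chi)\bigr) \to \bigl((\varphi \to \psi) \to (\varphi \to \chi)\bigr)\\
\text{(3)}\quad & (\lnot \varphi \to \lnot \psi) \to (\psi \to \varphi)\\[2pt]
\text{Modus ponens:}\quad & \text{from } \varphi \text{ and } \varphi \to \psi, \text{ infer } \psi\\[6pt]
\text{Axiom of equality:}\quad & x = x\\
\text{Universal instantiation:}\quad & \forall x\, \varphi \;\to\; \varphi[x := t]
  \qquad (t \text{ substitutable for } x \text{ in } \varphi)
\end{align*}
```

With schemata (1)-(3) and modus ponens alone, every tautology of the propositional calculus in ¬ and → can be derived, which is the completeness claim made above.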
In informal terms, this example allows us to state that, if we know that a certain property holds for every and that stands for a particular object in our structure, then we should be able to claim . Again, we are claiming that the formula is valid, that is, we must be able to give a "proof" of this fact, or more properly speaking, a metaproof. These examples are metatheorems of our theory of mathematical logic since we are dealing with the very concept of proof itself. Aside from this, we can also have Existential Generalization: Axiom scheme for Existential Generalization. Given a formula in a first-order language , a variable and a term that is substitutable for in , the below formula is universally valid. Non-logical axioms Non-logical axioms are formulas that play the role of theory-specific assumptions. Reasoning about two different structures, for example, the natural numbers and the integers, may involve the same logical axioms; the non-logical axioms aim to capture what is special about a particular structure (or set of structures, such as groups). Thus non-logical axioms, unlike logical axioms, are not tautologies. Another name for a non-logical axiom is postulate. Almost every modern mathematical theory starts from a given set of non-logical axioms, and it was thought that, in principle, every theory could be axiomatized in this way and formalized down to the bare language of logical formulas. Non-logical axioms are often simply referred to as axioms in mathematical discourse. This does not mean that it is claimed that they are true in some absolute sense. For instance, in some groups, the group operation is commutative, and this can be asserted with the introduction of an additional axiom, but without this axiom, we can do quite well developing (the more general) group theory, and we can even take its negation as an axiom for the study of non-commutative groups. Examples This section gives examples of mathematical theories that are developed entirely from a set of non-logical axioms (axioms, henceforth). A rigorous treatment of any of these topics begins with a specification of these axioms. Basic theories, such as arithmetic, real analysis and complex analysis are often introduced non-axiomatically, but implicitly or explicitly there is generally an assumption that the axioms being used are the axioms of Zermelo–Fraenkel set theory with choice, abbreviated ZFC, or some very similar system of axiomatic set theory like Von Neumann–Bernays–Gödel set theory, a conservative extension of ZFC. Sometimes slightly stronger theories such as Morse–Kelley set theory or set theory with a strongly inaccessible cardinal allowing the use of a Grothendieck universe is used, but in fact, most mathematicians can actually prove all they need in systems weaker than ZFC, such as second-order arithmetic. The study of topology in mathematics extends all over through point set topology, algebraic topology, differential topology, and all the related paraphernalia, such as homology theory, homotopy theory. The development of abstract algebra brought with itself group theory, rings, fields, and Galois theory. This list could be expanded to include most fields of mathematics, including measure theory, ergodic theory, probability, representation theory, and differential geometry. Arithmetic The Peano axioms are the most widely used axiomatization of first-order arithmetic. 
They are a set of axioms strong enough to prove many important facts about number theory and they allowed Gödel to establish his famous second incompleteness theorem. We have a language where is a constant symbol and is a unary function and the following axioms: for any formula with one free variable. The standard structure is where is the set of natural numbers, is the successor function and is naturally interpreted as the number 0. Euclidean geometry Probably the oldest, and most famous, list of axioms are the 4 + 1 Euclid's postulates of plane geometry. The axioms are referred to as "4 + 1" because for nearly two millennia the fifth (parallel) postulate ("through a point outside a line there is exactly one parallel") was suspected of being derivable from the first four. Ultimately, the fifth postulate was found to be independent of the first four. One can assume that exactly one parallel through a point outside a line exists, or that infinitely many exist. This choice gives us two alternative forms of geometry in which the interior angles of a triangle add up to exactly 180 degrees or less, respectively, and are known as Euclidean and hyperbolic geometries. If one also removes the second postulate ("a line can be extended indefinitely") then elliptic geometry arises, where there is no parallel through a point outside a line, and in which the interior angles of a triangle add up to more than 180 degrees. Real analysis The objectives of the study are within the domain of real numbers. The real numbers are uniquely picked out (up to isomorphism) by the properties of a Dedekind complete ordered field, meaning that any nonempty set of real numbers with an upper bound has a least upper bound. However, expressing these properties as axioms requires the use of second-order logic. The Löwenheim–Skolem theorems tell us that if we restrict ourselves to first-order logic, any axiom system for the reals admits other models, including both models that are smaller than the reals and models that are larger. Some of the latter are studied in non-standard analysis. Role in mathematical logic Deductive systems and completeness A deductive system consists of a set of logical axioms, a set of non-logical axioms, and a set of rules of inference. A desirable property of a deductive system is that it be complete. A system is said to be complete if, for all formulas , that is, for any statement that is a logical consequence of there actually exists a deduction of the statement from . This is sometimes expressed as "everything that is true is provable", but it must be understood that "true" here means "made true by the set of axioms", and not, for example, "true in the intended interpretation". Gödel's completeness theorem establishes the completeness of a certain commonly used type of deductive system. Note that "completeness" has a different meaning here than it does in the context of Gödel's first incompleteness theorem, which states that no recursive, consistent set of non-logical axioms of the Theory of Arithmetic is complete, in the sense that there will always exist an arithmetic statement such that neither nor can be proved from the given set of axioms. There is thus, on the one hand, the notion of completeness of a deductive system and on the other hand that of completeness of a set of non-logical axioms. The completeness theorem and the incompleteness theorem, despite their names, do not contradict one another. 
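Two items discussed in this part of the article are usually written out symbolically; the following is a standard rendering, given as a reference sketch with the usual notation (0 and S for the arithmetic language, Γ ⊨ φ for logical consequence, Γ ⊢ φ for derivability) rather than a quotation:

```latex
% First-order Peano axioms over the language with constant symbol 0 and
% unary successor function S (induction is an axiom schema):
\begin{gather*}
  \forall x\;\lnot\bigl(S(x) = 0\bigr) \\
  \forall x\,\forall y\;\bigl(S(x) = S(y) \to x = y\bigr) \\
  \bigl(\phi(0) \land \forall x\,(\phi(x) \to \phi(S(x)))\bigr) \to \forall x\,\phi(x)
    \quad\text{for every formula } \phi \text{ with one free variable}
\end{gather*}
% Completeness of a deductive system: whatever is a logical consequence of the
% axioms \Gamma is also derivable from them:
\[
  \text{if } \Gamma \models \varphi \text{ then } \Gamma \vdash \varphi .
\]
```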
Further discussion Early mathematicians regarded axiomatic geometry as a model of physical space, implying that there could ultimately be only one such model. The idea that alternative mathematical systems might exist was very troubling to mathematicians of the 19th century, and the developers of systems such as Boolean algebra made elaborate efforts to derive them from traditional arithmetic. Galois showed just before his untimely death that these efforts were largely wasted. Ultimately, the abstract parallels between algebraic systems were seen to be more important than the details, and modern algebra was born. In the modern view, axioms may be any set of formulas, as long as they are not known to be inconsistent.
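To illustrate the modern view just described, in which axioms are simply formal starting points from which other statements are derived by fixed rules, the following is a small sketch in the Lean proof assistant. The declarations Point, Line, LiesOn and line_through are invented for this illustration (loosely in the spirit of Euclid's first postulate) and are not taken from any standard library:

```lean
-- Illustrative axiomatisation: all names below are invented for this sketch.
axiom Point : Type
axiom Line : Type
axiom LiesOn : Point → Line → Prop

-- A postulate in the spirit of Euclid's first: any two points lie on some line.
axiom line_through : ∀ p q : Point, ∃ l : Line, LiesOn p l ∧ LiesOn q l

-- A statement derived from the axiom purely by the rules of the system.
theorem point_on_some_line (p : Point) : ∃ l : Line, LiesOn p l :=
  Exists.elim (line_through p p) (fun l h => ⟨l, h.left⟩)
```

Nothing here asserts that the axiom is "true" of anything; the proof assistant only checks that the theorem follows from the declared axioms by its well-defined rules, which is exactly the sense of "axiom" discussed above.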
Mathematics
Discrete mathematics
null
956
https://en.wikipedia.org/wiki/Asteraceae
Asteraceae
Asteraceae is a large family of flowering plants that consists of over 32,000 known species in over 1,900 genera within the order Asterales. The number of species in Asteraceae is rivaled only by the Orchidaceae, and which is the larger family is unclear as the quantity of extant species in each family is unknown. The Asteraceae were first described in the year 1740 and given the original name Compositae. The family is commonly known as the aster, daisy, composite, or sunflower family. Most species of Asteraceae are herbaceous plants, and may be annual, biennial, or perennial, but there are also shrubs, vines, and trees. The family has a widespread distribution, from subpolar to tropical regions, in a wide variety of habitats. Most occur in hot desert and cold or hot semi-desert climates, and they are found on every continent but Antarctica. Their common primary characteristic is compound flower heads, technically known as capitula, consisting of sometimes hundreds of tiny individual florets enclosed by a whorl of protective involucral bracts. The oldest known fossils are pollen grains from the Late Cretaceous (Campanian to Maastrichtian) of Antarctica, dated to about 76–66 million years ago (mya). It is estimated that the crown group of Asteraceae evolved at least 85.9 mya (Late Cretaceous, Santonian) with a stem node age of 88–89 mya (Late Cretaceous, Coniacian). Asteraceae is an economically important family, providing food staples, garden plants, and herbal medicines. Species outside of their native ranges can become weedy or invasive. Description Members of the Asteraceae are mostly herbaceous plants, but some shrubs, vines, and trees (such as Lachanodes arborea) do exist. Asteraceae species are generally easy to distinguish from other plants because of their unique inflorescence and other shared characteristics, such as the joined anthers of the stamens. Nonetheless, determining genera and species of some groups such as Hieracium is notoriously difficult (see "damned yellow composite" for example). Roots Members of the family Asteraceae generally produce taproots, but sometimes they possess fibrous root systems. Some species have underground stems in the form of caudices or rhizomes. These can be fleshy or woody depending on the species. Stems The stems are herbaceous, aerial, branched, and cylindrical with glandular hairs, usually erect, but can be prostrate to ascending. The stems can contain secretory canals with resin, or latex, which is particularly common among the Cichorioideae. Leaves Leaves can be alternate, opposite, or whorled. They may be simple, but are often deeply lobed or otherwise incised, often conduplicate or revolute. The margins also can be entire or toothed. Resin or latex can also be present in the leaves. Inflorescences Nearly all Asteraceae bear their flowers in dense flower heads called capitula. They are surrounded by involucral bracts, and when viewed from a distance, each capitulum may appear to be a single flower. Enlarged outer (peripheral) flowers in the capitulum may resemble petals, and the involucral bracts may look like a calyx. Notable exceptions include Hecastocleis shockleyi (the only species in the subfamily Hecastocleidoideae) and the species of the genus Corymbium (the only genus in the subfamily Corymbioideae), which have one-flowered bisexual capitula, Gundelia with one-flowered unisexual capitula, and Gymnarrhena micrantha with one-flowered female capitula and few-flowered male capitula. 
Floral heads In plants of the Asteraceae, what appears to be a single "daisy"-type flower is actually a composite of several much smaller flowers, known as the capitulum or head. By visually presenting as a single flower, the capitulum functions in attracting pollinators, in the same manner that other "showy" flowering plants in numerous other, older, plant families have evolved to attract pollinators. The previous name for the family, Compositae, reflects the fact that what appears to be a single floral entity is in fact a composite of much smaller flowers. The "petals" or "sunrays" in an "asteraceous" head are in fact individual strap-shaped flowers called ray flowers or ray florets, and the "sun disk" is made up of smaller, radially symmetric, individual flowers called disc flowers or disc florets. The word aster means "star" in Greek, referring to the appearance of most family members as a "celestial body with rays". The capitulum, which often appears to be a single flower, is often referred to as a head. In some species, the entire head is able to pivot its floral stem in the course of the day to track the sun (like a "smart" solar panel), thus maximizing the reflectivity of the entire floral unit and further attracting flying pollinators. Nearest to the flower stem lie a series of small, usually green, scale-like bracts. These are known as phyllaries; collectively, they form the involucre, which serves to protect the immature head of florets during its development. The individual florets are arranged atop a dome-like structure called the receptacle. The individual florets in a head consist, developmentally, of five fused petals (rarely four); instead of sepals, they have threadlike, hairy, or bristly structures, known collectively as a pappus, (plural pappi). The pappus surrounds the ovary and can, when mature and attached to a seed, adhere to animal fur or be carried by air currents, aiding in seed dispersal. The whitish, fluffy head of a dandelion, commonly blown on by children, consists of numerous seeds resting on the receptacle, each seed attached to its pappus. The pappi provide a parachute-like structure to help the seed travel from its point of origin to a more hospitable site. A ray flower is a two- or three-lobed, strap-shaped, individual flower, found in the head of most members of the Asteraceae. The corolla of the ray flower may have two tiny, vestigial teeth, opposite to the three-lobed strap, or tongue, indicating its evolution by fusion from an ancestral, five-part corolla. In some species, the 3:2 arrangement is reversed, with two lobes, and zero or three tiny teeth visible opposite the tongue. A ligulate flower is a five-lobed, strap-shaped, individual flower found in the heads of certain other asteraceous species. A ligule is the strap-shaped tongue of the corolla of either a ray flower or of a ligulate flower. A disk flower (or disc flower) is a radially symmetric individual flower in the head, which is ringed by the ray flowers when both are present. In some species, ray flowers may be arranged around the disc in irregular symmetry, or with a weakly bilaterally symmetric arrangement. Variations A radiate head has disc flowers surrounded by ray flowers. A ligulate head has all ligulate flowers and no disc flowers. When an Asteraceae flower head has only disc flowers that are either sterile, male, or bisexual (but not female and fertile), it is a discoid head. 
Disciform heads possess only disc flowers in their heads, but may produce two different sex types (male or female) within their disciform head. Some other species produce two different head types: staminate (all-male), or pistillate (all-female). In a few unusual species, the "head" will consist of one single disc flower; alternatively, a few species will produce both single-flowered female heads, along with multi-flowered male heads, in their "pollination strategy". Floral structures The distinguishing characteristic of Asteraceae is their inflorescence, a type of specialised, composite flower head or pseudanthium, technically called a calathium or capitulum, that may look superficially like a single flower. The capitulum is a contracted raceme composed of numerous individual sessile flowers, called florets, all sharing the same receptacle. A set of bracts forms an involucre surrounding the base of the capitulum. These are called "phyllaries", or "involucral bracts". They may simulate the sepals of the pseudanthium. These are mostly herbaceous but can also be brightly coloured (e.g. Helichrysum) or have a scarious (dry and membranous) texture. The phyllaries can be free or fused, and arranged in one to many rows, overlapping like the tiles of a roof (imbricate) or not (this variation is important in identification of tribes and genera). Each floret may be subtended by a bract, called a "palea" or "receptacular bract". These bracts are often called "chaff". The presence or absence of these bracts, their distribution on the receptacle, and their size and shape are all important diagnostic characteristics for genera and tribes. The florets have five petals fused at the base to form a corolla tube and they may be either actinomorphic or zygomorphic. Disc florets are usually actinomorphic, with five petal lips on the rim of the corolla tube. The petal lips may be either very short, or long, in which case they form deeply lobed petals. The latter is the only kind of floret in the Carduoideae, while the first kind is more widespread. Ray florets are always highly zygomorphic and are characterised by the presence of a ligule, a strap-shaped structure on the edge of the corolla tube consisting of fused petals. In the Asteroideae and other minor subfamilies these are usually borne only on florets at the circumference of the capitulum and have a 3+2 scheme – above the fused corolla tube, three very long fused petals form the ligule, with the other two petals being inconspicuously small. The Cichorioideae has only ray florets, with a 5+0 scheme – all five petals form the ligule. A 4+1 scheme is found in the Barnadesioideae. The tip of the ligule is often divided into teeth, each one representing a petal. Some marginal florets may have no petals at all (filiform floret). The calyx of the florets may be absent, but when present is always modified into a pappus of two or more teeth, scales or bristles and this is often involved in the dispersion of the seeds. As with the bracts, the nature of the pappus is an important diagnostic feature. There are usually four or five stamens. The filaments are fused to the corolla, while the anthers are generally connate (syngenesious anthers), thus forming a sort of tube around the style (theca). They commonly have basal and/or apical appendages. Pollen is released inside the tube and is collected around the growing style, and then, as the style elongates, is pushed out of the tube (nüdelspritze). The pistil consists of two connate carpels. The style has two lobes. 
Stigmatic tissue may be located in the interior surface or form two lateral lines. The ovary is inferior and has only one ovule, with basal placentation. Fruits and seeds In members of the Asteraceae the fruit is achene-like, and is called a cypsela (plural cypselae). Although there are two fused carpels, there is only one locule, and only one seed per fruit is formed. It may sometimes be winged or spiny because the pappus, which is derived from calyx tissue often remains on the fruit (for example in dandelion). In some species, however, the pappus falls off (for example in Helianthus). Cypsela morphology is often used to help determine plant relationships at the genus and species level. The mature seeds usually have little endosperm or none. Pollen The pollen of composites is typically echinolophate, a morphological term meaning "with elaborate systems of ridges and spines dispersed around and between the apertures." Metabolites In Asteraceae, the energy store is generally in the form of inulin rather than starch. They produce iso/chlorogenic acid, sesquiterpene lactones, pentacyclic triterpene alcohols, various alkaloids, acetylenes (cyclic, aromatic, with vinyl end groups), tannins. They have terpenoid essential oils that never contain iridoids. Asteraceae produce secondary metabolites, such as flavonoids and terpenoids. Some of these molecules can inhibit protozoan parasites such as Plasmodium, Trypanosoma, Leishmania and parasitic intestinal worms, and thus have potential in medicine. Taxonomy History Compositae, the original name for Asteraceae, were first described in 1740 by Dutch botanist Adriaan van Royen. Traditionally, two subfamilies were recognised: Asteroideae (or Tubuliflorae) and Cichorioideae (or Liguliflorae). The latter has been shown to be extensively paraphyletic, and has now been divided into 12 subfamilies, but the former still stands. The study of this family is known as synantherology. Phylogeny The phylogenetic tree of subfamilies presented below is based on Panero & Funk (2002) updated in 2014, and now also includes the monotypic Famatinanthoideae. The diamond (♦) denotes a very poorly supported node (<50% bootstrap support), the dot (•) a poorly supported node (<80%). The family includes over 32,000 currently accepted species, in over 1,900 genera (list) in 13 subfamilies. The number of species in the family Asteraceae is rivaled only by Orchidaceae. Which is the larger family is unclear, because of the uncertainty about how many extant species each family includes. The four subfamilies Asteroideae, Cichorioideae, Carduoideae and Mutisioideae contain 99% of the species diversity of the whole family (approximately 70%, 14%, 11% and 3% respectively). Because of the morphological complexity exhibited by this family, agreeing on generic circumscriptions has often been difficult for taxonomists. As a result, several of these genera have required multiple revisions. Paleontology and evolutionary processes The oldest known fossils of members of Asteraceae are pollen grains from the Late Cretaceous of Antarctica, dated to ~76–66 mya (Campanian to Maastrichtian) and assigned to the extant genus Dasyphyllum. Barreda, et al. (2015) estimated that the crown group of Asteraceae evolved at least 85.9 mya (Late Cretaceous, Santonian) with a stem node age of 88–89 mya (Late Cretaceous, Coniacian). 
It is not known whether the precise cause of their great success was the development of the highly specialised capitulum, their ability to store energy as fructans (mainly inulin), which is an advantage in relatively dry zones, or some combination of these and possibly other factors. Heterocarpy, or the ability to produce different fruit morphs, has evolved and is common in Asteraceae. It allows seeds to be dispersed over varying distances and each is adapted to different environments, increasing chances of survival. Etymology and pronunciation The original name Compositae is still valid under the International Code of Nomenclature for algae, fungi, and plants. It refers to the "composite" nature of the capitula, which consist of a few or many individual flowers. The alternative (as it came later) name Asteraceae () comes to international scientific vocabulary from Neo-Latin, from Aster, the type genus, + -aceae, a standardized suffix for plant family names in modern taxonomy. This genus name comes from the Classical Latin word , "star", which came from Ancient Greek (), "star". It refers to the star-like form of the inflorescence. The vernacular name daisy, widely applied to members of this family, is derived from the Old English name of the daisy (Bellis perennis): , meaning "day's eye". This is because the petals open at dawn and close at dusk. Distribution and habitat Asteraceae species have a widespread distribution, from subpolar to tropical regions in a wide variety of habitats. Most occur in hot desert and cold or hot semi-desert climates, and they are found on every continent but Antarctica. They are especially numerous in tropical and subtropical regions (notably Central America, eastern Brazil, the Mediterranean, the Levant, southern Africa, central Asia, and southwestern China). The largest proportion of the species occur in the arid and semi-arid regions of subtropical and lower temperate latitudes. The Asteraceae family comprises 10% of all flowering plant species. Ecology Asteraceae are especially common in open and dry environments. Many members of Asteraceae are pollinated by insects, which explains their value in attracting beneficial insects, but anemophily is also present (e.g. Ambrosia, Artemisia). There are many apomictic species in the family. Seeds are ordinarily dispersed intact with the fruiting body, the cypsela. Anemochory (wind dispersal) is common, assisted by a hairy pappus. Epizoochory is another common method, in which the dispersal unit, a single cypsela (e.g. Bidens) or entire capitulum (e.g. Arctium) has hooks, spines or some structure to attach to the fur or plumage (or even clothes, as in the photo) of an animal just to fall off later far from its mother plant. Some members of Asteraceae are economically important as weeds. Notable in the United States are Senecio jacobaea (ragwort), Senecio vulgaris (groundsel), and Taraxacum (dandelion). Some are invasive species in particular regions, often having been introduced by human agency. Examples include various tumbleweeds, Bidens, ragweeds, thistles, and dandelion. Dandelion was introduced into North America by European settlers who used the young leaves as a salad green. A number of species are toxic to grazing animals. Uses Asteraceae is an economically important family, providing products such as cooking oils, leaf vegetables like lettuce, sunflower seeds, artichokes, sweetening agents, coffee substitutes and herbal teas. 
Several genera are of horticultural importance, including pot marigold (Calendula officinalis), Echinacea (coneflowers), various daisies, fleabane, chrysanthemums, dahlias, zinnias, and heleniums. Asteraceae are important in herbal medicine, including Grindelia, yarrow, and many others. Commercially important plants in Asteraceae include the food crops Lactuca sativa (lettuce), Cichorium (chicory), Cynara scolymus (globe artichoke), Helianthus annuus (sunflower), Smallanthus sonchifolius (yacón), Carthamus tinctorius (safflower) and Helianthus tuberosus (Jerusalem artichoke). Plants are used as herbs and in herbal teas and other beverages. Chamomile, for example, comes from two different species: the annual Matricaria chamomilla (German chamomile) and the perennial Chamaemelum nobile (Roman chamomile). Calendula (known as pot marigold) is grown commercially for herbal teas and potpourri. Echinacea is used as a medicinal tea. The wormwood genus Artemisia includes absinthe (A. absinthium) and tarragon (A. dracunculus). Winter tarragon (Tagetes lucida), is commonly grown and used as a tarragon substitute in climates where tarragon will not survive. Many members of the family are grown as ornamental plants for their flowers, and some are important ornamental crops for the cut flower industry. Some examples are Chrysanthemum, Gerbera, Calendula, Dendranthema, Argyranthemum, Dahlia, Tagetes, Zinnia, and many others. Many species of this family possess medicinal properties and are used as traditional antiparasitic medicine. Members of the family are also commonly featured in medical and phytochemical journals because the sesquiterpene lactone compounds contained within them are an important cause of allergic contact dermatitis. Allergy to these compounds is the leading cause of allergic contact dermatitis in florists in the US. Pollen from ragweed Ambrosia is among the main causes of so-called hay fever in the United States. Asteraceae are also used for some industrial purposes. French Marigold (Tagetes patula) is common in commercial poultry feeds and its oil is extracted for uses in cola and the cigarette industry. The genera Chrysanthemum, Pulicaria, Tagetes, and Tanacetum contain species with useful insecticidal properties. Parthenium argentatum (guayule) is a source of hypoallergenic latex. Several members of the family are copious nectar producers and are useful for evaluating pollinator populations during their bloom. Centaurea (knapweed), Helianthus annuus (domestic sunflower), and some species of Solidago (goldenrod) are major "honey plants" for beekeepers. Solidago produces relatively high protein pollen, which helps honey bees over winter.
Biology and health sciences
Asterales
null
957
https://en.wikipedia.org/wiki/Apiaceae
Apiaceae
Apiaceae () or Umbelliferae is a family of mostly aromatic flowering plants named after the type genus Apium, and commonly known as the celery, carrot or parsley family, or simply as umbellifers. It is the 16th-largest family of flowering plants, with more than 3,800 species in about 446 genera, including such well-known, and economically important plants as ajwain, angelica, anise, asafoetida, caraway, carrot, celery, chervil, coriander, cumin, dill, fennel, lovage, cow parsley, parsley, parsnip and sea holly, as well as silphium, a plant whose exact identity is unclear and may be extinct. The family Apiaceae includes a significant number of phototoxic species, such as giant hogweed, and a smaller number of highly poisonous species, such as poison hemlock, water hemlock, spotted cowbane, fool's parsley, and various species of water dropwort. Description Most Apiaceae are annual, biennial or perennial herbs (frequently with the leaves aggregated toward the base), though a minority are woody shrubs or small trees such as Bupleurum fruticosum. Their leaves are of variable size, and alternately arranged, or with the upper leaves becoming nearly opposite. The leaves may be petiolate or sessile. There are no stipules but the petioles are frequently sheathing, and the leaves may be perfoliate. The leaf blade is usually dissected, ternate, or pinnatifid, but simple, and entire in some genera, e.g. Bupleurum. Commonly, their leaves emit a marked smell when crushed, aromatic to fetid, but absent in some species. The defining characteristic of this family is the inflorescence, the flowers nearly always aggregated in terminal umbels, that may be simple or more commonly compound, often umbelliform cymes. The flowers are usually perfect (hermaphroditic), and actinomorphic, but there may be zygomorphic flowers at the edge of the umbel, as in carrot (Daucus carota) and coriander, with petals of unequal size, the ones pointing outward from the umbel larger than the ones pointing inward. Some are andromonoecious, polygamomonoecious, or even dioecious (as in Acronema), with a distinct calyx, and corolla, but the calyx is often highly reduced, to the point of being undetectable in many species, while the corolla can be white, yellow, pink or purple. The flowers are nearly perfectly pentamerous, with five petals and five stamens. There is often variation in the functionality of the stamens even within a single inflorescence. Some flowers are functionally staminate (where a pistil may be present but has no ovules capable of being fertilized) while others are functionally pistillate (where stamens are present but their anthers do not produce viable pollen). Pollination of one flower by the pollen of a different flower of the same plant (geitonogamy) is common. The gynoecium consists of two carpels fused into a single, bicarpellate pistil with an inferior ovary. Stylopodia support two styles, and secrete nectar, attracting pollinators like flies, mosquitoes, gnats, beetles, moths, and bees. The fruit is a schizocarp consisting of two fused carpels that separate at maturity into two mericarps, each containing a single seed. The fruits of many species are dispersed by wind but others such as those of Daucus spp., are covered in bristles, which may be hooked in sanicle Sanicula europaea and thus catch in the fur of animals. 
The seeds have an oily endosperm and often contain essential oils, containing aromatic compounds that are responsible for the flavour of commercially important umbelliferous seed such as anise, cumin and coriander. The shape and details of the ornamentation of the ripe fruits are important for identification to species level. Taxonomy Apiaceae was first described by John Lindley in 1836. The name is derived from the type genus Apium, which was originally used by Pliny the Elder circa 50 AD for a celery-like plant. The alternative name for the family, Umbelliferae, derives from the inflorescence being generally in the form of a compound umbel. The family was one of the first to be recognized as a distinct group in Jacques Daleschamps' 1586 Historia generalis plantarum. With Robert Morison's 1672 Plantarum umbelliferarum distribution nova it became the first group of plants for which a systematic study was published. The family is solidly placed within the Apiales order in the APG III system. It is closely related to Araliaceae and the boundaries between these families remain unclear. Traditionally groups within the family have been delimited largely based on fruit morphology, and the results from this have not been congruent with the more recent molecular phylogenetic analyses. The subfamilial and tribal classification for the family is currently in a state of flux, with many of the groups being found to be grossly paraphyletic or polyphyletic. Classification and phylogeny Prior to molecular phylogenetic studies, the family was subdivided primarily based on fruit characteristics. Molecular phylogenetic analyses from the mid-1990s onwards have shown that fruit characters evolved in parallel many times, so that using them in classification resulted in units that were not monophyletic. In 2004, it was proposed that Apiaceae should be divided into four subfamilies: Apioideae Seem. Azorelloideae G.M.Plunkett & Lowry Mackinlayoideae G.M.Plunkett & Lowry Saniculoideae Burnett Apioideae is by far the largest subfamily with about 90% of the genera. Most subsequent studies have supported this division, although leaving some genera unplaced. A 2021 study suggested the relationships shown in the following cladogram. The Platysace clade and the genera Klotzschia and Hermas fell outside the four subfamilies. It was suggested that they could be accommodated in subfamilies of their own. Phlyctidocarpa was formerly placed in the subfamily Apioideae, but if kept there makes Apioideae paraphyletic. It could be placed in an enlarged Saniculoideae, or restored to Apioideae if the latter were expanded to include Saniculoideae. The subfamilies can be further divided into tribes and clades, with many clades falling outside formally recognized tribes. Genera The number of genera accepted by sources varies. , Plants of the World Online (PoWO) accepted 444 genera, while GRIN Taxonomy accepted 462. The PoWO genera are not a subset of those in GRIN; for example, Haloselinum is accepted by PoWO but not by GRIN, while Halosciastrum is accepted by GRIN but not by PoWO, which treats it as a synonym of Angelica. The Angiosperm Phylogeny Website had an "approximate list" of 446 genera. Ecology The black swallowtail butterfly, Papilio polyxenes, uses the family Apiaceae for food and host plants for oviposition. The 22-spot ladybird is also commonly found eating mildew on these plants. Uses Many members of this family are cultivated for various purposes. 
Parsnip (Pastinaca sativa), carrot (Daucus carota) and Hamburg parsley (Petroselinum crispum) produce tap roots that are large enough to be useful as food. Many species produce essential oils in their leaves or fruits and as a result are flavourful aromatic herbs. Examples are parsley (Petroselinum crispum), coriander (Coriandrum sativum), culantro, and dill (Anethum graveolens). The seeds may be used in cuisine, as with coriander (Coriandrum sativum), fennel (Foeniculum vulgare), cumin (Cuminum cyminum), and caraway (Carum carvi). Other notable cultivated Apiaceae include chervil (Anthriscus cerefolium), angelica (Angelica spp.), celery (Apium graveolens), arracacha (Arracacia xanthorrhiza), sea holly (Eryngium spp.), asafoetida (Ferula asafoetida), galbanum (Ferula gummosa), cicely (Myrrhis odorata), anise (Pimpinella anisum), lovage (Levisticum officinale), and hacquetia (Sanicula epipactis). Cultivation Generally, all members of this family are best cultivated in the cool-season garden; they may not grow at all if the soils are too warm. Almost every widely cultivated plant of this group is considered useful as a companion plant. One reason is that the tiny flowers, clustered into umbels, are well suited for ladybugs, parasitic wasps, and predatory flies, which drink nectar when not reproducing. They then prey upon insect pests on nearby plants. Some of the members of this family that are considered "herbs" produce scents that are believed to mask the odours of nearby plants, thus making them harder for insect pests to find. Other uses The poisonous members of the Apiaceae have been used for a variety of purposes globally. The poisonous Oenanthe crocata has been used as an aid in suicides, and arrow poisons have been made from various other family species. Daucus carota has been used as coloring for butter. Dorema ammoniacum, Ferula galbaniflua, and Ferula moschata (sumbul) are sources of incense. The woody Azorella compacta Phil. has been used in South America for fuel. Toxicity Many species in the family Apiaceae produce phototoxic substances (called furanocoumarins) that sensitize human skin to sunlight. Contact with plant parts that contain furanocoumarins, followed by exposure to sunlight, may cause phytophotodermatitis, a serious skin inflammation. Phototoxic species include Ammi majus, Notobubon galbanum, the parsnip (Pastinaca sativa) and numerous species of the genus Heracleum, especially the giant hogweed (Heracleum mantegazzianum). Of all the plant species that have been reported to induce phytophotodermatitis, approximately half belong to the family Apiaceae. The family Apiaceae also includes a smaller number of poisonous species, including poison hemlock, water hemlock, spotted cowbane, fool's parsley, and various species of water dropwort. Some members of the family Apiaceae, including carrot, celery, fennel, parsley and parsnip, contain polyynes, an unusual class of organic compounds that exhibit cytotoxic effects.
Biology and health sciences
Apiales
null
958
https://en.wikipedia.org/wiki/Axon
Axon
An axon (from Greek ἄξων áxōn, axis) or nerve fiber (or nerve fibre: see spelling differences) is a long, slender projection of a nerve cell, or neuron, in vertebrates, that typically conducts electrical impulses known as action potentials away from the nerve cell body. The function of the axon is to transmit information to different neurons, muscles, and glands. In certain sensory neurons (pseudounipolar neurons), such as those for touch and warmth, the axons are called afferent nerve fibers and the electrical impulse travels along these from the periphery to the cell body and from the cell body to the spinal cord along another branch of the same axon. Axon dysfunction can be the cause of many inherited and acquired neurological disorders that affect both the peripheral and central neurons. Nerve fibers are classed into three typesgroup A nerve fibers, group B nerve fibers, and group C nerve fibers. Groups A and B are myelinated, and group C are unmyelinated. These groups include both sensory fibers and motor fibers. Another classification groups only the sensory fibers as Type I, Type II, Type III, and Type IV. An axon is one of two types of cytoplasmic protrusions from the cell body of a neuron; the other type is a dendrite. Axons are distinguished from dendrites by several features, including shape (dendrites often taper while axons usually maintain a constant radius), length (dendrites are restricted to a small region around the cell body while axons can be much longer), and function (dendrites receive signals whereas axons transmit them). Some types of neurons have no axon and transmit signals from their dendrites. In some species, axons can emanate from dendrites known as axon-carrying dendrites. No neuron ever has more than one axon; however in invertebrates such as insects or leeches the axon sometimes consists of several regions that function more or less independently of each other. Axons are covered by a membrane known as an axolemma; the cytoplasm of an axon is called axoplasm. Most axons branch, in some cases very profusely. The end branches of an axon are called telodendria. The swollen end of a telodendron is known as the axon terminal or end-foot which joins the dendrite or cell body of another neuron forming a synaptic connection. Axons usually make contact with other neurons at junctions called synapses but can also make contact with muscle or gland cells. In some circumstances, the axon of one neuron may form a synapse with the dendrites of the same neuron, resulting in an autapse. At a synapse, the membrane of the axon closely adjoins the membrane of the target cell, and special molecular structures serve to transmit electrical or electrochemical signals across the gap. Some synaptic junctions appear along the length of an axon as it extends; these are called en passant boutons ("in passing boutons") and can be in the hundreds or even the thousands along one axon. Other synapses appear as terminals at the ends of axonal branches. A single axon, with all its branches taken together, can target multiple parts of the brain and generate thousands of synaptic terminals. A bundle of axons make a nerve tract in the central nervous system, and a fascicle in the peripheral nervous system. In placental mammals the largest white matter tract in the brain is the corpus callosum, formed of some 200 million axons in the human brain. 
Anatomy Axons are the primary transmission lines of the nervous system, and as bundles they form nerves in the peripheral nervous system, or nerve tracts in the central nervous system (CNS). Some axons can extend up to one meter or more while others extend as little as one millimeter. The longest axons in the human body are those of the sciatic nerve, which run from the base of the spinal cord to the big toe of each foot. The diameter of axons is also variable. Most individual axons are microscopic in diameter (typically about one micrometer (μm) across). The largest mammalian axons can reach a diameter of up to 20 μm. The squid giant axon, which is specialized to conduct signals very rapidly, is close to 1 millimeter in diameter, the size of a small pencil lead. The numbers of axonal telodendria (the branching structures at the end of the axon) can also differ from one nerve fiber to the next. Axons in the CNS typically show multiple telodendria, with many synaptic end points. In comparison, the cerebellar granule cell axon is characterized by a single T-shaped branch node from which two parallel fibers extend. Elaborate branching allows for the simultaneous transmission of messages to a large number of target neurons within a single region of the brain. There are two types of axons in the nervous system: myelinated and unmyelinated axons. Myelin is a layer of a fatty insulating substance, which is formed by two types of glial cells: Schwann cells and oligodendrocytes. In the peripheral nervous system Schwann cells form the myelin sheath of a myelinated axon. Oligodendrocytes form the insulating myelin in the CNS. Along myelinated nerve fibers, gaps in the myelin sheath known as nodes of Ranvier occur at evenly spaced intervals. The myelination enables an especially rapid mode of electrical impulse propagation called saltatory conduction. The myelinated axons from the cortical neurons form the bulk of the neural tissue called white matter in the brain. The myelin gives the white appearance to the tissue in contrast to the grey matter of the cerebral cortex which contains the neuronal cell bodies. A similar arrangement is seen in the cerebellum. Bundles of myelinated axons make up the nerve tracts in the CNS, and where they cross the midline of the brain to connect opposite regions they are called commissures. The largest of these is the corpus callosum that connects the two cerebral hemispheres, and this has around 200 million axons. The structure of a neuron is seen to consist of two separate functional regions, or compartments: the cell body together with the dendrites as one region, and the axonal region as the other. Axonal region The axonal region, or compartment, includes the axon hillock, the initial segment, the rest of the axon, the axon telodendria, and the axon terminals. It also includes the myelin sheath. The Nissl bodies that produce the neuronal proteins are absent in the axonal region. Proteins needed for the growth of the axon, and the removal of waste materials, need a framework for transport. This axonal transport is provided for in the axoplasm by arrangements of microtubules and type IV intermediate filaments known as neurofilaments. Axon hillock The axon hillock is the area formed from the cell body of the neuron as it extends to become the axon. It precedes the initial segment. The received action potentials that are summed in the neuron are transmitted to the axon hillock for the generation of an action potential from the initial segment. 
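Since axons can be a meter or more long and, as noted above, myelination enables the much faster saltatory mode of conduction, a rough back-of-the-envelope calculation shows why this matters for signalling delays. The conduction velocities in the sketch below are assumed, illustrative textbook-range figures, not values taken from this article:

```python
# Rough propagation delays over a 1 m axon, such as a long sciatic-nerve fiber.
# The conduction velocities are assumed, order-of-magnitude illustrative values.
AXON_LENGTH_M = 1.0

assumed_velocity_m_per_s = {
    "thin unmyelinated fiber (assumed ~1 m/s)": 1.0,
    "large myelinated fiber, saltatory conduction (assumed ~80 m/s)": 80.0,
}

for fiber, velocity in assumed_velocity_m_per_s.items():
    delay_ms = AXON_LENGTH_M / velocity * 1000.0
    print(f"{fiber}: ~{delay_ms:.0f} ms over {AXON_LENGTH_M:.0f} m")
```

Under these assumptions the delay drops from on the order of a second to on the order of ten milliseconds, which is the practical point of the comparison.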
Axonal initial segment The axonal initial segment (AIS) is a structurally and functionally separate microdomain of the axon. One function of the initial segment is to separate the main part of an axon from the rest of the neuron; another function is to help initiate action potentials. Both of these functions support neuron cell polarity, in which the dendrites (and, in some cases, the soma) of a neuron receive input signals at the basal region, and at the apical region the neuron's axon provides output signals. The axon initial segment is unmyelinated and contains a specialized complex of proteins. It is between approximately 20 and 60 μm in length and functions as the site of action potential initiation. Both the position on the axon and the length of the AIS can change, showing a degree of plasticity that can fine-tune the neuronal output. A longer AIS is associated with greater excitability. Plasticity is also seen in the ability of the AIS to change its distribution and to maintain the activity of neural circuitry at a constant level. The AIS is highly specialized for the fast conduction of nerve impulses. This is achieved by a high concentration of voltage-gated sodium channels in the initial segment, where the action potential is initiated. The ion channels are accompanied by a high number of cell adhesion molecules and scaffold proteins that anchor them to the cytoskeleton. Interactions with ankyrin-G are important, as it is the major organizer of the AIS. In many cases, an axon originates at an axon hillock on the soma; such axons are said to have "somatic origin". Some axons with somatic origin have a "proximal" initial segment adjacent to the axon hillock, while others have a "distal" initial segment, separated from the soma by an extended axon hillock. In other cases, as seen in rat studies, an axon originates from a dendrite; such axons are said to have "dendritic origin". Some axons with dendritic origin similarly have a "proximal" initial segment that starts directly at the axon origin, while others have a "distal" initial segment, discernibly separated from the axon origin. In many species some of the neurons have axons that emanate from the dendrite and not from the cell body, and these are known as axon-carrying dendrites. Axonal transport The axoplasm is the equivalent of cytoplasm in the cell. Microtubules form in the axoplasm at the axon hillock. They are arranged along the length of the axon, in overlapping sections, and all point in the same direction: towards the axon terminals. This orientation is indicated by the plus ends of the microtubules. This overlapping arrangement provides the routes for the transport of different materials from the cell body. Studies on the axoplasm have shown the movement of numerous vesicles of all sizes along the cytoskeletal filaments (the microtubules and neurofilaments), in both directions between the cell body and the axon terminals. Outgoing anterograde transport from the cell body along the axon carries mitochondria and membrane proteins needed for growth to the axon terminal. Ingoing retrograde transport carries cell waste materials from the axon terminal to the cell body. Outgoing and ingoing tracks use different sets of motor proteins. Outgoing transport is provided by kinesin, and ingoing return traffic is provided by dynein. Dynein is minus-end directed. There are many forms of kinesin and dynein motor proteins, and each is thought to carry a different cargo. The studies on transport in the axon led to the naming of kinesin. 
Myelination In the nervous system, axons may be myelinated or unmyelinated. Myelination is the provision of an insulating layer, called a myelin sheath. The myelin membrane is unique in its relatively high lipid to protein ratio. In the peripheral nervous system axons are myelinated by glial cells known as Schwann cells. In the central nervous system the myelin sheath is provided by another type of glial cell, the oligodendrocyte. Each Schwann cell myelinates a single axon. An oligodendrocyte can myelinate up to 50 axons. The composition of myelin is different in the two types. In the CNS the major myelin protein is proteolipid protein, and in the PNS it is myelin basic protein. Nodes of Ranvier Nodes of Ranvier (also known as myelin sheath gaps) are short unmyelinated segments of a myelinated axon, which are found periodically interspersed between segments of the myelin sheath. Therefore, at the point of the node of Ranvier, the axon is reduced in diameter. These nodes are areas where action potentials can be generated. In saltatory conduction, electrical currents produced at each node of Ranvier are conducted with little attenuation to the next node in line, where they remain strong enough to generate another action potential. Thus in a myelinated axon, action potentials effectively "jump" from node to node, bypassing the myelinated stretches in between, resulting in a propagation speed much faster than even the fastest unmyelinated axon can sustain. Axon terminals An axon can divide into many branches called telodendria (Greek for 'end of tree'). At the end of each telodendron is an axon terminal (also called a terminal bouton or synaptic bouton, or end-foot). Axon terminals contain synaptic vesicles that store the neurotransmitter for release at the synapse. This makes multiple synaptic connections with other neurons possible. Sometimes the axon of a neuron may synapse onto dendrites of the same neuron, in which case it is known as an autapse. Some synaptic junctions appear along the length of an axon as it extends; these are called en passant boutons ("in passing boutons") and can be in the hundreds or even the thousands along one axon. Axonal varicosities In the normally developed brain, pre-synaptic boutons known as axonal varicosities are located along the shafts of some axons; these have been found in regions of the hippocampus that function in the release of neurotransmitters. However, axonal varicosities are also present in neurodegenerative diseases, where they interfere with the conduction of an action potential. Axonal varicosities are also the hallmark of traumatic brain injuries. Axonal damage usually affects the axon cytoskeleton, disrupting transport. As a consequence, protein accumulations such as amyloid-beta precursor protein can build up in a swelling, resulting in a number of varicosities along the axon. Action potentials Most axons carry signals in the form of action potentials, which are discrete electrochemical impulses that travel rapidly along an axon, starting at the cell body and terminating at points where the axon makes synaptic contact with target cells. The defining characteristic of an action potential is that it is "all-or-nothing": every action potential that an axon generates has essentially the same size and shape. This all-or-nothing characteristic allows action potentials to be transmitted from one end of a long axon to the other without any reduction in size. 
There are, however, some types of neurons with short axons that carry graded electrochemical signals of variable amplitude. When an action potential reaches a presynaptic terminal, it activates the synaptic transmission process. The first step is rapid opening of calcium ion channels in the membrane of the axon, allowing calcium ions to flow inward across the membrane. The resulting increase in intracellular calcium concentration causes synaptic vesicles (tiny containers enclosed by a lipid membrane) filled with a neurotransmitter chemical to fuse with the axon's membrane and empty their contents into the extracellular space. The neurotransmitter is released from the presynaptic nerve through exocytosis. The neurotransmitter chemical then diffuses across to receptors located on the membrane of the target cell. The neurotransmitter binds to these receptors and activates them. Depending on the type of receptors that are activated, the effect on the target cell can be to excite the target cell, inhibit it, or alter its metabolism in some way. This entire sequence of events often takes place in less than a thousandth of a second. Afterward, inside the presynaptic terminal, a new set of vesicles is moved into position next to the membrane, ready to be released when the next action potential arrives. The action potential is the final electrical step in the integration of synaptic messages at the scale of the neuron. Extracellular recordings of action potential propagation in axons have been demonstrated in freely moving animals. While extracellular somatic action potentials have been used to study cellular activity in freely moving animals such as place cells, axonal activity in both white and gray matter can also be recorded. Extracellular recordings of axonal action potential propagation are distinct from somatic action potentials in three ways: 1. The signal has a shorter peak-trough duration (~150 μs) than that of pyramidal cells (~500 μs) or interneurons (~250 μs). 2. The voltage change is triphasic. 3. Activity recorded on a tetrode is seen on only one of the four recording wires. In recordings from freely moving rats, axonal signals have been isolated in white matter tracts including the alveus and the corpus callosum as well as hippocampal gray matter. In fact, the generation of action potentials in vivo is sequential in nature, and these sequential spikes constitute the digital codes in the neurons. Although previous studies indicate an axonal origin of a single spike evoked by short-term pulses, physiological signals in vivo trigger the initiation of sequential spikes at the cell bodies of the neurons. In addition to propagating action potentials to axonal terminals, the axon is able to amplify the action potentials, which ensures secure propagation of sequential action potentials toward the axonal terminal. In terms of molecular mechanisms, voltage-gated sodium channels in the axons possess a lower threshold and a shorter refractory period in response to short-term pulses. Development and growth Development The development of the axon to its target is one of the six major stages in the overall development of the nervous system. Studies done on cultured hippocampal neurons suggest that neurons initially produce multiple neurites that are equivalent, yet only one of these neurites is destined to become the axon. It is unclear whether axon specification precedes axon elongation or vice versa, although recent evidence points to the latter. 
If an axon that is not fully developed is cut, the polarity can change and other neurites can potentially become the axon. This alteration of polarity only occurs when the axon is cut at least 10 μm shorter than the other neurites. After the incision is made, the longest neurite will become the future axon and all the other neurites, including the original axon, will turn into dendrites. Imposing an external force on a neurite, causing it to elongate, will make it become an axon. Nonetheless, axonal development is achieved through a complex interplay between extracellular signaling, intracellular signaling and cytoskeletal dynamics. Extracellular signaling The extracellular signals that propagate through the extracellular matrix surrounding neurons play a prominent role in axonal development. These signaling molecules include proteins, neurotrophic factors, and extracellular matrix and adhesion molecules. Netrin (also known as UNC-6), a secreted protein, functions in axon formation. When the UNC-5 netrin receptor is mutated, several neurites are irregularly projected out of neurons and finally a single axon is extended anteriorly. The neurotrophic factors nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), and neurotrophin-3 (NTF3) are also involved in axon development and bind to Trk receptors. The ganglioside-converting enzyme plasma membrane ganglioside sialidase (PMGS), which is involved in the activation of TrkA at the tip of neurites, is required for the elongation of axons. PMGS asymmetrically distributes to the tip of the neurite that is destined to become the future axon. Intracellular signaling During axonal development, the activity of PI3K is increased at the tip of the destined axon. Disrupting the activity of PI3K inhibits axonal development. Activation of PI3K results in the production of phosphatidylinositol (3,4,5)-trisphosphate (PIP3), which can cause significant elongation of a neurite, converting it into an axon. As such, the overexpression of phosphatases that dephosphorylate PIP3 leads to the failure of polarization. Cytoskeletal dynamics The neurite with the lowest actin filament content will become the axon. PMGS concentration and f-actin content are inversely correlated; when PMGS becomes enriched at the tip of a neurite, its f-actin content is substantially decreased. In addition, exposure to actin-depolymerizing drugs and toxin B (which inactivates Rho-signaling) causes the formation of multiple axons. Consequently, the interruption of the actin network in a growth cone will promote its neurite to become the axon. Growth Growing axons move through their environment via the growth cone, which is at the tip of the axon. The growth cone has a broad sheet-like extension called a lamellipodium, which contains protrusions called filopodia. The filopodia are the mechanism by which the entire process adheres to surfaces and explores the surrounding environment. Actin plays a major role in the mobility of this system. Environments with high levels of cell adhesion molecules (CAMs) create an ideal environment for axonal growth. This seems to provide a "sticky" surface for axons to grow along. Examples of CAMs specific to neural systems include N-CAM, TAG-1 (an axonal glycoprotein), and MAG, all of which are part of the immunoglobulin superfamily. Another set of molecules, called extracellular matrix adhesion molecules, also provides a sticky substrate for axons to grow along. Examples of these molecules include laminin, fibronectin, tenascin, and perlecan. 
Some of these are surface bound to cells and thus act as short range attractants or repellents. Others are diffusible ligands and thus can have long range effects. Cells called guidepost cells assist in the guidance of neuronal axon growth. These cells are typically other, sometimes immature, neurons. When the axon has completed its growth at its connection to the target, the diameter of the axon can increase by up to five times, depending on the speed of conduction required. Research has also shown that if the axons of a neuron are damaged, as long as the soma (the cell body of the neuron) is not damaged, the axons can regenerate and remake their synaptic connections with neurons, with the help of guidepost cells. This is also referred to as neuroregeneration. Nogo-A is a type of neurite outgrowth inhibitory component that is present in the central nervous system myelin membranes (found in an axon). It has a crucial role in restricting axonal regeneration in the adult mammalian central nervous system. In recent studies, if Nogo-A is blocked and neutralized, it is possible to induce long-distance axonal regeneration, which leads to enhancement of functional recovery in the rat and mouse spinal cord. This has yet to be done on humans. A recent study has also found that macrophages activated through a specific inflammatory pathway activated by the Dectin-1 receptor are capable of promoting axon recovery, although they also cause neurotoxicity in the neuron. Length regulation Axons vary greatly in length, from a few micrometers up to meters in some animals. This emphasizes that there must be a cellular length regulation mechanism allowing the neurons both to sense the length of their axons and to control their growth accordingly. It was discovered that motor proteins play an important role in regulating the length of axons. Based on this observation, researchers developed an explicit model for axonal growth describing how motor proteins could affect the axon length on the molecular level. These studies suggest that motor proteins carry signaling molecules from the soma to the growth cone and vice versa, whose concentration oscillates in time with a length-dependent frequency. Classification The axons of neurons in the human peripheral nervous system can be classified based on their physical features and signal conduction properties. Axons were known to have different thicknesses (from 0.1 to 20 μm) and these differences were thought to relate to the speed at which an action potential could travel along the axon, its conduction velocity. Erlanger and Gasser proved this hypothesis, and identified several types of nerve fiber, establishing a relationship between the diameter of an axon and its nerve conduction velocity. They published their findings in 1941, giving the first classification of axons. Axons are classified in two systems. The first one, introduced by Erlanger and Gasser, grouped the fibers into three main groups using the letters A, B, and C. These groups (group A, group B, and group C) include both the sensory fibers (afferents) and the motor fibers (efferents). The first group, A, was subdivided into alpha, beta, gamma, and delta fibers: Aα, Aβ, Aγ, and Aδ. The motor neurons of the different motor fibers were the lower motor neurons (alpha motor neuron, beta motor neuron, and gamma motor neuron), having the Aα, Aβ, and Aγ nerve fibers, respectively. Later findings by other researchers identified two groups of Aα fibers that were sensory fibers. 
These were then introduced into a system (Lloyd classification) that only included sensory fibers (though some of these were mixed nerves and were also motor fibers). This system refers to the sensory groups as Types and uses Roman numerals: Type Ia, Type Ib, Type II, Type III, and Type IV. Motor Lower motor neurons have two kinds of fibers: Aα fibers, from alpha motor neurons supplying the extrafusal muscle fibers, and Aγ fibers, from gamma motor neurons supplying the intrafusal fibers of the muscle spindle. Different sensory receptors are innervated by different types of nerve fibers. Proprioceptors are innervated by type Ia, Ib and II sensory fibers, mechanoreceptors by type II and III sensory fibers, and nociceptors and thermoreceptors by type III and IV sensory fibers. Autonomic The autonomic nervous system has two kinds of peripheral fibers: preganglionic fibers, which are thinly myelinated group B fibers, and postganglionic fibers, which are unmyelinated group C fibers. Clinical significance In order of degree of severity, injury to a nerve in the peripheral nervous system can be described as neurapraxia, axonotmesis, or neurotmesis. Concussion is considered a mild form of diffuse axonal injury. Axonal injury can also cause central chromatolysis. The dysfunction of axons in the nervous system is one of the major causes of many inherited and acquired neurological disorders that affect both peripheral and central neurons. When an axon is crushed, an active process of axonal degeneration takes place at the part of the axon furthest from the cell body. This degeneration takes place quickly following the injury, with the part of the axon being sealed off at the membranes and broken down by macrophages. This is known as Wallerian degeneration. Dying back of an axon can also take place in many neurodegenerative diseases, particularly when axonal transport is impaired; this is known as Wallerian-like degeneration. Studies suggest that the degeneration happens as a result of the axonal protein NMNAT2 being prevented from reaching all of the axon. Demyelination of axons causes the multitude of neurological symptoms found in the disease multiple sclerosis. Dysmyelination is the abnormal formation of the myelin sheath. This is implicated in several leukodystrophies, and also in schizophrenia. A severe traumatic brain injury can result in widespread lesions to nerve tracts damaging the axons in a condition known as diffuse axonal injury. This can lead to a persistent vegetative state. It has been shown in studies on rats that axonal damage from a single mild traumatic brain injury can leave a susceptibility to further damage after repeated mild traumatic brain injuries. A nerve guidance conduit is an artificial means of guiding axon growth to enable neuroregeneration, and is one of the many treatments used for different kinds of nerve injury. Terminology Some general dictionaries define "nerve fiber" as any neuronal process, including both axons and dendrites. However, medical sources generally use "nerve fiber" to refer to the axon only. History The German anatomist Otto Friedrich Karl Deiters is generally credited with the discovery of the axon by distinguishing it from the dendrites. The Swiss Rudolf Albert von Kölliker and the German Robert Remak were the first to identify and characterize the axon initial segment. Kölliker named the axon in 1896. Louis-Antoine Ranvier was the first to describe the gaps or nodes found on axons, and for this contribution these axonal features are now commonly referred to as the nodes of Ranvier. Santiago Ramón y Cajal, a Spanish anatomist, proposed that axons were the output components of neurons, describing their functionality. 
Joseph Erlanger and Herbert Gasser had earlier developed the classification system for peripheral nerve fibers, based on axonal conduction velocity, myelination, fiber size, and other properties. Alan Hodgkin and Andrew Huxley also employed the squid giant axon (1939) and by 1952 they had obtained a full quantitative description of the ionic basis of the action potential, leading to the formulation of the Hodgkin–Huxley model. Hodgkin and Huxley were jointly awarded the Nobel Prize for this work in 1963. The formulae detailing axonal conductance were extended to vertebrates in the Frankenhaeuser–Huxley equations. The understanding of the biochemical basis for action potential propagation has advanced further, and includes many details about individual ion channels. Other animals The axons in invertebrates have been extensively studied. The longfin inshore squid, often used as a model organism, has the longest known axon. The giant squid has the largest axon known. Its diameter ranges from 0.5 mm (typically) to 1 mm, and the axon is used in the control of its jet propulsion system. The fastest recorded conduction speed, 210 m/s, is found in the ensheathed axons of some pelagic penaeid shrimps, and the usual range is between 90 and 200 m/s (cf. 100–120 m/s for the fastest myelinated vertebrate axon).
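The link between fiber diameter, myelination, and conduction speed described above can be illustrated with a short calculation. The following Python sketch is not drawn from this article; it assumes the commonly quoted textbook rules of thumb that conduction velocity scales roughly linearly with diameter for myelinated fibers (on the order of 6 m/s per μm) and roughly with the square root of diameter for unmyelinated fibers, so the numbers are indicative only.

import math

# Hypothetical helper illustrating the diameter-velocity rules of thumb.
# The constants are textbook approximations (assumptions), not values
# taken from this article.
def conduction_velocity(diameter_um: float, myelinated: bool) -> float:
    """Return an approximate conduction velocity in m/s."""
    if myelinated:
        # Myelinated fibers: velocity roughly proportional to diameter
        # (about 6 m/s per micrometre, the classic approximation).
        return 6.0 * diameter_um
    # Unmyelinated fibers: velocity grows roughly with the square root
    # of diameter and is much slower overall (~1-2 m/s for C fibers).
    return 1.8 * math.sqrt(diameter_um)

examples = [
    ("A-alpha motor fiber", 15.0, True),
    ("A-delta pain fiber", 3.0, True),
    ("C fiber (unmyelinated)", 1.0, False),
]
for name, d, myel in examples:
    v = conduction_velocity(d, myel)
    print(f"{name}: diameter {d} um -> ~{v:.0f} m/s")

Running the sketch gives values on the order of 90 m/s for a large myelinated motor fiber and around 2 m/s for an unmyelinated C fiber, consistent with the ranges quoted in the classification discussion above.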
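For reference, the Hodgkin–Huxley model mentioned above expresses the ionic basis of the action potential as a current-balance equation for the membrane; the form given here is the standard textbook statement, with conventional notation rather than symbols quoted from this article:

C_m \frac{dV}{dt} = I_{\text{ext}} - \bar{g}_{\text{Na}}\, m^3 h\,(V - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4\,(V - E_{\text{K}}) - \bar{g}_{\text{L}}\,(V - E_{\text{L}})

with each gating variable x ∈ {m, h, n} obeying

\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x

where C_m is the membrane capacitance, the \bar{g} terms are maximal conductances, the E terms are reversal potentials, and \alpha_x and \beta_x are voltage-dependent rate constants fitted by Hodgkin and Huxley from squid giant axon recordings.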
Biology and health sciences
Nervous system
Biology
969
https://en.wikipedia.org/wiki/Ataxia
Ataxia
Ataxia (from Greek α- [a negative prefix] + -τάξις [order] = "lack of order") is a neurological sign consisting of lack of voluntary coordination of muscle movements that can include gait abnormality, speech changes, and abnormalities in eye movements, that indicates dysfunction of parts of the nervous system that coordinate movement, such as the cerebellum. These nervous system dysfunctions occur in several different patterns, with different results and different possible causes. Ataxia can be limited to one side of the body, which is referred to as hemiataxia. Friedreich's ataxia has gait abnormality as the most commonly presented symptom. Dystaxia is a mild degree of ataxia. Types Cerebellar The term cerebellar ataxia is used to indicate ataxia due to dysfunction of the cerebellum. The cerebellum is responsible for integrating a significant amount of neural information that is used to coordinate smoothly ongoing movements and to participate in motor planning. Although ataxia is not present with all cerebellar lesions, many conditions affecting the cerebellum do produce ataxia. People with cerebellar ataxia may have trouble regulating the force, range, direction, velocity, and rhythm of muscle contractions. This results in a characteristic type of irregular, uncoordinated movement that can manifest itself in many possible ways, such as asthenia, asynergy, delayed reaction time, and dyschronometria. Individuals with cerebellar ataxia could also display instability of gait, difficulty with eye movements, dysarthria, dysphagia, hypotonia, dysmetria, and dysdiadochokinesia. These deficits can vary depending on which cerebellar structures have been damaged, and whether the lesion is bi- or unilateral. People with cerebellar ataxia may initially present with poor balance, which could be demonstrated as an inability to stand on one leg or perform tandem gait. As the condition progresses, walking is characterized by a widened base and high stepping, as well as staggering and lurching from side to side. Turning is also problematic and could result in falls. As cerebellar ataxia becomes severe, great assistance and effort are needed to stand and walk. Dysarthria, an impairment with articulation, may also be present and is characterized by "scanning" speech that consists of slower rate, irregular rhythm, and variable volume. Also, slurring of speech, tremor of the voice, and ataxic respiration may occur. Cerebellar ataxia could result with incoordination of movement, particularly in the extremities. Overshooting (or hypermetria) occurs with finger-to-nose testing and heel to shin testing; thus, dysmetria is evident. Impairments with alternating movements (dysdiadochokinesia), as well as dysrhythmia, may also be displayed. Tremor of the head and trunk (titubation) may be seen in individuals with cerebellar ataxia. Dysmetria is thought to be caused by a deficit in the control of interaction torques in multijoint motion. Interaction torques are created at an associated joint when the primary joint is moved. For example, if a movement required reaching to touch a target in front of the body, flexion at the shoulder would create a torque at the elbow, while extension of the elbow would create a torque at the wrist. These torques increase as the speed of movement increases and must be compensated and adjusted for to create coordinated movement. This may, therefore, explain decreased coordination at higher movement velocities and accelerations. 
Dysfunction of the vestibulocerebellum (flocculonodular lobe) impairs balance and the control of eye movements. This presents itself with postural instability, in which the person tends to separate his/her feet upon standing, to gain a wider base and to avoid titubation (bodily oscillations tending to be forward-backward ones). The instability is, therefore, worsened when standing with the feet together, regardless of whether the eyes are open or closed. This is a negative Romberg's test, or more accurately, it denotes the individual's inability to carry out the test, because the individual feels unstable even with open eyes. Dysfunction of the spinocerebellum (vermis and associated areas near the midline) presents itself with a wide-based "drunken sailor" gait (called truncal ataxia), characterised by uncertain starts and stops, lateral deviations, and unequal steps. As a result of this gait impairment, falling is a concern in patients with ataxia. Studies examining falls in this population show that 74–93% of patients have fallen at least once in the past year and up to 60% admit to fear of falling. Dysfunction of the cerebrocerebellum (lateral hemispheres) presents as disturbances in carrying out voluntary, planned movements by the extremities (called appendicular ataxia). These include: Intention tremor (coarse trembling, accentuated over the execution of voluntary movements, possibly involving the head and eyes, as well as the limbs and torso) Peculiar writing abnormalities (large, unequal letters, irregular underlining) A peculiar pattern of dysarthria (slurred speech, sometimes characterised by explosive variations in voice intensity despite a regular rhythm) Inability to perform rapidly alternating movements, known as dysdiadochokinesia, occurs, and could involve rapidly switching from pronation to supination of the forearm. Movements become more irregular with increases of speed. Inability to judge distances or ranges of movement happens. This dysmetria is often seen as undershooting, hypometria, or overshooting, hypermetria, the required distance or range to reach a target. This is sometimes seen when a patient is asked to reach out and touch someone's finger or touch his or her own nose. The rebound phenomenon, also known as the loss of the check reflex, is also sometimes seen in patients with cerebellar ataxia, for example, when patients are flexing their elbows isometrically against a resistance. When the resistance is suddenly removed without warning, the patients' arms may swing up and even strike themselves. With an intact check reflex, the patients check and activate the opposing triceps to slow and stop the movement. Patients may exhibit a constellation of subtle to overt cognitive symptoms, which are gathered under the terminology of Schmahmann's syndrome. Sensory The term sensory ataxia is used to indicate ataxia due to loss of proprioception, the loss of sensitivity to the positions of joint and body parts. This is generally caused by dysfunction of the dorsal columns of the spinal cord, because they carry proprioceptive information up to the brain. In some cases, the cause of sensory ataxia may instead be dysfunction of the various parts of the brain that receive positional information, including the cerebellum, thalamus, and parietal lobes. 
Sensory ataxia presents itself with an unsteady "stomping" gait with heavy heel strikes, as well as a postural instability that is usually worsened when the lack of proprioceptive input cannot be compensated for by visual input, such as in poorly lit environments. Physicians can find evidence of sensory ataxia during physical examination by having patients stand with their feet together and eyes shut. In affected patients, this will cause the instability to worsen markedly, producing wide oscillations and possibly a fall; this is called a positive Romberg's test. Worsening of the finger-pointing test with the eyes closed is another feature of sensory ataxia. Also, when patients are standing with arms and hands extended toward the physician, if the eyes are closed, the patients' fingers tend to "fall down" and then be restored to the horizontal extended position by sudden muscular contractions (the "ataxic hand"). Vestibular The term vestibular ataxia is used to indicate ataxia due to dysfunction of the vestibular system, which in acute and unilateral cases is associated with prominent vertigo, nausea, and vomiting. In slow-onset, chronic bilateral cases of vestibular dysfunction, these characteristic manifestations may be absent, and dysequilibrium may be the sole presentation. Causes The three types of ataxia have overlapping causes, so can either coexist or occur in isolation. Cerebellar ataxia can have many causes despite normal neuroimaging. Focal lesions Any type of focal lesion of the central nervous system (such as stroke, brain tumor, multiple sclerosis, inflammatory [such as sarcoidosis], and "chronic lymphocytyc inflammation with pontine perivascular enhancement responsive to steroids syndrome" [CLIPPERS]) will cause the type of ataxia corresponding to the site of the lesion: cerebellar if in the cerebellum; sensory if in the dorsal spinal cord...to include cord compression by thickened ligamentum flavum or stenosis of the boney spinal canal...(and rarely in the thalamus or parietal lobe); or vestibular if in the vestibular system (including the vestibular areas of the cerebral cortex). Exogenous substances (metabolic ataxia) Exogenous substances that cause ataxia mainly do so because they have a depressant effect on central nervous system function. The most common example is ethanol (alcohol), which is capable of causing reversible cerebellar and vestibular ataxia. Chronic intake of ethanol causes atrophy of the cerebellum by oxidative and endoplasmic reticulum stresses induced by thiamine deficiency. Other examples include various prescription drugs (e.g. most antiepileptic drugs have cerebellar ataxia as a possible adverse effect), Lithium level over 1.5mEq/L, synthetic cannabinoid HU-211 ingestion and various other medical and recreational drugs (e.g. ketamine, PCP or dextromethorphan, all of which are NMDA receptor antagonists that produce a dissociative state at high doses). A further class of pharmaceuticals which can cause short term ataxia, especially in high doses, are benzodiazepines. Exposure to high levels of methylmercury, through consumption of fish with high mercury concentrations, is also a known cause of ataxia and other neurological disorders. Radiation poisoning Ataxia can be induced as a result of severe acute radiation poisoning with an absorbed dose of more than 30 grays. Furthermore, those with ataxia telangiectasia may have a high sensitivity towards gamma rays and x-rays. 
Vitamin B12 deficiency Vitamin B12 deficiency may cause, among several neurological abnormalities, overlapping cerebellar and sensory ataxia. Neuropsychological symptoms may include sense loss, difficulty in proprioception, poor balance, loss of sensation in the feet, changes in reflexes, dementia, and psychosis, which can be reversible with treatment. Complications may include a neurological complex known as subacute combined degeneration of spinal cord, and other neurological disorders. Hypothyroidism Symptoms of neurological dysfunction may be the presenting feature in some patients with hypothyroidism. These include reversible cerebellar ataxia, dementia, peripheral neuropathy, psychosis and coma. Most of the neurological complications improve completely after thyroid hormone replacement therapy. Causes of isolated sensory ataxia Peripheral neuropathies may cause generalised or localised sensory ataxia (e.g. a limb only) depending on the extent of the neuropathic involvement. Spinal disorders of various types may cause sensory ataxia from the lesioned level below, when they involve the dorsal columns. Non-hereditary cerebellar degeneration Non-hereditary causes of cerebellar degeneration include chronic alcohol use disorder, head injury, paraneoplastic and non-paraneoplastic autoimmune ataxia, high-altitude cerebral edema, celiac disease, normal-pressure hydrocephalus, and infectious or post-infectious cerebellitis. Hereditary ataxias Ataxia may depend on hereditary disorders consisting of degeneration of the cerebellum or of the spine; most cases feature both to some extent, and therefore present with overlapping cerebellar and sensory ataxia, even though one is often more evident than the other. Hereditary disorders causing ataxia include autosomal dominant ones such as spinocerebellar ataxia, episodic ataxia, and dentatorubropallidoluysian atrophy, as well as autosomal recessive disorders such as Friedreich's ataxia (sensory and cerebellar, with the former predominating) and Niemann–Pick disease, ataxia–telangiectasia (sensory and cerebellar, with the latter predominating), autosomal recessive spinocerebellar ataxia-14 and abetalipoproteinaemia. An example of X-linked ataxic condition is the rare fragile X-associated tremor/ataxia syndrome or FXTAS. Arnold–Chiari malformation (congenital ataxia) Arnold–Chiari malformation is a malformation of the brain. It consists of a downward displacement of the cerebellar tonsils and the medulla through the foramen magnum, sometimes causing hydrocephalus as a result of obstruction of cerebrospinal fluid outflow. Succinic semialdehyde dehydrogenase deficiency Succinic semialdehyde dehydrogenase deficiency is an autosomal-recessive gene disorder where mutations in the ALDH5A1 gene results in the accumulation of gamma-Hydroxybutyric acid (GHB) in the body. GHB accumulates in the nervous system and can cause ataxia as well as other neurological dysfunction. Wilson's disease Wilson's disease is an autosomal-recessive gene disorder whereby an alteration of the ATP7B gene results in an inability to properly excrete copper from the body. Copper accumulates in the liver and raises the toxicity levels in the nervous system causing demyelination of the nerves. This can cause ataxia as well as other neurological and organ impairments. Gluten ataxia Gluten ataxia is an autoimmune disease derived from celiac disease, which is triggered by the ingestion of gluten. 
Early diagnosis and treatment with a gluten-free diet can improve ataxia and prevent its progression. The effectiveness of the treatment depends on the elapsed time from the onset of the ataxia until diagnosis, because the death of neurons in the cerebellum as a result of gluten exposure is irreversible. It accounts for 40% of ataxias of unknown origin and 15% of all ataxias. Less than 10% of people with gluten ataxia present any gastrointestinal symptom and only about 40% have intestinal damage. This entity is classified among the primary autoimmune cerebellar ataxias (PACA). There is a continuum between presymptomatic ataxia and immune ataxias with clinical deficits. Potassium pump Malfunction of the sodium-potassium pump may be a factor in some ataxias. The Na+/K+ pump has been shown to control and set the intrinsic activity mode of cerebellar Purkinje neurons. This suggests that the pump might not simply be a homeostatic, "housekeeping" molecule for ionic gradients, but could be a computational element in the cerebellum and the brain. Indeed, an ouabain block of Na+/K+ pumps in the cerebellum of a live mouse results in it displaying ataxia and dystonia. Ataxia is observed at lower ouabain concentrations, while dystonia is observed at higher ouabain concentrations. Cerebellar ataxia associated with anti-GAD antibodies Antibodies against the enzyme glutamic acid decarboxylase (GAD: the enzyme converting glutamate into GABA) cause cerebellar deficits. The antibodies impair motor learning and cause behavioral deficits. GAD antibody-related ataxia is part of the group of immune-mediated cerebellar ataxias. The antibodies induce a synaptopathy. The cerebellum is particularly vulnerable to autoimmune disorders. Cerebellar circuitry has capacities to compensate and restore function thanks to the cerebellar reserve, which gathers multiple forms of plasticity. The term LTDpathies gathers the immune disorders targeting long-term depression (LTD), a form of plasticity. Diagnosis Imaging studies – A CT scan or MRI of the brain might help determine potential causes. An MRI can sometimes show shrinkage of the cerebellum and other brain structures in people with ataxia. It may also show other treatable findings, such as a blood clot or benign tumour, that could be pressing on the cerebellum. Lumbar puncture (spinal tap) – A needle is inserted into the lower back (lumbar region) between two lumbar vertebrae to obtain a sample of cerebrospinal fluid for testing. Genetic testing – Determines whether the mutation that causes one of the hereditary ataxic conditions is present. Tests are available for many but not all of the hereditary ataxias. Treatment The treatment of ataxia and its effectiveness depend on the underlying cause. Treatment may limit or reduce the effects of ataxia, but it is unlikely to eliminate them entirely. Recovery tends to be better in individuals with a single focal injury (such as stroke or a benign tumour), compared to those who have a neurological degenerative condition. A review of the management of degenerative ataxia was published in 2009. A small number of rare conditions presenting with prominent cerebellar ataxia are amenable to specific treatment, and recognition of these disorders is critical. Diseases include vitamin E deficiency, abetalipoproteinemia, cerebrotendinous xanthomatosis, Niemann–Pick type C disease, Refsum's disease, glucose transporter type 1 deficiency, episodic ataxia type 2, gluten ataxia, and glutamic acid decarboxylase ataxia. 
Novel therapies target the RNA defects associated with cerebellar disorders, using in particular anti-sense oligonucleotides. The movement disorders associated with ataxia can be managed by pharmacological treatments and through physical therapy and occupational therapy to reduce disability. Some drug treatments that have been used to control ataxia include: 5-hydroxytryptophan (5-HTP), idebenone, amantadine, physostigmine, L-carnitine or derivatives, trimethoprim/sulfamethoxazole, vigabatrin, phosphatidylcholine, acetazolamide, 4-aminopyridine, buspirone, and a combination of coenzyme Q10 and vitamin E. Physical therapy requires a focus on adapting activity and facilitating motor learning for retraining specific functional motor patterns. A recent systematic review suggested that physical therapy is effective, but there is only moderate evidence to support this conclusion. The most commonly used physical therapy interventions for cerebellar ataxia are vestibular habituation, Frenkel exercises, proprioceptive neuromuscular facilitation (PNF), and balance training; however, therapy is often highly individualized and gait and coordination training are large components of therapy. Current research suggests that, if a person is able to walk with or without a mobility aid, physical therapy should include an exercise program addressing five components: static balance, dynamic balance, trunk-limb coordination, stairs, and contracture prevention. Once the physical therapist determines that the individual is able to safely perform parts of the program independently, it is important that the individual be prescribed and regularly engage in a supplementary home exercise program that incorporates these components to further improve long term outcomes. These outcomes include balance tasks, gait, and individual activities of daily living. While the improvements are attributed primarily to changes in the brain and not just the hip or ankle joints, it is still unknown whether the improvements are due to adaptations in the cerebellum or compensation by other areas of the brain. Decomposition, simplification, or slowing of multijoint movement may also be an effective strategy that therapists may use to improve function in patients with ataxia. Training likely needs to be intense and focused—as indicated by one study performed with stroke patients experiencing limb ataxia who underwent intensive upper limb retraining. Their therapy consisted of constraint-induced movement therapy which resulted in improvements of their arm function. Treatment should likely include strategies to manage difficulties with everyday activities such as walking. Gait aids (such as a cane or walker) can be provided to decrease the risk of falls associated with impairment of balance or poor coordination. Severe ataxia may eventually lead to the need for a wheelchair. To obtain better results, possible coexisting motor deficits need to be addressed in addition to those induced by ataxia. For example, muscle weakness and decreased endurance could lead to increasing fatigue and poorer movement patterns. There are several assessment tools available to therapists and health care professionals working with patients with ataxia. The International Cooperative Ataxia Rating Scale (ICARS) is one of the most widely used and has been proven to have very high reliability and validity. 
Other tools that assess motor function, balance and coordination are also highly valuable to help the therapist track the progress of their patient, as well as to quantify the patient's functionality. These tests include, but are not limited to: the Berg Balance Scale; tandem walking (to test for tandem gait ability); the Scale for the Assessment and Rating of Ataxia (SARA); tapping tests, in which the person must quickly and repeatedly tap their arm or leg while the therapist monitors the amount of dysdiadochokinesia; and finger-nose testing, which has several variations including finger-to-therapist's finger, finger-to-finger, and alternate nose-to-finger. Other uses The term "ataxia" is sometimes used in a broader sense to indicate lack of coordination in some physiological process. Examples include optic ataxia (lack of coordination between visual inputs and hand movements, resulting in inability to reach and grab objects) and ataxic respiration (lack of coordination in respiratory movements, usually due to dysfunction of the respiratory centres in the medulla oblongata). Optic ataxia may be caused by lesions to the posterior parietal cortex, which is responsible for combining and expressing positional information and relating it to movement. Outputs of the posterior parietal cortex include the spinal cord, brain stem motor pathways, pre-motor and pre-frontal cortex, basal ganglia and the cerebellum. Some neurons in the posterior parietal cortex are modulated by intention. Optic ataxia is usually part of Balint's syndrome, but can be seen in isolation with injuries to the superior parietal lobule, as it represents a disconnection between visual-association cortex and the frontal premotor and motor cortex.
Biology and health sciences
Symptoms and signs
Health
991
https://en.wikipedia.org/wiki/Absolute%20value
Absolute value
In mathematics, the absolute value or modulus of a real number x, denoted |x|, is the non-negative value of x without regard to its sign. Namely, |x| = x if x is a positive number, |x| = −x if x is negative (in which case negating x makes −x positive), and |0| = 0. For example, the absolute value of 3 is 3 and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero. Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings. For example, an absolute value is also defined for the complex numbers, the quaternions, ordered rings, fields and vector spaces. The absolute value is closely related to the notions of magnitude, distance, and norm in various mathematical and physical contexts. Terminology and notation In 1806, Jean-Robert Argand introduced the term module, meaning unit of measure in French, specifically for the complex absolute value, and it was borrowed into English in 1866 as the Latin equivalent modulus. The term absolute value has been used in this sense from at least 1806 in French and 1857 in English. The notation |x|, with a vertical bar on each side, was introduced by Karl Weierstrass in 1841. Other names for absolute value include numerical value and magnitude. In programming languages and computational software packages, the absolute value of x is generally represented by abs(x), or a similar expression. The vertical bar notation also appears in a number of other mathematical contexts: for example, when applied to a set, it denotes its cardinality; when applied to a matrix, it denotes its determinant. Vertical bars denote the absolute value only for algebraic objects for which the notion of an absolute value is defined, notably an element of a normed division algebra, for example a real number, a complex number, or a quaternion. A closely related but distinct notation is the use of vertical bars for either the Euclidean norm or sup norm of a vector, although double vertical bars with subscripts (2 and ∞, respectively) are a more common and less ambiguous notation. Definition and properties Real numbers For any real number x, the absolute value or modulus of x is denoted |x|, with a vertical bar on each side of the quantity, and is defined as |x| = x if x ≥ 0, and |x| = −x if x < 0. The absolute value of x is thus always either a positive number or zero, but never negative. When x itself is negative (x < 0), then its absolute value is necessarily positive (|x| = −x > 0). From an analytic geometry point of view, the absolute value of a real number is that number's distance from zero along the real number line, and more generally the absolute value of the difference of two real numbers (their absolute difference) is the distance between them. The notion of an abstract distance function in mathematics can be seen to be a generalisation of the absolute value of the difference (see "Distance" below). Since the square root symbol represents the unique positive square root, when applied to a positive number, it follows that |x| = √(x²). This is equivalent to the definition above, and may be used as an alternative definition of the absolute value of real numbers. The absolute value has the following four fundamental properties (a and b are real numbers), that are used for generalization of this notion to other domains: non-negativity (|a| ≥ 0), positive-definiteness (|a| = 0 if and only if a = 0), multiplicativity (|ab| = |a| |b|), and subadditivity, also known as the triangle inequality (|a + b| ≤ |a| + |b|). Non-negativity, positive definiteness, and multiplicativity are readily apparent from the definition. To see that subadditivity holds, first note that |a + b| = s(a + b), where s = ±1, with its sign chosen to make the result positive. Now, since −1 · x ≤ |x| and +1 · x ≤ |x|, it follows that, whichever of ±1 is the value of s, one has s · x ≤ |x| for all real x. Consequently, |a + b| = s · (a + b) = s · a + s · b ≤ |a| + |b|, as desired. Some additional useful properties are given below. 
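Before listing further properties, the four fundamental ones above can be spot-checked numerically. The short Python sketch below is supplementary to the article (the sampling range and number of trials are arbitrary illustrative choices); it relies only on the built-in abs function.

import random

def check_absolute_value_properties(trials: int = 10_000) -> None:
    """Spot-check the four fundamental properties of |x| on random reals."""
    for _ in range(trials):
        a = random.uniform(-1e6, 1e6)
        b = random.uniform(-1e6, 1e6)
        assert abs(a) >= 0                        # non-negativity
        assert (abs(a) == 0) == (a == 0)          # positive-definiteness
        assert abs(a * b) == abs(a) * abs(b)      # multiplicativity
        assert abs(a + b) <= abs(a) + abs(b)      # subadditivity (triangle inequality)
    print("All four properties held on", trials, "random samples.")

check_absolute_value_properties()

Such a check is of course no substitute for the short proofs sketched above; it only illustrates the properties on sample values.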
These include idempotence (||a|| = |a|), evenness (|−a| = |a|), the identity of indiscernibles in the form |a − b| = 0 if and only if a = b, the triangle inequality in the form |a − b| ≤ |a − c| + |c − b|, preservation of division (|a/b| = |a| / |b| when b ≠ 0), and the reverse triangle inequality ||a| − |b|| ≤ |a − b|. These are either immediate consequences of the definition or implied by the four fundamental properties above. Two other useful properties concerning inequalities are: |a| ≤ b if and only if −b ≤ a ≤ b, and |a| ≥ b if and only if a ≤ −b or a ≥ b. These relations may be used to solve inequalities involving absolute values. For example: |x − 3| ≤ 9 if and only if −9 ≤ x − 3 ≤ 9, which holds if and only if −6 ≤ x ≤ 12. The absolute value, as "distance from zero", is used to define the absolute difference between arbitrary real numbers, the standard metric on the real numbers. Complex numbers Since the complex numbers are not ordered, the definition given at the top for the real absolute value cannot be directly applied to complex numbers. However, the geometric interpretation of the absolute value of a real number as its distance from 0 can be generalised. The absolute value of a complex number is defined by the Euclidean distance of its corresponding point in the complex plane from the origin. This can be computed using the Pythagorean theorem: for any complex number z = x + iy, where x and y are real numbers, the absolute value or modulus of z is denoted |z| and is defined by |z| = √(x² + y²), the Pythagorean addition of x and y, where x and y denote the real and imaginary parts of z, respectively. When the imaginary part y is zero, this coincides with the definition of the absolute value of the real number x. When a complex number z is expressed in its polar form as z = r(cos θ + i sin θ), with r ≥ 0, its absolute value is |z| = r. Since the product of any complex number z = x + iy and its complex conjugate x − iy, which has the same absolute value, is always the non-negative real number x² + y², the absolute value of a complex number z is the square root of this product, which is therefore called the absolute square or squared modulus of z. This generalizes the alternative definition for reals: |x| = √(x · x) = √(x²). The complex absolute value shares the four fundamental properties given above for the real absolute value. The identity |z²| = |z|² is a special case of multiplicativity that is often useful by itself. Absolute value function The real absolute value function is continuous everywhere. It is differentiable everywhere except for x = 0. It is monotonically decreasing on the interval (−∞, 0] and monotonically increasing on the interval [0, +∞). Since a real number and its opposite have the same absolute value, it is an even function, and is hence not invertible. The real absolute value function is a piecewise linear, convex function. For both real and complex numbers the absolute value function is idempotent (meaning that the absolute value of any absolute value is itself). Relationship to the sign function The absolute value function of a real number returns its value irrespective of its sign, whereas the sign (or signum) function returns a number's sign irrespective of its value. The following equations show the relationship between these two functions: |x| = x sgn(x), or equivalently x = |x| sgn(x), and for x ≠ 0, sgn(x) = x/|x| = |x|/x. Relationship to the max and min functions Let s and t be real numbers; then max(s, t) = (s + t + |t − s|)/2 and min(s, t) = (s + t − |t − s|)/2. Derivative The real absolute value function has a derivative for every x ≠ 0, but is not differentiable at x = 0. Its derivative for x ≠ 0 is given by the step function d|x|/dx = x/|x| = sgn(x), which equals −1 for x < 0 and +1 for x > 0. The real absolute value function is an example of a continuous function that achieves a global minimum where the derivative does not exist. The subdifferential of |x| at x = 0 is the interval [−1, 1]. The complex absolute value function is continuous everywhere but complex differentiable nowhere because it violates the Cauchy–Riemann equations. The second derivative of |x| with respect to x is zero everywhere except zero, where it does not exist. As a generalised function, the second derivative may be taken as two times the Dirac delta function. Antiderivative The antiderivative (indefinite integral) of the real absolute value function is ∫ |x| dx = x|x|/2 + C, where C is an arbitrary constant of integration. 
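The complex modulus and the sign, max, and min identities above can be made concrete with a small Python sketch. It is an illustrative check rather than anything drawn from the article; the sample values are arbitrary.

import math

z = 3 - 4j
# Modulus from the Pythagorean formula, from abs(), and from z times its conjugate.
assert abs(z) == math.hypot(z.real, z.imag) == 5.0
assert math.isclose(abs(z), math.sqrt((z * z.conjugate()).real))

def sgn(x: float) -> float:
    """Sign function for real x."""
    return 0.0 if x == 0 else math.copysign(1.0, x)

for x in (-2.5, 0.0, 7.0):
    assert abs(x) == x * sgn(x)          # |x| = x * sgn(x)
    if x != 0:
        assert sgn(x) == x / abs(x)      # sgn(x) = x / |x|

# max and min expressed through the absolute value.
for s, t in ((1.0, 4.0), (-3.0, 2.0), (5.0, 5.0)):
    assert max(s, t) == (s + t + abs(t - s)) / 2
    assert min(s, t) == (s + t - abs(t - s)) / 2

print("Complex modulus and sign/max/min identities verified on sample values.")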
The antiderivative above is not a complex antiderivative, because complex antiderivatives can only exist for complex-differentiable (holomorphic) functions, which the complex absolute value function is not. Derivatives of compositions The following two formulae are special cases of the chain rule: d/dx f(|x|) = (x/|x|) f′(|x|) if the absolute value is inside a function, and d/dx |f(x)| = (f(x)/|f(x)|) f′(x) if another function is inside the absolute value. The derivative is always discontinuous at x = 0 in the first case, and where f(x) = 0 in the second case. Distance The absolute value is closely related to the idea of distance. As noted above, the absolute value of a real or complex number is the distance from that number to the origin, along the real number line, for real numbers, or in the complex plane, for complex numbers, and more generally, the absolute value of the difference of two real or complex numbers is the distance between them. The standard Euclidean distance between two points a = (a₁, a₂, ..., aₙ) and b = (b₁, b₂, ..., bₙ) in Euclidean n-space is defined as: d(a, b) = √((a₁ − b₁)² + (a₂ − b₂)² + ... + (aₙ − bₙ)²). This can be seen as a generalisation, since for a and b real, i.e. in a 1-space, according to the alternative definition of the absolute value, |a − b| = √((a − b)²), which is the Euclidean distance between a and b; and for a = a₁ + i a₂ and b = b₁ + i b₂ complex numbers, i.e. in a 2-space, |a − b| = |(a₁ − b₁) + i(a₂ − b₂)| = √((a₁ − b₁)² + (a₂ − b₂)²), which is again the Euclidean distance. The above shows that the "absolute value"-distance, for real and complex numbers, agrees with the standard Euclidean distance, which they inherit as a result of considering them as one and two-dimensional Euclidean spaces, respectively. The properties of the absolute value of the difference of two real or complex numbers: non-negativity, identity of indiscernibles, symmetry and the triangle inequality given above, can be seen to motivate the more general notion of a distance function as follows: A real-valued function d on a set X is called a metric (or a distance function) on X, if it satisfies the following four axioms: d(a, b) ≥ 0 (non-negativity); d(a, b) = 0 if and only if a = b (identity of indiscernibles); d(a, b) = d(b, a) (symmetry); and d(a, b) ≤ d(a, c) + d(c, b) (triangle inequality). Generalizations Ordered rings The definition of absolute value given for real numbers above can be extended to any ordered ring. That is, if a is an element of an ordered ring R, then the absolute value of a, denoted by |a|, is defined to be a if a ≥ 0, and −a otherwise, where −a is the additive inverse of a, 0 is the additive identity, and < and ≥ have the usual meaning with respect to the ordering in the ring. Fields The four fundamental properties of the absolute value for real numbers can be used to generalise the notion of absolute value to an arbitrary field, as follows. A real-valued function v on a field F is called an absolute value (also a modulus, magnitude, value, or valuation) if it satisfies the following four axioms: v(a) ≥ 0 (non-negativity); v(a) = 0 if and only if a = 0 (positive-definiteness); v(ab) = v(a) v(b) (multiplicativity); and v(a + b) ≤ v(a) + v(b) (subadditivity or the triangle inequality), where 0 denotes the additive identity of F. It follows from positive-definiteness and multiplicativity that v(1) = 1, where 1 denotes the multiplicative identity of F. The real and complex absolute values defined above are examples of absolute values for an arbitrary field. If v is an absolute value on F, then the function d on F × F, defined by d(a, b) = v(a − b), is a metric and the following are equivalent: d satisfies the ultrametric inequality d(x, y) ≤ max(d(x, z), d(z, y)) for all x, y, z in F; the set of values v(1 + 1 + ... + 1), over all finite numbers of summands, is bounded in R; v(1 + 1 + ... + 1) ≤ 1 for every finite number of summands; v(a) ≤ 1 implies v(1 + a) ≤ 1 for all a in F; and v(a + b) ≤ max(v(a), v(b)) for all a, b in F. An absolute value which satisfies any (hence all) of the above conditions is said to be non-Archimedean, otherwise it is said to be Archimedean. 
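A standard concrete instance of a non-Archimedean absolute value is the p-adic absolute value on the rational numbers. The Python sketch below is supplementary to the article (the choice p = 5 and the test values are arbitrary); it computes |x|_p = p^(−v), where v is the exponent of p in the factorisation of x, and checks multiplicativity and the ultrametric inequality.

from fractions import Fraction

def p_adic_abs(x: Fraction, p: int) -> Fraction:
    """p-adic absolute value |x|_p = p**(-v), with v the exponent of p in x."""
    if x == 0:
        return Fraction(0)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:          # factors of p in the numerator count positively
        num //= p
        v += 1
    while den % p == 0:          # factors of p in the denominator count negatively
        den //= p
        v -= 1
    return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v))

p = 5
a, b = Fraction(75), Fraction(2, 25)
# Multiplicativity and the ultrametric (strong triangle) inequality:
assert p_adic_abs(a * b, p) == p_adic_abs(a, p) * p_adic_abs(b, p)
assert p_adic_abs(a + b, p) <= max(p_adic_abs(a, p), p_adic_abs(b, p))
# |n|_p stays bounded by 1 for integers n, the hallmark of a non-Archimedean absolute value.
assert all(p_adic_abs(Fraction(n), p) <= 1 for n in range(1, 200))
print("p-adic checks passed for p =", p)

The bounded values on the integers contrast with the ordinary (Archimedean) absolute value, for which |1 + 1 + ... + 1| grows without bound.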
Vector spaces Again the fundamental properties of the absolute value for real numbers can be used, with a slight modification, to generalise the notion to an arbitrary vector space. A real-valued function on a vector space V over a field F, represented as ‖·‖, is called an absolute value, but more usually a norm, if it satisfies the following axioms: for all a in F, and v, u in V, ‖v‖ ≥ 0 (non-negativity); ‖v‖ = 0 if and only if v = 0 (positive-definiteness); ‖a v‖ = |a| ‖v‖ (absolute homogeneity or positive scalability); and ‖v + u‖ ≤ ‖v‖ + ‖u‖ (subadditivity or the triangle inequality). The norm of a vector is also called its length or magnitude. In the case of Euclidean space Rⁿ, the function defined by ‖(x₁, x₂, ..., xₙ)‖ = √(x₁² + x₂² + ... + xₙ²) is a norm called the Euclidean norm. When the real numbers R are considered as the one-dimensional vector space R¹, the absolute value is a norm, and is the p-norm (see Lp space) for any p. In fact the absolute value is the "only" norm on R¹, in the sense that, for every norm ‖·‖ on R¹, ‖x‖ = ‖1‖ · |x|. The complex absolute value is a special case of the norm in an inner product space, which is identical to the Euclidean norm when the complex plane is identified as the Euclidean plane R². Composition algebras Every composition algebra A has an involution x → x* called its conjugation. The product in A of an element x and its conjugate x* is written N(x) = x x* and called the norm of x. The real numbers R, complex numbers C, and quaternions H are all composition algebras with norms given by definite quadratic forms. The absolute value in these division algebras is given by the square root of the composition algebra norm. In general the norm of a composition algebra may be a quadratic form that is not definite and has null vectors. However, as in the case of division algebras, when an element x has a non-zero norm, then x has a multiplicative inverse given by x*/N(x).
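The norm axioms and the composition-algebra description can be tied together with a short Python sketch that computes the Euclidean norm of a real vector and the quaternion norm N(x) = x x*, recovering the absolute value as its square root and the multiplicative inverse as x*/N(x). The Quaternion class below is a minimal stand-in written for illustration (an assumption of this sketch, not a library type).

import math
from dataclasses import dataclass

def euclidean_norm(v: list) -> float:
    """Euclidean norm: the square root of the sum of squared components."""
    return math.sqrt(sum(x * x for x in v))

@dataclass
class Quaternion:
    w: float
    x: float
    y: float
    z: float

    def conjugate(self) -> "Quaternion":
        return Quaternion(self.w, -self.x, -self.y, -self.z)

    def __mul__(self, other: "Quaternion") -> "Quaternion":
        a, b, c, d = self.w, self.x, self.y, self.z
        e, f, g, h = other.w, other.x, other.y, other.z
        return Quaternion(
            a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e,
        )

def quaternion_norm(q: Quaternion) -> float:
    """Composition-algebra norm N(q) = q q*, a non-negative real number."""
    n = q * q.conjugate()
    return n.w  # the imaginary (vector) part of q q* vanishes

q = Quaternion(1.0, 2.0, 3.0, 4.0)
N = quaternion_norm(q)                    # 1 + 4 + 9 + 16 = 30
absolute_value = math.sqrt(N)             # |q| = sqrt(N(q))
c = q.conjugate()
inverse = Quaternion(c.w / N, c.x / N, c.y / N, c.z / N)   # q* / N(q)
print(euclidean_norm([3.0, 4.0]), N, absolute_value)

For the sample quaternion, the norm N(q) is 30 and the absolute value is its square root, matching the general description of the absolute value in a division algebra as the square root of the composition-algebra norm.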
Mathematics
Specific functions
null
994
https://en.wikipedia.org/wiki/Arecales
Arecales
Arecales is an order of flowering plants. The order has been widely named as such only for the past few decades; until then, the accepted name for the order including these plants was Principes. The order includes the palms and their relatives. Taxonomy The APG IV system of 2016 places Dasypogonaceae in this order, after studies showing Dasypogonaceae as sister to Arecaceae. However, this decision has been called into question. Historical taxonomical systems The Cronquist system of 1981 assigned the order to the subclass Arecidae in the class Liliopsida (= monocotyledons). The Thorne system (1992) and the Dahlgren system assigned the order to the superorder Areciflorae, also called Arecanae, in the subclass Liliidae (= monocotyledons), with the single family Arecaceae. The APG II system of 2003 recognised the order and placed it in the clade commelinids in the monocots, using the circumscription: order Arecales, containing the single family Arecaceae (alternative name Palmae). This was unchanged from the APG system of 1998, although that system used the spelling "commelinoids" instead of commelinids. Principes In plant taxonomy, Principes is a botanical name meaning "the first". It was used in the Engler system for an order in the Monocotyledones and later in the Kubitzki system. This order included only one family, the Palmae (alternative name Arecaceae). Because the rules of botanical nomenclature allow such descriptive names above the rank of family, the name may still be used today, but in practice most systems prefer the name Arecales. Principes was also the name of the journal of the International Palm Society, which was renamed Palms in 1999.
Biology and health sciences
Arecales (inc. Palms)
Plants
1004
https://en.wikipedia.org/wiki/April
April
April is the fourth month of the year in the Gregorian and Julian calendars. Its length is 30 days. April is commonly associated with the season of spring in the Northern Hemisphere, and autumn in the Southern Hemisphere, where it is the seasonal equivalent to October in the Northern Hemisphere and vice versa. History The Romans gave this month the Latin name Aprilis but the derivation of this name is uncertain. The traditional etymology is from the verb aperire, "to open," in allusion to its being the season when trees and flowers begin to "open," which is supported by comparison with the modern Greek use of άνοιξη (ánixi) (opening) for spring. Since some of the Roman months were named in honor of divinities, and as April was sacred to the goddess Venus, her Veneralia being held on the first day, it has been suggested that Aprilis was originally her month Aphrilis, from her equivalent Greek goddess name Aphrodite (Aphros), or the Etruscan name Apru. Jacob Grimm suggests the name of a hypothetical god or hero, Aper or Aprus. April was the second month of the earliest Roman calendar, before Ianuarius and Februarius were added by King Numa Pompilius about 700 BC. It became the fourth month of the calendar year (the year when twelve months are displayed in order) during the time of the decemvirs about 450 BC, when it was 29 days long. The 30th day was added back during the reform of the calendar undertaken by Julius Caesar in the mid-40s BC, which produced the Julian calendar. The Anglo-Saxons called April ēastre-monaþ. The Venerable Bede says in The Reckoning of Time that this month ēastre is the root of the word Easter. He further states that the month was named after a goddess Eostre whose feast was in that month. It is also attested by Einhard in his work Vita Karoli Magni. St George's day is the twenty-third of the month; and St Mark's Eve, with its superstition that the ghosts of those who are doomed to die within the year will be seen to pass into the church, falls on the twenty-fourth. In China the symbolic ploughing of the earth by the emperor and princes of the blood took place in their third month, which frequently corresponds to April. In Finnish, April is huhtikuu, meaning slash-and-burn moon, when gymnosperms for beat and burn clearing of farmland were felled. In Slovene, the most established traditional name is mali traven, the month when plants start growing. It was first written in 1466 in the Škofja Loka manuscript. The month April originally had 30 days; Numa Pompilius made it 29 days long; finally, Julius Caesar's calendar reform made it 30 days long again, which was not changed in the calendar revision of Augustus Caesar in 8 BC. In Ancient Rome, the festival of Cerealia was held for seven days from mid-to-late April, but exact dates are still being determined. Feriae Latinae was also held in April, with the date varying. Other ancient Roman observances include Veneralia (April 1), Megalesia (April 10–16), Fordicidia (April 15), Parilia (April 21), Vinalia Urbana (April 23), Robigalia (April 25), and Serapia (April 25). Floralia was held April 27 during the Republican era, or April 28 on the Julian calendar, and lasted until May 3. However, these dates do not correspond to the modern Gregorian calendar. The Lyrids meteor shower appears on April 16 – April 26 each year, with the peak generally occurring on April 22. The Eta Aquariids meteor shower also appears in April. It is visible from April 21 to May 20 each year, with peak activity on or around May 6. 
The Pi Puppids appear on April 23, but only in years around the parent comet's perihelion date. The Virginids also shower at various dates in April. The "Days of April" (journées d'avril) is a name assigned in French history to a series of insurrections at Lyons, Paris and elsewhere, against the government of Louis Philippe in 1834, which led to violent repressive measures, and to a famous trial known as the procès d'avril. Symbols April's birthstone is the diamond. The birth flower is the common daisy (Bellis perennis) or the sweet pea. The zodiac signs are Aries (until April 19) and Taurus (April 20 onward). Observances This list does not necessarily imply official status or general observance. Month-long In Catholic, Protestant and Orthodox tradition, April is the Month of the Resurrection of the Lord. April and March are the months in which the moveable Feast of Easter Sunday is celebrated. National Pet Month (United Kingdom) United States Arab American Heritage Month Autism Awareness Month Cancer Control Month Community College Awareness Month Confederate History Month (Alabama, Florida, Georgia, Louisiana, Mississippi, Texas, Virginia) Financial Literacy Month Jazz Appreciation Month Mathematics and Statistics Awareness Month Month of the Military Child National Poetry Month National Poetry Writing Month Occupational Therapy Month National Prevent Child Abuse Month National Volunteer Month Parkinson's Disease Awareness Month Rosacea Awareness Month Sexual Assault Awareness Month United States food months Fresh Florida Tomato Month National Food Month National Grilled Cheese Month National Pecan Month National Soft Pretzel Month National Soyfoods Month Non-Gregorian (All Baha'i, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.) 
List of observances set by the Bahá'í calendar List of observances set by the Chinese calendar List of observances set by the Hebrew calendar List of observances set by the Islamic calendar List of observances set by the Solar Hijri calendar Movable Variable; 2021 dates shown Youth Homelessness Matters Day National Health Day (Kiribati): April 6 Oral, Head and Neck Cancer Awareness Week (United States): April 13–19 National Park Week (United States): April 18–26 Crime Victims' Rights Week (United States): April 19–25 National Volunteer Week: April 19–25 European Immunization Week: April 20–26 Day of Silence (United States): April 24 Pay It Forward Day: April 28 (International observance) Denim Day: April 29 (International observance) Day of Dialogue (United States) Vaccination Week In The Americas See: List of movable Western Christian observances See: List of movable Eastern Christian observances First Wednesday National Day of Hope (United States) First Saturday Ulcinj Municipality Day (Ulcinj, Montenegro) First Sunday Daylight saving time ends (Australia and New Zealand) Geologists Day (former Soviet Union countries) Kanamara Matsuri (Kawasaki, Japan) Opening Day (United States) First full week National Library Week (United States) National Library Workers Day (United States) (Tuesday of National Library week, April 9 in 2024) National Bookmobile Day (Wednesday of National Library week, April 10 in 2024) National Public Health Week (United States) National Public Safety Telecommunicators Week (United States) Second Wednesday International Day of Pink Second Thursday National Former Prisoner of War Recognition Day (United States) Second Friday Fast and Prayer Day (Liberia) Air Force Day (Russia) Kamakura Matsuri at Tsurugaoka Hachiman (Kamakura, Japan), lasts until third Sunday. Second Sunday Children's Day (Peru) Week of April 14 Pan-American Week (United States) Third Wednesday Administrative Professionals' Day (New Zealand) Third Thursday National High Five Day (United States) Third Saturday Record Store Day (International observance) Last full week of April Administrative Professionals Week (Malaysia, North America) World Immunization Week Week of April 23 Canada Book Week (Canada) Week of the New Moon National Dark-Sky Week (United States) Third Monday Patriots' Day (Massachusetts, Maine, United States) Queen's Official Birthday (Saint Helena, Ascension and Tristan da Cunha) Sechseläuten (Zürich, Switzerland) Wednesday of last full week of April Administrative Professionals' Day (Hong Kong, North America) First Thursday after April 18 First Day of Summer (Iceland) Fourth Thursday Take Our Daughters And Sons To Work Day (United States) Last Friday Arbor Day (United States) Día de la Chupina (Rosario, Argentina) Last Friday in April to first Sunday in May Arbour Week in Ontario Last Saturday Children's Day (Colombia) National Rebuilding Day (United States) National Sense of Smell Day (United States) World Tai Chi and Qigong Day Last Sunday Flag Day (Åland, Finland) Turkmen Racing Horse Festival (Turkmenistan) April 27 (April 26 if April 27 is a Sunday) Koningsdag (Netherlands) Last Monday Confederate Memorial Day (Alabama, Georgia (U.S. 
state), and Mississippi, United States) Last Wednesday International Noise Awareness Day Fixed April 1 April Fools' Day Arbor Day (Tanzania) Civil Service Day (Thailand) Cyprus National Day (Cyprus) Edible Book Day Fossil Fools Day Kha b-Nisan (Assyrian people) National Civil Service Day (Thailand) Odisha Day (Odisha, India) Start of Testicular Cancer Awareness week (United States), April 1–7 Season for Nonviolence January 30 – April 4 April 2 International Children's Book Day (International observance) Malvinas Day (Argentina) National Peanut Butter and Jelly Day (United States) Thai Heritage Conservation Day (Thailand) Unity of Peoples of Russia and Belarus Day (Belarus) World Autism Awareness Day (International observance) April 3 April 4 Children's Day (Hong Kong, Taiwan) Independence Day (Senegal) International Day for Mine Awareness and Assistance in Mine Action Peace Day (Angola) April 5 Children's Day (Palestinian territories) National Caramel Day (United States) Sikmogil (South Korea) April 6 Chakri Day (Thailand) National Beer Day (United Kingdom) New Beer's Eve (United States) Tartan Day (United States & Canada) April 7 Flag Day (Slovenia) Genocide Memorial Day (Rwanda), and its related observance: International Day of Reflection on the 1994 Rwanda Genocide (United Nations) Motherhood and Beauty Day (Armenia) National Beer Day (United States) Sheikh Abeid Amani Karume Day (Tanzania) Women's Day (Mozambique) World Health Day (International observance) April 8 Buddha's Birthday (Japan only, other countries follow different calendars) Feast of the First Day of the Writing of the Book of the Law (Thelema) International Romani Day (International observance) April 9 Anniversary of the German Invasion of Denmark (Denmark) Baghdad Liberation Day (Iraqi Kurdistan) Constitution Day (Kosovo) Day of National Unity (Georgia) Day of the Finnish Language (Finland) Day of Valor or Araw ng Kagitingan (Philippines) Feast of the Second Day of the Writing of the Book of the Law (Thelema) International Banshtai Tsai Day Martyr's Day (Tunisia) National Former Prisoner of War Recognition Day (United States) Remembrance for Haakon Sigurdsson (The Troth) Vimy Ridge Day (Canada) April 10 Day of the Builder (Azerbaijan) Feast of the Third Day of the Writing of the Book of the Law (Thelema) Siblings Day (International observance) April 11 Juan Santamaría Day, anniversary of his death in the Second Battle of Rivas. 
(Costa Rica) International Louie Louie Day National Cheese Fondue Day (United States) World Parkinson's Day April 12 Children's Day (Bolivia and Haiti) Commemoration of first human in space by Yuri Gagarin: Cosmonautics Day (Russia) International Day of Human Space Flight Yuri's Night (International observance) Halifax Day (North Carolina) National Grilled Cheese Sandwich Day (United States) National Redemption Day (Liberia) April 13 Jefferson's Birthday (United States) Katyn Memorial Day (Poland) Teacher's Day (Ecuador) First day of Thingyan (Myanmar) (April 13–16) Unfairly Prosecuted Persons Day (Slovakia) April 14 ʔabusibaree (Okinawa Islands, Japan) Ambedkar Jayanti (India) Black Day (South Korea) Commemoration of Anfal Genocide Against the Kurds (Iraqi Kurdistan) Dhivehi Language Day (Maldives) Day of Mologa (Yaroslavl Oblast, Russia) Day of the Georgian language (Georgia (country)) Season of Emancipation (April 14 to August 23) (Barbados) N'Ko Alphabet Day (Mande speakers) Pohela Boishakh (Bangladesh) Pana Sankranti (Odisha, India) Puthandu (Tamils) (India, Malaysia, Singapore, Sri Lanka) Second day of Songkran (Thailand) (Thailand) Pan American Day (several countries in the Americas) The first day of Takayama Spring Festival (Takayama, Gifu, Japan) Vaisakh (Punjab (region)), (India and Pakistan) Youth Day (Angola) April 15 Day of the Sun (North Korea). Hillsborough Disaster Memorial (Liverpool, England) Jackie Robinson Day (United States) Pohela Boishakh (West Bengal, India) (Note: celebrated on April 14 in Bangladesh) Last day of Songkran (Thailand) (Thailand) Tax Day, the official deadline for filing an individual tax return (or requesting an extension). (United States, Philippines) Universal Day of Culture World Art Day April 16 Birthday of José de Diego (Puerto Rico, United States) Birthday of Queen Margrethe II (Denmark) Emancipation Day (Washington, D.C., United States) Foursquare Day (International observance) Memorial Day for the Victims of the Holocaust (Hungary) National Healthcare Decisions Day (United States) Remembrance of Chemical Attack on Balisan and Sheikh Wasan (Iraqi Kurdistan) World Voice Day April 17 Evacuation Day (Syria) FAO Day (Iraq) Flag Day (American Samoa) Malbec World Day National Cheeseball Day (United States) National Espresso Day (Italy) Women's Day (Gabon) World Hemophilia Day April 18 Anniversary of the Victory over the Teutonic Knights in the Battle of the Ice, 1242 (Russia) Army Day (Iran) Coma Patients' Day (Poland) Friend's Day (Brazil) Independence Day (Zimbabwe) International Day For Monuments and Sites Invention Day (Japan) April 19 Army Day (Brazil) Beginning of the Independence Movement (Venezuela) Bicycle Day Dutch-American Friendship Day (United States) Holocaust Remembrance Day (Poland) Indigenous Peoples Day (Brazil) King Mswati III's birthday (Eswatini) Landing of the 33 Patriots Day (Uruguay) National Garlic Day (United States) National Rice Ball Day (United States) Primrose Day (United Kingdom) April 20 420 (cannabis culture) (International) UN Chinese Language Day (United Nations) April 21 Natale di Roma(Italy) A&M Day (Texas A&M University) Civil Service Day (India) Day of Local Self-Government (Russia) Grounation Day (Rastafari movement) Heroic Defense of Veracruz (Mexico) Kang Pan-sok's Birthday (North Korea) Kartini Day (Indonesia) Local Self Government Day (Russia) National Tree Planting Day (Kenya) San Jacinto Day (Texas) Queen's Official Birthday (Falkland Islands) Tiradentes' Day (Brazil) Vietnam Book Day (Vietnam) April 22 
Discovery Day (Brazil) Earth Day (International observance) and its related observance: International Mother Earth Day Holocaust Remembrance Day (Serbia) National Jelly Bean Day (United States) April 23 Castile and León Day (Castile and León, Spain) German Beer Day (Germany) Independence Day (Conch Republic, Key West, Florida) International Pixel-Stained Technopeasant Day Khongjom Day (Manipur, India) National Sovereignty and Children's Day (Turkey and Northern Cyprus) Navy Day (China) St George's Day (England) and its related observances: Canada Book Day (Canada) La Diada de Sant Jordi (Catalonia, Spain) World Book Day UN English Language Day (United Nations) April 24 Armenian Genocide Remembrance Day (Armenia) Concord Day (Niger) Children's Day (Zambia) Democracy Day (Nepal) Fashion Revolution Day Flag Day (Ireland) International Sculpture Day Kapyong Day (Australia) Labour Safety Day (Bangladesh) National Panchayati Raj Day (India) National Pigs in a Blanket Day (United States) Republic Day (The Gambia) St Mark's Eve (Western Christianity) World Day for Laboratory Animals April 25 Anniversary of the First Cabinet of Kurdish Government (Iraqi Kurdistan) Anzac Day (Australia, New Zealand) Arbor Day (Germany) DNA Day Feast of Saint Mark (Western Christianity) Flag Day (Faroe Islands) Flag Day (Eswatini) Freedom Day (Portugal) Liberation Day (Italy) Major Rogation (Western Christianity) Military Foundation Day (North Korea) National Zucchini Bread Day (United States) Parental Alienation Awareness Day Red Hat Society Day Sinai Liberation Day (Egypt) World Malaria Day April 26 Chernobyl disaster related observances: Memorial Day of Radiation Accidents and Catastrophes (Russia) Day of Remembrance of the Chernobyl tragedy (Belarus) Confederate Memorial Day (Florida, United States) Hug A Friend Day Lesbian Visibility Day National Pretzel Day (United States) Old Permic Alphabet Day Union Day (Tanzania) World Intellectual Property Day April 27 Day of Russian Parliamentarism (Russia) Day of the Uprising Against the Occupying Forces (Slovenia) Flag Day (Moldova) Freedom Day (South Africa) UnFreedom Day Independence Day (Sierra Leone) Independence Day (Togo) National Day (Mayotte) National Day (Sierra Leone) National Prime Rib Day (United States) National Veterans' Day (Finland) April 28 Lawyers' Day (Orissa, India) Mujahideen Victory Day (Afghanistan) National Day (Sardinia, Italy) National Heroes Day (Barbados) Restoration of Sovereignty Day (Japan) Workers' Memorial Day and World Day for Safety and Health at Work (international) National Day of Mourning (Canada) April 29 Day of Remembrance for all Victims of Chemical Warfare (United Nations) International Dance Day (UNESCO) Princess Bedike's Birthday (Denmark) National Shrimp Scampi Day (United States) Shōwa Day, traditionally the start of the Golden Week holiday period, which is April 29 and May 3–5. (Japan) April 30 Armed Forces Day (Georgia (country)) Birthday of the King (Sweden) Camarón Day (French Foreign Legion) Children's Day (Mexico) Consumer Protection Day (Thailand) Honesty Day (United States) International Jazz Day (UNESCO) Martyr's Day (Pakistan) May Eve, the eve of the first day of summer in the Northern hemisphere (see May 1): Beltane begins at sunset in the Northern hemisphere, Samhain begins at sunset in the Southern hemisphere. 
(Neo-Druidic Wheel of the Year) Carodejnice (Czech Republic and Slovakia) Walpurgis Night (Central and Northern Europe) National Persian Gulf Day (Iran) Reunification Day (Vietnam) Russian State Fire Service Day (Russia) Tax Day (Canada) Teachers' Day (Paraguay)
Technology
Months
null
1005
https://en.wikipedia.org/wiki/August
August
August is the eighth month of the year in the Julian and Gregorian calendars. Its length is 31 days. In the Southern Hemisphere, August is the seasonal equivalent of February in the Northern Hemisphere. In the Northern Hemisphere, August falls in summer. In the Southern Hemisphere, the month falls during winter. In many European countries, August is the holiday month for most workers. Numerous religious holidays occurred during August in ancient Rome. Certain meteor showers take place in August. The Kappa Cygnids occur in August, with yearly dates varying. The Alpha Capricornids meteor shower occurs as early as July 10 and ends around August 10. The Southern Delta Aquariids occur from mid-July to mid-August, with the peak usually around July 28–29. The Perseids, a major meteor shower, typically takes place between July 17 and August 24, with the peak days varying yearly. The star cluster of Messier 30 is best observed around August. Among the aborigines of the Canary Islands, especially among the Guanches of Tenerife, the month of August received the name of Beñesmer or Beñesmen, which was also the harvest festival held that month. The month was originally named Sextilis in Latin because it was the 6th month in the original ten-month Roman calendar under Romulus in 753 BC, with March being the first month of the year. About 700 BC, it became the eighth month when January and February were added to the year before March by King Numa Pompilius, who also gave it 29 days. Julius Caesar added two days when he created the Julian calendar in , giving it its modern length of 31 days. In 8 BC, the month was renamed in honor of Emperor Augustus. According to a Senatus consultum quoted by Macrobius, he chose this month because it was the time of several of his great triumphs, including the conquest of Egypt. Commonly repeated lore has it that August has 31 days because Augustus wanted his month to match the length of Julius Caesar's July, but this is an invention of the 13th century scholar Johannes de Sacrobosco. Sextilis had 31 days before it was renamed. It was not chosen for its length. Symbols August's birthstones are the peridot, sardonyx, and spinel. Its birth flower is the gladiolus or poppy, meaning beauty, strength of character, love, marriage and family. The Western zodiac signs are Leo (until August 22) and Virgo (from August 23 onward). Observances This list does not necessarily imply official status or general observance. Non-Gregorian: dates (All Baha'i, Islamic, and Jewish observances begin at sundown before the listed date and end at sundown on the date in question unless otherwise noted.) 
List of observances set by the Bahá'í calendar List of observances set by the Chinese calendar List of observances set by the Hebrew calendar List of observances set by the Islamic calendar List of observances set by the Solar Hijri calendar Month-long Women's Month (South Africa) American Adventures Month (celebrates vacationing in the Americas) Children's Eye Health and Safety Month Digestive Tract Paralysis (DTP) Month Get Ready for Kindergarten Month Happiness Happens Month Month of Philippine Languages or Buwan ng Wika (Philippines) Neurosurgery Outreach Month Psoriasis Awareness Month Spinal Muscular Atrophy Awareness Month What Will Be Your Legacy Month United States month-long National Black Business Month National Children's Vision and Learning Month National Immunization Awareness Month National Princess Peach Month National Water Quality Month National Win with Civility Month Food months in the United States National Catfish Month National Dippin' Dots Month Family Meals Month National Goat Cheese Month. National Panini Month Peach Month Sandwich Month Moveable Gregorian National Science Week (Australia)
Technology
Months
null
1014
https://en.wikipedia.org/wiki/Alcohol%20%28chemistry%29
Alcohol (chemistry)
In chemistry, an alcohol (), is a type of organic compound that carries at least one hydroxyl () functional group bound to a saturated carbon atom. Alcohols range from the simple, like methanol and ethanol, to complex, like sugars and cholesterol. The presence of an OH group strongly modifies the properties of hydrocarbons, conferring hydrophilic (water-loving) properties. The OH group provides a site at which many reactions can occur. History The flammable nature of the exhalations of wine was already known to ancient natural philosophers such as Aristotle (384–322 BCE), Theophrastus (–287 BCE), and Pliny the Elder (23/24–79 CE). However, this did not immediately lead to the isolation of alcohol, even despite the development of more advanced distillation techniques in second- and third-century Roman Egypt. An important recognition, first found in one of the writings attributed to Jābir ibn Ḥayyān (ninth century CE), was that by adding salt to boiling wine, which increases the wine's relative volatility, the flammability of the resulting vapors may be enhanced. The distillation of wine is attested in Arabic works attributed to al-Kindī (–873 CE) and to al-Fārābī (–950), and in the 28th book of al-Zahrāwī's (Latin: Abulcasis, 936–1013) Kitāb al-Taṣrīf (later translated into Latin as Liber servatoris). In the twelfth century, recipes for the production of aqua ardens ("burning water", i.e., alcohol) by distilling wine with salt started to appear in a number of Latin works, and by the end of the thirteenth century, it had become a widely known substance among Western European chemists. The works of Taddeo Alderotti (1223–1296) describe a method for concentrating alcohol involving repeated fractional distillation through a water-cooled still, by which an alcohol purity of 90% could be obtained. The medicinal properties of ethanol were studied by Arnald of Villanova (1240–1311 CE) and John of Rupescissa (–1366), the latter of whom regarded it as a life-preserving substance able to prevent all diseases (the aqua vitae or "water of life", also called by John the quintessence of wine). Nomenclature Etymology The word "alcohol" derives from the Arabic kohl (), a powder used as an eyeliner. The first part of the word () is the Arabic definite article, equivalent to the in English. The second part of the word () has several antecedents in Semitic languages, ultimately deriving from the Akkadian (), meaning stibnite or antimony. Like its antecedents in Arabic and older languages, the term alcohol was originally used for the very fine powder produced by the sublimation of the natural mineral stibnite to form antimony trisulfide . It was considered to be the essence or "spirit" of this mineral. It was used as an antiseptic, eyeliner, and cosmetic. Later the meaning of alcohol was extended to distilled substances in general, and then narrowed again to ethanol, when "spirits" was a synonym for hard liquor. Paracelsus and Libavius both used the term alcohol to denote a fine powder, the latter speaking of an alcohol derived from antimony. At the same time Paracelsus uses the word for a volatile liquid; alcool or alcool vini occurs often in his writings. Bartholomew Traheron, in his 1543 translation of John of Vigo, introduces the word as a term used by "barbarous" authors for "fine powder." Vigo wrote: "the barbarous auctours use alcohol, or (as I fynde it sometymes wryten) alcofoll, for moost fine poudre." The 1657 Lexicon Chymicum, by William Johnson glosses the word as "antimonium sive stibium." 
By extension, the word came to refer to any fluid obtained by distillation, including "alcohol of wine," the distilled essence of wine. Libavius in Alchymia (1594) refers to "". Johnson (1657) glosses alcohol vini as "." The word's meaning became restricted to "spirit of wine" (the chemical known today as ethanol) in the 18th century and was extended to the class of substances so-called as "alcohols" in modern chemistry after 1850. The term ethanol was invented in 1892, blending "ethane" with the "-ol" ending of "alcohol", which was generalized as a libfix. The term alcohol originally referred to the primary alcohol ethanol (ethyl alcohol), which is used as a drug and is the main alcohol present in alcoholic drinks. The suffix -ol appears in the International Union of Pure and Applied Chemistry (IUPAC) chemical name of all substances where the hydroxyl group is the functional group with the highest priority. When a higher priority group is present in the compound, the prefix hydroxy- is used in its IUPAC name. The suffix -ol in non-IUPAC names (such as paracetamol or cholesterol) also typically indicates that the substance is an alcohol. However, some compounds that contain hydroxyl functional groups have trivial names that do not include the suffix -ol or the prefix hydroxy-, e.g. the sugars glucose and sucrose. Systematic names IUPAC nomenclature is used in scientific publications, and in writings where precise identification of the substance is important. In naming simple alcohols, the name of the alkane chain loses the terminal e and adds the suffix -ol, e.g., as in "ethanol" from the alkane chain name "ethane". When necessary, the position of the hydroxyl group is indicated by a number between the alkane name and the -ol: propan-1-ol for , propan-2-ol for . If a higher priority group is present (such as an aldehyde, ketone, or carboxylic acid), then the prefix hydroxy-is used, e.g., as in 1-hydroxy-2-propanone (). Compounds having more than one hydroxy group are called polyols. They are named using suffixes -diol, -triol, etc., following a list of the position numbers of the hydroxyl groups, as in propane-1,2-diol for CH3CH(OH)CH2OH (propylene glycol). In cases where the hydroxy group is bonded to an sp2 carbon on an aromatic ring, the molecule is classified separately as a phenol and is named using the IUPAC rules for naming phenols. Phenols have distinct properties and are not classified as alcohols. Common names In other less formal contexts, an alcohol is often called with the name of the corresponding alkyl group followed by the word "alcohol", e.g., methyl alcohol, ethyl alcohol. Propyl alcohol may be n-propyl alcohol or isopropyl alcohol, depending on whether the hydroxyl group is bonded to the end or middle carbon on the straight propane chain. As described under systematic naming, if another group on the molecule takes priority, the alcohol moiety is often indicated using the "hydroxy-" prefix. In archaic nomenclature, alcohols can be named as derivatives of methanol using "-carbinol" as the ending. For instance, can be named trimethylcarbinol. Primary, secondary, and tertiary Alcohols are then classified into primary, secondary (sec-, s-), and tertiary (tert-, t-), based upon the number of carbon atoms connected to the carbon atom that bears the hydroxyl functional group. The respective numeric shorthands 1°, 2°, and 3° are sometimes used in informal settings. The primary alcohols have general formulas . 
The simplest primary alcohol is methanol (CH3OH), for which R = H, and the next is ethanol, for which R = CH3, the methyl group. Secondary alcohols are those of the form RR'CHOH, the simplest of which is 2-propanol ((CH3)2CHOH). For the tertiary alcohols, the general form is RR'R"COH. The simplest example is tert-butanol (2-methylpropan-2-ol), for which each of R, R', and R" is CH3. In these shorthands, R, R', and R" represent substituents, alkyl or other attached, generally organic, groups. Examples Applications Alcohols have a long history of myriad uses. For simple mono-alcohols, which are the focus of this article, the following are the most important industrial alcohols: methanol, mainly for the production of formaldehyde and as a fuel additive; ethanol, mainly for alcoholic beverages, as a fuel additive and solvent, and to sterilize hospital instruments; 1-propanol, 1-butanol, and isobutyl alcohol, used as solvents and precursors to solvents; C6–C11 alcohols, used for plasticizers, e.g. in polyvinylchloride; and fatty alcohols (C12–C18), precursors to detergents. Methanol is the most common industrial alcohol, with about 12 million tons/y produced in 1980. The combined capacity of the other alcohols is about the same, distributed roughly equally. Toxicity With respect to acute toxicity, simple alcohols have low acute toxicities; doses of several milliliters are tolerated. For pentanols, hexanols, octanols, and longer alcohols, LD50 values range from 2–5 g/kg (rats, oral). Ethanol is less acutely toxic. All alcohols are mild skin irritants. Methanol and ethylene glycol are more toxic than other simple alcohols. Their metabolism is affected by the presence of ethanol, which has a higher affinity for liver alcohol dehydrogenase; because ethanol outcompetes it for the enzyme, methanol is largely excreted intact in urine rather than being oxidized to more toxic metabolites. Physical properties In general, the hydroxyl group makes alcohols polar. Those groups can form hydrogen bonds to one another and to most other compounds. Owing to the presence of the polar OH group, alcohols are more water-soluble than simple hydrocarbons. Methanol, ethanol, and propanol are miscible in water. 1-Butanol, with a four-carbon chain, is moderately soluble. Because of hydrogen bonding, alcohols tend to have higher boiling points than comparable hydrocarbons and ethers. The boiling point of the alcohol ethanol is 78.29 °C, compared to 69 °C for the hydrocarbon hexane, and 34.6 °C for diethyl ether. Occurrence in nature Alcohols occur widely in nature, as derivatives of glucose such as cellulose and hemicellulose, and in phenols and their derivatives such as lignin. Starting from biomass, 180 billion tons/y of complex carbohydrates (sugar polymers) are produced commercially (as of 2014). Many other alcohols are pervasive in organisms, as manifested in other sugars such as fructose and sucrose, in polyols such as glycerol, and in some amino acids such as serine. Simple alcohols like methanol, ethanol, and propanol occur in modest quantities in nature, and are industrially synthesized in large quantities for use as chemical precursors, fuels, and solvents. Production Hydroxylation Many alcohols are produced by hydroxylation, i.e., the installation of a hydroxy group using oxygen or a related oxidant. Hydroxylation is the means by which the body processes many poisons, converting lipophilic compounds into hydrophilic derivatives that are more readily excreted. Enzymes called hydroxylases and oxidases facilitate these conversions. Many industrial alcohols, such as cyclohexanol for the production of nylon, are produced by hydroxylation.
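As a toy illustration of the primary/secondary/tertiary classification described above, the small Python function below simply maps the number of carbon substituents on the carbinol carbon to a class label. The names here are our own invention for this sketch, not a cheminformatics API; handling real structures would need a dedicated toolkit.

def classify_alcohol(carbons_on_carbinol_carbon):
    """Classify an alcohol by how many C atoms are bonded to the C bearing the OH group.

    0 -> methanol (usually treated as a special case of primary)
    1 -> primary   (RCH2OH),   e.g. ethanol
    2 -> secondary (RR'CHOH),  e.g. 2-propanol
    3 -> tertiary  (RR'R"COH), e.g. tert-butanol
    """
    labels = {0: "methanol (special case)", 1: "primary", 2: "secondary", 3: "tertiary"}
    if carbons_on_carbinol_carbon not in labels:
        raise ValueError("a saturated carbinol carbon can carry at most 3 carbon substituents")
    return labels[carbons_on_carbinol_carbon]

for name, n in {"methanol": 0, "ethanol": 1, "2-propanol": 2, "tert-butanol": 3}.items():
    print(name, "->", classify_alcohol(n))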
Ziegler and oxo processes In the Ziegler process, linear alcohols are produced from ethylene and triethylaluminium followed by oxidation and hydrolysis. An idealized synthesis of 1-octanol is shown: Al(C2H5)3 + 9 C2H4 -> Al(C8H17)3 Al(C8H17)3 + 3O + 3 H2O -> 3 HOC8H17 + Al(OH)3 The process generates a range of alcohols that are separated by distillation. Many higher alcohols are produced by hydroformylation of alkenes followed by hydrogenation. When applied to a terminal alkene, as is common, one typically obtains a linear alcohol: RCH=CH2 + H2 + CO -> RCH2CH2CHO RCH2CH2CHO + 3 H2 -> RCH2CH2CH2OH Such processes give fatty alcohols, which are useful for detergents. Hydration reactions Some low molecular weight alcohols of industrial importance are produced by the addition of water to alkenes. Ethanol, isopropanol, 2-butanol, and tert-butanol are produced by this general method. Two implementations are employed, the direct and indirect methods. The direct method avoids the formation of stable intermediates, typically using acid catalysts. In the indirect method, the alkene is converted to the sulfate ester, which is subsequently hydrolyzed. The direct hydration uses ethylene (ethylene hydration) or other alkenes from cracking of fractions of distilled crude oil. Hydration is also used industrially to produce the diol ethylene glycol from ethylene oxide. Fermentation Ethanol is obtained by fermentation of glucose (which is often obtained from starch) in the presence of yeast. Carbon dioxide is cogenerated. Like ethanol, butanol can be produced by fermentation processes. Saccharomyces yeast are known to produce these higher alcohols at temperatures above . The bacterium Clostridium acetobutylicum can feed on cellulose (also an alcohol) to produce butanol on an industrial scale. Substitution Primary alkyl halides react with aqueous NaOH or KOH to give alcohols in nucleophilic aliphatic substitution. Secondary and especially tertiary alkyl halides will give the elimination (alkene) product instead. Grignard reagents react with carbonyl groups to give secondary and tertiary alcohols. Related reactions are the Barbier reaction and the Nozaki-Hiyama reaction. Reduction Aldehydes or ketones are reduced with sodium borohydride or lithium aluminium hydride (after an acidic workup). Another reduction using aluminium isopropoxide is the Meerwein-Ponndorf-Verley reduction. Noyori asymmetric hydrogenation is the asymmetric reduction of β-keto-esters. Hydrolysis Alkenes engage in an acid catalyzed hydration reaction using concentrated sulfuric acid as a catalyst that gives usually secondary or tertiary alcohols. Formation of a secondary alcohol via alkene reduction and hydration is shown: The hydroboration-oxidation and oxymercuration-reduction of alkenes are more reliable in organic synthesis. Alkenes react with N-bromosuccinimide and water in halohydrin formation reaction. Amines can be converted to diazonium salts, which are then hydrolyzed. Reactions Deprotonation With aqueous pKa values of around 16–19, alcohols are, in general, slightly weaker acids than water. With strong bases such as sodium hydride or sodium they form salts called alkoxides, with the general formula (where R is an alkyl and M is a metal). 2 R-OH + 2 NaH -> 2 R-O-Na + 2 H2 2 R-OH + 2 Na -> 2 R-O-Na + H2 The acidity of alcohols is strongly affected by solvation. In the gas phase, alcohols are more acidic than in water. In DMSO, alcohols (and water) have a pKa of around 29–32. 
As a consequence, alkoxides (and hydroxide) are powerful bases and nucleophiles (e.g., for the Williamson ether synthesis) in this solvent. In particular, or in DMSO can be used to generate significant equilibrium concentrations of acetylide ions through the deprotonation of alkynes (see Favorskii reaction). Nucleophilic substitution Tertiary alcohols react with hydrochloric acid to produce tertiary alkyl chloride. Primary and secondary alcohols are converted to the corresponding chlorides using thionyl chloride and various phosphorus chloride reagents. Primary and secondary alcohols, likewise, convert to alkyl bromides using phosphorus tribromide, for example: 3 R-OH + PBr3 -> 3 RBr + H3PO3 In the Barton-McCombie deoxygenation an alcohol is deoxygenated to an alkane with tributyltin hydride or a trimethylborane-water complex in a radical substitution reaction. Dehydration Meanwhile, the oxygen atom has lone pairs of nonbonded electrons that render it weakly basic in the presence of strong acids such as sulfuric acid. For example, with methanol: Upon treatment with strong acids, alcohols undergo the E1 elimination reaction to produce alkenes. The reaction, in general, obeys Zaitsev's Rule, which states that the most stable (usually the most substituted) alkene is formed. Tertiary alcohols are eliminated easily at just above room temperature, but primary alcohols require a higher temperature. This is a diagram of acid catalyzed dehydration of ethanol to produce ethylene: A more controlled elimination reaction requires the formation of the xanthate ester. Protonolysis Tertiary alcohols react with strong acids to generate carbocations. The reaction is related to their dehydration, e.g. isobutylene from tert-butyl alcohol. A special kind of dehydration reaction involves triphenylmethanol and especially its amine-substituted derivatives. When treated with acid, these alcohols lose water to give stable carbocations, which are commercial dyes. Esterification Alcohol and carboxylic acids react in the so-called Fischer esterification. The reaction usually requires a catalyst, such as concentrated sulfuric acid: R-OH + R'-CO2H -> R'-CO2R + H2O Other types of ester are prepared in a similar manner−for example, tosyl (tosylate) esters are made by reaction of the alcohol with 4-toluenesulfonyl chloride in pyridine. Oxidation Primary alcohols () can be oxidized either to aldehydes () or to carboxylic acids (). The oxidation of secondary alcohols () normally terminates at the ketone () stage. Tertiary alcohols () are resistant to oxidation. The direct oxidation of primary alcohols to carboxylic acids normally proceeds via the corresponding aldehyde, which is transformed via an aldehyde hydrate () by reaction with water before it can be further oxidized to the carboxylic acid. Reagents useful for the transformation of primary alcohols to aldehydes are normally also suitable for the oxidation of secondary alcohols to ketones. These include Collins reagent and Dess-Martin periodinane. The direct oxidation of primary alcohols to carboxylic acids can be carried out using potassium permanganate or the Jones reagent.
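The pKa values quoted in the Deprotonation section can be made quantitative with the Henderson–Hasselbalch relation: the fraction of an alcohol present as its alkoxide at a given pH is 1 / (1 + 10^(pKa − pH)). The Python sketch below is a back-of-the-envelope illustration using the approximate aqueous pKa of about 16 mentioned above (not a precise constant); the function name is ours.

def fraction_deprotonated(pka, ph):
    """Equilibrium fraction of ROH present as RO-, from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# Ethanol in water, taking pKa ~ 16 (the text quotes roughly 16-19 for alcohols).
for ph in (7.0, 14.0):
    f = fraction_deprotonated(16.0, ph)
    print(f"pH {ph}: fraction present as ethoxide ~ {f:.2e}")
# At pH 7 essentially none of the alcohol is ionized (~1e-9), which is why strong
# bases such as NaH or sodium metal are needed to generate useful alkoxide concentrations.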
Physical sciences
Carbon–oxygen bond
null
1032
https://en.wikipedia.org/wiki/Abscess
Abscess
An abscess is a collection of pus that has built up within the tissue of the body, usually caused by bacterial infection. Signs and symptoms of abscesses include redness, pain, warmth, and swelling. The swelling may feel fluid-filled when pressed. The area of redness often extends beyond the swelling. Carbuncles and boils are types of abscess that often involve hair follicles, with carbuncles being larger. A cyst is related to an abscess, but it contains a material other than pus, and a cyst has a clearly defined wall. Abscesses can also form on internal organs and after surgery. They are usually caused by a bacterial infection. Often many different types of bacteria are involved in a single infection. In many areas of the world, the most common causative bacterium is methicillin-resistant Staphylococcus aureus. Rarely, parasites can cause abscesses; this is more common in the developing world. Diagnosis of a skin abscess is usually made based on its appearance and is confirmed by cutting it open. Ultrasound imaging may be useful in cases in which the diagnosis is not clear. In abscesses around the anus, computed tomography (CT) may be important to look for deeper infection. Standard treatment for most skin or soft tissue abscesses is incision and drainage. There appears to be some benefit from also using antibiotics. A small amount of evidence supports not packing the cavity that remains with gauze after drainage. Closing this cavity right after draining it, rather than leaving it open, may speed healing without increasing the risk of the abscess returning. Sucking out the pus with a needle is often not sufficient. Skin abscesses are common and have become more common in recent years. Risk factors include intravenous drug use, with rates reported as high as 65% among users. In 2005, 3.2 million people went to American emergency departments for abscesses. In Australia, around 13,000 people were hospitalized in 2008 with the condition. Signs and symptoms Abscesses may occur in any kind of tissue but most frequently within the skin surface (where they may be superficial pustules known as boils or deep skin abscesses), in the lungs, brain, teeth, kidneys, and tonsils. Major complications may include spreading of the abscess material to adjacent or remote tissues, and extensive regional tissue death (gangrene). The main symptoms and signs of a skin abscess are redness, heat, swelling, pain, and loss of function. There may also be high temperature (fever) and chills. If superficial, abscesses may be fluctuant when palpated; this wave-like motion is caused by movement of the pus inside the abscess. An internal abscess is more difficult to identify, and its signs depend on the location of the abscess and the type of infection. General signs include pain in the affected area, a high temperature, and generally feeling unwell. Internal abscesses rarely heal on their own, so prompt medical attention is indicated if such an abscess is suspected. An abscess can potentially be fatal depending on where it is located. Causes Risk factors for abscess formation include intravenous drug use. Another possible risk factor is a prior history of disc herniation or other spinal abnormality, though this has not been proven. Abscesses are caused by bacterial infection, parasites, or foreign substances. Bacterial infection is the most common cause, particularly Staphylococcus aureus. The more invasive methicillin-resistant Staphylococcus aureus (MRSA) may also be a source of infection, though it is much rarer.
Among spinal subdural abscesses, methicillin-sensitive Staphylococcus aureus is the most common organism involved. Rarely parasites can cause abscesses and this is more common in the developing world. Specific parasites known to do this include dracunculiasis and myiasis. Anorectal abscess Anorectal abscesses can be caused by non-specific obstruction and ensuing infection of the glandular crypts inside of the anus or rectum. Other causes include cancer, trauma, or inflammatory bowel diseases. Incisional abscess An incisional abscess is one that develops as a complication secondary to a surgical incision. It presents as redness and warmth at the margins of the incision with purulent drainage from it. If the diagnosis is uncertain, the wound should be aspirated with a needle, with aspiration of pus confirming the diagnosis and availing for Gram stain and bacterial culture. Internal abscess Abscesses can form inside the body. The cause can be from trauma, surgery, an infection, or a pre-existing condition. Pathophysiology An abscess is a defensive reaction of the tissue to prevent the spread of infectious materials to other parts of the body. Organisms or foreign materials destroy the local cells, which results in the release of cytokines. The cytokines trigger an inflammatory response, which draws large numbers of white blood cells to the area and increases the regional blood flow. The final structure of the abscess is an abscess wall, or capsule, that is formed by the adjacent healthy cells in an attempt to keep the pus from infecting neighboring structures. However, such encapsulation tends to prevent immune cells from attacking bacteria in the pus, or from reaching the causative organism or foreign object. Diagnosis An abscess is a localized collection of pus (purulent inflammatory tissue) caused by suppuration buried in a tissue, an organ, or a confined space, lined by the pyogenic membrane. Ultrasound imaging can help in a diagnosis. Classification Abscesses may be classified as either skin abscesses or internal abscesses. Skin abscesses are common; internal abscesses tend to be harder to diagnose, and more serious. Skin abscesses are also called cutaneous or subcutaneous abscesses. IV drug use For those with a history of intravenous drug use, an X-ray is recommended before treatment to verify that no needle fragments are present. If there is also a fever present in this population, infectious endocarditis should be considered. Differential Abscesses should be differentiated from empyemas, which are accumulations of pus in a preexisting, rather than a newly formed, anatomical cavity. Other conditions that can cause similar symptoms include: cellulitis, a sebaceous cyst, and necrotising fasciitis. Cellulitis typically also has an erythematous reaction, but does not confer any purulent drainage. Treatment The standard treatment for an uncomplicated skin or soft tissue abscess is the act of opening and draining. There does not appear to be any benefit from also using antibiotics in most cases. A small amount of evidence did not find a benefit from packing the abscess with gauze. Incision and drainage The abscess should be inspected to identify if foreign objects are a cause, which may require their removal. If foreign objects are not the cause, incising and draining the abscess is standard treatment. Antibiotics Most people who have an uncomplicated skin abscess should not use antibiotics. 
Antibiotics in addition to standard incision and drainage is recommended in persons with severe abscesses, many sites of infection, rapid disease progression, the presence of cellulitis, symptoms indicating bacterial illness throughout the body, or a health condition causing immunosuppression. People who are very young or very old may also need antibiotics. If the abscess does not heal only with incision and drainage, or if the abscess is in a place that is difficult to drain such as the face, hands, or genitals, then antibiotics may be indicated. In those cases of abscess which do require antibiotic treatment, Staphylococcus aureus bacteria is a common cause and an anti-staphylococcus antibiotic such as flucloxacillin or dicloxacillin is used. The Infectious Diseases Society of America advises that the draining of an abscess is not enough to address community-acquired methicillin-resistant Staphylococcus aureus (MRSA), and in those cases, traditional antibiotics may be ineffective. Alternative antibiotics effective against community-acquired MRSA often include clindamycin, doxycycline, minocycline, and trimethoprim-sulfamethoxazole. The American College of Emergency Physicians advises that typical cases of abscess from MRSA get no benefit from having antibiotic treatment in addition to the standard treatment. Culturing the wound is not needed if standard follow-up care can be provided after the incision and drainage. Performing a wound culture is unnecessary because it rarely gives information which can be used to guide treatment. Packing In North America, after drainage, an abscess cavity is usually packed, often with special iodoform-treated cloth. This is done to absorb and neutralize any remaining exudate as well as to promote draining and prevent premature closure. Prolonged draining is thought to promote healing. The hypothesis is that though the heart's pumping action can deliver immune and regenerative cells to the edge of an injury, an abscess is by definition a void in which no blood vessels are present. Packing is thought to provide a wicking action that continuously draws beneficial factors and cells from the body into the void that must be healed. Discharge is then absorbed by cutaneous bandages and further wicking promoted by changing these bandages regularly. However, evidence from emergency medicine literature reports that packing wounds after draining, especially smaller wounds, causes pain to the person and does not decrease the rate of recurrence, nor bring faster healing, or fewer physician visits. Loop drainage More recently, several North American hospitals have opted for less-invasive loop drainage over standard drainage and wound packing. In one study of 143 pediatric outcomes, a failure rate of 1.4% was reported in the loop group versus 10.5% in the packing group (P<.030), while a separate study reported a 5.5% failure rate among the loop group. Primary closure Closing an abscess immediately after draining it appears to speed healing without increasing the risk of recurrence. This may not apply to anorectal abscesses as while they may heal faster, there may be a higher rate of recurrence than those left open. Appendiceal abscess Appendiceal abscess are complications of appendicitis where there is an infected mass on the appendix. This condition is estimated to occur in 2–10% of appendicitis cases and is usually treated by surgical removal of the appendix (appendicectomy). 
Prognosis Even without treatment, skin abscesses rarely result in death, as they will naturally break through the skin. Other types of abscess are more dangerous. Brain abscesses may be fatal if untreated. When treated, the mortality rate reduces to 5–10%, but is higher if the abscess ruptures. Epidemiology Skin abscesses are common and have become more common in recent years. Risk factors include intravenous drug use, with rates reported as high as 65% among users. In 2005, in the United States 3.2 million people went to the emergency department for an abscess. In Australia around 13,000 people were hospitalized in 2008 for the disease. Society and culture The Latin medical aphorism "ubi pus, ibi evacua" expresses "where there is pus, there evacuate it" and is classical advice in the culture of Western medicine. Needle exchange programmes often administer or provide referrals for abscess treatment to injection drug users as part of a harm reduction public health strategy. Etymology An abscess is so called "abscess" because there is an abscessus (a going away or departure) of portions of the animal tissue from each other to make room for the suppurated matter lodged between them. The word carbuncle is believed to have originated from the Latin: carbunculus, originally a small coal; diminutive of carbon-, carbo: charcoal or ember, but also a carbuncle stone, "precious stones of a red or fiery colour", usually garnets. Other types The following types of abscess are listed in the medical dictionary:
Biology and health sciences
Specific diseases
Health
1064
https://en.wikipedia.org/wiki/Almond
Almond
The almond (Prunus amygdalus, syn. Prunus dulcis) is a species of tree from the genus Prunus. Along with the peach, it is classified in the subgenus Amygdalus, distinguished from the other subgenera by corrugations on the shell (endocarp) surrounding the seed. The fruit of the almond is a drupe, consisting of an outer hull and a hard shell with the seed, which is not a true nut. Shelling almonds refers to removing the shell to reveal the seed. Almonds are sold shelled or unshelled. Blanched almonds are shelled almonds that have been treated with hot water to soften the seedcoat, which is then removed to reveal the white embryo. Once almonds are cleaned and processed, they can be stored for around a year if kept refrigerated; at higher temperatures they will become rancid more quickly. Almonds are used in many cuisines, often featuring prominently in desserts, such as marzipan. The almond tree prospers in a moderate Mediterranean climate with cool winter weather. It is rarely found wild in its original setting. Almonds were one of the earliest domesticated fruit trees, due to the ability to produce quality offspring entirely from seed, without using suckers and cuttings. Evidence of domesticated almonds in the Early Bronze Age has been found in the archeological sites of the Middle East, and subsequently across the Mediterranean region and similar arid climates with cool winters. California produces about 80% of the world's almond supply. Due to high acreage and water demand for almond cultivation, and need for pesticides, California almond production may be unsustainable, especially during the persistent drought and heat from climate change in the 21st century. Droughts in California have caused some producers to leave the industry, leading to lower supply and increased prices. Description The almond is a deciduous tree growing to in height, with a trunk of up to in diameter. The young twigs are green at first, becoming purplish where exposed to sunlight, then grey in their second year. The leaves are long, with a serrated margin and a petiole. The fragrant flowers are white to pale pink, diameter with five petals, produced singly or in pairs and appearing before the leaves in early spring. Almond trees thrive in Mediterranean climates with warm, dry summers and mild, wet winters. The optimal temperature for their growth is between and the tree buds have a chilling requirement of 200 to 700 hours below to break dormancy. Almonds begin bearing an economic crop in the third year after planting. Trees reach full bearing five to six years after planting. The fruit matures in the autumn, 7–8 months after flowering. The almond fruit is long. It is not a nut but a drupe. The outer covering, consisting of an outer exocarp, or skin, and mesocarp, or flesh, fleshy in other members of Prunus such as the plum and cherry, is instead a thick, leathery, grey-green coat (with a downy exterior), called the hull. Inside the hull is a woody endocarp which forms a reticulated, hard shell (like the outside of a peach pit) called the pyrena. Inside the shell is the edible seed, commonly called a nut. Generally, one seed is present, but occasionally two occur. After the fruit matures, the hull splits and separates from the shell, and an abscission layer forms between the stem and the fruit so that the fruit can fall from the tree. During harvest, mechanised tree shakers are used to expedite fruits falling to the ground for collection. Taxonomy Sweet and bitter almonds The seeds of Prunus dulcis var. 
dulcis are predominantly sweet but some individual trees produce seeds that are somewhat more bitter. The genetic basis for bitterness involves a single gene; the bitter flavour is recessive, and both aspects make this trait easier to domesticate. The fruits from Prunus dulcis var. amara are always bitter, as are the kernels from other species of genus Prunus, such as apricot, peach and cherry (although to a lesser extent). The bitter almond is slightly broader and shorter than the sweet almond and contains about 50% of the fixed oil that occurs in sweet almonds. It also contains the enzyme emulsin which, in the presence of water, acts on the two soluble glucosides amygdalin and prunasin, yielding glucose, cyanide and the essential oil of bitter almonds, which is nearly pure benzaldehyde, the chemical causing the bitter flavour. Bitter almonds may yield 4–9 milligrams of hydrogen cyanide per almond and contain 42 times higher amounts of cyanide than the trace levels found in sweet almonds. The origin of cyanide content in bitter almonds is via the enzymatic hydrolysis of amygdalin. P450 monooxygenases are involved in the amygdalin biosynthetic pathway. A point mutation in a bHLH transcription factor prevents transcription of the two cytochrome P450 genes, resulting in the sweet kernel trait. Etymology The word almond is a loanword from Old French, descended from Late Latin, modified from Classical Latin, which is in turn borrowed from Ancient Greek (cf. amygdala, an almond-shaped portion of the brain). Late Old English had amygdales 'almonds'. The adjective amygdaloid (literally 'like an almond, almond-like') is used to describe objects which are roughly almond-shaped, particularly a shape which is part way between a triangle and an ellipse. For example, the amygdala of the brain is a direct borrowing of the Greek term. Origin and distribution The precise origin of the almond is controversial due to estimates for its emergence across wide geographic regions. Sources indicate that its origins were in Central Asia between Iran, Turkmenistan, Tajikistan, Kurdistan, Afghanistan, and Iraq, or in an eastern Asian subregion between Mongolia and Uzbekistan. In other assessments, both botanical and archaeological evidence indicates that almonds originated and were first cultivated in West Asia, particularly in countries of the Levant. Other estimates specified Iran and Anatolia (present-day Turkey) as origin locations of the almond, with botanical evidence for Iran as the main origin centre. The wild form of domesticated almond also grew in parts of the Levant. Almond cultivation was spread by humans centuries ago along the shores of the Mediterranean Sea into northern Africa and southern Europe, and more recently to other world regions, notably California. Selection of the sweet type from the many bitter types in the wild marked the beginning of almond domestication. The wild ancestor of the almond used to breed the domesticated species is unknown. The species Prunus fenzliana may be the most likely wild ancestor of the almond, in part because it is native to Armenia and western Azerbaijan, where it was apparently domesticated. Wild almond species were grown by early farmers, "at first unintentionally in the garbage heaps, and later intentionally in their orchards". Cultivation Almonds were one of the earliest domesticated fruit trees, due to "the ability of the grower to raise attractive almonds from seed. 
Thus, in spite of the fact that this plant does not lend itself to propagation from suckers or from cuttings, it could have been domesticated even before the introduction of grafting". Domesticated almonds appear in the Early Bronze Age (3000–2000 BC), such as the archaeological sites of Numeira (Jordan), or possibly earlier. Another well-known archaeological example of the almond is the fruit found in Tutankhamun's tomb in Egypt (c. 1325 BC), probably imported from the Levant. An article on almond tree cultivation in Spain is included in Ibn al-'Awwam's 12th-century agricultural work, Book on Agriculture. Of the European countries that the Royal Botanic Garden Edinburgh reported as cultivating almonds, Germany is the northernmost, though the domesticated form can be found as far north as Iceland. Varieties Almond trees are small to medium-sized, but commercial cultivars can be grafted onto a different rootstock to produce smaller trees. Varieties include: – originates in the 1800s. A large tree that produces large, smooth, thin-shelled almonds with 60–65% edible kernel per nut. Requires pollination from other almond varieties for good nut production. – originates in Italy. Has thicker, hairier shells with only 32% of edible kernel per nut. The thicker shell gives some protection from pests such as the navel orangeworm. Does not require pollination by other almond varieties. Mariana – used as a rootstock to produce smaller trees Breeding Breeding programmes have identified a high shell-seal trait. Pollination The most widely planted varieties of almond are self-incompatible; hence these trees require pollen from a tree with different genetic characters to produce seeds. Almond orchards therefore must grow mixtures of almond varieties. In addition, the pollen is transferred from flower to flower by insects; therefore commercial growers must ensure there are enough insects to perform this task. The large scale of almond production in the U.S. creates a significant problem of providing enough pollinating insects. Additional pollinating insects are therefore brought to the trees. The pollination of California's almonds is the largest annual managed pollination event in the world, with over 1 million hives (nearly half of all beehives in the US) being brought to the almond orchards each February. Much of the supply of bees is managed by pollination brokers, who contract with migratory beekeepers from at least 49 states for the event. This business was heavily affected by colony collapse disorder at the turn of the 21st century, causing a nationwide shortage of honey bees and increasing the price of insect pollination. To partially protect almond growers from these costs, researchers at the Agricultural Research Service, part of the United States Department of Agriculture (USDA), developed self-pollinating almond trees that combine this character with quality characters such as flavour and yield. Self-pollinating almond varieties exist, but they lack some commercial characters. However, through natural hybridisation between different almond varieties, a new variety that was self-pollinating with a high yield of commercial-quality nuts was produced. Diseases Almond trees can be attacked by an array of damaging microbes, fungal pathogens, plant viruses, and bacteria. Pests Pavement ants (Tetramorium caespitum), southern fire ants (Solenopsis xyloni), and thief ants (Solenopsis molesta) are seed predators. Bryobia rubrioculus mites are best known for their damage to this crop. 
Sustainability Almond production in California is concentrated mainly in the Central Valley, where the mild climate, rich soil, abundant sunshine and water supply make for ideal growing conditions. Due to the persistent droughts in California in the early 21st century, it became more difficult to raise almonds in a sustainable manner. The issue is complex because of the high amount of water needed to produce almonds: a single almond requires roughly of water to grow properly. Regulations related to water supplies are changing, so some growers have destroyed their current almond orchards and replaced them with either younger trees or a different crop, such as pistachio, that needs less water. Sustainability strategies implemented by the Almond Board of California and almond farmers include:
tree and soil health, and other farming practices
minimizing dust production during the harvest
bee health
irrigation guidelines for farmers
food safety
use of waste biomass as coproducts with a goal to achieve zero waste
use of solar energy during processing
job development
support of scientific research to investigate potential health benefits of consuming almonds
international education about sustainability practices
Production In 2022, world production of almonds was 3.6 million tonnes, led by the United States (table). Secondary producers were Australia and Spain. United States In the United States, production is concentrated in California, where six different almond varieties were under cultivation in 2017, with a yield of shelled almonds. California production is marked by a period of intense pollination during late winter by rented commercial bees transported by truck across the U.S. to almond groves, requiring more than half of the total U.S. commercial honeybee population. The value of total U.S. exports of shelled almonds in 2016 was $3.2 billion. All commercially grown almonds sold as food in the U.S. are sweet cultivars. The U.S. Food and Drug Administration reported in 2010 that some fractions of imported sweet almonds were contaminated with bitter almonds, which contain cyanide. Australia Australia is the largest almond production region in the Southern Hemisphere. Most of the almond orchards are located along the Murray River corridor in New South Wales, Victoria, and South Australia. Spain Spain has diverse commercial cultivars of almonds grown in Catalonia, Valencia, Murcia, Andalusia, and Aragón regions, and the Balearic Islands. Production in 2016 declined 2% nationally compared to 2015 production data. The almond cultivar 'Marcona' is recognisably different from other almonds and is marketed by name. The kernel is short, round, relatively sweet, and delicate in texture. Its origin is unknown, and it has been grown in Spain for a long time; the tree is very productive, and the shell of the nut is hard. Toxicity Bitter almonds contain 42 times higher amounts of cyanide than the trace levels found in sweet almonds. Extract of bitter almond was once used medicinally, but even in small doses, effects are severe or lethal, especially in children; the cyanide must be removed before consumption. The acute oral lethal dose of cyanide for adult humans is reported to be of body weight (approximately 50 bitter almonds), so that for children, consuming 5–10 bitter almonds may be fatal. Symptoms of eating such almonds include vertigo and other typical cyanide poisoning effects. Almonds may cause allergy or intolerance. 
Cross-reactivity is common with peach allergens (lipid transfer proteins) and tree nut allergens. Symptoms range from local signs and symptoms (e.g., oral allergy syndrome, contact urticaria) to systemic signs and symptoms including anaphylaxis (e.g., urticaria, angioedema, gastrointestinal and respiratory symptoms). Almonds are susceptible to aflatoxin-producing moulds. Aflatoxins are potent carcinogenic chemicals produced by moulds such as Aspergillus flavus and Aspergillus parasiticus. The mould contamination may occur from soil, previously infested almonds, and almond pests such as navel orangeworm. High levels of mould growth typically appear as grey to black filament-like growth. It is unsafe to eat mould-infected tree nuts. Some countries have strict limits on allowable levels of aflatoxin contamination of almonds and require adequate testing before the nuts can be marketed to their citizens. The European Union, for example, introduced a requirement in 2007 that all almond shipments to the EU be tested for aflatoxin. If the aflatoxin level does not meet the strict safety regulations, the entire consignment may be reprocessed to eliminate the aflatoxin, or it must be destroyed. Breeding programmes have identified a high shell-seal trait, which provides resistance against these Aspergillus species and so against the development of their toxins. Mandatory pasteurisation in California After tracing cases of salmonellosis to almonds, the USDA approved a proposal by the Almond Board of California to pasteurise almonds sold to the public. After publishing the rule in March 2007, the almond pasteurisation program became mandatory for California companies effective 1 September 2007. Raw, untreated California almonds have not been available in the U.S. since then. California almonds labeled "raw" must be steam-pasteurised or chemically treated with propylene oxide (PPO). This does not apply to imported almonds or almonds sold from the grower directly to the consumer in small quantities. The treatment also is not required for raw almonds sold for export outside of North America. The Almond Board of California states: "PPO residue dissipates after treatment". The U.S. Environmental Protection Agency has reported: "Propylene oxide has been detected in fumigated food products; consumption of contaminated food is another possible route of exposure". PPO is classified as Group 2B ("possibly carcinogenic to humans"). The USDA-approved marketing order was challenged in court by organic farmers organised by the Cornucopia Institute, a Wisconsin-based farm policy research group which filed a lawsuit in September 2008. According to the institute, this almond marketing order has imposed significant financial burdens on small-scale and organic growers and damaged domestic almond markets. A federal judge dismissed the lawsuit in early 2009 on procedural grounds. In August 2010, a federal appeals court ruled that the farmers have a right to appeal the USDA regulation. In March 2013, the court vacated the suit on the basis that the objections should have been raised in 2007 when the regulation was first proposed. Uses Nutrition Almonds are 4% water, 22% carbohydrates, 21% protein, and 50% fat. In a reference amount, almonds supply of food energy. The almond is a nutritionally dense food, providing a rich source (20% or more of the Daily Value, DV) of the B vitamins riboflavin and niacin, vitamin E, and the essential minerals calcium, copper, iron, magnesium, manganese, phosphorus, and zinc. 
Almonds are a moderate source (10–19% DV) of the B vitamins thiamine, vitamin B6, and folate, choline, and the essential mineral potassium. They also contain substantial dietary fibre, the monounsaturated fat, oleic acid, and the polyunsaturated fat, linoleic acid. Typical of nuts and seeds, almonds are a source of phytosterols such as beta-sitosterol, stigmasterol, campesterol, sitostanol, and campestanol. Health Almonds are included as a good source of protein among recommended healthy foods by the U.S. Department of Agriculture (USDA). A 2016 review of clinical research indicated that regular consumption of almonds may reduce the risk of heart disease by lowering blood levels of LDL cholesterol. Culinary While the almond is often eaten on its own, raw or toasted, it is also a component of various dishes. Almonds are available in many forms, such as whole, slivered, and ground into flour. Almond pieces around in size, called "nibs", are used for special purposes such as decoration. Almonds are a common addition to breakfast muesli or oatmeal. Colomba di Pasqua is the Easter counterpart of the two well-known Italian Christmas desserts, panettone and pandoro Desserts A wide range of classic sweets feature almonds as a central ingredient. Marzipan was developed in the Middle Ages. Since the 19th century almonds have been used to make bread, almond butter, cakes and puddings, candied confections, almond cream-filled pastries, nougat, cookies (macaroons, biscotti and qurabiya), and cakes (financiers, Esterházy torte), and other sweets and desserts. The young, developing fruit of the almond tree can be eaten whole (green almonds) when they are still green and fleshy on the outside and the inner shell has not yet hardened. The fruit is somewhat sour, but is a popular snack in parts of the Middle East, eaten dipped in salt to balance the sour taste. Also in the Middle East they are often eaten with dates. They are available only from mid-April to mid-June in the Northern Hemisphere; pickling or brining extends the fruit's shelf life. Marzipan Marzipan, a smooth, sweetened almond paste, is used in a number of elegant cakes and desserts. Princess cake is covered by marzipan (similar to fondant), as is Battenberg cake. In Sicily, sponge cake is covered with marzipan to make cassatella di sant'Agata and cassata siciliana, and marzipan is dyed and crafted into realistic fruit shapes to make frutta martorana. The Andalusian Christmas pastry pan de Cádiz is filled with marzipan and candied fruit. World cuisines In French cuisine, alternating layers of almond and hazelnut meringue are used to make the dessert dacquoise. Pithivier is one of many almond cream-filled pastries. In Germany, Easter bread called Deutsches Osterbrot is baked with raisins and almonds. In Greece almond flour is used to make amygdalopita, a glyka tapsiou dessert cake baking in a tray. Almonds are used for kourabiedes, a Greek version of the traditional quarabiya almond biscuits. A soft drink known as soumada is made from almonds in various regions. In Saudi Arabia, almonds are a typical embellishment for the rice dish kabsa. In Iran, green almonds are dipped in sea salt and eaten as snacks on street markets; they are called chaqale bâdam. Candied almonds called noghl are served alongside tea and coffee. Also, sweet almonds are used to prepare special food for babies, named harire badam. Almonds are added to some foods, cookies, and desserts, or are used to decorate foods. 
People in Iran consume roasted nuts for special events, for example, during New Year (Nowruz) parties. In Italy, colomba di Pasqua is a traditional Easter cake made with almonds. Bitter almonds are the base for amaretti cookies, a common dessert. Almonds are also a common choice as the nuts to include in torrone. In Morocco, almonds in the form of sweet almond paste are the main ingredient in pastry fillings, and several other desserts. Fried blanched whole almonds are also used to decorate sweet tajines such as lamb with prunes. Southwestern Berber regions of Essaouira and Souss are also known for amlou, a spread made of almond paste, argan oil, and honey. Almond paste is also mixed with toasted flour and among others, honey, olive oil or butter, anise, fennel, sesame seeds, and cinnamon to make sellou (also called zamita in Meknes or slilou in Marrakech), a sweet snack known for its long shelf life and high nutritive value. In Indian cuisine, almonds are the base ingredients of pasanda-style and Mughlai curries. Badam halva is a sweet made from almonds with added colouring. Almond flakes are added to many sweets (such as sohan barfi), and are usually visible sticking to the outer surface. Almonds form the base of various drinks which are supposed to have cooling properties. Almond sherbet or sherbet-e-badaam, is a common summer drink. Almonds are also sold as a snack with added salt. In Israel almonds are used as a topping for tahini cookies or eaten as a snack. In Spain Marcona almonds are usually toasted in oil and lightly salted. They are used by Spanish confectioners to prepare a sweet called turrón. In Arabian cuisine, almonds are commonly used as garnishing for Mansaf. In British cuisine, almonds are used for dessert items such as Bakewell tart and Battenberg cake. Milk Almonds can be processed into a milk substitute called almond milk; the nut's soft texture, mild flavour, and light colouring (when skinned) make for an efficient analog to dairy, and a soy-free choice for lactose intolerant people and vegans. Raw, blanched, and lightly toasted almonds work well for different production techniques, some of which are similar to that of soy milk and some of which use no heat, resulting in raw milk. Almond milk, along with almond butter and almond oil, are versatile products used in both sweet and savoury dishes. In Moroccan cuisine, sharbat billooz, a common beverage, is made by blending blanched almonds with milk, sugar and other flavourings. Flour and skins Almond flour or ground almond meal combined with sugar or honey as marzipan is often used as a gluten-free alternative to wheat flour in cooking and baking. Almonds contain polyphenols in their skins consisting of flavonols, flavan-3-ols, hydroxybenzoic acids and flavanones analogous to those of certain fruits and vegetables. These phenolic compounds and almond skin prebiotic dietary fibre have commercial interest as food additives or dietary supplements. Syrup Historically, almond syrup was an emulsion of sweet and bitter almonds, usually made with barley syrup (orgeat syrup) or in a syrup of orange flower water and sugar, often flavoured with a synthetic aroma of almonds. Orgeat syrup is an important ingredient in the Mai Tai and many other Tiki drinks. Due to the cyanide found in bitter almonds, modern syrups generally are produced only from sweet almonds. Such syrup products do not contain significant levels of hydrocyanic acid, so are generally considered safe for human consumption. 
Oils Almonds are a rich source of oil, with 50% of kernel dry mass as fat (whole almond nutrition table). In relation to total dry mass of the kernel, almond oil contains 32% monounsaturated oleic acid (an omega-9 fatty acid), 13% linoleic acid (a polyunsaturated omega-6 essential fatty acid), and 10% saturated fatty acid (mainly as palmitic acid). Linolenic acid, a polyunsaturated omega-3 fat, is not present (table). Almond oil is a rich source of vitamin E, providing 261% of the Daily Value per 100 millilitres. When almond oil is analyzed separately and expressed per 100 grams as a reference mass, the oil provides of food energy, 8 grams of saturated fat (81% of which is palmitic acid), 70 grams of oleic acid, and 17 grams of linoleic acid (oil table). Oleum amygdalae, the fixed oil, is prepared from either sweet or bitter almonds, and is a glyceryl oleate with a slight odour and a nutty taste. It is almost insoluble in alcohol but readily soluble in chloroform or ether. Almond oil is obtained from the dried kernel of almonds. Sweet almond oil is used as a carrier oil in aromatherapy and cosmetics while bitter almond oil, containing benzaldehyde, is used as a food flavouring and in perfume. In culture The almond is highly revered in some cultures. The tree originated in the Middle East. In the Bible, the almond is mentioned ten times, beginning with Genesis 43:11, where it is described as "among the best of fruits". In Numbers 17, Levi is chosen from the other tribes of Israel by Aaron's rod, which brought forth almond flowers. The almond blossom supplied a model for the menorah which stood in the Holy Temple, "Three cups, shaped like almond blossoms, were on one branch, with a knob and a flower; and three cups, shaped like almond blossoms, were on the other … on the candlestick itself were four cups, shaped like almond blossoms, with its knobs and flowers" (Exodus 25:33–34; 37:19–20). Many Sephardic Jews give five almonds to each guest before special occasions like weddings. Similarly, Christian symbolism often uses almond branches as a symbol of the virgin birth of Jesus; paintings and icons often include almond-shaped haloes encircling the Christ Child and as a symbol of Mary. The word "luz", which appears in Genesis 30:37, sometimes translated as "hazel", may actually be derived from the Aramaic name for almond (Luz), and is translated as such in the New International Version and other versions of the Bible. The Arabic name for almond is لوز "lauz" or "lūz". In some parts of the Levant and North Africa, it is pronounced "loz", which is very close to its Aramaic origin. The Entrance of the flower (La entrada de la flor) is an event celebrated on 1 February in Torrent, Spain, in which the clavarios and members of the Confrerie of the Mother of God deliver a branch of the first-blooming almond-tree to the Virgin.
Biology and health sciences
Rosales
null
1134
https://en.wikipedia.org/wiki/Analysis
Analysis
Analysis (plural: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 B.C.), though analysis as a formal concept is a relatively recent development. The word comes from the Ancient Greek analysis ("a breaking-up" or "an untying"; from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses. As a formal concept, the method has variously been ascribed to René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name). The converse of analysis is synthesis: putting the pieces back together again in a new or different whole. Science and technology Chemistry The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device. Types of Analysis A) Qualitative analysis: determines which components are present in a given sample or compound; an example is a precipitation reaction. B) Quantitative analysis: determines the quantity of an individual component present in a given sample or compound; an example is finding a concentration with a UV spectrophotometer. Isotopes Chemists can use isotope analysis to assist analysts with issues in anthropology, archeology, food chemistry, forensics, geology, and a host of other questions of physical science. Analysts can discern the origins of natural and man-made isotopes in the study of environmental radioactivity. Computer science Requirements analysis – encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users. 
Competitive analysis (online algorithm) – shows how online algorithms perform and demonstrates the power of randomization in algorithms Lexical analysis – the process of processing an input sequence of characters and producing as output a sequence of symbols Object-oriented analysis and design – à la Booch Program analysis (computer science) – the process of automatically analysing the behavior of computer programs Semantic analysis (computer science) – a pass by a compiler that adds semantic information to the parse tree and performs certain checks Static code analysis – the analysis of computer software that is performed without actually executing programs built from that software Structured systems analysis and design methodology – à la Yourdon Syntax analysis – a process in compilers that recognizes the structure of programming languages, also known as parsing Worst-case execution time – determines the longest time that a piece of software can take to run Engineering Analysts in the field of engineering look at requirements, structures, mechanisms, systems and dimensions. Electrical engineers analyse systems in electronics. Life cycles and system failures are broken down and studied by engineers. Engineering analysis also considers the different factors incorporated within a design. Mathematics Modern mathematical analysis is the study of infinite processes. It is the branch of mathematics that includes calculus. It can be applied in the study of classical concepts of mathematics, such as real numbers, complex variables, trigonometric functions, and algorithms, or of non-classical concepts like constructivism, harmonics, infinity, and vectors. Florian Cajori explains in A History of Mathematics (1893) the difference between modern and ancient mathematical analysis, as distinct from logical analysis, as follows: The terms synthesis and analysis are used in mathematics in a more special sense than in logic. In ancient mathematics they had a different meaning from what they now have. The oldest definition of mathematical analysis as opposed to synthesis is that given in [appended to] Euclid, XIII. 5, which in all probability was framed by Eudoxus: "Analysis is the obtaining of the thing sought by assuming it and so reasoning up to an admitted truth; synthesis is the obtaining of the thing sought by reasoning up to the inference and proof of it." The analytic method is not conclusive, unless all operations involved in it are known to be reversible. To remove all doubt, the Greeks, as a rule, added to the analytic process a synthetic one, consisting of a reversion of all operations occurring in the analysis. Thus the aim of analysis was to aid in the discovery of synthetic proofs or solutions. James Gow uses a similar argument as Cajori, with the following clarification, in his A Short History of Greek Mathematics (1884): The synthetic proof proceeds by shewing that the proposed new truth involves certain admitted truths. An analytic proof begins by an assumption, upon which a synthetic reasoning is founded. The Greeks distinguished theoretic from problematic analysis. A theoretic analysis is of the following kind. To prove that A is B, assume first that A is B. If so, then, since B is C and C is D and D is E, therefore A is E. If this be known a falsity, A is not B. But if this be a known truth and all the intermediate propositions be convertible, then the reverse process, A is E, E is D, D is C, C is B, therefore A is B, constitutes a synthetic proof of the original theorem. 
Problematic analysis is applied in all cases where it is proposed to construct a figure which is assumed to satisfy a given condition. The problem is then converted into some theorem which is involved in the condition and which is proved synthetically, and the steps of this synthetic proof taken backwards are a synthetic solution of the problem. Psychotherapy Psychoanalysis – seeks to elucidate connections among unconscious components of patients' mental processes Transactional analysis Transactional analysis is used by therapists to try to gain a better understanding of the unconscious. It focuses on understanding and intervening human behavior. Signal processing Finite element analysis – a computer simulation technique used in engineering analysis Independent component analysis Link quality analysis – the analysis of signal quality Path quality analysis Fourier analysis Statistics In statistics, the term analysis may refer to any method used for data analysis. Among the many such methods, some are: Analysis of variance (ANOVA) – a collection of statistical models and their associated procedures which compare means by splitting the overall observed variance into different parts Boolean analysis – a method to find deterministic dependencies between variables in a sample, mostly used in exploratory data analysis Cluster analysis – techniques for finding groups (called clusters), based on some measure of proximity or similarity Factor analysis – a method to construct models describing a data set of observed variables in terms of a smaller set of unobserved variables (called factors) Meta-analysis – combines the results of several studies that address a set of related research hypotheses Multivariate analysis – analysis of data involving several variables, such as by factor analysis, regression analysis, or principal component analysis Principal component analysis – transformation of a sample of correlated variables into uncorrelated variables (called principal components), mostly used in exploratory data analysis Regression analysis – techniques for analysing the relationships between several predictive variables and one or more outcomes in the data Scale analysis (statistics) – methods to analyse survey data by scoring responses on a numeric scale Sensitivity analysis – the study of how the variation in the output of a model depends on variations in the inputs Sequential analysis – evaluation of sampled data as it is collected, until the criterion of a stopping rule is met Spatial analysis – the study of entities using geometric or geographic properties Time-series analysis – methods that attempt to understand a sequence of data points spaced apart at uniform time intervals Business Financial statement analysis – the analysis of the accounts and the economic prospects of a firm Financial analysis – refers to an assessment of the viability, stability, and profitability of a business, sub-business or project Gap analysis – involves the comparison of actual performance with potential or desired performance of an organization Business analysis – involves identifying the needs and determining the solutions to business problems Price analysis – involves the breakdown of a price to a unit figure Market analysis – consists of suppliers and customers, and price is determined by the interaction of supply and demand Sum-of-the-parts analysis – method of valuation of a multi-divisional company Opportunity analysis – consists of customers trends within the industry, customer demand and experience 
determine purchasing behavior Economics Agroecosystem analysis Input–output model if applied to a region, is called Regional Impact Multiplier System Government Intelligence The field of intelligence employs analysts to break down and understand a wide array of questions. Intelligence agencies may use heuristics, inductive and deductive reasoning, social network analysis, dynamic network analysis, link analysis, and brainstorming to sort through problems they face. Military intelligence may explore issues through the use of game theory, Red Teaming, and wargaming. Signals intelligence applies cryptanalysis and frequency analysis to break codes and ciphers. Business intelligence applies theories of competitive intelligence analysis and competitor analysis to resolve questions in the marketplace. Law enforcement intelligence applies a number of theories in crime analysis. Policy Policy analysis – The use of statistical data to predict the effects of policy decisions made by governments and agencies Policy analysis includes a systematic process to find the most efficient and effective option to address the current situation. Qualitative analysis – The use of anecdotal evidence to predict the effects of policy decisions or, more generally, influence policy decisions Humanities and social sciences Linguistics Linguistics explores individual languages and language in general. It breaks language down and analyses its component parts: theory, sounds and their meaning, utterance usage, word origins, the history of words, the meaning of words and word combinations, sentence construction, basic construction beyond the sentence level, stylistics, and conversation. It examines the above using statistics and modeling, and semantics. It analyses language in context of anthropology, biology, evolution, geography, history, neurology, psychology, and sociology. It also takes the applied approach, looking at individual language development and clinical issues. Literature Literary criticism is the analysis of literature. The focus can be as diverse as the analysis of Homer or Freud. While not all literary-critical methods are primarily analytical in nature, the main approach to the teaching of literature in the west since the mid-twentieth century, literary formal analysis or close reading, is. This method, rooted in the academic movement labelled The New Criticism, approaches texts – chiefly short poems such as sonnets, which by virtue of their small size and significant complexity lend themselves well to this type of analysis – as units of discourse that can be understood in themselves, without reference to biographical or historical frameworks. This method of analysis breaks up the text linguistically in a study of prosody (the formal analysis of meter) and phonic effects such as alliteration and rhyme, and cognitively in examination of the interplay of syntactic structures, figurative language, and other elements of the poem that work to produce its larger effects. Music Musical analysis – a process attempting to answer the question "How does this music work?" Musical Analysis is a study of how the composers use the notes together to compose music. Those studying music will find differences with each composer's musical analysis, which differs depending on the culture and history of music studied. An analysis of music is meant to simplify the music for you. Schenkerian analysis Schenkerian analysis is a collection of music analysis that focuses on the production of the graphic representation. 
This includes both the analytical procedure and the notational style. Simply put, it analyzes tonal music, which includes all chords and tones within a composition. Philosophy Philosophical analysis – a general term for the techniques used by philosophers Philosophical analysis refers to the clarification of how words are put together and the meaning entailed by them. Philosophical analysis dives deeper into the meaning of words and seeks to clarify that meaning by contrasting the various definitions. It is the study of reality, justification of claims, and the analysis of various concepts. Branches of philosophy include logic, justification, metaphysics, values and ethics. If a question can be answered empirically, meaning it can be answered by using the senses, then it is not considered philosophical. Non-philosophical questions also include events that happened in the past, or questions science or mathematics can answer. Analysis is the name of a prominent journal in philosophy. Other Aura analysis – a pseudoscientific technique in which supporters of the method claim that the body's aura, or energy field, is analysed Bowling analysis – analysis of the performance of cricket players Lithic analysis – the analysis of stone tools using basic scientific techniques Lithic analysis is most often used by archeologists to determine which types of tools were used in a given time period, based on the artifacts discovered. Protocol analysis – a means for extracting persons' thoughts while they are performing a task
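As a concrete illustration of one of the statistical methods listed earlier in this article, the following is a minimal sketch of a one-way analysis of variance (ANOVA) in Python with SciPy; the group data are synthetic and purely illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Three hypothetical groups of measurements (synthetic data only).
group_a = rng.normal(10.0, 2.0, size=30)
group_b = rng.normal(11.5, 2.0, size=30)
group_c = rng.normal(10.2, 2.0, size=30)

# One-way ANOVA compares the group means by splitting the overall observed
# variance into between-group and within-group parts.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print("F =", round(f_stat, 2), " p =", round(p_value, 4))

A small p-value would indicate that at least one group mean differs from the others, which is the kind of question the method is used to answer.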
Physical sciences
Science basics
Basics and measurement
1140
https://en.wikipedia.org/wiki/Amplitude%20modulation
Amplitude modulation
Amplitude modulation (AM) is a modulation technique used in electronic communication, most commonly for transmitting messages with a radio wave. In amplitude modulation, the amplitude (signal strength) of the wave is varied in proportion to that of the message signal, such as an audio signal. This technique contrasts with angle modulation, in which either the frequency of the carrier wave is varied, as in frequency modulation, or its phase, as in phase modulation. AM was the earliest modulation method used for transmitting audio in radio broadcasting. It was developed during the first quarter of the 20th century beginning with Roberto Landell de Moura and Reginald Fessenden's radiotelephone experiments in 1900. This original form of AM is sometimes called double-sideband amplitude modulation (DSBAM), because the standard method produces sidebands on either side of the carrier frequency. Single-sideband modulation uses bandpass filters to eliminate one of the sidebands and possibly the carrier signal, which improves the ratio of message power to total transmission power, reduces power handling requirements of line repeaters, and permits better bandwidth utilization of the transmission medium. AM remains in use in many forms of communication in addition to AM broadcasting: shortwave radio, amateur radio, two-way radios, VHF aircraft radio, citizens band radio, and in computer modems in the form of quadrature amplitude modulation (QAM). Foundation In electronics, telecommunications and mechanics, modulation means varying some aspect of a continuous wave carrier signal with an information-bearing modulation waveform, such as an audio signal which represents sound, or a video signal which represents images. In this sense, the carrier wave, which has a much higher frequency than the message signal, carries the information. At the receiving station, the message signal is extracted from the modulated carrier by demodulation. In general form, a modulation process of a sinusoidal carrier wave may be described by the expression A(t) cos(ωt + φ(t)). Here A(t) represents the time-varying amplitude of the sinusoidal carrier wave, the cosine term is the carrier at its angular frequency ω, and φ(t) is the instantaneous phase deviation. This description directly provides the two major groups of modulation, amplitude modulation and angle modulation. In angle modulation, the term A(t) is constant and the second term of the equation has a functional relationship to the modulating message signal. Angle modulation provides two methods of modulation, frequency modulation and phase modulation. In amplitude modulation, the angle term is held constant and the first term, A(t), of the equation has a functional relationship to the modulating message signal. The modulating message signal may be analog in nature, or it may be a digital signal, in which case the technique is generally called amplitude-shift keying. For example, in AM radio communication, a continuous wave radio-frequency signal has its amplitude modulated by an audio waveform before transmission. The message signal determines the envelope of the transmitted waveform. In the frequency domain, amplitude modulation produces a signal with power concentrated at the carrier frequency and two adjacent sidebands. Each sideband is equal in bandwidth to that of the modulating signal, and is a mirror image of the other. Standard AM is thus sometimes called "double-sideband amplitude modulation" (DSBAM). 
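To make the relationship between message, carrier and envelope concrete, the following is a minimal numerical sketch of standard double-sideband AM in Python with NumPy; the sample rate, frequencies and modulation depth are arbitrary illustrative values, not figures taken from this article.

import numpy as np

fs = 100_000                      # sample rate in Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal

fc = 10_000                       # carrier frequency in Hz (illustrative)
fm = 500                          # message (audio) frequency in Hz (illustrative)
A = 1.0                           # unmodulated carrier amplitude
m = 0.5                           # modulation depth (50%)

carrier = np.cos(2 * np.pi * fc * t)
message = m * np.cos(2 * np.pi * fm * t)

# Standard AM: the carrier amplitude is varied in proportion to the message,
# so the envelope of the transmitted waveform follows the message signal.
am_signal = A * (1 + message) * carrier
envelope = A * (1 + message)

Plotting am_signal against t would show the rapid carrier oscillation contained within the slowly varying envelope, which is the behaviour described above.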
A disadvantage of all amplitude modulation techniques, not only standard AM, is that the receiver amplifies and detects noise and electromagnetic interference in equal proportion to the signal. Increasing the received signal-to-noise ratio, say, by a factor of 10 (a 10 decibel improvement), thus would require increasing the transmitter power by a factor of 10. This is in contrast to frequency modulation (FM) and digital radio where the effect of such noise following demodulation is strongly reduced so long as the received signal is well above the threshold for reception. For this reason AM broadcast is not favored for music and high fidelity broadcasting, but rather for voice communications and broadcasts (sports, news, talk radio etc.). AM is also inefficient in power usage; at least two-thirds of the power is concentrated in the carrier signal. The carrier signal contains none of the original information being transmitted (voice, video, data, etc.). However its presence provides a simple means of demodulation using envelope detection, providing a frequency and phase reference to extract the modulation from the sidebands. In some modulation systems based on AM, a lower transmitter power is required through partial or total elimination of the carrier component, however receivers for these signals are more complex because they must provide a precise carrier frequency reference signal (usually as shifted to the intermediate frequency) from a greatly reduced "pilot" carrier (in reduced-carrier transmission or DSB-RC) to use in the demodulation process. Even with the carrier eliminated in double-sideband suppressed-carrier transmission, carrier regeneration is possible using a Costas phase-locked loop. This does not work for single-sideband suppressed-carrier transmission (SSB-SC), leading to the characteristic "Donald Duck" sound from such receivers when slightly detuned. Single-sideband AM is nevertheless used widely in amateur radio and other voice communications because it has power and bandwidth efficiency (cutting the RF bandwidth in half compared to standard AM). On the other hand, in medium wave and short wave broadcasting, standard AM with the full carrier allows for reception using inexpensive receivers. The broadcaster absorbs the extra power cost to greatly increase potential audience. Shift keying A simple form of digital amplitude modulation which can be used for transmitting binary data is on–off keying, the simplest form of amplitude-shift keying, in which ones and zeros are represented by the presence or absence of a carrier. On–off keying is likewise used by radio amateurs to transmit Morse code where it is known as continuous wave (CW) operation, even though the transmission is not strictly "continuous". A more complex form of AM, quadrature amplitude modulation is now more commonly used with digital data, while making more efficient use of the available bandwidth. Analog telephony A simple form of amplitude modulation is the transmission of speech signals from a traditional analog telephone set using a common battery local loop. The direct current provided by the central office battery is a carrier with a frequency of 0 Hz. It is modulated by a microphone (transmitter) in the telephone set according to the acoustic signal from the speaker. The result is a varying amplitude direct current, whose AC-component is the speech signal extracted at the central office for transmission to another subscriber. 
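Returning to the on–off keying scheme described above, it can be sketched numerically as follows; the bit pattern, bit rate and carrier frequency are arbitrary values chosen only for illustration.

import numpy as np

fs = 48_000                     # sample rate in Hz (illustrative)
fc = 2_000                      # carrier frequency in Hz (illustrative)
bit_rate = 100                  # bits per second (illustrative)
bits = [1, 0, 1, 1, 0, 0, 1]    # the binary message

samples_per_bit = fs // bit_rate
gate = np.repeat(bits, samples_per_bit).astype(float)   # hold each bit for one symbol period

t = np.arange(gate.size) / fs
carrier = np.cos(2 * np.pi * fc * t)

# On-off keying: the carrier is present for a 1 and absent for a 0.
ook_signal = gate * carrier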
Amplitude reference An additional function provided by the carrier in standard AM, but which is lost in either single or double-sideband suppressed-carrier transmission, is that it provides an amplitude reference. In the receiver, the automatic gain control (AGC) responds to the carrier so that the reproduced audio level stays in a fixed proportion to the original modulation. On the other hand, with suppressed-carrier transmissions there is no transmitted power during pauses in the modulation, so the AGC must respond to peaks of the transmitted power during peaks in the modulation. This typically involves a so-called fast attack, slow decay circuit which holds the AGC level for a second or more following such peaks, in between syllables or short pauses in the program. This is very acceptable for communications radios, where compression of the audio aids intelligibility. However it is absolutely undesired for music or normal broadcast programming, where a faithful reproduction of the original program, including its varying modulation levels, is expected. ITU type designations In 1982, the International Telecommunication Union (ITU) designated the types of amplitude modulation: History Amplitude modulation was used in experiments of multiplex telegraph and telephone transmission in the late 1800s. However, the practical development of this technology is identified with the period between 1900 and 1920 of radiotelephone transmission, that is, the effort to send audio signals by radio waves. The first radio transmitters, called spark gap transmitters, transmitted information by wireless telegraphy, using pulses of the carrier wave to spell out text messages in Morse code. They could not transmit audio because the carrier consisted of strings of damped waves, pulses of radio waves that declined to zero, and sounded like a buzz in receivers. In effect they were already amplitude modulated. Continuous waves The first AM transmission was made by Canadian-born American researcher Reginald Fessenden on December 23, 1900 using a spark gap transmitter with a specially designed high frequency 10 kHz interrupter, over a distance of one mile (1.6 km) at Cobb Island, Maryland, US. His first transmitted words were, "Hello. One, two, three, four. Is it snowing where you are, Mr. Thiessen?". Though his words were "perfectly intelligible", the spark created a loud and unpleasant noise. Fessenden was a significant figure in the development of AM radio. He was one of the first researchers to realize, from experiments like the above, that the existing technology for producing radio waves, the spark transmitter, was not usable for amplitude modulation, and that a new kind of transmitter, one that produced sinusoidal continuous waves, was needed. This was a radical idea at the time, because experts believed the impulsive spark was necessary to produce radio frequency waves, and Fessenden was ridiculed. He invented and helped develop one of the first continuous wave transmitters – the Alexanderson alternator, with which he made what is considered the first AM public entertainment broadcast on Christmas Eve, 1906. He also discovered the principle on which AM is based, heterodyning, and invented one of the first detectors able to rectify and receive AM, the electrolytic detector or "liquid baretter", in 1902. 
Other radio detectors invented for wireless telegraphy, such as the Fleming valve (1904) and the crystal detector (1906), also proved able to rectify AM signals, so the technological hurdle was generating AM waves; receiving them was not a problem. Early technologies Early experiments in AM radio transmission, conducted by Fessenden, Valdemar Poulsen, Ernst Ruhmer, Quirino Majorana, Charles Herrold, and Lee de Forest, were hampered by the lack of a technology for amplification. The first practical continuous wave AM transmitters were based on either the huge, expensive Alexanderson alternator, developed 1906–1910, or versions of the Poulsen arc transmitter (arc converter), invented in 1903. The modifications necessary to transmit AM were clumsy and resulted in very low quality audio. Modulation was usually accomplished by a carbon microphone inserted directly in the antenna or ground wire; its varying resistance varied the current to the antenna. The limited power handling ability of the microphone severely limited the power of the first radiotelephones; many of the microphones were water-cooled. Vacuum tubes The 1912 discovery of the amplifying ability of the Audion tube, invented in 1906 by Lee de Forest, solved these problems. The vacuum tube feedback oscillator, invented in 1912 by Edwin Armstrong and Alexander Meissner, was a cheap source of continuous waves and could be easily modulated to make an AM transmitter. Modulation did not have to be done at the output but could be applied to the signal before the final amplifier tube, so the microphone or other audio source didn't have to modulate a high-power radio signal. Wartime research greatly advanced the art of AM modulation, and after the war the availability of cheap tubes sparked a great increase in the number of radio stations experimenting with AM transmission of news or music. The vacuum tube was responsible for the rise of AM broadcasting around 1920, the first electronic mass communication medium. Amplitude modulation was virtually the only type used for radio broadcasting until FM broadcasting began after World War II. At the same time as AM radio began, telephone companies such as AT&T were developing the other large application for AM: sending multiple telephone calls through a single wire by modulating them on separate carrier frequencies, called frequency division multiplexing. Single-sideband In 1915, John Renshaw Carson formulated the first mathematical description of amplitude modulation, showing that a signal and carrier frequency combined in a nonlinear device create a sideband on both sides of the carrier frequency. Passing the modulated signal through another nonlinear device can extract the original baseband signal. His analysis also showed that only one sideband was necessary to transmit the audio signal, and Carson patented single-sideband modulation (SSB) on 1 December 1915. This advanced variant of amplitude modulation was adopted by AT&T for longwave transatlantic telephone service beginning 7 January 1927. After WW-II, it was developed for military aircraft communication. Analysis The carrier wave (sine wave) of frequency fc and amplitude A is expressed by c(t) = A sin(2π fc t). The message signal, such as an audio signal that is used for modulating the carrier, is m(t), and has a frequency fm, much lower than fc: m(t) = M cos(2π fm t + φ) = A m cos(2π fm t + φ), where m = M/A is the amplitude sensitivity and M is the amplitude of the modulation. If m < 1, (1 + m(t)/A) is always positive for undermodulation. 
If m > 1 then overmodulation occurs and reconstruction of the message signal from the transmitted signal would lead to loss of the original signal. Amplitude modulation results when the carrier c(t) is multiplied by the positive quantity (1 + m(t)/A): y(t) = [1 + m(t)/A] c(t) = A [1 + m cos(2π fm t + φ)] sin(2π fc t). In this simple case m is identical to the modulation index, discussed below. With m = 0.5 the amplitude modulated signal y(t) thus corresponds to the top graph (labelled "50% Modulation") in figure 4. Using prosthaphaeresis identities, y(t) can be shown to be the sum of three sine waves: y(t) = A sin(2π fc t) + (A m / 2) [sin(2π (fc + fm) t + φ) + sin(2π (fc − fm) t − φ)]. Therefore, the modulated signal has three components: the carrier wave c(t) which is unchanged in frequency, and two sidebands with frequencies slightly above and below the carrier frequency fc. Spectrum A useful modulation signal m(t) is usually more complex than a single sine wave, as treated above. However, by the principle of Fourier decomposition, m(t) can be expressed as the sum of a set of sine waves of various frequencies, amplitudes, and phases. Carrying out the multiplication of 1 + m(t) with c(t) as above, the result consists of a sum of sine waves. Again, the carrier c(t) is present unchanged, but each frequency component of m at fi has two sidebands at frequencies fc + fi and fc – fi. The collection of the former frequencies above the carrier frequency is known as the upper sideband, and those below constitute the lower sideband. The modulation m(t) may be considered to consist of an equal mix of positive and negative frequency components, as shown in the top of figure 2. One can view the sidebands as that modulation m(t) having simply been shifted in frequency by fc as depicted at the bottom right of figure 2. For the short-term spectrum of modulation, changing as it would for a human voice for instance, the frequency content (horizontal axis) may be plotted as a function of time (vertical axis), as in figure 3. It can again be seen that as the modulation frequency content varies, an upper sideband is generated according to those frequencies shifted above the carrier frequency, and the same content mirror-imaged in the lower sideband below the carrier frequency. At all times, the carrier itself remains constant, and of greater power than the total sideband power. Power and spectrum efficiency The RF bandwidth of an AM transmission (refer to figure 2, but only considering positive frequencies) is twice the bandwidth of the modulating (or "baseband") signal, since the upper and lower sidebands around the carrier frequency each have a bandwidth as wide as the highest modulating frequency. Although the bandwidth of an AM signal is narrower than one using frequency modulation (FM), it is twice as wide as single-sideband techniques; it thus may be viewed as spectrally inefficient. Within a frequency band, only half as many transmissions (or "channels") can thus be accommodated. For this reason analog television employs a variant of single-sideband (known as vestigial sideband, somewhat of a compromise in terms of bandwidth) in order to reduce the required channel spacing. Another improvement over standard AM is obtained through reduction or suppression of the carrier component of the modulated spectrum. In figure 2 this is the spike in between the sidebands; even with full (100%) sine wave modulation, the power in the carrier component is twice that in the sidebands, yet it carries no unique information. 
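The three spectral components and the power split just described can be checked numerically; this sketch, in Python with NumPy, uses 100% sine-wave modulation and arbitrary frequencies chosen so that they fall on exact FFT bins.

import numpy as np

fs = 100_000
t = np.arange(0, 0.1, 1 / fs)                 # 0.1 s of signal -> 10 Hz frequency resolution
fc, fm, A, m = 10_000, 500, 1.0, 1.0          # 100% sine-wave modulation (illustrative values)

y = A * (1 + m * np.cos(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)

# The only significant components are at fc - fm, fc and fc + fm.
for f in (fc - fm, fc, fc + fm):
    k = np.argmin(np.abs(freqs - f))
    print(f, "Hz  amplitude ~", round(2 * spectrum[k], 2))

# Average power: A^2/2 in the carrier, (A*m/2)^2/2 in each sideband.
carrier_power = A**2 / 2
sideband_power = 2 * ((A * m / 2) ** 2) / 2
print("carrier fraction of total power:", carrier_power / (carrier_power + sideband_power))

With m = 1 the printed fraction is 2/3, matching the statement that the carrier holds twice the power of the two sidebands combined.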
Thus there is a great advantage in efficiency in reducing or totally suppressing the carrier, either in conjunction with elimination of one sideband (single-sideband suppressed-carrier transmission) or with both sidebands remaining (double sideband suppressed carrier). While these suppressed carrier transmissions are efficient in terms of transmitter power, they require more sophisticated receivers employing synchronous detection and regeneration of the carrier frequency. For that reason, standard AM continues to be widely used, especially in broadcast transmission, to allow for the use of inexpensive receivers using envelope detection. Even (analog) television, with a (largely) suppressed lower sideband, includes sufficient carrier power for use of envelope detection. But for communications systems where both transmitters and receivers can be optimized, suppression of both one sideband and the carrier represents a net advantage and is frequently employed. A technique used widely in broadcast AM transmitters is an application of the Hapburg carrier, first proposed in the 1930s but impractical with the technology then available. During periods of low modulation the carrier power would be reduced and would return to full power during periods of high modulation levels. This has the effect of reducing the overall power demand of the transmitter and is most effective on speech-type programmes. Various trade names are used for its implementation by the transmitter manufacturers from the late 1980s onwards. Modulation index The AM modulation index is a measure based on the ratio of the modulation excursions of the RF signal to the level of the unmodulated carrier. It is thus defined as: m = M/A, where M and A are the modulation amplitude and carrier amplitude, respectively; the modulation amplitude is the peak (positive or negative) change in the RF amplitude from its unmodulated value. Modulation index is normally expressed as a percentage, and may be displayed on a meter connected to an AM transmitter. So if m = 0.5, carrier amplitude varies by 50% above (and below) its unmodulated level, as is shown in the first waveform, below. For m = 1.0, it varies by 100% as shown in the illustration below it. With 100% modulation the wave amplitude sometimes reaches zero, and this represents full modulation using standard AM and is often a target (in order to obtain the highest possible signal-to-noise ratio) but must not be exceeded. Increasing the modulating signal beyond that point, known as overmodulation, causes a standard AM modulator (see below) to fail, as the negative excursions of the wave envelope cannot become less than zero, resulting in distortion ("clipping") of the received modulation. Transmitters typically incorporate a limiter circuit to avoid overmodulation, and/or a compressor circuit (especially for voice communications) in order to still approach 100% modulation for maximum intelligibility above the noise. Such circuits are sometimes referred to as a vogad. However it is possible to talk about a modulation index exceeding 100%, without introducing distortion, in the case of double-sideband reduced-carrier transmission. In that case, negative excursions beyond zero entail a reversal of the carrier phase, as shown in the third waveform below. This cannot be produced using the efficient high-level (output stage) modulation techniques (see below) which are widely used especially in high power broadcast transmitters. Rather, a special modulator produces such a waveform at a low level followed by a linear amplifier. 
What's more, a standard AM receiver using an envelope detector is incapable of properly demodulating such a signal. Rather, synchronous detection is required. Thus double-sideband transmission is generally not referred to as "AM" even though it generates an identical RF waveform as standard AM as long as the modulation index is below 100%. Such systems more often attempt a radical reduction of the carrier level compared to the sidebands (where the useful information is present) to the point of double-sideband suppressed-carrier transmission where the carrier is (ideally) reduced to zero. In all such cases the term "modulation index" loses its value as it refers to the ratio of the modulation amplitude to a rather small (or zero) remaining carrier amplitude. Modulation methods Modulation circuit designs may be classified as low- or high-level (depending on whether they modulate in a low-power domain—followed by amplification for transmission—or in the high-power domain of the transmitted signal). Low-level generation In modern radio systems, modulated signals are generated via digital signal processing (DSP). With DSP many types of AM are possible with software control (including DSB with carrier, SSB suppressed-carrier and independent sideband, or ISB). Calculated digital samples are converted to voltages with a digital-to-analog converter, typically at a frequency less than the desired RF-output frequency. The analog signal must then be shifted in frequency and linearly amplified to the desired frequency and power level (linear amplification must be used to prevent modulation distortion). This low-level method for AM is used in many Amateur Radio transceivers. AM may also be generated at a low level, using analog methods described in the next section. High-level generation High-power AM transmitters (such as those used for AM broadcasting) are based on high-efficiency class-D and class-E power amplifier stages, modulated by varying the supply voltage. Older designs (for broadcast and amateur radio) also generate AM by controlling the gain of the transmitter's final amplifier (generally class-C, for efficiency). The following types are for vacuum tube transmitters (but similar options are available with transistors): Plate modulation In plate modulation, the plate voltage of the RF amplifier is modulated with the audio signal. The audio power requirement is 50 percent of the RF-carrier power. Heising (constant-current) modulation RF amplifier plate voltage is fed through a choke (high-value inductor). The AM modulation tube plate is fed through the same inductor, so the modulator tube diverts current from the RF amplifier. The choke acts as a constant current source in the audio range. This system has a low power efficiency. Control grid modulation The operating bias and gain of the final RF amplifier can be controlled by varying the voltage of the control grid. This method requires little audio power, but care must be taken to reduce distortion. Clamp tube (screen grid) modulation The screen-grid bias may be controlled through a clamp tube, which reduces voltage according to the modulation signal. It is difficult to approach 100-percent modulation while maintaining low distortion with this system. Doherty modulation One tube provides the power under carrier conditions and another operates only for positive modulation peaks. Overall efficiency is good, and distortion is low. Outphasing modulation Two tubes are operated in parallel, but partially out of phase with each other. 
As they are differentially phase modulated their combined amplitude is greater or smaller. Efficiency is good and distortion low when properly adjusted. Pulse-width modulation (PWM) or pulse-duration modulation (PDM) A highly efficient high voltage power supply is applied to the tube plate. The output voltage of this supply is varied at an audio rate to follow the program. This system was pioneered by Hilmer Swanson and has a number of variations, all of which achieve high efficiency and sound quality. Digital methods The Harris Corporation obtained a patent for synthesizing a modulated high-power carrier wave from a set of digitally selected low-power amplifiers, running in phase at the same carrier frequency. The input signal is sampled by a conventional audio analog-to-digital converter (ADC), and fed to a digital exciter, which modulates overall transmitter output power by switching a series of low-power solid-state RF amplifiers on and off. The combined output drives the antenna system. Demodulation methods The simplest form of AM demodulator consists of a diode which is configured to act as envelope detector. Another type of demodulator, the product detector, can provide better-quality demodulation with additional circuit complexity.
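As a rough illustration of the two demodulators just mentioned, the following Python sketch approximates a diode envelope detector by rectifying and low-pass filtering an AM signal, and a product (synchronous) detector by multiplying the signal with a locally regenerated carrier before the same low-pass filter. The sample rate, filter length, and cutoff are arbitrary choices for the example, not values from the article.

```python
import numpy as np

f_s, f_c, f_m = 200_000.0, 10_000.0, 500.0        # sample, carrier, and audio frequencies (illustrative)
t = np.arange(0.0, 0.02, 1.0 / f_s)
audio = np.sin(2.0 * np.pi * f_m * t)
am = (1.0 + 0.5 * audio) * np.cos(2.0 * np.pi * f_c * t)   # standard AM with modulation index 0.5

def lowpass(x, taps=401, cutoff=2_000.0):
    """Crude windowed-sinc low-pass filter; enough to keep the audio band and reject the carrier."""
    n = np.arange(taps) - (taps - 1) / 2.0
    h = np.sinc(2.0 * cutoff / f_s * n) * np.hamming(taps)
    return np.convolve(x, h / h.sum(), mode="same")

# Idealized diode envelope detector: half-wave rectification followed by smoothing.
envelope_audio = lowpass(np.maximum(am, 0.0))

# Product (synchronous) detector: multiply by a regenerated carrier, then the same smoothing.
product_audio = lowpass(am * np.cos(2.0 * np.pi * f_c * t))

# Both outputs track the original 500 Hz tone up to a DC offset and a constant scale factor.
```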
Technology
Telecommunications
null
1144
https://en.wikipedia.org/wiki/Ardipithecus
Ardipithecus
Ardipithecus is a genus of an extinct hominine that lived during the Late Miocene and Early Pliocene epochs in the Afar Depression, Ethiopia. Originally described as one of the earliest ancestors of humans after they diverged from the chimpanzees, the relation of this genus to human ancestors and whether it is a hominin is now a matter of debate. Two fossil species are described in the literature: A. ramidus, which lived about 4.4 million years ago during the early Pliocene, and A. kadabba, dated to approximately 5.6 million years ago (late Miocene). Initial behavioral analysis indicated that Ardipithecus could be very similar to chimpanzees; however, more recent analysis based on canine size and lack of canine sexual dimorphism indicates that Ardipithecus was characterised by reduced aggression, and that they more closely resemble bonobos. Some analyses describe Australopithecus as being sister to Ardipithecus ramidus specifically. This means that Australopithecus is distinctly more closely related to Ardipithecus ramidus than Ardipithecus kadabba. Cladistically, then, Australopithecus (and eventually Homo sapiens) indeed emerged within the Ardipithecus lineage, and this lineage is not literally extinct. Ardipithecus ramidus A. ramidus was named in September 1994. The first fossil found was dated to 4.4 million years ago on the basis of its stratigraphic position between two volcanic strata: the basal Gaala Tuff Complex (G.A.T.C.) and the Daam Aatu Basaltic Tuff (D.A.B.T.). The name Ardipithecus ramidus stems mostly from the Afar language, in which Ardi means "ground/floor" and ramid means "root". The pithecus portion of the name is from the Greek word for "ape". Like most hominids, but unlike all previously recognized hominins, it had a grasping hallux or big toe adapted for locomotion in the trees. It is not confirmed how many other features of its skeleton reflect adaptation to bipedalism on the ground as well. Like later hominins, Ardipithecus had reduced canine teeth and reduced canine sexual dimorphism. In 1992–1993 a research team headed by Tim White discovered the first A. ramidus fossils—seventeen fragments including skull, mandible, teeth and arm bones—from the Afar Depression in the Middle Awash river valley of Ethiopia. More fragments were recovered in 1994, amounting to 45% of the total skeleton. This fossil was originally described as a species of Australopithecus, but White and his colleagues later published a note in the same journal renaming the fossil under a new genus, Ardipithecus. Between 1999 and 2003, a multidisciplinary team led by Sileshi Semaw discovered bones and teeth of nine A. ramidus individuals at As Duma in the Gona area of Ethiopia's Afar Region. The fossils were dated to between 4.35 and 4.45 million years old. Ardipithecus ramidus had a small brain, measuring between 300 and 350 cm3. This is slightly smaller than a modern bonobo or female chimpanzee brain, but much smaller than the brain of australopithecines like Lucy (~400 to 550 cm3) and roughly 20% the size of the modern Homo sapiens brain. Like common chimpanzees, A. ramidus was much more prognathic than modern humans. The teeth of A. ramidus lacked the specialization of other apes, and suggest that it was a generalized omnivore and frugivore (fruit eater) with a diet that did not depend heavily on foliage, fibrous plant material (roots, tubers, etc.), or hard and or abrasive food. The size of the upper canine tooth in A. ramidus males was not distinctly different from that of females. 
Their upper canines were less sharp than those of modern common chimpanzees in part because of this decreased upper canine size, as larger upper canines can be honed through wear against teeth in the lower mouth. The features of the upper canine in A. ramidus contrast with the sexual dimorphism observed in common chimpanzees, where males have significantly larger and sharper upper canine teeth than females. Of the living apes, bonobos have the smallest canine sexual dimorphism, although still greater than that displayed by A. ramidus. The less pronounced nature of the upper canine teeth in A. ramidus has been used to infer aspects of the social behavior of the species and more ancestral hominids. In particular, it has been used to suggest that the last common ancestor of hominids and African apes was characterized by relatively little aggression between males and between groups. This is markedly different from social patterns in common chimpanzees, among which intermale and intergroup aggression are typically high. Researchers in a 2009 study said that this condition "compromises the living chimpanzee as a behavioral model for the ancestral hominid condition." Bonobo canine size and canine sexual dimorphism more closely resembles that of A. ramidus, and as a result, bonobos are now suggested as a behavioural model. A. ramidus existed more recently than the most recent common ancestor of humans and chimpanzees (CLCA or Pan-Homo LCA) and thus is not fully representative of that common ancestor. Nevertheless, it is in some ways unlike chimpanzees, suggesting that the common ancestor differs from the modern chimpanzee. After the chimpanzee and human lineages diverged, both underwent substantial evolutionary change. Chimp feet are specialized for grasping trees; A. ramidus feet are better suited for walking. The canine teeth of A. ramidus are smaller, and equal in size between males and females, which suggests reduced male-to-male conflict, increased pair-bonding, and increased parental investment. "Thus, fundamental reproductive and social behavioral changes probably occurred in hominids long before they had enlarged brains and began to use stone tools," the research team concluded. Ardi On October 1, 2009, paleontologists formally announced the discovery of the relatively complete A. ramidus fossil skeleton first unearthed in 1994. The fossil is the remains of a small-brained female, nicknamed "Ardi", and includes most of the skull and teeth, as well as the pelvis, hands, and feet. It was discovered in Ethiopia's harsh Afar desert at a site called Aramis in the Middle Awash region. Radiometric dating of the layers of volcanic ash encasing the deposits suggest that Ardi lived about 4.3 to 4.5 million years ago. This date, however, has been questioned by others. Fleagle and Kappelman suggest that the region in which Ardi was found is difficult to date radiometrically, and they argue that Ardi should be dated at 3.9 million years. The fossil is regarded by its describers as shedding light on a stage of human evolution about which little was known, more than a million years before Lucy (Australopithecus afarensis), the iconic early human ancestor candidate who lived 3.2 million years ago, and was discovered in 1974 just away from Ardi's discovery site. However, because the "Ardi" skeleton is no more than 200,000 years older than the earliest fossils of Australopithecus, and may in fact be younger than they are, some researchers doubt that it can represent a direct ancestor of Australopithecus. 
Some researchers infer from the form of her pelvis and limbs and the presence of her abductable hallux, that "Ardi" was a facultative biped: bipedal when moving on the ground, but quadrupedal when moving about in tree branches. A. ramidus had a more primitive walking ability than later hominids, and could not walk or run for long distances. The teeth suggest omnivory, and are more generalised than those of modern apes. Ardipithecus kadabba Ardipithecus kadabba is "known only from teeth and bits and pieces of skeletal bones", and is dated to approximately 5.6 million years ago. It has been described as a "probable chronospecies" (i.e. ancestor) of A. ramidus. Although originally considered a subspecies of A. ramidus, in 2004 anthropologists Yohannes Haile-Selassie, Gen Suwa, and Tim D. White published an article elevating A. kadabba to species level on the basis of newly discovered teeth from Ethiopia. These teeth show "primitive morphology and wear pattern" which demonstrate that A. kadabba is a distinct species from A. ramidus. The specific name comes from the Afar word for "basal family ancestor". Classification Due to several shared characteristics with chimpanzees, its closeness to ape divergence period, and due to its fossil incompleteness, the exact position of Ardipithecus in the fossil record is a subject of controversy. Primatologist Esteban Sarmiento had systematically compared and concluded that there is not sufficient anatomical evidence to support an exclusively human lineage. Sarmiento noted that Ardipithecus does not share any characteristics exclusive to humans, and some of its characteristics (those in the wrist and basicranium) suggest it diverged from humans prior to the human–gorilla last common ancestor. His comparative (narrow allometry) study in 2011 on the molar and body segment lengths (which included living primates of similar body size) noted that some dimensions including short upper limbs, and metacarpals are reminiscent of humans, but other dimensions such as long toes and relative molar surface area are great ape-like. Sarmiento concluded that such length measures can change back and forth during evolution and are not very good indicators of relatedness (homoplasy). However, some later studies still argue for its classification in the human lineage. In 2014, it was reported that the hand bones of Ardipithecus, Australopithecus sediba and A. afarensis have the third metacarpal styloid process, which is absent in other apes. Unique brain organisations (such as lateral shift of the carotid foramina, mediolateral abbreviation of the lateral tympanic, and a shortened, trapezoidal basioccipital element) in Ardipithecus are also found only in the Australopithecus and Homo. Comparison of the tooth root morphology with those of the earlier Sahelanthropus also indicated strong resemblance, also pointing to inclusion to the human line. Evolutionary tree according to a 2019 study: Paleobiology The Ardipithecus length measures are good indicators of function and together with dental isotope data and the fauna and flora from the fossil site indicate Ardipithecus was mainly a terrestrial quadruped collecting a large portion of its food on the ground. Its arboreal behaviors would have been limited and suspension from branches solely from the upper limbs rare. 
A comparative study in 2013 on carbon and oxygen stable isotopes within modern and fossil tooth enamel revealed that Ardipithecus fed both arboreally (on trees) and on the ground in a more open habitat, unlike chimpanzees. In 2015, Australian anthropologists Gary Clark and Maciej Henneberg said that Ardipithecus adults have a facial anatomy more similar to chimpanzee subadults than adults, with a less-projecting face and smaller canines (large canines in primate males are used to compete within mating hierarchies), and attributed this to a decrease in craniofacial growth in favour of brain growth. This is only seen in humans, so they argued that the species may show the first trend towards human social, parenting and sexual psychology. Previously, it was assumed that such ancient human ancestors behaved much like chimps, but this is no longer considered to be a viable comparison. This view has yet to be corroborated by more detailed studies of the growth of A. ramidus. The study also provides support for Stephen Jay Gould's theory in Ontogeny and Phylogeny that the paedomorphic (childlike) form of early hominin craniofacial morphology results from dissociation of growth trajectories. Clark and Henneberg also argued that such shortening of the skull—which may have caused a descension of the larynx—as well as lordosis—allowing better movement of the larynx—increased vocal ability, significantly pushing back the origin of language to well before the evolution of Homo. They argued that self domestication was aided by the development of vocalization, living in a pro-social society. They conceded that chimps and A. ramidus likely had the same vocal capabilities, but said that A. ramidus made use of more complex vocalizations, and vocalized at the same level as a human infant due to selective pressure to become more social. This would have allowed their society to become more complex. They also noted that the base of the skull stopped growing with the brain by the end of juvenility, whereas in chimps it continues growing with the rest of the body into adulthood; and considered this evidence of a switch from a gross skeletal anatomy trajectory to a neurological development trajectory due to selective pressure for sociability. Nonetheless, their conclusions are highly speculative. According to Scott Simpson, the Gona Project's physical anthropologist, the fossil evidence from the Middle Awash indicates that both A. kadabba and A. ramidus lived in "a mosaic of woodland and grasslands with lakes, swamps and springs nearby," but further research is needed to determine which habitat Ardipithecus at Gona preferred.
Biology and health sciences
Australopithecines
Biology
1146
https://en.wikipedia.org/wiki/Assembly%20line
Assembly line
An assembly line, often called progressive assembly, is a manufacturing process where the unfinished product moves in a direct line from workstation to workstation, with parts added in sequence until the final product is completed. By mechanically moving parts to workstations and transferring the unfinished product from one workstation to another, a finished product can be assembled faster and with less labor than having workers carry parts to a stationary product. Assembly lines are common methods of assembling complex items such as automobiles and other transportation equipment, household appliances and electronic goods. Workers in charge of the works of assembly line are called assemblers. Concepts Assembly lines are designed for the sequential organization of workers, tools or machines, and parts. The motion of workers is minimized to the extent possible. All parts or assemblies are handled either by conveyors or motorized vehicles such as forklifts, or gravity, with no manual trucking. Heavy lifting is done by machines such as overhead cranes or forklifts. Each worker typically performs one simple operation unless job rotation strategies are applied. According to Henry Ford: Designing assembly lines is a well-established mathematical challenge, referred to as an assembly line balancing problem. In the simple assembly line balancing problem the aim is to assign a set of tasks that need to be performed on the workpiece to a sequence of workstations. Each task requires a given task duration for completion. The assignment of tasks to stations is typically limited by two constraints: (1) a precedence graph which indicates what other tasks need to be completed before a particular task can be initiated (e.g. not putting in a screw before drilling the hole) and (2) a cycle time which restricts the sum of task processing times which can be completed at each workstation before the work-piece is moved to the next station by the conveyor belt. Major planning problems for operating assembly lines include supply chain integration, inventory control and production scheduling. Simple example Consider the assembly of a car: assume that certain steps in the assembly line are to install the engine, install the hood, and install the wheels (in that order, with arbitrary interstitial steps); only one of these steps can be done at a time. In traditional production, only one car would be assembled at a time. If engine installation takes 20 minutes, hood installation takes five minutes, and wheels installation takes 10 minutes, then a car can be produced every 35 minutes. In an assembly line, car assembly is split between several stations, all working simultaneously. When a station is finished with a car, it passes it on to the next. By having three stations, three cars can be operated on at the same time, each at a different stage of assembly. After finishing its work on the first car, the engine installation crew can begin working on the second car. While the engine installation crew works on the second car, the first car can be moved to the hood station and fitted with a hood, then to the wheels station and be fitted with wheels. After the engine has been installed on the second car, the second car moves to the hood assembly. At the same time, the third car moves to the engine assembly. When the third car's engine has been mounted, it then can be moved to the hood station; meanwhile, subsequent cars (if any) can be moved to the engine installation station. 
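The three-station example just described (its throughput conclusion follows in the next paragraph) can be checked with a small simulation. The sketch below uses the standard flow-line recurrence in which a car enters a station as soon as it has finished upstream work and the previous car has released that station; like the example, it ignores transfer time. The function name is ours; the station times are the ones from the example.

```python
def flow_line_completion_times(stage_minutes, n_cars):
    """Completion time of each car at the last station of a simple flow line.
    Recurrence: done[i][j] = max(car i finished upstream, car i-1 released station j) + stage time."""
    n_stages = len(stage_minutes)
    done = [[0] * n_stages for _ in range(n_cars)]
    for i in range(n_cars):
        for j in range(n_stages):
            ready = done[i][j - 1] if j > 0 else 0    # car i has finished the previous station
            free = done[i - 1][j] if i > 0 else 0     # station j released by the previous car
            done[i][j] = max(ready, free) + stage_minutes[j]
    return [row[-1] for row in done]

times = flow_line_completion_times([20, 5, 10], n_cars=4)   # engine, hood, wheels (minutes)
print(times)                                      # [35, 55, 75, 95]
print([b - a for a, b in zip(times, times[1:])])  # [20, 20, 20]: the 20-minute bottleneck stage
```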
Assuming no loss of time when moving a car from one station to another, the longest stage on the assembly line determines the throughput (20 minutes for the engine installation) so a car can be produced every 20 minutes, once the first car taking 35 minutes has been produced. History Before the Industrial Revolution, most manufactured products were made individually by hand. A single craftsman or team of craftsmen would create each part of a product. They would use their skills and tools such as files and knives to create the individual parts. They would then assemble them into the final product, making cut-and-try changes in the parts until they fit and could work together (craft production). Division of labor was practiced by Ancient Greeks, Chinese and other ancient civilizations. In Ancient Greece it was discussed by Plato and Xenophon. Adam Smith discussed the division of labour in the manufacture of pins at length in his book The Wealth of Nations (published in 1776). The Venetian Arsenal, dating to about 1104, operated similar to a production line. Ships moved down a canal and were fitted by the various shops they passed. At the peak of its efficiency in the early 16th century, the Arsenal employed some 16,000 people who could apparently produce nearly one ship each day and could fit out, arm, and provision a newly built galley with standardized parts on an assembly-line basis. Although the Arsenal lasted until the early Industrial Revolution, production line methods did not become common even then. Industrial Revolution The Industrial Revolution led to a proliferation of manufacturing and invention. Many industries, notably textiles, firearms, clocks and watches, horse-drawn vehicles, railway locomotives, sewing machines, and bicycles, saw expeditious improvement in materials handling, machining, and assembly during the 19th century, although modern concepts such as industrial engineering and logistics had not yet been named. The automatic flour mill built by Oliver Evans in 1785 was called the beginning of modern bulk material handling by Roe (1916). Evans's mill used a leather belt bucket elevator, screw conveyors, canvas belt conveyors, and other mechanical devices to completely automate the process of making flour. The innovation spread to other mills and breweries. Probably the earliest industrial example of a linear and continuous assembly process is the Portsmouth Block Mills, built between 1801 and 1803. Marc Isambard Brunel (father of Isambard Kingdom Brunel), with the help of Henry Maudslay and others, designed 22 types of machine tools to make the parts for the rigging blocks used by the Royal Navy. This factory was so successful that it remained in use until the 1960s, with the workshop still visible at HM Dockyard in Portsmouth, and still containing some of the original machinery. One of the earliest examples of an almost modern factory layout, designed for easy material handling, was the Bridgewater Foundry. The factory grounds were bordered by the Bridgewater Canal and the Liverpool and Manchester Railway. The buildings were arranged in a line with a railway for carrying the work going through the buildings. Cranes were used for lifting the heavy work, which sometimes weighed in the tens of tons. The work passed sequentially through to erection of framework and final assembly. The first flow assembly line was initiated at the factory of Richard Garrett & Sons, Leiston Works in Leiston in the English county of Suffolk for the manufacture of portable steam engines. 
The assembly line area was called 'The Long Shop' on account of its length and was fully operational by early 1853. The boiler was brought up from the foundry and put at the start of the line, and as it progressed through the building it would stop at various stages where new parts would be added. From the upper level, where other parts were made, the lighter parts would be lowered over a balcony and then fixed onto the machine on the ground level. When the machine reached the end of the shop, it would be completed. Interchangeable parts During the early 19th century, the development of machine tools such as the screw-cutting lathe, metal planer, and milling machine, and of toolpath control via jigs and fixtures, provided the prerequisites for the modern assembly line by making interchangeable parts a practical reality. Late 19th-century steam and electric conveyors Steam-powered conveyor lifts began being used for loading and unloading ships some time in the last quarter of the 19th century. Hounshell (1984) shows a sketch of an electric-powered conveyor moving cans through a filling line in a canning factory. The meatpacking industry of Chicago is believed to be one of the first industrial assembly lines (or disassembly lines) to be utilized in the United States starting in 1867. Workers would stand at fixed stations and a pulley system would bring the meat to each worker and they would complete one task. Henry Ford and others have written about the influence of this slaughterhouse practice on the later developments at Ford Motor Company. 20th century According to Domm, the implementation of mass production of an automobile via an assembly line may be credited to Ransom Olds, who used it to build the first mass-produced automobile, the Oldsmobile Curved Dash. Olds patented the assembly line concept, which he put to work in his Olds Motor Vehicle Company factory in 1901. At Ford Motor Company, the assembly line was introduced by William "Pa" Klann upon his return from visiting Swift & Company's slaughterhouse in Chicago and viewing what was referred to as the "disassembly line", where carcasses were butchered as they moved along a conveyor. The efficiency of one person removing the same piece over and over without moving to another station caught his attention. He reported the idea to Peter E. Martin, soon to be head of Ford production, who was doubtful at the time but encouraged him to proceed. Others at Ford have claimed to have put the idea forth to Henry Ford, but Pa Klann's slaughterhouse revelation is well documented in the archives at the Henry Ford Museum and elsewhere, making him an important contributor to the modern automated assembly line concept. Ford was appreciative, having visited the highly automated 40-acre Sears mail order handling facility around 1906. At Ford, the process was an evolution by trial and error of a team consisting primarily of Peter E. Martin, the factory superintendent; Charles E. Sorensen, Martin's assistant; Clarence W. Avery; C. Harold Wills, draftsman and toolmaker; Charles Ebender; and József Galamb. Some of the groundwork for such development had recently been laid by the intelligent layout of machine tool placement that Walter Flanders had been doing at Ford up to 1908. The moving assembly line was developed for the Ford Model T and began operation on October 7, 1913, at the Highland Park Ford Plant, and continued to evolve after that, using time and motion study. 
The assembly line, driven by conveyor belts, reduced production time for a Model T to just 93 minutes by dividing the process into 45 steps. Producing cars quicker than paint of the day could dry, it had an immense influence on the world. In 1922, Ford (through his ghostwriter Crowther) said of his 1913 assembly line: Charles E. Sorensen, in his 1956 memoir My Forty Years with Ford, presented a different version of development that was not so much about individual "inventors" as a gradual, logical development of industrial engineering: As a result of these developments in method, Ford's cars came off the line in three-minute intervals or six feet per minute. This was much faster than previous methods, increasing production by eight to one (requiring 12.5 man-hours before, 1 hour 33 minutes after), while using less manpower. It was so successful, paint became a bottleneck. Only japan black would dry fast enough, forcing the company to drop the variety of colours available before 1914, until fast-drying Duco lacquer was developed in 1926. The assembly line technique was an integral part of the diffusion of the automobile into American society. Decreased costs of production allowed the cost of the Model T to fall within the budget of the American middle class. In 1908, the price of a Model T was around $825, and by 1912 it had decreased to around $575. This price reduction is comparable to a reduction from $15,000 to $10,000 in dollar terms from the year 2000. In 1914, an assembly line worker could buy a Model T with four months' pay. Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury. The combination of high wages and high efficiency is called "Fordism", and was copied by most major industries. The efficiency gains from the assembly line also coincided with the take-off of the United States. The assembly line forced workers to work at a certain pace with very repetitive motions which led to more output per worker while other countries were using less productive methods. In the automotive industry, its success was dominating, and quickly spread worldwide. Ford France and Ford Britain in 1911, Ford Denmark 1923, Ford Germany and Ford Japan 1925; in 1919, Vulcan (Southport, Lancashire) was the first native European manufacturer to adopt it. Soon, companies had to have assembly lines, or risk going broke by not being able to compete; by 1930, 250 companies which did not had disappeared. The massive demand for military hardware in World War II prompted assembly-line techniques in shipbuilding and aircraft production. Thousands of Liberty ships were built making extensive use of prefabrication, enabling ship assembly to be completed in weeks or even days. After having produced fewer than 3,000 planes for the United States Military in 1939, American aircraft manufacturers built over 300,000 planes in World War II. Vultee pioneered the use of the powered assembly line for aircraft manufacturing. Other companies quickly followed. As William S. Knudsen (having worked at Ford, GM and the National Defense Advisory Commission) observed, "We won because we smothered the enemy in an avalanche of production, the like of which he had never seen, nor dreamed possible." Improved working conditions In his 1922 autobiography, Henry Ford mentions several benefits of the assembly line including: Workers do not do any heavy lifting. No stooping or bending over. No special training was required. 
There are jobs that almost anyone can do. Provided employment to immigrants. The gains in productivity allowed Ford to increase worker pay from $1.50 per day to $5.00 per day once employees reached three years of service on the assembly line. Ford continued on to reduce the hourly work week while continuously lowering the Model T price. These goals appear altruistic; however, it has been argued that they were implemented by Ford in order to reduce high employee turnover: when the assembly line was introduced in 1913, it was discovered that "every time the company wanted to add 100 men to its factory personnel, it was necessary to hire 963" in order to counteract the natural distaste the assembly line seems to have inspired. Sociological problems Sociological work has explored the social alienation and boredom that many workers feel because of the repetition of doing the same specialized task all day long. Karl Marx expressed in his theory of alienation the belief that, in order to achieve job satisfaction, workers need to see themselves in the objects they have created, that products should be "mirrors in which workers see their reflected essential nature". Marx viewed labour as a chance for people to externalize facets of their personalities. Marxists argue that performing repetitive, specialized tasks causes a feeling of disconnection between what a worker does all day, who they really are, and what they would ideally be able to contribute to society. Furthermore, Marx views these specialised jobs as insecure, since the worker is expendable as soon as costs rise and technology can replace more expensive human labour. Since workers have to stand in the same place for hours and repeat the same motion hundreds of times per day, repetitive stress injuries are a possible pathology of occupational safety. Industrial noise also proved dangerous. When it was not too high, workers were often prohibited from talking. Charles Piaget, a skilled worker at the LIP factory, recalled that besides being prohibited from speaking, the semi-skilled workers had only 25 centimeters in which to move. Industrial ergonomics later tried to minimize physical trauma.
Technology
Basics_6
null
1158
https://en.wikipedia.org/wiki/Algebraic%20number
Algebraic number
An algebraic number is a number that is a root of a non-zero polynomial in one variable with integer (or, equivalently, rational) coefficients. For example, the golden ratio, (1 + √5)/2, is an algebraic number, because it is a root of the polynomial x² − x − 1. That is, it is a value for x for which the polynomial evaluates to zero. As another example, the complex number 1 + i is algebraic because it is a root of x⁴ + 4. All integers and rational numbers are algebraic, as are all roots of integers. Real and complex numbers that are not algebraic, such as π and e, are called transcendental numbers. The set of algebraic (complex) numbers is countably infinite and has measure zero in the Lebesgue measure as a subset of the uncountable complex numbers. In that sense, almost all complex numbers are transcendental. Similarly, the set of algebraic (real) numbers is countably infinite and has Lebesgue measure zero as a subset of the real numbers, and in that sense almost all real numbers are transcendental. Examples All rational numbers are algebraic. Any rational number, expressed as the quotient of an integer a and a (non-zero) natural number b, satisfies the above definition, because x = a/b is the root of a non-zero polynomial, namely bx − a. Quadratic irrational numbers, irrational solutions of a quadratic polynomial ax² + bx + c with integer coefficients a, b, and c, are algebraic numbers. If the quadratic polynomial is monic (a = 1), the roots are further qualified as quadratic integers. Gaussian integers, complex numbers a + bi for which both a and b are integers, are also quadratic integers. This is because a + bi and a − bi are the two roots of the quadratic x² − 2ax + a² + b². A constructible number can be constructed from a given unit length using a straightedge and compass. It includes all quadratic irrational roots, all rational numbers, and all numbers that can be formed from these using the basic arithmetic operations and the extraction of square roots. (By designating cardinal directions for +1, −1, +i, and −i, complex numbers such as 3 + i√2 are considered constructible.) Any expression formed from algebraic numbers using any combination of the basic arithmetic operations and extraction of nth roots gives another algebraic number. Polynomial roots that cannot be expressed in terms of the basic arithmetic operations and extraction of nth roots (such as the roots of x⁵ − x − 1) are also algebraic; that happens with many but not all polynomials of degree 5 or higher. Values of trigonometric functions of rational multiples of π (except when undefined) are algebraic: for example, cos(π/7), cos(3π/7), and cos(5π/7) satisfy 8x³ − 4x² − 4x + 1 = 0. This polynomial is irreducible over the rationals and so the three cosines are conjugate algebraic numbers. Likewise, tan(3π/16), tan(7π/16), tan(11π/16), and tan(15π/16) satisfy the irreducible polynomial x⁴ − 4x³ − 6x² + 4x + 1, and so are conjugate algebraic integers. This is the equivalent of angles which, when measured in degrees, are rational numbers. Some but not all irrational numbers are algebraic: The numbers √2 and (∛3)/2 are algebraic since they are roots of the polynomials x² − 2 and 8x³ − 3, respectively. The golden ratio φ is algebraic since it is a root of the polynomial x² − x − 1. The numbers π and e are not algebraic numbers (see the Lindemann–Weierstrass theorem). Properties If a polynomial with rational coefficients is multiplied through by the least common denominator, the resulting polynomial with integer coefficients has the same roots. This shows that an algebraic number can be equivalently defined as a root of a polynomial with either integer or rational coefficients. Given an algebraic number, there is a unique monic polynomial with rational coefficients of least degree that has the number as a root.
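A quick numerical spot-check of the examples above can be done in a few lines of Python (the poly helper and the decision to print absolute errors are ours, not part of the article): evaluating each stated polynomial at the corresponding number gives zero up to floating-point rounding.

```python
import math

def poly(coeffs, x):
    """Evaluate a polynomial with coefficients [a_n, ..., a_1, a_0] at x by Horner's rule."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

golden = (1 + math.sqrt(5)) / 2
examples = [
    ("x^2 - x - 1 at the golden ratio",        [1, -1, -1],      golden),
    ("x^4 + 4 at 1 + i",                       [1, 0, 0, 0, 4],  1 + 1j),
    ("x^2 - 2 at sqrt(2)",                     [1, 0, -2],       math.sqrt(2)),
    ("8x^3 - 4x^2 - 4x + 1 at cos(pi/7)",      [8, -4, -4, 1],   math.cos(math.pi / 7)),
]
for label, coeffs, x in examples:
    print(f"{label}: {abs(poly(coeffs, x)):.2e}")   # all of order 1e-15 or smaller, i.e. zero up to rounding
```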
This polynomial is called its minimal polynomial. If its minimal polynomial has degree n, then the algebraic number is said to be of degree n. For example, all rational numbers have degree 1, and an algebraic number of degree 2 is a quadratic irrational. The algebraic numbers are dense in the reals. This follows from the fact that they contain the rational numbers, which are dense in the reals themselves. The set of algebraic numbers is countable, and therefore its Lebesgue measure as a subset of the complex numbers is 0 (essentially, the algebraic numbers take up no space in the complex numbers). That is to say, "almost all" real and complex numbers are transcendental. All algebraic numbers are computable and therefore definable and arithmetical. For real numbers a and b, the complex number a + bi is algebraic if and only if both a and b are algebraic. Degree of simple extensions of the rationals as a criterion to algebraicity For any α, the simple extension of the rationals by α, denoted by Q(α), is of finite degree if and only if α is an algebraic number. The condition of finite degree means that there is a finite set {a₁, a₂, ..., aₙ} in Q(α) such that Q(α) = a₁Q + a₂Q + ⋯ + aₙQ; that is, every member of Q(α) can be written as a₁q₁ + a₂q₂ + ⋯ + aₙqₙ for some rational numbers q₁, ..., qₙ (note that the set {a₁, ..., aₙ} is fixed). Indeed, since the aᵢ are themselves members of Q(α), each can be expressed as sums of products of rational numbers and powers of α, and therefore this condition is equivalent to the requirement that for some finite n, Q(α) = Q + Qα + Qα² + ⋯ + Qαⁿ. The latter condition is equivalent to αⁿ⁺¹, itself a member of Q(α), being expressible as c₀ + c₁α + ⋯ + cₙαⁿ for some rationals c₀, ..., cₙ, so αⁿ⁺¹ − cₙαⁿ − ⋯ − c₁α − c₀ = 0 or, equivalently, α is a root of xⁿ⁺¹ − cₙxⁿ − ⋯ − c₁x − c₀; that is, an algebraic number with a minimal polynomial of degree not larger than n + 1. It can similarly be proven that for any finite set of algebraic numbers α₁, ..., αₙ, the field extension Q(α₁, ..., αₙ) has a finite degree. Field The sum, difference, product, and quotient (if the denominator is nonzero) of two algebraic numbers is again algebraic: For any two algebraic numbers α, β, this follows directly from the fact that the simple extension Q(γ), for γ being either α + β, α − β, αβ or (for β ≠ 0) α/β, is a linear subspace of the finite-degree field extension Q(α, β), and therefore has a finite degree itself, from which it follows (as shown above) that γ is algebraic. An alternative way of showing this is constructively, by using the resultant. Algebraic numbers thus form a field (sometimes denoted by 𝔸, but that usually denotes the adele ring). Algebraic closure Every root of a polynomial equation whose coefficients are algebraic numbers is again algebraic. That can be rephrased by saying that the field of algebraic numbers is algebraically closed. In fact, it is the smallest algebraically closed field containing the rationals and so it is called the algebraic closure of the rationals. That the field of algebraic numbers is algebraically closed can be proven as follows: Let γ be a root of a polynomial of degree n with coefficients that are algebraic numbers a₀, a₁, ..., aₙ. The field extension L = Q(a₀, a₁, ..., aₙ) then has a finite degree with respect to Q. The simple extension L(γ) then has a finite degree with respect to L (since all powers of γ can be expressed by powers of γ up to γⁿ⁻¹). Therefore, L(γ) also has a finite degree with respect to Q. Since Q(γ) is a linear subspace of L(γ), it must also have a finite degree with respect to Q, so γ must be an algebraic number. Related fields Numbers defined by radicals Any number that can be obtained from the integers using a finite number of additions, subtractions, multiplications, divisions, and the taking of (possibly complex) nth roots, where n is a positive integer, is algebraic.
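The field property can be illustrated numerically for one concrete pair of algebraic numbers: √2 and √3 each have degree 2, their sum √2 + √3 is a root of x⁴ − 10x² + 1, and their product √6 is a root of x² − 6, so both are again algebraic, with degrees no larger than the degree of Q(√2, √3). The Python check below (the helper name is ours) is an illustration of that claim for this pair, not a proof.

```python
import math

def poly(coeffs, x):
    """Horner evaluation of a polynomial with coefficients [a_n, ..., a_1, a_0]."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

a, b = math.sqrt(2), math.sqrt(3)            # both algebraic, each of degree 2
print(abs(poly([1, 0, -10, 0, 1], a + b)))   # x^4 - 10x^2 + 1 vanishes at sqrt(2) + sqrt(3)
print(abs(poly([1, 0, -6], a * b)))          # x^2 - 6 vanishes at sqrt(2) * sqrt(3)
# Both outputs are tiny (around 1e-14), i.e. zero up to rounding: the sum and product are
# again roots of integer polynomials, of degree at most [Q(a, b) : Q] = 4, as argued above.
```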
The converse, however, is not true: there are algebraic numbers that cannot be obtained in this manner. These numbers are roots of polynomials of degree 5 or higher, a result of Galois theory (see Quintic equations and the Abel–Ruffini theorem). For example, the equation x⁵ − x − 1 = 0 has a unique real root, approximately 1.1673, that cannot be expressed in terms of only radicals and arithmetic operations. Closed-form number Algebraic numbers are all numbers that can be defined explicitly or implicitly in terms of polynomials, starting from the rational numbers. One may generalize this to "closed-form numbers", which may be defined in various ways. Most broadly, all numbers that can be defined explicitly or implicitly in terms of polynomials, exponentials, and logarithms are called "elementary numbers", and these include the algebraic numbers, plus some transcendental numbers. Most narrowly, one may consider numbers explicitly defined in terms of polynomials, exponentials, and logarithms – this does not include all algebraic numbers, but does include some simple transcendental numbers such as e or ln 2. Algebraic integers An algebraic integer is an algebraic number that is a root of a polynomial with integer coefficients with leading coefficient 1 (a monic polynomial). Examples of algebraic integers are 5 + 13√2, 2 − 6i, and (1 + i√3)/2. Therefore, the algebraic integers constitute a proper superset of the integers, as the latter are the roots of the monic polynomials x − k for all integers k. In this sense, algebraic integers are to algebraic numbers what integers are to rational numbers. The sum, difference and product of algebraic integers are again algebraic integers, which means that the algebraic integers form a ring. The name algebraic integer comes from the fact that the only rational numbers that are algebraic integers are the integers, and because the algebraic integers in any number field are in many ways analogous to the integers. If K is a number field, its ring of integers is the subring of algebraic integers in K, and is frequently denoted as O_K. These are the prototypical examples of Dedekind domains. Special classes Algebraic solution Gaussian integer Eisenstein integer Quadratic irrational number Fundamental unit Root of unity Gaussian period Pisot–Vijayaraghavan number Salem number
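The quintic mentioned above is easy to examine numerically: a simple bisection (the bracketing interval and iteration count below are arbitrary choices) locates its unique real root near 1.1673. The point of the example is that, although this root is a perfectly good algebraic number (indeed an algebraic integer, since the polynomial is monic with integer coefficients), no finite expression in radicals produces it.

```python
def f(x):
    return x**5 - x - 1          # the quintic from the text; its real root is not expressible in radicals

lo, hi = 1.0, 2.0                # f(1) = -1 < 0 and f(2) = 29 > 0, so the root lies in between
for _ in range(60):              # bisection: each step halves the bracketing interval
    mid = (lo + hi) / 2
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid
print(round((lo + hi) / 2, 4))   # 1.1673, still an algebraic number (even an algebraic integer)
```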
Mathematics
Basics
null
1164
https://en.wikipedia.org/wiki/Artificial%20intelligence
Artificial intelligence
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs. High-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." Various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. General intelligence—the ability to complete any task performed by a human on an at least equal level—is among the field's long-term goals. To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields. Artificial intelligence was founded as an academic discipline in 1956, and the field went through multiple cycles of optimism throughout its history, followed by periods of disappointment and loss of funding, known as AI winters. Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques. This growth accelerated further after 2017 with the transformer architecture, and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology. Goals The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research. Reasoning and problem-solving Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics. Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": They become exponentially slower as the problems grow. 
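The scale of that combinatorial explosion is easy to see with a couple of lines of Python; the branching factors, depths, and item counts below are arbitrary illustrative values, not figures from the article.

```python
import math

# A naive search with branching factor b and depth d must examine on the order of b**d
# states, which grows exponentially in d.
for b, d in [(10, 5), (10, 10), (10, 20)]:
    print(f"branching {b}, depth {d}: about {b**d:.0e} states")

# The same explosion in another form: checking every possible ordering of n items.
for n in (5, 10, 20):
    print(f"{n} items: {math.factorial(n):.3e} orderings to enumerate")
```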
Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments. Accurate and efficient reasoning is an unsolved problem. Knowledge representation Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery (mining "interesting" and actionable inferences from large databases), and other areas. A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge. Knowledge bases need to represent things such as objects, properties, categories, and relations between objects; situations, events, states, and time; causes and effects; knowledge about knowledge (what we know about what other people know); default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing); and many other aspects and domains of knowledge. Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous); and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally). There is also the difficulty of knowledge acquisition, the problem of obtaining knowledge for AI applications. Planning and decision-making An "agent" is anything that perceives and takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen. In automated planning, the agent has a specific goal. In automated decision-making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility. In classical planning, the agent knows exactly what the effect of any action will be. In most real-world problems, however, the agent may not be certain about the situation they are in (it is "unknown" or "unobservable") and it may not know for certain what will happen after each possible action (it is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked. In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., with inverse reinforcement learning), or the agent can seek information to improve its preferences. Information value theory can be used to weigh the value of exploratory or experimental actions. The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain of what the outcome will be. 
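The expected-utility rule described above amounts to a probability-weighted sum followed by picking the best action; the sketch below is a minimal illustration, and the actions, probabilities, and utilities in it are invented for the example rather than taken from the article.

```python
# Expected utility of an action: sum over outcomes of P(outcome | action) * utility(outcome).
actions = {
    "take_umbrella": [(0.3, 60), (0.7, 80)],   # (probability, utility) if it rains / stays dry
    "leave_it_home": [(0.3, 0),  (0.7, 100)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
for a, outcomes in actions.items():
    print(a, expected_utility(outcomes))        # roughly 74 and 70
print("chosen action:", best)                   # take_umbrella, the maximum expected utility
```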
A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state. The policy could be calculated (e.g., by iteration), be heuristic, or it can be learned. Game theory describes the rational behavior of multiple interacting agents and is used in AI programs that make decisions that involve other agents. Learning Machine learning is the study of programs that can improve their performance on a given task automatically. It has been a part of AI from the beginning. There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance. Supervised learning requires labeling the training data with the expected answers, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input). In reinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good". Transfer learning is when the knowledge gained from one problem is applied to a new problem. Deep learning is a type of machine learning that runs inputs through biologically inspired artificial neural networks for all of these types of learning. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization. Natural language processing Natural language processing (NLP) allows programs to read, write and communicate in human languages such as English. Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering. Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem). Margaret Masterman believed that it was meaning and not grammar that was the key to understanding languages, and that thesauri and not dictionaries should be the basis of computational language structure. Modern deep learning techniques for NLP include word embedding (representing words, typically as vectors encoding their meaning), transformers (a deep learning architecture using an attention mechanism), and others. In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text, and by 2023, these models were able to get human-level scores on the bar exam, SAT test, GRE test, and many other real-world applications. Perception Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input. The field includes speech recognition, image classification, facial recognition, object recognition,object tracking, and robotic perception. Social intelligence Affective computing is a field that comprises systems that recognize, interpret, process, or simulate human feeling, emotion, and mood. 
For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; it makes them appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction. However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the effects displayed by a videotaped subject. General intelligence A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence. Techniques AI research uses a wide variety of techniques to accomplish the goals above. Search and optimization AI can solve many problems by intelligently searching through many possible solutions. There are two very different kinds of search used in AI: state space search and local search. State space search State space search searches through a tree of possible states to try to find a goal state. For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. "Heuristics" or "rules of thumb" can help prioritize choices that are more likely to reach a goal. Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and countermoves, looking for a winning position. Local search Local search uses mathematical optimization to find a solution to a problem. It begins with some form of guess and refines it incrementally. Gradient descent is a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize a loss function. Variants of gradient descent are commonly used to train neural networks, through the backpropagation algorithm. Another type of local search is evolutionary computation, which aims to iteratively improve a set of candidate solutions by "mutating" and "recombining" them, selecting only the fittest to survive each generation. Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails). Logic Formal logic is used for reasoning and knowledge representation. Formal logic comes in two main forms: propositional logic (which operates on statements that are true or false and uses logical connectives such as "and", "or", "not" and "implies") and predicate logic (which also operates on objects, predicates and relations and uses quantifiers such as "Every X is a Y" and "There are some Xs that are Ys"). Deductive reasoning in logic is the process of proving a new statement (conclusion) from other statements that are given and assumed to be true (the premises). Proofs can be structured as proof trees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes by inference rules. 
Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or axioms. In the case of Horn clauses, problem-solving search can be performed by reasoning forwards from the premises or backwards from the problem. In the more general case of the clausal form of first-order logic, resolution is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved. Inference in both Horn clause logic and first-order logic is undecidable, and therefore intractable. However, backward reasoning with Horn clauses, which underpins computation in the logic programming language Prolog, is Turing complete. Moreover, its efficiency is competitive with computation in other symbolic programming languages. Fuzzy logic assigns a "degree of truth" between 0 and 1. It can therefore handle propositions that are vague and partially true. Non-monotonic logics, including logic programming with negation as failure, are designed to handle default reasoning. Other specialized versions of logic have been developed to describe many complex domains. Probabilistic methods for uncertain reasoning Many problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design. Bayesian networks are a tool that can be used for reasoning (using the Bayesian inference algorithm), learning (using the expectation–maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks). Probabilistic algorithms can also be used for filtering, prediction, smoothing, and finding explanations for streams of data, thus helping perception systems analyze processes that occur over time (e.g., hidden Markov models or Kalman filters). Classifiers and statistical learning methods The simplest AI applications can be divided into two types: classifiers (e.g., "if shiny then diamond"), on one hand, and controllers (e.g., "if diamond then pick up"), on the other hand. Classifiers are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience. There are many kinds of classifiers in use. The decision tree is the simplest and most widely used symbolic machine learning algorithm. K-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, and Kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s. The naive Bayes classifier is reportedly the "most widely used learner" at Google, due in part to its scalability. Neural networks are also used as classifiers. 
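To make the classifier idea concrete, below is a minimal sketch of a k-nearest-neighbour classifier of the kind mentioned above, written in plain Python; the feature values, labels and function name are made up purely for illustration.

```python
from collections import Counter
import math

def knn_classify(query, dataset, k=3):
    """Classify `query` by majority vote among the k closest labeled observations."""
    # dataset is a list of (feature_vector, class_label) pairs
    distances = [
        (math.dist(query, features), label)
        for features, label in dataset
    ]
    nearest = sorted(distances)[:k]                  # keep the k smallest distances
    votes = Counter(label for _, label in nearest)   # count labels among the neighbours
    return votes.most_common(1)[0][0]                # the most frequent label wins

# Hypothetical training data: (shininess, hardness) -> mineral class
training_data = [
    ((0.9, 9.5), "diamond"),
    ((0.8, 9.0), "diamond"),
    ((0.3, 3.0), "calcite"),
    ((0.2, 2.5), "calcite"),
]
print(knn_classify((0.85, 9.2), training_data, k=3))  # expected: "diamond"
```

Production systems replace the brute-force sort with spatial indexes or approximate nearest-neighbour search, but the majority-vote principle is the same.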
Artificial neural networks An artificial neural network is based on a collection of nodes also known as artificial neurons, which loosely model the neurons in a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input, at least one hidden layer of nodes and an output. Each node applies a function and once the weight crosses its specified threshold, the data is transmitted to the next layer. A network is typically called a deep neural network if it has at least 2 hidden layers. Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm. Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function. In feedforward neural networks the signal passes in only one direction. Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short term memory is the most successful network architecture for recurrent networks. Perceptrons use only a single layer of neurons; deep learning uses multiple layers. Convolutional neural networks strengthen the connection between neurons that are "close" to each other—this is especially important in image processing, where a local set of neurons must identify an "edge" before the network can identify an object. Deep learning Deep learning uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces. Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification, and others. The reason that deep learning performs so well in so many applications is not known as of 2023. The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s) but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet. GPT Generative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pretrained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). 
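As a toy illustration of the next-token prediction objective described above, the following sketch fits a bigram model by simply counting which token follows which in a tiny, hypothetical corpus; real GPT models learn the same kind of conditional distribution with billions of transformer parameters rather than a count table.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count, for every token, how often each possible next token follows it."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for current, nxt in zip(tokens, tokens[1:]):
            counts[current][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most likely next token and its estimated probability."""
    followers = counts[token]
    total = sum(followers.values())
    best, freq = followers.most_common(1)[0]
    return best, freq / total

# Hypothetical mini-corpus
corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # e.g. ('cat', 0.33...) because "cat" follows "the" most often
```

Sampling repeatedly from such a conditional distribution is, in miniature, how a language model generates text one token at a time.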
Current GPT models are prone to generating falsehoods called "hallucinations", although this can be reduced with RLHF and quality data. They are used in chatbots, which allow people to ask a question or request a task in simple text. Current models and services include Gemini (formerly Bard), ChatGPT, Grok, Claude, Copilot, and LLaMA. Multimodal GPT models can process different types of data (modalities) such as images, videos, sound, and text. Hardware and software In the late 2010s, graphics processing units (GPUs) that were increasingly designed with AI-specific enhancements and used with specialized TensorFlow software had replaced previously used central processing units (CPUs) as the dominant means for large-scale (commercial and academic) machine learning models' training. Specialized programming languages such as Prolog were used in early AI research, but general-purpose programming languages like Python have become predominant. The transistor density in integrated circuits has been observed to roughly double every 18 months—a trend known as Moore's law, named after the Intel co-founder Gordon Moore, who first identified it. Improvements in GPUs have been even faster, a trend sometimes called Huang's law, named after Nvidia co-founder and CEO Jensen Huang. Applications AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's iPhoto and TikTok). The deployment of AI may be overseen by a chief automation officer (CAO). Health and medicine The application of AI in medicine and medical research has the potential to improve patient care and quality of life. Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients. For medical research, AI is an important tool for processing and integrating big data. This is particularly important for organoid and tissue engineering development, which use microscopy imaging as a key technique in fabrication. It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research. New AI tools can deepen the understanding of biomedically relevant pathways. For example, AlphaFold 2 (2021) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein. In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria. In 2024, researchers used machine learning to accelerate the search for Parkinson's disease drug treatments. Their aim was to identify compounds that block the clumping, or aggregation, of alpha-synuclein (the protein that characterises Parkinson's disease). They were able to speed up the initial screening process ten-fold and reduce the cost by a thousand-fold.
Sexuality Applications of AI in this domain include AI-enabled menstruation and fertility trackers that analyze user data to offer prediction, AI-integrated sex toys (e.g., teledildonics), AI-generated sexual education content, and AI agents that simulate sexual and romantic partners (e.g., Replika). AI is also used for the production of non-consensual deepfake pornography, raising significant ethical and legal concerns. AI technologies have also been used to attempt to identify online gender-based violence and online sexual grooming of minors. Games Game playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Then, in 2017, it defeated Ke Jie, who was the best Go player in the world. Other programs handle imperfect-information games, such as the poker-playing program Pluribus. DeepMind developed increasingly generalistic reinforcement learning models, such as with MuZero, which could be trained to play chess, Go, or Atari games. In 2019, DeepMind's AlphaStar achieved grandmaster level in StarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map. In 2021, an AI agent competed in a PlayStation Gran Turismo competition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning. In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseen open-world video games by observing screen output, as well as executing short, specific tasks in response to natural language instructions. Mathematics In mathematics, special forms of formal step-by-step reasoning are used. In contrast, LLMs such as GPT-4 Turbo, Gemini Ultra, Claude Opus, LLaMa-2 or Mistral Large are working with probabilistic models, which can produce wrong answers in the form of hallucinations. Therefore, they need not only a large database of mathematical problems to learn from but also methods such as supervised fine-tuning or trained classifiers with human-annotated data to improve answers for new problems and learn from corrections. A 2024 study showed that the performance of some language models for reasoning capabilities in solving math problems not included in their training data was low, even for problems with only minor deviations from trained data. Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome including proof of theorems have been developed such as Alpha Tensor, Alpha Geometry and Alpha Proof all from Google DeepMind, Llemma from eleuther or Julius. When natural language is used to describe mathematical problems, converters transform such prompts into a formal language such as Lean to define mathematical tasks. Some models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics. 
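To give a flavour of what formalising a mathematical task in a language such as Lean looks like, here is a minimal sketch in plain Lean 4 with no external libraries; the theorem name is invented for this example and the statements are deliberately elementary.

```lean
-- A natural-language task such as "adding zero to a natural number leaves it
-- unchanged" becomes a precise, machine-checkable statement together with a proof:
theorem add_zero_sketch (n : Nat) : n + 0 = n := rfl

-- A statement can also be posed on its own as a task for a prover; here the
-- proof is supplied by a lemma from Lean's core library:
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

Converters of the kind mentioned above aim to produce statements like these automatically from natural-language prompts, after which a prover or a human supplies the proof.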
Finance Finance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated "robot advisers" have been in use for some years. World Pensions experts like Nicolas Firzli insist it may be too early to see the emergence of highly innovative AI-informed financial products and services: "the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation." Military Various countries are deploying AI military applications. The main applications enhance command and control, communications, sensors, integration and interoperability. Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams. AI has been used in military operations in Iraq, Syria, Israel and Ukraine. Generative AI Agents Artificial intelligence (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks. Other industry-specific tasks There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes. A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management. AI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large-scale and small-scale evacuations using historical data from GPS, videos or social media. Further, AI can provide real-time information on evacuation conditions. In agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development.
AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water. Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights." For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation. During the 2024 Indian elections, US$50 million was spent on authorized AI-generated content, notably by creating deepfakes of allied (including sometimes deceased) politicians to better engage with voters, and by translating speeches to various local languages. Ethics AI has potential benefits and potential risks. AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to "solve intelligence, and then use that to solve everything else". However, as the use of AI has become widespread, several unintended consequences and risks have been identified. In-production systems sometimes do not factor ethics and bias into their AI training processes, especially when the AI algorithms are inherently unexplainable in deep learning. Risks and harm Privacy and copyright Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright. AI-powered devices and services, such as virtual assistants and IoT products, continuously collect personal information, raising concerns about intrusive data gathering and unauthorized access by third parties. The loss of privacy is further exacerbated by AI's ability to process and combine vast amounts of data, potentially leading to a surveillance society where individual activities are constantly monitored and analyzed without adequate safeguards or transparency. Sensitive user data collected may include online activity records, geolocation data, video, or audio. For example, in order to build speech recognition algorithms, Amazon has recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them. Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy. AI developers argue that this is the only way to deliver valuable applications and have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy. Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness. Brian Christian wrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'." Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of "fair use".
Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work". Website owners who do not wish to have their content scraped can indicate it in a "robots.txt" file. In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI. Another discussed approach is to envision a separate sui generis system of protection for creations generated by AI to ensure fair attribution and compensation for human authors. Dominance by tech giants The commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace. Power needs and environmental impacts In January 2024, the International Energy Agency (IEA) released Electricity 2024, Analysis and Forecast to 2026, forecasting electric power use. This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation. Prodigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will be eventually kinder to the environment, but they need the energy now. AI makes the power grid more efficient and "intelligent", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms. A 2024 Goldman Sachs Research Paper, AI Data Centers and the Coming US Power Demand Surge, found "US power demand (is) likely to experience growth not seen in a generation...." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means. Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all. In 2024, the Wall Street Journal reported that big AI companies have begun negotiations with the US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 Million (US). Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers. 
In September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of the electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes, which will include extensive safety scrutiny from the US Nuclear Regulatory Commission. If approved (this will be the first ever US re-commissioning of a nuclear plant), over 835 megawatts of power – enough for 800,000 homes – will be produced. The cost for re-opening and upgrading is estimated at $1.6 billion (US) and is dependent on tax breaks for nuclear power contained in the 2022 US Inflation Reduction Act. The US government and the state of Michigan are investing almost $2 billion (US) to reopen the Palisades Nuclear reactor on Lake Michigan. Closed since 2022, the plant is planned to be reopened in October 2025. The Three Mile Island facility will be renamed the Crane Clean Energy Center after Chris Crane, a nuclear proponent and former CEO of Exelon who was responsible for Exelon's spinoff of Constellation. In 2024, after granting its last approval in September 2023, Taiwan suspended the approval of data centers north of Taoyuan with a capacity of more than 5 MW, due to power supply shortages. Taiwan aims to phase out nuclear power by 2025. On the other hand, Singapore imposed a ban on the opening of data centers in 2019 due to limited electric power supply, but lifted this ban in 2022. Although most nuclear plants in Japan have been shut down after the 2011 Fukushima nuclear accident, according to an October 2024 Bloomberg article in Japanese, cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near a nuclear power plant for a new data center for generative AI. Ubitus CEO Wesley Kuo said nuclear power plants are the most efficient, cheap and stable source of power for AI. On 1 November 2024, the Federal Energy Regulatory Commission (FERC) rejected an application submitted by Talen Energy for approval to supply some electricity from the nuclear power station Susquehanna to Amazon's data center. According to Commission Chairman Willie L. Phillips, it is a burden on the electricity grid as well as a significant cost-shifting concern for households and other business sectors. Misinformation YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation. This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government. The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem. In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing.
It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda. AI pioneer Geoffrey Hinton expressed concern about AI enabling "authoritarian leaders to manipulate their electorates" on a large scale, among other risks. Algorithmic bias and fairness Machine learning applications will be biased if they learn from biased data. The developers may not be aware that the bias exists. Bias can be introduced by the way training data is selected and by the way a model is deployed. If a biased algorithm is used to make decisions that can seriously harm people (as it can in medicine, finance, recruitment, housing or policing) then the algorithm may cause discrimination. The field of fairness studies how to prevent harms from algorithmic biases. On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people, a problem called "sample size disparity". Google "fixed" this problem by preventing the system from labelling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon. COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different—the system consistently overestimated the chance that a black person would re-offend and would underestimate the chance that a white person would not re-offend. In 2017, several researchers showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data. A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender". Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work." Criticism of COMPAS highlighted that machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions as recommendations, some of these "recommendations" will likely be racist. Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is descriptive rather than prescriptive. Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women. There are various conflicting definitions and mathematical models of fairness. These notions depend on ethical assumptions, and are influenced by beliefs about society. 
One broad category is distributive fairness, which focuses on the outcomes, often identifying groups and seeking to compensate for statistical disparities. Representational fairness tries to ensure that AI systems do not reinforce negative stereotypes or render certain groups invisible. Procedural fairness focuses on the decision process rather than the outcome. The most relevant notions of fairness may depend on the context, notably the type of AI application and the stakeholders. The subjectivity in the notions of bias and fairness makes it difficult for companies to operationalize them. Having access to sensitive attributes such as race or gender is also considered by many AI ethicists to be necessary in order to compensate for biases, but it may conflict with anti-discrimination laws. At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings that recommend that until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe, and the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed. Lack of transparency Many AI systems are so complex that their designers cannot explain how they reach their decisions. This is particularly true of deep neural networks, in which there are a large number of non-linear relationships between inputs and outputs; however, some popular explainability techniques exist. It is impossible to be certain that a program is operating correctly if no one knows how exactly it works. There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different than what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to actually have a strong tendency to classify images with a ruler as "cancerous", because pictures of malignancies typically include a ruler to show the scale. Another machine learning system designed to help effectively allocate medical resources was found to classify patients with asthma as being at "low risk" of dying from pneumonia. Having asthma is actually a severe risk factor, but since patients with asthma would usually receive much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading. People who have been harmed by an algorithm's decision have a right to an explanation. Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists. Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used. DARPA established the XAI ("Explainable Artificial Intelligence") program in 2014 to try to solve these problems. Several approaches aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the output. LIME can locally approximate a model's outputs with a simpler, interpretable model. Multitask learning provides a large number of outputs in addition to the target classification.
These other outputs can help developers deduce what the network has learned. Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning. For generative pre-trained transformers, Anthropic developed a technique based on dictionary learning that associates patterns of neuron activations with human-understandable concepts. Bad actors and weaponized AI Artificial intelligence provides a number of tools that are useful to bad actors, such as authoritarian governments, terrorists, criminals or rogue states. A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. Even when used in conventional warfare, they currently cannot reliably choose targets and could potentially kill an innocent person. In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed. By 2015, over fifty countries were reported to be researching battlefield robots. AI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating on this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision-making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. All these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China. There are many other ways that AI is expected to help bad actors, some of which cannot be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours. Technological unemployment Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment. In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk". The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.
In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence. Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously". Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. From the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement. Existential risk It has been argued that AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, "spell the end of the human race". This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character. These sci-fi scenarios are misleading in several ways. First, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager). Stuart Russell gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead." In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side". Second, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive. The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI. Personalities such as Stephen Hawking, Bill Gates, and Elon Musk, as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI. In May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google." He notably mentioned risks of an AI takeover, and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in the use of AI.
In 2023, many leading AI experts endorsed the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Some other researchers were more optimistic. AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier." While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors." Andrew Ng also argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests." Yann LeCun "scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction." In the early 2010s, experts argued that the risks are too distant in the future to warrant research or that humans will be valuable from the perspective of a superintelligent machine. However, after 2016, the study of current and future risks and possible solutions became a serious area of research. Ethical machines and alignment Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk. Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. The field of machine ethics is also called computational morality, and was founded at an AAAI symposium in 2005. Other approaches include Wendell Wallach's "artificial moral agents" and Stuart J. Russell's three principles for developing provably beneficial machines. Open source Active organizations in the AI open-source community include Hugging Face, Google, EleutherAI and Meta. Various AI models, such as Llama 2, Mistral or Stable Diffusion, have been made open-weight, meaning that their architecture and trained parameters (the "weights") are publicly available. Open-weight models can be freely fine-tuned, which allows companies to specialize them with their own data and for their own use-case. Open-weight models are useful for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to harmful requests, can be trained away until it becomes ineffective. Some researchers warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitate bioterrorism) and that once released on the Internet, they cannot be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses. Frameworks Artificial Intelligence projects can have their ethical permissibility tested while designing, developing, and implementing an AI system. 
An AI framework such as the Care and Act Framework, which contains the SUM values and was developed by the Alan Turing Institute, tests projects in four main areas: respect the dignity of individual people; connect with other people sincerely, openly, and inclusively; care for the wellbeing of everyone; and protect social values, justice, and the public interest. Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others; however, these principles are not without criticism, especially with regard to how the people contributing to these frameworks are chosen. Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers. In 2024, the UK AI Safety Institute released 'Inspect', a testing toolset for AI safety evaluations that is freely available on GitHub under an MIT open-source licence and can be extended with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities. Regulation The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. According to the AI Index at Stanford, the annual number of AI-related laws passed in the 127 surveyed countries jumped from one in 2016 to 37 in 2022. Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, the United Arab Emirates, the U.S., and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics. In 2024, the Council of Europe created the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". It was adopted by the European Union, the United States, the United Kingdom, and other signatories. In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".
A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important". In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks. 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence. In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI. History The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable form of mathematical reasoning. This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an "electronic brain". They developed several areas of research that would become part of AI, such as McCullouch and Pitts design for "artificial neurons" in 1943, and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that "machine intelligence" was plausible. The field of AI research was founded at a workshop at Dartmouth College in 1956. The attendees became the leaders of AI research in the 1960s. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English. Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s. Researchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field. In 1965 Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". In 1967 Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". They had, however, underestimated the difficulty of the problem. In 1974, both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill and ongoing pressure from the U.S. Congress to fund more productive projects. Minsky's and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether. The "AI winter", a period when obtaining funding for AI projects was difficult, followed. In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. 
and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began. Up to this point, most of AI's funding had gone to projects that used high-level symbols to represent mental objects like plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition, and began to look into "sub-symbolic" approaches. Rodney Brooks rejected "representation" in general and focussed directly on engineering machines that move and survive. Judea Pearl, Lotfi Zadeh, and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than relying on precise logic. But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others. In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks. AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics). By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence" (a tendency known as the AI effect). However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s. Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field. For many specific tasks, other methods were abandoned. Deep learning's success was based on both hardware improvements (faster computers, graphics processing units, cloud computing) and access to large amounts of data (including curated datasets, such as ImageNet). Deep learning's success led to an enormous increase in interest and funding in AI. The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019. In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study. In the late 2010s and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2016, AlphaGo, developed by DeepMind, beat world Go champion Lee Sedol. The program was taught only the game's rules and developed a strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. ChatGPT, launched on November 30, 2022, became the fastest-growing consumer software application in history, gaining over 100 million users in two months.
It marked what is widely regarded as AI's breakout year, bringing it into the public consciousness. These programs, and others, inspired an aggressive AI boom, where large companies began investing billions of dollars in AI research. According to AI Impacts, about $50 billion annually was invested in "AI" around 2022 in the U.S. alone and about 20% of the new U.S. Computer Science PhD graduates have specialized in "AI". About 800,000 "AI"-related U.S. job openings existed in 2022. According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies. Philosophy Philosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines. Another major focus has been whether machines can be conscious, and the associated ethical implications. Many other topics in philosophy are relevant to AI, such as epistemology and free will. Rapid advancements have intensified public discussions on the philosophy and ethics of AI. Defining artificial intelligence Alan Turing wrote in 1950, "I propose to consider the question 'can machines think'?" He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour". He devised the Turing test, which measures the ability of a machine to simulate human conversation. Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people but "it is usual to have a polite convention that everyone thinks." Russell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure. However, they are critical that the test requires the machine to imitate humans. "Aeronautical engineering texts", they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'" AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence". McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world". Another AI founder, Marvin Minsky, similarly describes it as "the ability to solve hard problems". The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals. These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine—and no other philosophical discussion is required, or may not even be possible. Another definition has been adopted by Google, a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence. Some authors have suggested that, in practice, the definition of AI is vague and difficult to pin down, with contention as to whether classical algorithms should be categorised as AI, with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did "not actually use AI in a material way". Evaluating approaches to AI No established unifying theory or paradigm has guided AI research for most of its history.
The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers. Symbolic AI and its limits Symbolic AI (or "GOFAI") simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action." However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult. Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge. Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him. The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches. Neat vs. scruffy "Neats" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s, but eventually was seen as irrelevant. Modern AI has elements of both. Soft vs. hard computing Finding a provably correct or optimal solution is intractable for many important problems. Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks. Narrow vs. general AI AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals. General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. 
The sub-field of artificial general intelligence studies this area exclusively. Machine consciousness, sentience, and mind The philosophy of mind does not know whether a machine can have a mind, consciousness and mental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on." However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction. Consciousness David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness. The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like. Computationalism and functionalism Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam. Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." Searle challenges this claim with his Chinese room argument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind. AI welfare and rights It is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree. But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals. Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights. Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities. 
Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part in society on their own. Progress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited. Future Superintelligence and the singularity A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity". However, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do. Transhumanism Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. Edward Fredkin argues that "artificial intelligence is the next step in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his 1998 book Darwin Among the Machines: The Evolution of Global Intelligence. Decomputing Arguments for decomputing have been raised by Dan McQuillan (Resisting AI: An Anti-fascist Approach to Artificial Intelligence, 2022), meaning an opposition to the sweeping application and expansion of artificial intelligence. Similar to degrowth, the approach criticizes AI as an outgrowth of the systemic issues of the capitalist world we live in, arguing that a different future is possible, one in which distance between people is reduced rather than increased by AI intermediaries. In fiction Thought-capable artificial beings have appeared as storytelling devices since antiquity, and have been a persistent theme in science fiction. A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture. Isaac Asimov introduced the Three Laws of Robotics in many stories, most notably with the "Multivac" super-intelligent computer. Asimov's laws are often brought up during lay discussions of machine ethics; while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.
Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.
Technology
Computing and information technology
null
1176
https://en.wikipedia.org/wiki/Antisymmetric%20relation
Antisymmetric relation
In mathematics, a binary relation R on a set X is antisymmetric if there is no pair of distinct elements of X each of which is related by R to the other. More formally, R is antisymmetric precisely if for all a, b in X, whenever a R b and b R a hold, then a = b; or equivalently, if a R b and a ≠ b, then b R a does not hold. The definition of antisymmetry says nothing about whether a R a actually holds or not for any a. An antisymmetric relation R on a set X may be reflexive (that is, a R a for all a in X), irreflexive (that is, a R a for no a in X), or neither reflexive nor irreflexive. A relation is asymmetric if and only if it is both antisymmetric and irreflexive. Examples The divisibility relation on the natural numbers is an important example of an antisymmetric relation. In this context, antisymmetry means that the only way each of two numbers can be divisible by the other is if the two are, in fact, the same number; equivalently, if n and m are distinct and n is a factor of m, then m cannot be a factor of n. For example, 12 is divisible by 4, but 4 is not divisible by 12. The usual order relation ≤ on the real numbers is antisymmetric: if for two real numbers x and y both inequalities x ≤ y and y ≤ x hold, then x and y must be equal. Similarly, the subset order ⊆ on the subsets of any given set is antisymmetric: given two sets A and B, if every element in A also is in B and every element in B is also in A, then A and B must contain all the same elements and therefore be equal: A ⊆ B and B ⊆ A together imply A = B. A real-life example of a relation that is typically antisymmetric is "paid the restaurant bill of" (understood as restricted to a given occasion). Typically, some people pay their own bills, while others pay for their spouses or friends. As long as no two people pay each other's bills, the relation is antisymmetric. Properties Partial and total orders are antisymmetric by definition. A relation can be both symmetric and antisymmetric (in this case, it must be coreflexive), and there are relations which are neither symmetric nor antisymmetric (for example, the "preys on" relation on biological species). Antisymmetry is different from asymmetry: a relation is asymmetric if and only if it is antisymmetric and irreflexive.
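The definition above can be checked mechanically for a finite relation. The following minimal Python sketch (the function name and the small test relations are illustrative additions, not part of the article) represents a relation as a set of ordered pairs and tests antisymmetry directly from the definition:

```python
from itertools import product

def is_antisymmetric(relation):
    """True if whenever both (a, b) and (b, a) are in the relation, a equals b."""
    return all(a == b for (a, b) in relation if (b, a) in relation)

# Divisibility on {1, ..., 12}: antisymmetric (and also reflexive).
divides = {(a, b) for a, b in product(range(1, 13), repeat=2) if b % a == 0}

# "Differs by at most 1" on the same set: symmetric, hence not antisymmetric.
near = {(a, b) for a, b in product(range(1, 13), repeat=2) if abs(a - b) <= 1}

print(is_antisymmetric(divides))  # True
print(is_antisymmetric(near))     # False: (3, 4) and (4, 3) are both present
```

The divisibility example mirrors the one in the text: 12 and 4 are related in only one direction, so the pair never violates the condition.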
Mathematics
Order theory
null
1181
https://en.wikipedia.org/wiki/Astrometry
Astrometry
Astrometry is a branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies. It provides the kinematics and physical origin of the Solar System and this galaxy, the Milky Way. History The history of astrometry is linked to the history of star catalogues, which gave astronomers reference points for objects in the sky so they could track their movements. This can be dated back to the ancient Greek astronomer Hipparchus, who around 190 BC used the catalogue of his predecessors Timocharis and Aristillus to discover Earth's precession. In doing so, he also developed the brightness scale still in use today. Hipparchus compiled a catalogue with at least 850 stars and their positions. Hipparchus's successor, Ptolemy, included a catalogue of 1,022 stars in his work the Almagest, giving their location, coordinates, and brightness. In the 10th century, the Iranian astronomer Abd al-Rahman al-Sufi carried out observations on the stars and described their positions, magnitudes and star color; furthermore, he provided drawings for each constellation, which are depicted in his Book of Fixed Stars. Egyptian mathematician Ibn Yunus observed more than 10,000 entries for the Sun's position for many years using a large astrolabe with a diameter of nearly 1.4 metres. His observations on eclipses were still used centuries later in Canadian–American astronomer Simon Newcomb's investigations on the motion of the Moon, while his other observations of the motions of the planets Jupiter and Saturn inspired French scholar Laplace's Obliquity of the Ecliptic and Inequalities of Jupiter and Saturn. In the 15th century, the Timurid astronomer Ulugh Beg compiled the Zij-i-Sultani, in which he catalogued 1,019 stars. Like the earlier catalogues of Hipparchus and Ptolemy, Ulugh Beg's catalogue is estimated to have been precise to within approximately 20 minutes of arc. In the 16th century, Danish astronomer Tycho Brahe used improved instruments, including large mural instruments, to measure star positions more accurately than previously, with a precision of 15–35 arcsec. Ottoman scholar Taqi al-Din measured the right ascension of the stars at the Constantinople Observatory of Taqi ad-Din using the "observational clock" he invented. When telescopes became commonplace, setting circles sped measurements. English astronomer James Bradley first tried to measure stellar parallaxes in 1729. The stellar movement proved too insignificant for his telescope, but he instead discovered the aberration of light and the nutation of the Earth's axis. His cataloguing of 3222 stars was refined in 1807 by German astronomer Friedrich Bessel, the father of modern astrometry. He made the first measurement of stellar parallax: 0.3 arcsec for the binary star 61 Cygni. In 1872, British astronomer William Huggins used spectroscopy to measure the radial velocity of several prominent stars, including Sirius. Because they are very difficult to measure, only about 60 stellar parallaxes had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines and more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. The Carte du Ciel project, started in the late 19th century to improve star mapping, could not be finished, but it made photography a common technique for astrometry.
In the 1980s, charge-coupled devices (CCDs) replaced photographic plates and reduced optical uncertainties to one milliarcsecond. This technology made astrometry less expensive, opening the field to an amateur audience. In 1989, the European Space Agency's Hipparcos satellite took astrometry into orbit, where it could be less affected by mechanical forces of the Earth and optical distortions from its atmosphere. Operated from 1989 to 1993, Hipparcos measured large and small angles on the sky with much greater precision than any previous optical telescopes. During its 4-year run, the positions, parallaxes, and proper motions of 118,218 stars were determined with an unprecedented degree of accuracy. A new "Tycho catalog" drew together a database of 1,058,332 stars to within 20-30 mas (milliarcseconds). Additional catalogues were compiled for the 23,882 double and multiple stars and 11,597 variable stars also analyzed during the Hipparcos mission. In 2013, the Gaia satellite was launched and improved the accuracy of Hipparcos. The precision was improved by a factor of 100 and enabled the mapping of a billion stars. Today, the catalogue most often used is USNO-B1.0, an all-sky catalogue that tracks proper motions, positions, magnitudes and other characteristics for over one billion stellar objects. During the past 50 years, 7,435 Schmidt camera plates were used to complete several sky surveys that make the data in USNO-B1.0 accurate to within 0.2 arcsec. Applications Apart from the fundamental function of providing astronomers with a reference frame to report their observations in, astrometry is also fundamental for fields like celestial mechanics, stellar dynamics and galactic astronomy. In observational astronomy, astrometric techniques help identify stellar objects by their unique motions. It is instrumental for keeping time, in that UTC is essentially the atomic time synchronized to Earth's rotation by means of exact astronomical observations. Astrometry is an important step in the cosmic distance ladder because it establishes parallax distance estimates for stars in the Milky Way. Astrometry has also been used to support claims of extrasolar planet detection by measuring the displacement the proposed planets cause in their parent star's apparent position on the sky, due to their mutual orbit around the center of mass of the system. Astrometry is more accurate in space missions that are not affected by the distorting effects of the Earth's atmosphere. NASA's planned Space Interferometry Mission (SIM PlanetQuest) (now cancelled) was to utilize astrometric techniques to detect terrestrial planets orbiting 200 or so of the nearest solar-type stars. The European Space Agency's Gaia Mission, launched in 2013, applies astrometric techniques in its stellar census. In addition to the detection of exoplanets, it can also be used to determine their mass. Astrometric measurements are used by astrophysicists to constrain certain models in celestial mechanics. By measuring the velocities of pulsars, it is possible to put a limit on the asymmetry of supernova explosions. Also, astrometric results are used to determine the distribution of dark matter in the galaxy. Astronomers use astrometric techniques for the tracking of near-Earth objects. Astrometry is responsible for the detection of many record-breaking Solar System objects. To find such objects astrometrically, astronomers use telescopes to survey the sky and large-area cameras to take pictures at various determined intervals. 
By studying these images, they can detect Solar System objects by their movements relative to the background stars, which remain fixed. Once a movement per unit time is observed, astronomers compensate for the parallax caused by Earth's motion during this time and then calculate the heliocentric distance to the object. Using this distance and other photographs, more information about the object, including its orbital elements, can be obtained. Asteroid impact avoidance is among the purposes of such tracking. Quaoar and Sedna are two trans-Neptunian dwarf planets discovered in this way by Michael E. Brown and others at Caltech using the Palomar Observatory's Samuel Oschin telescope and the Palomar-Quest large-area CCD camera. The ability of astronomers to track the positions and movements of such celestial bodies is crucial to the understanding of the Solar System and its interrelated past, present, and future with others in the Universe. Statistics A fundamental aspect of astrometry is error correction. Various factors introduce errors into the measurement of stellar positions, including atmospheric conditions, imperfections in the instruments, and errors by the observer or the measuring instruments. Many of these errors can be reduced by various techniques, such as through instrument improvements and compensations to the data. The results are then analyzed using statistical methods to compute data estimates and error ranges. Computer programs XParallax viu (Free application for Windows) Astrometrica (Application for Windows) Astrometry.net (Online blind astrometry)
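The parallax distances mentioned above follow from a simple reciprocal relation: a star whose annual parallax is p arcseconds lies at a distance of 1/p parsecs. A minimal Python sketch of that relation (the function name and the light-year factor are illustrative additions, not from the article):

```python
def parallax_to_distance(parallax_arcsec):
    """Distance from annual parallax, using d [parsec] = 1 / p [arcsec].

    A parsec is the distance at which the 1 au Earth-Sun baseline subtends
    one arcsecond, so distance and parallax are reciprocal in these units.
    """
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    distance_pc = 1.0 / parallax_arcsec
    distance_ly = distance_pc * 3.2616  # one parsec is about 3.26 light-years
    return distance_pc, distance_ly

# Bessel's roughly 0.3 arcsec parallax for 61 Cygni, quoted earlier:
pc, ly = parallax_to_distance(0.3)
print(f"{pc:.1f} pc, about {ly:.1f} light-years")  # ~3.3 pc, about 10.9 ly
```

Milliarcsecond precision, as achieved by Hipparcos and Gaia, matters because the range of measurable distances scales inversely with the smallest parallax that can be resolved.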
Physical sciences
Astrometry
null
1187
https://en.wikipedia.org/wiki/Alloy
Alloy
An alloy is a mixture of chemical elements of which in most cases at least one is a metallic element, although it is also sometimes used for mixtures of elements; herein only metallic alloys are described. Most alloys are metallic and show good electrical conductivity, ductility, opacity, and luster, and may have properties that differ from those of the pure elements such as increased strength or hardness. In some cases, an alloy may reduce the overall cost of the material while preserving important properties. In other cases, the mixture imparts synergistic properties such as corrosion resistance or mechanical strength. In an alloy, the atoms are joined by metallic bonding rather than by covalent bonds typically found in chemical compounds. The alloy constituents are usually measured by mass percentage for practical applications, and in atomic fraction for basic science studies. Alloys are usually classified as substitutional or interstitial alloys, depending on the atomic arrangement that forms the alloy. They can be further classified as homogeneous (consisting of a single phase), or heterogeneous (consisting of two or more phases) or intermetallic. An alloy may be a solid solution of metal elements (a single phase, where all metallic grains (crystals) are of the same composition) or a mixture of metallic phases (two or more solutions, forming a microstructure of different crystals within the metal). Examples of alloys include red gold (gold and copper), white gold (gold and silver), sterling silver (silver and copper), steel or silicon steel (iron with non-metallic carbon or silicon respectively), solder, brass, pewter, duralumin, bronze, and amalgams. Alloys are used in a wide variety of applications, from the steel alloys, used in everything from buildings to automobiles to surgical tools, to exotic titanium alloys used in the aerospace industry, to beryllium-copper alloys for non-sparking tools. Characteristics An alloy is a mixture of chemical elements, which forms an impure substance (admixture) that retains the characteristics of a metal. An alloy is distinct from an impure metal in that, with an alloy, the added elements are well controlled to produce desirable properties, while impure metals such as wrought iron are less controlled, but are often considered useful. Alloys are made by mixing two or more elements, at least one of which is a metal. This is usually called the primary metal or the base metal, and the name of this metal may also be the name of the alloy. The other constituents may or may not be metals but, when mixed with the molten base, they will be soluble and dissolve into the mixture. The mechanical properties of alloys will often be quite different from those of its individual constituents. A metal that is normally very soft (malleable), such as aluminium, can be altered by alloying it with another soft metal, such as copper. Although both metals are very soft and ductile, the resulting aluminium alloy will have much greater strength. Adding a small amount of non-metallic carbon to iron trades its great ductility for the greater strength of an alloy called steel. Due to its very-high strength, but still substantial toughness, and its ability to be greatly altered by heat treatment, steel is one of the most useful and common alloys in modern use. By adding chromium to steel, its resistance to corrosion can be enhanced, creating stainless steel, while adding silicon will alter its electrical characteristics, producing silicon steel. 
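As noted above, alloy compositions are quoted by mass percentage in practice but by atomic fraction in basic science; converting between the two only requires the molar masses of the constituents. A minimal Python sketch under that assumption (the rounded molar-mass table and the function name are illustrative, not from the article):

```python
# Rounded molar masses in g/mol (illustrative values).
MOLAR_MASS = {"Fe": 55.845, "C": 12.011, "Cu": 63.546, "Zn": 65.38}

def mass_to_atomic_fraction(mass_percent):
    """Convert a composition in mass percent to atomic (mole) fractions.

    Each mass share is divided by its molar mass to get relative moles,
    and the results are normalised so the fractions sum to 1.
    """
    moles = {el: wt / MOLAR_MASS[el] for el, wt in mass_percent.items()}
    total = sum(moles.values())
    return {el: n / total for el, n in moles.items()}

# A 0.8% carbon steel by mass is roughly 3.6% carbon by atoms,
# because carbon atoms are much lighter than iron atoms.
print(mass_to_atomic_fraction({"Fe": 99.2, "C": 0.8}))
```

The same calculation illustrates why interstitial elements such as carbon can dominate an alloy's behaviour at seemingly small mass percentages.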
Like oil and water, a molten metal may not always mix with another element. For example, pure iron is almost completely insoluble with copper. Even when the constituents are soluble, each will usually have a saturation point, beyond which no more of the constituent can be added. Iron, for example, can hold a maximum of 6.67% carbon. Although the elements of an alloy usually must be soluble in the liquid state, they may not always be soluble in the solid state. If the metals remain soluble when solid, the alloy forms a solid solution, becoming a homogeneous structure consisting of identical crystals, called a phase. If as the mixture cools the constituents become insoluble, they may separate to form two or more different types of crystals, creating a heterogeneous microstructure of different phases, some with more of one constituent than the other. However, in other alloys, the insoluble elements may not separate until after crystallization occurs. If cooled very quickly, they first crystallize as a homogeneous phase, but they are supersaturated with the secondary constituents. As time passes, the atoms of these supersaturated alloys can separate from the crystal lattice, becoming more stable, and forming a second phase that serves to reinforce the crystals internally. Some alloys, such as electrum—an alloy of silver and gold—occur naturally. Meteorites are sometimes made of naturally occurring alloys of iron and nickel, but are not native to the Earth. One of the first alloys made by humans was bronze, which is a mixture of the metals tin and copper. Bronze was an extremely useful alloy to the ancients, because it is much stronger and harder than either of its components. Steel was another common alloy. However, in ancient times, it could only be created as an accidental byproduct from the heating of iron ore in fires (smelting) during the manufacture of iron. Other ancient alloys include pewter, brass and pig iron. In the modern age, steel can be created in many forms. Carbon steel can be made by varying only the carbon content, producing soft alloys like mild steel or hard alloys like spring steel. Alloy steels can be made by adding other elements, such as chromium, molybdenum, vanadium or nickel, resulting in alloys such as high-speed steel or tool steel. Small amounts of manganese are usually alloyed with most modern steels because of its ability to remove unwanted impurities, like phosphorus, sulfur and oxygen, which can have detrimental effects on the alloy. However, most alloys were not created until the 1900s, such as various aluminium, titanium, nickel, and magnesium alloys. Some modern superalloys, such as incoloy, inconel, and hastelloy, may consist of a multitude of different elements. An alloy is technically an impure metal, but when referring to alloys, the term impurities usually denotes undesirable elements. Such impurities are introduced from the base metals and alloying elements, but are removed during processing. For instance, sulfur is a common impurity in steel. Sulfur combines readily with iron to form iron sulfide, which is very brittle, creating weak spots in the steel. Lithium, sodium and calcium are common impurities in aluminium alloys, which can have adverse effects on the structural integrity of castings. Conversely, otherwise pure-metals that contain unwanted impurities are often called "impure metals" and are not usually referred to as alloys. 
Oxygen, present in the air, readily combines with most metals to form metal oxides; especially at higher temperatures encountered during alloying. Great care is often taken during the alloying process to remove excess impurities, using fluxes, chemical additives, or other methods of extractive metallurgy. Theory Alloying a metal is done by combining it with one or more other elements. The most common and oldest alloying process is performed by heating the base metal beyond its melting point and then dissolving the solutes into the molten liquid, which may be possible even if the melting point of the solute is far greater than that of the base. For example, in its liquid state, titanium is a very strong solvent capable of dissolving most metals and elements. In addition, it readily absorbs gases like oxygen and burns in the presence of nitrogen. This increases the chance of contamination from any contacting surface, and so must be melted in vacuum induction-heating and special, water-cooled, copper crucibles. However, some metals and solutes, such as iron and carbon, have very high melting-points and were impossible for ancient people to melt. Thus, alloying (in particular, interstitial alloying) may also be performed with one or more constituents in a gaseous state, such as found in a blast furnace to make pig iron (liquid-gas), nitriding, carbonitriding or other forms of case hardening (solid-gas), or the cementation process used to make blister steel (solid-gas). It may also be done with one, more, or all of the constituents in the solid state, such as found in ancient methods of pattern welding (solid-solid), shear steel (solid-solid), or crucible steel production (solid-liquid), mixing the elements via solid-state diffusion. By adding another element to a metal, differences in the size of the atoms create internal stresses in the lattice of the metallic crystals; stresses that often enhance its properties. For example, the combination of carbon with iron produces steel, which is stronger than iron, its primary element. The electrical and thermal conductivity of alloys is usually lower than that of the pure metals. The physical properties, such as density, reactivity, Young's modulus of an alloy may not differ greatly from those of its base element, but engineering properties such as tensile strength, ductility, and shear strength may be substantially different from those of the constituent materials. This is sometimes a result of the sizes of the atoms in the alloy, because larger atoms exert a compressive force on neighboring atoms, and smaller atoms exert a tensile force on their neighbors, helping the alloy resist deformation. Sometimes alloys may exhibit marked differences in behavior even when small amounts of one element are present. For example, impurities in semiconducting ferromagnetic alloys lead to different properties, as first predicted by White, Hogan, Suhl, Tian Abrie and Nakamura. Unlike pure metals, most alloys do not have a single melting point, but a melting range during which the material is a mixture of solid and liquid phases (a slush). The temperature at which melting begins is called the solidus, and the temperature when melting is just complete is called the liquidus. For many alloys there is a particular alloy proportion (in some cases more than one), called either a eutectic mixture or a peritectic composition, which gives the alloy a unique and low melting point, and no liquid/solid slush transition. 
Heat treatment Alloying elements are added to a base metal, to induce hardness, toughness, ductility, or other desired properties. Most metals and alloys can be work hardened by creating defects in their crystal structure. These defects are created during plastic deformation by hammering, bending, extruding, et cetera, and are permanent unless the metal is recrystallized. Otherwise, some alloys can also have their properties altered by heat treatment. Nearly all metals can be softened by annealing, which recrystallizes the alloy and repairs the defects, but not as many can be hardened by controlled heating and cooling. Many alloys of aluminium, copper, magnesium, titanium, and nickel can be strengthened to some degree by some method of heat treatment, but few respond to this to the same degree as does steel. The base metal iron of the iron-carbon alloy known as steel undergoes a change in the arrangement (allotropy) of the atoms of its crystal matrix at a certain temperature (which depends on the carbon content). This allows the smaller carbon atoms to enter the interstices of the iron crystal. When this diffusion happens, the carbon atoms are said to be in solution in the iron, forming a particular single, homogeneous, crystalline phase called austenite. If the steel is cooled slowly, the carbon can diffuse out of the iron and it will gradually revert to its low temperature allotrope. During slow cooling, the carbon atoms will no longer be as soluble with the iron, and will be forced to precipitate out of solution, nucleating into a more concentrated form of iron carbide (Fe3C) in the spaces between the pure iron crystals. The steel then becomes heterogeneous, as it is formed of two phases, the iron-carbon phase called cementite (or carbide), and pure iron ferrite. Such a heat treatment produces a steel that is rather soft. If the steel is cooled quickly, however, the carbon atoms will not have time to diffuse and precipitate out as carbide, but will be trapped within the iron crystals. When rapidly cooled, a diffusionless (martensite) transformation occurs, in which the carbon atoms become trapped in solution. This causes the iron crystals to deform as the crystal structure tries to change to its low temperature state, leaving those crystals very hard but much less ductile (more brittle). While the high strength of steel results when diffusion and precipitation is prevented (forming martensite), most heat-treatable alloys are precipitation hardening alloys that depend on the diffusion of alloying elements to achieve their strength. When heated to form a solution and then cooled quickly, these alloys become much softer than normal, during the diffusionless transformation, but then harden as they age. The solutes in these alloys will precipitate over time, forming intermetallic phases, which are difficult to discern from the base metal. Unlike steel, in which the solid solution separates into different crystal phases (carbide and ferrite), precipitation hardening alloys form different phases within the same crystal. These intermetallic alloys appear homogeneous in crystal structure, but tend to behave heterogeneously, becoming hard and somewhat brittle. In 1906, precipitation hardening alloys were discovered by Alfred Wilm. Precipitation hardening alloys, such as certain alloys of aluminium, titanium, and copper, are heat-treatable alloys that soften when quenched (cooled quickly), and then harden over time.
Wilm had been searching for a way to harden aluminium alloys for use in machine-gun cartridge cases. Knowing that aluminium-copper alloys were heat-treatable to some degree, Wilm tried quenching a ternary alloy of aluminium, copper, and the addition of magnesium, but was initially disappointed with the results. However, when Wilm retested it the next day he discovered that the alloy increased in hardness when left to age at room temperature, and far exceeded his expectations. Although an explanation for the phenomenon was not provided until 1919, duralumin was one of the first "age hardening" alloys used, becoming the primary building material for the first Zeppelins, and was soon followed by many others. Because they often exhibit a combination of high strength and low weight, these alloys became widely used in many forms of industry, including the construction of modern aircraft. Mechanisms When a molten metal is mixed with another substance, there are two mechanisms that can cause an alloy to form, called atom exchange and the interstitial mechanism. The relative size of each element in the mix plays a primary role in determining which mechanism will occur. When the atoms are relatively similar in size, the atom exchange method usually happens, where some of the atoms composing the metallic crystals are substituted with atoms of the other constituent. This is called a substitutional alloy. Examples of substitutional alloys include bronze and brass, in which some of the copper atoms are substituted with either tin or zinc atoms respectively. In the case of the interstitial mechanism, one atom is usually much smaller than the other and can not successfully substitute for the other type of atom in the crystals of the base metal. Instead, the smaller atoms become trapped in the interstitial sites between the atoms of the crystal matrix. This is referred to as an interstitial alloy. Steel is an example of an interstitial alloy, because the very small carbon atoms fit into interstices of the iron matrix. Stainless steel is an example of a combination of interstitial and substitutional alloys, because the carbon atoms fit into the interstices, but some of the iron atoms are substituted by nickel and chromium atoms. History and examples Meteoric iron The use of alloys by humans started with the use of meteoric iron, a naturally occurring alloy of nickel and iron. It is the main constituent of iron meteorites. As no metallurgic processes were used to separate iron from nickel, the alloy was used as it was. Meteoric iron could be forged from a red heat to make objects such as tools, weapons, and nails. In many cultures it was shaped by cold hammering into knives and arrowheads. They were often used as anvils. Meteoric iron was very rare and valuable, and difficult for ancient people to work. Bronze and brass Iron is usually found as iron ore on Earth, except for one deposit of native iron in Greenland, which was used by the Inuit. Native copper, however, was found worldwide, along with silver, gold, and platinum, which were also used to make tools, jewelry, and other objects since Neolithic times. Copper was the hardest of these metals, and the most widely distributed. It became one of the most important metals to the ancients. Around 10,000 years ago in the highlands of Anatolia (Turkey), humans learned to smelt metals such as copper and tin from ore. Around 2500 BC, people began alloying the two metals to form bronze, which was much harder than its ingredients. 
Tin was rare, however, being found mostly in Great Britain. In the Middle East, people began alloying copper with zinc to form brass. Ancient civilizations took into account the mixture and the various properties it produced, such as hardness, toughness and melting point, under various conditions of temperature and work hardening, developing much of the information contained in modern alloy phase diagrams. For example, arrowheads from the Chinese Qin dynasty (around 200 BC) were often constructed with a hard bronze-head, but a softer bronze-tang, combining the alloys to prevent both dulling and breaking during use. Amalgams Mercury has been smelted from cinnabar for thousands of years. Mercury dissolves many metals, such as gold, silver, and tin, to form amalgams (an alloy in a soft paste or liquid form at ambient temperature). Amalgams have been used since 200 BC in China for gilding objects such as armor and mirrors with precious metals. The ancient Romans often used mercury-tin amalgams for gilding their armor. The amalgam was applied as a paste and then heated until the mercury vaporized, leaving the gold, silver, or tin behind. Mercury was often used in mining, to extract precious metals like gold and silver from their ores. Precious metals Many ancient civilizations alloyed metals for purely aesthetic purposes. In ancient Egypt and Mycenae, gold was often alloyed with copper to produce red-gold, or iron to produce a bright burgundy-gold. Gold was often found alloyed with silver or other metals to produce various types of colored gold. These metals were also used to strengthen each other, for more practical purposes. Copper was often added to silver to make sterling silver, increasing its strength for use in dishes, silverware, and other practical items. Quite often, precious metals were alloyed with less valuable substances as a means to deceive buyers. Around 250 BC, Archimedes was commissioned by the King of Syracuse to find a way to check the purity of the gold in a crown, leading to the famous bath-house shouting of "Eureka!" upon the discovery of Archimedes' principle. Pewter The term pewter covers a variety of alloys consisting primarily of tin. As a pure metal, tin is much too soft to use for most practical purposes. However, during the Bronze Age, tin was a rare metal in many parts of Europe and the Mediterranean, so it was often valued higher than gold. To make jewellery, cutlery, or other objects from tin, workers usually alloyed it with other metals to increase strength and hardness. These metals were typically lead, antimony, bismuth or copper. These solutes were sometimes added individually in varying amounts, or added together, making a wide variety of objects, ranging from practical items such as dishes, surgical tools, candlesticks or funnels, to decorative items like ear rings and hair clips. The earliest examples of pewter come from ancient Egypt, around 1450 BC. The use of pewter was widespread across Europe, from France to Norway and Britain (where most of the ancient tin was mined) to the Near East. The alloy was also used in China and the Far East, arriving in Japan around 800 AD, where it was used for making objects like ceremonial vessels, tea canisters, or chalices used in shinto shrines. Iron The first known smelting of iron began in Anatolia, around 1800 BC. Called the bloomery process, it produced very soft but ductile wrought iron. By 800 BC, iron-making technology had spread to Europe, arriving in Japan around 700 AD. 
Pig iron, a very hard but brittle alloy of iron and carbon, was being produced in China as early as 1200 BC, but did not arrive in Europe until the Middle Ages. Pig iron has a lower melting point than iron, and was used for making cast-iron. However, these metals found little practical use until the introduction of crucible steel around 300 BC. These steels were of poor quality, and the introduction of pattern welding, around the 1st century AD, sought to balance the extreme properties of the alloys by laminating them, to create a tougher metal. Around 700 AD, the Japanese began folding bloomery-steel and cast-iron in alternating layers to increase the strength of their swords, using clay fluxes to remove slag and impurities. This method of Japanese swordsmithing produced one of the purest steel-alloys of the ancient world. While the use of iron started to become more widespread around 1200 BC, mainly because of interruptions in the trade routes for tin, the metal was much softer than bronze. However, very small amounts of steel, (an alloy of iron and around 1% carbon), was always a byproduct of the bloomery process. The ability to modify the hardness of steel by heat treatment had been known since 1100 BC, and the rare material was valued for the manufacture of tools and weapons. Because the ancients could not produce temperatures high enough to melt iron fully, the production of steel in decent quantities did not occur until the introduction of blister steel during the Middle Ages. This method introduced carbon by heating wrought iron in charcoal for long periods of time, but the absorption of carbon in this manner is extremely slow thus the penetration was not very deep, so the alloy was not homogeneous. In 1740, Benjamin Huntsman began melting blister steel in a crucible to even out the carbon content, creating the first process for the mass production of tool steel. Huntsman's process was used for manufacturing tool steel until the early 1900s. The introduction of the blast furnace to Europe in the Middle Ages meant that people could produce pig iron in much higher volumes than wrought iron. Because pig iron could be melted, people began to develop processes to reduce carbon in liquid pig iron to create steel. Puddling had been used in China since the first century, and was introduced in Europe during the 1700s, where molten pig iron was stirred while exposed to the air, to remove the carbon by oxidation. In 1858, Henry Bessemer developed a process of steel-making by blowing hot air through liquid pig iron to reduce the carbon content. The Bessemer process led to the first large scale manufacture of steel. Steel is an alloy of iron and carbon, but the term alloy steel usually only refers to steels that contain other elements— like vanadium, molybdenum, or cobalt—in amounts sufficient to alter the properties of the base steel. Since ancient times, when steel was used primarily for tools and weapons, the methods of producing and working the metal were often closely guarded secrets. Even long after the Age of Enlightenment, the steel industry was very competitive and manufacturers went through great lengths to keep their processes confidential, resisting any attempts to scientifically analyze the material for fear it would reveal their methods. For example, the people of Sheffield, a center of steel production in England, were known to routinely bar visitors and tourists from entering town to deter industrial espionage. 
Thus, almost no metallurgical information existed about steel until 1860. Because of this lack of understanding, steel was not generally considered an alloy until the decades between 1930 and 1970 (primarily due to the work of scientists like William Chandler Roberts-Austen, Adolf Martens, and Edgar Bain), so "alloy steel" became the popular term for ternary and quaternary steel-alloys. After Benjamin Huntsman developed his crucible steel in 1740, he began experimenting with the addition of elements like manganese (in the form of a high-manganese pig-iron called spiegeleisen), which helped remove impurities such as phosphorus and oxygen; a process adopted by Bessemer and still used in modern steels (albeit in concentrations low enough to still be considered carbon steel). Afterward, many people began experimenting with various alloys of steel without much success. However, in 1882, Robert Hadfield, being a pioneer in steel metallurgy, took an interest and produced a steel alloy containing around 12% manganese. Called mangalloy, it exhibited extreme hardness and toughness, becoming the first commercially viable alloy-steel. Afterward, he created silicon steel, launching the search for other possible alloys of steel. Robert Forester Mushet found that by adding tungsten to steel it could produce a very hard edge that would resist losing its hardness at high temperatures. "R. Mushet's special steel" (RMS) became the first high-speed steel. Mushet's steel was quickly replaced by tungsten carbide steel, developed by Taylor and White in 1900, in which they doubled the tungsten content and added small amounts of chromium and vanadium, producing a superior steel for use in lathes and machining tools. In 1903, the Wright brothers used a chromium-nickel steel to make the crankshaft for their airplane engine, while in 1908 Henry Ford began using vanadium steels for parts like crankshafts and valves in his Model T Ford, due to their higher strength and resistance to high temperatures. In 1912, the Krupp Ironworks in Germany developed a rust-resistant steel by adding 21% chromium and 7% nickel, producing the first stainless steel. Others Due to their high reactivity, most metals were not discovered until the 19th century. A method for extracting aluminium from bauxite was proposed by Humphry Davy in 1807, using an electric arc. Although his attempts were unsuccessful, by 1855 the first sales of pure aluminium reached the market. However, as extractive metallurgy was still in its infancy, most aluminium extraction-processes produced unintended alloys contaminated with other elements found in the ore; the most abundant of which was copper. These aluminium-copper alloys (at the time termed "aluminum bronze") preceded pure aluminium, offering greater strength and hardness over the soft, pure metal, and to a slight degree were found to be heat treatable. However, due to their softness and limited hardenability these alloys found little practical use, and were more of a novelty, until the Wright brothers used an aluminium alloy to construct the first airplane engine in 1903. During the time between 1865 and 1910, processes for extracting many other metals were discovered, such as chromium, vanadium, tungsten, iridium, cobalt, and molybdenum, and various alloys were developed. Prior to 1910, research mainly consisted of private individuals tinkering in their own laboratories. 
However, as the aircraft and automotive industries began growing, research into alloys became an industrial effort in the years following 1910, as new magnesium alloys were developed for pistons and wheels in cars, pot metal was developed for levers and knobs, and aluminium alloys developed for airframes and aircraft skins were put into use. The Doehler Die Casting Co. of Toledo, Ohio, was known for the production of Brastil, a high-tensile, corrosion-resistant bronze alloy.
Physical sciences
Chemistry
null
1196
https://en.wikipedia.org/wiki/Angle
Angle
In Euclidean geometry, an angle or plane angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Two intersecting curves may also define an angle, which is the angle of the rays lying tangent to the respective curves at their point of intersection. Angles are also formed by the intersection of two planes; these are called dihedral angles. In any case, the resulting angle lies in a plane (spanned by the two rays or perpendicular to the line of plane-plane intersection). The magnitude of an angle is called an angular measure or simply "angle". Two different angles may have the same measure, as in an isosceles triangle. "Angle" also denotes the angular sector, the infinite region of the plane bounded by the sides of an angle. Angle of rotation is a measure conventionally defined as the ratio of a circular arc length to its radius, and may be a negative number. In the case of an ordinary angle, the arc is centered at the vertex and delimited by the sides. In the case of an angle of rotation, the arc is centered at the center of the rotation and delimited by any other point and its image after the rotation. History and etymology The word angle comes from the Latin word angulus, meaning "corner". Cognate words include the Greek ἀγκύλος (ankylos), meaning "crooked, curved", and the English word "ankle". Both are connected with the Proto-Indo-European root *ank-, meaning "to bend" or "bow". Euclid defines a plane angle as the inclination to each other, in a plane, of two lines that meet each other and do not lie straight with respect to each other. According to the Neoplatonic metaphysician Proclus, an angle must be either a quality, a quantity, or a relationship. The first concept, angle as quality, was used by Eudemus of Rhodes, who regarded an angle as a deviation from a straight line; the second, angle as quantity, by Carpus of Antioch, who regarded it as the interval or space between the intersecting lines; Euclid adopted the third: angle as a relationship. Identifying angles In mathematical expressions, it is common to use Greek letters (α, β, γ, θ, φ, . . . ) as variables denoting the size of some angle (the symbol π is typically not used for this purpose to avoid confusion with the constant denoted by that symbol). Lower case Roman letters (a, b, c, . . . ) are also used. In contexts where this is not confusing, an angle may be denoted by the upper case Roman letter denoting its vertex. See the figures in this article for examples. The three defining points may also identify angles in geometric figures. For example, the angle with vertex A formed by the rays AB and AC (that is, the half-lines from point A through points B and C) is denoted ∠BAC or ∠CAB. Where there is no risk of confusion, the angle may sometimes be referred to by a single vertex alone (in this case, "angle A"). In other contexts, an angle denoted as, say, ∠BAC might refer to any of four angles: the clockwise angle from B to C about A, the anticlockwise angle from B to C about A, the clockwise angle from C to B about A, or the anticlockwise angle from C to B about A, where the direction in which the angle is measured determines its sign (see Signed angles below). However, in many geometrical situations, it is evident from the context that the positive angle less than or equal to 180 degrees is meant, and in these cases, no ambiguity arises.
Otherwise, to avoid ambiguity, specific conventions may be adopted so that, for instance, ∠BAC always refers to the anticlockwise (positive) angle from B to C about A and ∠CAB to the anticlockwise (positive) angle from C to B about A. Types Individual angles There is some common terminology for angles, whose measure is always non-negative (see Signed angles below): An angle equal to 0° or not turned is called a zero angle. An angle smaller than a right angle (less than 90°) is called an acute angle ("acute" meaning "sharp"). An angle equal to 1/4 turn (90° or π/2 radians) is called a right angle. Two lines that form a right angle are said to be normal, orthogonal, or perpendicular. An angle larger than a right angle and smaller than a straight angle (between 90° and 180°) is called an obtuse angle ("obtuse" meaning "blunt"). An angle equal to 1/2 turn (180° or π radians) is called a straight angle. An angle larger than a straight angle but less than 1 turn (between 180° and 360°) is called a reflex angle. An angle equal to 1 turn (360° or 2π radians) is called a full angle, complete angle, round angle or perigon. An angle that is not a multiple of a right angle is called an oblique angle. The names, intervals, and measuring units are shown in the table below: Vertical and angle pairs When two straight lines intersect at a point, four angles are formed. Pairwise, these angles are named according to their location relative to each other. A transversal is a line that intersects a pair of (often parallel) lines and is associated with exterior angles, interior angles, alternate exterior angles, alternate interior angles, corresponding angles, and consecutive interior angles. Combining angle pairs The angle addition postulate states that if B is in the interior of angle AOC, then m∠AOC = m∠AOB + m∠BOC; that is, the measure of the angle AOC is the sum of the measure of angle AOB and the measure of angle BOC. Three special angle pairs involve the summation of angles: complementary angles (whose measures sum to a right angle), supplementary angles (summing to a straight angle), and explementary or conjugate angles (summing to a full turn). Polygon-related angles An angle that is part of a simple polygon is called an interior angle if it lies on the inside of that simple polygon. A simple concave polygon has at least one interior angle that is a reflex angle. In Euclidean geometry, the measures of the interior angles of a triangle add up to π radians, 180°, or 1/2 turn; the measures of the interior angles of a simple convex quadrilateral add up to 2π radians, 360°, or 1 turn. In general, the measures of the interior angles of a simple convex polygon with n sides add up to (n − 2)π radians, or (n − 2)180 degrees, (n − 2)2 right angles, or (n − 2)/2 turn. The supplement of an interior angle is called an exterior angle; that is, an interior angle and an exterior angle form a linear pair of angles. There are two exterior angles at each vertex of the polygon, each determined by extending one of the two sides of the polygon that meet at the vertex; these two angles are vertical and hence are equal. An exterior angle measures the amount of rotation one must make at a vertex to trace the polygon. If the corresponding interior angle is a reflex angle, the exterior angle should be considered negative. Even in a non-simple polygon, it may be possible to define the exterior angle. Still, one will have to pick an orientation of the plane (or surface) to decide the sign of the exterior angle measure. In Euclidean geometry, the sum of the exterior angles of a simple convex polygon, if only one of the two exterior angles is assumed at each vertex, will be one full turn (360°). The exterior angle here could be called a supplementary exterior angle.
Exterior angles are commonly used in Logo Turtle programs when drawing regular polygons. In a triangle, the bisectors of two exterior angles and the bisector of the other interior angle are concurrent (meet at a single point). In a triangle, three intersection points, each of an external angle bisector with the opposite extended side, are collinear. In a triangle, three intersection points, two between an interior angle bisector and the opposite side, and the third between the other exterior angle bisector and the opposite side extended, are collinear. Some authors use the name exterior angle of a simple polygon to mean the explement exterior angle (not the supplement) of the interior angle. This conflicts with the above usage. Plane-related angles The angle between two planes (such as two adjacent faces of a polyhedron) is called a dihedral angle. It may be defined as the acute angle between two lines normal to the planes. The angle between a plane and an intersecting straight line is complementary to the angle between the intersecting line and the normal to the plane. Measuring angles The size of a geometric angle is usually characterized by the magnitude of the smallest rotation that maps one of the rays into the other. Angles of the same size are said to be equal, congruent, or equal in measure. In some contexts, such as identifying a point on a circle or describing the orientation of an object in two dimensions relative to a reference orientation, angles that differ by an exact multiple of a full turn are effectively equivalent. In other contexts, such as identifying a point on a spiral curve or describing an object's cumulative rotation in two dimensions relative to a reference orientation, angles that differ by a non-zero multiple of a full turn are not equivalent. To measure an angle θ, a circular arc centered at the vertex of the angle is drawn, e.g., with a pair of compasses. The ratio of the length s of the arc to the radius r of the circle is the number of radians in the angle: θ = s/r. Conventionally, in mathematics and the SI, the radian is treated as being equal to the dimensionless unit 1, thus being normally omitted. The angle expressed in another angular unit may then be obtained by multiplying the angle by a suitable conversion constant of the form k/2π, where k is the measure of a complete turn expressed in the chosen unit (for example, k = 360° for degrees or k = 400 grad for gradians). The value of θ thus defined is independent of the size of the circle: if the length of the radius is changed, then the arc length changes in the same proportion, so the ratio s/r is unaltered. Units Throughout history, angles have been measured in various units. These are known as angular units, with the most contemporary units being the degree ( ° ), the radian (rad), and the gradian (grad), though many others have been used throughout history. Most units of angular measurement are defined such that one turn (i.e., the angle subtended by the circumference of a circle at its centre) is equal to n units, for some whole number n. Two exceptions are the radian (and its decimal submultiples) and the diameter part. In the International System of Quantities, an angle is defined as a dimensionless quantity, and in particular, the radian unit is dimensionless. This convention impacts how angles are treated in dimensional analysis. The following table lists some units used to represent angles. 
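As a concrete illustration of the arc-ratio definition and the k/2π conversion factor described above, here is a minimal Python sketch; the function names are illustrative only.

import math

def angle_from_arc(arc_length: float, radius: float) -> float:
    """Angle in radians subtended at the centre of a circle: theta = s / r."""
    return arc_length / radius

def convert_angle(theta_rad: float, k: float) -> float:
    """Convert an angle in radians to a unit in which one full turn measures k
    (k = 360 for degrees, 400 for gradians, 1 for turns)."""
    return theta_rad * k / (2 * math.pi)

theta = angle_from_arc(arc_length=3.0, radius=2.0)  # 1.5 rad, regardless of the circle's size
print(convert_angle(theta, 360))  # about 85.94 degrees
print(convert_angle(theta, 400))  # about 95.49 gradians
print(convert_angle(theta, 1))    # about 0.2387 turn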
Dimensional analysis Signed angles It is frequently helpful to impose a convention that allows positive and negative angular values to represent orientations and/or rotations in opposite directions or "sense" relative to some reference. In a two-dimensional Cartesian coordinate system, an angle is typically defined by its two sides, with its vertex at the origin. The initial side is on the positive x-axis, while the other side or terminal side is defined by the measure from the initial side in radians, degrees, or turns, with positive angles representing rotations toward the positive y-axis and negative angles representing rotations toward the negative y-axis. When Cartesian coordinates are represented by standard position, defined by the x-axis rightward and the y-axis upward, positive rotations are anticlockwise, and negative cycles are clockwise. In many contexts, an angle of −θ is effectively equivalent to an angle of "one full turn minus θ". For example, an orientation represented as −45° is effectively equal to an orientation defined as 360° − 45° or 315°. Although the final position is the same, a physical rotation (movement) of −45° is not the same as a rotation of 315° (for example, the rotation of a person holding a broom resting on a dusty floor would leave visually different traces of swept regions on the floor). In three-dimensional geometry, "clockwise" and "anticlockwise" have no absolute meaning, so the direction of positive and negative angles must be defined in terms of an orientation, which is typically determined by a normal vector passing through the angle's vertex and perpendicular to the plane in which the rays of the angle lie. In navigation, bearings or azimuth are measured relative to north. By convention, viewed from above, bearing angles are positive clockwise, so a bearing of 45° corresponds to a north-east orientation. Negative bearings are not used in navigation, so a north-west orientation corresponds to a bearing of 315°. Equivalent angles Angles that have the same measure (i.e., the same magnitude) are said to be equal or congruent. An angle is defined by its measure and is not dependent upon the lengths of the sides of the angle (e.g., all right angles are equal in measure). Two angles that share terminal sides, but differ in size by an integer multiple of a turn, are called coterminal angles. The reference angle (sometimes called related angle) for any angle θ in standard position is the positive acute angle between the terminal side of θ and the x-axis (positive or negative). Procedurally, the magnitude of the reference angle for a given angle may determined by taking the angle's magnitude modulo turn, 180°, or radians, then stopping if the angle is acute, otherwise taking the supplementary angle, 180° minus the reduced magnitude. For example, an angle of 30 degrees is already a reference angle, and an angle of 150 degrees also has a reference angle of 30 degrees (180° − 150°). Angles of 210° and 510° correspond to a reference angle of 30 degrees as well (210° mod 180° = 30°, 510° mod 180° = 150° whose supplementary angle is 30°). Related quantities For an angular unit, it is definitional that the angle addition postulate holds. Some quantities related to angles where the angle addition postulate does not hold include: The slope or gradient is equal to the tangent of the angle; a gradient is often expressed as a percentage. 
For very small values (less than 5%), the slope of a line is approximately the measure in radians of its angle with the horizontal direction. The spread between two lines is defined in rational geometry as the square of the sine of the angle between the lines. As the sine of an angle and the sine of its supplementary angle are the same, any angle of rotation that maps one of the lines into the other leads to the same value for the spread between the lines. Although done rarely, one can report the direct results of trigonometric functions, such as the sine of the angle. Angles between curves The angle between a line and a curve (mixed angle) or between two intersecting curves (curvilinear angle) is defined to be the angle between the tangents at the point of intersection. Various names (now rarely, if ever, used) have been given to particular cases:—amphicyrtic (Gr. , on both sides, κυρτός, convex) or cissoidal (Gr. κισσός, ivy), biconvex; xystroidal or sistroidal (Gr. ξυστρίς, a tool for scraping), concavo-convex; amphicoelic (Gr. κοίλη, a hollow) or angulus lunularis, biconcave. Bisecting and trisecting angles The ancient Greek mathematicians knew how to bisect an angle (divide it into two angles of equal measure) using only a compass and straightedge but could only trisect certain angles. In 1837, Pierre Wantzel showed that this construction could not be performed for most angles. Dot product and generalisations In the Euclidean space, the angle θ between two Euclidean vectors u and v is related to their dot product and their lengths by the formula This formula supplies an easy method to find the angle between two planes (or curved surfaces) from their normal vectors and between skew lines from their vector equations. Inner product To define angles in an abstract real inner product space, we replace the Euclidean dot product ( · ) by the inner product , i.e. In a complex inner product space, the expression for the cosine above may give non-real values, so it is replaced with or, more commonly, using the absolute value, with The latter definition ignores the direction of the vectors. It thus describes the angle between one-dimensional subspaces and spanned by the vectors and correspondingly. Angles between subspaces The definition of the angle between one-dimensional subspaces and given by in a Hilbert space can be extended to subspaces of finite dimensions. Given two subspaces , with , this leads to a definition of angles called canonical or principal angles between subspaces. Angles in Riemannian geometry In Riemannian geometry, the metric tensor is used to define the angle between two tangents. Where U and V are tangent vectors and gij are the components of the metric tensor G, Hyperbolic angle A hyperbolic angle is an argument of a hyperbolic function just as the circular angle is the argument of a circular function. The comparison can be visualized as the size of the openings of a hyperbolic sector and a circular sector since the areas of these sectors correspond to the angle magnitudes in each case. Unlike the circular angle, the hyperbolic angle is unbounded. When the circular and hyperbolic functions are viewed as infinite series in their angle argument, the circular ones are just alternating series forms of the hyperbolic functions. This comparison of the two series corresponding to functions of angles was described by Leonhard Euler in Introduction to the Analysis of the Infinite (1748). 
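The dot-product relation above is easy to apply numerically. The short Python sketch below (standard library only; names are illustrative) recovers the angle between two Euclidean vectors from cos θ = u · v / (|u| |v|), and hence the dihedral angle between two planes via their normal vectors.

import math

def angle_between(u, v) -> float:
    """Angle in radians between two Euclidean vectors, from cos(theta) = (u . v) / (|u| |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    cos_theta = max(-1.0, min(1.0, dot / (norm_u * norm_v)))  # clamp against floating-point overshoot
    return math.acos(cos_theta)

print(math.degrees(angle_between((1, 0, 0), (1, 1, 0))))  # 45.0
# Dihedral angle between the planes z = 0 and x = 0, computed from their normal vectors:
print(math.degrees(angle_between((0, 0, 1), (1, 0, 0))))  # 90.0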
Angles in geography and astronomy In geography, the location of any point on the Earth can be identified using a geographic coordinate system. This system specifies the latitude and longitude of any location in terms of angles subtended at the center of the Earth, using the equator and (usually) the Greenwich meridian as references. In astronomy, a given point on the celestial sphere (that is, the apparent position of an astronomical object) can be identified using any of several astronomical coordinate systems, where the references vary according to the particular system. Astronomers measure the angular separation of two stars by imagining two lines through the center of the Earth, each intersecting one of the stars. The angle between those lines and the angular separation between the two stars can be measured. In both geography and astronomy, a sighting direction can be specified in terms of a vertical angle such as altitude /elevation with respect to the horizon as well as the azimuth with respect to north. Astronomers also measure objects' apparent size as an angular diameter. For example, the full moon has an angular diameter of approximately 0.5° when viewed from Earth. One could say, "The Moon's diameter subtends an angle of half a degree." The small-angle formula can convert such an angular measurement into a distance/size ratio. Other astronomical approximations include: 0.5° is the approximate diameter of the Sun and of the Moon as viewed from Earth. 1° is the approximate width of the little finger at arm's length. 10° is the approximate width of a closed fist at arm's length. 20° is the approximate width of a handspan at arm's length. These measurements depend on the individual subject, and the above should be treated as rough rule of thumb approximations only. In astronomy, right ascension and declination are usually measured in angular units, expressed in terms of time, based on a 24-hour day.
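The small-angle relation mentioned above (physical size ≈ angular size in radians × distance) can be checked with rough numbers; the Moon figures below are rounded approximations, not precise ephemeris values, and the function name is illustrative.

import math

def size_from_angular_diameter(angular_deg: float, distance: float) -> float:
    """Small-angle approximation: physical size ~ angular size (in radians) * distance."""
    return math.radians(angular_deg) * distance

# The Moon subtends roughly 0.5 degrees at a distance of roughly 384,000 km.
print(size_from_angular_diameter(0.5, 384_000))  # about 3,350 km (the actual lunar diameter is about 3,474 km)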
Mathematics
Geometry and topology
null
1198
https://en.wikipedia.org/wiki/Acoustics
Acoustics
Acoustics is a branch of physics that deals with the study of mechanical waves in gases, liquids, and solids including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio and noise control industries. Hearing is one of the most crucial means of survival in the animal world and speech is one of the most distinctive characteristics of human development and culture. Accordingly, the science of acoustics spreads across many facets of human society—music, medicine, architecture, industrial production, warfare and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's "Wheel of Acoustics" is a well-accepted overview of the various fields in acoustics. History Etymology The word "acoustic" is derived from the Greek word ἀκουστικός (akoustikos), meaning "of or for hearing, ready to hear" and that from ἀκουστός (akoustos), "heard, audible", which in turn derives from the verb ἀκούω(akouo), "I hear". The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics and later a branch of acoustics. Frequencies above and below the audible range are called "ultrasonic" and "infrasonic", respectively. Early research in acoustics In the 6th century BC, the ancient Greek philosopher Pythagoras wanted to know why some combinations of musical sounds seemed more beautiful than others, and he found answers in terms of numerical ratios representing the harmonic overtone series on a string. He is reputed to have observed that when the lengths of vibrating strings are expressible as ratios of integers (e.g. 2 to 3, 3 to 4), the tones produced will be harmonious, and the smaller the integers the more harmonious the sounds. For example, a string of a certain length would sound particularly harmonious with a string of twice the length (other factors being equal). In modern parlance, if a string sounds the note C when plucked, a string twice as long will sound a C an octave lower. In one system of musical tuning, the tones in between are then given by 16:9 for D, 8:5 for E, 3:2 for F, 4:3 for G, 6:5 for A, and 16:15 for B, in ascending order. Aristotle (384–322 BC) understood that sound consisted of compressions and rarefactions of air which "falls upon and strikes the air which is next to it...", a very good expression of the nature of wave motion. On Things Heard, generally ascribed to Strato of Lampsacus, states that the pitch is related to the frequency of vibrations of the air and to the speed of sound. In about 20 BC, the Roman architect and engineer Vitruvius wrote a treatise on the acoustic properties of theaters including discussion of interference, echoes, and reverberation—the beginnings of architectural acoustics. In Book V of his (The Ten Books of Architecture) Vitruvius describes sound as a wave comparable to a water wave extended to three dimensions, which, when interrupted by obstructions, would flow back and break up following waves. 
He described the ascending seats in ancient theaters as designed to prevent this deterioration of sound and also recommended bronze vessels (echea) of appropriate sizes be placed in theaters to resonate with the fourth, fifth and so on, up to the double octave, in order to resonate with the more desirable, harmonious notes. During the Islamic golden age, Abū Rayhān al-Bīrūnī (973–1048) is believed to have postulated that the speed of sound was much slower than the speed of light. The physical understanding of acoustical processes advanced rapidly during and after the Scientific Revolution. Mainly Galileo Galilei (1564–1642) but also Marin Mersenne (1588–1648), independently, discovered the complete laws of vibrating strings (completing what Pythagoras and Pythagoreans had started 2000 years earlier). Galileo wrote "Waves are produced by the vibrations of a sonorous body, which spread through the air, bringing to the tympanum of the ear a stimulus which the mind interprets as sound", a remarkable statement that points to the beginnings of physiological and psychological acoustics. Experimental measurements of the speed of sound in air were carried out successfully between 1630 and 1680 by a number of investigators, prominently Mersenne. Meanwhile, Newton (1642–1727) derived the relationship for wave velocity in solids, a cornerstone of physical acoustics (Principia, 1687). Age of Enlightenment and onward Substantial progress in acoustics, resting on firmer mathematical and physical concepts, was made during the eighteenth century by Euler (1707–1783), Lagrange (1736–1813), and d'Alembert (1717–1783). During this era, continuum physics, or field theory, began to receive a definite mathematical structure. The wave equation emerged in a number of contexts, including the propagation of sound in air. In the nineteenth century the major figures of mathematical acoustics were Helmholtz in Germany, who consolidated the field of physiological acoustics, and Lord Rayleigh in England, who combined the previous knowledge with his own copious contributions to the field in his monumental work The Theory of Sound (1877). Also in the 19th century, Wheatstone, Ohm, and Henry developed the analogy between electricity and acoustics. The twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place. The first such application was Sabine's groundbreaking work in architectural acoustics, and many others followed. Underwater acoustics was used for detecting submarines in the first World War. Sound recording and the telephone played important roles in a global transformation of society. Sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing. The ultrasonic frequency range enabled wholly new kinds of application in medicine and industry. New kinds of transducers (generators and receivers of acoustic energy) were invented and put to use. Definition Acoustics is defined by ANSI/ASA S1.1-2013 as "(a) Science of sound, including its production, transmission, and effects, including biological and psychological effects. (b) Those qualities of a room that, together, determine its character with respect to auditory effects." The study of acoustics revolves around the generation, propagation and reception of mechanical waves and vibrations. The steps shown in the above diagram can be found in any acoustical event or process. There are many kinds of cause, both natural and volitional. 
There are many kinds of transduction process that convert energy from some other form into sonic energy, producing a sound wave. There is one fundamental equation that describes sound wave propagation, the acoustic wave equation, but the phenomena that emerge from it are varied and often complex. The wave carries energy throughout the propagating medium. Eventually this energy is transduced again into other forms, in ways that again may be natural and/or volitionally contrived. The final effect may be purely physical or it may reach far into the biological or volitional domains. The five basic steps are found equally well whether we are talking about an earthquake, a submarine using sonar to locate its foe, or a band playing in a rock concert. The central stage in the acoustical process is wave propagation. This falls within the domain of physical acoustics. In fluids, sound propagates primarily as a pressure wave. In solids, mechanical waves can take many forms including longitudinal waves, transverse waves and surface waves. Acoustics looks first at the pressure levels and frequencies in the sound wave and how the wave interacts with the environment. This interaction can be described as either a diffraction, interference or a reflection or a mix of the three. If several media are present, a refraction can also occur. Transduction processes are also of special importance to acoustics. Fundamental concepts Wave propagation: pressure levels In fluids such as air and water, sound waves propagate as disturbances in the ambient pressure level. While this disturbance is usually small, it is still noticeable to the human ear. The smallest sound that a person can hear, known as the threshold of hearing, is nine orders of magnitude smaller than the ambient pressure. The loudness of these disturbances is related to the sound pressure level (SPL) which is measured on a logarithmic scale in decibels. Wave propagation: frequency Physicists and acoustic engineers tend to discuss sound pressure levels in terms of frequencies, partly because this is how our ears interpret sound. What we experience as "higher pitched" or "lower pitched" sounds are pressure vibrations having a higher or lower number of cycles per second. In a common technique of acoustic measurement, acoustic signals are sampled in time, and then presented in more meaningful forms such as octave bands or time frequency plots. Both of these popular methods are used to analyze sound and better understand the acoustic phenomenon. The entire spectrum can be divided into three sections: audio, ultrasonic, and infrasonic. The audio range falls between 20 Hz and 20,000 Hz. This range is important because its frequencies can be detected by the human ear. This range has a number of applications, including speech communication and music. The ultrasonic range refers to the very high frequencies: 20,000 Hz and higher. This range has shorter wavelengths which allow better resolution in imaging technologies. Medical applications such as ultrasonography and elastography rely on the ultrasonic frequency range. On the other end of the spectrum, the lowest frequencies are known as the infrasonic range. These frequencies can be used to study geological phenomena such as earthquakes. Analytic instruments such as the spectrum analyzer facilitate visualization and measurement of acoustic signals and their properties. 
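A minimal sketch of the two quantities just described: sound pressure level on the decibel scale, computed here against the conventional 20 µPa reference pressure in air (that reference value is standard practice rather than something stated above), and a coarse classification of a frequency into the infrasonic, audible, and ultrasonic ranges quoted in the text. The function names are illustrative.

import math

P_REF = 20e-6  # conventional reference pressure in air, 20 micropascals

def sound_pressure_level(p_rms: float) -> float:
    """Sound pressure level in decibels: SPL = 20 * log10(p / p_ref)."""
    return 20.0 * math.log10(p_rms / P_REF)

def classify_frequency(f_hz: float) -> str:
    """Coarse band classification using the 20 Hz - 20,000 Hz audio range."""
    if f_hz < 20:
        return "infrasonic"
    if f_hz <= 20_000:
        return "audible"
    return "ultrasonic"

print(sound_pressure_level(20e-6))  # 0 dB, at the reference pressure
print(sound_pressure_level(1.0))    # about 94 dB for a sound pressure of 1 Pa
print(classify_frequency(5), classify_frequency(440), classify_frequency(3e6))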
The spectrogram produced by such an instrument is a graphical display of the time varying pressure level and frequency profiles which give a specific acoustic signal its defining character. Transduction in acoustics A transducer is a device for converting one form of energy into another. In an electroacoustic context, this means converting sound energy into electrical energy (or vice versa). Electroacoustic transducers include loudspeakers, microphones, particle velocity sensors, hydrophones and sonar projectors. These devices convert a sound wave to or from an electric signal. The most widely used transduction principles are electromagnetism, electrostatics and piezoelectricity. The transducers in most common loudspeakers (e.g. woofers and tweeters), are electromagnetic devices that generate waves using a suspended diaphragm driven by an electromagnetic voice coil, sending off pressure waves. Electret microphones and condenser microphones employ electrostatics—as the sound wave strikes the microphone's diaphragm, it moves and induces a voltage change. The ultrasonic systems used in medical ultrasonography employ piezoelectric transducers. These are made from special ceramics in which mechanical vibrations and electrical fields are interlinked through a property of the material itself. Acoustician An acoustician is an expert in the science of sound. Education There are many types of acoustician, but they usually have a Bachelor's degree or higher qualification. Some possess a degree in acoustics, while others enter the discipline via studies in fields such as physics or engineering. Much work in acoustics requires a good grounding in Mathematics and science. Many acoustic scientists work in research and development. Some conduct basic research to advance our knowledge of the perception (e.g. hearing, psychoacoustics or neurophysiology) of speech, music and noise. Other acoustic scientists advance understanding of how sound is affected as it moves through environments, e.g. underwater acoustics, architectural acoustics or structural acoustics. Other areas of work are listed under subdisciplines below. Acoustic scientists work in government, university and private industry laboratories. Many go on to work in Acoustical Engineering. Some positions, such as Faculty (academic staff) require a Doctor of Philosophy. Subdisciplines Archaeoacoustics Archaeoacoustics, also known as the archaeology of sound, is one of the only ways to experience the past with senses other than our eyes. Archaeoacoustics is studied by testing the acoustic properties of prehistoric sites, including caves. Iegor Rezkinoff, a sound archaeologist, studies the acoustic properties of caves through natural sounds like humming and whistling. Archaeological theories of acoustics are focused around ritualistic purposes as well as a way of echolocation in the caves. In archaeology, acoustic sounds and rituals directly correlate as specific sounds were meant to bring ritual participants closer to a spiritual awakening. Parallels can also be drawn between cave wall paintings and the acoustic properties of the cave; they are both dynamic. Because archaeoacoustics is a fairly new archaeological subject, acoustic sound is still being tested in these prehistoric sites today. Aeroacoustics Aeroacoustics is the study of noise generated by air movement, for instance via turbulence, and the movement of sound through the fluid air. 
This knowledge was applied in the 1920s and '30s to detect aircraft before radar was invented and is applied in acoustical engineering to study how to quieten aircraft. Aeroacoustics is important for understanding how wind musical instruments work. Acoustic signal processing Acoustic signal processing is the electronic manipulation of acoustic signals. Applications include: active noise control; design for hearing aids or cochlear implants; echo cancellation; music information retrieval, and perceptual coding (e.g. MP3 or Opus). Architectural acoustics Architectural acoustics (also known as building acoustics) involves the scientific understanding of how to achieve good sound within a building. It typically involves the study of speech intelligibility, speech privacy, music quality, and vibration reduction in the built environment. Commonly studied environments are hospitals, classrooms, dwellings, performance venues, recording and broadcasting studios. Focus considerations include room acoustics, airborne and impact transmission in building structures, airborne and structure-borne noise control, noise control of building systems and electroacoustic systems. Bioacoustics Bioacoustics is the scientific study of the hearing and calls of animal calls, as well as how animals are affected by the acoustic and sounds of their habitat. Electroacoustics This subdiscipline is concerned with the recording, manipulation and reproduction of audio using electronics. This might include products such as mobile phones, large scale public address systems or virtual reality systems in research laboratories. Environmental noise and soundscapes Environmental acoustics is the study of noise and vibrations, and their impact on structures, objects, humans, and animals. The main aim of these studies is to reduce levels of environmental noise and vibration. Typical work and research within environmental acoustics concerns the development of models used in simulations, measurement techniques, noise mitigation strategies, and the development of standards and regulations. Research work now also has a focus on the positive use of sound in urban environments: soundscapes and tranquility. Examples of noise and vibration sources include railways, road traffic, aircraft, industrial equipment and recreational activities. Musical acoustics Musical acoustics is the study of the physics of acoustic instruments; the audio signal processing used in electronic music; the computer analysis of music and composition, and the perception and cognitive neuroscience of music. Psychoacoustics Many studies have been conducted to identify the relationship between acoustics and cognition, or more commonly known as psychoacoustics, in which what one hears is a combination of perception and biological aspects. The information intercepted by the passage of sound waves through the ear is understood and interpreted through the brain, emphasizing the connection between the mind and acoustics. Psychological changes have been seen as brain waves slow down or speed up as a result of varying auditory stimulus which can in turn affect the way one thinks, feels, or even behaves. This correlation can be viewed in normal, everyday situations in which listening to an upbeat or uptempo song can cause one's foot to start tapping or a slower song can leave one feeling calm and serene. In a deeper biological look at the phenomenon of psychoacoustics, it was discovered that the central nervous system is activated by basic acoustical characteristics of music. 
By observing how the central nervous system, which includes the brain and spine, is influenced by acoustics, the pathway in which acoustic affects the mind, and essentially the body, is evident. Speech Acousticians study the production, processing and perception of speech. Speech recognition and Speech synthesis are two important areas of speech processing using computers. The subject also overlaps with the disciplines of physics, physiology, psychology, and linguistics. Structural Vibration and Dynamics Structural acoustics is the study of motions and interactions of mechanical systems with their environments and the methods of their measurement, analysis, and control. There are several sub-disciplines found within this regime: Modal Analysis Material characterization Structural health monitoring Acoustic Metamaterials Friction Acoustics Applications might include: ground vibrations from railways; vibration isolation to reduce vibration in operating theatres; studying how vibration can damage health (vibration white finger); vibration control to protect a building from earthquakes, or measuring how structure-borne sound moves through buildings. Ultrasonics Ultrasonics deals with sounds at frequencies too high to be heard by humans. Specialisms include medical ultrasonics (including medical ultrasonography), sonochemistry, ultrasonic testing, material characterisation and underwater acoustics (sonar). Underwater acoustics Underwater acoustics is the scientific study of natural and man-made sounds underwater. Applications include sonar to locate submarines, underwater communication by whales, climate change monitoring by measuring sea temperatures acoustically, sonic weapons, and marine bioacoustics. Research Professional societies The Acoustical Society of America (ASA) Australian Acoustical Society (AAS) The European Acoustics Association (EAA) Institute of Electrical and Electronics Engineers (IEEE) Institute of Acoustics (IoA UK) The Audio Engineering Society (AES) American Society of Mechanical Engineers, Noise Control and Acoustics Division (ASME-NCAD) International Commission for Acoustics (ICA) American Institute of Aeronautics and Astronautics, Aeroacoustics (AIAA) International Computer Music Association (ICMA) Academic journals Acoustics | An Open Access Journal from MDPI Acoustics Today Acta Acustica united with Acustica Advances in Acoustics and Vibration Applied Acoustics Building Acoustics IEEE Transacions on Ultrasonics, Ferroelectrics, and Frequency Control Journal of the Acoustical Society of America (JASA) Journal of the Acoustical Society of America, Express Letters (JASA-EL) Journal of the Audio Engineering Society Journal of Sound and Vibration (JSV) Journal of Vibration and Acoustics American Society of Mechanical Engineers MDPI Acoustics Noise Control Engineering Journal SAE International Journal of Vehicle Dynamics, Stability and NVH Ultrasonics (journal) Ultrasonics Sonochemistry Wave Motion Conferences InterNoise NoiseCon Forum Acousticum SAE Noise and Vibration Conference and Exhibition
Physical sciences
Waves
null
1200
https://en.wikipedia.org/wiki/Atomic%20physics
Atomic physics
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and the processes by which these arrangements change. This comprises ions, neutral atoms and, unless otherwise stated, it can be assumed that the term atom includes ions. The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei. As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Isolated atoms Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles. While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with very large numbers of atoms. Electronic configuration Electrons form notional shells around the nucleus. These are normally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically ions or other electrons). Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization. If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state. After a certain time, the electron in an excited state will "jump" (undergo a transition) to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved. If an inner electron has absorbed more than the binding energy (so that the atom ionizes), then a more outer electron may undergo a transition to fill the inner orbital. In this case, a visible photon or a characteristic X-ray is emitted, or a phenomenon known as the Auger effect may take place, where the released energy is transferred to another bound electron, causing it to go into the continuum. The Auger effect allows one to multiply ionize an atom with a single photon. 
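As a numerical illustration of the statement that a de-exciting atom emits a photon carrying the energy difference between two levels, the sketch below uses the hydrogen level energies E_n = −13.6 eV / n² (the Bohr-model expression discussed in the next passage) to estimate the photon energy and wavelength for the n = 3 → n = 2 transition. The constants are rounded and the helper names are illustrative; this is a back-of-envelope sketch, not a spectroscopy tool.

HC_EV_NM = 1239.84  # photon energy-wavelength product h*c, in eV*nm (rounded)

def hydrogen_level_ev(n: int) -> float:
    """Bohr-model energy of the hydrogen level with principal quantum number n, in eV."""
    return -13.6 / n**2

def transition(n_upper: int, n_lower: int) -> tuple[float, float]:
    """Photon energy (eV) and wavelength (nm) emitted in the n_upper -> n_lower transition."""
    delta_e = hydrogen_level_ev(n_upper) - hydrogen_level_ev(n_lower)
    return delta_e, HC_EV_NM / delta_e

energy_ev, wavelength_nm = transition(3, 2)
print(energy_ev, wavelength_nm)  # about 1.89 eV and 656 nm: the red Balmer-alpha line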
There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light — however, there are no such rules for excitation by collision processes. Bohr model of the atom The Bohr model, proposed by Niels Bohr in 1913, was a landmark theory describing the structure of the hydrogen atom. It introduced the idea of quantized orbits for electrons, combining classical and quantum physics. Its key postulates are: (1) Electrons move in circular orbits: electrons revolve around the nucleus in fixed, circular paths called orbits or energy levels; these orbits are stable and do not radiate energy. (2) Quantization of angular momentum: the angular momentum of an electron is quantized, L = m_e v r = nℏ for n = 1, 2, 3, ..., where m_e is the mass of the electron, v its velocity, r the radius of the orbit, ℏ = h/2π the reduced Planck constant, and n the principal quantum number labelling the orbit. (3) Energy levels: each orbit has a specific energy; the total energy of an electron in the nth orbit is E_n = −13.6 eV / n², where 13.6 eV is the magnitude of the ground-state energy of the hydrogen atom. (4) Emission or absorption of energy: electrons can transition between orbits by absorbing or emitting energy equal to the difference between the energy levels, ΔE = E_f − E_i = hν, where h is Planck's constant, ν the frequency of the emitted or absorbed radiation, and E_f and E_i the final and initial energy levels. History and developments One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms. The idea forms a part of texts written between the 6th century BC and the 2nd century BC, such as those of Democritus or the Vaiśeṣika Sūtra attributed to Kaṇāda. This theory was later developed in the modern sense of the basic unit of a chemical element by the British chemist and physicist John Dalton in the early 19th century. At this stage, it wasn't clear what atoms were, although they could be described and classified by their properties (in bulk). The invention of the periodic system of elements by Dmitri Mendeleev was another great step forward. The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics. In seeking to explain atomic spectra, an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry (quantum chemistry) and spectroscopy. Since the Second World War, both theoretical and experimental fields have advanced at a rapid pace. This can be attributed to progress in computing technology, which has allowed larger and more sophisticated models of atomic structure and associated collision processes. Similar technological advances in accelerators, detectors, magnetic field generation and lasers have greatly assisted experimental work. Beyond the well-known phenomena that can be described with ordinary quantum mechanics, chaotic processes can occur which require different descriptions. Significant atomic physicists
Physical sciences
Atomic physics
null
1206
https://en.wikipedia.org/wiki/Atomic%20orbital
Atomic orbital
In quantum mechanics, an atomic orbital () is a function describing the location and wave-like behavior of an electron in an atom. This function describes an electron's charge distribution around the atom's nucleus, and can be used to calculate the probability of finding an electron in a specific region around the nucleus. Each orbital in an atom is characterized by a set of values of three quantum numbers , , and , which respectively correspond to electron's energy, its orbital angular momentum, and its orbital angular momentum projected along a chosen axis (magnetic quantum number). The orbitals with a well-defined magnetic quantum number are generally complex-valued. Real-valued orbitals can be formed as linear combinations of and orbitals, and are often labeled using associated harmonic polynomials (e.g., xy, ) which describe their angular structure. An orbital can be occupied by a maximum of two electrons, each with its own projection of spin . The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number and respectively. These names, together with their n values, are used to describe electron configurations of atoms. They are derived from description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for continue alphabetically (g, h, i, k, ...), omitting j because some languages do not distinguish between letters "i" and "j". Atomic orbitals are basic building blocks of the atomic orbital model (or electron cloud or wave mechanics model), a modern framework for visualizing submicroscopic behavior of electrons in matter. In this model, the electron cloud of an atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of blocks of 2, 6, 10, and 14 elements within sections of periodic table arises naturally from total number of electrons that occupy a complete set of s, p, d, and f orbitals, respectively, though for higher values of quantum number , particularly when the atom bears a positive charge, energies of certain sub-shells become very similar and so, order in which they are said to be populated by electrons (e.g., Cr = [Ar]4s13d5 and Cr2+ = [Ar]3d4) can be rationalized only somewhat arbitrarily. Electron properties With the development of quantum mechanics and experimental findings (such as the two slit diffraction of electrons), it was found that the electrons orbiting a nucleus could not be fully described as particles, but needed to be explained by wave–particle duality. In this sense, electrons have the following properties: Wave-like properties: Electrons do not orbit a nucleus in the manner of a planet orbiting a star, but instead exist as standing waves. Thus the lowest possible energy an electron can take is similar to the fundamental frequency of a wave on a string. Higher energy states are similar to harmonics of that fundamental frequency. The electrons are never in a single point location, though the probability of interacting with the electron at a single point can be found from the electron's wave function. The electron's charge acts like it is smeared out in space in a continuous distribution, proportional at any point to the squared magnitude of the electron's wave function. Particle-like properties: The number of electrons orbiting a nucleus can be only an integer. Electrons jump between orbitals like particles. 
For example, if one photon strikes the electrons, only one electron changes state as a result. Electrons retain particle-like properties such as: each wave state has the same electric charge as its electron particle. Each wave state has a single discrete spin (spin up or spin down) depending on its superposition. Thus, electrons cannot be described simply as solid particles. An analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when one electron is present. When more electrons are added, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection ("electron cloud") tends toward a generally spherical zone of probability describing the electron's location, because of the uncertainty principle. One should remember that these orbital 'states', as described here, are merely eigenstates of an electron in its orbit. An actual electron exists in a superposition of states, which is like a weighted average, but with complex number weights. So, for instance, an electron could be in a pure eigenstate (2, 1, 0), or a mixed state (2, 1, 0) + (2, 1, 1), or even the mixed state (2, 1, 0) + (2, 1, 1). For each eigenstate, a property has an eigenvalue. So, for the three states just mentioned, the value of is 2, and the value of is 1. For the second and third states, the value for is a superposition of 0 and 1. As a superposition of states, it is ambiguous—either exactly 0 or exactly 1—not an intermediate or average value like the fraction . A superposition of eigenstates (2, 1, 1) and (3, 2, 1) would have an ambiguous and , but would definitely be 1. Eigenstates make it easier to deal with the math. You can choose a different basis of eigenstates by superimposing eigenstates from any other basis (see Real orbitals below). Formal quantum mechanical definition Atomic orbitals may be defined more precisely in formal quantum mechanical language. They are approximate solutions to the Schrödinger equation for the electrons bound to the atom by the electric field of the atom's nucleus. Specifically, in quantum mechanics, the state of an atom, i.e., an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single electron wave functions. (The London dispersion force, for example, depends on the correlations of the motion of the electrons.) In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. These states are labeled by a set of quantum numbers summarized in the term symbol and usually associated with particular electron configurations, i.e., by occupation schemes of atomic orbitals (for example, 1s2 2s2 2p6 for the ground state of neon-term symbol: 1S0). This notation means that the corresponding Slater determinants have a clear higher weight in the configuration interaction expansion. 
The atomic orbital concept is therefore a key concept for visualizing the excitation process associated with a given transition. For example, one can say for a given transition that it corresponds to the excitation of an electron from an occupied orbital to a given unoccupied orbital. Nevertheless, one has to keep in mind that electrons are fermions ruled by the Pauli exclusion principle and cannot be distinguished from each other. Moreover, it sometimes happens that the configuration interaction expansion converges very slowly and that one cannot speak about simple one-determinant wave function at all. This is the case when electron correlation is large. Fundamentally, an atomic orbital is a one-electron wave function, even though many electrons are not in one-electron atoms, and so the one-electron view is an approximation. When thinking about orbitals, we are often given an orbital visualization heavily influenced by the Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory. Types of orbital Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., atom with one electron). Alternatively, atomic orbitals refer to functions that depend on the coordinates of one electron (i.e., orbitals) but are used as starting points for approximating wave functions that depend on the simultaneous coordinates of all the electrons in an atom or molecule. The coordinate systems chosen for orbitals are usually spherical coordinates in atoms and Cartesian in polyatomic molecules. The advantage of spherical coordinates here is that an orbital wave function is a product of three factors each dependent on a single coordinate: . The angular factors of atomic orbitals generate s, p, d, etc. functions as real combinations of spherical harmonics (where and are quantum numbers). There are typically three mathematical forms for the radial functions  which can be chosen as a starting point for the calculation of the properties of atoms and molecules with many electrons: The hydrogen-like orbitals are derived from the exact solutions of the Schrödinger equation for one electron and a nucleus, for a hydrogen-like atom. The part of the function that depends on distance r from the nucleus has radial nodes and decays as . The Slater-type orbital (STO) is a form without radial nodes but decays from the nucleus as does a hydrogen-like orbital. The form of the Gaussian type orbital (Gaussians) has no radial nodes and decays as . Although hydrogen-like orbitals are still used as pedagogical tools, the advent of computers has made STOs preferable for atoms and diatomic molecules since combinations of STOs can replace the nodes in hydrogen-like orbitals. Gaussians are typically used in molecules with three or more atoms. Although not as accurate by themselves as STOs, combinations of many Gaussians can attain the accuracy of hydrogen-like orbitals. History The term orbital was introduced by Robert S. Mulliken in 1932 as short for one-electron orbital wave function. Niels Bohr explained around 1913 that electrons might revolve around a compact nucleus with definite angular momentum. Bohr's model was an improvement on the 1911 explanations of Ernest Rutherford, that of the electron moving around a nucleus. Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electron behavior as early as 1904. 
These theories were each built upon new observations starting with simple understanding and becoming more correct and complex. Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics. Early models With J. J. Thomson's discovery of the electron in 1897, it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. Thomson theorized that multiple electrons revolve in orbit-like rings within a positively charged jelly-like substance, and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure. Shortly after Thomson's discovery, Hantaro Nagaoka predicted a different model for electronic structure. Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings. Few people took notice of Nagaoka's work at the time, and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation. Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries. Bohr atom In 1909, Ernest Rutherford discovered that the bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. It became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. In 1913, Rutherford's post-doctoral student, Niels Bohr, proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were permitted to have only discrete values of angular momentum, quantized in units ħ. This constraint automatically allowed only certain electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines. After Bohr's use of Einstein's explanation of the photoelectric effect to relate energy levels in atoms with the wavelength of emitted light, the connection between the structure of electrons in atoms and the emission and absorption spectra of atoms became an increasingly useful tool in the understanding of electrons in atoms. The most prominent feature of emission and absorption spectra (known experimentally since the middle of the 19th century), was that these atomic spectra contained discrete lines. The significance of the Bohr model was that it related the lines in emission and absorption spectra to the energy differences between the orbits that electrons could take around an atom. This was, however, not achieved by Bohr through giving the electrons some kind of wave-like properties, since the idea that electrons could behave as matter waves was not suggested until eleven years later. 
Still, the Bohr model's use of quantized angular momenta and therefore quantized energy levels was a significant step toward the understanding of electrons in atoms, and also a significant step towards the development of quantum mechanics in suggesting that quantized restraints must account for all discontinuous energy levels and spectra in atoms. With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of hydrogen-like atoms, a Bohr electron "wavelength" could be seen to be a function of its momentum; so a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength. The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926. In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed. The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the n = 1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. Modern quantum mechanics explains this in terms of electron shells and subshells which can each hold a number of electrons determined by the Pauli exclusion principle. Thus the n = 1 state can hold one or two electrons, while the n = 2 state can hold up to eight electrons in 2s and 2p subshells. In helium, all n = 1 states are fully occupied; the same is true for n = 1 and n = 2 in neon. In argon, the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation for hydrogen) and remains empty. Modern conceptions and connections to the Heisenberg uncertainty principle Immediately after Heisenberg discovered his uncertainty principle, Bohr noted that the existence of any sort of wave packet implies uncertainty in the wave frequency and wavelength, since a spread of frequencies is needed to create the packet itself. In quantum mechanics, where all particle momenta are associated with waves, it is the formation of such a wave packet which localizes the wave, and thus the particle, in space. In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. In quantum mechanics, as a particle is localized to a smaller region in space, the associated compressed wave packet requires a larger and larger range of momenta, and thus larger kinetic energy. Thus the binding energy to contain or trap a particle in a smaller region of space increases without bound as the region of space grows smaller. Particles cannot be restricted to a geometric point in space, since this would require infinite particle momentum. 
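The claim that confining an electron to a smaller region forces a larger momentum spread, and hence a larger kinetic energy, can be turned into an order-of-magnitude estimate. The Python sketch below takes p ~ ħ/Δx and E ~ p²/(2 m_e); it is a heuristic under those stated assumptions, not a rigorous bound, and the 0.1 nm confinement length is simply a typical atomic size.

HBAR = 1.054_571_8e-34  # reduced Planck constant, J*s
M_E = 9.109_383_7e-31   # electron mass, kg
EV = 1.602_176_6e-19    # joules per electronvolt

def confinement_energy_ev(delta_x_m: float) -> float:
    """Heuristic kinetic energy of an electron confined to a region of size delta_x:
    momentum spread p ~ hbar / delta_x, energy E ~ p**2 / (2 m_e), returned in eV."""
    p = HBAR / delta_x_m
    return p**2 / (2 * M_E) / EV

print(confinement_energy_ev(1e-10))  # about 3.8 eV for atomic dimensions (0.1 nm)
print(confinement_energy_ev(1e-12))  # about 38,000 eV: squeezing the electron further costs far more energy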
In chemistry, Erwin Schrödinger, Linus Pauling, Mulliken and others noted that the consequence of Heisenberg's relation was that the electron, as a wave packet, could not be considered to have an exact location in its orbital. Max Born suggested that the electron's position needed to be described by a probability distribution which was connected with finding the electron at some point in the wave-function which described its associated wave packet. The new quantum mechanics did not give exact results, but only the probabilities for the occurrence of a variety of possible such results. Heisenberg held that the path of a moving particle has no meaning if we cannot observe it, as we cannot with electrons in an atom. In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number n for each orbital became known as an n-sphere in a three-dimensional atom and was pictured as the most probable energy of the probability cloud of the electron's wave packet which surrounded the atom. Orbital names Orbital notation and subshells Orbitals have been given names, which are usually given in the form: where X is the energy level corresponding to the principal quantum number ; type is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular momentum quantum number . For example, the orbital 1s (pronounced as the individual numbers and letters: "'one' 'ess'") is the lowest energy level () and has an angular quantum number of , denoted as s. Orbitals with are denoted as p, d and f respectively. The set of orbitals for a given n and is called a subshell, denoted . The superscript y shows the number of electrons in the subshell. For example, the notation 2p4 indicates that the 2p subshell of an atom contains 4 electrons. This subshell has 3 orbitals, each with n = 2 and = 1. X-ray notation There is also another, less common system still used in X-ray science known as X-ray notation, which is a continuation of the notations used before orbital theory was well understood. In this system, the principal quantum number is given a letter associated with it. For , the letters associated with those numbers are K, L, M, N, O, ... respectively. Hydrogen-like orbitals The simplest atomic orbitals are those that are calculated for systems with a single electron, such as the hydrogen atom. An atom of any other element ionized down to a single electron (He+, Li2+, etc.) is very similar to hydrogen, and the orbitals take the same form. In the Schrödinger equation for this system of one negative and one positive particle, the atomic orbitals are the eigenstates of the Hamiltonian operator for the energy. They can be obtained analytically, meaning that the resulting orbitals are products of a polynomial series, and exponential and trigonometric functions. (see hydrogen atom). For atoms with two or more electrons, the governing equations can be solved only with the use of methods of iterative approximation. Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen, and in the simplest models, they are taken to have the same form. For more rigorous and precise analysis, numerical approximations must be used. A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: , , and . The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table. 
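These restrictions, detailed under Quantum numbers below, are that ℓ runs from 0 to n − 1 and mℓ from −ℓ to +ℓ. A minimal Python sketch (illustrative; the helper function name is arbitrary) enumerates the allowed combinations and shows that a shell with principal quantum number n contains n² orbitals, or 2n² electron states once the two spin projections are counted.

# Enumerate the allowed hydrogen-like quantum numbers for the first few shells.
SUBSHELL_LETTERS = "spdfg"  # l = 0, 1, 2, 3, 4

def orbitals_in_shell(n):
    """Return the list of (n, l, m_l) tuples allowed for principal quantum number n."""
    return [(n, l, m) for l in range(n) for m in range(-l, l + 1)]

for n in range(1, 5):
    orbitals = orbitals_in_shell(n)
    subshells = [f"{n}{SUBSHELL_LETTERS[l]}" for l in range(n)]
    print(f"n = {n}: subshells {subshells}, "
          f"{len(orbitals)} orbitals (= n^2), "
          f"{2 * len(orbitals)} electron states (= 2n^2)")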
The stationary states (quantum states) of a hydrogen-like atom are its atomic orbitals. However, in general, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-dependent "mixtures" (linear combinations) of multiple orbitals. See Linear combination of atomic orbitals molecular orbital method. The quantum number n first appeared in the Bohr model, where it determines the radius of each circular electron orbit. In modern quantum mechanics, however, n determines the mean distance of the electron from the nucleus; all electrons with the same value of n lie at about the same average distance. For this reason, orbitals with the same value of n are said to comprise a "shell". Orbitals with the same value of n and also the same value of ℓ are even more closely related, and are said to comprise a "subshell". Quantum numbers Because of the quantum mechanical nature of the electrons around a nucleus, atomic orbitals can be uniquely defined by a set of integers known as quantum numbers. These quantum numbers occur only in certain combinations of values, and their physical interpretation changes depending on whether real or complex versions of the atomic orbitals are employed. Complex orbitals In physics, the most common orbital descriptions are based on the solutions to the hydrogen atom, where orbitals are given by the product between a radial function and a pure spherical harmonic. The quantum numbers, together with the rules governing their possible values, are as follows: The principal quantum number n describes the energy of the electron and is always a positive integer. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Each atom has, in general, many orbitals associated with each value of n; these orbitals together are sometimes called electron shells. The azimuthal quantum number ℓ describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell with principal quantum number n, ℓ ranges across all (integer) values satisfying the relation 0 ≤ ℓ ≤ n − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1. The set of orbitals associated with a particular value of ℓ are sometimes collectively called a subshell. The magnetic quantum number, mℓ, describes the projection of the orbital angular momentum along a chosen axis. It determines the magnitude of the current circulating around that axis and the orbital contribution to the magnetic moment of an electron via the Ampèrian loop model. Within a subshell with azimuthal quantum number ℓ, mℓ obtains the integer values in the range −ℓ ≤ mℓ ≤ ℓ. The above results may be summarized in the following table. Each cell represents a subshell, and lists the values of mℓ available in that subshell. Empty cells represent subshells that do not exist. Subshells are usually identified by their n- and ℓ-values. n is represented by its numerical value, but ℓ is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with n = 2 and ℓ = 0 as a '2s subshell'. Each electron also has angular momentum in the form of quantum mechanical spin, given by spin s = 1/2. Its projection along a specified axis is given by the spin magnetic quantum number, ms, which can be +1/2 or −1/2. These values are also called "spin up" or "spin down" respectively. The Pauli exclusion principle states that no two electrons in an atom can have the same values of all four quantum numbers. 
If there are two electrons in an orbital with given values for three quantum numbers, (, , ), these two electrons must differ in their spin projection ms. The above conventions imply a preferred axis (for example, the z direction in Cartesian coordinates), and they also imply a preferred direction along this preferred axis. Otherwise there would be no sense in distinguishing from . As such, the model is most useful when applied to physical systems that share these symmetries. The Stern–Gerlach experimentwhere an atom is exposed to a magnetic fieldprovides one such example. Real orbitals Instead of the complex orbitals described above, it is common, especially in the chemistry literature, to use real atomic orbitals. These real orbitals arise from simple linear combinations of complex orbitals. Using the Condon–Shortley phase convention, real orbitals are related to complex orbitals in the same way that the real spherical harmonics are related to complex spherical harmonics. Letting denote a complex orbital with quantum numbers , , and , the real orbitals may be defined by If , with the radial part of the orbital, this definition is equivalent to where is the real spherical harmonic related to either the real or imaginary part of the complex spherical harmonic . Real spherical harmonics are physically relevant when an atom is embedded in a crystalline solid, in which case there are multiple preferred symmetry axes but no single preferred direction. Real atomic orbitals are also more frequently encountered in introductory chemistry textbooks and shown in common orbital visualizations. In real hydrogen-like orbitals, quantum numbers and have the same interpretation and significance as their complex counterparts, but is no longer a good quantum number (but its absolute value is). Some real orbitals are given specific names beyond the simple designation. Orbitals with quantum number are called orbitals. With this one can already assign names to complex orbitals such as ; the first symbol is the quantum number, the second character is the symbol for that particular quantum number and the subscript is the quantum number. As an example of how the full orbital names are generated for real orbitals, one may calculate . From the table of spherical harmonics, with . Then Likewise . As a more complicated example: In all these cases we generate a Cartesian label for the orbital by examining, and abbreviating, the polynomial in appearing in the numerator. We ignore any terms in the polynomial except for the term with the highest exponent in . We then use the abbreviated polynomial as a subscript label for the atomic state, using the same nomenclature as above to indicate the and quantum numbers. The expression above all use the Condon–Shortley phase convention which is favored by quantum physicists. Other conventions exist for the phase of the spherical harmonics. Under these different conventions the and orbitals may appear, for example, as the sum and difference of and , contrary to what is shown above. Below is a list of these Cartesian polynomial names for the atomic orbitals. There does not seem to be reference in the literature as to how to abbreviate the long Cartesian spherical harmonic polynomials for so there does not seem be consensus on the naming of orbitals or higher according to this nomenclature. Shapes of orbitals Simple pictures showing orbital shapes are intended to describe the angular forms of regions in space where the electrons occupying the orbital are likely to be found. 
The diagrams cannot show the entire region where an electron can be found, since according to quantum mechanics there is a non-zero probability of finding the electron (almost) anywhere in space. Instead the diagrams are approximate representations of boundary or contour surfaces where the probability density has a constant value, chosen so that there is a certain probability (for example 90%) of finding the electron within the contour. Although as the square of an absolute value is everywhere non-negative, the sign of the wave function is often indicated in each subregion of the orbital picture. Sometimes the function is graphed to show its phases, rather than which shows probability density but has no phase (which is lost when taking absolute value, since is a complex number). orbital graphs tend to have less spherical, thinner lobes than graphs, but have the same number of lobes in the same places, and otherwise are recognizable. This article, to show wave function phase, shows mostly graphs. The lobes can be seen as standing wave interference patterns between the two counter-rotating, ring-resonant traveling wave and modes; the projection of the orbital onto the xy plane has a resonant wavelength around the circumference. Although rarely shown, the traveling wave solutions can be seen as rotating banded tori; the bands represent phase information. For each there are two standing wave solutions and . If , the orbital is vertical, counter rotating information is unknown, and the orbital is z-axis symmetric. If there are no counter rotating modes. There are only radial modes and the shape is spherically symmetric. Nodal planes and nodal spheres are surfaces on which the probability density vanishes. The number of nodal surfaces is controlled by the quantum numbers and . An orbital with azimuthal quantum number has radial nodal planes passing through the origin. For example, the s orbitals () are spherically symmetric and have no nodal planes, whereas the p orbitals () have a single nodal plane between the lobes. The number of nodal spheres equals , consistent with the restriction on the quantum numbers. The principal quantum number controls the total number of nodal surfaces which is . Loosely speaking, is energy, is analogous to eccentricity, and is orientation. In general, determines size and energy of the orbital for a given nucleus; as increases, the size of the orbital increases. The higher nuclear charge of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the size of the atom remains very roughly constant, even as the number of electrons increases. Also in general terms, determines an orbital's shape, and its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on also. Together, the whole set of orbitals for a given and fill space as symmetrically as possible, though with increasingly complex sets of lobes and nodes. The single s orbitals () are shaped like spheres. For it is roughly a solid ball (densest at center and fades outward exponentially), but for , each single s orbital is made of spherically symmetric surfaces which are nested shells (i.e., the "wave-structure" is radial, following a sinusoidal radial component as well). See illustration of a cross-section of these nested shells, at right. The s orbitals for all numbers are the only orbitals with an anti-node (a region of high wave function density) at the center of the nucleus. All other orbitals (p, d, f, etc.) 
have angular momentum, and thus avoid the nucleus (having a wave node at the nucleus). Recently, there has been an effort to experimentally image the 1s and 2p orbitals in a SrTiO3 crystal using scanning transmission electron microscopy with energy dispersive x-ray spectroscopy. Because the imaging was conducted using an electron beam, Coulombic beam-orbital interaction that is often termed as the impact parameter effect is included in the outcome (see the figure at right). The shapes of p, d and f orbitals are described verbally here and shown graphically in the Orbitals table below. The three p orbitals for have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell"—there are two lobes pointing in opposite directions from each other). The three p orbitals in each shell are oriented at right angles to each other, as determined by their respective linear combination of values of . The overall result is a lobe pointing along each direction of the primary axes. Four of the five d orbitals for look similar, each with four pear-shaped lobes, each lobe tangent at right angles to two others, and the centers of all four lying in one plane. Three of these planes are the xy-, xz-, and yz-planes—the lobes are between the pairs of primary axes—and the fourth has the center along the x and y axes themselves. The fifth and final d orbital consists of three regions of high probability density: a torus in between two pear-shaped regions placed symmetrically on its z axis. The overall total of 18 directional lobes point in every primary axis direction and between every pair. There are seven f orbitals, each with shapes more complex than those of the d orbitals. Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with values higher than the lowest possible value, exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. As with s orbitals, this phenomenon provides p, d, f, and g orbitals at the next higher possible value of (for example, 3p orbitals vs. the fundamental 2p), an additional node in each lobe. Still higher values of further increase the number of radial nodes, for each type of orbital. The shapes of atomic orbitals in one-electron atom are related to 3-dimensional spherical harmonics. These shapes are not unique, and any linear combination is valid, like a transformation to cubic harmonics, in fact it is possible to generate sets where all the d's are the same shape, just like the and are the same shape. Although individual orbitals are most often shown independent of each other, the orbitals coexist around the nucleus at the same time. Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number of the same shell (e.g., all three 2p orbitals, or all five 3d orbitals) where each orbital is occupied by an electron or each is occupied by an electron pair, then all angular dependence disappears; that is, the resulting total density of all the atomic orbitals in that subshell (those with the same ) is spherical. This is known as Unsöld's theorem. Orbitals table This table shows the real hydrogen-like wave functions for all atomic orbitals up to 7s, and therefore covers the occupied orbitals in the ground state of all elements in the periodic table up to radium and some beyond. 
"ψ" graphs are shown with − and + wave function phases shown in two different colors (arbitrarily red and blue). The orbital is the same as the orbital, but the and are formed by taking linear combinations of the and orbitals (which is why they are listed under the label). Also, the and are not the same shape as the , since they are pure spherical harmonics. * No elements with 6f, 7d or 7f electrons have been discovered yet. † Elements with 7p electrons have been discovered, but their electronic configurations are only predicted – save the exceptional Lr, which fills 7p1 instead of 6d1. ‡ For the elements whose highest occupied orbital is a 6d orbital, only some electronic configurations have been confirmed. (Mt, Ds, Rg and Cn are still missing). These are the real-valued orbitals commonly used in chemistry. Only the orbitals where are eigenstates of the orbital angular momentum operator, . The columns with are combinations of two eigenstates. See comparison in the following picture: Qualitative understanding of shapes The shapes of atomic orbitals can be qualitatively understood by considering the analogous case of standing waves on a circular drum. To see the analogy, the mean vibrational displacement of each bit of drum membrane from the equilibrium point over many cycles (a measure of average drum membrane velocity and momentum at that point) must be considered relative to that point's distance from the center of the drum head. If this displacement is taken as being analogous to the probability of finding an electron at a given distance from the nucleus, then it will be seen that the many modes of the vibrating disk form patterns that trace the various shapes of atomic orbitals. The basic reason for this correspondence lies in the fact that the distribution of kinetic energy and momentum in a matter-wave is predictive of where the particle associated with the wave will be. That is, the probability of finding an electron at a given place is also a function of the electron's average momentum at that point, since high electron momentum at a given position tends to "localize" the electron in that position, via the properties of electron wave-packets (see the Heisenberg uncertainty principle for details of the mechanism). This relationship means that certain key features can be observed in both drum membrane modes and atomic orbitals. For example, in all of the modes analogous to s orbitals (the top row in the animated illustration below), it can be seen that the very center of the drum membrane vibrates most strongly, corresponding to the antinode in all s orbitals in an atom. This antinode means the electron is most likely to be at the physical position of the nucleus (which it passes straight through without scattering or striking it), since it is moving (on average) most rapidly at that point, giving it maximal momentum. A mental "planetary orbit" picture closest to the behavior of electrons in s orbitals, all of which have no angular momentum, might perhaps be that of a Keplerian orbit with the orbital eccentricity of 1 but a finite major axis, not physically possible (because particles were to collide), but can be imagined as a limit of orbits with equal major axes but increasing eccentricity. Below, a number of drum membrane vibration modes and the respective wave functions of the hydrogen atom are shown. 
A correspondence can be considered where the wave functions of a vibrating drum head are for a two-coordinate system and the wave functions for a vibrating sphere are three-coordinate . None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. These correspond to a node at the nucleus for all non-s orbitals in an atom. These orbitals all have some angular momentum, and in the planetary model, they correspond to particles in orbit with eccentricity less than 1.0, so that they do not pass straight through the center of the primary body, but keep somewhat away from it. In addition, the drum modes analogous to p and d modes in an atom show spatial irregularity along the different radial directions from the center of the drum, whereas all of the modes analogous to s modes are perfectly symmetrical in radial direction. The non-radial-symmetry properties of non-s orbitals are necessary to localize a particle with angular momentum and a wave nature in an orbital where it must tend to stay away from the central attraction force, since any particle localized at the point of central attraction could have no angular momentum. For these modes, waves in the drum head tend to avoid the central point. Such features again emphasize that the shapes of atomic orbitals are a direct consequence of the wave nature of electrons. Orbital energy In atoms with one electron (hydrogen-like atom), the energy of an orbital (and, consequently, any electron in the orbital) is determined mainly by . The orbital has the lowest possible energy in the atom. Each successively higher value of has a higher energy, but the difference decreases as increases. For high , the energy becomes so high that the electron can easily escape the atom. In single electron atoms, all levels with different within a given are degenerate in the Schrödinger approximation, and have the same energy. This approximation is broken slightly in the solution to the Dirac equation (where energy depends on and another quantum number ), and by the effect of the magnetic field of the nucleus and quantum electrodynamics effects. The latter induce tiny binding energy differences especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift. In atoms with multiple electrons, the energy of an electron depends not only on its orbital, but also on its interactions with other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on but also on . Higher values of are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When , the increase in energy of the orbital becomes so large as to push the energy of orbital above the energy of the s orbital in the next higher shell; when the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled. The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low angular momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. 
Thus, in atoms with higher atomic number, the of electrons becomes more and more of a determining factor in their energy, and the principal quantum numbers of electrons becomes less and less important in their energy placement. The energy sequence of the first 35 subshells (e.g., 1s, 2p, 3d, etc.) is given in the following table. Each cell represents a subshell with and given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. For a linear listing of the subshells in terms of increasing energies in multielectron atoms, see the section below. Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could (potentially) exist, but which do not hold electrons in any element currently known. Electron placement and the periodic table Several rules govern the placement of electrons in orbitals (electron configuration). The first dictates that no two electrons in an atom may have the same set of values of quantum numbers (this is the Pauli exclusion principle). These quantum numbers include the three that define orbitals, as well as the spin magnetic quantum number . Thus, two electrons may occupy a single orbital, so long as they have different values of . Because takes one of only two values ( or ), at most two electrons can occupy each orbital. Additionally, an electron always tends to fall to the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above. This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number i, it consists of elements whose outermost electrons fall in the ith shell. Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the periodic filling of the electron energy levels, resulting in the electronic structure of the atom. The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same -state (but the associated with that -state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell. The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p The "periodic" nature of the filling of orbitals, as well as emergence of the s, p, d, and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Then, each subshell (composed of the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain. 
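The subshell order listed above can be generated from the Madelung (n + ℓ) rule: subshells are filled in order of increasing n + ℓ, and, for equal n + ℓ, in order of increasing n. A minimal Python sketch of that rule follows; it reproduces the quoted sequence but, as noted in the next paragraph, does not capture the exceptions found in real atoms.

# Generate the Madelung (aufbau) subshell order: sort by (n + l), then by n.
LETTERS = "spdfghik"  # subshell letters for l = 0, 1, 2, ...

def madelung_order(max_n):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{LETTERS[l]}" for n, l in subshells]

# The first 19 entries reproduce the order quoted in the text:
# 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p
print(", ".join(madelung_order(7)[:19]))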
The result is a compressed periodic table, with each entry representing two successive elements: Although this is the general order of orbital filling according to the Madelung rule, there are exceptions, and the actual electronic energies of each element are also dependent upon additional details of the atoms (see ). The number of electrons in an electrically neutral atom increases with the atomic number. The electrons in the outermost shell, or valence electrons, tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties. Relativistic effects For elements with high atomic number , the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high- atoms. This relativistic increase in momentum for high speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison to corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy. Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium. In the Bohr model, an  electron has a velocity given by , where is the atomic number, is the fine-structure constant, and is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of  due to the non-point-charge nature of the nucleus and very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than . The critical  value, which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron-positron pairs, does not occur until is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed. There are no nodes in relativistic orbital densities, although individual components of the wave function will have nodes. pp hybridization (conjectured) In late period 8 elements, a hybrid of 8p3/2 and 9p1/2 is expected to exist, where "3/2" and "1/2" refer to the total angular momentum quantum number. This "pp" hybrid may be responsible for the p-block of the period due to properties similar to p subshells in ordinary valence shells. Energy levels of 8p3/2 and 9p1/2 come close due to relativistic spin–orbit effects; the 9s subshell should also participate, as these elements are expected to be analogous to the respective 5p elements indium through xenon. Transitions between orbitals Bound quantum states have discrete energy levels. 
When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e., an electron absorbing or emitting a photon) can thus happen only if the photon has an energy corresponding to the exact energy difference between those states. Consider two states of the hydrogen atom: state 1, with n = 1, and state 2, with n = 2. By quantum theory, state 1 has a fixed energy E1, and state 2 has a fixed energy E2. Now, what would happen if an electron in state 1 were to move to state 2? For this to happen, the electron would need to gain an energy of exactly E2 − E1. If the electron receives energy that is less than or greater than this value, it cannot jump from state 1 to state 2. Now, suppose we irradiate the atom with a broad spectrum of light. Photons that reach the atom with an energy of exactly E2 − E1 will be absorbed by the electron in state 1, and that electron will jump to state 2. However, photons of greater or lower energy cannot be absorbed by the electron, because the electron can jump only from one orbital to another; it cannot jump to a state between orbitals. The result is that only photons of a specific frequency will be absorbed by the atom. This creates a line in the spectrum, known as an absorption line, which corresponds to the energy difference between states 1 and 2. The atomic orbital model thus predicts line spectra, which are observed experimentally. This is one of the main validations of the atomic orbital model. The atomic orbital model is nevertheless an approximation to the full quantum theory, which recognizes only many-electron states. The predictions of line spectra are qualitatively useful but are not quantitatively accurate for atoms and ions other than those containing only one electron.
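As a concrete example in the one case where the model is quantitatively accurate: for hydrogen the energy levels are E_n ≈ −13.6 eV / n², so the photon absorbed in a transition from n = 1 to n = 2 carries about 10.2 eV, corresponding to the Lyman-alpha line near 121 nm. A short Python sketch of this calculation:

# Hydrogen transition energies from E_n = -13.6 eV / n^2 and the photon wavelength
# lambda = h*c / (E_upper - E_lower).
RYDBERG_EV = 13.605693      # hydrogen ground-state binding energy in eV
HC_EV_NM = 1239.841984      # h*c in eV*nm

def level_energy_eV(n):
    return -RYDBERG_EV / n ** 2

def transition(n_lower, n_upper):
    delta_e = level_energy_eV(n_upper) - level_energy_eV(n_lower)   # photon energy in eV
    return delta_e, HC_EV_NM / delta_e                              # (energy in eV, wavelength in nm)

for lo, hi in [(1, 2), (1, 3), (2, 3)]:
    e, lam = transition(lo, hi)
    print(f"n={lo} -> n={hi}: delta E = {e:.2f} eV, wavelength = {lam:.1f} nm")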
Physical sciences
Atomic physics
null
1207
https://en.wikipedia.org/wiki/Amino%20acid
Amino acid
Amino acids are organic compounds that contain both amino and carboxylic acid functional groups. Although over 500 amino acids exist in nature, by far the most important are the 22 α-amino acids incorporated into proteins. Only these 22 appear in the genetic code of life. Amino acids can be classified according to the locations of the core structural functional groups (alpha- (α-), beta- (β-), gamma- (γ-) amino acids, etc.); other categories relate to polarity, ionization, and side-chain group type (aliphatic, acyclic, aromatic, polar, etc.). In the form of proteins, amino-acid residues form the second-largest component (water being the largest) of human muscles and other tissues. Beyond their role as residues in proteins, amino acids participate in a number of processes such as neurotransmitter transport and biosynthesis. It is thought that they played a key role in enabling life on Earth and its emergence. Amino acids are formally named by the IUPAC-IUBMB Joint Commission on Biochemical Nomenclature in terms of the fictitious "neutral" structure shown in the illustration. For example, the systematic name of alanine is 2-aminopropanoic acid, based on the formula . The Commission justified this approach as follows: The systematic names and formulas given refer to hypothetical forms in which amino groups are unprotonated and carboxyl groups are undissociated. This convention is useful to avoid various nomenclatural problems but should not be taken to imply that these structures represent an appreciable fraction of the amino-acid molecules. History The first few amino acids were discovered in the early 1800s. In 1806, French chemists Louis-Nicolas Vauquelin and Pierre Jean Robiquet isolated a compound from asparagus that was subsequently named asparagine, the first amino acid to be discovered. Cystine was discovered in 1810, although its monomer, cysteine, remained undiscovered until 1884. Glycine and leucine were discovered in 1820. The last of the 20 common amino acids to be discovered was threonine in 1935 by William Cumming Rose, who also determined the essential amino acids and established the minimum daily requirements of all amino acids for optimal growth. The unity of the chemical category was recognized by Wurtz in 1865, but he gave no particular name to it. The first use of the term "amino acid" in the English language dates from 1898, while the German term, , was used earlier. Proteins were found to yield amino acids after enzymatic digestion or acid hydrolysis. In 1902, Emil Fischer and Franz Hofmeister independently proposed that proteins are formed from many amino acids, whereby bonds are formed between the amino group of one amino acid with the carboxyl group of another, resulting in a linear structure that Fischer termed "peptide". General structure 2-, alpha-, or α-amino acids have the generic formula in most cases, where R is an organic substituent known as a "side chain". Of the many hundreds of described amino acids, 22 are proteinogenic ("protein-building"). It is these 22 compounds that combine to give a vast array of peptides and proteins assembled by ribosomes. Non-proteinogenic or modified amino acids may arise from post-translational modification or during nonribosomal peptide synthesis. Chirality The carbon atom next to the carboxyl group is called the α–carbon. In proteinogenic amino acids, it bears the amine and the R group or side chain specific to each amino acid, as well as a hydrogen atom. 
With the exception of glycine, for which the side chain is also a hydrogen atom, the α–carbon is stereogenic. All chiral proteogenic amino acids have the L configuration. They are "left-handed" enantiomers, which refers to the stereoisomers of the alpha carbon. A few D-amino acids ("right-handed") have been found in nature, e.g., in bacterial envelopes, as a neuromodulator (D-serine), and in some antibiotics. Rarely, D-amino acid residues are found in proteins, and are converted from the L-amino acid as a post-translational modification. Side chains Polar charged side chains Five amino acids possess a charge at neutral pH. Often these side chains appear at the surfaces on proteins to enable their solubility in water, and side chains with opposite charges form important electrostatic contacts called salt bridges that maintain structures within a single protein or between interfacing proteins. Many proteins bind metal into their structures specifically, and these interactions are commonly mediated by charged side chains such as aspartate, glutamate and histidine. Under certain conditions, each ion-forming group can be charged, forming double salts. The two negatively charged amino acids at neutral pH are aspartate (Asp, D) and glutamate (Glu, E). The anionic carboxylate groups behave as Brønsted bases in most circumstances. Enzymes in very low pH environments, like the aspartic protease pepsin in mammalian stomachs, may have catalytic aspartate or glutamate residues that act as Brønsted acids. There are three amino acids with side chains that are cations at neutral pH: arginine (Arg, R), lysine (Lys, K) and histidine (His, H). Arginine has a charged guanidino group and lysine a charged alkyl amino group, and are fully protonated at pH 7. Histidine's imidazole group has a pKa of 6.0, and is only around 10% protonated at neutral pH. Because histidine is easily found in its basic and conjugate acid forms it often participates in catalytic proton transfers in enzyme reactions. Polar uncharged side chains The polar, uncharged amino acids serine (Ser, S), threonine (Thr, T), asparagine (Asn, N) and glutamine (Gln, Q) readily form hydrogen bonds with water and other amino acids. They do not ionize in normal conditions, a prominent exception being the catalytic serine in serine proteases. This is an example of severe perturbation, and is not characteristic of serine residues in general. Threonine has two chiral centers, not only the L (2S) chiral center at the α-carbon shared by all amino acids apart from achiral glycine, but also (3R) at the β-carbon. The full stereochemical specification is (2S,3R)-L-threonine. Hydrophobic side chains Nonpolar amino acid interactions are the primary driving force behind the processes that fold proteins into their functional three dimensional structures. None of these amino acids' side chains ionize easily, and therefore do not have pKas, with the exception of tyrosine (Tyr, Y). The hydroxyl of tyrosine can deprotonate at high pH forming the negatively charged phenolate. Because of this one could place tyrosine into the polar, uncharged amino acid category, but its very low solubility in water matches the characteristics of hydrophobic amino acids well. Special case side chains Several side chains are not described well by the charged, polar and hydrophobic categories. Glycine (Gly, G) could be considered a polar amino acid since its small size means that its solubility is largely determined by the amino and carboxylate groups. 
However, the lack of any side chain provides glycine with a unique flexibility among amino acids with large ramifications to protein folding. Cysteine (Cys, C) can also form hydrogen bonds readily, which would place it in the polar amino acid category, though it can often be found in protein structures forming covalent bonds, called disulphide bonds, with other cysteines. These bonds influence the folding and stability of proteins, and are essential in the formation of antibodies. Proline (Pro, P) has an alkyl side chain and could be considered hydrophobic, but because the side chain joins back onto the alpha amino group it becomes particularly inflexible when incorporated into proteins. Similar to glycine this influences protein structure in a way unique among amino acids. Selenocysteine (Sec, U) is a rare amino acid not directly encoded by DNA, but is incorporated into proteins via the ribosome. Selenocysteine has a lower redox potential compared to the similar cysteine, and participates in several unique enzymatic reactions. Pyrrolysine (Pyl, O) is another amino acid not encoded in DNA, but synthesized into protein by ribosomes. It is found in archaeal species where it participates in the catalytic activity of several methyltransferases. β- and γ-amino acids Amino acids with the structure , such as β-alanine, a component of carnosine and a few other peptides, are β-amino acids. Ones with the structure are γ-amino acids, and so on, where X and Y are two substituents (one of which is normally H). Zwitterions The common natural forms of amino acids have a zwitterionic structure, with ( in the case of proline) and functional groups attached to the same C atom, and are thus α-amino acids, and are the only ones found in proteins during translation in the ribosome. In aqueous solution at pH close to neutrality, amino acids exist as zwitterions, i.e. as dipolar ions with both and in charged states, so the overall structure is . At physiological pH the so-called "neutral forms" are not present to any measurable degree. Although the two charges in the zwitterion structure add up to zero it is misleading to call a species with a net charge of zero "uncharged". In strongly acidic conditions (pH below 3), the carboxylate group becomes protonated and the structure becomes an ammonio carboxylic acid, . This is relevant for enzymes like pepsin that are active in acidic environments such as the mammalian stomach and lysosomes, but does not significantly apply to intracellular enzymes. In highly basic conditions (pH greater than 10, not normally seen in physiological conditions), the ammonio group is deprotonated to give . Although various definitions of acids and bases are used in chemistry, the only one that is useful for chemistry in aqueous solution is that of Brønsted: an acid is a species that can donate a proton to another species, and a base is one that can accept a proton. This criterion is used to label the groups in the above illustration. The carboxylate side chains of aspartate and glutamate residues are the principal Brønsted bases in proteins. Likewise, lysine, tyrosine and cysteine will typically act as a Brønsted acid. Histidine under these conditions can act both as a Brønsted acid and a base. Isoelectric point For amino acids with uncharged side-chains the zwitterion predominates at pH values between the two pKa values, but coexists in equilibrium with small amounts of net negative and net positive ions. 
At the midpoint between the two pKa values, the trace amount of net negative and the trace of net positive ions balance, so that the average net charge of all forms present is zero. This pH is known as the isoelectric point pI, so pI = (pKa1 + pKa2)/2. For amino acids with charged side chains, the pKa of the side chain is involved. Thus for aspartate or glutamate with negative side chains, the terminal amino group is essentially entirely in its charged form -NH3+, but this positive charge needs to be balanced by the state in which just one carboxylate group is negatively charged. This occurs halfway between the two carboxylate pKa values: pI = (pKa1 + pKa(R))/2, where pKa(R) is the side chain pKa (a short numerical illustration is given below). Similar considerations apply to other amino acids with ionizable side-chains, including not only glutamate (similar to aspartate), but also cysteine, histidine, lysine, tyrosine and arginine with positive side chains. Amino acids have zero mobility in electrophoresis at their isoelectric point, although this behaviour is more usually exploited for peptides and proteins than single amino acids. Zwitterions have minimum solubility at their isoelectric point, and some amino acids (in particular, those with nonpolar side chains) can be isolated by precipitation from water by adjusting the pH to the required isoelectric point. Physicochemical properties The 20 canonical amino acids can be classified according to their properties. Important factors are charge, hydrophilicity or hydrophobicity, size, and functional groups. These properties influence protein structure and protein–protein interactions. Water-soluble proteins tend to have their hydrophobic residues (Leu, Ile, Val, Phe, and Trp) buried in the middle of the protein, whereas hydrophilic side chains are exposed to the aqueous solvent. (In biochemistry, a residue refers to a specific monomer within the polymeric chain of a polysaccharide, protein or nucleic acid.) Integral membrane proteins tend to have outer rings of exposed hydrophobic amino acids that anchor them in the lipid bilayer. Some peripheral membrane proteins have a patch of hydrophobic amino acids on their surface that sticks to the membrane. In a similar fashion, proteins that have to bind to positively charged molecules have surfaces rich in negatively charged amino acids such as glutamate and aspartate, while proteins binding to negatively charged molecules have surfaces rich in positively charged amino acids like lysine and arginine. For example, lysine and arginine are present in large amounts in the low-complexity regions of nucleic-acid binding proteins. There are various hydrophobicity scales of amino acid residues. Some amino acids have special properties. Cysteine can form covalent disulfide bonds to other cysteine residues. Proline's side chain forms a ring back to the polypeptide backbone, and glycine is more flexible than other amino acids. Glycine and proline are strongly represented within low-complexity regions of both eukaryotic and prokaryotic proteins, whereas the opposite is the case with cysteine, phenylalanine, tryptophan, methionine, valine, leucine and isoleucine, which are highly reactive, complex, or hydrophobic. Many proteins undergo a range of posttranslational modifications, whereby additional chemical groups are attached to the amino acid residue side chains, sometimes producing lipoproteins (which are hydrophobic) or glycoproteins (which are hydrophilic), allowing the protein to attach temporarily to a membrane. 
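As a numerical illustration of the isoelectric-point relations above, using typical textbook pKa values (assumed here rather than quoted from this article): for glycine, pKa1 ≈ 2.34 (carboxyl) and pKa2 ≈ 9.60 (ammonium) give pI ≈ 5.97, while for aspartate the two carboxyl pKa values (≈ 1.88 and ≈ 3.65) give pI ≈ 2.77. A minimal Python sketch:

# Isoelectric point as the mean of the two pKa values that bracket the zwitterion,
# as described in the text. The pKa values below are common textbook figures.
def isoelectric_point(pka_low, pka_high):
    return (pka_low + pka_high) / 2

glycine_pI = isoelectric_point(2.34, 9.60)    # alpha-COOH and alpha-NH3+
aspartate_pI = isoelectric_point(1.88, 3.65)  # alpha-COOH and side-chain COOH
print(f"glycine pI ~ {glycine_pI:.2f}, aspartate pI ~ {aspartate_pI:.2f}")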
For example, a signaling protein can attach to and then detach from a cell membrane, because it contains cysteine residues that can have the fatty acid palmitic acid added to them and subsequently removed. Table of standard amino acid abbreviations and properties Although one-letter symbols are included in the table, IUPAC–IUBMB recommend that "Use of the one-letter symbols should be restricted to the comparison of long sequences". The one-letter notation was chosen by IUPAC-IUB based on the following rules: Initial letters are used where there is no ambiguity: C cysteine, H histidine, I isoleucine, M methionine, S serine, V valine; Where arbitrary assignment is needed, the structurally simpler amino acids are given precedence: A alanine, G glycine, L leucine, P proline, T threonine; F PHenylalanine and R aRginine are assigned by being phonetically suggestive; W tryptophan is assigned based on the double ring being visually suggestive of the bulky letter W; K lysine and Y tyrosine are assigned as alphabetically nearest to their initials L and T (note that U was avoided for its similarity with V, while X was reserved for undetermined or atypical amino acids); for tyrosine the mnemonic tYrosine was also proposed; D aspartate was assigned arbitrarily, with the proposed mnemonic asparDic acid; E glutamate was assigned in alphabetical sequence, being larger than aspartate by merely one methylene –CH2– group; N asparagine was assigned arbitrarily, with the proposed mnemonic asparagiNe; Q glutamine was assigned in alphabetical sequence of those still available (note again that O was avoided due to similarity with D), with the proposed mnemonic Qlutamine. Two additional amino acids are in some species coded for by codons that are usually interpreted as stop codons: In addition to the specific amino acid codes, placeholders are used in cases where chemical or crystallographic analysis of a peptide or protein cannot conclusively determine the identity of a residue. They are also used to summarize conserved protein sequence motifs. The use of single letters to indicate sets of similar residues is similar to the use of abbreviation codes for degenerate bases. Unk is sometimes used instead of Xaa, but is less standard. Ter or * (from termination) is used in notation for mutations in proteins when a stop codon occurs. It corresponds to no amino acid at all. In addition, many nonstandard amino acids have a specific code. For example, several peptide drugs, such as Bortezomib and MG132, are artificially synthesized and retain their protecting groups, which have specific codes. Bortezomib is Pyz–Phe–boroLeu, and MG132 is Z–Leu–Leu–Leu–al. To aid in the analysis of protein structure, photo-reactive amino acid analogs are available. These include photoleucine (pLeu) and photomethionine (pMet). Occurrence and functions in biochemistry Proteinogenic amino acids Amino acids are the precursors to proteins. They join by condensation reactions to form short polymer chains called peptides or longer chains called either polypeptides or proteins. These chains are linear and unbranched, with each amino acid residue within the chain attached to two neighboring amino acids. In nature, the process of making proteins encoded by RNA genetic material is called translation and involves the step-by-step addition of amino acids to a growing protein chain by a ribozyme that is called a ribosome. 
The order in which the amino acids are added is read through the genetic code from an mRNA template, which is an RNA derived from one of the organism's genes. Twenty-two amino acids are naturally incorporated into polypeptides and are called proteinogenic or natural amino acids. Of these, 20 are encoded by the universal genetic code. The remaining 2, selenocysteine and pyrrolysine, are incorporated into proteins by unique synthetic mechanisms. Selenocysteine is incorporated when the mRNA being translated includes a SECIS element, which causes the UGA codon to encode selenocysteine instead of a stop codon. Pyrrolysine is used by some methanogenic archaea in enzymes that they use to produce methane. It is coded for with the codon UAG, which is normally a stop codon in other organisms. Several independent evolutionary studies have suggested that Gly, Ala, Asp, Val, Ser, Pro, Glu, Leu, Thr may belong to a group of amino acids that constituted the early genetic code, whereas Cys, Met, Tyr, Trp, His, Phe may belong to a group of amino acids that constituted later additions of the genetic code. Standard vs nonstandard amino acids The 20 amino acids that are encoded directly by the codons of the universal genetic code are called standard or canonical amino acids. A modified form of methionine (N-formylmethionine) is often incorporated in place of methionine as the initial amino acid of proteins in bacteria, mitochondria and plastids (including chloroplasts). Other amino acids are called nonstandard or non-canonical. Most of the nonstandard amino acids are also non-proteinogenic (i.e. they cannot be incorporated into proteins during translation), but two of them are proteinogenic, as they can be incorporated translationally into proteins by exploiting information not encoded in the universal genetic code. The two nonstandard proteinogenic amino acids are selenocysteine (present in many non-eukaryotes as well as most eukaryotes, but not coded directly by DNA) and pyrrolysine (found only in some archaea and at least one bacterium). The incorporation of these nonstandard amino acids is rare. For example, 25 human proteins include selenocysteine in their primary structure, and the structurally characterized enzymes (selenoenzymes) employ selenocysteine as the catalytic moiety in their active sites. Pyrrolysine and selenocysteine are encoded via variant codons. For example, selenocysteine is encoded by stop codon and SECIS element. N-formylmethionine (which is often the initial amino acid of proteins in bacteria, mitochondria, and chloroplasts) is generally considered as a form of methionine rather than as a separate proteinogenic amino acid. Codon–tRNA combinations not found in nature can also be used to "expand" the genetic code and form novel proteins known as alloproteins incorporating non-proteinogenic amino acids. Non-proteinogenic amino acids Aside from the 22 proteinogenic amino acids, many non-proteinogenic amino acids are known. Those either are not found in proteins (for example carnitine, GABA, levothyroxine) or are not produced directly and in isolation by standard cellular machinery. For example, hydroxyproline, is synthesised from proline. Another example is selenomethionine). Non-proteinogenic amino acids that are found in proteins are formed by post-translational modification. Such modifications can also determine the localization of the protein, e.g., the addition of long hydrophobic groups can cause a protein to bind to a phospholipid membrane. 
Examples: the carboxylation of glutamate allows for better binding of calcium cations; hydroxyproline, generated by hydroxylation of proline, is a major component of the connective tissue collagen; hypusine, found in the translation initiation factor EIF5A, is a modified form of lysine. Some non-proteinogenic amino acids are not found in proteins. Examples include 2-aminoisobutyric acid and the neurotransmitter gamma-aminobutyric acid. Non-proteinogenic amino acids often occur as intermediates in the metabolic pathways for standard amino acids – for example, ornithine and citrulline occur in the urea cycle, part of amino acid catabolism (see below). A rare exception to the dominance of α-amino acids in biology is the β-amino acid β-alanine (3-aminopropanoic acid), which is used in plants and microorganisms in the synthesis of pantothenic acid (vitamin B5), a component of coenzyme A. In mammalian nutrition Amino acids are not a typical component of food: animals eat proteins. The protein is broken down into amino acids in the process of digestion. They are then used to synthesize new proteins and other biomolecules, or are oxidized to urea and carbon dioxide as a source of energy. The oxidation pathway starts with the removal of the amino group by a transaminase; the amino group is then fed into the urea cycle. The other product of transamination is a keto acid that enters the citric acid cycle. Glucogenic amino acids can also be converted into glucose through gluconeogenesis. Of the 20 standard amino acids, nine (His, Ile, Leu, Lys, Met, Phe, Thr, Trp and Val) are called essential amino acids because the human body cannot synthesize them from other compounds at the level needed for normal growth, so they must be obtained from food. Semi-essential and conditionally essential amino acids, and juvenile requirements In addition, cysteine, tyrosine, and arginine are considered semi-essential amino acids, and taurine a semi-essential aminosulfonic acid in children. Some amino acids are conditionally essential for certain ages or medical conditions. Essential amino acids may also vary from species to species. In juveniles, the metabolic pathways that synthesize these monomers are not yet fully developed. Non-protein functions Many proteinogenic and non-proteinogenic amino acids have biological functions beyond being precursors to proteins and peptides. In humans, amino acids also have important roles in diverse biosynthetic pathways. Defenses against herbivores in plants sometimes employ amino acids. Examples: Standard amino acids Tryptophan is a precursor of the neurotransmitter serotonin. Tyrosine (and its precursor phenylalanine) are precursors of the catecholamine neurotransmitters dopamine, epinephrine, and norepinephrine, as well as various trace amines. Phenylalanine is a precursor of phenethylamine and tyrosine in humans. In plants, it is a precursor of various phenylpropanoids, which are important in plant metabolism. Glycine is a precursor of porphyrins such as heme. Arginine is a precursor of nitric oxide. Ornithine and S-adenosylmethionine are precursors of polyamines. Aspartate, glycine, and glutamine are precursors of nucleotides. Roles for nonstandard amino acids Carnitine is used in lipid transport. Gamma-aminobutyric acid is a neurotransmitter. 5-HTP (5-hydroxytryptophan) has been used for experimental treatment of depression. L-DOPA (L-dihydroxyphenylalanine) is used in the treatment of Parkinson's disease. Eflornithine inhibits ornithine decarboxylase and is used in the treatment of sleeping sickness. 
Canavanine, an analogue of arginine found in many legumes is an antifeedant, protecting the plant from predators. Mimosine found in some legumes, is another possible antifeedant. This compound is an analogue of tyrosine and can poison animals that graze on these plants. However, not all of the functions of other abundant nonstandard amino acids are known. Uses in industry Animal feed Amino acids are sometimes added to animal feed because some of the components of these feeds, such as soybeans, have low levels of some of the essential amino acids, especially of lysine, methionine, threonine, and tryptophan. Likewise amino acids are used to chelate metal cations in order to improve the absorption of minerals from feed supplements. Food The food industry is a major consumer of amino acids, especially glutamic acid, which is used as a flavor enhancer, and aspartame (aspartylphenylalanine 1-methyl ester), which is used as an artificial sweetener. Amino acids are sometimes added to food by manufacturers to alleviate symptoms of mineral deficiencies, such as anemia, by improving mineral absorption and reducing negative side effects from inorganic mineral supplementation. Chemical building blocks Amino acids are low-cost feedstocks used in chiral pool synthesis as enantiomerically pure building blocks. Amino acids are used in the synthesis of some cosmetics. Aspirational uses Fertilizer The chelating ability of amino acids is sometimes used in fertilizers to facilitate the delivery of minerals to plants in order to correct mineral deficiencies, such as iron chlorosis. These fertilizers are also used to prevent deficiencies from occurring and to improve the overall health of the plants. Biodegradable plastics Amino acids have been considered as components of biodegradable polymers, which have applications as environmentally friendly packaging and in medicine in drug delivery and the construction of prosthetic implants. An interesting example of such materials is polyaspartate, a water-soluble biodegradable polymer that may have applications in disposable diapers and agriculture. Due to its solubility and ability to chelate metal ions, polyaspartate is also being used as a biodegradable antiscaling agent and a corrosion inhibitor. Synthesis Chemical synthesis The commercial production of amino acids usually relies on mutant bacteria that overproduce individual amino acids using glucose as a carbon source. Some amino acids are produced by enzymatic conversions of synthetic intermediates. 2-Aminothiazoline-4-carboxylic acid is an intermediate in one industrial synthesis of L-cysteine for example. Aspartic acid is produced by the addition of ammonia to fumarate using a lyase. Biosynthesis In plants, nitrogen is first assimilated into organic compounds in the form of glutamate, formed from alpha-ketoglutarate and ammonia in the mitochondrion. For other amino acids, plants use transaminases to move the amino group from glutamate to another alpha-keto acid. For example, aspartate aminotransferase converts glutamate and oxaloacetate to alpha-ketoglutarate and aspartate. Other organisms use transaminases for amino acid synthesis, too. Nonstandard amino acids are usually formed through modifications to standard amino acids. For example, homocysteine is formed through the transsulfuration pathway or by the demethylation of methionine via the intermediate metabolite S-adenosylmethionine, while hydroxyproline is made by a post translational modification of proline. 
Microorganisms and plants synthesize many uncommon amino acids. For example, some microbes make 2-aminoisobutyric acid and lanthionine, which is a sulfide-bridged derivative of alanine. Both of these amino acids are found in peptidic lantibiotics such as alamethicin. However, in plants, 1-aminocyclopropane-1-carboxylic acid is a small disubstituted cyclic amino acid that is an intermediate in the production of the plant hormone ethylene. Primordial synthesis The formation of amino acids and peptides is assumed to have preceded and perhaps induced the emergence of life on earth. Amino acids can form from simple precursors under various conditions. Surface-based chemical metabolism of amino acids and very small compounds may have led to the build-up of amino acids, coenzymes and phosphate-based small carbon molecules. Amino acids and similar building blocks could have been elaborated into proto-peptides, with peptides being considered key players in the origin of life. In the famous Urey-Miller experiment, the passage of an electric arc through a mixture of methane, hydrogen, and ammonia produces a large number of amino acids. Since then, scientists have discovered a range of ways and components by which the potentially prebiotic formation and chemical evolution of peptides may have occurred, such as condensing agents, the design of self-replicating peptides and a number of non-enzymatic mechanisms by which amino acids could have emerged and elaborated into peptides. Several hypotheses invoke the Strecker synthesis whereby hydrogen cyanide, simple aldehydes, ammonia, and water produce amino acids. According to a review, amino acids, and even peptides, "turn up fairly regularly in the various experimental broths that have been allowed to be cooked from simple chemicals. This is because nucleotides are far more difficult to synthesize chemically than amino acids." For a chronological order, it suggests that there must have been a 'protein world' or at least a 'polypeptide world', possibly later followed by the 'RNA world' and the 'DNA world'. Codon–amino acids mappings may be the biological information system at the primordial origin of life on Earth. While amino acids and consequently simple peptides must have formed under different experimentally probed geochemical scenarios, the transition from an abiotic world to the first life forms is to a large extent still unresolved. Reactions Amino acids undergo the reactions expected of the constituent functional groups. Peptide bond formation As both the amine and carboxylic acid groups of amino acids can react to form amide bonds, one amino acid molecule can react with another and become joined through an amide linkage. This polymerization of amino acids is what creates proteins. This condensation reaction yields the newly formed peptide bond and a molecule of water. In cells, this reaction does not occur directly; instead, the amino acid is first activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which catalyzes the attack of the amino group of the elongating protein chain on the ester bond. As a result of this mechanism, all proteins made by ribosomes are synthesized starting at their N-terminus and moving toward their C-terminus. However, not all peptide bonds are formed in this way. In a few cases, peptides are synthesized by specific enzymes. 
For example, the tripeptide glutathione is an essential part of the defenses of cells against oxidative stress. This peptide is synthesized in two steps from free amino acids. In the first step, gamma-glutamylcysteine synthetase condenses cysteine and glutamate through a peptide bond formed between the side chain carboxyl of the glutamate (the gamma carbon of this side chain) and the amino group of the cysteine. This dipeptide is then condensed with glycine by glutathione synthetase to form glutathione. In chemistry, peptides are synthesized by a variety of reactions. One of the most-used in solid-phase peptide synthesis uses the aromatic oxime derivatives of amino acids as activated units. These are added in sequence onto the growing peptide chain, which is attached to a solid resin support. Libraries of peptides are used in drug discovery through high-throughput screening. The combination of functional groups allow amino acids to be effective polydentate ligands for metal–amino acid chelates. The multiple side chains of amino acids can also undergo chemical reactions. Catabolism Degradation of an amino acid often involves deamination by moving its amino group to α-ketoglutarate, forming glutamate. This process involves transaminases, often the same as those used in amination during synthesis. In many vertebrates, the amino group is then removed through the urea cycle and is excreted in the form of urea. However, amino acid degradation can produce uric acid or ammonia instead. For example, serine dehydratase converts serine to pyruvate and ammonia. After removal of one or more amino groups, the remainder of the molecule can sometimes be used to synthesize new amino acids, or it can be used for energy by entering glycolysis or the citric acid cycle, as detailed in image at right. Complexation Amino acids are bidentate ligands, forming transition metal amino acid complexes. Chemical analysis The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.
Biology and health sciences
Biochemistry and molecular biology
null
1209
https://en.wikipedia.org/wiki/Area
Area
Area is the measure of a region's size on a surface. The area of a plane region or plane area refers to the area of a shape or planar lamina, while surface area refers to the area of an open surface or the boundary of a three-dimensional object. Area can be understood as the amount of material with a given thickness that would be necessary to fashion a model of the shape, or the amount of paint necessary to cover the surface with a single coat. It is the two-dimensional analogue of the length of a curve (a one-dimensional concept) or the volume of a solid (a three-dimensional concept). Two different regions may have the same area (as in squaring the circle); by synecdoche, "area" sometimes is used to refer to the region, as in a "polygonal area". The area of a shape can be measured by comparing the shape to squares of a fixed size. In the International System of Units (SI), the standard unit of area is the square metre (written as m²), which is the area of a square whose sides are one metre long. A shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have area one, and the area of any other shape or surface is a dimensionless real number. There are several well-known formulas for the areas of simple shapes such as triangles, rectangles, and circles. Using these formulas, the area of any polygon can be found by dividing the polygon into triangles. For shapes with curved boundary, calculus is usually required to compute the area. Indeed, the problem of determining the area of plane figures was a major motivation for the historical development of calculus. For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area. Formulas for the surface areas of simple shapes were computed by the ancient Greeks, but computing the surface area of a more complicated shape usually requires multivariable calculus. Area plays an important role in modern mathematics. In addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic property of surfaces in differential geometry. In analysis, the area of a subset of the plane is defined using Lebesgue measure, though not every subset is measurable if one supposes the axiom of choice. In general, area in higher mathematics is seen as a special case of volume for two-dimensional regions. Area can be defined through the use of axioms, defining it as a function from a collection of certain plane figures to the set of real numbers. It can be proved that such a function exists. Formal definition An approach to defining what is meant by "area" is through axioms. "Area" can be defined as a function a from a collection M of special kinds of plane figures (termed measurable sets) to the set of real numbers, which satisfies the following properties: For all S in M, a(S) ≥ 0. If S and T are in M then so are S ∪ T and S ∩ T, and also a(S ∪ T) = a(S) + a(T) − a(S ∩ T). If S and T are in M with S ⊆ T then T − S is in M and a(T − S) = a(T) − a(S). If a set S is in M and S is congruent to T then T is also in M and a(T) = a(S). Every rectangle R is in M. If the rectangle has length h and breadth k then a(R) = hk. Let Q be a set enclosed between two step regions S and T. A step region is formed from a finite union of adjacent rectangles resting on a common base, i.e. S ⊆ Q ⊆ T. If there is a unique number c such that a(S) ≤ c ≤ a(T) for all such step regions S and T, then a(Q) = c. It can be proved that such an area function actually exists. 
Units Every unit of length has a corresponding unit of area, namely the area of a square with the given side length. Thus areas can be measured in square metres (m²), square centimetres (cm²), square millimetres (mm²), square kilometres (km²), square feet (ft²), square yards (yd²), square miles (mi²), and so forth. Algebraically, these units can be thought of as the squares of the corresponding length units. The SI unit of area is the square metre, which is considered an SI derived unit. Conversions Calculation of the area of a square whose length and width are 1 metre would be: 1 metre × 1 metre = 1 m² and so, a rectangle with different sides (say length of 3 metres and width of 2 metres) would have an area in square units that can be calculated as: 3 metres × 2 metres = 6 m². This is equivalent to 6 million square millimetres. Other useful conversions are: 1 square kilometre = 1,000,000 square metres 1 square metre = 10,000 square centimetres = 1,000,000 square millimetres 1 square centimetre = 100 square millimetres. Non-metric units In non-metric units, the conversion between two square units is the square of the conversion between the corresponding length units (illustrated in the sketch below). Since 1 foot = 12 inches, the relationship between square feet and square inches is 1 square foot = 144 square inches, where 144 = 12² = 12 × 12. Similarly: 1 square yard = 9 square feet 1 square mile = 3,097,600 square yards = 27,878,400 square feet In addition, conversion factors include: 1 square inch = 6.4516 square centimetres 1 square foot = 0.09290304 square metres 1 square yard = 0.83612736 square metres 1 square mile = 2.589988110336 square kilometres Other units including historical There are several other common units for area. The are was the original unit of area in the metric system, with: 1 are = 100 square metres Though the are has fallen out of use, the hectare is still commonly used to measure land: 1 hectare = 100 ares = 10,000 square metres = 0.01 square kilometres Other uncommon metric units of area include the tetrad, the hectad, and the myriad. The acre is also commonly used to measure land areas, where 1 acre = 4,840 square yards = 43,560 square feet. An acre is approximately 40% of a hectare. On the atomic scale, area is measured in units of barns, such that: 1 barn = 10⁻²⁸ square metres. The barn is commonly used in describing the cross-sectional area of interaction in nuclear physics. In South Asia (mainly in India), although SI units are official, many people still use traditional units. Each administrative division has its own area units, some of which share names but differ in value. There is no official consensus about the values of the traditional units, so conversions between the SI units and the traditional units may give different results, depending on the reference used. Some traditional South Asian units that have a fixed value: 1 Killa = 1 acre 1 Ghumaon = 1 acre 1 Kanal = 0.125 acre (1 acre = 8 kanal) 1 Decimal = 48.4 square yards 1 Chatak = 180 square feet History Circle area In the 5th century BCE, Hippocrates of Chios was the first to show that the area of a disk (the region enclosed by a circle) is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality. Eudoxus of Cnidus, also in the 5th century BCE, also found that the area of a disk is proportional to its radius squared. Subsequently, Book I of Euclid's Elements dealt with equality of areas between two-dimensional figures. 
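The squaring rule for unit conversions described in the Non-metric units passage above can be checked with a minimal Python sketch. The unit names, the dictionary layout, and the function name are illustrative choices; the length factors are the standard exact metric equivalents quoted in this section.

```python
# Derive square-unit conversion factors by squaring the corresponding
# length conversion factors (metre is used as the base length unit).
LENGTH_IN_METRES = {
    "inch": 0.0254,
    "foot": 0.3048,
    "yard": 0.9144,
    "mile": 1609.344,
    "metre": 1.0,
    "kilometre": 1000.0,
}

def square_factor(unit: str, target: str = "metre") -> float:
    """Return how many square `target` units make up one square `unit`."""
    return (LENGTH_IN_METRES[unit] / LENGTH_IN_METRES[target]) ** 2

print(square_factor("foot"))               # 0.09290304 square metres per square foot
print(square_factor("yard"))               # 0.83612736
print(square_factor("mile", "kilometre"))  # 2.589988110336
print(square_factor("foot", "inch"))       # 144.0, matching 12 x 12
```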
The mathematician Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, in his book Measurement of a Circle. (The circumference is 2πr, and the area of a triangle is half the base times the height, yielding the area πr² for the disk.) Archimedes approximated the value of π (and hence the area of a unit-radius circle) with his doubling method, in which he inscribed a regular triangle in a circle and noted its area, then doubled the number of sides to give a regular hexagon, then repeatedly doubled the number of sides as the polygon's area got closer and closer to that of the circle (and did the same with circumscribed polygons). Triangle area Quadrilateral area In the 7th century CE, Brahmagupta developed a formula, now known as Brahmagupta's formula, for the area of a cyclic quadrilateral (a quadrilateral inscribed in a circle) in terms of its sides. In 1842, the German mathematicians Carl Anton Bretschneider and Karl Georg Christian von Staudt independently found a formula, known as Bretschneider's formula, for the area of any quadrilateral. General polygon area The development of Cartesian coordinates by René Descartes in the 17th century allowed the development of the surveyor's formula for the area of any polygon with known vertex locations by Gauss in the 19th century. Areas determined using calculus The development of integral calculus in the late 17th century provided tools that could subsequently be used for computing more complicated areas, such as the area of an ellipse and the surface areas of various curved three-dimensional objects. Area formulas Polygon formulas For a non-self-intersecting (simple) polygon, the Cartesian coordinates (xi, yi) (i = 0, 1, ..., n−1) of whose n vertices are known, the area is given by the surveyor's formula: A = ½ |(x0y1 − x1y0) + (x1y2 − x2y1) + ... + (xn−1y0 − x0yn−1)|, where when i = n−1, then i+1 is expressed as modulus n and so refers to 0 (implemented in the sketch below). Rectangles The most basic area formula is the formula for the area of a rectangle. Given a rectangle with length l and width w, the formula for the area is: A = lw (rectangle). That is, the area of the rectangle is the length multiplied by the width. As a special case, as in the case of a square, the area of a square with side length s is given by the formula: A = s² (square). The formula for the area of a rectangle follows directly from the basic properties of area, and is sometimes taken as a definition or axiom. On the other hand, if geometry is developed before arithmetic, this formula can be used to define multiplication of real numbers. Dissection, parallelograms, and triangles Most other simple formulas for area follow from the method of dissection. This involves cutting a shape into pieces, whose areas must sum to the area of the original shape. For an example, any parallelogram can be subdivided into a trapezoid and a right triangle, as shown in the figure to the left. If the triangle is moved to the other side of the trapezoid, then the resulting figure is a rectangle. It follows that the area of the parallelogram, with base b and height h, is the same as the area of the rectangle: A = bh (parallelogram). However, the same parallelogram can also be cut along a diagonal into two congruent triangles, as shown in the figure to the right. It follows that the area of each triangle is half the area of the parallelogram: A = ½bh (triangle). Similar arguments can be used to find area formulas for the trapezoid as well as more complicated polygons. 
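The surveyor's (shoelace) formula quoted above translates directly into a few lines of code. The following Python sketch is an illustration only; the function and variable names are not taken from any particular library.

```python
def polygon_area(vertices):
    """Area of a simple (non-self-intersecting) polygon via the surveyor's formula.

    `vertices` is a list of (x, y) pairs given in order around the boundary;
    the index i+1 is taken modulo n so the last vertex connects back to the first.
    """
    n = len(vertices)
    acc = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_next, y_next = vertices[(i + 1) % n]
        acc += x_i * y_next - x_next * y_i
    return abs(acc) / 2.0

# A 3-by-2 rectangle has area 6, matching the length-times-width formula.
print(polygon_area([(0, 0), (3, 0), (3, 2), (0, 2)]))  # 6.0
```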
Area of curved shapes Circles The formula for the area of a circle (more properly called the area enclosed by a circle or the area of a disk) is based on a similar method. Given a circle of radius r, it is possible to partition the circle into sectors, as shown in the figure to the right. Each sector is approximately triangular in shape, and the sectors can be rearranged to form an approximate parallelogram. The height of this parallelogram is r, and the width is half the circumference of the circle, or πr. Thus, the total area of the circle is πr²: A = πr² (circle). Though the dissection used in this formula is only approximate, the error becomes smaller and smaller as the circle is partitioned into more and more sectors. The limit of the areas of the approximate parallelograms is exactly πr², which is the area of the circle. This argument is actually a simple application of the ideas of calculus. In ancient times, the method of exhaustion was used in a similar way to find the area of the circle, and this method is now recognized as a precursor to integral calculus. Using modern methods, the area of a circle can be computed using a definite integral: A = 2 ∫[−r, r] √(r² − x²) dx = πr² (checked numerically in the sketch below). Ellipses The formula for the area enclosed by an ellipse is related to the formula of a circle; for an ellipse with semi-major and semi-minor axes a and b the formula is: A = πab. Non-planar surface area Most basic formulas for surface area can be obtained by cutting surfaces and flattening them out (see: developable surfaces). For example, if the side surface of a cylinder (or any prism) is cut lengthwise, the surface can be flattened out into a rectangle. Similarly, if a cut is made along the side of a cone, the side surface can be flattened out into a sector of a circle, and the resulting area computed. The formula for the surface area of a sphere is more difficult to derive: because a sphere has nonzero Gaussian curvature, it cannot be flattened out. The formula for the surface area of a sphere was first obtained by Archimedes in his work On the Sphere and Cylinder. The formula is: A = 4πr² (sphere), where r is the radius of the sphere. As with the formula for the area of a circle, any derivation of this formula inherently uses methods similar to calculus. General formulas Areas of 2-dimensional figures A triangle: A = ½Bh (where B is any side, and h is the distance from the line on which B lies to the other vertex of the triangle). This formula can be used if the height h is known. If the lengths of the three sides are known then Heron's formula can be used: A = √(s(s − a)(s − b)(s − c)), where a, b, c are the sides of the triangle, and s = (a + b + c)/2 is half of its perimeter. If an angle and its two included sides are given, the area is A = ½ab sin(C), where C is the given angle and a and b are its included sides. If the triangle is graphed on a coordinate plane, a matrix can be used, and the expression simplifies to the absolute value of ½(x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)). This formula is also known as the shoelace formula and is an easy way to solve for the area of a coordinate triangle by substituting the 3 points (x1,y1), (x2,y2), and (x3,y3). The shoelace formula can also be used to find the areas of other polygons when their vertices are known. Another approach for a coordinate triangle is to use calculus to find the area. A simple polygon constructed on a grid of equal-distanced points (i.e., points with integer coordinates) such that all the polygon's vertices are grid points: A = i + b/2 − 1, where i is the number of grid points inside the polygon and b is the number of boundary points. This result is known as Pick's theorem. 
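The definite integral for the area of a disk given earlier in this section can be checked numerically with a simple midpoint Riemann sum. This is a minimal Python sketch under the stated formula, not a recommended way to compute disk areas in practice.

```python
import math

def disk_area_numeric(r: float, n: int = 100_000) -> float:
    """Approximate 2 * integral of sqrt(r^2 - x^2) over [-r, r] with a
    midpoint Riemann sum; the result converges to pi * r^2."""
    dx = 2 * r / n
    total = 0.0
    for i in range(n):
        x = -r + (i + 0.5) * dx          # midpoint of the i-th subinterval
        total += 2 * math.sqrt(r * r - x * x) * dx
    return total

r = 3.0
print(disk_area_numeric(r))   # approximately 28.2743...
print(math.pi * r * r)        # 28.2743338823..., the exact value pi * r^2
```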
Area in calculus The area between a positive-valued curve and the horizontal axis, measured between two values a and b (b is defined as the larger of the two values) on the horizontal axis, is given by the integral from a to b of the function that represents the curve: A = ∫[a, b] f(x) dx. The area between the graphs of two functions is equal to the integral of one function, f(x), minus the integral of the other function, g(x): A = ∫[a, b] (f(x) − g(x)) dx, where f(x) is the curve with the greater y-value. An area bounded by a function r = r(θ) expressed in polar coordinates is: A = ½ ∫ r² dθ. The area enclosed by a parametric curve u(t) = (x(t), y(t)) with endpoints u(t0) = u(t1) is given by the line integrals: ∮ x dy = −∮ y dx = ½ ∮ (x dy − y dx), or the z-component of ½ ∮ u × du. (For details, see Green's theorem.) This is the principle of the planimeter mechanical device. Bounded area between two quadratic functions To find the bounded area between two quadratic functions, we first subtract one from the other, writing the difference as f(x) − g(x) = ax² + bx + c, where f(x) is the quadratic upper bound and g(x) is the quadratic lower bound. By the area integral formulas above and Vieta's formula, we can obtain that A = (b² − 4ac)^(3/2) / (6a²), which is verified numerically in the sketch below. The above remains valid if one of the bounding functions is linear instead of quadratic. Surface area of 3-dimensional figures Cone: πr(r + √(r² + h²)), where r is the radius of the circular base, and h is the height. That can also be rewritten as πr² + πrl or πr(r + l), where r is the radius and l is the slant height of the cone. πr² is the base area while πrl is the lateral surface area of the cone. Cube: 6s², where s is the length of an edge. Cylinder: 2πr(r + h), where r is the radius of a base and h is the height. The 2πr(r + h) can also be rewritten as πd(r + h), where d is the diameter. Prism: 2B + Ph, where B is the area of a base, P is the perimeter of a base, and h is the height of the prism. Pyramid: B + PL/2, where B is the area of the base, P is the perimeter of the base, and L is the length of the slant. Rectangular prism: 2(lw + lh + wh), where l is the length, w is the width, and h is the height. General formula for surface area The general formula for the surface area of the graph of a continuously differentiable function z = f(x, y), where (x, y) ∈ D and D is a region in the xy-plane with the smooth boundary, is: A = ∬D √((∂f/∂x)² + (∂f/∂y)² + 1) dx dy. An even more general formula for the area of the graph of a parametric surface in the vector form r = r(u, v), where r is a continuously differentiable vector function of (u, v) ∈ D, is: A = ∬D |∂r/∂u × ∂r/∂v| du dv. List of formulas The above calculations show how to find the areas of many common shapes. The areas of irregular (and thus arbitrary) polygons can be calculated using the "Surveyor's formula" (shoelace formula). Relation of area to perimeter The isoperimetric inequality states that, for a closed curve of length L (so the region it encloses has perimeter L) and for area A of the region that it encloses, 4πA ≤ L², and equality holds if and only if the curve is a circle. Thus a circle has the largest area of any closed figure with a given perimeter. At the other extreme, a figure with given perimeter L could have an arbitrarily small area, as illustrated by a rhombus that is "tipped over" arbitrarily far so that two of its angles are arbitrarily close to 0° and the other two are arbitrarily close to 180°. For a circle, the ratio of the area to the circumference (the term for the perimeter of a circle) equals half the radius r. This can be seen from the area formula πr² and the circumference formula 2πr. The area of a regular polygon is half its perimeter times the apothem (where the apothem is the distance from the center to the nearest point on any side). 
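The closed form for the area between two quadratics quoted above can be checked against a direct numerical integral. The Python sketch below is illustrative; the example coefficients are arbitrary, and the midpoint-rule integrator is only a rough stand-in for a proper quadrature routine.

```python
def quad_gap_area(a: float, b: float, c: float) -> float:
    """Closed-form area enclosed between two quadratics whose difference is
    f(x) - g(x) = a*x^2 + b*x + c (requires b^2 - 4ac > 0 so the curves cross)."""
    disc = b * b - 4 * a * c
    return disc ** 1.5 / (6 * a * a)

def numeric_area(a: float, b: float, c: float, n: int = 200_000) -> float:
    """Midpoint-rule integral of |a*x^2 + b*x + c| between its two roots."""
    root = (b * b - 4 * a * c) ** 0.5
    x1, x2 = sorted(((-b - root) / (2 * a), (-b + root) / (2 * a)))
    dx = (x2 - x1) / n
    return sum(abs(a * x * x + b * x + c) * dx
               for x in (x1 + (i + 0.5) * dx for i in range(n)))

# Difference f(x) - g(x) = -x^2 + 4: both methods give 32/3 = 10.666...
print(quad_gap_area(-1, 0, 4))   # 10.666666...
print(numeric_area(-1, 0, 4))    # 10.666666...
```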
Fractals Doubling the edge lengths of a polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the dimension of the space the polygon resides in). But if the one-dimensional lengths of a fractal drawn in two dimensions are all doubled, the spatial content of the fractal scales by a power of two that is not necessarily an integer. This power is called the fractal dimension of the fractal. Area bisectors There are an infinitude of lines that bisect the area of a triangle. Three of them are the medians of the triangle (which connect the sides' midpoints with the opposite vertices), and these are concurrent at the triangle's centroid; indeed, they are the only area bisectors that go through the centroid. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter (the center of its incircle). There are either one, two, or three of these for any given triangle. Any line through the midpoint of a parallelogram bisects the area. All area bisectors of a circle or other ellipse go through the center, and any chords through the center bisect the area. In the case of a circle they are the diameters of the circle. Optimization Given a wire contour, the surface of least area spanning ("filling") it is a minimal surface. Familiar examples include soap bubbles. The question of the filling area of the Riemannian circle remains open. The circle has the largest area of any two-dimensional object having the same perimeter. A cyclic polygon (one inscribed in a circle) has the largest area of any polygon with a given number of sides of the same lengths. A version of the isoperimetric inequality for triangles states that the triangle of greatest area among all those with a given perimeter is equilateral. The triangle of largest area of all those inscribed in a given circle is equilateral; and the triangle of smallest area of all those circumscribed around a given circle is equilateral. The ratio of the area of the incircle to the area of an equilateral triangle, , is larger than that of any non-equilateral triangle. The ratio of the area to the square of the perimeter of an equilateral triangle, is larger than that for any other triangle.
Mathematics
Geometry and topology
null
1210
https://en.wikipedia.org/wiki/Astronomical%20unit
Astronomical unit
The astronomical unit (symbol: au or AU) is a unit of length defined to be exactly equal to . Historically, the astronomical unit was conceived as the average Earth-Sun distance (the average of Earth's aphelion and perihelion), before its modern redefinition in 2012. The astronomical unit is used primarily for measuring distances within the Solar System or around other stars. It is also a fundamental component in the definition of another unit of astronomical length, the parsec. One au is equivalent to 499 light-seconds to within 10 parts per million. History of symbol usage A variety of unit symbols and abbreviations have been in use for the astronomical unit. In a 1976 resolution, the International Astronomical Union (IAU) had used the symbol A to denote a length equal to the astronomical unit. In the astronomical literature, the symbol AU is common. In 2006, the International Bureau of Weights and Measures (BIPM) had recommended ua as the symbol for the unit, from the French "unité astronomique". In the non-normative Annex C to ISO 80000-3:2006 (later withdrawn), the symbol of the astronomical unit was also ua. In 2012, the IAU, noting "that various symbols are presently in use for the astronomical unit", recommended the use of the symbol "au". The scientific journals published by the American Astronomical Society and the Royal Astronomical Society subsequently adopted this symbol. In the 2014 revision and 2019 edition of the SI Brochure, the BIPM used the unit symbol "au". ISO 80000-3:2019, which replaces ISO 80000-3:2006, does not mention the astronomical unit. Development of unit definition Earth's orbit around the Sun is an ellipse. The semi-major axis of this elliptic orbit is defined to be half of the straight line segment that joins the perihelion and aphelion. The centre of the Sun lies on this straight line segment, but not at its midpoint. Because ellipses are well-understood shapes, measuring the points of its extremes defined the exact shape mathematically, and made possible calculations for the entire orbit as well as predictions based on observation. In addition, it mapped out exactly the largest straight-line distance that Earth traverses over the course of a year, defining times and places for observing the largest parallax (apparent shifts of position) in nearby stars. Knowing Earth's shift and a star's shift enabled the star's distance to be calculated. But all measurements are subject to some degree of error or uncertainty, and the uncertainties in the length of the astronomical unit only increased uncertainties in the stellar distances. Improvements in precision have always been a key to improving astronomical understanding. Throughout the twentieth century, measurements became increasingly precise and sophisticated, and ever more dependent on accurate observation of the effects described by Einstein's theory of relativity and upon the mathematical tools it used. Improving measurements were continually checked and cross-checked by means of improved understanding of the laws of celestial mechanics, which govern the motions of objects in space. The expected positions and distances of objects at an established time are calculated (in au) from these laws, and assembled into a collection of data called an ephemeris. NASA Jet Propulsion Laboratory HORIZONS System provides one of several ephemeris computation services. In 1976, to establish a more precise measure for the astronomical unit, the IAU formally adopted a new definition. 
Although directly based on the then-best available observational measurements, the definition was recast in terms of the then-best mathematical derivations from celestial mechanics and planetary ephemerides. It stated that "the astronomical unit of length is that length (A) for which the Gaussian gravitational constant (k) takes the value 0.01720209895 when the units of measurement are the astronomical units of length, mass and time". Equivalently, by this definition, one au is "the radius of an unperturbed circular Newtonian orbit about the sun of a particle having infinitesimal mass, moving with an angular frequency of 0.01720209895 radians per day"; or alternatively that length for which the heliocentric gravitational constant (the product GM☉) is equal to (0.01720209895)² au³/d², when the length is used to describe the positions of objects in the Solar System. Subsequent explorations of the Solar System by space probes made it possible to obtain precise measurements of the relative positions of the inner planets and other objects by means of radar and telemetry. As with all radar measurements, these rely on measuring the time taken for photons to be reflected from an object. Because all photons move at the speed of light in vacuum, a fundamental constant of the universe, the distance of an object from the probe is calculated as the product of the speed of light and the measured time. However, for precision the calculations require adjustment for things such as the motions of the probe and object while the photons are transiting. In addition, the measurement of the time itself must be translated to a standard scale that accounts for relativistic time dilation. Comparison of the ephemeris positions with time measurements expressed in Barycentric Dynamical Time (TDB) leads to a value for the speed of light in astronomical units per day (of about 173.14). By 2009, the IAU had updated its standard measures to reflect improvements, and calculated the speed of light at about 173.1446 au/d (TDB). In 1983, the CIPM modified the International System of Units (SI) to make the metre defined as the distance travelled in a vacuum by light in 1/299792458 of a second. This replaced the previous definition, valid between 1960 and 1983, which was that the metre equalled a certain number of wavelengths of a certain emission line of krypton-86. (The reason for the change was an improved method of measuring the speed of light.) The speed of light could then be expressed exactly as c0 = 299792458 m/s, a standard also adopted by the IERS numerical standards. From this definition and the 2009 IAU standard, the time for light to traverse an astronomical unit is found to be τA = 499.004784 s, which is slightly more than 8 minutes 19 seconds. By multiplication, the best IAU 2009 estimate was A = c0τA = 149597870700 m (±3 m), based on a comparison of Jet Propulsion Laboratory and IAA–RAS ephemerides. In 2006, the BIPM reported a value of the astronomical unit as . In the 2014 revision of the SI Brochure, the BIPM recognised the IAU's 2012 redefinition of the astronomical unit as exactly 149597870700 m. This estimate was still derived from observation and measurements subject to error, and based on techniques that did not yet standardize all relativistic effects, and thus were not constant for all observers. In 2012, finding that the equalization of relativity alone would make the definition overly complex, the IAU simply used the 2009 estimate to redefine the astronomical unit as a conventional unit of length directly tied to the metre (exactly 149597870700 m). 
The new definition recognizes as a consequence that the astronomical unit has reduced importance, limited in use to a convenience in some applications. 1 astronomical unit = 149597870700 metres (by definition) = 1.495978707 × 10¹¹ metres (exactly) ≈ 92.956 million miles ≈ 499.005 light-seconds ≈ 4.848 × 10⁻⁶ parsecs ≈ 1.581 × 10⁻⁵ light-years. This definition makes the speed of light, defined as exactly 299792458 m/s, equal to exactly 299792458 × 86400 ÷ 149597870700, or about 173.144632674 au/d, some 60 parts per trillion less than the 2009 estimate. Usage and significance With the definitions used before 2012, the astronomical unit was dependent on the heliocentric gravitational constant, that is the product of the gravitational constant, G, and the solar mass, M☉. Neither G nor M☉ can be measured to high accuracy separately, but the value of their product is known very precisely from observing the relative positions of planets (Kepler's third law expressed in terms of Newtonian gravitation). Only the product GM☉ is required to calculate planetary positions for an ephemeris, so ephemerides are calculated in astronomical units and not in SI units. The calculation of ephemerides also requires a consideration of the effects of general relativity. In particular, time intervals measured on Earth's surface (Terrestrial Time, TT) are not constant when compared with the motions of the planets: the terrestrial second (TT) appears to be longer near January and shorter near July when compared with the "planetary second" (conventionally measured in TDB). This is because the distance between Earth and the Sun is not fixed (it varies between about 147.1 million and 152.1 million kilometres) and, when Earth is closer to the Sun (perihelion), the Sun's gravitational field is stronger and Earth is moving faster along its orbital path. As the metre is defined in terms of the second and the speed of light is constant for all observers, the terrestrial metre appears to change in length compared with the "planetary metre" on a periodic basis. The metre is defined to be a unit of proper length. Indeed, the International Committee for Weights and Measures (CIPM) notes that "its definition applies only within a spatial extent sufficiently small that the effects of the non-uniformity of the gravitational field can be ignored". As such, a distance within the Solar System without specifying the frame of reference for the measurement is problematic. The 1976 definition of the astronomical unit was incomplete because it did not specify the frame of reference in which to apply the measurement, but proved practical for the calculation of ephemerides: a fuller definition that is consistent with general relativity was proposed, and "vigorous debate" ensued until August 2012 when the IAU adopted the current definition of 1 astronomical unit = 149597870700 metres. The astronomical unit is typically used for stellar system scale distances, such as the size of a protostellar disk or the heliocentric distance of an asteroid, whereas other units are used for other distances in astronomy. The astronomical unit is too small to be convenient for interstellar distances, where the parsec and light-year are widely used. The parsec (parallax arcsecond) is defined in terms of the astronomical unit, being the distance of an object with a parallax of one arcsecond. The light-year is often used in popular works, but is not an approved non-SI unit and is rarely used by professional astronomers. 
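The equivalences listed above follow directly from the defined values of the astronomical unit and of the speed of light. The short Python sketch below is illustrative; the constant names are arbitrary, and only the exact defined values quoted in this article are used as inputs.

```python
# Defined constants: the astronomical unit (2012 IAU definition) and the speed of light.
AU_IN_METRES = 149_597_870_700        # exact, by definition
C_METRES_PER_SECOND = 299_792_458     # exact, by definition
SECONDS_PER_DAY = 86_400
METRES_PER_MILE = 1609.344            # exact international mile

light_time = AU_IN_METRES / C_METRES_PER_SECOND                        # "light time for unit distance"
c_in_au_per_day = C_METRES_PER_SECOND * SECONDS_PER_DAY / AU_IN_METRES

print(f"1 au = {light_time:.6f} light-seconds")                        # 499.004784
print(f"speed of light = {c_in_au_per_day:.9f} au/day")                # ~173.144632674
print(f"1 au = {AU_IN_METRES / METRES_PER_MILE / 1e6:.3f} million miles")  # ~92.956
```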
When simulating a numerical model of the Solar System, the astronomical unit provides an appropriate scale that minimizes (overflow, underflow and truncation) errors in floating point calculations. History The book On the Sizes and Distances of the Sun and Moon, which is ascribed to Aristarchus, says the distance to the Sun is 18 to 20 times the distance to the Moon, whereas the true ratio is about . The latter estimate was based on the angle between the half-moon and the Sun, which he estimated as (the true value being close to ). Depending on the distance that van Helden assumes Aristarchus used for the distance to the Moon, his calculated distance to the Sun would fall between and Earth radii. Hipparchus gave an estimate of the distance of Earth from the Sun, quoted by Pappus as equal to 490 Earth radii. According to the conjectural reconstructions of Noel Swerdlow and G. J. Toomer, this was derived from his assumption of a "least perceptible" solar parallax of . A Chinese mathematical treatise, the Zhoubi Suanjing (), shows how the distance to the Sun can be computed geometrically, using the different lengths of the noontime shadows observed at three places li apart and the assumption that Earth is flat. According to Eusebius in the Praeparatio evangelica (Book XV, Chapter 53), Eratosthenes found the distance to the Sun to be "σταδιων μυριαδας τετρακοσιας και οκτωκισμυριας" (literally "of stadia myriads 400 and ") but with the additional note that in the Greek text the grammatical agreement is between myriads (not stadia) on the one hand and both 400 and on the other: all three are accusative plural, while σταδιων is genitive plural ("of stadia") . All three words (or all four including stadia) are inflected. This has been translated either as stadia (1903 translation by Edwin Hamilton Gifford), or as stadia (edition of Édouard des Places, dated 1974–1991). Using the Greek stadium of 185 to 190 metres, the former translation comes to to , which is far too low, whereas the second translation comes to 148.7 to 152.8 billion metres (accurate within 2%). In the 2nd century CE, Ptolemy estimated the mean distance of the Sun as times Earth's radius. To determine this value, Ptolemy started by measuring the Moon's parallax, finding what amounted to a horizontal lunar parallax of 1° 26′, which was much too large. He then derived a maximum lunar distance of Earth radii. Because of cancelling errors in his parallax figure, his theory of the Moon's orbit, and other factors, this figure was approximately correct. He then measured the apparent sizes of the Sun and the Moon and concluded that the apparent diameter of the Sun was equal to the apparent diameter of the Moon at the Moon's greatest distance, and from records of lunar eclipses, he estimated this apparent diameter, as well as the apparent diameter of the shadow cone of Earth traversed by the Moon during a lunar eclipse. Given these data, the distance of the Sun from Earth can be trigonometrically computed to be Earth radii. This gives a ratio of solar to lunar distance of approximately 19, matching Aristarchus's figure. Although Ptolemy's procedure is theoretically workable, it is very sensitive to small changes in the data, so much so that changing a measurement by a few per cent can make the solar distance infinite. After Greek astronomy was transmitted to the medieval Islamic world, astronomers made some changes to Ptolemy's cosmological model, but did not greatly change his estimate of the Earth–Sun distance. 
For example, in his introduction to Ptolemaic astronomy, al-Farghānī gave a mean solar distance of Earth radii, whereas in his zij, al-Battānī used a mean solar distance of Earth radii. Subsequent astronomers, such as al-Bīrūnī, used similar values. Later in Europe, Copernicus and Tycho Brahe also used comparable figures ( and Earth radii), and so Ptolemy's approximate Earth–Sun distance survived through the 16th century. Johannes Kepler was the first to realize that Ptolemy's estimate must be significantly too low (according to Kepler, at least by a factor of three) in his Rudolphine Tables (1627). Kepler's laws of planetary motion allowed astronomers to calculate the relative distances of the planets from the Sun, and rekindled interest in measuring the absolute value for Earth (which could then be applied to the other planets). The invention of the telescope allowed far more accurate measurements of angles than is possible with the naked eye. Flemish astronomer Godefroy Wendelin repeated Aristarchus’ measurements in 1635, and found that Ptolemy's value was too low by a factor of at least eleven. A somewhat more accurate estimate can be obtained by observing the transit of Venus. By measuring the transit in two different locations, one can accurately calculate the parallax of Venus and from the relative distance of Earth and Venus from the Sun, the solar parallax (which cannot be measured directly due to the brightness of the Sun). Jeremiah Horrocks had attempted to produce an estimate based on his observation of the 1639 transit (published in 1662), giving a solar parallax of , similar to Wendelin's figure. The solar parallax is related to the Earth–Sun distance as measured in Earth radii by The smaller the solar parallax, the greater the distance between the Sun and Earth: a solar parallax of is equivalent to an Earth–Sun distance of Earth radii. Christiaan Huygens believed that the distance was even greater: by comparing the apparent sizes of Venus and Mars, he estimated a value of about Earth radii, equivalent to a solar parallax of . Although Huygens' estimate is remarkably close to modern values, it is often discounted by historians of astronomy because of the many unproven (and incorrect) assumptions he had to make for his method to work; the accuracy of his value seems to be based more on luck than good measurement, with his various errors cancelling each other out. Jean Richer and Giovanni Domenico Cassini measured the parallax of Mars between Paris and Cayenne in French Guiana when Mars was at its closest to Earth in 1672. They arrived at a figure for the solar parallax of , equivalent to an Earth–Sun distance of about Earth radii. They were also the first astronomers to have access to an accurate and reliable value for the radius of Earth, which had been measured by their colleague Jean Picard in 1669 as toises. This same year saw another estimate for the astronomical unit by John Flamsteed, which accomplished it alone by measuring the martian diurnal parallax. Another colleague, Ole Rømer, discovered the finite speed of light in 1676: the speed was so great that it was usually quoted as the time required for light to travel from the Sun to the Earth, or "light time per unit distance", a convention that is still followed by astronomers today. A better method for observing Venus transits was devised by James Gregory and published in his Optica Promata (1663). 
It was strongly advocated by Edmond Halley and was applied to the transits of Venus observed in 1761 and 1769, and then again in 1874 and 1882. Transits of Venus occur in pairs, but less than one pair every century, and observing the transits in 1761 and 1769 was an unprecedented international scientific operation including observations by James Cook and Charles Green from Tahiti. Despite the Seven Years' War, dozens of astronomers were dispatched to observing points around the world at great expense and personal danger: several of them died in the endeavour. The various results were collated by Jérôme Lalande to give a figure for the solar parallax of . Karl Rudolph Powalky had made another estimate of the solar parallax in 1864. Another method involved determining the constant of aberration. Simon Newcomb gave great weight to this method when deriving his widely accepted value for the solar parallax (close to the modern value), although Newcomb also used data from the transits of Venus. Newcomb also collaborated with A. A. Michelson to measure the speed of light with Earth-based equipment; combined with the constant of aberration (which is related to the light time per unit distance), this gave the first direct measurement of the Earth–Sun distance in metres. Newcomb's value for the solar parallax (and for the constant of aberration and the Gaussian gravitational constant) were incorporated into the first international system of astronomical constants in 1896, which remained in place for the calculation of ephemerides until 1964. The name "astronomical unit" appears first to have been used in 1903. The discovery of the near-Earth asteroid 433 Eros and its passage near Earth in 1900–1901 allowed a considerable improvement in parallax measurement. Another international project to measure the parallax of 433 Eros was undertaken in 1930–1931. Direct radar measurements of the distances to Venus and Mars became available in the early 1960s. Along with improved measurements of the speed of light, these showed that Newcomb's values for the solar parallax and the constant of aberration were inconsistent with one another. Developments The unit distance (the value of the astronomical unit in metres) can be expressed in terms of other astronomical constants: A = (G M☉ D² / k²)^(1/3), where G is the Newtonian constant of gravitation, M☉ is the solar mass, k is the numerical value of the Gaussian gravitational constant and D is the time period of one day. The Sun is constantly losing mass by radiating away energy, so the orbits of the planets are steadily expanding outward from the Sun. This has led to calls to abandon the astronomical unit as a unit of measurement. As the speed of light has an exact defined value in SI units and the Gaussian gravitational constant is fixed in the astronomical system of units, measuring the light time per unit distance is exactly equivalent to measuring the product G × M☉ in SI units. Hence, it is possible to construct ephemerides entirely in SI units, which is increasingly becoming the norm. A 2004 analysis of radiometric measurements in the inner Solar System suggested that the secular increase in the unit distance was much larger than can be accounted for by solar radiation, + metres per century. The measurements of the secular variations of the astronomical unit are not confirmed by other authors and are quite controversial. Furthermore, since 2010, the astronomical unit has not been estimated by the planetary ephemerides. Examples The following table contains some distances given in astronomical units. 
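The relation between the unit distance and the other astronomical constants quoted above can be checked numerically. In the Python sketch below, the heliocentric gravitational constant GM☉ ≈ 1.327 × 10²⁰ m³/s² is an assumed input value not given in this article, while the Gaussian constant and the day length are the standard values quoted earlier; the computed unit distance comes out close to the defined 1.496 × 10¹¹ m.

```python
# A = (G * M_sun * D**2 / k**2) ** (1/3)
GM_SUN = 1.32712440018e20   # heliocentric gravitational constant, m^3/s^2 (assumed input)
K_GAUSS = 0.01720209895     # Gaussian gravitational constant
D_SECONDS = 86_400          # one day, in seconds

unit_distance = (GM_SUN * D_SECONDS**2 / K_GAUSS**2) ** (1.0 / 3.0)
print(f"A = {unit_distance:.6e} m")   # about 1.496e+11 m, i.e. roughly 1 au
```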
It includes some examples with distances that are normally not given in astronomical units, because they are either too short or far too long. Distances normally change over time. Examples are listed by increasing distance.
Physical sciences
Length and distance
null
1242
https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29
Ada (programming language)
Ada is a structured, statically typed, imperative, and object-oriented high-level programming language, inspired by Pascal and other languages. It has built-in language support for design by contract (DbC), extremely strong typing, explicit concurrency, tasks, synchronous message passing, protected objects, and non-determinism. Ada improves code safety and maintainability by using the compiler to find errors at compile time rather than leaving them to appear as runtime errors. Ada is an international technical standard, jointly defined by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). As of 2023, the standard, informally called Ada 2022, is ISO/IEC 8652:2023. Ada was originally designed by a team led by French computer scientist Jean Ichbiah of Honeywell under contract to the United States Department of Defense (DoD) from 1977 to 1983 to supersede over 450 programming languages used by the DoD at that time. Ada was named after Ada Lovelace (1815–1852), who has been credited as the first computer programmer. Features Ada was originally designed for embedded and real-time systems. The Ada 95 revision, designed by S. Tucker Taft of Intermetrics between 1992 and 1995, improved support for systems, numerical, financial, and object-oriented programming (OOP). Features of Ada include: strong typing, modular programming mechanisms (packages), run-time checking, parallel processing (tasks, synchronous message passing, protected objects, and nondeterministic select statements), exception handling, and generics. Ada 95 added support for object-oriented programming, including dynamic dispatch. The syntax of Ada minimizes choices of ways to perform basic operations, and prefers English keywords (such as "or else" and "and then") to symbols (such as "||" and "&&"). Ada uses the basic arithmetical operators "+", "-", "*", and "/", but avoids using other symbols. Code blocks are delimited by words such as "declare", "begin", and "end", where the "end" (in most cases) is followed by the identifier of the block it closes (e.g., if ... end if, loop ... end loop). In the case of conditional blocks, this avoids a dangling else that could pair with the wrong nested if-expression in other languages like C or Java. Ada is designed for developing very large software systems. Ada packages can be compiled separately. Ada package specifications (the package interface) can also be compiled separately without the implementation to check for consistency. This makes it possible to detect problems early during the design phase, before implementation starts. A large number of compile-time checks are supported to help avoid bugs that would not be detectable until run-time in some other languages or would require explicit checks to be added to the source code. For example, the syntax requires explicitly named closing of blocks to prevent errors due to mismatched end tokens. The adherence to strong typing allows detecting many common software errors (wrong parameters, range violations, invalid references, mismatched types, etc.) either at compile time or otherwise at run time. As concurrency is part of the language specification, the compiler can in some cases detect potential deadlocks. Compilers also commonly check for misspelled identifiers, visibility of packages, redundant declarations, etc. and can provide warnings and useful suggestions on how to fix the error. 
Ada also supports run-time checks to protect against access to unallocated memory, buffer overflow errors, range violations, off-by-one errors, array access errors, and other detectable bugs. These checks can be disabled in the interest of runtime efficiency, but can often be compiled efficiently. It also includes facilities to help program verification. For these reasons, Ada is sometimes used in critical systems, where any anomaly might lead to very serious consequences, e.g., accidental death, injury or severe financial loss. Examples of systems where Ada is used include avionics, air traffic control, railways, banking, military and space technology. Ada's dynamic memory management is high-level and type-safe. Ada has no generic or untyped pointers; nor does it implicitly declare any pointer type. Instead, all dynamic memory allocation and deallocation must occur via explicitly declared access types. Each access type has an associated storage pool that handles the low-level details of memory management; the programmer can either use the default storage pool or define new ones (this is particularly relevant for Non-Uniform Memory Access). It is even possible to declare several different access types that all designate the same type but use different storage pools. Also, the language provides for accessibility checks, both at compile time and at run time, that ensures that an access value cannot outlive the type of the object it points to. Though the semantics of the language allow automatic garbage collection of inaccessible objects, most implementations do not support it by default, as it would cause unpredictable behaviour in real-time systems. Ada does support a limited form of region-based memory management; also, creative use of storage pools can provide for a limited form of automatic garbage collection, since destroying a storage pool also destroys all the objects in the pool. A double-dash ("--"), resembling an em dash, denotes comment text. Comments stop at end of line; there is intentionally no way to make a comment span multiple lines, to prevent unclosed comments from accidentally voiding whole sections of source code. Disabling a whole block of code therefore requires the prefixing of each line (or column) individually with "--". While this clearly denotes disabled code by creating a column of repeated "--" down the page, it also renders the experimental dis/re-enablement of large blocks a more drawn-out process in editors without block commenting support. The semicolon (";") is a statement terminator, and the null or no-operation statement is null;. A single ; without a statement to terminate is not allowed. Unlike most ISO standards, the Ada language definition (known as the Ada Reference Manual or ARM, or sometimes the Language Reference Manual or LRM) is free content. Thus, it is a common reference for Ada programmers, not only programmers implementing Ada compilers. Apart from the reference manual, there is also an extensive rationale document which explains the language design and the use of various language constructs. This document is also widely used by programmers. When the language was revised, a new rationale document was written. One notable free software tool that is used by many Ada programmers to aid them in writing Ada source code is the GNAT Programming Studio, and GNAT which is part of the GNU Compiler Collection. Alire is a package and toolchain management tool for Ada. 
History In the 1970s the US Department of Defense (DoD) became concerned by the number of different programming languages being used for its embedded computer system projects, many of which were obsolete or hardware-dependent, and none of which supported safe modular programming. In 1975, a working group, the High Order Language Working Group (HOLWG), was formed with the intent to reduce this number by finding or creating a programming language generally suitable for the department's and the UK Ministry of Defence's requirements. After many iterations beginning with an original straw-man proposal the eventual programming language was named Ada. The total number of high-level programming languages in use for such projects fell from over 450 in 1983 to 37 by 1996. HOLWG crafted the Steelman language requirements , a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications. The requirements were created by the United States Department of Defense in The Department of Defense Common High Order Language program in 1978. The predecessors of this document were called, in order, "Strawman", "Woodenman", "Tinman" and "Ironman". The requirements focused on the needs of embedded computer applications, and emphasised reliability, maintainability, and efficiency. Notably, they included exception handling facilities, run-time checking, and parallel computing. It was concluded that no existing language met these criteria to a sufficient extent, so a contest was called to create a language that would be closer to fulfilling them. The design that won this contest became the Ada programming language. The resulting language followed the Steelman requirements closely, though not exactly. Requests for proposals for a new programming language were issued and four contractors were hired to develop their proposals under the names of Red (Intermetrics led by Benjamin Brosgol), Green (Honeywell, led by Jean Ichbiah), Blue (SofTech, led by John Goodenough) and Yellow (SRI International, led by Jay Spitzen). In April 1978, after public scrutiny, the Red and Green proposals passed to the next phase. In May 1979, the Green proposal, designed by Jean Ichbiah at Honeywell, was chosen and given the name Ada—after Augusta Ada King, Countess of Lovelace, usually known as Ada Lovelace. This proposal was influenced by the language LIS that Ichbiah and his group had developed in the 1970s. The preliminary Ada reference manual was published in ACM SIGPLAN Notices in June 1979. The Military Standard reference manual was approved on December 10, 1980 (Ada Lovelace's birthday), and given the number MIL-STD-1815 in honor of Ada Lovelace's birth year. In 1981, Tony Hoare took advantage of his Turing Award speech to criticize Ada for being overly complex and hence unreliable, but subsequently seemed to recant in the foreword he wrote for an Ada textbook. Ada attracted much attention from the programming community as a whole during its early days. Its backers and others predicted that it might become a dominant language for general purpose programming and not only defense-related work. Ichbiah publicly stated that within ten years, only two programming languages would remain: Ada and Lisp. Early Ada compilers struggled to implement the large, complex language, and both compile-time and run-time performance tended to be slow and tools primitive. 
Compiler vendors expended most of their efforts in passing the massive, language-conformance-testing, government-required Ada Compiler Validation Capability (ACVC) validation suite that was required in another novel feature of the Ada language effort. The first validated Ada implementation was the NYU Ada/Ed translator, certified on April 11, 1983. NYU Ada/Ed is implemented in the high-level set language SETL. Several commercial companies began offering Ada compilers and associated development tools, including Alsys, TeleSoft, DDC-I, Advanced Computer Techniques, Tartan Laboratories, Irvine Compiler, TLD Systems, and Verdix. Computer manufacturers who had a significant business in the defense, aerospace, or related industries, also offered Ada compilers and tools on their platforms; these included Concurrent Computer Corporation, Cray Research, Inc., Digital Equipment Corporation, Harris Computer Systems, and Siemens Nixdorf Informationssysteme AG. In 1991, the US Department of Defense began to require the use of Ada (the Ada mandate) for all software, though exceptions to this rule were often granted. The Department of Defense Ada mandate was effectively removed in 1997, as the DoD began to embrace commercial off-the-shelf (COTS) technology. Similar requirements existed in other NATO countries: Ada was required for NATO systems involving command and control and other functions, and Ada was the mandated or preferred language for defense-related applications in countries such as Sweden, Germany, and Canada. By the late 1980s and early 1990s, Ada compilers had improved in performance, but there were still barriers to fully exploiting Ada's abilities, including a tasking model that was different from what most real-time programmers were used to. Because of Ada's safety-critical support features, it is now used not only for military applications, but also in commercial projects where a software bug can have severe consequences, e.g., avionics and air traffic control, commercial rockets such as the Ariane 4 and 5, satellites and other space systems, railway transport and banking. For example, the Primary Flight Control System, the fly-by-wire system software in the Boeing 777, was written in Ada, as were the fly-by-wire systems for the aerodynamically unstable Eurofighter Typhoon, Saab Gripen, Lockheed Martin F-22 Raptor and the DFCS replacement flight control system for the Grumman F-14 Tomcat. The Canadian Automated Air Traffic System was written in 1 million lines of Ada (SLOC count). It featured advanced distributed processing, a distributed Ada database, and object-oriented design. Ada is also used in other air traffic systems, e.g., the UK's next-generation Interim Future Area Control Tools Support () air traffic control system is designed and implemented using SPARK Ada. It is also used in the French TVM in-cab signalling system on the TGV high-speed rail system, and the metro suburban trains in Paris, London, Hong Kong and New York City. The Ada 95 revision of the language went beyond the Steelman requirements, targeting general-purpose systems in addition to embedded ones, and adding features supporting object-oriented programming. Standardization Preliminary Ada can be found in ACM Sigplan Notices Vol 14, No 6, June 1979 Ada was first published in 1980 as an ANSI standard ANSI/MIL-STD 1815. As this very first version held many errors and inconsistencies , the revised edition was published in 1983 as ANSI/MIL-STD 1815A. Without any further changes, it became an ISO standard in 1987. 
This version of the language is commonly known as Ada 83, from the date of its adoption by ANSI, but is sometimes referred to also as Ada 87, from the date of its adoption by ISO. There is also a French translation; DIN translated it into German as DIN 66268 in 1988.

Ada 95, the joint ISO/IEC/ANSI standard ISO/IEC 8652:1995, was published in February 1995, making it the first ISO standard object-oriented programming language. To help with the standard revision and future acceptance, the US Air Force funded the development of the GNAT Compiler. Presently, the GNAT Compiler is part of the GNU Compiler Collection.

Work has continued on improving and updating the technical content of the Ada language. A Technical Corrigendum to Ada 95 was published in October 2001, and a major Amendment, ISO/IEC 8652:1995/Amd 1:2007, was published on March 9, 2007, commonly known as Ada 2005 because work on the new standard was finished that year.

At the Ada-Europe 2012 conference in Stockholm, the Ada Resource Association (ARA) and Ada-Europe announced the completion of the design of the latest version of the Ada language and the submission of the reference manual to ISO/IEC JTC 1/SC 22/WG 9 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) for approval. ISO/IEC 8652:2012 (see Ada 2012 RM) was published in December 2012, known as Ada 2012. A technical corrigendum, ISO/IEC 8652:2012/COR 1:2016, was published (see RM 2012 with TC 1). On May 2, 2023, the Ada community saw the formal approval of publication of the Ada 2022 edition of the programming language standard.

Despite the names Ada 83, 95, etc., legally there is only one Ada standard, that of the most recent ISO/IEC standard: with the acceptance of a new standard version, the previous one is withdrawn. The other names are just informal ones referencing a certain edition.

Other related standards include ISO/IEC 8651-3:1988 Information processing systems—Computer graphics—Graphical Kernel System (GKS) language bindings—Part 3: Ada.

Language constructs

Ada is an ALGOL-like programming language featuring control structures with reserved words such as if, then, else, while, for, and so on. However, Ada also has many data structuring facilities and other abstractions which were not included in the original ALGOL 60, such as type definitions, records, pointers, and enumerations. Such constructs were in part inherited from or inspired by Pascal.

"Hello, world!" in Ada

A common example of a language's syntax is the Hello world program (hello.adb):

with Ada.Text_IO;

procedure Hello is
begin
   Ada.Text_IO.Put_Line ("Hello, world!");
end Hello;

This program can be compiled by using the freely available open source compiler GNAT, by executing

gnatmake hello.adb

Data types

Ada's type system is not based on a set of predefined primitive types but allows users to declare their own types. This declaration in turn is not based on the internal representation of the type but on describing the goal which should be achieved. This allows the compiler to determine a suitable memory size for the type, and to check for violations of the type definition at compile time and run time (i.e., range violations, buffer overruns, type consistency, etc.). Ada supports numerical types defined by a range, modulo types, aggregate types (records and arrays), and enumeration types. Access types define a reference to an instance of a specified type; untyped pointers are not permitted.
Special types provided by the language are task types and protected types. For example, a date might be represented as:

type Day_type   is range    1 ..   31;
type Month_type is range    1 ..   12;
type Year_type  is range 1800 .. 2100;
type Hours is mod 24;
type Weekday is (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday);

type Date is
   record
      Day   : Day_type;
      Month : Month_type;
      Year  : Year_type;
   end record;

It is important to note that Day_type, Month_type, Year_type and Hours are incompatible types, meaning that, for instance, the following expression is illegal:

Today         : Day_type   := 4;
Current_Month : Month_type := 10;

...  Today + Current_Month ...  -- illegal

The predefined plus-operator can only add values of the same type, so the expression is illegal.

Types can be refined by declaring subtypes:

subtype Working_Hours is Hours range 0 .. 12;            -- at most 12 Hours to work a day
subtype Working_Day   is Weekday range Monday .. Friday; -- Days to work

Work_Load : constant array (Working_Day) of Working_Hours  -- implicit type declaration
   := (Friday => 6, Monday => 4, others => 10);            -- lookup table for working hours with initialization

Types can have modifiers such as limited, abstract, private, etc. Private types do not show their inner structure; objects of limited types cannot be copied. Ada 95 adds further features for object-oriented extension of types.

Control structures

Ada is a structured programming language, meaning that the flow of control is structured into standard statements. All standard constructs and deep-level early exit are supported, so the use of the also supported "go to" statements is seldom needed.

-- while a is not equal to b, loop.
while a /= b loop
   Ada.Text_IO.Put_Line ("Waiting");
end loop;

if a > b then
   Ada.Text_IO.Put_Line ("Condition met");
else
   Ada.Text_IO.Put_Line ("Condition not met");
end if;

for i in 1 .. 10 loop
   Ada.Text_IO.Put ("Iteration: ");
   Ada.Text_IO.Put (Integer'Image (i));  -- Integer'Image yields the counter as a String
   Ada.Text_IO.New_Line;
end loop;

loop
   a := a + 1;
   exit when a = 10;
end loop;

case i is
   when 0 => Ada.Text_IO.Put ("zero");
   when 1 => Ada.Text_IO.Put ("one");
   when 2 => Ada.Text_IO.Put ("two");
   -- case statements have to cover all possible cases:
   when others => Ada.Text_IO.Put ("none of the above");
end case;

for aWeekday in Weekday'Range loop               -- loop over an enumeration
   Put_Line ( Weekday'Image (aWeekday) );        -- output string representation of an enumeration
   if aWeekday in Working_Day then               -- check of a subtype of an enumeration
      Put_Line ( " to work for " &
                 Working_Hours'Image (Work_Load (aWeekday)) );  -- access into a lookup table
   end if;
end loop;

Packages, procedures and functions

Among the parts of an Ada program are packages, procedures and functions. Functions differ from procedures in that they must return a value. A function call cannot be used as a statement, and its result must be used, for example by assigning it to a variable. However, functions are not required to be pure, and since Ada 2012 they may also take parameters of mode in out, allowing them to modify their arguments as well as global state.

Example: Package specification (example.ads)

package Example is
   type Number is range 1 .. 11;
   procedure Print_and_Increment (j: in out Number);
end Example;

Package body (example.adb)

with Ada.Text_IO;

package body Example is

   i : Number := Number'First;

   procedure Print_and_Increment (j: in out Number) is

      function Next (k: in Number) return Number is
      begin
         return k + 1;
      end Next;

   begin
      Ada.Text_IO.Put_Line ( "The total is: " & Number'Image (j) );
      j := Next (j);
   end Print_and_Increment;

-- package initialization executed when the package is elaborated
begin
   while i < Number'Last loop
      Print_and_Increment (i);
   end loop;
end Example;

This program can be compiled, e.g., by using the freely available open-source compiler GNAT, by executing

gnatmake -z example.adb

Packages, procedures and functions can nest to any depth, and each can also be the logical outermost block. Each package, procedure or function can have its own declarations of constants, types, variables, and other procedures, functions and packages, which can be declared in any order.

Pragmas

A pragma is a compiler directive that conveys information to the compiler to allow specific manipulation of the compiled output. Certain pragmas are built into the language, while others are implementation-specific. Examples of common usage of compiler pragmas would be to disable certain features, such as run-time type checking or array subscript boundary checking, or to instruct the compiler to insert object code instead of a function call (as C/C++ does with inline functions).

Generics
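As a hedged sketch of the generic facility named in the Features overview (the names Generic_Swap, Element_Type and Swap_Integers are invented for this illustration, and the three parts would normally live in their own compilation units or a declarative region):

generic
   type Element_Type is private;   -- formal type, supplied at instantiation
procedure Generic_Swap (X, Y : in out Element_Type);

procedure Generic_Swap (X, Y : in out Element_Type) is
   Temp : constant Element_Type := X;
begin
   X := Y;
   Y := Temp;
end Generic_Swap;

-- An instance of the generic that swaps Integers:
procedure Swap_Integers is new Generic_Swap (Integer);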
Technology
"Historical" languages
null
1267
https://en.wikipedia.org/wiki/Alpha%20decay
Alpha decay
Alpha decay or α-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle (helium nucleus) and thereby transforms or "decays" into a different atomic nucleus, with a mass number that is reduced by four and an atomic number that is reduced by two. An alpha particle is identical to the nucleus of a helium-4 atom, which consists of two protons and two neutrons. It has a charge of and a mass of . For example, uranium-238 decays to form thorium-234. While alpha particles have a charge , this is not usually shown because a nuclear equation describes a nuclear reaction without considering the electrons – a convention that does not imply that the nuclei necessarily occur in neutral atoms. Alpha decay typically occurs in the heaviest nuclides. Theoretically, it can occur only in nuclei somewhat heavier than nickel (element 28), where the overall binding energy per nucleon is no longer a maximum and the nuclides are therefore unstable toward spontaneous fission-type processes. In practice, this mode of decay has only been observed in nuclides considerably heavier than nickel, with the lightest known alpha emitter being the second lightest isotope of antimony, 104Sb. Exceptionally, however, beryllium-8 decays to two alpha particles. Alpha decay is by far the most common form of cluster decay, where the parent atom ejects a defined daughter collection of nucleons, leaving another defined product behind. It is the most common form because of the combined extremely high nuclear binding energy and relatively small mass of the alpha particle. Like other cluster decays, alpha decay is fundamentally a quantum tunneling process. Unlike beta decay, it is governed by the interplay between both the strong nuclear force and the electromagnetic force. Alpha particles have a typical kinetic energy of 5 MeV (or ≈ 0.13% of their total energy, 110 TJ/kg) and have a speed of about 15,000,000 m/s, or 5% of the speed of light. There is surprisingly small variation around this energy, due to the strong dependence of the half-life of this process on the energy produced. Because of their relatively large mass, the electric charge of and relatively low velocity, alpha particles are very likely to interact with other atoms and lose their energy, and their forward motion can be stopped by a few centimeters of air. Approximately 99% of the helium produced on Earth is the result of the alpha decay of underground deposits of minerals containing uranium or thorium. The helium is brought to the surface as a by-product of natural gas production. History Alpha particles were first described in the investigations of radioactivity by Ernest Rutherford in 1899, and by 1907 they were identified as He2+ ions. By 1928, George Gamow had solved the theory of alpha decay via tunneling. The alpha particle is trapped inside the nucleus by an attractive nuclear potential well and a repulsive electromagnetic potential barrier. Classically, it is forbidden to escape, but according to the (then) newly discovered principles of quantum mechanics, it has a tiny (but non-zero) probability of "tunneling" through the barrier and appearing on the other side to escape the nucleus. Gamow solved a model potential for the nucleus and derived, from first principles, a relationship between the half-life of the decay, and the energy of the emission, which had been previously discovered empirically and was known as the Geiger–Nuttall law. 
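For reference, the empirical relationship mentioned above is usually quoted in roughly the following form (a sketch of the common statement rather than a quotation from this text; conventions differ on whether the parent's or the daughter's atomic number is used):

\log_{10} \lambda = -a_1 \frac{Z}{\sqrt{E}} + a_2

where \lambda is the decay constant, Z the atomic number, E the kinetic energy of the alpha particle, and a_1, a_2 are empirically fitted constants.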
Mechanism

The nuclear force holding an atomic nucleus together is very strong, in general much stronger than the repulsive electromagnetic forces between the protons. However, the nuclear force is also short-range, dropping quickly in strength beyond about 3 femtometers, while the electromagnetic force has an unlimited range. The strength of the attractive nuclear force keeping a nucleus together is thus proportional to the number of nucleons, but the total disruptive electromagnetic force of proton-proton repulsion trying to break the nucleus apart is roughly proportional to the square of its atomic number. A nucleus with 210 or more nucleons is so large that the strong nuclear force holding it together can just barely counterbalance the electromagnetic repulsion between the protons it contains. Alpha decay occurs in such nuclei as a means of increasing stability by reducing size.

One curiosity is why alpha particles, helium nuclei, should be preferentially emitted as opposed to other particles like a single proton or neutron or other atomic nuclei. Part of the reason is the high binding energy of the alpha particle, which means that its mass is less than the sum of the masses of two free protons and two free neutrons. This increases the disintegration energy. Computing the total disintegration energy given by the equation

E = (m_i - m_f - m_p) c^2,

where m_i is the initial mass of the nucleus, m_f is the mass of the nucleus after particle emission, and m_p is the mass of the emitted (alpha) particle, one finds that in certain cases it is positive and so alpha particle emission is possible, whereas other decay modes would require energy to be added. For example, performing the calculation for uranium-232 shows that alpha particle emission releases 5.4 MeV of energy, while a single proton emission would require 6.1 MeV.

Most of the disintegration energy becomes the kinetic energy of the alpha particle, although to fulfill conservation of momentum, part of the energy goes to the recoil of the nucleus itself (see atomic recoil). However, since the mass numbers of most alpha-emitting radioisotopes exceed 210, far greater than the mass number of the alpha particle (4), the fraction of the energy going to the recoil of the nucleus is generally quite small, less than 2%. Nevertheless, the recoil energy (on the scale of keV) is still much larger than the strength of chemical bonds (on the scale of eV), so the daughter nuclide will break away from the chemical environment the parent was in. The energies and ratios of the alpha particles can be used to identify the radioactive parent via alpha spectrometry.

These disintegration energies, however, are substantially smaller than the repulsive potential barrier created by the interplay between the strong nuclear and the electromagnetic force, which prevents the alpha particle from escaping. The energy needed to bring an alpha particle from infinity to a point near the nucleus just outside the range of the nuclear force's influence is generally in the range of about 25 MeV. An alpha particle within the nucleus can be thought of as being inside a potential barrier whose walls are 25 MeV above the potential at infinity. However, decay alpha particles only have energies of around 4 to 9 MeV above the potential at infinity, far less than the energy needed to overcome the barrier and escape.

Quantum tunneling

Quantum mechanics, however, allows the alpha particle to escape via quantum tunneling.
The quantum tunneling theory of alpha decay, independently developed by George Gamow and by Ronald Wilfred Gurney and Edward Condon in 1928, was hailed as a very striking confirmation of quantum theory. Essentially, the alpha particle escapes from the nucleus not by acquiring enough energy to pass over the wall confining it, but by tunneling through the wall. Gurney and Condon made the following observation in their paper on it: It has hitherto been necessary to postulate some special arbitrary 'instability' of the nucleus, but in the following note, it is pointed out that disintegration is a natural consequence of the laws of quantum mechanics without any special hypothesis... Much has been written of the explosive violence with which the α-particle is hurled from its place in the nucleus. But from the process pictured above, one would rather say that the α-particle almost slips away unnoticed. The theory supposes that the alpha particle can be considered an independent particle within a nucleus, that is in constant motion but held within the nucleus by strong interaction. At each collision with the repulsive potential barrier of the electromagnetic force, there is a small non-zero probability that it will tunnel its way out. An alpha particle with a speed of 1.5×107 m/s within a nuclear diameter of approximately 10−14 m will collide with the barrier more than 1021 times per second. However, if the probability of escape at each collision is very small, the half-life of the radioisotope will be very long, since it is the time required for the total probability of escape to reach 50%. As an extreme example, the half-life of the isotope bismuth-209 is . The isotopes in beta-decay stable isobars that are also stable with regards to double beta decay with mass number A = 5, A = 8, 143 ≤ A ≤ 155, 160 ≤ A ≤ 162, and A ≥ 165 are theorized to undergo alpha decay. All other mass numbers (isobars) have exactly one theoretically stable nuclide. Those with mass 5 decay to helium-4 and a proton or a neutron, and those with mass 8 decay to two helium-4 nuclei; their half-lives (helium-5, lithium-5, and beryllium-8) are very short, unlike the half-lives for all other such nuclides with A ≤ 209, which are very long. (Such nuclides with A ≤ 209 are primordial nuclides except 146Sm.) Working out the details of the theory leads to an equation relating the half-life of a radioisotope to the decay energy of its alpha particles, a theoretical derivation of the empirical Geiger–Nuttall law. Uses Americium-241, an alpha emitter, is used in smoke detectors. The alpha particles ionize air in an open ion chamber and a small current flows through the ionized air. Smoke particles from the fire that enter the chamber reduce the current, triggering the smoke detector's alarm. Radium-223 is also an alpha emitter. It is used in the treatment of skeletal metastases (cancers in the bones). Alpha decay can provide a safe power source for radioisotope thermoelectric generators used for space probes and were used for artificial heart pacemakers. Alpha decay is much more easily shielded against than other forms of radioactive decay. Static eliminators typically use polonium-210, an alpha emitter, to ionize the air, allowing the "static cling" to dissipate more rapidly. Toxicity Highly charged and heavy, alpha particles lose their several MeV of energy within a small volume of material, along with a very short mean free path. 
This increases the chance of double-strand breaks to the DNA in cases of internal contamination, when ingested, inhaled, injected or introduced through the skin. Otherwise, touching an alpha source is typically not harmful, as alpha particles are effectively shielded by a few centimeters of air, a piece of paper, or the thin layer of dead skin cells that make up the epidermis; however, many alpha sources are also accompanied by beta-emitting radio daughters, and both are often accompanied by gamma photon emission. Relative biological effectiveness (RBE) quantifies the ability of radiation to cause certain biological effects, notably either cancer or cell-death, for equivalent radiation exposure. Alpha radiation has a high linear energy transfer (LET) coefficient, which is about one ionization of a molecule/atom for every angstrom of travel by the alpha particle. The RBE has been set at the value of 20 for alpha radiation by various government regulations. The RBE is set at 10 for neutron irradiation, and at 1 for beta radiation and ionizing photons. However, the recoil of the parent nucleus (alpha recoil) gives it a significant amount of energy, which also causes ionization damage (see ionizing radiation). This energy is roughly the weight of the alpha () divided by the weight of the parent (typically about 200 Da) times the total energy of the alpha. By some estimates, this might account for most of the internal radiation damage, as the recoil nucleus is part of an atom that is much larger than an alpha particle, and causes a very dense trail of ionization; the atom is typically a heavy metal, which preferentially collect on the chromosomes. In some studies, this has resulted in an RBE approaching 1,000 instead of the value used in governmental regulations. The largest natural contributor to public radiation dose is radon, a naturally occurring, radioactive gas found in soil and rock. If the gas is inhaled, some of the radon particles may attach to the inner lining of the lung. These particles continue to decay, emitting alpha particles, which can damage cells in the lung tissue. The death of Marie Curie at age 66 from aplastic anemia was probably caused by prolonged exposure to high doses of ionizing radiation, but it is not clear if this was due to alpha radiation or X-rays. Curie worked extensively with radium, which decays into radon, along with other radioactive materials that emit beta and gamma rays. However, Curie also worked with unshielded X-ray tubes during World War I, and analysis of her skeleton during a reburial showed a relatively low level of radioisotope burden. The Russian defector Alexander Litvinenko's 2006 murder by radiation poisoning is thought to have been carried out with polonium-210, an alpha emitter.
Physical sciences
Nuclear physics
Physics
1271
https://en.wikipedia.org/wiki/Analytical%20engine
Analytical engine
The analytical engine was a proposed digital mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage's Difference Engine, which was a design for a simpler mechanical calculator. The analytical engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-Complete. In other words, the structure of the analytical engine was essentially the same as that which has dominated computer design in the electronic era. The analytical engine is one of the most successful achievements of Charles Babbage. Babbage was never able to complete construction of any of his machines due to conflicts with his chief engineer and inadequate funding. It was not until 1941 that Konrad Zuse built the first general-purpose computer, Z3, more than a century after Babbage had proposed the pioneering analytical engine in 1837. Design Babbage's first attempt at a mechanical computing device, the Difference Engine, was a special-purpose machine designed to tabulate logarithms and trigonometric functions by evaluating finite differences to create approximating polynomials. Construction of this machine was never completed; Babbage had conflicts with his chief engineer, Joseph Clement, and ultimately the British government withdrew its funding for the project. During this project, Babbage realised that a much more general design, the analytical engine, was possible. The work on the design of the analytical engine started around 1833. The input, consisting of programs ("formulae") and data, was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter, and a bell. The machine would also be able to punch numbers onto cards to be read in later. It employed ordinary base-10 fixed-point arithmetic. There was to be a store (that is, a memory) capable of holding 1,000 numbers of 40 decimal digits each (ca. 16.6 kB). An arithmetic unit (the "mill") would be able to perform all four arithmetic operations, plus comparisons and optionally square roots. Initially (1838) it was conceived as a difference engine curved back upon itself, in a generally circular layout, with the long store exiting off to one side. Later drawings (1858) depict a regularised grid layout. Like the central processing unit (CPU) in a modern computer, the mill would rely upon its own internal procedures, roughly equivalent to microcode in modern CPUs, to be stored in the form of pegs inserted into rotating drums called "barrels", to carry out some of the more complex instructions the user's program might specify. The programming language to be employed by users was akin to modern day assembly languages. Loops and conditional branching were possible, and so the language as conceived would have been Turing-complete as later defined by Alan Turing. Three different types of punch cards were used: one for arithmetical operations, one for numerical constants, and one for load and store operations, transferring numbers from the store to the arithmetical unit or back. There were three separate readers for the three types of cards. Babbage developed some two dozen programs for the analytical engine between 1837 and 1840, and one program later. 
These programs treat polynomials, iterative formulas, Gaussian elimination, and Bernoulli numbers. In 1842, the Italian mathematician Luigi Federico Menabrea published a description of the engine in French, based on lectures Babbage gave when he visited Turin in 1840. In 1843, the description was translated into English and extensively annotated by Ada Lovelace, who had become interested in the engine eight years earlier. In recognition of her additions to Menabrea's paper, which included a way to calculate Bernoulli numbers using the machine (widely considered to be the first complete computer program), she has been described as the first computer programmer. Construction Late in his life, Babbage sought ways to build a simplified version of the machine, and assembled a small part of it before his death in 1871. In 1878, a committee of the British Association for the Advancement of Science described the analytical engine as "a marvel of mechanical ingenuity", but recommended against constructing it. The committee acknowledged the usefulness and value of the machine, but could not estimate the cost of building it, and were unsure whether the machine would function correctly after being built. Intermittently from 1880 to 1910, Babbage's son Henry Prevost Babbage was constructing a part of the mill and the printing apparatus. In 1910, it was able to calculate a (faulty) list of multiples of pi. This constituted only a small part of the whole engine; it was not programmable and had no storage. (Popular images of this section have sometimes been mislabelled, implying that it was the entire mill or even the entire engine.) Henry Babbage's "analytical engine mill" is on display at the Science Museum in London. Henry also proposed building a demonstration version of the full engine, with a smaller storage capacity: "perhaps for a first machine ten (columns) would do, with fifteen wheels in each". Such a version could manipulate 20 numbers of 25 digits each, and what it could be told to do with those numbers could still be impressive. "It is only a question of cards and time", wrote Henry Babbage in 1888, "... and there is no reason why (twenty thousand) cards should not be used if necessary, in an analytical engine for the purposes of the mathematician". In 1991, the London Science Museum built a complete and working specimen of Babbage's Difference Engine No. 2, a design that incorporated refinements Babbage discovered during the development of the analytical engine. This machine was built using materials and engineering tolerances that would have been available to Babbage, quelling the suggestion that Babbage's designs could not have been produced using the manufacturing technology of his time. In October 2010, John Graham-Cumming started a "Plan 28" campaign to raise funds by "public subscription" to enable serious historical and academic study of Babbage's plans, with a view to then build and test a fully working virtual design which will then in turn enable construction of the physical analytical engine. As of May 2016, actual construction had not been attempted, since no consistent understanding could yet be obtained from Babbage's original design drawings. In particular it was unclear whether it could handle the indexed variables which were required for Lovelace's Bernoulli program. By 2017, the "Plan 28" effort reported that a searchable database of all catalogued material was available, and an initial review of Babbage's voluminous Scribbling Books had been completed. 
Many of Babbage's original drawings have been digitised and are publicly available online. Instruction set Babbage is not known to have written down an explicit set of instructions for the engine in the manner of a modern processor manual. Instead he showed his programs as lists of states during their execution, showing what operator was run at each step with little indication of how the control flow would be guided. Allan G. Bromley has assumed that the card deck could be read in forwards and backwards directions as a function of conditional branching after testing for conditions, which would make the engine Turing-complete: ...the cards could be ordered to move forward and reverse (and hence to loop)... The introduction for the first time, in 1845, of user operations for a variety of service functions including, most importantly, an effective system for user control of looping in user programs. There is no indication how the direction of turning of the operation and variable cards is specified. In the absence of other evidence I have had to adopt the minimal default assumption that both the operation and variable cards can only be turned backward as is necessary to implement the loops used in Babbage's sample programs. There would be no mechanical or microprogramming difficulty in placing the direction of motion under the control of the user. In their emulator of the engine, Fourmilab say: The Engine's Card Reader is not constrained to simply process the cards in a chain one after another from start to finish. It can, in addition, directed by the very cards it reads and advised by whether the Mill's run-up lever is activated, either advance the card chain forward, skipping the intervening cards, or backward, causing previously-read cards to be processed once again. This emulator does provide a written symbolic instruction set, though this has been constructed by its authors rather than based on Babbage's original works. For example, a factorial program would be written as: N0 6 N1 1 N2 1 × L1 L0 S1 – L0 L2 S0 L2 L0 CB?11 where the CB is the conditional branch instruction or "combination card" used to make the control flow jump, in this case backward by 11 cards. Influence Predicted influence Babbage understood that the existence of an automatic computer would kindle interest in the field now known as algorithmic efficiency, writing in his Passages from the Life of a Philosopher, "As soon as an analytical engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise—By what course of calculation can these results be arrived at by the machine in the shortest time?" Computer science From 1872, Henry continued diligently with his father's work and then intermittently in retirement in 1875. Percy Ludgate wrote about the engine in 1914 and published his own design for an analytical engine in 1909. It was drawn up in detail, but never built, and the drawings have never been found. Ludgate's engine would be much smaller (about , which corresponds to cube of side length ) than Babbage's, and hypothetically would be capable of multiplying two 20-decimal-digit numbers in about six seconds. In his work Essays on Automatics (1914) Leonardo Torres Quevedo, inspired by Babbage, designed a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also contains the idea of floating-point arithmetic. 
In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which consisted of an arithmetic unit connected to a (possibly remote) typewriter, on which commands could be typed and the results printed automatically. Vannevar Bush's paper Instrumental Analysis (1936) included several references to Babbage's work. In the same year he started the Rapid Arithmetical Machine project to investigate the problems of constructing an electronic digital computer. Despite this groundwork, Babbage's work fell into historical obscurity, and the analytical engine was unknown to builders of electromechanical and electronic computing machines in the 1930s and 1940s when they began their work, resulting in the need to re-invent many of the architectural innovations Babbage had proposed. Howard Aiken, who built the quickly-obsoleted electromechanical calculator, the Harvard Mark I, between 1937 and 1945, praised Babbage's work likely as a way of enhancing his own stature, but knew nothing of the analytical engine's architecture during the construction of the Mark I, and considered his visit to the constructed portion of the analytical engine "the greatest disappointment of my life". The Mark I showed no influence from the analytical engine and lacked the analytical engine's most prescient architectural feature, conditional branching. J. Presper Eckert and John W. Mauchly similarly were not aware of the details of Babbage's analytical engine work prior to the completion of their design for the first electronic general-purpose computer, the ENIAC. Comparison to other early computers If the analytical engine had been built, it would have been digital, programmable and Turing-complete. It would, however, have been very slow. Luigi Federico Menabrea reported in Sketch of the Analytical Engine: "Mr. Babbage believes he can, by his engine, form the product of two numbers, each containing twenty figures, in three minutes". By comparison the Harvard Mark I could perform the same task in just six seconds (though it is debatable that computer is Turing complete; the ENIAC, which is, would also have been faster). A modern CPU could do the same thing in under a billionth of a second. In popular culture The cyberpunk novelists William Gibson and Bruce Sterling co-authored a steampunk novel of alternative history titled The Difference Engine in which Babbage's difference and analytical engines became available to Victorian society. The novel explores the consequences and implications of the early introduction of computational technology. Moriarty by Modem, a short story by Jack Nimersheim, describes an alternative history where Babbage's analytical engine was indeed completed and had been deemed highly classified by the British government. The characters of Sherlock Holmes and Moriarty had in reality been a set of prototype programs written for the analytical engine. This short story follows Holmes as his program is implemented on modern computers and he is forced to compete against his nemesis yet again in the modern counterparts of Babbage's analytical engine. A similar setting to The Difference Engine is used by Sydney Padua in the webcomic The Thrilling Adventures of Lovelace and Babbage. It features an alternative history where Ada Lovelace and Babbage have built the analytical engine and use it to fight crime at Queen Victoria's request. 
The comic is based on thorough research on the biographies of and correspondence between Babbage and Lovelace, which is then twisted for humorous effect. The Orion's Arm online project features the Machina Babbagenseii, fully sentient Babbage-inspired mechanical computers. Each is the size of a large asteroid, only capable of surviving in microgravity conditions, and processes data at 0.5% the speed of a human brain. Charles Babbage and Ada Lovelace appear in an episode of Doctor Who, "Spyfall Part 2", where the engine is displayed and referenced.
Technology
Early computers
null
1300
https://en.wikipedia.org/wiki/Abalone
Abalone
Abalone ( or ; via Spanish , from Rumsen aulón) is a common name for any small to very large marine gastropod mollusc in the family Haliotidae, which once contained six genera but now contains only one genus, Haliotis. Other common names are ear shells, sea ears, and, now rarely, muttonfish or muttonshells in parts of Australia, ormer in the United Kingdom, perlemoen in South Africa, and pāua in New Zealand. The number of abalone species recognized worldwide ranges between 30 and 130 with over 230 species-level taxa described. The most comprehensive treatment of the family considers 56 species valid, with 18 additional subspecies. The shells of abalone have a low, open spiral structure, and are characterized by several open respiratory pores in a row near the shell's outer edge. The thick inner layer of the shell is composed of nacre, which in many species is highly iridescent, giving rise to a range of strong, changeable colors which make the shells attractive to humans as ornaments, jewelry, and as a source of colorful mother-of-pearl. The flesh of abalone is widely considered to be a delicacy, and is consumed raw or cooked by a variety of cuisines. Description Most abalone vary in size from (Haliotis pulcherrima) to . The largest species, Haliotis rufescens, reaches . The shell of abalone is convex, rounded to oval in shape, and may be highly arched or very flattened. The shell of the majority of species has a small, flat spire and two to three whorls. The last whorl, known as the body whorl, is auriform, meaning that the shell resembles an ear, giving rise to the common name "ear shell". Haliotis asinina has a somewhat different shape, as it is more elongated and distended. The shell of Haliotis cracherodii cracherodii is also unusual as it has an ovate form, is imperforate, shows an exserted spire, and has prickly ribs. A mantle cleft in the shell impresses a groove in the shell, in which are the row of holes characteristic of the genus. These holes are respiratory apertures for venting water from the gills and for releasing sperm and eggs into the water column. They make up what is known as the selenizone, which forms as the shell grows. This series of eight to 38 holes is near the anterior margin. Only a small number is generally open. The older holes are gradually sealed up as the shell grows and new holes form. Each species has a typical number of open holes, between four and 10, in the selenizone. An abalone has no operculum. The aperture of the shell is very wide and nacreous. The exterior of the shell is striated and dull. The color of the shell is very variable from species to species, which may reflect the animal's diet. The iridescent nacre that lines the inside of the shell varies in color from silvery white, to pink, red and green-red to deep blue, green to purple. The animal has fimbriated head lobes and side lobes that are fimbriated and cirrated. The radula has small median teeth, and the lateral teeth are single and beam-like. They have about 70 uncini, with denticulated hooks, the first four very large. The rounded foot is very large in comparison to most molluscs. The soft body is coiled around the columellar muscle, and its insertion, instead of being on the columella, is on the middle of the inner wall of the shell. The gills are symmetrical and both well developed. These snails cling solidly with their broad, muscular foot to rocky surfaces at sublittoral depths, although some species such as Haliotis cracherodii used to be common in the intertidal zone. 
Abalone reach maturity at a relatively small size. Their fecundity is high and increases with their size, laying from 10,000 to 11 million eggs at a time. The spermatozoa are filiform and pointed at one end, and the anterior end is a rounded head. Distribution The haliotid family has a worldwide distribution, along the coastal waters of every continent, except the Pacific coast of South America, the Atlantic coast of North America, the Arctic, and Antarctica. The majority of abalone species are found in cold waters, such as off the coasts of New Zealand, South Africa, Australia, Western North America, and Japan. Structure and properties of the shell The shell of the abalone is exceptionally strong and is made of microscopic calcium carbonate tiles stacked like bricks. Between the layers of tiles is a clingy protein substance. When the abalone shell is struck, the tiles slide instead of shattering and the protein stretches to absorb the energy of the blow. Material scientists around the world are studying this tiled structure for insight into stronger ceramic products such as body armor. The dust created by grinding and cutting abalone shell is dangerous; appropriate safeguards must be taken to protect people from inhaling these particles. Diseases and pests Abalone are subject to various diseases. The Victorian Department of Primary Industries said in 2007 that ganglioneuritis killed up to 90% of stock in affected regions. Abalone are also severe hemophiliacs, as their fluids will not clot in the case of a laceration or puncture wound. Members of the Spionidae of the polychaetes are known as pests of abalone. Human use Abalone has been harvested worldwide for centuries as a source of food and decorative items. Abalone shells and associated materials, like their claw-like pearls and nacre, have been used as jewelry and for buttons, buckles, and inlay. These shells have been found in archaeological sites around the world, ranging from 100,000-year-old deposits at Blombos Cave in South Africa to historic Chinese abalone middens on California's Northern Channel Islands. For at least 12,000 years, abalone were harvested to such an extent around the Channel Islands that shells in the area decreased in size four thousand years ago. Farming Farming of abalone began in the late 1950s and early 1960s in Japan and China. Since the mid-1990s, there have been many increasingly successful endeavors to commercially farm abalone for the purpose of consumption. Overfishing and poaching have reduced wild populations to such an extent that farmed abalone now supplies most of the abalone meat consumed. The principal abalone farming regions are China, Taiwan, Japan, and Korea. Abalone is also farmed in Australia, Canada, Chile, France, Iceland, Ireland, Mexico, Namibia, New Zealand, South Africa, Spain, Thailand, and the United States. After trials in 2012, a commercial "sea ranch" was set up in Flinders Bay, Western Australia to raise abalone. The ranch is based on an artificial reef made up of 5,000 separate concrete abalone habitat units, which can host 400 abalone each. The reef is seeded with young abalone from an onshore hatchery. The abalone feed on seaweed that grows naturally on the habitats; the ecosystem enrichment of the bay also results in growing numbers of dhufish, pink snapper, wrasse, and Samson fish among other species. Consumption Abalone have long been a valuable food source for humans in every area of the world where a species is abundant. 
The meat of this mollusc is considered a delicacy in certain parts of Latin America (particularly Chile), France, New Zealand, East Asia and Southeast Asia. In the Greater China region and among Overseas Chinese communities, abalone is commonly known as bao yu, and sometimes forms part of a Chinese banquet. In the same way as shark fin soup or bird's nest soup, abalone is considered a luxury item, and is traditionally reserved for celebrations. As abalone became more popular and less common, the prices adjusted accordingly. In the 1920s, a restaurant-served portion of abalone, about 4 ounces, would cost (in inflation adjusted dollars) about US$7; by 2004, the price had risen to US$75. In the United States, prior to this time, abalone was predominantly eaten, gathered, and prepared by Chinese immigrants. Before that, abalone were collected to be eaten, and used for other purposes by Native American tribes. By 1900, laws were passed in California to outlaw the taking of abalone above the intertidal zone. This forced the Chinese out of the market and the Japanese perfected diving, with or without gear, to enter the market. Abalone started to become popular in the US after the Panama–Pacific International Exposition in 1915, which exhibited 365 varieties of fish with cooking demonstrations, and a 1,300-seat dining hall. In Japan, live and raw abalone are used in awabi sushi, or served steamed, salted, boiled, chopped, or simmered in soy sauce. Salted, fermented abalone entrails are the main component of tottsuru, a local dish from Honshū. Tottsuru is mainly enjoyed with sake. In South Korea, abalone is called Jeonbok (/juhn-bok/) and used in various recipes. Jeonbok porridge and pan-fried abalone steak with butter are popular but also commonly used in soups or ramyeon. In California, abalone meat can be found on pizza, sautéed with caramelized mango, or in steak form dusted with cracker meal and flour. Sport harvesting Australia Tasmania supplies about 25% of the yearly world abalone harvest. Around 12,500 Tasmanians recreationally fish for blacklip and greenlip abalone. For blacklip abalone, the size limit varies between for the southern end of the state and for the northern end of the state. Greenlip abalone have a minimum size of , except for an area around Perkins Bay in the north of the state where the minimum size is . With a recreational abalone licence, the bag limit is 10 per day, with a total possession limit of 20. Scuba diving for abalone is allowed, and has a rich history in Australia. (Scuba diving for abalone in the states of New South Wales and Western Australia is illegal; a free-diving catch limit of two is allowed). Victoria has had an active abalone fishery since the late 1950s. The state is sectioned into three fishing zones, Eastern, Central and Western, with each fisher required a zone-allocated licence. Harvesting is performed by divers using surface-supplied air "hookah" systems operating from runabout-style, outboard-powered boats. While the diver seeks out colonies of abalone amongst the reef beds, the deckhand operates the boat, known as working "live" and stays above where the diver is working. Bags of abalone pried from the rocks are brought to the surface by the diver or by way of "shot line", where the deckhand drops a weighted rope for the catch bag to be connected then retrieved. Divers measure each abalone before removing from the reef and the deckhand remeasures each abalone and removes excess weed growth from the shell. 
Since 2002, the Victorian industry has seen a significant decline in catches, with the total allowable catch reduced from 1440 to 787 tonnes for the 2011/12 fishing year, due to dwindling stocks and most notably the abalone virus ganglioneuritis, which is fast-spreading and lethal to abalone stocks. United States Sport harvesting of red abalone is permitted with a California fishing license and an abalone stamp card. In 2008, the abalone card also came with a set of 24 tags. This was reduced to 18 abalone per year in 2014, and as of 2017 the limit has been reduced to 12, only nine of which may be taken south of Mendocino County. Legal-size abalone must be tagged immediately. Abalone may only be taken using breath-hold techniques or shorepicking; scuba diving for abalone is strictly prohibited. Taking of abalone is not permitted south of the mouth of San Francisco Bay. A size minimum of measured across the shell is in place. A person may be in possession of only three abalone at any given time. As of 2017, abalone season is May to October, excluding July. Transportation of abalone may only legally occur while the abalone is still attached in the shell. Sale of sport-obtained abalone is illegal, including the shell. Only red abalone may be taken, as black, white, pink, flat, green, and pinto abalone are protected by law. In 2018, the California Fish and Game Commission closed recreational abalone season due to dramatically declining populations. That year, they extended the moratorium to last through April 2021. Afterwards, they extended the ban for another 5 years until April 2026. An abalone diver is normally equipped with a thick wetsuit, including a hood, bootees, and gloves, and usually also a mask, snorkel, weight belt, abalone iron, and abalone gauge. Alternatively, the rock picker can feel underneath rocks at low tides for abalone. Abalone are mostly taken in depths from a few inches up to ; less common are freedivers who can work deeper than . Abalone are normally found on rocks near food sources such as kelp. An abalone iron is used to pry the abalone from the rock before it has time to fully clamp down. Divers dive from boats, kayaks, tube floats, or directly off the shore. The largest abalone recorded in California is , caught by John Pepper somewhere off the coast of San Mateo County in September 1993. The mollusc Concholepas concholepas is often sold in the United States under the name "Chilean abalone", though it is not an abalone, but a muricid. New Zealand In New Zealand, abalone is called pāua (, from the Māori language). Haliotis iris (or blackfoot pāua) is the ubiquitous New Zealand pāua, the highly polished nacre of which is extremely popular as souvenirs with its striking blue, green, and purple iridescence. Haliotis australis and Haliotis virginea are also found in New Zealand waters, but are less popular than H. iris. Haliotis pirimoana is a small species endemic to Manawatāwhi / the Three Kings Islands that superficially resembles H. virginea. Like all New Zealand shellfish, recreational harvesting of pāua does not require a permit provided catch limits, size restrictions, and seasonal and local restrictions set by the Ministry for Primary Industries (MPI) are followed. The legal recreational daily limit is 10 per diver, with a minimum shell length of for H. iris and for H. australis. In addition, no person may be in possession, even on land, of more than 20 pāua or more than of pāua meat at any one time. 
Pāua can only be caught by free-diving; it is illegal to catch them using scuba gear. An extensive global black market exists in collecting and exporting abalone meat. This can be a particularly awkward problem where the right to harvest pāua can be granted legally under Māori customary rights. When such permits to harvest are abused, it is frequently difficult to police. The limit is strictly enforced by roving Ministry for Primary Industries fishery officers with the backing of the New Zealand Police. Poaching is a major industry in New Zealand with many thousands being taken illegally, often undersized. Convictions have resulted in seizure of diving gear, boats, and motor vehicles and fines and in rare cases, imprisonment. South Africa There are five species endemic to South Africa, namely H. parva, H. spadicea, H. queketti and H. speciosa. The largest abalone in South Africa, Haliotis midae, occurs along roughly two-thirds of the country's coastline. Abalone-diving has been a recreational activity for many years, but stocks are currently being threatened by illegal commercial harvesting. In South Africa, all persons harvesting this shellfish need permits that are issued annually, and no abalone may be harvested using scuba gear. For the last few years, however, no permits have been issued for collecting abalone, but commercial harvesting still continues as does illegal collection by syndicates. In 2007, because of widespread poaching of abalone, the South African government listed abalone as an endangered species according to the CITES section III appendix, which requests member governments to monitor the trade in this species. This listing was removed from CITES in June 2010 by the South African government and South African abalone is no longer subject to CITES trade controls. Export permits are still required, however. The abalone meat from South Africa is prohibited for sale in the country to help reduce poaching; however, much of the illegally harvested meat is sold in Asian countries. As of early 2008, the wholesale price for abalone meat was approximately US$40.00 per kilogram. There is an active trade in the shells, which sell for more than US$1,400 per tonne. Channel Islands, Brittany and Normandy Ormers (Haliotis tuberculata) are considered a delicacy in the British Channel Islands as well as in adjacent areas of France, and are pursued with great alacrity by the locals. This, and a recent lethal bacterial disease, has led to a dramatic depletion in numbers since the latter half of the 19th century, and "ormering" is now strictly regulated to preserve stocks. The gathering of ormers is now restricted to a number of 'ormering tides', from 1 January to 30 April, which occur on the full or new moon and two days following. No ormers may be taken from the beach that are under in shell length. Gatherers are not allowed to wear wetsuits or even put their heads underwater. Any breach of these laws is a criminal offence and can lead to a fine of up to £5,000 or six months in prison. The demand for ormers is such that they led to the world's first underwater arrest, when Mr. Kempthorne-Leigh of Guernsey was arrested by a police officer in full diving gear when illegally diving for ormers. Decorative items The highly iridescent inner nacre layer of the shell of abalone has traditionally been used as a decorative item, in jewelry, buttons, and as inlay in furniture and musical instruments, such as on fret boards and binding of guitars. See article Najeonchilgi regarding Korean handicraft. 
Indigenous use Abalone has been an important staple in a number of Indigenous cultures around the world, specifically in Africa and on the Northwest American coast. The meat is a traditional food, and the shell is used to make ornaments; historically, the shells were also used as currency in some communities. Threat of extinction Abalone are one of the many classes of organism threatened with extinction due to overfishing and the acidification of oceans from recent higher levels of carbon dioxide, as reduced pH erodes their shells. In the 21st century, white, pink, and green abalone are on the United States federal endangered species list, and possible restoration sites have been proposed for the San Clemente Island and Santa Barbara Island areas. The possibility of farming abalone to be reintroduced into the wild has also been proposed, with these abalone having special tags to help track the population. Species The number of species that are recognized within the genus Haliotis has fluctuated over time, and depends on the source that is consulted. The number of recognized species range from 30 to 130. This list finds a compromise using the WoRMS database, plus some species that have been added, for a total of 57. The majority of abalone have not been rated for conservation status. Those that have been reviewed tend to show that the abalone in general is an animal that is declining in numbers, and will need protection throughout the globe. Synonyms
Biology and health sciences
Gastropods
Animals
1313
https://en.wikipedia.org/wiki/Aromatic%20compound
Aromatic compound
Aromatic compounds or arenes are organic compounds "with a chemistry typified by benzene" and "cyclically conjugated." The word "aromatic" originates from the past grouping of molecules based on odor, before their general chemical properties were understood. The current definition of aromatic compounds does not have any relation to their odor. Aromatic compounds are now defined as cyclic compounds satisfying Hückel's Rule. Aromatic compounds have the following general properties: they are typically unreactive, often non-polar and hydrophobic, have a high carbon-to-hydrogen ratio, burn with a strong sooty yellow flame due to the high C:H ratio, and undergo electrophilic substitution reactions and nucleophilic aromatic substitutions. Arenes are typically split into two categories: benzoids, which contain a benzene derivative and follow the benzene ring model, and non-benzoids, which contain other aromatic cyclic derivatives. Aromatic compounds are commonly used in organic synthesis and are involved in many reaction types, including both additions and removals, as well as saturation and dearomatization. Heteroarenes Heteroarenes are aromatic compounds where at least one methine or vinylene (-C= or -CH=CH-) group is replaced by a heteroatom: oxygen, nitrogen, or sulfur. Examples of non-benzene compounds with aromatic properties are furan, a heterocyclic compound with a five-membered ring that includes a single oxygen atom, and pyridine, a heterocyclic compound with a six-membered ring containing one nitrogen atom. Hydrocarbons without an aromatic ring are called aliphatic. Approximately half of the compounds known in 2000 were described as aromatic to some extent. Applications Aromatic compounds are pervasive in nature and industry. Key industrial aromatic hydrocarbons are benzene, toluene, and xylene, collectively called BTX. Many biomolecules have phenyl groups, including the so-called aromatic amino acids. Benzene ring model Benzene, C6H6, is the least complex aromatic hydrocarbon, and it was the first one defined as such. Its bonding nature was first recognized independently by Joseph Loschmidt and August Kekulé in the 19th century. Each carbon atom in the hexagonal cycle has four electrons to share. One electron forms a sigma bond with the hydrogen atom, and one is used in covalently bonding to each of the two neighboring carbons. This leaves six electrons, shared equally around the ring in delocalized pi molecular orbitals the size of the ring itself. This represents the equivalent nature of the six carbon-carbon bonds, all of bond order 1.5. This equivalency can also be explained by resonance forms. The electrons are visualized as floating above and below the ring, with the electromagnetic fields they generate acting to keep the ring flat. The circle symbol for aromaticity was introduced by Sir Robert Robinson and his student James Armit in 1925 and popularized starting in 1959 by the Morrison & Boyd textbook on organic chemistry. The proper use of the symbol is debated: some publications use it for any cyclic π system, while others use it only for those π systems that obey Hückel's rule. Some argue that, in order to stay in line with Robinson's originally intended proposal, the use of the circle symbol should be limited to monocyclic 6 π-electron systems. In this way the circle symbol for a six-center six-electron bond can be compared to the Y symbol for a three-center two-electron bond. Benzene and derivatives of benzene Benzene derivatives have from one to six substituents attached to the central benzene core.
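The 4n + 2 electron count behind Hückel's rule, discussed above, can be checked mechanically. The following is a minimal Python sketch of the counting criterion only; planarity and full cyclic conjugation are assumed, and the function name and the added cyclobutadiene comparison are illustrative, not taken from the text.

def is_huckel_count(pi_electrons):
    # True when the pi-electron count fits Hueckel's 4n + 2 rule (n = 0, 1, 2, ...)
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

# benzene has 6 delocalized pi electrons, as described above
examples = [("benzene", 6), ("cyclopropenium cation", 2), ("[18]annulene", 18), ("cyclobutadiene", 4)]
for name, pi in examples:
    print(name, is_huckel_count(pi))   # True, True, True, False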
Examples of benzene compounds with just one substituent are phenol, which carries a hydroxyl group, and toluene, which carries a methyl group. When there is more than one substituent present on the ring, their spatial relationship becomes important, for which the arene substitution patterns ortho, meta, and para have been devised. When reacting to form more complex benzene derivatives, the substituents on a benzene ring can be described as either activating or deactivating, meaning electron-donating and electron-withdrawing respectively. Activators are known as ortho-para directors, and deactivators are known as meta directors. Upon reacting, substituents will be added at the ortho, para or meta positions, depending on the directivity of the current substituents, to make more complex benzene derivatives, often with several isomers. Electron flow leading to re-aromatization is key in ensuring the stability of such products. For example, three isomers exist for cresol because the methyl group and the hydroxyl group (both ortho-para directors) can be placed next to each other (ortho), one position removed from each other (meta), or two positions removed from each other (para). Given that both the methyl and hydroxyl group are ortho-para directors, the ortho and para isomers are typically favoured. Xylenol has two methyl groups in addition to the hydroxyl group, and, for this structure, 6 isomers exist. Arene rings can stabilize charges, as seen in, for example, phenol (C6H5–OH), which is acidic at the hydroxyl (OH), as charge on the oxygen (alkoxide –O−) is partially delocalized into the benzene ring. Non-benzylic arenes Although benzylic arenes are common, non-benzylic compounds are also exceedingly important. Any compound containing a cyclic portion that conforms to Hückel's rule and is not a benzene derivative can be considered a non-benzylic aromatic compound. Monocyclic arenes Of annulenes larger than benzene, [12]annulene and [14]annulene are weakly aromatic compounds and [18]annulene, cyclooctadecanonaene, is aromatic, though strain within the structure causes a slight deviation from the precisely planar structure necessary for aromatic categorization. Another example of a non-benzylic monocyclic arene is the cyclopropenyl (cyclopropenium cation), which satisfies Hückel's rule with an n equal to 0. Note that only the cationic form of this cyclic propenyl is aromatic, given that neutrality in this compound would violate either the octet rule or Hückel's rule. Other non-benzylic monocyclic arenes include the aforementioned heteroarenes, in which carbon atoms are replaced by heteroatoms such as N, O or S. Common examples of these are the five-membered pyrrole and the six-membered pyridine, both of which have a nitrogen atom substituted into the ring. Polycyclic aromatic hydrocarbons Polycyclic aromatic hydrocarbons, also known as polynuclear aromatic compounds (PAHs), are aromatic hydrocarbons that consist of fused aromatic rings and do not contain heteroatoms or carry substituents. Naphthalene is the simplest example of a PAH. PAHs occur in oil, coal, and tar deposits, and are produced as byproducts of fuel burning (whether fossil fuel or biomass). As pollutants, they are of concern because some compounds have been identified as carcinogenic, mutagenic, and teratogenic. PAHs are also found in cooked foods. Studies have shown that high levels of PAHs are found, for example, in meat cooked at high temperatures such as grilling or barbecuing, and in smoked fish.
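The isomer counts quoted above (three cresols, six xylenols) can be reproduced by brute force: place the substituents on the six ring positions and merge any arrangements related by a rotation or reflection of the ring. A short Python sketch, with helper names invented for the illustration:

from itertools import permutations

def ring_readings(seq):
    # all 12 ways of reading a six-membered ring: 6 starting points x 2 directions
    n = len(seq)
    for r in range(n):
        rotated = tuple(seq[(i + r) % n] for i in range(n))
        yield rotated
        yield rotated[::-1]

def count_isomers(substituents, ring_size=6):
    pattern = tuple(substituents) + ("H",) * (ring_size - len(substituents))
    canonical = set()
    for arrangement in set(permutations(pattern)):
        canonical.add(min(ring_readings(arrangement)))   # canonical form of each arrangement
    return len(canonical)

print(count_isomers(["OH", "CH3"]))          # 3: ortho-, meta- and para-cresol
print(count_isomers(["OH", "CH3", "CH3"]))   # 6: the six xylenols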
PAHs are also a good candidate molecule to act as a basis for the earliest forms of life. In graphene the PAH motif is extended to large 2D sheets. Reactions Aromatic ring systems participate in many organic reactions. Substitution In aromatic substitution, one substituent on the arene ring, usually hydrogen, is replaced by another reagent. The two main types are electrophilic aromatic substitution, when the active reagent is an electrophile, and nucleophilic aromatic substitution, when the reagent is a nucleophile. In radical-nucleophilic aromatic substitution, the active reagent is a radical. An example of electrophilic aromatic substitution is the nitration of salicylic acid, where a nitro group is added para to the hydroxyl substituent: Nucleophilic aromatic substitution involves displacement of a leaving group, such as a halide, on an aromatic ring. Aromatic rings are usually nucleophilic, but in the presence of electron-withdrawing groups aromatic compounds undergo nucleophilic substitution. Mechanistically, this reaction differs from a common SN2 reaction, because it occurs at a trigonal carbon atom (sp2 hybridization). Hydrogenation Hydrogenation of arenes creates saturated rings. The compound 1-naphthol is completely reduced to a mixture of decalin-ol isomers. The compound resorcinol, hydrogenated with Raney nickel in the presence of aqueous sodium hydroxide, forms an enolate which is alkylated with methyl iodide to 2-methyl-1,3-cyclohexanedione: Dearomatization In dearomatization reactions the aromaticity of the reactant is lost. In this regard, dearomatization is related to hydrogenation. A classic approach is Birch reduction. The methodology is used in synthesis.
Physical sciences
Hydrocarbons
null
1317
https://en.wikipedia.org/wiki/Antimatter
Antimatter
In modern physics, antimatter is defined as matter composed of the antiparticles (or "partners") of the corresponding particles in "ordinary" matter, and can be thought of as matter with reversed charge, parity, and time, known as CPT reversal. Antimatter occurs in natural processes like cosmic ray collisions and some types of radioactive decay, but only a tiny fraction of these have successfully been bound together in experiments to form antiatoms. Minuscule numbers of antiparticles can be generated at particle accelerators, but total artificial production has been only a few nanograms. No macroscopic amount of antimatter has ever been assembled due to the extreme cost and difficulty of production and handling. Nonetheless, antimatter is an essential component of widely available applications related to beta decay, such as positron emission tomography, radiation therapy, and industrial imaging. In theory, a particle and its antiparticle (for example, a proton and an antiproton) have the same mass, but opposite electric charge, and other differences in quantum numbers. A collision between any particle and its anti-particle partner leads to their mutual annihilation, giving rise to various proportions of intense photons (gamma rays), neutrinos, and sometimes less-massive particle–antiparticle pairs. The majority of the total energy of annihilation emerges in the form of ionizing radiation. If surrounding matter is present, the energy content of this radiation will be absorbed and converted into other forms of energy, such as heat or light. The amount of energy released is usually proportional to the total mass of the collided matter and antimatter, in accordance with the notable mass–energy equivalence equation, E = mc2. Antiparticles bind with each other to form antimatter, just as ordinary particles bind to form normal matter. For example, a positron (the antiparticle of the electron) and an antiproton (the antiparticle of the proton) can form an antihydrogen atom. The nuclei of antihelium have been artificially produced, albeit with difficulty, and are the most complex anti-nuclei so far observed. Physical principles indicate that complex antimatter atomic nuclei are possible, as well as anti-atoms corresponding to the known chemical elements. There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. The process by which this inequality between matter and antimatter particles is hypothesised to have occurred is called baryogenesis. Definitions Antimatter particles carry charges of the same magnitude as matter particles, but of opposite sign. That is, an antiproton is negatively charged and an antielectron (positron) is positively charged. Neutrons do not carry a net charge, but their constituent quarks do. Protons and neutrons have a baryon number of +1, while antiprotons and antineutrons have a baryon number of –1. Similarly, electrons have a lepton number of +1, while that of positrons is –1. When a particle and its corresponding antiparticle collide, they are both converted into energy. The French term for "made of or pertaining to antimatter", , led to the initialism "C.T." and the science fiction term , as used in such novels as Seetee Ship. Conceptual history The idea of negative matter appears in past theories of matter that have now been abandoned.
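The bookkeeping described in the Definitions paragraph above, where electric charge, baryon number and lepton number all change sign while mass stays the same, can be illustrated with a small toy model. This Python sketch is only an illustration of that sign flip; the class, its field names and the example values are invented here, and unchanged properties such as mass and spin are not modelled.

from dataclasses import dataclass

@dataclass(frozen=True)
class Particle:
    name: str
    charge: int          # in units of the elementary charge
    baryon_number: int
    lepton_number: int

    def antiparticle(self, name):
        # additive quantum numbers change sign; mass and spin (not modelled) are identical
        return Particle(name, -self.charge, -self.baryon_number, -self.lepton_number)

proton = Particle("proton", +1, +1, 0)
electron = Particle("electron", -1, 0, +1)
print(proton.antiparticle("antiproton"))     # charge -1, baryon number -1
print(electron.antiparticle("positron"))     # charge +1, lepton number -1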
Using the once popular vortex theory of gravity, the possibility of matter with negative gravity was discussed by William Hicks in the 1880s. Between the 1880s and the 1890s, Karl Pearson proposed the existence of "squirts" and sinks of the flow of aether. The squirts represented normal matter and the sinks represented negative matter. Pearson's theory required a fourth dimension for the aether to flow from and into. The term antimatter was first used by Arthur Schuster in two rather whimsical letters to Nature in 1898, in which he coined the term. He hypothesized antiatoms, as well as whole antimatter solar systems, and discussed the possibility of matter and antimatter annihilating each other. Schuster's ideas were not a serious theoretical proposal, merely speculation, and like the previous ideas, differed from the modern concept of antimatter in that it possessed negative gravity. The modern theory of antimatter began in 1928, with a paper by Paul Dirac. Dirac realised that his relativistic version of the Schrödinger wave equation for electrons predicted the possibility of antielectrons. Although Dirac had laid the groundwork for the existence of these “antielectrons” he initially failed to pick up on the implications contained within his own equation. He freely gave the credit for that insight to J. Robert Oppenheimer, whose seminal paper “On the Theory of Electrons and Protons” (Feb 14th 1930) drew on Dirac's equation and argued for the existence of a positively charged electron (a positron), which as a counterpart to the electron should have the same mass as the electron itself. This meant that it could not be, as Dirac had in fact suggested, a proton. Dirac further postulated the existence of antimatter in a 1931 paper which referred to the positron as an "anti-electron". These were discovered by Carl D. Anderson in 1932 and named positrons from "positive electron". Although Dirac did not himself use the term antimatter, its use follows on naturally enough from antielectrons, antiprotons, etc. A complete periodic table of antimatter was envisaged by Charles Janet in 1929. The Feynman–Stueckelberg interpretation states that antimatter and antiparticles behave exactly identical to regular particles, but traveling backward in time. This concept is nowadays used in modern particle physics, in Feynman diagrams. Notation One way to denote an antiparticle is by adding a bar over the particle's symbol. For example, the proton and antiproton are denoted as and , respectively. The same rule applies if one were to address a particle by its constituent components. A proton is made up of quarks, so an antiproton must therefore be formed from antiquarks. Another convention is to distinguish particles by positive and negative electric charge. Thus, the electron and positron are denoted simply as and respectively. To prevent confusion, however, the two conventions are never mixed. Properties There is no difference in the gravitational behavior of matter and antimatter. In other words, antimatter falls down when dropped, not up. This was confirmed with the thin, very cold gas of thousands of antihydrogen atoms that were confined in a vertical shaft surrounded by superconducting electromagnetic coils. These can create a magnetic bottle to keep the antimatter from coming into contact with matter and annihilating. The researchers then gradually weakened the magnetic fields and detected the antiatoms using two sensors as they escaped and annihilated. 
Most of the anti-atoms came out of the bottom opening, and only one-quarter out of the top. There are compelling theoretical reasons to believe that, aside from the fact that antiparticles have different signs on all charges (such as electric and baryon charges), matter and antimatter have exactly the same properties. This means a particle and its corresponding antiparticle must have identical masses and decay lifetimes (if unstable). It also implies that, for example, a star made up of antimatter (an "antistar") will shine just like an ordinary star. This idea was tested experimentally in 2016 by the ALPHA experiment, which measured the transition between the two lowest energy states of antihydrogen. The results, which are identical to that of hydrogen, confirmed the validity of quantum mechanics for antimatter. Origin and asymmetry Most things observable from the Earth seem to be made of matter rather than antimatter. If antimatter-dominated regions of space existed, the gamma rays produced in annihilation reactions along the boundary between matter and antimatter regions would be detectable. Antiparticles are created everywhere in the universe where high-energy particle collisions take place. High-energy cosmic rays striking Earth's atmosphere (or any other matter in the Solar System) produce minute quantities of antiparticles in the resulting particle jets, which are immediately annihilated by contact with nearby matter. They may similarly be produced in regions like the center of the Milky Way and other galaxies, where very energetic celestial events occur (principally the interaction of relativistic jets with the interstellar medium). The presence of the resulting antimatter is detectable by the two gamma rays produced every time positrons annihilate with nearby matter. The frequency and wavelength of the gamma rays indicate that each carries 511 keV of energy (that is, the rest mass of an electron multiplied by c2). Observations by the European Space Agency's INTEGRAL satellite may explain the origin of a giant antimatter cloud surrounding the Galactic Center. The observations show that the cloud is asymmetrical and matches the pattern of X-ray binaries (binary star systems containing black holes or neutron stars), mostly on one side of the Galactic Center. While the mechanism is not fully understood, it is likely to involve the production of electron–positron pairs, as ordinary matter gains kinetic energy while falling into a stellar remnant. Antimatter may exist in relatively large amounts in far-away galaxies due to cosmic inflation in the primordial time of the universe. Antimatter galaxies, if they exist, are expected to have the same chemistry and absorption and emission spectra as normal-matter galaxies, and their astronomical objects would be observationally identical, making them difficult to distinguish. NASA is trying to determine if such galaxies exist by looking for X-ray and gamma ray signatures of annihilation events in colliding superclusters. In October 2017, scientists working on the BASE experiment at CERN reported a measurement of the antiproton magnetic moment to a precision of 1.5 parts per billion. It is consistent with the most precise measurement of the proton magnetic moment (also made by BASE in 2014), which supports the hypothesis of CPT symmetry. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter. 
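The 511 keV figure quoted above for annihilation gamma rays is simply the electron rest energy, m_e c2. A quick Python check with rounded constants (the variable names are chosen for the sketch):

m_e = 9.109e-31      # electron mass, kg
c   = 2.998e8        # speed of light, m/s
eV  = 1.602e-19      # joules per electronvolt
h   = 6.626e-34      # Planck constant, J s

rest_energy_J = m_e * c**2
print(rest_energy_J / eV / 1e3)     # ~511 keV per annihilation photon
print(h * c / rest_energy_J)        # ~2.4e-12 m, the corresponding gamma-ray wavelength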
Antimatter quantum interferometry has been first demonstrated in 2018 in the Positron Laboratory (L-NESS) of Rafael Ferragut in Como (Italy), by a group led by Marco Giammarchi. Natural production Positrons are produced naturally in β+ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle created by natural radioactivity (β− decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. In January 2011, research by the American Astronomical Society discovered antimatter (positrons) originating above thunderstorm clouds; positrons are produced in terrestrial gamma ray flashes created by electrons accelerated by strong electric fields in the clouds. Antiprotons have also been found to exist in the Van Allen Belts around the Earth by the PAMELA module. Antiparticles are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). It is hypothesized that during the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter, is called baryon asymmetry. The exact mechanism that produced this asymmetry during baryogenesis remains an unsolved problem. One of the necessary conditions for this asymmetry is the violation of CP symmetry, which has been experimentally observed in the weak interaction. Recent observations indicate black holes and neutron stars produce vast amounts of positron-electron plasma via the jets. Observation in cosmic rays Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. This antimatter cannot all have been created in the Big Bang, but is instead attributed to have been produced by cyclic processes at high energies. For instance, electron-positron pairs may be formed in pulsars, as a magnetized neutron star rotation cycle shears electron-positron pairs from the star surface. Therein the antimatter forms a wind that crashes upon the ejecta of the progenitor supernovae. This weathering takes place as "the cold, magnetized relativistic wind launched by the star hits the non-relativistically expanding ejecta, a shock wave system forms in the impact: the outer one propagates in the ejecta, while a reverse shock propagates back towards the star." The former ejection of matter in the outer shock wave and the latter production of antimatter in the reverse shock wave are steps in a space weather cycle. Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 10 GeV to 250 GeV. In September, 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of positron fraction up to 500 GeV was reported, showing that positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. 
The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak about 10 GeV. These results on interpretation have been suggested to be due to positron production in annihilation events of massive dark matter particles. Cosmic ray antiprotons also have a much higher energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy. There is an ongoing search for larger antimatter nuclei, such as antihelium nuclei (that is, anti-alpha particles), in cosmic rays. The detection of natural antihelium could imply the existence of large antimatter structures such as an antistar. A prototype of the AMS-02 designated AMS-01, was flown into space aboard the on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1×10−6 for the antihelium to helium flux ratio. AMS-02 revealed in December 2016 that it had discovered a few signals consistent with antihelium nuclei amidst several billion helium nuclei. The result remains to be verified, and , the team is trying to rule out contamination. Artificial production Positrons Positrons were reported in November 2008 to have been generated by Lawrence Livermore National Laboratory in large numbers. A laser drove electrons through a gold target's nuclei, which caused the incoming electrons to emit energy quanta that decayed into both matter and antimatter. Positrons were detected at a higher rate and in greater density than ever previously detected in a laboratory. Previous experiments made smaller quantities of positrons using lasers and paper-thin targets; newer simulations showed that short bursts of ultra-intense lasers and millimeter-thick gold are a far more effective source. In 2023, the production of the first electron-positron beam-plasma was reported by a collaboration led by researchers at University of Oxford working with the High-Radiation to Materials (HRMT) facility at CERN. The beam demonstrated the highest positron yield achieved so far in a laboratory setting. The experiment employed the 440 GeV proton beam, with protons, from the Super Proton Synchrotron, and irradiated a particle converter composed of carbon and tantalum. This yielded a total electron-positron pairs via a particle shower process. The produced pair beams have a volume that fills multiple Debye spheres and are thus able to sustain collective plasma oscillations. Antiprotons, antineutrons, and antinuclei The existence of the antiproton was experimentally confirmed in 1955 by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics. An antiproton consists of two up antiquarks and one down antiquark (). The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception of the antiproton having opposite electric charge and magnetic moment from the proton. Shortly afterwards, in 1956, the antineutron was discovered in proton–proton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by Bruce Cork and colleagues. In addition to antibaryons, anti-nuclei consisting of multiple bound antiprotons and antineutrons have been created. 
These are typically produced at energies far too high to form antimatter atoms (with bound positrons in place of electrons). In 1965, a group of researchers led by Antonino Zichichi reported production of nuclei of antideuterium at the Proton Synchrotron at CERN. At roughly the same time, observations of antideuterium nuclei were reported by a group of American physicists at the Alternating Gradient Synchrotron at Brookhaven National Laboratory. Antihydrogen atoms In 1995, CERN announced that it had successfully brought into existence nine hot antihydrogen atoms by implementing the SLAC/Fermilab concept during the PS210 experiment. The experiment was performed using the Low Energy Antiproton Ring (LEAR), and was led by Walter Oelert and Mario Macri. Fermilab soon confirmed the CERN findings by producing approximately 100 antihydrogen atoms at their facilities. The antihydrogen atoms created during PS210 and subsequent experiments (at both CERN and Fermilab) were extremely energetic and were not well suited to study. To resolve this hurdle, and to gain a better understanding of antihydrogen, two collaborations were formed in the late 1990s, namely, ATHENA and ATRAP. In 1999, CERN activated the Antiproton Decelerator, a device capable of decelerating antiprotons from to  – still too "hot" to produce study-effective antihydrogen, but a huge leap forward. In late 2002 the ATHENA project announced that they had created the world's first "cold" antihydrogen. The ATRAP project released similar results very shortly thereafter. The antiprotons used in these experiments were cooled by decelerating them with the Antiproton Decelerator, passing them through a thin sheet of foil, and finally capturing them in a Penning–Malmberg trap. The overall cooling process is workable, but highly inefficient; approximately 25 million antiprotons leave the Antiproton Decelerator and roughly 25,000 make it to the Penning–Malmberg trap, which is about 0.1% of the original amount. The antiprotons are still hot when initially trapped. To cool them further, they are mixed into an electron plasma. The electrons in this plasma cool via cyclotron radiation, and then sympathetically cool the antiprotons via Coulomb collisions. Eventually, the electrons are removed by the application of short-duration electric fields, leaving the antiprotons with energies less than . While the antiprotons are being cooled in the first trap, a small cloud of positrons is captured from radioactive sodium in a Surko-style positron accumulator. This cloud is then recaptured in a second trap near the antiprotons. Manipulations of the trap electrodes then tip the antiprotons into the positron plasma, where some combine with positrons to form antihydrogen. This neutral antihydrogen is unaffected by the electric and magnetic fields used to trap the charged positrons and antiprotons, and within a few microseconds the antihydrogen hits the trap walls, where it annihilates. Some hundreds of millions of antihydrogen atoms have been made in this fashion. In 2005, ATHENA disbanded and some of the former members (along with others) formed the ALPHA Collaboration, which is also based at CERN. The ultimate goal of this endeavour is to test CPT symmetry through comparison of the atomic spectra of hydrogen and antihydrogen (see hydrogen spectral series). Most of the sought-after high-precision tests of the properties of antihydrogen could only be performed if the antihydrogen were trapped, that is, held in place for a relatively long time.
While antihydrogen atoms are electrically neutral, the spins of their component particles produce a magnetic moment. These magnetic moments can interact with an inhomogeneous magnetic field; some of the antihydrogen atoms can be attracted to a magnetic minimum. Such a minimum can be created by a combination of mirror and multipole fields. Antihydrogen can be trapped in such a magnetic minimum (minimum-B) trap; in November 2010, the ALPHA collaboration announced that they had so trapped 38 antihydrogen atoms for about a sixth of a second. This was the first time that neutral antimatter had been trapped. On 26 April 2011, ALPHA announced that they had trapped 309 antihydrogen atoms, some for as long as 1,000 seconds (about 17 minutes). This was longer than neutral antimatter had ever been trapped before. ALPHA has used these trapped atoms to initiate research into the spectral properties of antihydrogen. In 2016, a new antiproton decelerator and cooler called ELENA (Extra Low ENergy Antiproton decelerator) was built. It takes the antiprotons from the antiproton decelerator and cools them to 90 keV, which is "cold" enough to study. This machine works by using high energy and accelerating the particles within the chamber. More than one hundred antiprotons can be captured per second, a huge improvement, but it would still take several thousand years to make a nanogram of antimatter. The biggest limiting factor in the large-scale production of antimatter is the availability of antiprotons. Recent data released by CERN states that, when fully operational, their facilities are capable of producing ten million antiprotons per minute. Assuming a 100% conversion of antiprotons to antihydrogen, it would take 100 billion years to produce 1 gram or 1 mole of antihydrogen (approximately atoms of anti-hydrogen). However, CERN only produces 1% of the anti-matter Fermilab does, and neither are designed to produce anti-matter. According to Gerald Jackson, using technology already in use today we are capable of producing and capturing 20 grams of anti-matter particles per year at a yearly cost of 670 million dollars per facility. Antihelium Antihelium-3 nuclei () were first observed in the 1970s in proton–nucleus collision experiments at the Institute for High Energy Physics by Y. Prockoshkin's group (Protvino near Moscow, USSR) and later created in nucleus–nucleus collision experiments. Nucleus–nucleus collisions produce antinuclei through the coalescence of antiprotons and antineutrons created in these reactions. In 2011, the STAR detector reported the observation of artificially created antihelium-4 nuclei (anti-alpha particles) () from such collisions. The Alpha Magnetic Spectrometer on the International Space Station has, as of 2021, recorded eight events that seem to indicate the detection of antihelium-3. Preservation Antimatter cannot be stored in a container made of ordinary matter because antimatter reacts with any matter it touches, annihilating itself and an equal amount of the container. Antimatter in the form of charged particles can be contained by a combination of electric and magnetic fields, in a device called a Penning trap. This device cannot, however, contain antimatter that consists of uncharged particles, for which atomic traps are used. In particular, such a trap may use the dipole moment (electric or magnetic) of the trapped particles. 
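The statement in the production discussion above that roughly 100 billion years would be needed to accumulate one gram of antihydrogen follows directly from the quoted rate of ten million antiprotons per minute, assuming every antiproton ends up in an anti-atom. A rough Python check (rounded Avogadro constant; the names are invented for the sketch):

avogadro = 6.022e23             # antihydrogen atoms in roughly one gram (one mole)
rate_per_minute = 1e7           # antiprotons per minute, figure quoted above
minutes = avogadro / rate_per_minute
years = minutes / (60 * 24 * 365.25)
print(f"{years:.2e} years")     # ~1.1e11, i.e. on the order of 100 billion years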
At high vacuum, the matter or antimatter particles can be trapped and cooled with slightly off-resonant laser radiation using a magneto-optical trap or magnetic trap. Small particles can also be suspended with optical tweezers, using a highly focused laser beam. In 2011, CERN scientists were able to preserve antihydrogen for approximately 17 minutes. The record for storing antiparticles is currently held by the TRAP experiment at CERN: antiprotons were kept in a Penning trap for 405 days. A proposal was made in 2018 to develop containment technology advanced enough to contain a billion anti-protons in a portable device to be driven to another lab for further experimentation. Cost Scientists claim that antimatter is the costliest material to make. In 2006, Gerald Smith estimated $250 million could produce 10 milligrams of positrons (equivalent to $25 billion per gram); in 1999, NASA gave a figure of $62.5 trillion per gram of antihydrogen. This is because production is difficult (only very few antiprotons are produced in reactions in particle accelerators) and because there is higher demand for other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss francs to produce about 1 billionth of a gram (the amount used so far for particle/antiparticle collisions). In comparison, to produce the first atomic weapon, the cost of the Manhattan Project was estimated at $23 billion in inflation-adjusted 2007 dollars. Several studies funded by NASA Innovative Advanced Concepts are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belt of the Earth, and ultimately the belts of gas giants like Jupiter, ideally at a lower cost per gram. Uses Medical Matter–antimatter reactions have practical applications in medical imaging, such as positron emission tomography (PET). In positive beta decay, a nuclide loses surplus positive charge by emitting a positron (in the same event, a proton becomes a neutron, and a neutrino is also emitted). Nuclides with surplus positive charge are easily made in a cyclotron and are widely generated for medical use. Antiprotons have also been shown within laboratory experiments to have the potential to treat certain cancers, in a similar method currently used for ion (proton) therapy. Fuel Isolated and stored antimatter could be used as a fuel for interplanetary or interstellar travel as part of an antimatter-catalyzed nuclear pulse propulsion or another antimatter rocket. Since the energy density of antimatter is higher than that of conventional fuels, an antimatter-fueled spacecraft would have a higher thrust-to-weight ratio than a conventional spacecraft. If matter–antimatter collisions resulted only in photon emission, the entire rest mass of the particles would be converted to kinetic energy. The energy per unit mass (about 90 petajoules per kilogram) is about 10 orders of magnitude greater than chemical energies, and about 3 orders of magnitude greater than the nuclear potential energy that can be liberated, today, using nuclear fission (about per fission reaction or ), and about 2 orders of magnitude greater than the best possible results expected from fusion (about for the proton–proton chain). The reaction of 1 kg of antimatter with 1 kg of matter would produce about 180 petajoules of energy (by the mass–energy equivalence formula, E = mc2), or the rough equivalent of 43 megatons of TNT – slightly less than the yield of the 27,000 kg Tsar Bomba, the largest thermonuclear weapon ever detonated.
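The figures in the preceding paragraph can be verified with E = mc2. A minimal Python sketch, taking 1 megaton of TNT as 4.184e15 J (the variable names are invented for the sketch):

c = 2.998e8                     # speed of light, m/s
mass_total = 2.0                # kg: 1 kg of matter plus 1 kg of antimatter
energy_J = mass_total * c**2    # all rest mass converted to energy
print(energy_J)                 # ~1.8e17 J, i.e. about 180 petajoules
print(energy_J / 4.184e15)      # ~43 megatons of TNT equivalent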
Not all of that energy can be utilized by any realistic propulsion technology because of the nature of the annihilation products. While electron–positron reactions result in gamma ray photons, these are difficult to direct and use for thrust. In reactions between protons and antiprotons, their energy is converted largely into relativistic neutral and charged pions. The neutral pions decay almost immediately (with a lifetime of 85 attoseconds) into high-energy photons, but the charged pions decay more slowly (with a lifetime of 26 nanoseconds) and can be deflected magnetically to produce thrust. Charged pions ultimately decay into a combination of neutrinos (carrying about 22% of the energy of the charged pions) and unstable charged muons (carrying about 78% of the charged pion energy), with the muons then decaying into a combination of electrons, positrons and neutrinos (cf. muon decay; the neutrinos from this decay carry about 2/3 of the energy of the muons, meaning that from the original charged pions, the total fraction of their energy converted to neutrinos by one route or another would be about ). Weapons Antimatter has been considered as a trigger mechanism for nuclear weapons. A major obstacle is the difficulty of producing antimatter in large enough quantities, and there is no evidence that it will ever be feasible. Nonetheless, the U.S. Air Force funded studies of the physics of antimatter in the Cold War, and began considering its possible use in weapons, not just as a trigger, but as the explosive itself.
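Using only the rounded fractions quoted in the propulsion paragraph above (about 22% of the charged-pion energy to the prompt neutrino, about 78% to the muon, and about 2/3 of the muon energy to neutrinos in muon decay), the overall share of the charged pions' energy ending up in neutrinos can be estimated. This Python sketch is just that arithmetic and inherits the roughness of those figures:

prompt_neutrino = 0.22          # fraction of charged-pion energy carried by the decay neutrino
to_muon = 0.78                  # fraction carried by the muon
muon_to_neutrinos = 2 / 3       # fraction of the muon energy going to neutrinos in muon decay

total_to_neutrinos = prompt_neutrino + to_muon * muon_to_neutrinos
print(round(total_to_neutrinos, 2))   # ~0.74 of the charged pions' energy is lost to neutrinos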
Physical sciences
Antimatter
null
1327
https://en.wikipedia.org/wiki/Antiparticle
Antiparticle
In particle physics, every type of particle of "ordinary" matter (as opposed to antimatter) is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the positron (also known as an antielectron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron. Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in matter usually interacted with in daily life). The other (usually given the prefix "anti-") is designated the antiparticle. Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography. The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of charge parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate. The question about how the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter remains an unanswered one, and explanations so far are not truly satisfactory, overall. Because charge is conserved, it is not possible to create an antiparticle without either destroying another particle of the same charge (as is for instance the case when antiparticles are produced naturally via beta decay or the collision of cosmic rays with Earth's atmosphere), or by the simultaneous creation of both a particle and its antiparticle (pair production), which can occur in particle accelerators such as the Large Hadron Collider at CERN. Particles and their antiparticles have equal and opposite charges, so that an uncharged particle also gives rise to an uncharged antiparticle. In many cases, the antiparticle and the particle coincide: pairs of photons, Z0 bosons,  mesons, and hypothetical gravitons and some hypothetical WIMPs all self-annihilate. However, electrically neutral particles need not be identical to their antiparticles: for example, the neutron and antineutron are distinct. History Experiment In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber – a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing the radius of curling of its cloud-chamber track in a magnetic field. Positrons, because of the direction that their paths curled, were at first mistaken for electrons travelling in the opposite direction. 
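The curvature measurement described above rests on the relation r = p/(qB) between the radius of a track, the particle's momentum and the magnetic field. The following Python sketch illustrates it for an electron or positron; the 1 T field and the 10 MeV kinetic energy are invented for the example, and relativistic momentum is used:

import math

m_e_c2_MeV = 0.511      # electron/positron rest energy, MeV
e = 1.602e-19           # elementary charge, C
c = 2.998e8             # speed of light, m/s
B = 1.0                 # tesla, assumed field
T = 10.0                # MeV, assumed kinetic energy of the track

E_total = T + m_e_c2_MeV
p_MeV_over_c = math.sqrt(E_total**2 - m_e_c2_MeV**2)   # relativistic momentum in MeV/c
p_SI = p_MeV_over_c * 1e6 * e / c                      # convert MeV/c to kg m/s
radius = p_SI / (e * B)                                # r = p / (qB)
print(radius)    # ~0.035 m; electron and positron curl with the same radius, opposite sense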
Positron paths in a cloud chamber trace the same helical path as an electron but rotate in the opposite direction with respect to the magnetic field, because a positron has the same magnitude of charge-to-mass ratio as an electron but opposite charge and, therefore, an oppositely signed charge-to-mass ratio. The antiproton and antineutron were found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley. Since then, the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps. Dirac hole theory Solutions of the Dirac equation contain negative energy quantum states. As a result, an electron could always radiate energy and fall into a negative energy state. Even worse, it could keep radiating infinite amounts of energy because there were infinitely many negative energy states available. To prevent this unphysical situation from happening, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a hole in the sea that would act exactly like a positive-energy electron with a reversed charge. These holes were interpreted as "negative-energy electrons" by Paul Dirac and mistakenly identified with protons in his 1930 paper A Theory of Electrons and Protons. However, these "negative-energy electrons" turned out to be positrons, and not protons. This picture implied an infinite negative charge for the universe, a problem of which Dirac was aware. Dirac tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to the electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction e− + p → γ + γ, where an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm, however, proved that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory. Within Dirac's theory, the problem of infinite charge of the universe remains. Some bosons also have antiparticles, but since bosons do not obey the Pauli exclusion principle (only fermions do), hole theory does not work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems by describing antimatter as negative energy states of the same underlying matter field, i.e. particles moving backwards in time. Elementary antiparticles Composite antiparticles Particle–antiparticle annihilation If a particle and antiparticle are in the appropriate quantum states, then they can annihilate each other and produce other particles. Reactions such as e− + e+ → 2γ (the two-photon annihilation of an electron-positron pair) are an example.
The single-photon annihilation of an electron-positron pair,  +  → , cannot occur in free space because it is impossible to conserve energy and momentum together in this process. However, in the Coulomb field of a nucleus the translational invariance is broken and single-photon annihilation may occur. The reverse reaction (in free space, without an atomic nucleus) is also impossible for this reason. In quantum field theory, this process is allowed only as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation in which a one particle quantum state may fluctuate into a two particle state and back. These processes are important in the vacuum state and renormalization of a quantum field theory. It also opens the way for neutral particle mixing through processes such as the one pictured here, which is a complicated example of mass renormalization. Properties Quantum states of a particle and an antiparticle are interchanged by the combined application of charge conjugation , parity and time reversal . and are linear, unitary operators, is antilinear and antiunitary, . If denotes the quantum state of a particle with momentum and spin whose component in the z-direction is , then one has where denotes the charge conjugate state, that is, the antiparticle. In particular a massive particle and its antiparticle transform under the same irreducible representation of the Poincaré group which means the antiparticle has the same mass and the same spin. If , and can be defined separately on the particles and antiparticles, then where the proportionality sign indicates that there might be a phase on the right hand side. As anticommutes with the charges, , particle and antiparticle have opposite electric charges q and -q. Quantum field theory This section draws upon the ideas, language and notation of canonical quantization of a quantum field theory. One may try to quantize an electron field without mixing the annihilation and creation operators by writing where we use the symbol k to denote the quantum numbers p and σ of the previous section and the sign of the energy, E(k), and ak denotes the corresponding annihilation operators. Of course, since we are dealing with fermions, we have to have the operators satisfy canonical anti-commutation relations. However, if one now writes down the Hamiltonian then one sees immediately that the expectation value of H need not be positive. This is because E(k) can have any sign whatsoever, and the combination of creation and annihilation operators has expectation value 1 or 0. So one has to introduce the charge conjugate antiparticle field, with its own creation and annihilation operators satisfying the relations where k has the same p, and opposite σ and sign of the energy. Then one can rewrite the field in the form where the first sum is over positive energy states and the second over those of negative energy. The energy becomes where E0 is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, i.e., and . Then the energy of the vacuum is exactly E0. Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of ak and bk shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion. This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. 
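The field expansion and Hamiltonian that the paragraph above describes in words take, in one standard textbook convention, roughly the following form. This is a schematic LaTeX sketch (discrete-mode notation, normalization factors omitted), not a reconstruction of the exact formulas missing from the source:

\psi(x) \;=\; \sum_{E_k>0} a_k\,\psi_k(x) \;+\; \sum_{E_k<0} b_{\bar k}^{\dagger}\,\psi_k(x),
\qquad
H \;=\; \sum_{E_k>0} E_k\left(a_k^{\dagger}a_k + b_k^{\dagger}b_k\right) + E_0,
\qquad
\{a_k,a_{k'}^{\dagger}\} = \{b_k,b_{k'}^{\dagger}\} = \delta_{kk'},

where $\bar k$ labels the conjugate mode (same momentum, opposite spin projection and energy sign), $a_k$ annihilates a particle, $b_k$ annihilates an antiparticle, and $E_0$ is the infinite constant absorbed by measuring energies relative to the vacuum, which makes $H$ positive definite.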
If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore, real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons. Feynman–Stückelberg interpretation By considering the propagation of the negative energy modes of the electron field backward in time, Ernst Stückelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. In Feynman diagrams, anti-particles are shown traveling backwards in time relative to normal matter, and vice versa. This technique is the most widespread method of computing amplitudes in quantum field theory today. Since this picture was first developed by Stückelberg, and acquired its modern form in Feynman's work, it is called the Feynman–Stückelberg interpretation of antiparticles to honor both scientists.
Physical sciences
Antimatter
null
1335
https://en.wikipedia.org/wiki/Associative%20property
Associative property
In mathematics, the associative property is a property of some binary operations that means that rearranging the parentheses in an expression will not change the result. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs. Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is (after rewriting the expression with parentheses and in infix notation if necessary), rearranging the parentheses in such an expression will not change its value. Consider the following equations: Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that "addition and multiplication of real numbers are associative operations". Associativity is not the same as commutativity, which addresses whether the order of two operands affects the result. For example, the order does not matter in the multiplication of real numbers, that is, , so we say that the multiplication of real numbers is a commutative operation. However, operations such as function composition and matrix multiplication are associative, but not (generally) commutative. Associative operations are abundant in mathematics; in fact, many algebraic structures (such as semigroups and categories) explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation, and the vector cross product. In contrast to the theoretical properties of real numbers, the addition of floating point numbers in computer science is not associative, and the choice of how to associate an expression can have a significant effect on rounding error. Definition Formally, a binary operation on a set is called associative if it satisfies the associative law: , for all in . Here, ∗ is used to replace the symbol of the operation, which may be any symbol, and even the absence of symbol (juxtaposition) as for multiplication. , for all in . The associative law can also be expressed in functional notation thus: Generalized associative law If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law. The number of possible bracketings is just the Catalan number, , for n operations on n+1 values. For instance, a product of 3 operations on 4 elements may be written (ignoring permutations of the arguments), in possible ways: If the product operation is associative, the generalized associative law says that all these expressions will yield the same result. So unless the expression with omitted parentheses already has a different meaning (see below), the parentheses can be considered unnecessary and "the" product can be written unambiguously as As the number of elements increases, the number of possible ways to insert parentheses grows quickly, but they remain unnecessary for disambiguation. An example where this does not work is the logical biconditional . It is associative; thus, is equivalent to , but most commonly means , which is not equivalent. Examples Some examples of associative operations include the following. 
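Before those examples, the generalized associative law and the Catalan-number count of bracketings discussed above can be checked by brute force. A short Python sketch (the function names are invented for the sketch):

from math import comb

def all_bracketings(items, op):
    # every value obtainable by fully parenthesizing `items` with the binary `op`
    if len(items) == 1:
        return [items[0]]
    results = []
    for split in range(1, len(items)):
        for left in all_bracketings(items[:split], op):
            for right in all_bracketings(items[split:], op):
                results.append(op(left, right))
    return results

def catalan(n):
    return comb(2 * n, n) // (n + 1)

values = [1, 2, 3, 4, 5]                          # n + 1 = 5 operands, n = 4 operations
results = all_bracketings(values, lambda a, b: a + b)
print(len(results), catalan(4))                   # 14 14: one result per bracketing
print(set(results))                               # {15}: addition is associative, so all bracketings agree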
Examples Some examples of associative operations include the following. Propositional logic Rule of replacement In standard truth-functional propositional logic, association, or associativity, refers to two valid rules of replacement. The rules allow one to move parentheses in logical expressions in logical proofs. The rules (using logical connectives notation) are: (P ∨ (Q ∨ R)) ⇔ ((P ∨ Q) ∨ R) and (P ∧ (Q ∧ R)) ⇔ ((P ∧ Q) ∧ R), where "⇔" is a metalogical symbol representing "can be replaced in a proof with". Truth functional connectives Associativity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that associativity is a property of particular connectives. The following (and their converses, since ↔ is commutative) are truth-functional tautologies. Associativity of disjunction: ((P ∨ Q) ∨ R) ↔ (P ∨ (Q ∨ R)). Associativity of conjunction: ((P ∧ Q) ∧ R) ↔ (P ∧ (Q ∧ R)). Associativity of equivalence: ((P ↔ Q) ↔ R) ↔ (P ↔ (Q ↔ R)). Joint denial is an example of a truth functional connective that is not associative. Non-associative operation A binary operation ∗ on a set S that does not satisfy the associative law is called non-associative. Symbolically, (x ∗ y) ∗ z ≠ x ∗ (y ∗ z) for some x, y, z in S. For such an operation the order of evaluation does matter. For example: Subtraction: (5 − 3) − 2 ≠ 5 − (3 − 2). Division: (4 / 2) / 2 ≠ 4 / (2 / 2). Exponentiation: 2^(1^2) ≠ (2^1)^2. Vector cross product: i × (i × j) = −j, whereas (i × i) × j = 0. Also, although addition is associative for finite sums, it is not associative inside infinite sums (series). For example, (1 − 1) + (1 − 1) + (1 − 1) + ... = 0, whereas 1 + (−1 + 1) + (−1 + 1) + ... = 1. Some non-associative operations are fundamental in mathematics. They appear often as the multiplication in structures called non-associative algebras, which have also an addition and a scalar multiplication. Examples are the octonions and Lie algebras. In Lie algebras, the multiplication satisfies the Jacobi identity instead of the associative law; this allows abstracting the algebraic nature of infinitesimal transformations. Other examples are quasigroups, quasifields, non-associative rings, and commutative non-associative magmas. Nonassociativity of floating point calculation In mathematics, addition and multiplication of real numbers are associative. By contrast, in computer science, addition and multiplication of floating point numbers are not associative, as different rounding errors may be introduced when dissimilar-sized values are joined in a different order. To illustrate this, consider a floating point representation with a 4-bit significand: (1.000₂ × 2^0 + 1.000₂ × 2^0) + 1.000₂ × 2^4 = 1.001₂ × 2^4 = 18, whereas 1.000₂ × 2^0 + (1.000₂ × 2^0 + 1.000₂ × 2^4) = 1.000₂ × 2^4 = 16, because the intermediate sum 17 cannot be represented with a 4-bit significand and rounds to 16. Even though most computers compute with 24 or 53 bits of significand, this is still an important source of rounding error, and approaches such as the Kahan summation algorithm are ways to minimise the errors. It can be especially problematic in parallel computing.
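The floating-point behaviour described above is easy to reproduce in ordinary double precision (53-bit significand). The snippet below is a minimal sketch: the first part shows that regrouping changes the rounded result, and kahan_sum is one common way of writing the Kahan (compensated) summation algorithm mentioned in the text; the helper name and the test values are assumptions for illustration, not code from any particular library.

# Regrouping changes the rounded result of floating-point addition.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))       # False: 0.6000000000000001 vs 0.6

print((1e16 + 1.0) + 1.0)               # 1e+16: each 1.0 is rounded away against the huge term
print(1e16 + (1.0 + 1.0))               # 1.0000000000000002e+16: grouping the small terms first preserves them

def kahan_sum(values):
    # Compensated (Kahan) summation: carry along the low-order bits lost at each step.
    total = 0.0
    compensation = 0.0                  # running record of what rounding discarded
    for x in values:
        y = x - compensation            # re-inject the previously lost low-order part
        t = total + y                   # large + small: low-order bits of y may be lost here
        compensation = (t - total) - y  # recover exactly what was lost in that addition
        total = t
    return total

values = [1e16] + [1.0] * 10
print(sum(values))                      # 1e+16: the naive left-to-right sum drops every 1.0
print(kahan_sum(values))                # 1.000000000000001e+16: the exact sum, 1e16 + 10

In parallel computing the grouping of the partial sums is decided by the scheduler, which is why reduction results can differ from run to run unless a fixed-order or compensated reduction is used.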
Notation for non-associative operations In general, parentheses must be used to indicate the order of evaluation if a non-associative operation appears more than once in an expression (unless the notation specifies the order in another way, as the two-dimensional layout of a built-up fraction does). However, mathematicians agree on a particular order of evaluation for several common non-associative operations. This is simply a notational convention to avoid parentheses. A left-associative operation is a non-associative operation that is conventionally evaluated from left to right, i.e., x ∗ y ∗ z = (x ∗ y) ∗ z, while a right-associative operation is conventionally evaluated from right to left: x ∗ y ∗ z = x ∗ (y ∗ z). Both left-associative and right-associative operations occur. Left-associative operations include the following. Subtraction and division of real numbers, so that x − y − z = (x − y) − z and x / y / z = (x / y) / z. Function application, so that f x y means (f x) y. This notation can be motivated by the currying isomorphism, which enables partial application. Right-associative operations include the following. Exponentiation of real numbers in superscript notation, so that a tower of powers x^y^z is read as x^(y^z). Exponentiation is commonly used with brackets or right-associatively because a repeated left-associative exponentiation operation is of little use: repeated powers would mostly be rewritten with multiplication, since (x^y)^z = x^(yz). Formatted correctly, the superscript inherently behaves as a set of parentheses; e.g. in an expression such as 2^(x+3) written with x+3 as a superscript, the addition is performed before the exponentiation despite there being no explicit parentheses wrapped around it. Thus given an expression such as x^(y^z), the full exponent y^z of the base x is evaluated first. However, in some contexts, especially in handwriting, the difference between x^(y^z), x^(yz), and (x^y)^z can be hard to see. In such a case, right-associativity is usually implied. Function definition, so that a type such as Z → Z → Z is read as Z → (Z → Z). Using right-associative notation for these operations can be motivated by the Curry–Howard correspondence and by the currying isomorphism. Non-associative operations for which no conventional evaluation order is defined include the following. Exponentiation of real numbers in infix notation. Knuth's up-arrow operators. Taking the cross product of three vectors. Taking the pairwise average of real numbers. Taking the relative complement of sets: (A ∖ B) ∖ C is not the same as A ∖ (B ∖ C). (Compare material nonimplication in logic.) History William Rowan Hamilton seems to have coined the term "associative property" around 1844, a time when he was contemplating the non-associative algebra of the octonions he had learned about from John T. Graves.
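As a concrete footnote to the notational conventions above, most programming languages hard-code the same defaults. The short sketch below shows standard Python behaviour with example values chosen for illustration: subtraction and division parse left-associatively, exponentiation parses right-associatively, and the left-associative convention is the same thing as an explicit left fold.

from functools import reduce
import operator

# Subtraction and division are parsed left-associatively ...
print(10 - 3 - 2)                        # 5, i.e. (10 - 3) - 2
print(64 / 4 / 2)                        # 8.0, i.e. (64 / 4) / 2

# ... which is the same as a left fold over the operands.
print(reduce(operator.sub, [10, 3, 2]))  # 5

# Exponentiation is parsed right-associatively.
print(2 ** 3 ** 2)                       # 512, i.e. 2 ** (3 ** 2)
print((2 ** 3) ** 2)                     # 64, the left-associative reading

None of this makes the operations associative; the convention only fixes which of the possible bracketings an unparenthesized expression denotes.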
Mathematics
Algebra
null
1346
https://en.wikipedia.org/wiki/Apatosaurus
Apatosaurus
Apatosaurus (; meaning "deceptive lizard") is a genus of herbivorous sauropod dinosaur that lived in North America during the Late Jurassic period. Othniel Charles Marsh described and named the first-known species, A. ajax, in 1877, and a second species, A. louisae, was discovered and named by William H. Holland in 1916. Apatosaurus lived about 152 to 151 million years ago (mya), during the late Kimmeridgian to early Tithonian age, and are now known from fossils in the Morrison Formation of modern-day Colorado, Oklahoma, New Mexico, Wyoming, and Utah in the United States. Apatosaurus had an average length of , and an average mass of . A few specimens indicate a maximum length of 11–30% greater than average and a mass of approximately . The cervical vertebrae of Apatosaurus are less elongated and more heavily constructed than those of Diplodocus, a diplodocid like Apatosaurus, and the bones of the leg are much stockier despite being longer, implying that Apatosaurus was a more robust animal. The tail was held above the ground during normal locomotion. Apatosaurus had a single claw on each forelimb and three on each hindlimb. The Apatosaurus skull, long thought to be similar to Camarasaurus, is much more similar to that of Diplodocus. Apatosaurus was a generalized browser that likely held its head elevated. To lighten its vertebrae, Apatosaurus had air sacs that made the bones internally full of holes. Like that of other diplodocids, its tail may have been used as a whip to create loud noises, or, as more recently suggested, as a sensory organ. The skull of Apatosaurus was confused with that of Camarasaurus and Brachiosaurus until 1909, when the holotype of A. louisae was found, and a complete skull just a few meters away from the front of the neck. Henry Fairfield Osborn disagreed with this association, and went on to mount a skeleton of Apatosaurus with a Camarasaurus skull cast. Apatosaurus skeletons were mounted with speculative skull casts until 1970, when McIntosh showed that more robust skulls assigned to Diplodocus were more likely from Apatosaurus. Apatosaurus is a genus in the family Diplodocidae. It is one of the more basal genera, with only Amphicoelias and possibly a new, unnamed genus more primitive. Although the subfamily Apatosaurinae was named in 1929, the group was not used validly until an extensive 2015 study. Only Brontosaurus is also in the subfamily, with the other genera being considered synonyms or reclassified as diplodocines. Brontosaurus has long been considered a junior synonym of Apatosaurus; its type species was reclassified as A.excelsus in 1903. A 2015 study concluded that Brontosaurus is a valid genus of sauropod distinct from Apatosaurus, but not all paleontologists agree with this division. As it existed in North America during the late Jurassic, Apatosaurus would have lived alongside dinosaurs such as Allosaurus, Camarasaurus, Diplodocus, and Stegosaurus. Description Apatosaurus was a large, long-necked, quadrupedal animal with a long, whip-like tail. Its forelimbs were slightly shorter than its hindlimbs. Most size estimates are based on specimen CM3018, the type specimen of A.louisae, reaching in length and in body mass. A 2015 study that estimated the mass of volumetric models of Dreadnoughtus, Apatosaurus, and Giraffatitan estimates CM3018 at , similar in mass to Dreadnoughtus. Some specimens of A.ajax (such as OMNH1670) represent individuals 1130% longer, suggesting masses twice that of CM3018 or , potentially rivaling the largest titanosaurs. 
However, the upper size estimate of OMNH1670 is likely an exaggeration, with the size estimates revised in 2020 at in length and in body mass based on volumetric analysis. The skull is small in relation to the size of the animal. The jaws are lined with spatulate (chisel-like) teeth suited to an herbivorous diet. The snout of Apatosaurus and similar diplodocoids is squared, with only Nigersaurus having a squarer skull. The braincase of Apatosaurus is well preserved in specimen BYU17096, which also preserved much of the skeleton. A phylogenetic analysis found that the braincase had a morphology similar to those of other diplodocoids. Some skulls of Apatosaurus have been found still in articulation with their teeth. Those teeth that have the enamel surface exposed do not show any scratches on the surface; instead, they display a sugary texture and little wear. Like those of other sauropods, the neck vertebrae are deeply bifurcated; they carried neural spines with a large trough in the middle, resulting in a wide, deep neck. The vertebral formula for the holotype of A.louisae is 15cervicals, 10dorsals, 5sacrals, and 82caudals. The caudal vertebra number may vary, even within species. The cervical vertebrae of Apatosaurus and Brontosaurus are stouter and more robust than those of other diplodocids and were found to be most similar to Camarasaurus by Charles Whitney Gilmore. In addition, they support cervical ribs that extend farther towards the ground than in diplodocines, and have vertebrae and ribs that are narrower towards the top of the neck, making the neck nearly triangular in cross-section. In Apatosaurus louisae, the atlas-axis complex of the first cervicals is nearly fused. The dorsal ribs are not fused or tightly attached to their vertebrae and are instead loosely articulated. Apatosaurus has ten dorsal ribs on either side of the body. The large neck was filled with an extensive system of weight-saving air sacs. Apatosaurus, like its close relative Supersaurus, has tall neural spines, which make up more than half the height of the individual bones of its vertebrae. The shape of the tail is unusual for a diplodocid; it is comparatively slender because of the rapidly decreasing height of the vertebral spines with increasing distance from the hips. Apatosaurus also had very long ribs compared to most other diplodocids, giving it an unusually deep chest. As in other diplodocids, the tail transformed into a whip-like structure towards the end. The limb bones are also very robust. Within Apatosaurinae, the scapula of Apatosaurus louisae is intermediate in morphology between those of A.ajax and Brontosaurus excelsus. The arm bones are stout, so the humerus of Apatosaurus resembles that of Camarasaurus, as well as Brontosaurus. However, the humeri of Brontosaurus and A.ajax are more similar to each other than they are to A.louisae. In 1936, Charles Gilmore noted that previous reconstructions of Apatosaurus forelimbs erroneously proposed that the radius and ulna could cross; in life they would have remained parallel. Apatosaurus had a single large claw on each forelimb, a feature shared by all sauropods more derived than Shunosaurus. The first three toes had claws on each hindlimb. The phalangeal formula is 2-1-1-1-1, meaning the innermost finger (phalanx) on the forelimb has two bones and the next has one. The single manual claw bone (ungual) is slightly curved and squarely truncated on the anterior end. 
The pelvic girdle includes the robust ilia, and the fused (co-ossified) pubes and ischia. The femora of Apatosaurus are very stout and represent some of the most robust femora of any member of Sauropoda. The tibia and fibula bones are different from the slender bones of Diplodocus but are nearly indistinguishable from those of Camarasaurus. The fibula is longer and slenderer than the tibia. The foot of Apatosaurus has three claws on the innermost digits; the digit formula is 3-4-5-3-2. The first metatarsal is the stoutest, a feature shared among diplodocids. Discovery and species Initial discovery The first Apatosaurus fossils were discovered by Arthur Lakes, a local miner, and his friend Henry C. Beckwith in the spring of 1877 in Morrison, a town in the eastern foothills of the Rocky Mountains in Jefferson County, Colorado. Arthur Lakes wrote to Othniel Charles Marsh, Professor of Paleontology at Yale University, and Edward Drinker Cope, a paleontologist based in Philadelphia, about the discovery until eventually collecting several fossils and sending them to both paleontologists. Marsh named Atlantosaurus montanus based on some of the fossils sent and hired Lakes to collect the rest of the material at Morrison and send it to Yale, while Cope attempted to hire Lakes as well but was rejected. One of the best specimens collected by Lakes in 1877 was a well preserved partial postcranial skeleton, including many vertebrae, and a partial braincase (YPM VP 1860), which was sent to Marsh and named Apatosaurus ajax in November 1877. The composite term Apatosaurus comes from the Greek words ()/ () meaning "deception"/"deceptive", and () meaning "lizard"; thus, "deceptive lizard". Marsh gave it this name based on the chevron bones, which are dissimilar to those of other dinosaurs; instead, the chevron bones of Apatosaurus showed similarities with those of mosasaurs, most likely that of the representative species Mosasaurus. By the end of excavations at Lakes' quarry in Morrison, several partial specimens of Apatosaurus had been collected, but only the type specimen of A. ajax can be confidently referred to the species. During excavation and transportation, the bones of the holotype skeleton were mixed with those of another Apatosaurine individual originally described as Atlantosaurus immanis; as a consequence, some elements cannot be ascribed to either specimen with confidence. Marsh distinguished the new genus Apatosaurus from Atlantosaurus on the basis of the number of sacral vertebrae, with Apatosaurus possessing three and Atlantosaurus four. Recent research shows that traits usually used to distinguish taxa at this time were actually widespread across several taxa, causing many of the taxa named to be invalid, like Atlantosaurus. Two years later, Marsh announced the discovery of a larger and more complete specimen (YPM VP 1980) from Como Bluff, Wyoming, he gave this specimen the name Brontosaurus excelsus. Also at Como Bluff, the Hubbell brothers working for Edward Drinker Cope collected a tibia, fibula, scapula, and several caudal vertebrae along with other fragments belonging to Apatosaurus in 1877–78 at Cope's Quarry 5 at the site. Later in 1884, Othniel Marsh named Diplodocus lacustris based on a chimeric partial dentary, snout, and several teeth collected by Lakes in 1877 at Morrison. In 2013, it was suggested that the dentary of D. lacustris and its teeth were actually from Apatosaurus ajax based on its proximity to the type braincase of A. ajax. 
All specimens currently considered Apatosaurus were from the Morrison Formation, the location of the excavations of Marsh and Cope. Second Dinosaur Rush and skull issue After the end of the Bone Wars, many major institutions in the eastern United States were inspired by the depictions and finds by Marsh and Cope to assemble their own dinosaur fossil collections. The competition to mount the first sauropod skeleton specifically was the most intense, with the American Museum of Natural History, Carnegie Museum of Natural History, and Field Museum of Natural History all sending expeditions to the west to find the most complete sauropod specimen, bring it back to the home institution, and mount it in their fossil halls. The American Museum of Natural History was the first to launch an expedition, finding a well-preserved skeleton (AMNH 460) which is occasionally assigned to Apatosaurus and is considered nearly complete; only the head, feet, and sections of the tail are missing, and it was the first sauropod skeleton mounted. The specimen was found north of Medicine Bow, Wyoming, in 1898 by Walter Granger, and took the entire summer to extract. To complete the mount, sauropod feet that were discovered at the same quarry and a tail fashioned to appear as Marsh believed it should (but which had too few vertebrae) were added. In addition, a sculpted model of what the museum thought the skull of this massive creature might look like was made. This was not a delicate skull like that of Diplodocus (which was later found to be more accurate) but was based on "the biggest, thickest, strongest skull bones, lower jaws and tooth crowns from three different quarries". These skulls were likely those of Camarasaurus, the only other sauropod for which good skull material was known at the time. The mount construction was overseen by Adam Hermann, who failed to find Apatosaurus skulls. Hermann was forced to sculpt a stand-in skull by hand. Osborn said in a publication that the skull was "largely conjectural and based on that of Morosaurus" (now Camarasaurus). In 1903, Elmer Riggs published a study that described a well-preserved skeleton of a diplodocid from the Grand River Valley near Fruita, Colorado, Field Museum of Natural History specimen P25112. Riggs thought that the deposits were similar in age to those of the Como Bluff in Wyoming from which Marsh had described Brontosaurus. Most of the skeleton was found, and after comparison with both Brontosaurus and Apatosaurus ajax, Riggs realized that the holotype of A. ajax was immature, and thus the features distinguishing the genera were not valid. Since Apatosaurus was the earlier name, Brontosaurus should be considered a junior synonym of Apatosaurus. Because of this, Riggs recombined Brontosaurus excelsus as Apatosaurus excelsus. Based on comparisons with other species proposed to belong to Apatosaurus, Riggs also determined that the Field Columbian Museum specimen was likely most similar to A. excelsus. Despite Riggs' publication, Henry Fairfield Osborn, who was a strong opponent of Marsh and his taxa, labeled the Apatosaurus mount of the American Museum of Natural History Brontosaurus. Because of this decision, the name Brontosaurus was commonly used outside of scientific literature for what Riggs considered Apatosaurus, and the museum's popularity meant that Brontosaurus became one of the best known dinosaurs, even though it was invalid throughout nearly all of the 20th and early 21st centuries.
It was not until 1909 that an Apatosaurus skull was found during the first expedition, led by Earl Douglass, to what would become known as the Carnegie Quarry at Dinosaur National Monument. The skull was found a short distance from a skeleton (specimen CM3018) identified as the new species Apatosaurus louisae, named after Louise Carnegie, wife of Andrew Carnegie, who funded field research to find complete dinosaur skeletons in the American West. The skull was designated CM11162; it was very similar to the skull of Diplodocus. Another smaller skeleton of A.louisae was found nearby CM11162 and CM3018. The skull was accepted as belonging to the Apatosaurus specimen by Douglass and Carnegie Museum director William H. Holland, although other scientistsmost notably Osbornrejected this identification. Holland defended his view in 1914 in an address to the Paleontological Society of America, yet he left the Carnegie Museum mount headless. While some thought Holland was attempting to avoid conflict with Osborn, others suspected Holland was waiting until an articulated skull and neck were found to confirm the association of the skull and skeleton. After Holland's death in 1934, museum staff placed a cast of a Camarasaurus skull on the mount. While most other museums were using cast or sculpted Camarasaurus skulls on Apatosaurus mounts, the Yale Peabody Museum decided to sculpt a skull based on the lower jaw of a Camarasaurus, with the cranium based on Marsh's 1891 illustration of the skull. The skull also included forward-pointing nasalssomething unusual for any dinosaurand fenestrae differing from both the drawing and other skulls. No Apatosaurus skull was mentioned in literature until the 1970s when John Stanton McIntosh and David Berman redescribed the skulls of Diplodocus and Apatosaurus. They found that though he never published his opinion, Holland was almost certainly correct, that Apatosaurus had a Diplodocus-like skull. According to them, many skulls long thought to pertain to Diplodocus might instead be those of Apatosaurus. They reassigned multiple skulls to Apatosaurus based on associated and closely associated vertebrae. Even though they supported Holland, it was noted that Apatosaurus might have possessed a Camarasaurus-like skull, based on a disarticulated Camarasaurus-like tooth found at the precise site where an Apatosaurus specimen was found years before. On October20, 1979, after the publications by McIntosh and Berman, the first true skull of Apatosaurus was mounted on a skeleton in a museum, that of the Carnegie. In 1998, it was suggested that the Felch Quarry skull that Marsh had included in his 1896 skeletal restoration instead belonged to Brachiosaurus. This was supported in 2020 with a redescription of the brachiosaurid material found at the Felch Quarry. Recent discoveries and reassessment In 2011, the first specimen of Apatosaurus where a skull was found articulated with its cervical vertebrae was described. This specimen, CMCVP7180, was found to differ in both skull and neck features from A.louisae, but shared many features of the cervical vertebrae with A.ajax. Another well-preserved skull is Brigham Young University specimen 17096, a well-preserved skull and skeleton, with a preserved braincase. The specimen was found in Cactus Park Quarry in western Colorado. In 2013, Matthew Mossbrucker and several other authors published an abstract that described a premaxilla and maxilla from Lakes' original quarry in Morrison and referred the material to Apatosaurus ajax. 
Almost all modern paleontologists agreed with Riggs that the two dinosaurs should be classified together in a single genus. According to the rules of the ICZN (which governs the scientific names of animals), the name Apatosaurus, having been published first, has priority as the official name; Brontosaurus was considered a junior synonym and was therefore long discarded from formal use. Despite this, at least one paleontologistRobert T. Bakkerargued in the 1990s that A.ajax and A.excelsus were in fact sufficiently distinct for the latter to merit a separate genus. In 2015, Emanuel Tschopp, Octávio Mateus, and Roger Benson released a paper on diplodocoid systematics, and proposed that genera could be diagnosed by thirteen differing characters, and species separated based on six. The minimum number for generic separation was chosen based on the fact that A.ajax and A.louisae differ in twelve characters, and Diplodocus carnegiei and D.hallorum differ in eleven characters. Thus, thirteen characters were chosen to validate the separation of genera. The six differing features for specific separation were chosen by counting the number of differing features in separate specimens generally agreed to represent one species, with only one differing character in D.carnegiei and A.louisae, but five differing features in B.excelsus. Therefore, Tschopp etal. argued that Apatosaurus excelsus, originally classified as Brontosaurus excelsus, had enough morphological differences from other species of Apatosaurus that it warranted being reclassified as a separate genus again. The conclusion was based on a comparison of 477 morphological characteristics across 81 different dinosaur individuals. Among the many notable differences are the widerand presumably strongerneck of Apatosaurus species compared to B.excelsus. Other species previously assigned to Apatosaurus, such as Elosaurus parvus and Eobrontosaurus yahnahpin were also reclassified as Brontosaurus. Some features proposed to separate Brontosaurus from Apatosaurus include: posterior dorsal vertebrae with the centrum longer than wide; the scapula rear to the acromial edge and the distal blade being excavated; the acromial edge of the distal scapular blade bearing a rounded expansion; and the ratio of the proximodistal length to transverse breadth of the astragalus 0.55 or greater. Sauropod expert Michael D'Emic pointed out that the criteria chosen were to an extent arbitrary and that they would require abandoning the name Brontosaurus again if newer analyzes obtained different results. Mammal paleontologist Donald Prothero criticized the mass media reaction to this study as superficial and premature, concluding that he would keep "Brontosaurus" in quotes and not treat the name as a valid genus. Valid species Many species of Apatosaurus have been designated from scant material. Marsh named as many species as he could, which resulted in many being based upon fragmentary and indistinguishable remains. In 2005, Paul Upchurch and colleagues published a study that analyzed the species and specimen relationships of Apatosaurus. They found that A.louisae was the most basal species, followed by FMNHP25112, and then a polytomy of A.ajax, A.parvus, and A.excelsus. Their analysis was revised and expanded with many additional diplodocid specimens in 2015, which resolved the relationships of Apatosaurus slightly differently, and also supported separating Brontosaurus from Apatosaurus. Apatosaurus ajax was named by Marsh in 1877 after Ajax, a hero from Greek mythology. 
Marsh designated the incomplete, juvenile skeleton YPM1860 as its holotype. The species is less studied than Brontosaurus and A.louisae, especially because of the incomplete nature of the holotype. In 2005, many specimens in addition to the holotype were found assignable to A.ajax, YPM1840, NSMT-PV 20375, YPM1861, and AMNH460. The specimens date from the late Kimmeridgian to the early Tithonian ages. In 2015, only the A.ajax holotype YPM1860 assigned to the species, with AMNH460 found either to be within Brontosaurus, or potentially its own taxon. However, YPM1861 and NSMT-PV 20375 only differed in a few characteristics, and cannot be distinguished specifically or generically from A.ajax. YPM1861 is the holotype of "Atlantosaurus" immanis, which means it might be a junior synonym of A.ajax. Apatosaurus louisae was named by Holland in 1916, being first known from a partial skeleton that was found in Utah. The holotype is CM3018, with referred specimens including CM3378, CM11162, and LACM52844. The former two consist of a vertebral column; the latter two consist of a skull and a nearly complete skeleton, respectively. Apatosaurus louisae specimens all come from the late Kimmeridgian of Dinosaur National Monument. In 2015, Tschopp etal. found the type specimen of Apatosaurus laticollis to nest closely with CM3018, meaning the former is likely a junior synonym of A.louisae. The cladogram below is the result of an analysis by Tschopp, Mateus, and Benson (2015). The authors analyzed most diplodocid type specimens separately to deduce which specimen belonged to which species and genus. Reassigned species Apatosaurus grandis was named in 1877 by Marsh in the article that described A.ajax. It was briefly described, figured, and diagnosed. Marsh later mentioned it was only provisionally assigned to Apatosaurus when he reassigned it to his new genus Morosaurus in 1878. Since Morosaurus has been considered a synonym of Camarasaurus, C.grandis is the oldest-named species of the latter genus. Apatosaurus excelsus was the original type species of Brontosaurus, first named by Marsh in 1879. Elmer Riggs reclassified Brontosaurus as a synonym of Apatosaurus in 1903, transferring the species B.excelsus to A.excelsus. In 2015, Tschopp, Mateus, and Benson argued that the species was distinct enough to be placed in its own genus, so they reclassified it back into Brontosaurus. Apatosaurus parvus, first described from a juvenile specimen as Elosaurus in 1902 by Peterson and Gilmore, was reassigned to Apatosaurus in 1994, and then to Brontosaurus in 2015. Many other, more mature specimens were assigned to it following the 2015 study. Apatosaurus minimus was originally described as a specimen of Brontosaurus sp. in 1904 by Osborn. In 1917, Henry Mook named it as its own species, A.minimus, for a pair of ilia and their sacrum. In 2012, Mike P. Taylor and Matt J. Wedel published a short abstract describing the material of A. minimus, finding it hard to place among either Diplodocoidea or Macronaria. While it was placed with Saltasaurus in a phylogenetic analysis, it was thought to represent instead some form with convergent features from many groups. The study of Tschopp etal. did find that a camarasaurid position for the taxon was supported, but noted that the position of the taxon was found to be highly variable and there was no clearly more likely position. Apatosaurus alenquerensis was named in 1957 by Albert-Félix de Lapparent and Georges Zbyweski. It was based on post cranial material from Portugal. 
In 1990, this material was reassigned to Camarasaurus, but in 1998 it was given its own genus, Lourinhasaurus. This was further supported by the findings of Tschopp etal. in 2015, where Lourinhasaurus was found to be sister to Camarasaurus and other camarasaurids. Apatosaurus yahnahpin was named by James Filla and Patrick Redman in 1994. Bakker made A.yahnahpin the type species of a new genus, Eobrontosaurus in 1998, and Tschopp reclassified it as Brontosaurus yahnahpin in 2015. Classification Apatosaurus is a member of the family Diplodocidae, a clade of gigantic sauropod dinosaurs. The family includes some of the longest creatures ever to walk the earth, including Diplodocus, Supersaurus, and Barosaurus. Apatosaurus is sometimes classified in the subfamily Apatosaurinae, which may also include Suuwassea, Supersaurus, and Brontosaurus. Othniel Charles Marsh described Apatosaurus as allied to Atlantosaurus within the now-defunct group Atlantosauridae. In 1878, Marsh raised his family to the rank of suborder, including Apatosaurus, Atlantosaurus, Morosaurus (=Camarasaurus) and Diplodocus. He classified this group within Sauropoda, a group he erected in the same study. In 1903, Elmer S. Riggs said the name Sauropoda would be a junior synonym of earlier names; he grouped Apatosaurus within Opisthocoelia. Sauropoda is still used as the group name. In 2011, John Whitlock published a study that placed Apatosaurus a more basal diplodocid, sometimes less basal than Supersaurus. Cladogram of the Diplodocidae after Tschopp, Mateus, and Benson (2015). Paleobiology It was believed throughout the 19th and early 20th centuries that sauropods like Apatosaurus were too massive to support their own weight on dry land. It was theorized that they lived partly submerged in water, perhaps in swamps. More recent findings do not support this; sauropods are now thought to have been fully terrestrial animals. A study of diplodocid snouts showed that the square snout, large proportion of pits, and fine, subparallel scratches of the teeth of Apatosaurus suggests it was a ground-height, nonselective browser. It may have eaten ferns, cycadeoids, seed ferns, horsetails, and algae. Stevens and Parish (2005) speculate that these sauropods fed from riverbanks on submerged water plants. A 2015 study of the necks of Apatosaurus and Brontosaurus found many differences between them and other diplodocids, and that these variations may have shown that the necks of Apatosaurus and Brontosaurus were used for intraspecific combat. Various uses for the single claw on the forelimb of sauropods have been proposed. One suggestion is that they were used for defense, but their shape and size make this unlikely. It was also possible they were for feeding, but the most probable use for the claw was grasping objects such as tree trunks when rearing. Trackways of sauropods like Apatosaurus show that they may have had a range of around per day, and that they could potentially have reached a top speed of per hour. The slow locomotion of sauropods may be due to their minimal muscling, or to recoil after strides. A trackway of a juvenile has led some to believe that they were capable of bipedalism, though this is disputed. Neck posture Diplodocids like Apatosaurus are often portrayed with their necks held high up in the air, allowing them to browse on tall trees. 
Some studies state diplodocid necks were less flexible than previously believed, because the structure of the neck vertebrae would not have allowed the neck to bend far upward, and that sauropods like Apatosaurus were adapted to low browsing or ground feeding. Other studies by Taylor find that all tetrapods appear to hold their necks at the maximum possible vertical extension when in a normal, alert posture; they argue the same would hold true for sauropods barring any unknown, unique characteristics that set the soft tissue anatomy of their necks apart from that of other animals. Apatosaurus, like Diplodocus, would have held its neck angled upward with the head pointing downward in a resting posture. Kent Stevens and Michael Parrish (1999 and 2005) state Apatosaurus had a great feeding range; its neck could bend into a U-shape laterally. The neck's range of movement would have also allowed the head to feed at the level of the feet. Matthew Cobley et al. (2013) dispute this, finding that large muscles and cartilage would have limited movement of the neck. They state the feeding ranges for sauropods like Diplodocus were smaller than previously believed, and the animals may have had to move their whole bodies around to better access areas where they could browse vegetation. As such, they might have spent more time foraging to meet their minimum energy needs. The conclusions of Cobley etal. are disputed by Taylor, who analyzed the amount and positioning of intervertebral cartilage to determine the flexibility of the neck of Apatosaurus and Diplodocus. He found that the neck of Apatosaurus was very flexible. Physiology Given the large body mass and long neck of sauropods like Apatosaurus, physiologists have encountered problems determining how these animals breathed. Beginning with the assumption that, like crocodilians, Apatosaurus did not have a diaphragm, the dead-space volume (the amount of unused air remaining in the mouth, trachea, and air tubes after each breath) has been estimated at for a specimen. Paladino calculates its tidal volume (the amount of air moved in or out during a single breath) at with an avian respiratory system, if mammalian, and if reptilian. On this basis, its respiratory system would likely have been parabronchi, with multiple pulmonary air sacs as in avian lungs, and a flow-through lung. An avian respiratory system would need a lung volume of about compared with a mammalian requirement of , which would exceed the space available. The overall thoracic volume of Apatosaurus has been estimated at , allowing for a , four-chambered heart and a lung capacity. That would allow about for the necessary tissue. Evidence for the avian system in Apatosaurus and other sauropods is also present in the pneumaticity of the vertebrae. Though this plays a role in reducing the weight of the animal, Wedel (2003) states they are also likely connected to air sacs, as in birds. James Spotila et al. (1991) concludes that the large body size of sauropods would have made them unable to maintain high metabolic rates because they would not have been able to release enough heat. They assumed sauropods had a reptilian respiratory system. Wedel says that an avian system would have allowed it to dump more heat. Some scientists state that the heart would have had trouble sustaining sufficient blood pressure to oxygenate the brain. 
Others suggest that the near-horizontal posture of the head and neck would have eliminated the problem of supplying blood to the brain because it would not have been elevated. James Farlow (1987) calculates that an Apatosaurus-sized dinosaur about would have possessed of fermentation contents, though he cautions that the regression equation being used is based on living mammals which are much smaller and physiologically different. Assuming Apatosaurus had an avian respiratory system and a reptilian resting-metabolism, Frank Paladino etal. (1997) estimate the animal would have needed to consume only about of water per day. Growth A 1999 microscopic study of Apatosaurus and Brontosaurus bones concluded the animals grew rapidly when young and reached near-adult sizes in about 10years. In 2008, a study on the growth rates of sauropods was published by Thomas Lehman and Holly Woodward. They said that by using growth lines and length-to-mass ratios, Apatosaurus would have grown to 25t (25 long tons; 28 short tons) in 15years, with growth peaking at in a single year. An alternative method, using limb length and body mass, found Apatosaurus grew per year, and reached its full mass before it was about 70years old. These estimates have been called unreliable because the calculation methods are not sound; old growth lines would have been obliterated by bone remodeling. One of the first identified growth factors of Apatosaurus was the number of sacral vertebrae, which increased to five by the time of the creature's maturity. This was first noted in 1903 and again in 1936. Long-bone histology enables researchers to estimate the age that a specific individual reached. A study by Eva Griebeler etal. (2013) examined long-bone histological data and concluded the Apatosaurus sp.SMA0014 weighed , reached sexual maturity at 21years, and died aged 28. The same growth model indicated Apatosaurus sp.BYU 601–17328 weighed , reached sexual maturity at 19years, and died aged 31. Juveniles Compared with most sauropods, a relatively large amount of juvenile material is known from Apatosaurus. Multiple specimens in the OMNH are from juveniles of an undetermined species of Apatosaurus; this material includes partial shoulder and pelvic girdles, some vertebrae, and limb bones. OMNH juvenile material is from at least two different age groups and based on overlapping bones likely comes from more than three individuals. The specimens exhibit features that distinguish Apatosaurus from its relatives, and thus likely belong to the genus. Juvenile sauropods tend to have proportionally shorter necks and tails, and a more pronounced forelimb-hindlimb disparity than found in adult sauropods. Tail An article published in 1997 reported research of the mechanics of Apatosaurus tails by Nathan Myhrvold and paleontologist Philip J. Currie. Myhrvold carried out a computer simulation of the tail, which in diplodocids like Apatosaurus was a very long, tapering structure resembling a bullwhip. This computer modeling suggested diplodocids were capable of producing a whiplike cracking sound of over 200 decibels, comparable to the volume of a cannon being fired. A pathology has been identified on the tail of Apatosaurus, caused by a growth defect. Two caudal vertebrae are seamlessly fused along the entire articulating surface of the bone, including the arches of the neural spines. This defect might have been caused by the lack or inhibition of the substance that forms intervertebral disks or joints. 
It has been proposed that the whips could have been used in combat and defense, but the tails of diplodocids were quite light and narrow compared to Shunosaurus and mamenchisaurids, and thus to injure another animal with the tail would severely injure the tail itself. More recently, Baron (2020) considers the use of the tail as a bullwhip unlikely because of the potentially catastrophic muscle and skeletal damage such speeds could cause on the large and heavy tail. Instead, he proposes that the tails might have been used as a tactile organ to keep in touch with the individuals behind and on the sides in a group while migrating, which could have augmented cohesion and allowed communication among individuals while limiting more energetically demanding activities like stopping to search for dispersed individuals, turning to visually check on individuals behind, or communicating vocally. Paleoecology The Morrison Formation is a sequence of shallow marine and alluvial sediments which, according to radiometric dating, dates from between 156.3mya at its base, and 146.8mya at the top, placing it in the late Oxfordian, Kimmeridgian, and early Tithonian stages of the Late Jurassic period. This formation is interpreted as originating in a locally semiarid environment with distinct wet and dry seasons. The Morrison Basin, where dinosaurs lived, stretched from New Mexico to Alberta and Saskatchewan; it was formed when the precursors to the Front Range of the Rocky Mountains started pushing up to the west. The deposits from their east-facing drainage basins were carried by streams and rivers and deposited in swampy lowlands, lakes, river channels, and floodplains. This formation is similar in age to the Lourinhã Formation in Portugal and the Tendaguru Formation in Tanzania. Apatosaurus was the second most common sauropod in the Morrison Formation ecosystem, after Camarasaurus. Apatosaurus may have been more solitary than other Morrison Formation dinosaurs. Fossils of the genus have only been found in the upper levels of the formation. Those of Apatosaurus ajax are known exclusively from the upper Brushy Basin Member, about 152–151 mya. A.louisae fossils are rare, known only from one site in the upper Brushy Basin Member; they date to the late Kimmeridgian stage, about 151mya. Additional Apatosaurus remains are known from similarly aged or slightly younger rocks, but they have not been identified as any particular species, and thus may instead belong to Brontosaurus. The Morrison Formation records a time when the local environment was dominated by gigantic sauropod dinosaurs. Dinosaurs known from the Morrison Formation include the theropods Allosaurus, Ceratosaurus, Ornitholestes, and Torvosaurus; the sauropods Brontosaurus, Brachiosaurus, Camarasaurus, and Diplodocus; and the ornithischians Camptosaurus, Dryosaurus, and Stegosaurus. Apatosaurus is commonly found at the same sites as Allosaurus, Camarasaurus, Diplodocus, and Stegosaurus. Allosaurus accounted for 70–75% of theropod specimens and was at the top trophic level of the Morrison food web. Many of the dinosaurs of the Morrison Formation are of the same genera as those seen in Portuguese rocks of the Lourinhã Formationmainly Allosaurus, Ceratosaurus, and Torvosaurusor have a close counterpartBrachiosaurus and Lusotitan, Camptosaurus and Draconyx, and Apatosaurus and Dinheirosaurus. 
Other vertebrates that are known to have shared this paleo-environment include ray-finned fishes, frogs, salamanders, turtles, sphenodonts, lizards, terrestrial and aquatic crocodylomorphs, and several species of pterosaur. Shells of bivalves and aquatic snails are also common. The flora of the period has been evidenced in fossils of green algae, fungi, mosses, horsetails, cycads, ginkgoes, and several families of conifers. Vegetation varied from river-lining forests of tree ferns with fern understory (gallery forests), to fern savannas with occasional trees such as the Araucaria-like conifer Brachyphyllum.
Biology and health sciences
Dinosaurs and prehistoric reptiles
null
1347
https://en.wikipedia.org/wiki/Allosaurus
Allosaurus
Allosaurus () is an extinct genus of large carnosaurian theropod dinosaur that lived 155 to 145 million years ago during the Late Jurassic period (Kimmeridgian to late Tithonian ages). The name "Allosaurus" means "different lizard", alluding to its unique (at the time of its discovery) concave vertebrae. It is derived from the Greek words () ("different", "strange", or "other") and () ("lizard" or "reptile"). The first fossil remains that could definitively be ascribed to this genus were described in 1877 by famed paleontologist Othniel Charles Marsh. The genus has a very complicated taxonomy and includes at least three valid species, the best known of which is A. fragilis. The bulk of Allosaurus remains have come from North America's Morrison Formation, with material also known from the Alcobaça Formation and Lourinhã Formation in Portugal with teeth known from Germany. It was known for over half of the 20th century as Antrodemus, but a study of the abundant remains from the Cleveland-Lloyd Dinosaur Quarry returned the name "Allosaurus" to prominence. As one of the first well-known theropod dinosaurs, it has long attracted attention outside of paleontological circles. Allosaurus was a large bipedal predator for its time. Its skull was light, robust, and equipped with dozens of sharp, serrated teeth. It averaged in length for A. fragilis, with the largest specimens estimated as being long. Relative to the large and powerful legs, its three-fingered hands were small and the body was balanced by a long, muscular tail. It is classified as an allosaurid, a type of carnosaurian theropod dinosaur. As the most abundant large predator of the Morrison Formation, Allosaurus was at the top of the food chain and probably preyed on contemporaneous large herbivorous dinosaurs, with the possibility of hunting other predators. Potential prey included ornithopods, stegosaurids, and sauropods. Some paleontologists interpret Allosaurus as having had cooperative social behavior and hunting in packs, while others believe individuals may have been aggressive toward each other and that congregations of this genus are the result of lone individuals feeding on the same carcasses. Discovery and history Early discoveries and research The discovery and early study of Allosaurus is complicated by the multiplicity of names coined during the Bone Wars of the late 19th century. The first described fossil in this history was a bone obtained secondhand by Ferdinand Vandeveer Hayden in 1869. It came from Middle Park, near Granby, Colorado, probably from Morrison Formation rocks. The locals had identified such bones as "petrified horse hoofs". Hayden sent his specimen to Joseph Leidy, who identified it as half of a tail vertebra and tentatively assigned it to the European dinosaur genus Poekilopleuron as Poicilopleuron valens. He later decided it deserved its own genus, Antrodemus. Allosaurus itself is based on YPM 1930, a small collection of fragmentary bones including parts of three vertebrae, a rib fragment, a tooth, a toe bone, and (most useful for later discussions) the shaft of the right humerus (upper arm). Othniel Charles Marsh gave these remains the formal name Allosaurus fragilis in 1877. Allosaurus comes from the Greek words /, meaning "strange" or "different", and /, meaning "lizard" or "reptile". It was named 'different lizard' because its vertebrae were different from those of other dinosaurs known at the time of its discovery. 
The species epithet fragilis is Latin for "fragile", referring to lightening features in the vertebrae. The bones were collected from the Morrison Formation of Garden Park, north of Cañon City. O. C. Marsh and Edward Drinker Cope, who were in scientific competition with each other, went on to coin several other genera based on similarly sparse material that would later figure in the taxonomy of Allosaurus. These include Marsh's Creosaurus and Labrosaurus, as well as Cope's Epanterias. In their haste, Cope and Marsh did not always follow up on their discoveries (or, more commonly, those made by their subordinates). For example, after the discovery by Benjamin Mudge of the type specimen of Allosaurus in Colorado, Marsh elected to concentrate work in Wyoming. When work resumed at Garden Park in 1883, M. P. Felch found an almost complete Allosaurus and several partial skeletons. In addition, one of Cope's collectors, H. F. Hubbell, found a specimen in the Como Bluff area of Wyoming in 1879, but apparently did not mention its completeness and Cope never unpacked it. Upon unpacking it in 1903 (several years after Cope had died), it was found to be one of the most complete theropod specimens then known and the skeleton, now cataloged as AMNH 5753, was put on public view in 1908. This is the well-known mount poised over a partial Apatosaurus skeleton as if scavenging it, illustrated as such in a painting by Charles R. Knight. Although notable as the first free-standing mount of a theropod dinosaur and often illustrated and photographed, it has never been scientifically described. The multiplicity of early names complicated later research, with the situation compounded by the terse descriptions provided by Marsh and Cope. Even at the time, authors such as Samuel Wendell Williston suggested that too many names had been coined. For example, Williston pointed out in 1901 that Marsh had never been able to adequately distinguish Allosaurus from Creosaurus. The most influential early attempt to sort out the convoluted situation was produced by Charles W. Gilmore in 1920. He came to the conclusion that the tail vertebra named Antrodemus by Leidy was indistinguishable from those of Allosaurus and that Antrodemus should be the preferred name because, as the older name, it had priority. Antrodemus became the accepted name for this familiar genus for over 50 years, until James Henry Madsen published on the Cleveland-Lloyd specimens and concluded that Allosaurus should be used because Antrodemus was based on material with poor, if any, diagnostic features and locality information. For example, the geological formation that the single bone of Antrodemus came from is unknown. "Antrodemus" has been used informally for convenience when distinguishing between the skull Gilmore restored and the composite skull restored by Madsen. Cleveland-Lloyd discoveries Although sporadic work at what became known as the Cleveland-Lloyd Dinosaur Quarry in Emery County, Utah, had taken place as early as 1927 and the fossil site itself described by William L. Stokes in 1945, major operations did not begin there until 1960. Under a cooperative effort involving nearly 40 institutions, thousands of bones were recovered between 1960 and 1965, led by James Henry Madsen. The quarry is notable for the predominance of Allosaurus remains, the condition of the specimens, and the lack of scientific resolution on how it came to be. 
The majority of bones belong to the large theropod Allosaurus fragilis (it is estimated that the remains of at least 46 A. fragilis have been found there, out of at a minimum 73 dinosaurs) and the fossils found there are disarticulated and well-mixed. Nearly a dozen scientific papers have been written on the taphonomy of the site, suggesting numerous mutually exclusive explanations for how it may have formed. Suggestions have ranged from animals getting stuck in a bog, becoming trapped in deep mud, falling victim to drought-induced mortality around a waterhole, and getting trapped in a spring-fed pond or seep. Regardless of the actual cause, the great quantity of well-preserved Allosaurus remains has allowed this genus to be known in great detail, making it among the best-known of all theropods. Skeletal remains from the quarry pertain to individuals of almost all ages and sizes, from less than to long, and the disarticulation is an advantage for describing bones usually found fused. Due to being one of Utah's two fossil quarries where numerous Allosaurus specimens have been discovered, Allosaurus was designated as the state fossil of Utah in 1988. Modern study The period since Madsen's monograph has been marked by a great expansion in studies dealing with topics concerning Allosaurus in life (paleobiological and paleoecological topics). Such studies have covered topics including skeletal variation, growth, skull construction, hunting methods, the brain, and the possibility of gregarious living and parental care. Reanalysis of old material (particularly of large 'allosaur' specimens), new discoveries in Portugal, and several very complete new specimens have also contributed to the growing knowledge base. "Big Al" and "Big Al II" In 1991, "Big Al" (MOR 693), a 95% complete, partially articulated specimen of Allosaurus was discovered, measuring about long. MOR 693 was excavated near Shell, Wyoming, by a joint Museum of the Rockies and University of Wyoming Geological Museum team. This skeleton was discovered by a Swiss team, led by Kirby Siber. Chure and Loewen in 2020 identified the individual as a representative of the species A. jimmadseni. In 1996, the same team discovered a second Allosaurus, "Big Al II". This specimen, the best preserved skeleton of its kind to date, is also referred to A. jimmadseni. The completeness, preservation, and scientific importance of this skeleton gave "Big Al" its name. The individual itself was below the average size for Allosaurus fragilis, as it was a subadult estimated at only 87% grown. The specimen was described by Breithaupt in 1996. Nineteen of its bones were broken or showed signs of serious infection, which may have contributed to "Big Al's" death. Pathologic bones included five ribs, five vertebrae, and four bones of the feet. Several of its damaged bones showed signs of osteomyelitis, a severe bone infection. A particular problem for the living animal was infection and trauma to the right foot that probably affected movement and may have also predisposed the other foot to injury because of a change in gait. "Big Al" had an infection on the first phalanx on the third toe that was afflicted by an involucrum. The infection was long-lived, perhaps up to six months. "Big Al II" is also known to have multiple injuries. Portuguese discoveries In 1988, during construction works of a warehouse, a skeleton of a large theropod was discovered near the village of Andrés, Leiria District, Portugal. 
The Andrés quarry is included in the Bombarral Formation ("Grés Superiores"). The lower part of this formation is diachronic with the Alcobaça Formation in the northern Lusitanian Basin, and is dated to the Early Tithonian. This specimen was reported in 1999 as the first occurrence of Allosaurus fragilis outside North America. The specimen, labelled MNHNUL/AND.001, is deposited in the National Museum of Natural History and Science, Lisbon. It consists of a partial skeleton, composed of an incomplete right quadrate, several vertebrae and chevrons, several dorsal ribs and gastralia, a partial pelvis, most of the hind limbs and several indeterminate fragments. In 2003, Miguel Telles Antunes and Octávio Mateus published a review of the dinosaurs from Portugal, where they assigned the Andrés specimen to Allosaurus sp. The Guimarota coal mine in Leiria, Portugal, produced abundant remains of micro-vertebrates while it was being explored. The Guimarota beds belong to the Alcobaça Formation, and are dated to the Late Kimmeridgian. In 2005, Oliver Rauhut and Regina Fechner described the right maxilla of a juvenile theropod (IPFUB Gui Th 4) from the Guimarota mine, which was stored in the collections of the Institute of Geological Sciences of the Free University of Berlin. They attributed the maxilla to Allosaurus sp. based on the large maxillary fenestra and coeval presence of the other Allosaurus specimens. This specimen allowed the authors to conclude that the development of paranasal pneumaticity in theropods is heterochronic, with juveniles having more pronounced pneumaticity than adults. In 2006, a new species of Allosaurus, A. europaeus, was reported based on a specimen found on a beach near Vale Frades, Lourinhã, Portugal. The specimen, labelled ML415, is deposited in the Lourinhã Museum, and consists of a partial skull, three cervical vertebrae and cervical ribs. It was found in rocks of the Praia Azul Member of the Lourinhã Formation, which in that sector is dated to the Early Tithonian. In 2005, the Andrés quarry was reactivated for further prospection, which yielded remains of a diverse vertebrate fauna and new Allosaurus remains. These new remains (such as a partial right frontal, MNHNUL/AND.001/062), along with further preparation of the original Andrés specimen, allowed for a more detailed comparison with other Allosaurus species. The authors concluded that the Andrés specimen is compatible with the diagnosis of A. fragilis, and also disputed the attribution of the Vale Frades specimen to a new species, claiming that the autapomorphies proposed in the diagnosis of A. europaeus can be explained by individual variation. In 2010, new Allosaurus elements from the Andrés quarry were reported, including new cranial remains such as a right quadrate-quadratojugal, two lacrimals, a right dentary, a right frontal, the posterior end of the right mandible and a complete braincase. A second complete left ilium suggests the presence of a second Allosaurus individual in the quarry, larger than the first. The authors once again claimed that A. europaeus should be considered a nomen dubium until a more detailed description of the Vale Frades specimen is published. A detailed description of the remains of the Andrés specimen was published in the doctoral thesis of Elisabete Malafaia.
The remains were collected between 1988 and 2010, and include cranial elements (such as the maxilla, nasal, lacrimals, prefrontal, postorbitals, frontals, palatines, quadrate, quadratojugal, squamosal, vomer, braincase, articular, surangulars, prearticular, angulars, supradentary and coronoid, and isolated mesial and lateral teeth) and postcranial elements (intercentrum of the atlas, dorsal, sacral and caudal vertebrae, cervical and dorsal ribs, chevrons, coracoid, ilium, pubes, femora, tibiae, fibulae, astragalus and calcaneum, distal tarsal III, second, third, and fourth metatarsals, and several phalanges). Duplicate elements reported in the thesis include the previously mentioned left ilium, a fragmentary pubic peduncle in articulation with the pubes, and a right frontal, caudal vertebra, and pedal phalanges of a third, much smaller individual. The author claimed that the Andrés specimens present noticeable differences from both A. fragilis and the type specimen of A. europaeus, but tentatively assigned them to Allosaurus cf. europaeus, pending the discovery of more specimens that would allow comparison between the two. In 2024, Burigo and Mateus published a redescription and revised diagnosis of the Vale Frades specimen. The authors reported new elements, such as the atlas-axis, coronoid, new teeth and rib fragments, and confirmed the validity of the species. A specimen-level phylogenetic analysis using scored cranial characters was performed. The authors claimed that the Andrés specimen is attributable to A. europaeus, and that A. europaeus is more closely related to A. jimmadseni than to A. fragilis.
Species
Seven species of Allosaurus have been named: A. anax, A. amplus, A. atrox, A. europaeus, the type species A. fragilis, A. jimmadseni and A. lucasi. Among these (excluding A. anax, which was named in 2024), Daniel Chure and Mark Loewen in 2020 only recognized the species A. fragilis, A. europaeus, and the newly-named A. jimmadseni as being valid species. Some studies have suggested that A. europaeus does not show any unique characters compared to the North American species, though other authors have suggested that the species is valid and has a number of distinguishing characters. A. fragilis is the type species and was named by Marsh in 1877. It is known from the remains of at least 60 individuals, all found in the Kimmeridgian–Tithonian Upper Jurassic-age Morrison Formation of the United States, spread across Colorado, Montana, New Mexico, Oklahoma, South Dakota, Utah, and Wyoming. Details of the humerus (upper arm) of A. fragilis have been used as diagnostic among Morrison theropods, but A. jimmadseni indicates that this is no longer the case at the species level. A. jimmadseni has been scientifically described based on two nearly complete skeletons. The first specimen assigned to the species was unearthed in Dinosaur National Monument in northeastern Utah, with the original "Big Al" individual subsequently recognized as belonging to the same species. This species differs from A. fragilis in several anatomical details, including a jugal (cheekbone) with a straight lower margin. Fossils are confined to the Salt Wash Member of the Morrison Formation, with A. fragilis only found in the higher Brushy Basin Member. The specific name jimmadseni honors Madsen for his contributions to the taxonomy of the genus, notably his 1976 work. A. fragilis, A. jimmadseni, A. anax, A. amplus, and A.
lucasi are all known from remains discovered in the Kimmeridgian–Tithonian Upper Jurassic-age Morrison Formation of the United States, spread across Colorado, Montana, New Mexico, Oklahoma, South Dakota, Utah and Wyoming. A. fragilis is regarded as the most common, known from the remains of at least 60 individuals. For a while in the late 1980s and early 1990s, it was common to recognize A. fragilis as the short-snouted species, with the long-snouted taxon being A. atrox. However, subsequent analysis of specimens from the Cleveland-Lloyd Dinosaur Quarry, Como Bluff, and Dry Mesa Quarry showed that the differences seen in the Morrison Formation material could be attributed to individual variation. A study of skull elements from the Cleveland-Lloyd site found wide variation between individuals, calling into question previous species-level distinctions based on such features as the shape of the lacrimal horns and the proposed differentiation of A. jimmadseni based on the shape of the jugal. A. anax was described and named in 2024 from several fossils representing various skeleton parts, the holotype being a postorbital numbered as OMNH 1771. This species is characterized by the lack of rugose ornamentation on the postorbital, the dorsal vertebrae with hourglass-shaped centra and pneumatic foramina, and other features of the postorbital, cervical vertebrae, and fibula. The specific name comes from the Ancient Greek ἄναξ (anax, "king", "lord" or "tribal chief"), and is intended to be an updated reference to the now dubious saurischian genus Saurophaganax, to which the fossils were previously attributed. The Allosaurus material from Portugal has a controversial taxonomic research history. The Andrés Allosaurus specimens, consisting of very complete cranial and post-cranial remains, have been attributed to A. fragilis, A. sp, A. europaeus and A. cf. europaeus. The Vale Frades Allosaurus, consisting of a partial skull and cervical vertebrae and ribs, is the type specimen of A. europaeus, although the validity of that species has been previously questioned. In 2024, a revised diagnosis of A. europaeus was published, confirming the validity of the species. The specific affinities of the Andrés specimens are still unclear. The issue of species and potential synonyms was historically complicated by the type specimen of Allosaurus fragilis (YPM 1930) being extremely fragmentary, consisting of a few incomplete vertebrae, limb fragments, rib fragments, and a single tooth. Because of this, several scientists have interpreted the type specimen as potentially dubious, meaning the genus Allosaurus itself or at least the species A. fragilis would be a nomen dubium ("dubious name", based on a specimen too incomplete to compare to other specimens or to classify). To address this situation, Gregory S. Paul and Kenneth Carpenter (2010) submitted a petition to the ICZN to have the name A. fragilis officially transferred to the more complete specimen USNM4734 (as a neotype), a decision that was ratified by the ICZN on December 29, 2023. Teeth of indeterminate species of Allosaurus have been reported from Tönniesberg and Kahlberg in Saxony, Germany, dating to the upper Kimmeridigian. Synonyms Creosaurus, Epanterias, and Labrosaurus are regarded as junior synonyms of Allosaurus. Most of the species that are regarded as synonyms of A. fragilis, or that were misassigned to the genus, are obscure and based on very scrappy remains. 
One exception is Labrosaurus ferox, named in 1884 by Marsh for an oddly formed partial lower jaw, with a prominent gap in the tooth row at the tip of the jaw, and a rear section greatly expanded and turned down. Later researchers suggested that the bone was pathologic, showing an injury to the living animal, and that part of the unusual form of the rear of the bone was due to plaster reconstruction. It is now regarded as an example of A. fragilis. In his 1988 book, Predatory Dinosaurs of the World, the freelance artist & author Gregory S. Paul proposed that A. fragilis had tall pointed horns and a slender build compared to a postulated second species A. atrox, as well as not being a different sex due to rarity. Allosaurus atrox was originally named by Marsh in 1878 as the type species of its own genus, Creosaurus, and is based on YPM 1890, an assortment of bones that includes a couple of pieces of the skull, portions of nine tail vertebrae, two hip vertebrae, an ilium, and ankle and foot bones. Although the idea of two common Morrison allosaur species was followed in some semi-technical and popular works, the 2000 thesis on Allosauridae noted that Charles Gilmore mistakenly reconstructed USNM 4734 as having a shorter skull than the specimens referred by Paul to atrox, refuting supposed differences between USNM 4734 and putative A. atrox specimens like DINO 2560, AMNH 600, and AMNH 666. "Allosaurus agilis", seen in Zittel, 1887, and Osborn, 1912, is a typographical error for A. fragilis. "Allosaurus ferox" is a typographical error by Marsh for A. fragilis in a figure caption for the partial skull YPM 1893 and YPM 1893 has been treated as a specimen of A fragilis. Likewise, "Labrosaurus fragilis" is a typographical error by Marsh (1896) for Labrosaurus ferox. "A. whitei" is a nomen nudum coined by Pickering in 1996 for the complete Allosaurus specimens that Paul referred to A. atrox. "Madsenius" was coined by David Lambert in 1990, being based on remains from Dinosaur National Monument assigned to Allosaurus or Creosaurus (a synonym of Allosaurus), and was to be described by paleontologist Robert Bakker as "Madsenius trux". However, "Madsenius" is now seen as yet another synonym of Allosaurus because Bakker's action was predicated upon the false assumption of USNM 4734 being distinct from long-snouted Allosaurus due to errors in Gilmore's 1920 reconstruction of USNM 4734. "Wyomingraptor" was informally coined by Bakker for allosaurid remains from the Morrison Formation of the Late Jurassic. The remains unearthed are labeled as Allosaurus and are housed in the Tate Geological Museum. However, there has been no official description of the remains and "Wyomingraptor" has been dismissed as a nomen nudum, with the remains referable to Allosaurus. Formerly assigned species and fossils Several species initially classified within or referred to Allosaurus do not belong within the genus. A. medius was named by Marsh in 1888 for various specimens from the Early Cretaceous Arundel Formation of Maryland, although most of the remains were removed by Richard Swann Lull to the new ornithopod species Dryosaurus grandis, except for a tooth. It was transferred to Antrodemus by Oliver Hay in 1902, but Hay later clarified that this was an inexplicable error on his part. Gilmore considered the tooth nondiagnostic but transferred it to Dryptosaurus, as D. medius. The referral was not accepted in the most recent review of basal tetanurans, and Allosaurus medius was simply listed as a dubious species of theropod. 
It may be closely related to Acrocanthosaurus. Allosaurus valens is a new combination for Antrodemus valens used by Friedrich von Huene in 1932; Antrodemus valens itself may also pertain to Allosaurus fragilis, as Gilmore suggested in 1920. A. lucaris, another Marsh name, was given to a partial skeleton in 1878. He later decided it warranted its own genus, Labrosaurus, but this has not been accepted, and A. lucaris is also regarded as another specimen of A. fragilis. Allosaurus lucaris, is known mostly from vertebrae, sharing characters with Allosaurus. Paul and Carpenter stated that the type specimen of this species, YPM 1931, was from a younger age than Allosaurus, and might represent a different genus. However, they found that the specimen was undiagnostic, and thus A. lucaris was a nomen dubium. Allosaurus sibiricus was described in 1914 by A. N. Riabinin on the basis of a bone, later identified as a partial fourth metatarsal, from the Early Cretaceous of Buryatia, Russia. It was transferred to Chilantaisaurus in 1990, but is now considered a nomen dubium indeterminate beyond Theropoda. Allosaurus meriani was a new combination by George Olshevsky for Megalosaurus meriani Greppin, 1870, based on a tooth from the Late Jurassic of Switzerland. However, a recent overview of Ceratosaurus included it in Ceratosaurus sp. Apatodon mirus, based on a scrap of vertebra Marsh first thought to be a mammalian jaw, has been listed as a synonym of Allosaurus fragilis. However, it was considered indeterminate beyond Dinosauria by Chure, and Mickey Mortimer believes that the synonymy of Apatodon with Allosaurus was due to correspondence to Ralph Molnar by John McIntosh, whereby the latter reportedly found a paper saying that Othniel Charles Marsh admitted that the Apatodon holotype was actually an allosaurid dorsal vertebra. A. amplexus was named by Gregory S. Paul for giant Morrison allosaur remains, and included in his conception Saurophagus maximus (later Saurophaganax). A. amplexus was originally coined by Cope in 1878 as the type species of his new genus Epanterias, and is based on what is now AMNH 5767, parts of three vertebrae, a coracoid, and a metatarsal. Following Paul's work, this species has been accepted as a synonym of A. fragilis. A 2010 study by Paul and Kenneth Carpenter, however, indicates that Epanterias is temporally younger than the A. fragilis type specimen, so it is a separate species at minimum. A. maximus was a new combination by David K. Smith for Chure's Saurophaganax maximus, a taxon created by Chure in 1995 for giant allosaurid remains from the Morrison of Oklahoma. These remains had been known as Saurophagus, but that name was already in use, leading Chure to propose a substitute. Smith, in his 1998 analysis of variation, concluded that S. maximus was not different enough from Allosaurus to be a separate genus, but did warrant its own species, A. maximus. This reassignment was rejected in a review of basal tetanurans. A 2024 reassessment of fossil material assigned to Saurophaganax suggested that the holotype neural arch of this taxon could not confidently be assigned to a theropod, but that it exhibited some similarities to sauropods. Other Saurophaganax bones could be referred to diplodocid sauropods. As such, the researchers assigned the remaining theropod bones to a new species of Allosaurus, A. anax. There are also several species left over from the synonymizations of Creosaurus and Labrosaurus with Allosaurus. 
Creosaurus potens was named by Lull in 1911 for a vertebra from the Early Cretaceous of Maryland. It is now regarded as a dubious theropod. Labrosaurus stechowi, described in 1920 by Janensch based on isolated Ceratosaurus-like teeth from the Tendaguru beds of Tanzania, was listed by Donald F. Glut as a species of Allosaurus, but is now considered a dubious ceratosaurian related to Ceratosaurus. L. sulcatus, named by Marsh in 1896 for a Morrison theropod tooth, is, like L. stechowi, now regarded as a dubious Ceratosaurus-like ceratosaur. A. tendagurensis was named in 1925 by Werner Janensch for a partial shin (MB.R.3620) found in the Kimmeridgian-age Tendaguru Formation in Mtwara, Tanzania. Although tabulated as a tentatively valid species of Allosaurus in the second edition of the Dinosauria, subsequent studies place it as indeterminate beyond Tetanurae, either a carcharodontosaurian or megalosaurid. Although obscure, it was a large theropod, possibly around long and in weight. Kurzanov and colleagues in 2003 designated six teeth from Siberia as Allosaurus sp. (meaning the authors found the specimens to be most like those of Allosaurus, but did not or could not assign a species to them). They were reclassified as an indeterminate theropod. Also, reports of Allosaurus in Shanxi, China, go back to at least 1982. These were interpreted as Torvosaurus remains in 2012. An astragalus (ankle bone) thought to belong to a species of Allosaurus was found at Cape Paterson, Victoria, in Early Cretaceous beds in southeastern Australia. It was thought to provide evidence that Australia was a refugium for animals that had gone extinct elsewhere. This identification was challenged by Samuel Welles, who thought it more resembled that of an ornithomimid, but the original authors defended their identification. With fifteen years of new specimens and research to look at, Daniel Chure reexamined the bone and found that it was not Allosaurus, but could represent an allosauroid. Similarly, Yoichi Azuma and Phil Currie, in their description of Fukuiraptor, noted that the bone closely resembled that of their new genus. This specimen is sometimes referred to as "Allosaurus robustus", an informal museum name. It likely belonged to something similar to Australovenator, although one study considered it to belong to an abelisaur.
Description
Allosaurus was a typical large theropod, having a massive skull on a short neck, a long, slightly sloping tail, and reduced forelimbs. Allosaurus fragilis, the best-known species, had an average length of and mass of , with the largest definitive Allosaurus specimen (AMNH 680) estimated at long, with an estimated weight of . In his 1976 monograph on Allosaurus, James H. Madsen mentioned a range of bone sizes which he interpreted to show a maximum length of . As with dinosaurs in general, weight estimates are debatable, and since 1980 have ranged between , , and approximately for modal adult weight (not maximum). John Foster, a specialist on the Morrison Formation, suggests that is reasonable for large adults of A. fragilis, but that is a closer estimate for individuals represented by the average-sized thigh bones he has measured. Using the subadult specimen nicknamed "Big Al", since assigned to the species Allosaurus jimmadseni, researchers using computer modeling arrived at a best estimate of for the individual, but by varying parameters they found a range from approximately to approximately .
A separate computational project estimated the adaptive optimum body mass in Allosaurus to be ~2,345 kg. A. europaeus has been measured up to in length and in body mass. Several gigantic specimens have been attributed to Allosaurus, but may in fact belong to other genera. The dubious genus Saurophaganax (OMNH 1708) was estimated to reach around in length, and its single species was sometimes included in the genus Allosaurus as Allosaurus maximus. However, a 2024 study concluded that some material assigned to Saurophaganax actually belonged to a diplodocid sauropod with the material confidently assigned to Allosauridae belonging to a new species of Allosaurus, A. anax, and the body mass of this species was tentatively estimated around based on fragmentary material. Another potential specimen of Allosaurus, once assigned to the genus Epanterias (AMNH 5767), may have measured in length. A more recent discovery is a partial skeleton from the Peterson Quarry in Morrison rocks of New Mexico; this large allosaurid was suggested to be a potential specimen of Saurophaganax prior to this taxon's 2024 reassessment. David K. Smith, examining Allosaurus fossils by quarry, found that the Cleveland-Lloyd Dinosaur Quarry (Utah) specimens are generally smaller than those from Como Bluff (Wyoming) or Brigham Young University's Dry Mesa Quarry (Colorado), but the shapes of the bones themselves did not vary between the sites. A later study by Smith incorporating Garden Park (Colorado) and Dinosaur National Monument (Utah) specimens found no justification for multiple species based on skeletal variation; skull variation was most common and was gradational, suggesting individual variation was responsible. Further work on size-related variation again found no consistent differences, although the Dry Mesa material tended to clump together on the basis of the astragalus, an ankle bone. Kenneth Carpenter, using skull elements from the Cleveland-Lloyd site, found wide variation between individuals, calling into question previous species-level distinctions based on such features as the shape of the horns, and the proposed differentiation of A. jimmadseni based on the shape of the jugal. A study published by Motani et al., in 2020 suggests that Allosaurus was also sexually dimorphic in the width of the femur's head against its length. Skull The skull and teeth of Allosaurus were modestly proportioned for a theropod of its size. Paleontologist Gregory S. Paul gives a length of for a skull belonging to an individual he estimates at long. Each premaxilla (the bones that formed the tip of the snout) held five teeth with D-shaped cross-sections, and each maxilla (the main tooth-bearing bones in the upper jaw) had between 14 and 17 teeth; the number of teeth does not exactly correspond to the size of the bone. Each dentary (the tooth-bearing bone of the lower jaw) had between 14 and 17 teeth, with an average count of 16. The teeth became shorter, narrower, and more curved toward the back of the skull. All of the teeth had saw-like edges. They were shed easily, and were replaced continually, making them common fossils. Its skull was light, robust and equipped with dozens of sharp, serrated teeth. The skull had a pair of horns above and in front of the eyes. These horns were composed of extensions of the lacrimal bones, and varied in shape and size. There were also lower paired ridges running along the top edges of the nasal bones that led into the horns. 
The horns were probably covered in a keratin sheath and may have had a variety of functions, including acting as sunshades for the eyes, being used for display, and being used in combat against other members of the same species (although they were fragile). There was a ridge along the back of the skull roof for muscle attachment, as is also seen in tyrannosaurids. Inside the lacrimal bones were depressions that may have held glands, such as salt glands. Within the maxillae were sinuses that were better developed than those of more basal theropods such as Ceratosaurus and Marshosaurus; they may have been related to the sense of smell, perhaps holding something like Jacobson's organs. The roof of the braincase was thin, perhaps to improve thermoregulation for the brain. The skull and lower jaws had joints that permitted motion within these units. In the lower jaws, the bones of the front and back halves loosely articulated, permitting the jaws to bow outward and increasing the animal's gape. The braincase and frontals may also have had a joint. Postcranial skeleton Allosaurus had nine vertebrae in the neck, 14 in the back, and five in the sacrum supporting the hips. The number of tail vertebrae is unknown and varied with individual size; James Madsen estimated about 50, while Gregory S. Paul considered that to be too many and suggested 45 or less. There were hollow spaces in the neck and anterior back vertebrae. Such spaces, which are also found in modern theropods (that is, the birds), are interpreted as having held air sacs used in respiration. The rib cage was broad, giving it a barrel chest, especially in comparison to less derived theropods like Ceratosaurus. Allosaurus had gastralia (belly ribs), but these are not common findings, and they may have ossified poorly. In one published case, the gastralia show evidence of injury during life. A furcula (wishbone) was also present, but has only been recognized since 1996; in some cases furculae were confused with gastralia. The ilium, the main hip bone, was massive, and the pubic bone had a prominent foot that may have been used for both muscle attachment and as a prop for resting the body on the ground. Madsen noted that in about half of the individuals from the Cleveland-Lloyd Dinosaur Quarry, independent of size, the pubes had not fused to each other at their foot ends. He suggested that this was a sexual characteristic, with females lacking fused bones to make egg-laying easier. This proposal has not attracted further attention, however. The forelimbs of Allosaurus were short in comparison to the hindlimbs (only about 35% the length of the hindlimbs in adults) and had three fingers per hand, tipped with large, strongly curved and pointed claws. The arms were powerful, and the forearm was somewhat shorter than the upper arm (1:1.2 ulna/humerus ratio). The wrist had a version of the semilunate carpal also found in more derived theropods like maniraptorans. Of the three fingers, the innermost (or thumb) was the largest, and diverged from the others. The phalangeal formula is 2-3-4-0-0, meaning that the innermost finger (phalange) has two bones, the next has three, and the third finger has four. The legs were not as long or suited for speed as those of tyrannosaurids, and the claws of the toes were less developed and more hoof-like than those of earlier theropods. Each foot had three weight-bearing toes and an inner dewclaw, which Madsen suggested could have been used for grasping in juveniles. 
There was also what is interpreted as the splint-like remnant of a fifth (outermost) metatarsal, perhaps used as a lever between the Achilles tendon and foot. Skin Skin impressions from Allosaurus have been described. One impression, from a juvenile specimen, measures 30 cm² and is associated with the anterior dorsal ribs/pectoral region. The impression shows small scales measuring 1–3 mm in diameter. A skin impression from the "Big Al Two" specimen, associated with the base of the tail, measures 20 cm x 20 cm and shows large scales measuring up to 2 cm in diameter. However, it has been noted that these scales are more similar to those of sauropods, and due to the presence of non-theropod remains associated with the tail of "Big Al Two" there is a possibility that this skin impression is not from Allosaurus. Another Allosaurus fossil features a skin impression from the mandible, showing scales measuring 1–2 mm in diameter. The same fossil also preserves skin measuring 20 x 20 cm from the ventral side of the neck, showing scutate scales measuring 0.5 cm wide and 11 cm long. A small skin impression from an Allosaurus skull has been reported but never described. Classification Allosaurus was an allosaurid, a member of a family of large theropods within the larger group Carnosauria. The family name Allosauridae was created for this genus in 1878 by Othniel Charles Marsh, but the term was largely unused until the 1970s in favor of Megalosauridae, another family of large theropods that eventually became a wastebasket taxon. This, along with the use of Antrodemus for Allosaurus during the same period, is a point that needs to be remembered when searching for information on Allosaurus in publications that predate James Madsen's 1976 monograph. Major publications using the name "Megalosauridae" instead of "Allosauridae" include Gilmore, 1920, von Huene, 1926, Romer, 1956 and 1966, Steel, 1970, and Walker, 1964. Following the publication of Madsen's influential monograph, Allosauridae became the preferred family assignment, but it too was not strongly defined. Semi-technical works used Allosauridae for a variety of large theropods, usually those that were larger and better-known than megalosaurids. Typical theropods that were thought to be related to Allosaurus included Indosaurus, Piatnitzkysaurus, Piveteausaurus, Yangchuanosaurus, Acrocanthosaurus, Chilantaisaurus, Compsosuchus, Stokesosaurus, and Szechuanosaurus. Given modern knowledge of theropod diversity and the advent of cladistic study of evolutionary relationships, none of these theropods is now recognized as an allosaurid, although several, like Acrocanthosaurus and Yangchuanosaurus, are members of closely related families. Below is a cladogram based on the analysis of Benson et al. in 2010. Allosauridae is one of four families in Allosauroidea; the other three are Neovenatoridae, Carcharodontosauridae and Sinraptoridae. Allosauridae has at times been proposed as ancestral to the Tyrannosauridae (which would make it paraphyletic), one example being Gregory S. Paul's Predatory Dinosaurs of the World, but this has been rejected, with tyrannosaurids identified as members of a separate branch of theropods, the Coelurosauria. Allosauridae is the smallest of the carnosaur families, with only Saurophaganax and a currently unnamed French allosauroid accepted as possible valid genera besides Allosaurus in the most recent review. 
Another genus, Epanterias, is a potential valid member, but it and Saurophaganax may turn out to be large examples of Allosaurus. Some reviews have kept the genus Saurophaganax and included Epanterias with Allosaurus. Saurophaganax, initially recognized as a large Allosaurus-like theropod, has had a controversial taxonomic history. In 2019, Rauhut and Pol noted that its taxonomic placement within Allosauroidea is unstable, being recovered as a sister taxon of Metriacanthosauridae or Allosauria, or even as the basalmost carcharodontosaurian. In 2024, Saurophaganax was reassessed as a dubious, chimeric taxon with the holotype being so fragmentary that it could only be confidently referred to the Saurischia, and some specimens more likely belonging to a diplodocid sauropod.
Paleobiology
Life history
The wealth of Allosaurus fossils, from nearly all ages of individuals, allows scientists to study how the animal grew and how long its lifespan may have been. Remains may reach as far back in the lifespan as eggs; crushed eggs from Colorado have been suggested as those of Allosaurus. Based on histological analysis of limb bones, bone deposition appears to stop at around 22 to 28 years, which is comparable to that of other large theropods like Tyrannosaurus. From the same analysis, its maximum growth appears to have been at age 15, with an estimated growth rate of about 150 kilograms (330 lb) per year. Medullary bone tissue (endosteally derived, ephemeral mineralization located inside the medulla of the long bones in gravid female birds) has been reported in at least one Allosaurus specimen, a shin bone from the Cleveland-Lloyd Quarry. Today, this bone tissue is only formed in female birds that are laying eggs, as it is used to supply calcium to shells. Its presence in the Allosaurus individual has been used to establish sex and show it had reached reproductive age. However, other studies have called into question some cases of medullary bone in dinosaurs, including this Allosaurus individual. Data from extant birds suggested that the medullary bone in this Allosaurus individual may have been the result of a bone pathology instead. However, with the confirmation of medullary tissue indicating sex in a specimen of Tyrannosaurus, it may be possible to ascertain whether or not the Allosaurus in question was indeed female. The discovery of a juvenile specimen with a nearly complete hindlimb shows that the legs were relatively longer in juveniles, and the lower segments of the leg (shin and foot) were relatively longer than the thigh. These differences suggest that younger Allosaurus were faster and had different hunting strategies than adults, perhaps chasing small prey as juveniles, then becoming ambush hunters of large prey upon adulthood. The thigh bone became thicker and wider during growth, and the cross-section less circular, as muscle attachments shifted, muscles became shorter, and the growth of the leg slowed. These changes imply that juvenile legs had less predictable stresses compared with adults, which would have moved with more regular forward progression. Conversely, the skull bones appear to have generally grown isometrically, increasing in size without changing in proportion.
Feeding
Most paleontologists accept Allosaurus as an active predator of large animals.
There is dramatic evidence for allosaur attacks on Stegosaurus, including an Allosaurus tail vertebra with a partially healed puncture wound that fits a Stegosaurus tail spike, and a Stegosaurus neck plate with a U-shaped wound that correlates well with an Allosaurus snout. Sauropods seem to be likely candidates as both live prey and as objects of scavenging, based on the presence of scrapings on sauropod bones fitting allosaur teeth well and the presence of shed allosaur teeth with sauropod bones. However, as Gregory Paul noted in 1988, Allosaurus was probably not a predator of fully grown sauropods, unless it hunted in packs, as it had a modestly sized skull and relatively small teeth, and was greatly outweighed by contemporaneous sauropods. Another possibility is that it preferred to hunt juveniles instead of fully grown adults. Research in the 1990s and the first decade of the 21st century may have found other solutions to this question. Robert T. Bakker, comparing Allosaurus to Cenozoic saber-toothed carnivorous mammals, found similar adaptations, such as a reduction of jaw muscles and increase in neck muscles, and the ability to open the jaws extremely wide. Although Allosaurus did not have saber teeth, Bakker suggested another mode of attack that would have used such neck and jaw adaptations: the short teeth in effect became small serrations on a saw-like cutting edge running the length of the upper jaw, which would have been driven into prey. This type of jaw would permit slashing attacks against much larger prey, with the goal of weakening the victim. Similar conclusions were drawn by another study using finite element analysis on an Allosaurus skull. According to their biomechanical analysis, the skull was very strong but had a relatively small bite force. By using jaw muscles only, it could produce a bite force of 805 to 8,724 N, but the skull could withstand nearly 55,500 N of vertical force against the tooth row. The authors suggested that Allosaurus used its skull like a machete against prey, attacking open-mouthed, slashing flesh with its teeth, and tearing it away without splintering bones, unlike Tyrannosaurus, which is thought to have been capable of damaging bones. They also suggested that the architecture of the skull could have permitted the use of different strategies against different prey; the skull was light enough to allow attacks on smaller and more agile ornithopods, but strong enough for high-impact ambush attacks against larger prey like stegosaurids and sauropods. Their interpretations were challenged by other researchers, who found no modern analogs to a hatchet attack and considered it more likely that the skull was strong to compensate for its open construction when absorbing the stresses from struggling prey. The original authors noted that Allosaurus itself has no modern equivalent, that the tooth row is well-suited to such an attack, and that articulations in the skull cited by their detractors as problematic actually helped protect the palate and lessen stress. Another possibility for handling large prey is that theropods like Allosaurus were "flesh grazers" which could take bites of flesh out of living sauropods that were sufficient to sustain the predator so it would not have needed to expend the effort to kill the prey outright. This strategy would also potentially have allowed the prey to recover and be fed upon in a similar way later. 
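As a rough illustration of the finite element figures quoted above, the following Python sketch only compares the reported structural limit of the tooth row with the upper end of the muscle-driven bite-force estimate; the "safety factor" wording is an informal shorthand for that ratio, not a term used by the study's authors.

# Rough arithmetic on the bite-force figures reported for Allosaurus (values taken from the text above).
bite_force_max_n = 8724   # upper estimate of the muscle-driven bite force, in newtons
skull_limit_n = 55500     # vertical force the tooth row could reportedly withstand, in newtons
safety_factor = skull_limit_n / bite_force_max_n
print(round(safety_factor, 1))  # ~6.4

This margin, roughly a factor of six, is the quantitative core of the observation that the skull was far stronger than its own jaw musculature required, an observation that the original authors and their critics interpreted in different ways as described above.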
An additional suggestion notes that ornithopods were the most common available dinosaurian prey, and that Allosaurus may have subdued them by using an attack similar to that of modern big cats: grasping the prey with its forelimbs, and then making multiple bites on the throat to crush the trachea. This is compatible with other evidence that the forelimbs were strong and capable of restraining prey. Studies by Stephan Lautenschlager et al. from the University of Bristol also indicate that Allosaurus could open its jaws quite wide and sustain considerable muscle force. When compared with Tyrannosaurus and the therizinosaurid Erlikosaurus in the same study, it was found that Allosaurus had a wider gape than either; the animal was capable of opening its jaws to a 92-degree angle at maximum. The findings also indicate that large carnivorous dinosaurs, like modern carnivores, had wider jaw gapes than herbivores. A biomechanical study published in 2013 by Eric Snively and colleagues found that Allosaurus had an unusually low attachment point on the skull for the longissimus capitis superficialis neck muscle compared to other theropods such as Tyrannosaurus. This would have allowed the animal to make rapid and forceful vertical movements with the skull. The authors found that vertical strikes as proposed by Bakker and Rayfield are consistent with the animal's capabilities. They also found that the animal probably processed carcasses by vertical movements in a similar manner to falcons, such as kestrels: the animal could have gripped prey with the skull and feet, then pulled back and up to remove flesh. This differs from the prey-handling envisioned for tyrannosaurids, which probably tore flesh with lateral shakes of the skull, similar to crocodilians. In addition, Allosaurus was able to "move its head and neck around relatively rapidly and with considerable control", at the cost of power. Other aspects of feeding include the eyes, arms, and legs. The shape of the skull of Allosaurus limited potential binocular vision to 20° of width, slightly less than that of modern crocodilians. As with crocodilians, this may have been enough to judge prey distance and time attacks. The arms, compared with those of other theropods, were suited for both grasping prey at a distance and clutching it close, and the articulation of the claws suggests that they could have been used to hook things. Finally, the top speed of Allosaurus has been estimated at per hour. A paper on the cranio-dental morphology of Allosaurus and how it worked has deemed the hatchet jaw attack unlikely, reinterpreting the unusually wide gape as an adaptation to allow Allosaurus to deliver a muscle-driven bite to large prey, with the weaker jaw muscles being a trade-off to allow for the widened gape. Sauropod carrion may also have been important to large theropods in the Morrison Formation. Forensic techniques indicate that sauropod carcasses were targeted by Allosaurus at all stages of decomposition, indicating that late-stage decay pathogens were not a significant deterrent. A survey of sauropod bones from the Morrison Formation also reported widespread bite marks on sauropod bones in low-economy regions, which suggests that large theropods scavenged large sauropods when available, with the scarcity of such bite marks on the remains of smaller sauropods and ornithischians being potentially attributable to more complete consumption of smaller or adolescent sauropods and of ornithischians, which would have been more commonly taken as live prey.
A single dead adult Barosaurus or Brachiosaurus would have had enough calories to sustain multiple large theropods for weeks or months, though the vast majority of the Morrison's sauropod fossil record consisted of much smaller-bodied taxa such as Camarasaurus lentus or Diplodocus. It has also been argued that disabled individuals such as Big Al and Big Al II were physically incapable of hunting due to their numerous injuries, but were able to survive nonetheless as scavengers of giant sauropod-falls. A recent review of paleopathologies in theropods may support this conclusion. The researchers found a positive association between allosaurids and fractures to the appendicular skeleton, while tyrannosaurs had a statistically negative association with these types of injuries. The fact that allosaurs were more likely to survive and heal even when severe fractures limited their locomotion abilities can be explained, in part, by different resource accessibility paradigms for the two groups, as allosauroids generally lived in sauropod-inhabited ecosystems, some of which, including the Morrison, have been interpreted as arid and highly water-stressed environments; however, the water-stressed nature of the Morrison has been heavily criticized in several more recent works on the basis of fossil evidence for the presence of extensive forest cover and aquatic ecosystems.
Social behavior
It has been speculated since the 1970s that Allosaurus preyed on sauropods and other large dinosaurs by hunting in groups. Such a depiction is common in semitechnical and popular dinosaur literature. Robert T. Bakker has extended social behavior to parental care, and has interpreted shed allosaur teeth and chewed bones of large prey animals as evidence that adult allosaurs brought food to lairs for their young to eat until they were grown, and prevented other carnivores from scavenging on the food. However, there is actually little evidence of gregarious behavior in theropods, and social interactions with members of the same species would have included antagonistic encounters, as shown by injuries to gastralia and bite wounds to skulls (the pathologic lower jaw named Labrosaurus ferox is one such possible example). Such head-biting may have been a way to establish dominance in a pack or to settle territorial disputes. Although Allosaurus may have hunted in packs, it has been argued that Allosaurus and other theropods had largely aggressive interactions instead of cooperative interactions with other members of their own species. The study in question noted that cooperative hunting of prey much larger than an individual predator, as is commonly inferred for theropod dinosaurs, is rare among vertebrates in general, and modern diapsid carnivores (including lizards, crocodiles, and birds) rarely cooperate to hunt in such a way. Instead, they are typically territorial and will kill and cannibalize intruders of the same species, and will also do the same to smaller individuals that attempt to eat before they do when aggregated at feeding sites. According to this interpretation, the accumulation of remains of multiple Allosaurus individuals at the same site (e.g., in the Cleveland–Lloyd Quarry) is not due to pack hunting, but to the fact that Allosaurus individuals were drawn together to feed on other disabled or dead allosaurs, and were sometimes killed in the process.
This could explain the high proportion of juvenile and subadult allosaurs present, as juveniles and subadults are disproportionately killed at modern group feeding sites of animals like crocodiles and Komodo dragons. The same interpretation applies to Bakker's lair sites. There is some evidence for cannibalism in Allosaurus, including shed Allosaurus teeth found among rib fragments, possible tooth marks on a shoulder blade, and cannibalized allosaur skeletons among the bones at Bakker's lair sites.
Brain and senses
The brain of Allosaurus, as interpreted from spiral CT scanning of an endocast, was more consistent with crocodilian brains than those of the other living archosaurs, birds. The structure of the vestibular apparatus indicates that the skull was held nearly horizontal, as opposed to strongly tipped up or down. The structure of the inner ear was like that of a crocodilian, indicating that Allosaurus was more adapted to hear lower frequencies and would have had difficulty hearing subtle sounds. The olfactory bulbs were large and well suited for detecting odors, but were typical for an animal of its size.
Paleopathology
In 2001, Bruce Rothschild and others published a study examining evidence for stress fractures and tendon avulsions in theropod dinosaurs and the implications for their behavior. Since stress fractures are caused by repeated trauma rather than singular events, they are more likely to be caused by the behavior of the animal than other kinds of injury. Stress fractures and tendon avulsions occurring in the forelimb have special behavioral significance since, while injuries to the feet could be caused by running or migration, resistant prey items are the most probable source of injuries to the hand. Allosaurus was one of only two theropods examined in the study to exhibit a tendon avulsion, and in both cases the avulsion occurred on the forelimb. When the researchers looked for stress fractures, they found that Allosaurus had a significantly greater number of stress fractures than Albertosaurus, Ornithomimus or Archaeornithomimus. Of the 47 hand bones the researchers studied, three were found to contain stress fractures. Of the feet, 281 bones were studied and 17 were found to have stress fractures. The stress fractures in the foot bones "were distributed to the proximal phalanges" and occurred across all three weight-bearing toes in "statistically indistinguishable" numbers. Since the lower end of the third metatarsal would have contacted the ground first while an allosaur was running, it would have borne the most stress. If the allosaurs' stress fractures were caused by damage accumulating while walking or running, this bone should have experienced more stress fractures than the others. The lack of such a bias in the examined Allosaurus fossils indicates an origin for the stress fractures from a source other than running. The authors conclude that these fractures occurred during interaction with prey, like an allosaur trying to hold struggling prey with its feet. The abundance of stress fractures and avulsion injuries in Allosaurus provides evidence for "very active" predation-based rather than scavenging diets. The left scapula and fibula of an Allosaurus fragilis specimen cataloged as USNM 4734 are both pathological, both probably due to healed fractures. The specimen USNM 8367 preserved several pathological gastralia which preserve evidence of healed fractures near their middle. Some of the fractures were poorly healed and "formed pseudoarthroses".
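The fracture counts quoted above can be restated as simple frequencies; the short Python sketch below performs only that raw arithmetic and does not reproduce the study's statistical comparison across the three weight-bearing toes.

# Raw stress-fracture frequencies from the counts given in the text (Rothschild and colleagues, 2001).
hand_fractured, hand_total = 3, 47
foot_fractured, foot_total = 17, 281
print(round(100 * hand_fractured / hand_total, 1))  # ~6.4% of the sampled hand bones
print(round(100 * foot_fractured / foot_total, 1))  # ~6.0% of the sampled foot bones

Both regions show broadly similar raw frequencies; the behavioral interpretation drawn from these injuries is discussed above.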
A specimen with a fractured rib was recovered from the Cleveland-Lloyd Quarry. Another specimen had fractured ribs and fused vertebrae near the end of the tail. An apparent subadult male Allosaurus fragilis was reported to have extensive pathologies, with a total of fourteen separate injuries. The specimen MOR 693 had pathologies on five ribs, the sixth neck vertebra, the third, eighth, and thirteenth back vertebrae, the second tail vertebra and its chevron, the gastralia, the right scapula, manual phalanx I, the left ilium, metatarsals III and V, the first phalanx of the third toe, and the third phalanx of the second. The ilium had "a large hole...caused by a blow from above". The near end of the first phalanx of the third toe was afflicted by an involucrum. Additionally, a subadult Allosaurus individual that suffered from spondyloarthropathy has been discovered in the Dana Quarry in Wyoming. This finding represents the first known fossil evidence of spondyloarthropathy occurring in a theropod. Other pathologies reported in Allosaurus include:
Willow breaks in two ribs
Healed fractures in the humerus and radius
Distortion of joint surfaces in the foot, possibly due to osteoarthritis or developmental issues
Osteopetrosis along the endosteal surface of a tibia
Distortions of the joint surfaces of the tail vertebrae, possibly due to osteoarthritis or developmental issues
"[E]xtensive 'neoplastic' ankylosis of caudals", possibly due to physical trauma, as well as the fusion of chevrons to centra
Coossification of vertebral centra near the end of the tail
Amputation of a chevron and foot bone, both possibly a result of bites
"[E]xtensive exostoses" in the first phalanx of the third toe
Lesions similar to those caused by osteomyelitis in two scapulae
Bone spurs in a premaxilla, ungual, and two metacarpals
Exostosis in a pedal phalanx possibly attributable to an infectious disease
A metacarpal with a round depressed fracture
Paleoecology
Allosaurus was the most common large theropod in the vast tract of Western American fossil-bearing rock known as the Morrison Formation, accounting for 70 to 75% of theropod specimens, and as such was at the top trophic level of the Morrison food chain. The Morrison Formation is interpreted as a semiarid environment with distinct wet and dry seasons, and flat floodplains. Vegetation varied from river-lining forests of conifers, tree ferns, and ferns (gallery forests), to fern savannas with occasional trees such as the Araucaria-like conifer Brachyphyllum. The Morrison Formation has been a rich fossil hunting ground. The flora of the period has been revealed by fossils of green algae, fungi, mosses, horsetails, ferns, cycads, ginkgoes, and several families of conifers. Animal fossils discovered include bivalves, snails, ray-finned fishes, frogs, salamanders, turtles, sphenodonts, lizards, terrestrial and aquatic crocodylomorphs, several species of pterosaur, numerous dinosaur species, and early mammals such as docodonts, multituberculates, symmetrodonts, and triconodonts. Dinosaurs known from the Morrison include the theropods Ceratosaurus, Ornitholestes, Tanycolagreus, and Torvosaurus, the sauropods Haplocanthosaurus, Camarasaurus, Cathetosaurus, Brachiosaurus, Suuwassea, Apatosaurus, Brontosaurus, Barosaurus, Diplodocus, Supersaurus, Amphicoelias, and Maraapunisaurus, and the ornithischians Camptosaurus, Dryosaurus, and Stegosaurus. Allosaurus is commonly found at the same sites as Apatosaurus, Camarasaurus, Diplodocus, and Stegosaurus.
The Late Jurassic formations of Portugal where Allosaurus is present are interpreted as having been similar to the Morrison, but with a stronger marine influence. Many of the dinosaurs of the Morrison Formation are the same genera as those seen in Portuguese rocks (mainly Allosaurus, Ceratosaurus, Torvosaurus, and Stegosaurus), or have a close counterpart (Brachiosaurus and Lusotitan, Camptosaurus and Draconyx). Allosaurus coexisted with fellow large theropods Ceratosaurus and Torvosaurus in both the United States and Portugal. The three appear to have had different ecological niches, based on anatomy and the location of fossils. Ceratosaurus and Torvosaurus may have preferred to be active around waterways, and had lower, thinner bodies that would have given them an advantage in forest and underbrush terrains, whereas Allosaurus was more compact, with longer legs, faster but less maneuverable, and seems to have preferred dry floodplains. Ceratosaurus, better known than Torvosaurus, differed noticeably from Allosaurus in functional anatomy by having a taller, narrower skull with large, broad teeth. Allosaurus was itself a potential food item to other carnivores, as illustrated by an Allosaurus pubic foot marked by the teeth of another theropod, probably Ceratosaurus or Torvosaurus. The location of the bone in the body (along the bottom margin of the torso and partially shielded by the legs), and the fact that it was among the most massive in the skeleton, indicates that the Allosaurus was being scavenged. A bone assemblage in the Upper Jurassic Mygatt-Moore Quarry preserves an unusually high occurrence of theropod bite marks, most of which can be attributed to Allosaurus and Ceratosaurus, while others could have been made by Torvosaurus given the size of the striations. While the position of the bite marks on the herbivorous dinosaurs is consistent with predation or early access to remains, bite marks found on Allosaurus material suggest scavenging, either from the other theropods or from another Allosaurus. The unusually high concentration of theropod bite marks compared to other assemblages could be explained either by a more complete utilization of resources during a dry season by theropods, or by a collecting bias in other localities.
Biology and health sciences
Dinosaurs and prehistoric reptiles
null
1348
https://en.wikipedia.org/wiki/AK-47
AK-47
The AK-47, officially known as the Avtomat Kalashnikova (; also known as the Kalashnikov or just AK), is an assault rifle that is chambered for the 7.62×39mm cartridge. Developed in the Soviet Union by Russian small-arms designer Mikhail Kalashnikov, it is the originating firearm of the Kalashnikov (or "AK") family of rifles. After more than seven decades since its creation, the AK-47 model and its variants remain one of the most popular and widely used firearms in the world. Design work on the AK-47 began in 1945. It was presented for official military trials in 1947, and, in 1948, the fixed-stock version was introduced into active service for selected units of the Soviet Army. In early 1949, the AK was officially accepted by the Soviet Armed Forces and used by the majority of the member states of the Warsaw Pact. The model and its variants owe their global popularity to their reliability under harsh conditions, low production cost (compared to contemporary weapons), availability in virtually every geographic region, and ease of use. The AK has been manufactured in many countries and has seen service with armed forces as well as irregular forces and insurgencies throughout the world. , "of the estimated 500 million firearms worldwide, approximately 100 million belong to the Kalashnikov family, three-quarters of which are AK-47s". The model is the basis for the development of many other types of individual, crew-served, and specialized firearms. History Origins During World War II, the Sturmgewehr 44 rifle used by German forces made a deep impression on their Soviet counterparts. The select-fire rifle was chambered for a new intermediate cartridge, the 7.92×33mm Kurz, and combined the firepower of a submachine gun with the range and accuracy of a rifle. On 15 July 1943, an earlier model of the Sturmgewehr was demonstrated before the People's Commissariat of Arms of the USSR. The Soviets were impressed with the weapon and immediately set about developing an intermediate caliber fully automatic rifle of their own, to replace the PPSh-41 submachine guns and outdated Mosin–Nagant bolt-action rifles that armed most of the Soviet Army. The Soviets soon developed the 7.62×39mm M43 cartridge, used in the semi-automatic SKS carbine and the RPD light machine gun. Shortly after World War II, the Soviets developed the AK-47 rifle, which quickly replaced the SKS in Soviet service. Introduced in 1959, the AKM is a lighter stamped steel version and the most ubiquitous variant of the entire AK series of firearms. In the 1960s, the Soviets introduced the RPK light machine gun, an AK-type weapon with a stronger receiver, a longer heavy barrel, and a bipod, that eventually replaced the RPD light machine gun. Concept Mikhail Kalashnikov began his career as a weapon designer in 1941 while recuperating from a shoulder wound that he received during the Battle of Bryansk. Kalashnikov himself stated..."I was in the hospital, and a soldier in the bed beside me asked: 'Why do our soldiers have only one rifle for two or three of our men when the Germans have automatics?' So I designed one. I was a soldier, and I created a machine gun for a soldier. It was called an Avtomat Kalashnikova, the automatic weapon of Kalashnikov—AK—and it carried the year of its first manufacture, 1947." The AK-47 is best described as a hybrid of previous rifle technology innovations. "Kalashnikov decided to design an automatic rifle combining the best features of the American M1 Garand and the German StG 44." 
Kalashnikov's team had access to these weapons and did not need to "reinvent the wheel". Kalashnikov himself observed: "A lot of Russian Army soldiers ask me how one can become a constructor, and how new weaponry is designed. These are very difficult questions. Each designer seems to have his own paths, his own successes and failures. But one thing is clear: before attempting to create something new, it is vital to have a good appreciation of everything that already exists in this field. I myself have had many experiences confirming this to be so." Some claimed that Kalashnikov copied designs like Bulkin's TKB-415 or Simonov's AVS-31. Early designs Kalashnikov started work on a submachine gun design in 1942 and a light machine gun design in 1943. Early in 1944, Kalashnikov was given some 7.62×39mm M43 cartridges and informed that other designers were working on weapons for this new Soviet small-arms cartridge. It was suggested that a new weapon might well lead to greater things. He then undertook work on the new rifle. In 1944, he entered a design competition with this new 7.62×39mm, semi-automatic, gas-operated, long-stroke piston carbine, strongly influenced by the American M1 Garand. The new rifle was in the same class as the SKS-45 carbine, with a fixed magazine and gas tube above the barrel. However, the new Kalashnikov design lost out to a Simonov design. In 1946, a new design competition was initiated to develop a new rifle. Kalashnikov submitted a gas-operated rifle with a short-stroke gas piston above the barrel, a breechblock mechanism similar to his 1944 carbine, and a curved 30-round magazine. Kalashnikov's rifles, the AK-1 (with a milled receiver) and AK-2 (with a stamped receiver) proved to be reliable weapons and were accepted to a second round of competition along with other designs. These prototypes (also known as the AK-46) had a rotary bolt, a two-part receiver with separate trigger unit housing, dual controls (separate safety and fire selector switches), and a non-reciprocating charging handle located on the left side of the weapon. This design had many similarities to the StG 44. In late 1946, as the rifles were being tested, one of Kalashnikov's assistants, Aleksandr Zaitsev, suggested a major redesign to improve reliability. At first, Kalashnikov was reluctant, given that their rifle had already fared better than its competitors. Eventually, however, Zaitsev managed to persuade Kalashnikov. In November 1947, the new prototypes (AK-47s) were completed. The rifle used a long-stroke gas piston above the barrel. The upper and lower receivers were combined into a single receiver. The selector and safety were combined into a single control lever/dust cover on the right side of the rifle and the bolt handle was attached to the bolt carrier. This simplified the design and production of the rifle. The first army trial series began in early 1948. The new rifle proved to be reliable under a wide range of conditions and possessed convenient handling characteristics. In 1949, it was adopted by the Soviet Army as the "7.62 mm Kalashnikov rifle (AK)". Further development There were many difficulties during the initial phase of production. The first production models had stamped sheet metal receivers with a milled trunnion and butt stock insert and a stamped body. Difficulties were encountered in welding the guide and ejector rails, causing high rejection rates. Instead of halting production, a heavy machined receiver was substituted for the sheet metal receiver. 
Even though production of these milled rifles started in 1951, they were officially referred to as AK-49, based on the date their development started, but they are widely known in the collectors' and current commercial market as "Type 2 AK-47". This was a more costly process, but the use of machined receivers accelerated production, as tooling and labor for the earlier Mosin–Nagant rifle's machined receiver were easily adapted. Partly because of these problems, the Soviets were not able to distribute large numbers of the new rifles to soldiers until 1956. During this time, production of the interim SKS rifle continued. Once the manufacturing difficulties of non-milled receivers had been overcome, a redesigned version designated the AKM (M for "modernized" or "upgraded") was introduced in 1959. This new model used a stamped sheet metal receiver and featured a slanted muzzle brake on the end of the barrel to compensate for muzzle rise under recoil. In addition, a hammer retarder was added to prevent the weapon from firing out of battery (without the bolt being fully closed) during rapid or fully automatic fire. This is also sometimes referred to as a "cyclic rate reducer", or simply "rate reducer", as it also has the effect of reducing the number of rounds fired per minute during fully automatic fire. The rifle was also roughly one-third lighter than the previous model. Most licensed and unlicensed production of the Kalashnikov assault rifle abroad was of the AKM variant, partially due to the much easier production of the stamped receiver. This model is the most commonly encountered, having been produced in much greater quantities. All rifles based on the Kalashnikov design are often colloquially referred to as "AK-47s" in the West and some parts of Asia, although this is only correct when applied to rifles based on the original three receiver types. In most former Eastern Bloc countries, the weapon is known simply as the "Kalashnikov" or "AK". The differences between the milled and stamped receivers include the use of rivets rather than welds on the stamped receiver, as well as the placement of a small dimple above the magazine well for stabilization of the magazine. Replacement In 1974, the Soviets began replacing their AK-47 and AKM rifles with a newer design, the AK-74, which uses 5.45×39mm ammunition. This new rifle and cartridge had only started to be manufactured in Eastern European nations when the Soviet Union collapsed, drastically slowing the production of the AK-74 and other weapons of the former Soviet bloc. Design The AK-47 was designed to be a simple, reliable, fully automatic rifle that could be manufactured quickly and cheaply, using mass-production methods that were state of the art in the Soviet Union during the late 1940s. The AK-47 uses a long-stroke gas system generally associated with high reliability in adverse conditions. The large gas piston, generous clearance between moving parts, and tapered cartridge case design allow the gun to endure large amounts of foreign matter and fouling without failing to cycle. Cartridge The AK fires the 7.62×39mm cartridge with a muzzle velocity of . The cartridge weight is , and the projectile weight is . The original Soviet M43 bullets are 123-grain boat-tail bullets with a copper-plated steel jacket, a large steel core, and some lead between the core and the jacket. 
The AK has excellent penetration when shooting through heavy foliage, walls, or a common vehicle's metal body and into an opponent attempting to use these things as cover. The 7.62×39mm M43 projectile does not generally fragment when striking an opponent and has an unusual tendency to remain intact even after making contact with bone. The 7.62×39mm round produces significant wounding in cases where the bullet tumbles (yaws) in tissue, but produces relatively minor wounds in cases where the bullet exits before beginning to yaw. In the absence of yaw, the M43 round can pencil through tissue with relatively little injury. Most, if not all, of the 7.62×39mm ammunition found today is of the upgraded M67 variety. This variety deleted the steel insert, shifting the center of gravity rearward, and allowing the projectile to destabilize (or yaw) at about , nearly earlier in tissue than the M43 round. This change also reduces penetration in ballistic gelatin to ~ for the newer M67 round versus ~ for the older M43 round. However, the wounding potential of M67 is mostly limited to the small permanent wound channel the bullet itself makes, especially when the bullet yaws. Operating mechanism To fire, the operator inserts a loaded magazine, pulls back and releases the charging handle, and then pulls the trigger. In semi-automatic, the firearm fires only once, requiring the trigger to be released and depressed again for the next shot. In fully automatic, the rifle continues to fire automatically cycling fresh rounds into the chamber until the magazine is exhausted or pressure is released from the trigger. After ignition of the cartridge primer and propellant, rapidly expanding propellant gases are diverted into the gas cylinder above the barrel through a vent near the muzzle. The build-up of gases inside the gas cylinder drives the long-stroke piston and bolt carrier rearward and a cam guide machined into the underside of the bolt carrier, along with an ejector spur on the bolt carrier rail guide, rotates the bolt approximately 35° and unlocks it from the barrel extension via a camming pin on the bolt. The moving assembly has about of free travel, which creates a delay between the initial recoil impulse of the piston and the bolt unlocking sequence, allowing gas pressures to drop to a safe level before the seal between the chamber and the bolt is broken. The AK-47 does not have a gas valve; excess gases are ventilated through a series of radial ports in the gas cylinder. Unlike many other rifle platforms, such as the AR-15 platform, the Kalashnikov platform bolt locking lugs are chamfered allowing for primary extraction upon bolt rotation which aids reliable feeding and extraction, albeit not with that much force due to the short distance the bolt carrier travels before acting on the locking lug. The Kalashnikov platform then uses an extractor claw along with a fin shaped ejector to eject the spent cartridge case. Barrel The rifle received a barrel with a chrome-lined bore and four right-hand grooves at a 240 mm (1 in 9.45 in) or 31.5 calibers rifling twist rate. The gas block contains a gas channel that is installed at a slanted angle with the bore axis. The muzzle is threaded for the installation of various muzzle devices such as a muzzle brake or a blank-firing adaptor. Gas block The gas block of the AK-47 features a cleaning rod capture or sling loop. Gas relief ports that alleviate gas pressure are placed horizontally in a row on the gas cylinder. 
Fire selector The fire selector is a large lever located on the right side of the rifle; it acts as a dust cover and prevents the charging handle from being pulled fully to the rear when it is on safe. It is operated by the shooter's right fore-fingers and has three settings: safe (up), full-auto (center), and semi-auto (down). The reason for this is that a soldier under stress will push the selector lever down with considerable force, bypassing the full-auto stage and setting the rifle to semi-auto. To set the AK-47 to full-auto requires the deliberate action of centering the selector lever. To operate the fire selector lever, right-handed shooters have to briefly remove their right hand from the pistol grip, which is ergonomically sub-optimal. Some AK-type rifles also have a more traditional selector lever on the left side of the receiver, just above the pistol grip. This lever is operated by the shooter's right thumb and has three settings: safe (forward), full-auto (center), and semi-auto (backward). Sights The AK-47 uses a notched rear tangent iron sight calibrated in increments from . The front sight is a post adjustable for elevation in the field. Horizontal adjustment requires a special drift tool and is done by the armory before the issue or if the need arises by an armorer after the issue. The sight line elements are approximately over the bore axis. The "point-blank range" battle zero setting "П" standing for постоянная (constant) on the 7.62×39mm AK-47 rear tangent sight element corresponds to a zero. These settings mirror the Mosin–Nagant and SKS rifles, which the AK-47 replaced. For the AK-47 combined with service cartridges, the 300 m battle zero setting limits the apparent "bullet rise" within approximately relative to the line of sight. Soldiers are instructed to fire at any target within this range by simply placing the sights on the center of mass (the belt buckle, according to Russian and former Soviet doctrine) of the enemy target. Any errors in range estimation are tactically irrelevant, as a well-aimed shot will hit the torso of the enemy soldier. Some AK-type rifles have a front sight with a flip-up luminous dot that is calibrated at , for improved night fighting. Furniture The AK-47 was originally equipped with a buttstock, handguard, and an upper heat guard made from solid wood. With the introduction of the Type 3 receiver the buttstock, lower handguard, and upper heat guard were manufactured from birch plywood laminates. Such engineered woods are stronger and resist warping better than the conventional one-piece patterns, do not require lengthy maturing, and are cheaper. The wooden furniture was finished with the Russian amber shellac finishing process. AKS and AKMS models featured a downward-folding metal butt-stock similar to that of the German MP40 submachine-gun, for use in the restricted space in the BMP infantry combat vehicle, as well as by paratroops. All 100 series AKs use plastic furniture with side-folding stocks. Magazines The standard magazine capacity is 30 rounds. There are also 10-, 20-, and 40-round box magazines, as well as 75-round drum magazines. The AK-47's standard 30-round magazines have a pronounced curve that allows them to smoothly feed ammunition into the chamber. Their heavy steel construction combined with "feed-lips" (the surfaces at the top of the magazine that control the angle at which the cartridge enters the chamber) machined from a single steel billet makes them highly resistant to damage. 
These magazines are so strong that "Soldiers have been known to use their mags as hammers, and even bottle openers". This contributes to the AK-47 magazine being more reliable, but makes it heavier than U.S. and NATO magazines. The early slab-sided steel AK-47 30-round detachable box magazines had sheet-metal bodies and weighed empty. The later steel AKM 30-round magazines had lighter sheet-metal bodies with prominent reinforcing ribs, weighing empty. To further reduce weight, a lightweight magazine with an aluminum body and a prominent reinforcing waffle rib pattern, weighing empty, was developed for the AKM, but it proved to be too fragile and the small number issued was quickly withdrawn from service. As a replacement, steel-reinforced 30-round plastic 7.62×39mm box magazines were introduced. These rust-colored magazines weigh empty and are often mistakenly identified as being made of Bakelite (a phenolic resin), but were fabricated from two parts of AG-S4 molding compound (a glass-reinforced phenol-formaldehyde binder impregnated composite), assembled using an epoxy resin adhesive. Noted for their durability, these magazines did, however, compromise the rifle's camouflage and lacked the small horizontal reinforcing ribs running down both sides of the magazine body near the front that were added on all later plastic magazine generations. A second-generation steel-reinforced dark-brown (color shades vary from maroon to plum to near black) 30-round 7.62×39mm magazine was introduced in the early 1980s, fabricated from ABS plastic. The third-generation steel-reinforced 30-round 7.62×39mm magazine is similar to the second generation, but is darker colored and has a matte non-reflective surface finish. The current issue is a steel-reinforced, matte true black, non-reflective surface-finished 7.62×39mm 30-round magazine, fabricated from ABS plastic and weighing empty. Early steel AK-47 magazines are long; the later ribbed steel AKM and newer plastic 7.62×39mm magazines are about shorter. The transition from steel to mainly plastic magazines yields a significant weight reduction and allows a soldier to carry more ammunition for the same weight. All 7.62×39mm AK magazines are backward compatible with older AK variants. A load of 10.12 kg (22.3 lb) is the maximum amount of ammunition that the average soldier can comfortably carry, and it provides a convenient basis for comparing the three most common 7.62×39mm AK magazines (an illustrative calculation appears below). Most Yugoslavian and some East German AK magazines were made with cartridge followers that hold the bolt open when empty; however, most AK magazine followers allow the bolt to close when the magazine is empty. Accessories Accessories supplied with the rifle include a long 6H3 bayonet featuring a long spear point blade. The AK-47 bayonet is installed by slipping the diameter muzzle ring around the muzzle and latching the handle down on the bayonet lug under the front sight base. All current-model AKM rifles can mount under-barrel 40 mm grenade launchers such as the GP-25 and its variants, which can fire up to 20 rounds per minute and have an effective range of up to 400 meters. The main grenade is the VOG-25 (VOG-25M) fragmentation grenade, which has a lethality radius of 6 m (20 ft), or 9 m (30 ft) for the VOG-25M. The VOG-25P/VOG-25PM ("jumping") variant explodes above the ground. The AK-47 can also mount a (rarely used) cup-type grenade launcher, the Kalashnikov grenade launcher, which fires standard RGD-5 Soviet hand grenades. The maximum effective range is approximately 150 meters. 
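To illustrate the carrying-weight trade-off described in the Magazines section above, the short Python sketch below compares how many loaded magazines fit into the 10.12 kg (22.3 lb) budget mentioned there. The per-magazine and per-cartridge weights used here are rough, assumed figures chosen only for illustration; they are not values cited by this article.

```python
# Illustrative arithmetic only: the magazine and cartridge weights below are
# rough assumptions, not figures quoted in the article text above.
BUDGET_KG = 10.12            # carrying budget cited in the Magazines section
CARTRIDGE_KG = 0.0163        # assumed weight of one 7.62x39mm round
ROUNDS_PER_MAG = 30

for name, empty_kg in [("early slab-sided steel", 0.43),   # assumed empty weights
                       ("ribbed steel AKM", 0.33),
                       ("steel-reinforced plastic", 0.24)]:
    loaded = empty_kg + ROUNDS_PER_MAG * CARTRIDGE_KG
    mags = int(BUDGET_KG // loaded)
    print(f"{name}: {loaded:.2f} kg loaded, {mags} magazines, "
          f"{mags * ROUNDS_PER_MAG} rounds within the budget")
```

Whatever the exact weights, the arithmetic shows why a lighter magazine body translates directly into more rounds carried for the same load.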
This launcher can also be used to launch tear gas and riot control grenades. All current AKs (100 series) and some older models have side rails for mounting a variety of scopes and sighting devices, such as the PSO-1 Optical Sniper Sight. The side rails allow for the removal and remounting of optical accessories without interfering with the zeroing of the optic. However, the 100 series side-folding stocks cannot be folded with the optics mounted. Characteristics Service life The AK-47 and its variants have been and are made in dozens of countries, with "quality ranging from finely engineered weapons to pieces of questionable workmanship." As a result, the AK-47 has a service/system life of approximately 6,000 to 15,000 rounds. The AK-47 was designed to be a cheap, simple, easy-to-manufacture rifle, perfectly matching Soviet military doctrine that treats equipment and weapons as disposable items. As units are often deployed without adequate logistical support and dependent on "battlefield cannibalization" for resupply, it is more cost-effective to replace rather than repair weapons. The AK-47 has small parts and springs that need to be replaced every few thousand rounds. However, "Every time it is disassembled beyond the field stripping stage, it will take some time for some parts to regain their fit, and some parts may tend to shake loose and fall out when firing the weapon. Some parts of the AK-47 line are riveted together. Repairing these can be quite a hassle since the end of the rivet has to be ground off and a new one set after the part is replaced." Variants Early variants (7.62×39mm)
Issue of 1948/49, Type 1: The very earliest models, with stamped sheet metal receivers, are now very rare.
Issue of 1951, Type 2: Has a milled receiver. The barrel and chamber are chrome-plated to resist corrosion.
Issue of 1954/55, Type 3: Lightened, milled receiver variant. Rifle weight is .
AKS (AKS-47): Type 1, 2, or 3 receivers: Featured a downward-folding metal stock similar to that of the MP 40, for use in the restricted space of the BMP infantry combat vehicle, as well as for airborne troops.
AKN (AKSN): Night sight rail.
Modernized (7.62×39mm)
AKM: A simplified, lighter version of the AK-47; the Type 4 receiver is made from stamped and riveted sheet metal. A slanted muzzle device was added to reduce muzzle rise in automatic fire. The rifle weight is due to the lighter receiver. This is the most ubiquitous variant of the AK-47.
AKMS: Under-folding stock version of the AKM intended for airborne troops.
AKMN (AKMSN): Night scope rail.
AKML (AKMSL): Slotted flash suppressor and night scope rail.
RPK: Hand-held machine gun version with a longer barrel and bipod. Its variants, the RPKS, RPKN (RPKSN), and RPKL (RPKSL), mirror the AKM variants; the "S" variants have a side-folding wooden stock.
Foreign variants (7.62×39mm)
Type 56: Chinese assault rifle based on the . Still in production, primarily for export markets.
For the further developed AK models, see Kalashnikov rifles. Production The AK-47 and its variants have been manufactured in many countries. Kalashnikov Concern (formerly Izhmash), a private Russian company, has repeatedly claimed that the majority of foreign manufacturers are producing AK-type rifles without proper licensing. 
Accuracy potential US military method The AK-47's accuracy is generally sufficient to hit an adult male torso out to about , though even experts firing from prone or bench rest positions at this range were observed to have difficulty placing ten consecutive rounds on target. Later designs did not significantly improve the rifle's accuracy. An AK can fire a 10-shot group of at , and at . The newer stamped-steel receiver AKM models, while more rugged and less prone to metal fatigue, are less accurate than the forged/milled receivers of their predecessors: the milled AK-47s are capable of shooting groups at , whereas the stamped AKMs are capable of shooting groups at . The best shooters can hit a man-sized target at within five shots (firing from a prone or bench rest position) or ten shots (standing). The single-shot hit probability on the NATO E-type Silhouette Target (a human upper body half and head silhouette) of the AK-47 and the later developed AK-74, M16A1, and M16A2 rifles was measured by the US military under ideal proving ground conditions in the 1980s. Under worst field exercise circumstances, the hit probabilities for all the tested rifles were drastically reduced, from 34% at 50 m down to 3–4% at 600 m, with no significant differences between weapons at each range. Russian method The Russian approach to determining accuracy is a circular error probable method, which involves drawing two circles on the target, one for the maximum vertical dispersion of hits and one for the maximum horizontal dispersion of hits. The hits on the outer part of the target are then disregarded, and only half of the hits (50%, or R50) on the inner part of the circles are counted. This significantly reduces the overall diameter of the groups. Both the vertical and horizontal measurements of the reduced groups are then used to measure accuracy. When the R50 results are doubled, the hit probability increases to 93.7%. R50 means the closest 50 percent of the shot group will all be within a circle of the mentioned diameter (a brief illustrative calculation appears below). Vertical and horizontal mean (R50) deviations with service ammunition have been tabulated for the various AK platforms. Users Current − Type 56 variant. − EKAM: The counter-terrorist unit of the Hellenic Police − Type 58 variant – Locally made as well as being in service with the Army − Used by Thahan Phran Non-state current ELN FARC dissidents − Captured from the Syrian Army Karen National Defence Organisation Karen National Liberation Army Kurdistan Workers' Party National Movement for the Liberation of Azawad New People's Army Syrian opposition Ta'ang National Liberation Army Former − MPi-K (AK-47) and MPi-KM (AKM) − Passed on to the unified Vietnamese state − Used by the Panama Defense Forces − Replaced by the AKM and AK-74 − Captured rifles were issued to ARVN irregular units Non-state former Afghan mujahideen − CIA-supplied Egyptian and Chinese variants Contras Farabundo Martí National Liberation Front Iraqi insurgents Khmer Rouge Liberation Tigers of Tamil Eelam Malayan National Liberation Army Moro National Liberation Front Northern Alliance Provisional Irish Republican Army − Supplied by Libya RENAMO Revolutionary Armed Forces of Colombia Viet Cong Vigorous Burmese Student Warriors Illicit trade Throughout the world, the AK and its variants are commonly used by governments, revolutionaries, terrorists, criminals, and civilians alike. 
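The R50 figure described under "Russian method" above can be approximated directly from shot coordinates. The sketch below is a simplified, illustrative reading of that procedure, not an official algorithm: for each axis it takes the median absolute deviation from the mean point of impact, which is the half-width of the band containing the closest 50% of shots. The function name and the sample 10-shot group are hypothetical.

```python
import statistics

def r50_deviations(impacts):
    """Approximate vertical and horizontal R50: the half-width of the band,
    centred on the mean point of impact, containing the closest 50% of shots.
    `impacts` is a list of (x, y) offsets in centimetres. Illustrative only."""
    xs = [x for x, _ in impacts]
    ys = [y for _, y in impacts]
    cx, cy = statistics.mean(xs), statistics.mean(ys)
    r50_x = statistics.median([abs(x - cx) for x in xs])  # horizontal R50
    r50_y = statistics.median([abs(y - cy) for y in ys])  # vertical R50
    return r50_x, r50_y

# Hypothetical 10-shot group, offsets in cm from the point of aim
group = [(1.2, -0.8), (-2.0, 1.5), (0.5, 2.2), (-1.1, -1.9), (2.4, 0.3),
         (-0.6, 0.9), (1.8, -2.1), (-2.3, 0.4), (0.2, 1.1), (1.0, -0.5)]
print(r50_deviations(group))
```

The doubling rule quoted above is consistent with a circular normal dispersion model, in which the probability of a hit falling within twice the 50% radius is 1 − 0.5⁴ ≈ 93.75%, matching the 93.7% figure.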
In some countries, such as Somalia, Rwanda, Mozambique, Congo, and Tanzania, the prices for Black Market AKs are between $30 and $125 per weapon and prices have fallen in the last few decades due to mass counterfeiting. In Kenya, "an AK-47 fetches five head of cattle (about 10,000 Kenya shillings or 100 U.S. dollars) when offered for barter, but costs almost half that price when cash is paid". There are places around the world where AK-type weapons can be purchased on the black market "for as little as $6, or traded for a chicken or a sack of grain". The AK-47 has also spawned a cottage industry of sorts and has been copied and manufactured (one gun at a time) in small shops around the world (see Khyber Pass Copy). The estimated numbers of AK-type weapons vary greatly. The Small Arms Survey suggests that "between 70 and 100 million of these weapons have been produced since 1947". The World Bank estimates that out of the 500 million total firearms available worldwide, 100 million are of the Kalashnikov family, and 75 million are AK-47s. Because AK-type weapons have been made in many countries, often illicitly, it is impossible to know how many exist. Conflicts The AK-47 has been used in the following conflicts: 1940s Malayan Emergency (1948−1960) 1950s Hungarian Revolution (1956) Vietnam War (1955–1975) Laotian Civil War (1959–1975) 1960s Congo Crisis (1960–1965) Portuguese Colonial War (1961–1974) Rhodesian Bush War (1964–1979) The Troubles (late 1960s–1998) Communist insurgency in Thailand (1965–1983) South African Border War (1966–1990) India-China clashes (1967) Cambodian Civil War (1968–1975) Communist insurgency in Malaysia (1968–1989) Moro Conflict (1968−2019) 1970s Yom Kippur War (1973) Ethiopian Civil War (1974–1991) Western Sahara War (1975–1991) Cambodian–Vietnamese War (1978–1989) Chadian–Libyan War (1978–1987) Soviet–Afghan War (1979–1989) 1980s 1979 Kurdish rebellion in Iran Iran–Iraq War (1980–1988) Insurgency in Jammu and Kashmir (1988–present) Sri Lankan Civil War (1983–2009) United States invasion of Grenada (1983) South Lebanon conflict (1985–2000) Lord's Resistance Army insurgency (1987–present) United States invasion of Panama (1989) 1990s KDPI insurgency (1989–1996) Tuareg rebellion (1990–1995) Gulf War (1990–1991) Somali Civil War (1991–present) Yugoslav Wars (1991–2001) Burundian Civil War (1993–2005) First Chechen War (1994−1996) Republic of the Congo Civil War (1997–1999) Kargil War (1999) 2000s War in Afghanistan (2001–2021) Iraq War (2003–2011) South Thailand insurgency (2004–present) Mexican drug war (2006–present) 2010s Libyan Civil War (2011) Syrian civil war (2011–present) Iraqi insurgency (2011–2013) Central African Republic Civil War (2012–present) Mali War (2012–present) Russo-Ukrainian War (2014–present) Western Iran clashes (2016–present) 2020s Second Nagorno-Karabakh War (2020) Tigray War (2020–2022) Myanmar civil war (2021–present) Russian invasion of Ukraine (2022–present) September–October 2022 attacks on Iraqi Kurdistan Israel-Hamas War (2023–present) Cultural influence and impact During the Cold War, the Soviet Union and the People's Republic of China, as well as United States and other NATO nations supplied arms and technical knowledge to numerous countries and rebel forces around the world. During this time the Western countries used relatively expensive automatic rifles, such as the FN FAL, the HK G3, the M14, and the M16. 
In contrast, the Russians and Chinese used the AK-47; its low production cost and ease of manufacture allow them to make AKs in vast numbers. In the pro-communist states, the AK-47 became a symbol of the Third World revolution. They were utilized in the Cambodian Civil War and the Cambodian–Vietnamese War. During the 1980s, the Soviet Union became the principal arms dealer to countries embargoed by Western nations, including Middle Eastern nations such as Libya and Syria, which welcomed Soviet Union backing against Israel. After the fall of the Soviet Union, AK-47s were sold both openly and on the black market to any group with cash, including drug cartels and dictatorial states, and more recently they have been seen in the hands of Islamic groups such as Al-Qaeda, ISIL, and the Taliban in Afghanistan and Iraq, and FARC, Ejército de Liberación Nacional guerrillas in Colombia. In Russia, the Kalashnikov is a tremendous source of national pride. "The family of the inventor of the world's most famous rifle, Mikhail Kalashnikov, has authorized German engineering company MMI to use the well-known Kalashnikov name on a variety of not-so-deadly goods." In recent years, Kalashnikov Vodka has been marketed with souvenir bottles in the shape of the AK-47 Kalashnikov. There are also Kalashnikov watches, umbrellas, and knives. The Kalashnikov Museum (also called the AK-47 museum) opened on 4 November 2004 in Izhevsk, Udmurt Republic. This city is in the Ural Region of Russia. The museum chronicles the biography of General Kalashnikov and documents the invention of the AK-47. The museum complex of Kalashnikov's small arms, a series of halls, and multimedia exhibitions are devoted to the evolution of the AK-47 rifle and attracts 10,000 monthly visitors. Nadezhda Vechtomova, the museum director, stated in an interview that the purpose of the museum is to honor the ingenuity of the inventor and the hard work of the employees and to "separate the weapon as a weapon of murder from the people who are producing it and to tell its history in our country". On 19 September 2017 a monument of Kalashnikov was unveiled in central Moscow. A protester, later detained by police, attempted to unfurl a banner reading "a creator of weapons is a creator of death". The proliferation of this weapon is reflected by more than just numbers. The AK-47 is included on the flag of Mozambique and its emblem, an acknowledgment that the country gained its independence in large part through the effective use of their AK-47s. It is also found in the coats of arms of East Timor, Zimbabwe and the revolution era Burkina Faso, as well as in the flags of Hezbollah, Syrian Resistance, FARC-EP, the New People's Army, TKP/TIKKO and the International Revolutionary People's Guerrilla Forces. U.S. and Western Europe countries frequently associate the AK-47 with their enemies; both Cold War era and present-day. For example, Western works of fiction (movies, television, novels, video games) often portray criminals, gang members, insurgents, and terrorists using AK-47s as the weapon of choice. Conversely, throughout the developing world, the AK-47 can be positively attributed with revolutionaries against foreign occupation, imperialism, or colonialism. In Ireland the AK-47 is associated with The Troubles due to its extensive use by republican paramilitaries during this period. In 2013, a decommissioned AK-47 was included in the A History of Ireland in 100 Objects collection. The AK-47 made an appearance in U.S. 
popular culture as a recurring focus in the Nicolas Cage film Lord of War (2005). Numerous monologues in the movie focus on the weapon and its effects on global conflict and the gun-running market. In Iraq and Afghanistan, private military company contractors from the U.K. and other countries used the AK-47 and its variants along with Western firearms such as the AR-15. In 2006, the Colombian musician and peace activist César López devised the escopetarra, an AK converted into a guitar. One sold for US$17,000 in a fundraiser held to benefit the victims of anti-personnel mines, while another was exhibited at the United Nations' Conference on Disarmament. In Mexico, the AK-47 is known as "Cuerno de Chivo" (literally "Goat's Horn") because of its curved magazine design. It is one of the weapons of choice of Mexican drug cartels and is sometimes mentioned in Mexican folk music lyrics.
Technology
Specific firearms
null
1349
https://en.wikipedia.org/wiki/Atanasoff%E2%80%93Berry%20computer
Atanasoff–Berry computer
The Atanasoff–Berry computer (ABC) was the first automatic electronic digital computer. The device was limited by the technology of the day. The ABC's priority is debated among historians of computer technology because it was neither programmable nor Turing-complete. Conventionally, the ABC would be considered the first electronic ALU (arithmetic logic unit), a component that is integrated into every modern processor's design. Its unique contribution was to make computing faster by being the first to use vacuum tubes to do arithmetic calculations. Prior to this, slower electro-mechanical methods were used by Konrad Zuse's Z1 computer and the simultaneously developed Harvard Mark I. The first electronic, programmable digital machine, the Colossus computer of 1943 to 1945, used tube-based technology similar to that of the ABC. Overview Conceived in 1937, the machine was built by Iowa State College mathematics and physics professor John Vincent Atanasoff with the help of graduate student Clifford Berry. It was designed only to solve systems of linear equations and was successfully tested in 1942. However, its intermediate result storage mechanism, a paper card writer/reader, was not perfected, and when John Vincent Atanasoff left Iowa State College for World War II assignments, work on the machine was discontinued. The ABC pioneered important elements of modern computing, including binary arithmetic and electronic switching elements, but its special-purpose nature and lack of a changeable, stored program distinguish it from modern computers. The computer was designated an IEEE Milestone in 1990. Atanasoff and Berry's computer work was not widely known until it was rediscovered in the 1960s, amid patent disputes over the first instance of an electronic computer. At that time ENIAC, which had been created by John Mauchly and J. Presper Eckert, was considered to be the first computer in the modern sense, but in 1973 a U.S. District Court invalidated the ENIAC patent and concluded that the ENIAC inventors had derived the subject matter of the electronic digital computer from Atanasoff. When, in the mid-1970s, the secrecy surrounding the British World War II development of the Colossus computers, which pre-dated ENIAC, was lifted, and Colossus was described at a conference in Los Alamos, New Mexico, in June 1976, John Mauchly and Konrad Zuse were reported to have been astonished. Design and construction According to Atanasoff's account, several key principles of the Atanasoff–Berry computer were conceived in a sudden insight after a long nighttime drive to Rock Island, Illinois, during the winter of 1937–38. The ABC innovations included electronic computation, binary arithmetic, parallel processing, regenerative capacitor memory, and a separation of memory and computing functions. The mechanical and logic design was worked out by Atanasoff over the next year. A grant application to build a proof-of-concept prototype was submitted in March 1939 to the Agronomy department, which was also interested in speeding up computation for economic and research analysis. $5,000 of further funding to complete the machine came from the nonprofit Research Corporation of New York City. The ABC was built by Atanasoff and Berry in the basement of the physics building at Iowa State College from 1939 to 1942. The initial funds were released in September, and the 11-tube prototype was first demonstrated in October 1939. A December demonstration prompted a grant for construction of the full-scale machine. 
The ABC was built and tested over the next two years. A January 15, 1941, story in the Des Moines Register announced the ABC as "an electrical computing machine" with more than 300 vacuum tubes that would "compute complicated algebraic equations" (but gave no precise technical description of the computer). The system weighed more than . It contained approximately of wire, 280 dual-triode vacuum tubes, 31 thyratrons, and was about the size of a desk. It was not programmable, which distinguishes it from more general machines of the same era, such as Konrad Zuse's 1941 Z3 (or earlier iterations) and the Colossus computers of 1943–1945. Nor did it implement the stored-program architecture, first implemented in the Manchester Baby of 1948, required for fully general-purpose practical computing machines. The machine was, however, the first to implement: Using vacuum tubes, rather than wheels, ratchets, mechanical switches, or telephone relays, allowing for greater speed than previous computers Using capacitors for memory, rather than mechanical components, allowing for greater speed and density The memory of the Atanasoff–Berry computer was a system called regenerative capacitor memory, which consisted of a pair of drums, each containing 1600 capacitors that rotated on a common shaft once per second. The capacitors on each drum were organized into 32 "bands" of 50 (30 active bands and two spares in case a capacitor failed), giving the machine a speed of 30 additions/subtractions per second. Data was represented as 50-bit binary fixed-point numbers. The electronics of the memory and arithmetic units could store and operate on 60 such numbers at a time (3000 bits). The alternating current power-line frequency of 60 Hz was the primary clock rate for the lowest-level operations. The arithmetic logic functions were fully electronic, implemented with vacuum tubes. The family of logic gates ranged from inverters to two- and three-input gates. The input and output levels and operating voltages were compatible between the different gates. Each gate consisted of one inverting vacuum-tube amplifier, preceded by a resistor divider input network that defined the logical function. The control logic functions, which only needed to operate once per drum rotation and therefore did not require electronic speed, were electromechanical, implemented with relays. The ALU operated on only one bit of each number at a time; it kept the carry/borrow bit in a capacitor for use in the next AC cycle. Although the Atanasoff–Berry computer was an important step up from earlier calculating machines, it was not able to run entirely automatically through an entire problem. An operator was needed to operate the control switches to set up its functions, much like the electro-mechanical calculators and unit record equipment of the time. Selection of the operation to be performed, reading, writing, converting to or from binary to decimal, or reducing a set of equations was made by front-panel switches and, in some cases, jumpers. There were two forms of input and output: primary user input and output and an intermediate results output and input. The intermediate results storage allowed operation on problems too large to be handled entirely within the electronic memory. (The largest problem that could be solved without the use of the intermediate output and input was two simultaneous equations, a trivial problem.) 
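The drum-memory figures quoted above fit together arithmetically, which the short Python check below makes explicit. It only restates numbers given in this section (1,600 capacitors per drum, 50-bit bands, two spare bands per drum, two drums, one rotation per second); the variable names are ours and the snippet is purely illustrative.

```python
# Consistency check using only figures quoted in the text above (illustrative).
capacitors_per_drum = 1600
bits_per_band = 50                                        # one 50-bit number per band
bands_per_drum = capacitors_per_drum // bits_per_band     # 32 bands
active_bands = bands_per_drum - 2                         # 2 spare bands -> 30 active
drums = 2

numbers_in_memory = active_bands * drums                  # 60 numbers at a time
bits_in_memory = numbers_in_memory * bits_per_band        # 3000 bits

rotations_per_second = 1                                  # drums turned once per second
ops_per_second = active_bands * rotations_per_second      # 30 additions/subtractions

print(bands_per_drum, numbers_in_memory, bits_in_memory, ops_per_second)
# -> 32 60 3000 30
```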
Intermediate results were binary, written onto paper sheets by electrostatically modifying the resistance at 1500 locations to represent 30 of the 50-bit numbers (one equation). Each sheet could be written or read in one second. The reliability of the system was limited to about 1 error in 100,000 calculations by these units, primarily attributed to lack of control of the sheets' material characteristics. In retrospect, a solution could have been to add a parity bit to each number as written. This problem was not solved by the time Atanasoff left the university for war-related work. Primary user input was decimal, via standard IBM 80-column punched cards, and output was decimal, via a front-panel display. Function The ABC was designed for a specific purpose: the solution of systems of simultaneous linear equations. It could handle systems with up to 29 equations, a difficult problem for the time. Problems of this scale were becoming common in physics, the department in which John Atanasoff worked. The machine could be fed two linear equations with up to 29 variables and a constant term and eliminate one of the variables. This process would be repeated manually for each of the equations, which would result in a system of equations with one fewer variable. Then the whole process would be repeated to eliminate another variable (a brief illustrative sketch of this procedure appears below). George W. Snedecor, the head of Iowa State's Statistics Department, was very likely the first user of an electronic digital computer to solve real-world mathematics problems. He submitted many of these problems to Atanasoff. Patent dispute On June 26, 1947, J. Presper Eckert and John Mauchly were the first to file for a patent on a digital computing device (ENIAC), much to the surprise of Atanasoff. The ABC had been examined by John Mauchly in June 1941, and Isaac Auerbach, a former student of Mauchly's, alleged that it influenced his later work on ENIAC, although Mauchly denied this. The ENIAC patent did not issue until 1964, and by 1967 Honeywell sued Sperry Rand in an attempt to break the ENIAC patents, arguing that the ABC constituted prior art. The United States District Court for the District of Minnesota released its judgement on October 19, 1973, finding in Honeywell v. Sperry Rand that the ENIAC patent was a derivative of John Atanasoff's invention. Campbell-Kelly and Aspray conclude that the case was legally resolved on October 19, 1973, when U.S. District Judge Earl R. Larson held the ENIAC patent invalid, ruling that the ENIAC derived many basic ideas from the Atanasoff–Berry computer. Replica The original ABC was eventually dismantled in 1948, when the university converted the basement to classrooms, and all of its pieces except for one memory drum were discarded. In 1997, a team of researchers led by Delwyn Bluhm and John Gustafson from Ames Laboratory (located on the Iowa State University campus) finished building a working replica of the Atanasoff–Berry computer at a cost of $350,000. The replica ABC was on display in the first-floor lobby of the Durham Center for Computation and Communication at Iowa State University and was subsequently exhibited at the Computer History Museum.
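The pairwise elimination procedure described in the Function section above can be sketched in a few lines of modern code. The following Python fragment is only an illustrative reconstruction of the arithmetic idea (forward elimination applied to one pair of equations at a time, followed by back-substitution), not a simulation of the ABC: it uses exact fractions rather than the machine's 50-bit binary fixed-point representation, and it does no pivoting, so a zero leading coefficient would fail.

```python
from fractions import Fraction

def eliminate(eq_a, eq_b, var=0):
    """Combine two equations (coefficient lists with the constant term last) so
    that the chosen variable cancels -- the step the ABC carried out on one
    pair of equations at a time."""
    a, b = eq_a[var], eq_b[var]
    return [b * x - a * y for x, y in zip(eq_a, eq_b)]

def solve(system):
    """Repeatedly eliminate the leading variable, then back-substitute."""
    system = [[Fraction(x) for x in eq] for eq in system]
    if len(system) == 1:
        coeff, const = system[0]
        return [const / coeff]
    reduced = [eliminate(system[0], eq)[1:] for eq in system[1:]]
    tail = solve(reduced)                       # values of the remaining unknowns
    residual = system[0][-1] - sum(c * v for c, v in zip(system[0][1:-1], tail))
    return [residual / system[0][0]] + tail

# 2x + y = 5 and x - y = 1  ->  x = 2, y = 1
print(solve([[2, 1, 5], [1, -1, 1]]))           # [Fraction(2, 1), Fraction(1, 1)]
```

On the real machine, each pairwise combination was carried out in binary hardware, while the repetition over equation pairs and over variables was driven manually by the operator, as described above.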
Technology
Early computers
null
1358
https://en.wikipedia.org/wiki/Anchor
Anchor
An anchor is a device, normally made of metal, used to secure a vessel to the bed of a body of water to prevent the craft from drifting due to wind or current. The word derives from Latin , which itself comes from the Greek (). Anchors can either be temporary or permanent. Permanent anchors are used in the creation of a mooring, and are rarely moved; a specialist service is normally needed to move or maintain them. Vessels carry one or more temporary anchors, which may be of different designs and weights. A sea anchor is a drag device, not in contact with the seabed, used to minimise drift of a vessel relative to the water. A drogue is a drag device used to slow or help steer a vessel running before a storm in a following or overtaking sea, or when crossing a bar in a breaking sea. Anchoring Anchors achieve holding power either by "hooking" into the seabed, or weight, or a combination of the two. The weight of the anchor chain can be more than that of the anchor and is critical to proper holding. Permanent moorings use large masses (commonly a block or slab of concrete) resting on the seabed. Semi-permanent mooring anchors (such as mushroom anchors) and large ship's anchors derive a significant portion of their holding power from their weight, while also hooking or embedding in the bottom. Modern anchors for smaller vessels have metal flukes that hook on to rocks on the bottom or bury themselves in soft seabed. The vessel is attached to the anchor by the rode (also called a cable or a warp). It can be made of rope, chain or a combination of rope and chain. The ratio of the length of rode to the water depth is known as the scope. Holding ground is the area of sea floor that holds an anchor, and thus the attached ship or boat. Different types of anchor are designed to hold in different types of holding ground. Some bottom materials hold better than others; for instance, hard sand holds well, shell holds poorly. Holding ground may be fouled with obstacles. An anchorage location may be chosen for its holding ground. In poor holding ground, only the weight of an anchor and chain matters; in good holding ground, it is able to dig in, and the holding power can be significantly higher. The basic anchoring consists of determining the location, dropping the anchor, laying out the scope, setting the hook, and assessing where the vessel ends up. The ship seeks a location that is sufficiently protected; has suitable holding ground, enough depth at low tide and enough room for the boat to swing. The location to drop the anchor should be approached from down wind or down current, whichever is stronger. As the chosen spot is approached, the vessel should be stopped or even beginning to drift back. The anchor should initially be lowered quickly but under control until it is on the bottom (see anchor windlass). The vessel should continue to drift back, and the cable should be veered out under control (slowly) so it is relatively straight. Once the desired scope is laid out, the vessel should be gently forced astern, usually using the auxiliary motor but possibly by backing a sail. A hand on the anchor line may telegraph a series of jerks and jolts, indicating the anchor is dragging, or a smooth tension indicative of digging in. As the anchor begins to dig in and resist backward force, the engine may be throttled up to get a thorough set. If the anchor continues to drag, or sets after having dragged too far, it should be retrieved and moved back to the desired position (or another location chosen.) 
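Since the text above defines scope as the ratio of rode length to water depth, the amount of rode to pay out follows directly. The small Python helper below is a minimal illustrative sketch; the function name and the optional bow-height term (a common rule-of-thumb refinement) are assumptions, not something specified in the article.

```python
def rode_length(depth_m, scope=5.0, bow_height_m=0.0):
    """Rode to pay out for a given scope (rode length / water depth).
    Adding the bow height above the waterline is an optional, commonly used
    refinement -- an assumption here, not part of the definition above."""
    return scope * (depth_m + bow_height_m)

# Anchoring in 6 m of water at a 5:1 scope
print(rode_length(6))                      # 30.0 m of rode
print(rode_length(6, bow_height_m=1.5))    # 37.5 m if bow height is included
```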
Using an anchor weight, kellet or sentinel Lowering a concentrated, heavy weight down the anchor line – rope or chain – directly in front of the bow to the seabed behaves like a heavy chain rode and lowers the angle of pull on the anchor. If the weight is suspended off the seabed it acts as a spring or shock absorber to dampen the sudden actions that are normally transmitted to the anchor and can cause it to dislodge and drag. In light conditions, a kellet reduces the swing of the vessel considerably. In heavier conditions these effects disappear as the rode becomes straightened and the weight ineffective. Known as an "anchor chum weight" or "angel" in the UK. Forked moor Using two anchors set approximately 45° apart, or wider angles up to 90°, from the bow is a strong mooring for facing into strong winds. To set anchors in this way, first one anchor is set in the normal fashion. Then, taking in on the first cable as the boat is motored into the wind and letting slack while drifting back, a second anchor is set approximately a half-scope away from the first on a line perpendicular to the wind. After this second anchor is set, the scope on the first is taken up until the vessel is lying between the two anchors and the load is taken equally on each cable. This moor also to some degree limits the range of a vessel's swing to a narrower oval. Care should be taken that other vessels do not swing down on the boat due to the limited swing range. Bow and stern (Not to be mistaken with the Bahamian moor, below.) In the bow and stern technique, an anchor is set off each the bow and the stern, which can severely limit a vessel's swing range and also align it to steady wind, current or wave conditions. One method of accomplishing this moor is to set a bow anchor normally, then drop back to the limit of the bow cable (or to double the desired scope, e.g. 8:1 if the eventual scope should be 4:1, 10:1 if the eventual scope should be 5:1, etc.) to lower a stern anchor. By taking up on the bow cable the stern anchor can be set. After both anchors are set, tension is taken up on both cables to limit the swing or to align the vessel. Bahamian moor Similar to the above, a Bahamian moor is used to sharply limit the swing range of a vessel, but allows it to swing to a current. One of the primary characteristics of this technique is the use of a swivel as follows: the first anchor is set normally, and the vessel drops back to the limit of anchor cable. A second anchor is attached to the end of the anchor cable, and is dropped and set. A swivel is attached to the middle of the anchor cable, and the vessel connected to that. The vessel now swings in the middle of two anchors, which is acceptable in strong reversing currents, but a wind perpendicular to the current may break out the anchors, as they are not aligned for this load. Backing an anchor Also known as tandem anchoring, in this technique two anchors are deployed in line with each other, on the same rode. With the foremost anchor reducing the load on the aft-most, this technique can develop great holding power and may be appropriate in "ultimate storm" circumstances. It does not limit swinging range, and might not be suitable in some circumstances. There are complications, and the technique requires careful preparation and a level of skill and experience above that required for a single anchor. Kedging Kedging or warping is a technique for moving or turning a ship by using a relatively light anchor. 
In yachts, a kedge anchor is an anchor carried in addition to the main, or bower, anchor, and usually stowed aft. Every yacht should carry at least two anchors – the main or bower anchor and a second lighter kedge anchor. It is used occasionally when it is necessary to limit the turning circle as the yacht swings when it is anchored, such as in a narrow river or a deep pool in an otherwise shallow area. Kedge anchors are sometimes used to recover vessels that have run aground. For ships, a kedge may be dropped while a ship is underway, or carried out in a suitable direction by a tender or ship's boat to enable the ship to be winched off if aground or swung into a particular heading, or even to be held steady against a tidal or other stream. Historically, it was of particular relevance to sailing warships that used them to outmaneuver opponents when the wind had dropped but might be used by any vessel in confined, shoal water to place it in a more desirable position, provided she had enough manpower. Club hauling Club hauling is an archaic technique. When a vessel is in a narrow channel or on a lee shore so that there is no room to tack the vessel in a conventional manner, an anchor attached to the lee quarter may be dropped from the lee bow. This is deployed when the vessel is head to wind and has lost headway. As the vessel gathers sternway the strain on the cable pivots the vessel around what is now the weather quarter turning the vessel onto the other tack. The anchor is then normally cut away (the ship's momentum prevents recovery without aborting the maneuver). Multiple anchor patterns When it is necessary to moor a ship or floating platform with precise positioning and alignment, such as when drilling the seabed, for some types of salvage work, and for some types of diving operation, several anchors are set in a pattern which allows the vessel to be positioned by shortening and lengthening the scope of the anchors, and adjusting the tension on the rodes. The anchors are usually laid in prearranged positions by an anchor tender, and the moored vessel uses its own winches to adjust position and tension. Similar arrangements are used for some types of single buoy moorings, like the catenary anchor leg mooring (CALM) used for loading and unloading liquid cargoes. Weighing anchor Since all anchors that embed themselves in the bottom require the strain to be along the seabed, anchors can be broken out of the bottom by shortening the rope until the vessel is directly above the anchor; at this point the anchor chain is "up and down", in naval parlance. If necessary, motoring slowly around the location of the anchor also helps dislodge it. Anchors are sometimes fitted with a trip line attached to the crown, by which they can be unhooked from underwater hazards. The term aweigh describes an anchor when it is hanging on the rope and not resting on the bottom. This is linked to the term to weigh anchor, meaning to lift the anchor from the sea bed, allowing the ship or boat to move. An anchor is described as aweigh when it has been broken out of the bottom and is being hauled up to be stowed. Aweigh should not be confused with under way, which describes a vessel that is not moored to a dock or anchored, whether or not the vessel is moving through the water. Aweigh is also often confused with away, which is incorrect. History Evolution of the anchor The earliest anchors were probably rocks, and many rock anchors have been found dating from at least the Bronze Age. 
Pre-European Māori waka (canoes) used one or more hollowed stones, tied with flax ropes, as anchors. Many modern moorings still rely on a large rock as the primary element of their design. However, using pure weight to resist the forces of a storm works well only as a permanent mooring; a large enough rock would be nearly impossible to move to a new location. The ancient Greeks used baskets of stones, large sacks filled with sand, and wooden logs filled with lead. According to Apollonius Rhodius and Stephen of Byzantium, anchors were formed of stone, and Athenaeus states that they were also sometimes made of wood. Such anchors held the vessel merely by their weight and by their friction along the bottom. Fluked anchors Iron was afterwards introduced for the construction of anchors, and an improvement was made by forming them with teeth, or "flukes", to fasten themselves into the bottom. This is the iconic anchor shape most familiar to non-sailors. This form has been used since antiquity. The Roman Nemi ships of the 1st century AD used this form. The Viking Ladby ship (probably 10th century) used a fluked anchor of this type, made of iron, which would have had a wooden stock mounted perpendicular to the shank and flukes to make the flukes contact the bottom at a suitable angle to hook or penetrate. Admiralty anchor The Admiralty Pattern anchor, or simply "Admiralty", also known as a "Fisherman", consists of a central shank with a ring or shackle for attaching the rode (the rope, chain, or cable connecting the ship and the anchor). At the other end of the shank there are two arms, carrying the flukes, while the stock is mounted to the shackle end, at ninety degrees to the arms. When the anchor lands on the bottom, it generally falls over with the arms parallel to the seabed. As a strain comes onto the rope, the stock digs into the bottom, canting the anchor until one of the flukes catches and digs into the bottom. The Admiralty Anchor is an entirely independent reinvention of a classical design, as seen in one of the Nemi ship anchors. This basic design remained unchanged for centuries, with the most significant changes being to the overall proportions, and a move from stocks made of wood to iron stocks in the late 1830s and early 1840s. Since one fluke always protrudes up from the set anchor, there is a great tendency of the rode to foul the anchor as the vessel swings due to wind or current shifts. When this happens, the anchor may be pulled out of the bottom, and in some cases may need to be hauled up to be re-set. In the mid-19th century, numerous modifications were attempted to alleviate these problems, as well as improve holding power, including one-armed mooring anchors. The most successful of these patent anchors, the Trotman Anchor, introduced a pivot at the centre of the crown where the arms join the shank, allowing the "idle" upper arm to fold against the shank. When deployed the lower arm may fold against the shank tilting the tip of the fluke upwards, so each fluke has a tripping palm at its base, to hook on the bottom as the folded arm drags along the seabed, which unfolds the downward oriented arm until the tip of the fluke can engage the bottom. Handling and storage of these anchors requires special equipment and procedures. Once the anchor is hauled up to the hawsepipe, the ring end is hoisted up to the end of a timber projecting from the bow known as the cathead. The crown of the anchor is then hauled up with a heavy tackle until one fluke can be hooked over the rail. 
This is known as "catting and fishing" the anchor. Before dropping the anchor, the fishing process is reversed, and the anchor is dropped from the end of the cathead. Stockless anchor The stockless anchor, patented in England in 1821, represented the first significant departure in anchor design in centuries. Although their holding-power-to-weight ratio is significantly lower than admiralty pattern anchors, their ease of handling and stowage aboard large ships led to almost universal adoption. In contrast to the elaborate stowage procedures for earlier anchors, stockless anchors are simply hauled up until they rest with the shank inside the hawsepipes, and the flukes against the hull (or inside a recess in the hull called the anchor box). While there are numerous variations, stockless anchors consist of a set of heavy flukes connected by a pivot or ball and socket joint to a shank. Cast into the crown of the anchor is a set of tripping palms, projections that drag on the bottom, forcing the main flukes to dig in. Small boat anchors Until the mid-20th century, anchors for smaller vessels were either scaled-down versions of admiralty anchors, or simple grapnels. As new designs with greater holding-power-to-weight ratios were sought, a great variety of anchor designs have emerged. Many of these designs are still under patent, and other types are best known by their original trademarked names. Grapnel anchor / drag A traditional design, the grapnel is merely a shank (no stock) with four or more tines, also known as a drag. It has a benefit in that, no matter how it reaches the bottom, one or more tines are aimed to set. In coral, or rock, it is often able to set quickly by hooking into the structure, but may be more difficult to retrieve. A grapnel is often quite light, and may have additional uses as a tool to recover gear lost overboard. Its weight also makes it relatively easy to move and carry, however its shape is generally not compact and it may be awkward to stow unless a collapsing model is used. Grapnels rarely have enough fluke area to develop much hold in sand, clay, or mud. It is not unknown for the anchor to foul on its own rode, or to foul the tines with refuse from the bottom, preventing it from digging in. On the other hand, it is quite possible for this anchor to find such a good hook that, without a trip line from the crown, it is impossible to retrieve. Herreshoff anchor Designed by yacht designer L. Francis Herreshoff, this is essentially the same pattern as an admiralty anchor, albeit with small diamond-shaped flukes or palms. The novelty of the design lay in the means by which it could be broken down into three pieces for stowage. In use, it still presents all the issues of the admiralty pattern anchor. Northill anchor Originally designed as a lightweight anchor for seaplanes, this design consists of two plough-like blades mounted to a shank, with a folding stock crossing through the crown of the anchor. CQR plough anchor Many manufacturers produce a plough-type anchor, so-named after its resemblance to an agricultural plough. All such anchors are copied from the original CQR (Coastal Quick Release, or Clyde Quick Release, later rebranded as 'secure' by Lewmar), a 1933 design patented in the UK by mathematician Geoffrey Ingram Taylor. Plough anchors stow conveniently in a roller at the bow, and have been popular with cruising sailors and private boaters. Ploughs can be moderately good in all types of seafloor, though not exceptional in any. 
Contrary to popular belief, the CQR's hinged shank is not to allow the anchor to turn with direction changes rather than breaking out, but actually to prevent the shank's weight from disrupting the fluke's orientation while setting. The hinge can wear out and may trap a sailor's fingers. Some later plough anchors have a rigid shank, such as the Lewmar's "Delta". A plough anchor has a fundamental flaw: like its namesake, the agricultural plough, it digs in but then tends to break out back to the surface. Plough anchors sometimes have difficulty setting at all, and instead skip across the seafloor. By contrast, modern efficient anchors tend to be "scoop" types that dig ever deeper. Delta anchor The Delta anchor was derived from the CQR. It was patented by Philip McCarron, James Stewart, and Gordon Lyall of British marine manufacturer Simpson-Lawrence Ltd in 1992. It was designed as an advance over the anchors used for floating systems such as oil rigs. It retains the weighted tip of the CQR but has a much higher fluke area to weight ratio than its predecessor. The designers also eliminated the sometimes troublesome hinge. It is a plough anchor with a rigid, arched shank. It is described as self-launching because it can be dropped from a bow roller simply by paying out the rode, without manual assistance. This is an oft copied design with the European Brake and Australian Sarca Excel being two of the more notable ones. Although it is a plough type anchor, it sets and holds reasonably well in hard bottoms. Danforth anchor American Richard Danforth invented the Danforth Anchor in the 1940s for use aboard landing craft. It uses a stock at the crown to which two large flat triangular flukes are attached. The stock is hinged so the flukes can orient toward the bottom (and on some designs may be adjusted for an optimal angle depending on the bottom type). Tripping palms at the crown act to tip the flukes into the seabed. The design is a burying variety, and once well set can develop high resistance. Its lightweight and compact flat design make it easy to retrieve and relatively easy to store; some anchor rollers and hawsepipes can accommodate a fluke-style anchor. A Danforth does not usually penetrate or hold in gravel or weeds. In boulders and coral it may hold by acting as a hook. If there is much current, or if the vessel is moving while dropping the anchor, it may "kite" or "skate" over the bottom due to the large fluke area acting as a sail or wing. The FOB HP anchor designed in Brittany in the 1970s is a Danforth variant designed to give increased holding through its use of rounded flukes setting at a 30° angle. The Fortress is an American aluminum alloy Danforth variant that can be disassembled for storage and it features an adjustable 32° and 45° shank/fluke angle to improve holding capability in common sea bottoms such as hard sand and soft mud. This anchor performed well in a 1989 US Naval Sea Systems Command (NAVSEA) test and in an August 2014 holding power test that was conducted in the soft mud bottoms of the Chesapeake Bay. Bruce or claw anchor This claw-shaped anchor was designed by Peter Bruce from Scotland in the 1970s. Bruce gained his early reputation from the production of large-scale commercial anchors for ships and fixed installations such as oil rigs. It was later scaled down for small boats, and copies of this popular design abound. 
The Bruce and its copies, known generically as "claw type anchors", have been adopted on smaller boats (partly because they stow easily on a bow roller) but they are most effective in larger sizes. Claw anchors are quite popular on charter fleets as they have a high chance to set on the first try in many bottoms. They have the reputation of not breaking out with tide or wind changes, instead slowly turning in the bottom to align with the force. Bruce anchors can have difficulty penetrating weedy bottoms and grass. They offer a fairly low holding-power-to-weight ratio and generally have to be oversized to compete with newer types. Scoop type anchors Three time circumnavigator German Rolf Kaczirek invented the Bügel Anker in the 1980s. Kaczirek wanted an anchor that was self-righting without necessitating a ballasted tip. Instead, he added a roll bar and switched out the plough share for a flat blade design. As none of the innovations of this anchor were patented, copies of it abound. Alain Poiraud of France introduced the scoop type anchor in 1996. Similar in design to the Bügel anchor, Poiraud's design features a concave fluke shaped like the blade of a shovel, with a shank attached parallel to the fluke, and the load applied toward the digging end. It is designed to dig into the bottom like a shovel, and dig deeper as more pressure is applied. The common challenge with all the scoop type anchors is that they set so well, they can be difficult to weigh. Bügelanker, or Wasi: This German-designed bow anchor has a sharp tip for penetrating weed, and features a roll-bar that allows the correct setting attitude to be achieved without the need for extra weight to be inserted into the tip. Spade: This is a French design that has proven successful since 1996. It features a demountable shank (hollow in some instances) and the choice of galvanized steel, stainless steel, or aluminium construction, which means a lighter and more easily stowable anchor. The geometry also makes this anchor self stowing on a single roller. The Spade anchor is the anchor of choice for Rubicon 3, one of Europe's largest adventure sailing companies Rocna: This New Zealand spade design, available in galvanised or stainless steel, has been produced since 2004. It has a roll-bar (similar to that of the Bügel), a large spade-like fluke area, and a sharp toe for penetrating weed and grass. The Rocna sets quickly and holds well. Mantus: This is claimed to be a fast setting anchor with high holding power. It is designed as an all round anchor capable of setting even in challenging bottoms such as hard sand/clay bottoms and grass. The shank is made out of a high tensile steel capable of withstanding high loads. It is similar in design to the Rocna but has a larger and wider roll-bar that reduces the risk of fouling and increases the angle of the fluke that results in improved penetration in some bottoms. Ultra: This is an innovative spade design that dispenses with a roll-bar. Made primarily of stainless steel, its main arm is hollow, while the fluke tip has lead within it. It is similar in appearance to the Spade anchor. Vulcan: A recent sibling to the Rocna, this anchor performs similarly but does not have a roll-bar. Instead the Vulcan has patented design features such as the "V-bulb" and the "Roll Palm" that allow it to dig in deeply. The Vulcan was designed primarily for sailors who had difficulties accommodating the roll-bar Rocna on their bow. 
Peter Smith (originator of the Rocna) designed it specifically for larger powerboats. Both Vulcans and Rocnas are available in galvanised steel, or in stainless steel. The Vulcan is similar in appearance to the Spade anchor. Knox Anchor: This is produced in Scotland and was invented by Professor John Knox. It has a divided concave large area fluke arrangement and a shank in high tensile steel. A roll bar similar to the Rocna gives fast setting and a holding power of about 40 times anchor weight. Other temporary anchors Mud weight: Consists of a blunt heavy weight, usually cast iron or cast lead, that sinks into the mud and resist lateral movement. It is suitable only for soft silt bottoms and in mild conditions. Sizes range between 5 and 20 kg for small craft. Various designs exist and many are home produced from lead or improvised with heavy objects. This is a commonly used method on the Norfolk Broads in England. Bulwagga: This is a unique design featuring three flukes instead of the usual two. It has performed well in tests by independent sources such as American boating magazine Practical Sailor. Permanent anchors These are used where the vessel is permanently or semi-permanently sited, for example in the case of lightvessels or channel marker buoys. The anchor needs to hold the vessel in all weathers, including the most severe storm, but needs to be lifted only occasionally, at most – for example, only if the vessel is to be towed into port for maintenance. An alternative to using an anchor under these circumstances, especially if the anchor need never be lifted at all, may be to use a pile that is driven into the seabed. Permanent anchors come in a wide range of types and have no standard form. A slab of rock with an iron staple in it to attach a chain to would serve the purpose, as would any dense object of appropriate weight (for instance, an engine block). Modern moorings may be anchored by augers, which look and act like oversized screws drilled into the seabed, or by barbed metal beams pounded in (or even driven in with explosives) like pilings, or by a variety of other non-mass means of getting a grip on the bottom. One method of building a mooring is to use three or more conventional anchors laid out with short lengths of chain attached to a swivel, so no matter which direction the vessel moves, one or more anchors are aligned to resist the force. Mushroom The mushroom anchor is suitable where the seabed is composed of silt or fine sand. It was invented by Robert Stevenson, for use by an 82-ton converted fishing boat, Pharos, which was used as a lightvessel between 1807 and 1810 near to Bell Rock whilst the lighthouse was being constructed. It was equipped with a 1.5-ton example. It is shaped like an inverted mushroom, the head becoming buried in the silt. A counterweight is often provided at the other end of the shank to lay it down before it becomes buried. A mushroom anchor normally sinks in the silt to the point where it has displaced its own weight in bottom material, thus greatly increasing its holding power. These anchors are suitable only for a silt or mud bottom, since they rely upon suction and cohesion of the bottom material, which rocky or coarse sand bottoms lack. The holding power of this anchor is at best about twice its weight until it becomes buried, when it can be as much as ten times its weight. They are available in sizes from about 5 kg up to several tons. Deadweight A deadweight is an anchor that relies solely on being a heavy weight. 
It is usually just a large block of concrete or stone at the end of the chain. Its holding power is defined by its weight underwater (i.e., taking its buoyancy into account) regardless of the type of seabed, although suction can increase this if it becomes buried. Consequently, deadweight anchors are used where mushroom anchors are unsuitable, for example in rock, gravel or coarse sand. An advantage of a deadweight anchor over a mushroom is that if it does drag, it continues to provide its original holding force. The disadvantage of using deadweight anchors in conditions where a mushroom anchor could be used is that it needs to be around ten times the weight of the equivalent mushroom anchor. Auger Auger anchors can be used to anchor permanent moorings, floating docks, fish farms, etc. These anchors, which have one or more slightly pitched self-drilling threads, must be screwed into the seabed with the use of a tool, so require access to the bottom, either at low tide or by use of a diver. Hence they can be difficult to install in deep water without special equipment. Weight for weight, augers have a higher holding than other permanent designs, and so can be cheap and relatively easily installed, although difficult to set in extremely soft mud. High-holding-types There is a need in the oil-and-gas industry to resist large anchoring forces when laying pipelines and for drilling vessels. These anchors are installed and removed using a support tug and pennant/pendant wire. Some examples are the Stevin range supplied by Vrijhof Ankers. Large plate anchors such as the Stevmanta are used for permanent moorings. Anchoring gear The elements of anchoring gear include the anchor, the cable (also called a rode), the method of attaching the two together, the method of attaching the cable to the ship, charts, and a method of learning the depth of the water. Vessels may carry a number of anchors: bower anchors are the main anchors used by a vessel and normally carried at the bow of the vessel. A kedge anchor is a light anchor used for warping an anchor, also known as kedging, or more commonly on yachts for mooring quickly or in benign conditions. A stream anchor, which is usually heavier than a kedge anchor, can be used for kedging or warping in addition to temporary mooring and restraining stern movement in tidal conditions or in waters where vessel movement needs to be restricted, such as rivers and channels. Charts are vital to good anchoring. Knowing the location of potential dangers, as well as being useful in estimating the effects of weather and tide in the anchorage, is essential in choosing a good place to drop the hook. One can get by without referring to charts, but they are an important tool and a part of good anchoring gear, and a skilled mariner would not choose to anchor without them. Anchor rode The anchor rode (or "cable" or "warp") that connects the anchor to the vessel is usually made up of chain, rope, or a combination of those. Large ships use only chain rode. Smaller craft might use a rope/chain combination or an all chain rode. All rodes should have some chain; chain is heavy but it resists abrasion from coral, sharp rocks, or shellfish beds, whereas a rope warp is susceptible to abrasion and can fail in a short time when stretched against an abrasive surface. The weight of the chain also helps keep the direction of pull on the anchor closer to horizontal, which improves holding, and absorbs part of snubbing loads. 
Where weight is not an issue, a heavier chain provides better holding by forming a catenary curve through the water and resting as much of its length on the bottom as would not be lifted by tension of the mooring load. Any changes to the tension are accommodated by additional chain being lifted or settling on the bottom, and this absorbs shock loads until the chain is straight, at which point the full load is taken by the anchor. Additional dissipation of shock loads can be achieved by fitting a snubber between the chain and a bollard or cleat on deck. This also reduces shock loads on the deck fittings, and the vessel usually lies more comfortably and quietly. Being strong and elastic, nylon rope is the most suitable as an anchor rode. Polyester (terylene) is stronger but less elastic than nylon. Both materials sink, so they avoid fouling other craft in crowded anchorages and do not absorb much water. Neither breaks down quickly in sunlight. Elasticity helps absorb shock loading, but causes faster abrasive wear when the rope stretches over an abrasive surface, like a coral bottom or a poorly designed chock. Polypropylene ("polyprop") is not suited to rodes because it floats and is much weaker than nylon, being barely stronger than natural fibres. Some grades of polypropylene break down in sunlight and become hard, weak, and unpleasant to handle. Natural fibres such as manila or hemp are still used in developing nations but absorb a lot of water, are relatively weak, and rot, although they do give good handling grip and are often relatively cheap. Ropes that have little or no elasticity are not suitable as anchor rodes. Elasticity is partly a function of the fibre material and partly of the rope structure. All anchors should have chain at least equal to the boat's length. Some skippers prefer an all chain warp for greater security on coral or sharp edged rock bottoms. The chain should be shackled to the warp through a steel eye or spliced to the chain using a chain splice. The shackle pin should be securely wired or moused. Either galvanized or stainless steel is suitable for eyes and shackles, galvanised steel being the stronger of the two. Some skippers prefer to add a swivel to the rode. There is a school of thought that says these should not be connected to the anchor itself, but should be somewhere in the chain. However, most skippers connect the swivel directly to the anchor. Scope Scope is the ratio of length of the rode to the depth of the water measured from the highest point (usually the anchor roller or bow chock) to the seabed, making allowance for the highest expected tide. When making this ratio large enough, one can ensure that the pull on the anchor is as horizontal as possible. This will make it unlikely for the anchor to break out of the bottom and drag, if it was properly embedded in the seabed to begin with. When deploying chain, a large enough scope leads to a load that is entirely horizontal, whilst an anchor rode made only of rope will never achieve a strictly horizontal pull. In moderate conditions, the ratio of rode to water depth should be 4:1 – where there is sufficient swing-room, a greater scope is always better. In rougher conditions it should be up to twice this with the extra length giving more stretch and a smaller angle to the bottom to resist the anchor breaking out. For example, if the water is deep, and the anchor roller is above the water, then the 'depth' is 9 meters (~30 feet). 
The amount of rode to let out in moderate conditions is thus 36 meters (120 feet). (For this reason, it is important to have a reliable and accurate method of measuring the depth of water.) When using a rope rode, there is a simple way to estimate the scope: while lying back hard on the anchor, the length of rode above the water divided by the height of the bow above the water gives approximately the scope ratio. The basis for this is simple geometry (Intercept Theorem): the ratio between two sides of a triangle stays the same regardless of the size of the triangle as long as the angles do not change. Generally, the rode should be between 5 and 10 times the depth to the seabed, giving a scope of 5:1 or 10:1; the larger the number, the shallower the angle is between the cable and the seafloor, and the less upwards force is acting on the anchor. A 10:1 scope gives the greatest holding power, but also allows for much more drifting about due to the longer amount of cable paid out. Anchoring with sufficient scope and/or heavy chain rode brings the direction of strain close to parallel with the seabed. This is particularly important for light, modern anchors designed to bury in the bottom, where scopes of 5:1 to 7:1 are common, whereas heavy anchors and moorings can use a scope of 3:1, or less. Some modern anchors, such as the Ultra, hold with a scope of 3:1; but, unless the anchorage is crowded, a longer scope always reduces shock stresses. A major disadvantage of the concept of scope is that it does not take into account the fact that a chain hanging between two points (i.e., the bow roller and the point where the chain reaches the seabed) forms a catenary, a non-linear curve (in fact, a cosh() function), whereas scope is a linear measure. As a consequence, in deep water the scope needed will be less, whilst in very shallow water the scope must be chosen much larger to achieve the same pulling angle at the anchor shank. For this reason, the British Admiralty does not use a linear scope formula, but a square root formula instead. A couple of online calculators exist to work out the amount of chain and rope needed to achieve a (possibly nearly) horizontal pull at the anchor shank, and the associated anchor load. As symbol An anchor frequently appears on the flags and coats of arms of institutions involved with the sea, as well as of port cities and seacoast regions and provinces in various countries. There also exists in heraldry the "Anchored Cross", or Mariner's Cross, a stylized cross in the shape of an anchor. The symbol can be used to signify 'fresh start' or 'hope'. The Mariner's Cross is also referred to as St. Clement's Cross, in reference to the way this saint was killed (being tied to an anchor and thrown from a boat into the Black Sea in 102). Anchored crosses are occasionally a feature of coats of arms in which context they are referred to by the heraldic terms anchry or ancre. The Unicode anchor (Miscellaneous Symbols) is represented by: .
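As a numerical illustration of the scope and catenary discussion above, the following sketch compares the simple linear scope rule with the minimum chain length needed for a horizontal pull at the anchor, using the standard catenary relation s = sqrt(d(d + 2T/w)). The load and chain weight used here are illustrative assumptions, not values from the text.

```python
import math

def rode_length_linear(depth_m, scope=4.0):
    """Linear rule of thumb: rode = scope x depth (bow roller to seabed)."""
    return scope * depth_m

def min_chain_for_horizontal_pull(depth_m, horizontal_load_N, chain_weight_N_per_m):
    """Minimum suspended chain length so the pull at the anchor is horizontal,
    from the catenary relation s = sqrt(d * (d + 2*T/w))."""
    a = horizontal_load_N / chain_weight_N_per_m   # catenary parameter T/w
    return math.sqrt(depth_m * (depth_m + 2 * a))

depth = 9.0      # m, as in the example above (bow roller to seabed)
load = 2000.0    # N, assumed wind/current load on the boat
chain_w = 20.0   # N/m, assumed submerged weight of the chain per metre

print(rode_length_linear(depth))                                      # 36.0 m at 4:1
print(round(min_chain_for_horizontal_pull(depth, load, chain_w), 1))  # ~43.4 m
# Because the suspended catenary length grows roughly as the square root of the
# depth for a fixed load, the required scope falls in deep water and rises
# sharply in shallow water, which is the point made above about linear scope
# versus a square-root formula.
```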
Technology
Naval transport
null
1365
https://en.wikipedia.org/wiki/Ammonia
Ammonia
Ammonia is an inorganic chemical compound of nitrogen and hydrogen with the formula . A stable binary hydride and the simplest pnictogen hydride, ammonia is a colourless gas with a distinctive pungent smell. Biologically, it is a common nitrogenous waste, and it contributes significantly to the nutritional needs of terrestrial organisms by serving as a precursor to fertilisers. Around 70% of ammonia produced industrially is used to make fertilisers in various forms and composition, such as urea and diammonium phosphate. Ammonia in pure form is also applied directly into the soil. Ammonia, either directly or indirectly, is also a building block for the synthesis of many chemicals. Ammonia occurs in nature and has been detected in the interstellar medium. In many countries, it is classified as an extremely hazardous substance. Ammonia is produced biologically in a process called nitrogen fixation, but even more is generated industrially by the Haber process. The process helped revolutionize agriculture by providing cheap fertilizers. The global industrial production of ammonia in 2021 was 235 million tonnes. Industrial ammonia is transported by road in tankers, by rail in tank wagons, by sea in gas carriers, or in cylinders. Ammonia boils at at a pressure of one atmosphere, but the liquid can often be handled in the laboratory without external cooling. Household ammonia or ammonium hydroxide is a solution of ammonia in water. Etymology Pliny, in Book XXXI of his Natural History, refers to a salt named hammoniacum, so called because of the proximity of its source to the Temple of Jupiter Amun (Greek Ἄμμων Ammon) in the Roman province of Cyrenaica. However, the description Pliny gives of the salt does not conform to the properties of ammonium chloride. According to Herbert Hoover's commentary in his English translation of Georgius Agricola's De re metallica, it is likely to have been common sea salt. In any case, that salt ultimately gave ammonia and ammonium compounds their name. Natural occurrence (abiological) Traces of ammonia/ammonium are found in rainwater. Ammonium chloride (sal ammoniac), and ammonium sulfate are found in volcanic districts. Crystals of ammonium bicarbonate have been found in Patagonia guano. Ammonia is found throughout the Solar System on Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto, among other places: on smaller, icy bodies such as Pluto, ammonia can act as a geologically important antifreeze, as a mixture of water and ammonia can have a melting point as low as if the ammonia concentration is high enough and thus allow such bodies to retain internal oceans and active geology at a far lower temperature than would be possible with water alone. Substances containing ammonia, or those that are similar to it, are called ammoniacal. Properties Ammonia is a colourless gas with a characteristically pungent smell. It is lighter than air, its density being 0.589 times that of air. It is easily liquefied due to the strong hydrogen bonding between molecules. Gaseous ammonia turns to a colourless liquid, which boils at , and freezes to colourless crystals at . Little data is available at very high temperatures and pressures, but the liquid-vapor critical point occurs at 405 K and 11.35 MPa. Solid The crystal symmetry is cubic, Pearson symbol cP16, space group P213 No.198, lattice constant 0.5125 nm. Liquid Liquid ammonia possesses strong ionising powers reflecting its high ε of 22 at . 
Liquid ammonia has a very high standard enthalpy change of vapourization (23.5 kJ/mol; for comparison, water's is 40.65 kJ/mol, methane 8.19 kJ/mol and phosphine 14.6 kJ/mol) and can be transported in pressurized or refrigerated vessels; however, at standard temperature and pressure liquid anhydrous ammonia will vaporize. Solvent properties Ammonia readily dissolves in water. In an aqueous solution, it can be expelled by boiling. The aqueous solution of ammonia is basic, and may be described as aqueous ammonia or ammonium hydroxide. The maximum concentration of ammonia in water (a saturated solution) has a specific gravity of 0.880 and is often known as '.880 ammonia'. Liquid ammonia is a widely studied nonaqueous ionising solvent. Its most conspicuous property is its ability to dissolve alkali metals to form highly coloured, electrically conductive solutions containing solvated electrons. Apart from these remarkable solutions, much of the chemistry in liquid ammonia can be classified by analogy with related reactions in aqueous solutions. Comparison of the physical properties of with those of water shows has the lower melting point, boiling point, density, viscosity, dielectric constant and electrical conductivity. These differences are attributed at least in part to the weaker hydrogen bonding in . The ionic self-dissociation constant of liquid at −50 °C is about 10−33. Liquid ammonia is an ionising solvent, although less so than water, and dissolves a range of ionic compounds, including many nitrates, nitrites, cyanides, thiocyanates, metal cyclopentadienyl complexes and metal bis(trimethylsilyl)amides. Most ammonium salts are soluble and act as acids in liquid ammonia solutions. The solubility of halide salts increases from fluoride to iodide. A saturated solution of ammonium nitrate (Divers' solution, named after Edward Divers) contains 0.83 mol solute per mole of ammonia and has a vapour pressure of less than 1 bar even at . However, few oxyanion salts with other cations dissolve. Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca, Sr, Ba, Eu and Yb (also Mg using an electrolytic process). At low concentrations (<0.06 mol/L), deep blue solutions are formed: these contain metal cations and solvated electrons, free electrons that are surrounded by a cage of ammonia molecules. These solutions are strong reducing agents. At higher concentrations, the solutions are metallic in appearance and in electrical conductivity. At low temperatures, the two types of solution can coexist as immiscible phases. Redox properties of liquid ammonia The range of thermodynamic stability of liquid ammonia solutions is very narrow, as the potential for oxidation to dinitrogen, E° (), is only +0.04 V. In practice, both oxidation to dinitrogen and reduction to dihydrogen are slow. This is particularly true of reducing solutions: the solutions of the alkali metals mentioned above are stable for several days, slowly decomposing to the metal amide and dihydrogen. Most studies involving liquid ammonia solutions are done in reducing conditions; although oxidation of liquid ammonia is usually slow, there is still a risk of explosion, particularly if transition metal ions are present as possible catalysts. Structure The ammonia molecule has a trigonal pyramidal shape, as predicted by the valence shell electron pair repulsion theory (VSEPR theory) with an experimentally determined bond angle of 106.7°. 
The central nitrogen atom has five outer electrons with an additional electron from each hydrogen atom. This gives a total of eight electrons, or four electron pairs that are arranged tetrahedrally. Three of these electron pairs are used as bond pairs, which leaves one lone pair of electrons. The lone pair repels more strongly than bond pairs; therefore, the bond angle is not 109.5°, as expected for a regular tetrahedral arrangement, but 106.7°. This shape gives the molecule a dipole moment and makes it polar. The molecule's polarity, and especially its ability to form hydrogen bonds, makes ammonia highly miscible with water. The lone pair makes ammonia a base, a proton acceptor. Ammonia is moderately basic; a 1.0 M aqueous solution has a pH of 11.6, and if a strong acid is added to such a solution until the solution is neutral (), 99.4% of the ammonia molecules are protonated. Temperature and salinity also affect the proportion of ammonium . The latter has the shape of a regular tetrahedron and is isoelectronic with methane. The ammonia molecule readily undergoes nitrogen inversion at room temperature; a useful analogy is an umbrella turning itself inside out in a strong wind. The energy barrier to this inversion is 24.7 kJ/mol, and the resonance frequency is 23.79 GHz, corresponding to microwave radiation of a wavelength of 1.260 cm. The absorption at this frequency was the first microwave spectrum to be observed and was used in the first maser. Amphotericity One of the most characteristic properties of ammonia is its basicity. Ammonia is considered to be a weak base. It combines with acids to form ammonium salts; thus, with hydrochloric acid it forms ammonium chloride (sal ammoniac); with nitric acid, ammonium nitrate, etc. Perfectly dry ammonia gas will not combine with perfectly dry hydrogen chloride gas; moisture is necessary to bring about the reaction. As a demonstration experiment under air with ambient moisture, opened bottles of concentrated ammonia and hydrochloric acid solutions produce a cloud of ammonium chloride, which seems to appear 'out of nothing' as the salt aerosol forms where the two diffusing clouds of reagents meet between the two bottles. The salts produced by the action of ammonia on acids are known as the ammonium salts and all contain the ammonium ion (). Although ammonia is well known as a weak base, it can also act as an extremely weak acid. It is a protic substance and is capable of formation of amides (which contain the ion). For example, lithium dissolves in liquid ammonia to give a blue solution (solvated electron) of lithium amide: Self-dissociation Like water, liquid ammonia undergoes molecular autoionisation to form its acid and base conjugates: Ammonia often functions as a weak base, so it has some buffering ability. Shifts in pH will cause more or fewer ammonium cations () and amide anions () to be present in solution. At standard pressure and temperature, K = = 10−30. Combustion Ammonia does not burn readily or sustain combustion, except under narrow fuel-to-air mixtures of 15–28% ammonia by volume in air. When mixed with oxygen, it burns with a pale yellowish-green flame. Ignition occurs when chlorine is passed into ammonia, forming nitrogen and hydrogen chloride; if chlorine is present in excess, then the highly explosive nitrogen trichloride () is also formed. 
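Before the combustion data below, three of the numerical figures quoted above can be reproduced in a few lines. The molar mass of dry air and the base-ionisation constant Kb used here are assumed handbook values rather than numbers from the text.

```python
import math

# 1. Gas density relative to air: for ideal gases the ratio is the ratio of
#    molar masses (28.96 g/mol for dry air is an assumed standard value).
M_NH3, M_air = 17.031, 28.96
print(round(M_NH3 / M_air, 3))   # ~0.588, close to the 0.589 quoted above
                                 # (the small difference reflects non-ideal behaviour)

# 2. pH of a 1.0 M aqueous ammonia solution, taking the textbook base constant
#    Kb ~ 1.8e-5 at 25 degC (an assumed handbook value, not given in the text).
Kb, C = 1.8e-5, 1.0
pOH = -math.log10(math.sqrt(Kb * C))   # weak-base approximation
print(round(14 - pOH, 1))              # ~11.6, as quoted above

# 3. Wavelength of the 23.79 GHz inversion resonance: lambda = c / f.
c, f = 2.998e8, 23.79e9                # m/s, Hz
print(round(100 * c / f, 2))           # ~1.26 cm, the value quoted above
```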
The combustion of ammonia to form nitrogen and water is exothermic: , ΔH°r = −1267.20 kJ (or −316.8 kJ/mol if expressed per mol of ) The standard enthalpy change of combustion, ΔH°c, expressed per mole of ammonia and with condensation of the water formed, is −382.81 kJ/mol. Dinitrogen is the thermodynamic product of combustion: all nitrogen oxides are unstable with respect to and , which is the principle behind the catalytic converter. Nitrogen oxides can be formed as kinetic products in the presence of appropriate catalysts, a reaction of great industrial importance in the production of nitric acid: A subsequent reaction leads to : The combustion of ammonia in air is very difficult in the absence of a catalyst (such as platinum gauze or warm chromium(III) oxide), due to the relatively low heat of combustion, a lower laminar burning velocity, high auto-ignition temperature, high heat of vapourization, and a narrow flammability range. However, recent studies have shown that efficient and stable combustion of ammonia can be achieved using swirl combustors, thereby rekindling research interest in ammonia as a fuel for thermal power production. The flammable range of ammonia in dry air is 15.15–27.35% and in 100% relative humidity air is 15.95–26.55%. For studying the kinetics of ammonia combustion, knowledge of a detailed, reliable reaction mechanism is required, but this has been challenging to obtain. Precursor to organonitrogen compounds Ammonia is a direct or indirect precursor to most manufactured nitrogen-containing compounds. It is the precursor to nitric acid, which is the source for most N-substituted aromatic compounds. Amines can be formed by the reaction of ammonia with alkyl halides or, more commonly, with alcohols: Its ring-opening reaction with ethylene oxide gives ethanolamine, diethanolamine, and triethanolamine. Amides can be prepared by the reaction of ammonia with carboxylic acids and their derivatives. For example, ammonia reacts with formic acid (HCOOH) to yield formamide () when heated. Acyl chlorides are the most reactive, but the ammonia must be present in at least a twofold excess to neutralise the hydrogen chloride formed. Esters and anhydrides also react with ammonia to form amides. Ammonium salts of carboxylic acids can be dehydrated to amides by heating to 150–200 °C as long as no thermally sensitive groups are present. Ammonia is also a precursor to amino acids (via the Strecker amino-acid synthesis) and to acrylonitrile (in the Sohio process). Other organonitrogen compounds include alprazolam, ethanolamine, ethyl carbamate and hexamethylenetetramine. Precursor to inorganic nitrogenous compounds Nitric acid is generated via the Ostwald process by oxidation of ammonia with air over a platinum catalyst at , ≈9 atm. Nitric oxide and nitrogen dioxide are intermediates in this conversion: Nitric acid is used for the production of fertilisers, explosives, and many organonitrogen compounds. The hydrogen in ammonia is susceptible to replacement by a myriad of substituents. Ammonia gas reacts with metallic sodium to give sodamide, . With chlorine, monochloramine is formed. Pentavalent ammonia, known as λ5-amine or nitrogen pentahydride, decomposes spontaneously into trivalent ammonia (λ3-amine) and hydrogen gas under normal conditions. This substance was once investigated as a possible solid rocket fuel in 1966.
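Returning briefly to the combustion enthalpies quoted above, the two figures are mutually consistent; a minimal check, in which the only number not from the text is the enthalpy of vapourization of water (assumed ≈44.0 kJ/mol at 25 °C):

```python
# Consistency check of the combustion enthalpies quoted above.
# Reaction: 4 NH3 + 3 O2 -> 2 N2 + 6 H2O(g), dH = -1267.20 kJ per 4 mol NH3.
dH_reaction = -1267.20
per_mol_NH3 = dH_reaction / 4
print(round(per_mol_NH3, 1))    # -316.8 kJ/mol, as quoted

# Condensing the water (6 mol H2O per 4 mol NH3 = 1.5 mol per mol NH3) releases
# a further 1.5 * dHvap(H2O); 44.0 kJ/mol is an assumed handbook value at 25 degC.
dHvap_water = 44.0
print(round(per_mol_NH3 - 1.5 * dHvap_water, 1))   # ~-382.8 kJ/mol, matching
                                                   # the quoted -382.81 kJ/mol
```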
Ammonia is also used to make the following compounds: Hydrazine, in the Olin Raschig process and the peroxide process Hydrogen cyanide, in the BMA process and the Andrussow process Hydroxylamine and ammonium carbonate, in the Raschig process Urea, in the Bosch–Meiser urea process and in Wöhler synthesis ammonium perchlorate, ammonium nitrate, and ammonium bicarbonate Ammonia is a ligand forming metal ammine complexes. For historical reasons, ammonia is named ammine in the nomenclature of coordination compounds. One notable ammine complex is cisplatin (, a widely used anticancer drug. Ammine complexes of chromium(III) formed the basis of Alfred Werner's revolutionary theory on the structure of coordination compounds. Werner noted only two isomers (fac- and mer-) of the complex could be formed, and concluded the ligands must be arranged around the metal ion at the vertices of an octahedron. Ammonia forms 1:1 adducts with a variety of Lewis acids such as , phenol, and . Ammonia is a hard base (HSAB theory) and its E & C parameters are EB = 2.31 and CB = 2.04. Its relative donor strength toward a series of acids, versus other Lewis bases, can be illustrated by C-B plots. Detection and determination Ammonia in solution Ammonia and ammonium salts can be readily detected, in very minute traces, by the addition of Nessler's solution, which gives a distinct yellow colouration in the presence of the slightest trace of ammonia or ammonium salts. The amount of ammonia in ammonium salts can be estimated quantitatively by distillation of the salts with sodium (NaOH) or potassium hydroxide (KOH), the ammonia evolved being absorbed in a known volume of standard sulfuric acid and the excess of acid then determined volumetrically; or the ammonia may be absorbed in hydrochloric acid and the ammonium chloride so formed precipitated as ammonium hexachloroplatinate, . Gaseous ammonia Sulfur sticks are burnt to detect small leaks in industrial ammonia refrigeration systems. Larger quantities can be detected by warming the salts with a caustic alkali or with quicklime, when the characteristic smell of ammonia will be at once apparent. Ammonia is an irritant and irritation increases with concentration; the permissible exposure limit is 25 ppm, and lethal above 500 ppm by volume. Higher concentrations are hardly detected by conventional detectors, the type of detector is chosen according to the sensitivity required (e.g. semiconductor, catalytic, electrochemical). Holographic sensors have been proposed for detecting concentrations up to 12.5% in volume. In a laboratorial setting, gaseous ammonia can be detected by using concentrated hydrochloric acid or gaseous hydrogen chloride. A dense white fume (which is ammonium chloride vapor) arises from the reaction between ammonia and HCl(g). Ammoniacal nitrogen (NH3–N) Ammoniacal nitrogen (NH3–N) is a measure commonly used for testing the quantity of ammonium ions, derived naturally from ammonia, and returned to ammonia via organic processes, in water or waste liquids. It is a measure used mainly for quantifying values in waste treatment and water purification systems, as well as a measure of the health of natural and man-made water reserves. It is measured in units of mg/L (milligram per litre). History The ancient Greek historian Herodotus mentioned that there were outcrops of salt in an area of Libya that was inhabited by a people called the 'Ammonians' (now the Siwa oasis in northwestern Egypt, where salt lakes still exist). 
The Greek geographer Strabo also mentioned the salt from this region. However, the ancient authors Dioscorides, Apicius, Arrian, Synesius, and Aëtius of Amida described this salt as forming clear crystals that could be used for cooking and that were essentially rock salt. Hammoniacus sal appears in the writings of Pliny, although it is not known whether the term is equivalent to the more modern sal ammoniac (ammonium chloride). The fermentation of urine by bacteria produces a solution of ammonia; hence fermented urine was used in Classical Antiquity to wash cloth and clothing, to remove hair from hides in preparation for tanning, to serve as a mordant in dying cloth, and to remove rust from iron. It was also used by ancient dentists to wash teeth. In the form of sal ammoniac (نشادر, nushadir), ammonia was important to the Muslim alchemists. It was mentioned in the Book of Stones, likely written in the 9th century and attributed to Jābir ibn Hayyān. It was also important to the European alchemists of the 13th century, being mentioned by Albertus Magnus. It was also used by dyers in the Middle Ages in the form of fermented urine to alter the colour of vegetable dyes. In the 15th century, Basilius Valentinus showed that ammonia could be obtained by the action of alkalis on sal ammoniac. At a later period, when sal ammoniac was obtained by distilling the hooves and horns of oxen and neutralizing the resulting carbonate with hydrochloric acid, the name 'spirit of hartshorn' was applied to ammonia. Gaseous ammonia was first isolated by Joseph Black in 1756 by reacting sal ammoniac (ammonium chloride) with calcined magnesia (magnesium oxide). It was isolated again by Peter Woulfe in 1767, by Carl Wilhelm Scheele in 1770 and by Joseph Priestley in 1773 and was termed by him 'alkaline air'. Eleven years later in 1785, Claude Louis Berthollet ascertained its composition. The production of ammonia from nitrogen in the air (and hydrogen) was invented by Fritz Haber and Robert LeRossignol. The patent was sent in 1909 (USPTO Nr 1,202,995) and awarded in 1916. Later, Carl Bosch developed the industrial method for ammonia production (Haber–Bosch process). It was first used on an industrial scale in Germany during World War I, following the allied blockade that cut off the supply of nitrates from Chile. The ammonia was used to produce explosives to sustain war efforts. The Nobel Prize in Chemistry 1918 was awarded to Fritz Haber "for the synthesis of ammonia from its elements". Before the availability of natural gas, hydrogen as a precursor to ammonia production was produced via the electrolysis of water or using the chloralkali process. With the advent of the steel industry in the 20th century, ammonia became a byproduct of the production of coking coal. Applications Fertiliser In the US , approximately 88% of ammonia was used as fertilisers either as its salts, solutions or anhydrously. When applied to soil, it helps provide increased yields of crops such as maize and wheat. 30% of agricultural nitrogen applied in the US is in the form of anhydrous ammonia, and worldwide, 110 million tonnes are applied each year. Solutions of ammonia ranging from 16% to 25% are used in the fermentation industry as a source of nitrogen for microorganisms and to adjust pH during fermentation. Refrigeration–R717 Because of ammonia's vapourization properties, it is a useful refrigerant. It was commonly used before the popularisation of chlorofluorocarbons (Freons). 
Anhydrous ammonia is widely used in industrial refrigeration applications and hockey rinks because of its high energy efficiency and low cost. It suffers from the disadvantage of toxicity, and requiring corrosion resistant components, which restricts its domestic and small-scale use. Along with its use in modern vapour-compression refrigeration it is used in a mixture along with hydrogen and water in absorption refrigerators. The Kalina cycle, which is of growing importance to geothermal power plants, depends on the wide boiling range of the ammonia–water mixture. Ammonia coolant is also used in the radiators aboard the International Space Station in loops that are used to regulate the internal temperature and enable temperature-dependent experiments. The ammonia is under sufficient pressure to remain liquid throughout the process. Single-phase ammonia cooling systems also serve the power electronics in each pair of solar arrays. The potential importance of ammonia as a refrigerant has increased with the discovery that vented CFCs and HFCs are potent and stable greenhouse gases. Antimicrobial agent for food products As early as in 1895, it was known that ammonia was 'strongly antiseptic ... it requires 1.4 grams per litre to preserve beef tea (broth).' In one study, anhydrous ammonia destroyed 99.999% of zoonotic bacteria in three types of animal feed, but not silage. Anhydrous ammonia is currently used commercially to reduce or eliminate microbial contamination of beef. Lean finely textured beef (popularly known as 'pink slime') in the beef industry is made from fatty beef trimmings (c. 50–70% fat) by removing the fat using heat and centrifugation, then treating it with ammonia to kill E. coli. The process was deemed effective and safe by the US Department of Agriculture based on a study that found that the treatment reduces E. coli to undetectable levels. There have been safety concerns about the process as well as consumer complaints about the taste and smell of ammonia-treated beef. Fuel Ammonia has been used as fuel, and is a proposed alternative to fossil fuels and hydrogen. Being liquid at ambient temperature under its own vapour pressure and having high volumetric and gravimetric energy density, ammonia is considered a suitable carrier for hydrogen, and may be cheaper than direct transport of liquid hydrogen. Compared to hydrogen, ammonia is easier to store. Compared to hydrogen as a fuel, ammonia is much more energy efficient, and could be produced, stored and delivered at a much lower cost than hydrogen, which must be kept compressed or as a cryogenic liquid. The raw energy density of liquid ammonia is 11.5 MJ/L, which is about a third that of diesel. Ammonia can be converted back to hydrogen to be used to power hydrogen fuel cells, or it may be used directly within high-temperature solid oxide direct ammonia fuel cells to provide efficient power sources that do not emit greenhouse gases. Ammonia to hydrogen conversion can be achieved through the sodium amide process or the catalytic decomposition of ammonia using solid catalysts. Ammonia engines or ammonia motors, using ammonia as a working fluid, have been proposed and occasionally used. The principle is similar to that used in a fireless locomotive, but with ammonia as the working fluid, instead of steam or compressed air. Ammonia engines were used experimentally in the 19th century by Goldsworthy Gurney in the UK and the St. 
Charles Avenue Streetcar line in New Orleans in the 1870s and 1880s, and during World War II ammonia was used to power buses in Belgium. Ammonia is sometimes proposed as a practical alternative to fossil fuel for internal combustion engines. However, ammonia cannot be easily used in existing Otto cycle engines because of its very narrow flammability range. Despite this, several tests have been run. Its high octane rating of 120 and low flame temperature allows the use of high compression ratios without a penalty of high production. Since ammonia contains no carbon, its combustion cannot produce carbon dioxide, carbon monoxide, hydrocarbons, or soot. Ammonia production currently creates 1.8% of global emissions. 'Green ammonia' is ammonia produced by using green hydrogen (hydrogen produced by electrolysis with electricity from renewable energy), whereas 'blue ammonia' is ammonia produced using blue hydrogen (hydrogen produced by steam methane reforming (= SMR) where the carbon dioxide has been captured and stored (cfr. carbon capture and storage = CCS). Rocket engines have also been fueled by ammonia. The Reaction Motors XLR99 rocket engine that powered the hypersonic research aircraft used liquid ammonia. Although not as powerful as other fuels, it left no soot in the reusable rocket engine, and its density approximately matches the density of the oxidiser, liquid oxygen, which simplified the aircraft's design. In 2020, Saudi Arabia shipped 40 metric tons of liquid 'blue ammonia' to Japan for use as a fuel. It was produced as a by-product by petrochemical industries, and can be burned without giving off greenhouse gases. Its energy density by volume is nearly double that of liquid hydrogen. If the process of creating it can be scaled up via purely renewable resources, producing green ammonia, it could make a major difference in avoiding climate change. The company ACWA Power and the city of Neom have announced the construction of a green hydrogen and ammonia plant in 2020. Green ammonia is considered as a potential fuel for future container ships. In 2020, the companies DSME and MAN Energy Solutions announced the construction of an ammonia-based ship, DSME plans to commercialize it by 2025. The use of ammonia as a potential alternative fuel for aircraft jet engines is also being explored. Japan intends to implement a plan to develop ammonia co-firing technology that can increase the use of ammonia in power generation, as part of efforts to assist domestic and other Asian utilities to accelerate their transition to carbon neutrality. In October 2021, the first International Conference on Fuel Ammonia (ICFA2021) was held. In June 2022, IHI Corporation succeeded in reducing greenhouse gases by over 99% during combustion of liquid ammonia in a 2,000-kilowatt-class gas turbine achieving truly -free power generation. In July 2022, Quad nations of Japan, the U.S., Australia and India agreed to promote technological development for clean-burning hydrogen and ammonia as fuels at the security grouping's first energy meeting. , however, significant amounts of are produced. Nitrous oxide may also be a problem as it is a "greenhouse gas that is known to possess up to 300 times the Global Warming Potential (GWP) of carbon dioxide". The IEA forecasts that ammonia will meet approximately 45% of shipping fuel demands by 2050. At high temperature and in the presence of a suitable catalyst ammonia decomposes into its constituent elements. 
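As a rough numerical check of the fuel comparisons above (and before the decomposition energetics below), the sketch uses assumed typical values for diesel's volumetric energy density and for the densities of liquid ammonia and liquid hydrogen; only ammonia's 11.5 MJ/L is taken from the text.

```python
# Rough checks of the ammonia-as-fuel comparisons discussed above.
# Assumed typical values (not from the text): diesel ~36 MJ/L,
# liquid ammonia ~0.682 kg/L at -33 degC, liquid hydrogen ~0.071 kg/L.
nh3_MJ_per_L, diesel_MJ_per_L = 11.5, 36.0
print(round(nh3_MJ_per_L / diesel_MJ_per_L, 2))   # ~0.32, i.e. "about a third"

M_NH3, M_H = 17.031, 1.008
h_fraction = 3 * M_H / M_NH3                      # hydrogen mass fraction of NH3
print(round(100 * h_fraction, 1))                 # ~17.8 wt% hydrogen

rho_NH3, rho_LH2 = 0.682, 0.071                   # kg/L (assumed)
print(round(rho_NH3 * h_fraction, 3))             # ~0.121 kg of H2 per litre of
# liquid ammonia, versus ~0.071 kg/L for liquid hydrogen itself, which is why
# ammonia is discussed above as a carrier for hydrogen.
```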
Decomposition of ammonia is a slightly endothermic process requiring 23 kJ/mol (5.5 kcal/mol) of ammonia, and yields hydrogen and nitrogen gas. Other Remediation of gaseous emissions Ammonia is used to scrub from the burning of fossil fuels, and the resulting product is converted to ammonium sulfate for use as fertiliser. Ammonia neutralises the nitrogen oxide () pollutants emitted by diesel engines. This technology, called SCR (selective catalytic reduction), relies on a vanadia-based catalyst. Ammonia may be used to mitigate gaseous spills of phosgene. Stimulant Ammonia, as the vapour released by smelling salts, has found significant use as a respiratory stimulant. Ammonia is commonly used in the illegal manufacture of methamphetamine through a Birch reduction. The Birch method of making methamphetamine is dangerous because the alkali metal and liquid ammonia are both extremely reactive, and the temperature of liquid ammonia makes it susceptible to explosive boiling when reactants are added. Textile Liquid ammonia is used for treatment of cotton materials, giving properties like mercerisation, using alkalis. In particular, it is used for prewashing of wool. Lifting gas At standard temperature and pressure, ammonia is less dense than atmosphere and has approximately 45–48% of the lifting power of hydrogen or helium. Ammonia has sometimes been used to fill balloons as a lifting gas. Because of its relatively high boiling point (compared to helium and hydrogen), ammonia could potentially be refrigerated and liquefied aboard an airship to reduce lift and add ballast (and returned to a gas to add lift and reduce ballast). Fuming Ammonia has been used to darken quartersawn white oak in Arts & Crafts and Mission-style furniture. Ammonia fumes react with the natural tannins in the wood and cause it to change colour. Safety The US Occupational Safety and Health Administration (OSHA) has set a 15-minute exposure limit for gaseous ammonia of 35 ppm by volume in the environmental air and an 8-hour exposure limit of 25 ppm by volume. The National Institute for Occupational Safety and Health (NIOSH) recently reduced the IDLH (Immediately Dangerous to Health or Life, the level to which a healthy worker can be exposed for 30 minutes without suffering irreversible health effects) from 500 ppm to 300 ppm based on recent more conservative interpretations of original research in 1943. The 1 hour IDLH limit is still 500 ppm. Other organisations have varying exposure levels. US Navy Standards [U.S. Bureau of Ships 1962] maximum allowable concentrations (MACs): for continuous exposure (60 days) is 25 ppm; for exposure of 1 hour is 400 ppm. Ammonia vapour has a sharp, irritating, pungent odor that acts as a warning of potentially dangerous exposure. The average odor threshold is 5 ppm, well below any danger or damage. Exposure to very high concentrations of gaseous ammonia can result in lung damage and death. Ammonia is regulated in the US as a non-flammable gas, but it meets the definition of a material that is toxic by inhalation and requires a hazardous safety permit when transported in quantities greater than . Liquid ammonia is dangerous because it is hygroscopic and because it can cause caustic burns. See for more information. Toxicity The toxicity of ammonia solutions does not usually cause problems for humans and other mammals, as a specific mechanism exists to prevent its build-up in the bloodstream. 
Ammonia is converted to carbamoyl phosphate by the enzyme carbamoyl phosphate synthetase, and then enters the urea cycle to be either incorporated into amino acids or excreted in the urine. Fish and amphibians lack this mechanism, as they can usually eliminate ammonia from their bodies by direct excretion. Ammonia even at dilute concentrations is highly toxic to aquatic animals, and for this reason it is classified as "dangerous for the environment". Atmospheric ammonia plays a key role in the formation of fine particulate matter. Ammonia is a constituent of tobacco smoke. Coking wastewater Ammonia is present in coking wastewater streams, as a liquid by-product of the production of coke from coal. In some cases, the ammonia is discharged to the marine environment where it acts as a pollutant. The Whyalla Steelworks in South Australia is one example of a coke-producing facility that discharges ammonia into marine waters. Aquaculture Ammonia toxicity is believed to be a cause of otherwise unexplained losses in fish hatcheries. Excess ammonia may accumulate and cause alteration of metabolism or increases in the body pH of the exposed organism. Tolerance varies among fish species. At lower concentrations, around 0.05 mg/L, un-ionised ammonia is harmful to fish species and can result in poor growth and feed conversion rates, reduced fecundity and fertility and increase stress and susceptibility to bacterial infections and diseases. Exposed to excess ammonia, fish may suffer loss of equilibrium, hyper-excitability, increased respiratory activity and oxygen uptake and increased heart rate. At concentrations exceeding 2.0 mg/L, ammonia causes gill and tissue damage, extreme lethargy, convulsions, coma, and death. Experiments have shown that the lethal concentration for a variety of fish species ranges from 0.2 to 2.0 mg/L. During winter, when reduced feeds are administered to aquaculture stock, ammonia levels can be higher. Lower ambient temperatures reduce the rate of algal photosynthesis so less ammonia is removed by any algae present. Within an aquaculture environment, especially at large scale, there is no fast-acting remedy to elevated ammonia levels. Prevention rather than correction is recommended to reduce harm to farmed fish and in open water systems, the surrounding environment. Storage information Similar to propane, anhydrous ammonia boils below room temperature when at atmospheric pressure. A storage vessel capable of is suitable to contain the liquid. Ammonia is used in numerous different industrial applications requiring carbon or stainless steel storage vessels. Ammonia with at least 0.2% by weight water content is not corrosive to carbon steel. carbon steel construction storage tanks with 0.2% by weight or more of water could last more than 50 years in service. Experts warn that ammonium compounds not be allowed to come in contact with bases (unless in an intended and contained reaction), as dangerous quantities of ammonia gas could be released. Laboratory The hazards of ammonia solutions depend on the concentration: 'dilute' ammonia solutions are usually 5–10% by weight (< 5.62 mol/L); 'concentrated' solutions are usually prepared at >25% by weight. A 25% (by weight) solution has a density of 0.907 g/cm3, and a solution that has a lower density will be more concentrated. The European Union classification of ammonia solutions is given in the table. 
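The weight-percent figures above can be related to molar concentration with a short calculation; the 25% density is the 0.907 g/cm3 given in the text, while the density used for a 10% solution (≈0.957 g/cm3) is an assumed handbook value.

```python
M_NH3 = 17.031  # g/mol

def molarity(wt_fraction, density_g_per_cm3):
    """mol/L of NH3 in an aqueous solution of given mass fraction and density."""
    grams_per_litre = density_g_per_cm3 * 1000 * wt_fraction
    return grams_per_litre / M_NH3

# 'Concentrated' 25 wt% solution, density 0.907 g/cm3 (from the text):
print(round(molarity(0.25, 0.907), 1))   # ~13.3 mol/L

# 'Dilute' 10 wt% solution; 0.957 g/cm3 is an assumed handbook density:
print(round(molarity(0.10, 0.957), 2))   # ~5.62 mol/L, the upper bound quoted above
```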
The ammonia vapour from concentrated ammonia solutions is severely irritating to the eyes and the respiratory tract, and experts warn that these solutions only be handled in a fume hood. Saturated ('0.880'–see ) solutions can develop a significant pressure inside a closed bottle in warm weather, and experts also warn that the bottle be opened with care. This is not usually a problem for 25% ('0.900') solutions. Experts warn that ammonia solutions not be mixed with halogens, as toxic and/or explosive products are formed. Experts also warn that prolonged contact of ammonia solutions with silver, mercury or iodide salts can also lead to explosive products: such mixtures are often formed in qualitative inorganic analysis, and that it needs to be lightly acidified but not concentrated (<6% w/v) before disposal once the test is completed. Laboratory use of anhydrous ammonia (gas or liquid) Anhydrous ammonia is classified as toxic (T) and dangerous for the environment (N). The gas is flammable (autoignition temperature: 651 °C) and can form explosive mixtures with air (16–25%). The permissible exposure limit (PEL) in the United States is 50 ppm (35 mg/m3), while the IDLH concentration is estimated at 300 ppm. Repeated exposure to ammonia lowers the sensitivity to the smell of the gas: normally the odour is detectable at concentrations of less than 50 ppm, but desensitised individuals may not detect it even at concentrations of 100 ppm. Anhydrous ammonia corrodes copper- and zinc-containing alloys, which makes brass fittings not appropriate for handling the gas. Liquid ammonia can also attack rubber and certain plastics. Ammonia reacts violently with the halogens. Nitrogen triiodide, a primary high explosive, is formed when ammonia comes in contact with iodine. Ammonia causes the explosive polymerisation of ethylene oxide. It also forms explosive fulminating compounds with compounds of gold, silver, mercury, germanium or tellurium, and with stibine. Violent reactions have also been reported with acetaldehyde, hypochlorite solutions, potassium ferricyanide and peroxides. Production Ammonia has one of the highest rates of production of any inorganic chemical. Production is sometimes expressed in terms of 'fixed nitrogen'. Global production was estimated as being 160 million tonnes in 2020 (147 tons of fixed nitrogen). China accounted for 26.5% of that, followed by Russia at 11.0%, the United States at 9.5%, and India at 8.3%. Before the start of World War I, most ammonia was obtained by the dry distillation of nitrogenous vegetable and animal waste products, including camel dung, where it was distilled by the reduction of nitrous acid and nitrites with hydrogen; in addition, it was produced by the distillation of coal, and also by the decomposition of ammonium salts by alkaline hydroxides such as quicklime: For small scale laboratory synthesis, one can heat urea and calcium hydroxide or sodium hydroxide: Haber–Bosch Electrochemical The electrochemical synthesis of ammonia involves the reductive formation of lithium nitride, which can be protonated to ammonia, given a proton source. The first use of this chemistry was reported in 1930, where lithium solutions in ethanol were used to produce ammonia at pressures of up to 1000 bar, with ethanol acting as the proton source. 
Beyond simply mediating proton transfer to the nitrogen reduction reaction, ethanol has been found to play a multifaceted role, influencing electrolyte transformations and contributing to the formation of the solid electrolyte interphase, which enhances overall reaction efficiency In 1994, Tsuneto et al. used lithium electrodeposition in tetrahydrofuran to synthesize ammonia at more moderate pressures with reasonable Faradaic efficiency. Subsequent studies have further explored the ethanol–tetrahydrofuran system for electrochemical ammonia synthesis. In 2020, a solvent-agnostic gas diffusion electrode was shown to improve nitrogen transport to the reactive lithium. production rates of up to and Faradaic efficiencies of up to 47.5 ± 4% at ambient temperature and 1 bar pressure were achieved. In 2021, it was demonstrated that ethanol could be replaced with a tetraalkyl phosphonium salt. The study observed production rates of at 69 ± 1% Faradaic efficiency experiments under 0.5 bar hydrogen and 19.5 bar nitrogen partial pressure at ambient temperature. Technology based on this electrochemistry is being developed for commercial fertiliser and fuel production. In 2022, ammonia was produced via the lithium mediated process in a continuous-flow electrolyzer also demonstrating the hydrogen gas as proton source. The study synthesized ammonia at 61 ± 1% Faradaic efficiency at a current density of −6 mA/cm2 at 1 bar and room temperature. Biochemistry and medicine Ammonia is essential for life. For example, it is required for the formation of amino acids and nucleic acids, fundamental building blocks of life. Ammonia is however quite toxic. Nature thus uses carriers for ammonia. Within a cell, glutamate serves this role. In the bloodstream, glutamine is a source of ammonia. Ethanolamine, required for cell membranes, is the substrate for ethanolamine ammonia-lyase, which produces ammonia: Ammonia is both a metabolic waste and a metabolic input throughout the biosphere. It is an important source of nitrogen for living systems. Although atmospheric nitrogen abounds (more than 75%), few living creatures are capable of using atmospheric nitrogen in its diatomic form, gas. Therefore, nitrogen fixation is required for the synthesis of amino acids, which are the building blocks of protein. Some plants rely on ammonia and other nitrogenous wastes incorporated into the soil by decaying matter. Others, such as nitrogen-fixing legumes, benefit from symbiotic relationships with rhizobia bacteria that create ammonia from atmospheric nitrogen. In humans, inhaling ammonia in high concentrations can be fatal. Exposure to ammonia can cause headaches, edema, impaired memory, seizures and coma as it is neurotoxic in nature. Biosynthesis In certain organisms, ammonia is produced from atmospheric nitrogen by enzymes called nitrogenases. The overall process is called nitrogen fixation. Intense effort has been directed toward understanding the mechanism of biological nitrogen fixation. The scientific interest in this problem is motivated by the unusual structure of the active site of the enzyme, which consists of an ensemble. Ammonia is also a metabolic product of amino acid deamination catalyzed by enzymes such as glutamate dehydrogenase 1. Ammonia excretion is common in aquatic animals. In humans, it is quickly converted to urea (by liver), which is much less toxic, particularly less basic. This urea is a major component of the dry weight of urine. 
Most reptiles, birds, insects, and snails excrete uric acid solely as nitrogenous waste. Physiology Ammonia plays a role in both normal and abnormal animal physiology. It is biosynthesised through normal amino acid metabolism and is toxic in high concentrations. The liver converts ammonia to urea through a series of reactions known as the urea cycle. Liver dysfunction, such as that seen in cirrhosis, may lead to elevated amounts of ammonia in the blood (hyperammonemia). Likewise, defects in the enzymes responsible for the urea cycle, such as ornithine transcarbamylase, lead to hyperammonemia. Hyperammonemia contributes to the confusion and coma of hepatic encephalopathy, as well as the neurological disease common in people with urea cycle defects and organic acidurias. Ammonia is important for normal animal acid/base balance. After formation of ammonium from glutamine, α-ketoglutarate may be degraded to produce two bicarbonate ions, which are then available as buffers for dietary acids. Ammonium is excreted in the urine, resulting in net acid loss. Ammonia may itself diffuse across the renal tubules, combine with a hydrogen ion, and thus allow for further acid excretion. Excretion Ammonium ions are a toxic waste product of metabolism in animals. In fish and aquatic invertebrates, it is excreted directly into the water. In mammals, sharks, and amphibians, it is converted in the urea cycle to urea, which is less toxic and can be stored more efficiently. In birds, reptiles, and terrestrial snails, metabolic ammonium is converted into uric acid, which is solid and can therefore be excreted with minimal water loss. Extraterrestrial occurrence Ammonia has been detected in the atmospheres of the giant planets Jupiter, Saturn, Uranus and Neptune, along with other gases such as methane, hydrogen, and helium. The interior of Saturn may include frozen ammonia crystals. It is found on Deimos and Phobos–the two moons of Mars. Interstellar space Ammonia was first detected in interstellar space in 1968, based on microwave emissions from the direction of the galactic core. This was the first polyatomic molecule to be so detected. The sensitivity of the molecule to a broad range of excitations and the ease with which it can be observed in a number of regions has made ammonia one of the most important molecules for studies of molecular clouds. The relative intensity of the ammonia lines can be used to measure the temperature of the emitting medium. The following isotopic species of ammonia have been detected: ,, , , and . The detection of triply deuterated ammonia was considered a surprise as deuterium is relatively scarce. It is thought that the low-temperature conditions allow this molecule to survive and accumulate. Since its interstellar discovery, has proved to be an invaluable spectroscopic tool in the study of the interstellar medium. With a large number of transitions sensitive to a wide range of excitation conditions, has been widely astronomically detected–its detection has been reported in hundreds of journal articles. Listed below is a sample of journal articles that highlights the range of detectors that have been used to identify ammonia. The study of interstellar ammonia has been important to a number of areas of research in the last few decades. Some of these are delineated below and primarily involve using ammonia as an interstellar thermometer. Interstellar formation mechanisms The interstellar abundance for ammonia has been measured for a variety of environments. 
The []/[] ratio has been estimated to range from 10−7 in small dark clouds up to 10−5 in the dense core of the Orion molecular cloud complex. Although a total of 18 total production routes have been proposed, the principal formation mechanism for interstellar is the reaction: The rate constant, k, of this reaction depends on the temperature of the environment, with a value of at 10 K. The rate constant was calculated from the formula . For the primary formation reaction, and . Assuming an abundance of and an electron abundance of 10−7 typical of molecular clouds, the formation will proceed at a rate of in a molecular cloud of total density . All other proposed formation reactions have rate constants of between two and 13 orders of magnitude smaller, making their contribution to the abundance of ammonia relatively insignificant. As an example of the minor contribution other formation reactions play, the reaction: has a rate constant of 2.2. Assuming densities of 105 and []/[] ratio of 10−7, this reaction proceeds at a rate of 2.2, more than three orders of magnitude slower than the primary reaction above. Some of the other possible formation reactions are: Interstellar destruction mechanisms There are 113 total proposed reactions leading to the destruction of . Of these, 39 were tabulated in extensive tables of the chemistry among C, N and O compounds. A review of interstellar ammonia cites the following reactions as the principal dissociation mechanisms: with rate constants of 4.39×10−9 and 2.2×10−9, respectively. The above equations (, ) run at a rate of 8.8×10−9 and 4.4×10−13, respectively. These calculations assumed the given rate constants and abundances of []/[] = 10−5, []/[] = 2×10−5, []/[] = 2×10−9, and total densities of n = 105, typical of cold, dense, molecular clouds. Clearly, between these two primary reactions, equation () is the dominant destruction reaction, with a rate ≈10,000 times faster than equation (). This is due to the relatively high abundance of . Single antenna detections Radio observations of from the Effelsberg 100-m Radio Telescope reveal that the ammonia line is separated into two components–a background ridge and an unresolved core. The background corresponds well with the locations previously detected CO. The 25 m Chilbolton telescope in England detected radio signatures of ammonia in H II regions, HNH2O masers, H–H objects, and other objects associated with star formation. A comparison of emission line widths indicates that turbulent or systematic velocities do not increase in the central cores of molecular clouds. Microwave radiation from ammonia was observed in several galactic objects including W3(OH), Orion A, W43, W51, and five sources in the galactic centre. The high detection rate indicates that this is a common molecule in the interstellar medium and that high-density regions are common in the galaxy. Interferometric studies VLA observations of in seven regions with high-velocity gaseous outflows revealed condensations of less than 0.1 pc in L1551, S140, and Cepheus A. Three individual condensations were detected in Cepheus A, one of them with a highly elongated shape. They may play an important role in creating the bipolar outflow in the region. Extragalactic ammonia was imaged using the VLA in IC 342. The hot gas has temperatures above 70 K, which was inferred from ammonia line ratios and appears to be closely associated with the innermost portions of the nuclear bar seen in CO. 
was also monitored by VLA toward a sample of four galactic ultracompact HII regions: G9.62+0.19, G10.47+0.03, G29.96-0.02, and G31.41+0.31. Based upon temperature and density diagnostics, it is concluded that in general such clumps are probably the sites of massive star formation in an early evolutionary phase prior to the development of an ultracompact HII region. Infrared detections Absorption at 2.97 micrometres due to solid ammonia was recorded from interstellar grains in the Becklin–Neugebauer Object and probably in NGC 2264-IR as well. This detection helped explain the physical shape of previously poorly understood and related ice absorption lines. A spectrum of the disk of Jupiter was obtained from the Kuiper Airborne Observatory, covering the 100 to 300 cm−1 spectral range. Analysis of the spectrum provides information on global mean properties of ammonia gas and an ammonia ice haze. A total of 149 dark cloud positions were surveyed for evidence of 'dense cores' by using the (J,K) = (1,1) rotating inversion line of NH3. In general, the cores are not spherically shaped, with aspect ratios ranging from 1.1 to 4.4. It is also found that cores with stars have broader lines than cores without stars. Ammonia has been detected in the Draco Nebula and in one or possibly two molecular clouds, which are associated with the high-latitude galactic infrared cirrus. The finding is significant because they may represent the birthplaces for the Population I metallicity B-type stars in the galactic halo that could have been born in the galactic disk. Observations of nearby dark clouds By balancing collisional excitation and stimulated emission with spontaneous emission, it is possible to construct a relation between excitation temperature and density. Moreover, since the transitional levels of ammonia can be approximated by a 2-level system at low temperatures, this calculation is fairly simple. This premise can be applied to dark clouds, regions suspected of having extremely low temperatures and possible sites for future star formation. Detections of ammonia in dark clouds show very narrow lines, indicative not only of low temperatures, but also of a low level of inner-cloud turbulence. Line ratio calculations provide a measurement of cloud temperature that is independent of previous CO observations. The ammonia observations were consistent with CO measurements of rotation temperatures of ≈10 K. With this, densities can be determined, and have been calculated to range between 104 and 105 cm−3 in dark clouds. Mapping of gives typical cloud sizes of 0.1 pc and masses near 1 solar mass. These cold, dense cores are the sites of future star formation. UC HII regions Ultra-compact HII regions are among the best tracers of high-mass star formation. The dense material surrounding UCHII regions is likely primarily molecular. Since a complete study of massive star formation necessarily involves the cloud from which the star formed, ammonia is an invaluable tool in understanding this surrounding molecular material. Since this molecular material can be spatially resolved, it is possible to constrain the heating/ionising sources, temperatures, masses, and sizes of the regions. Doppler-shifted velocity components allow for the separation of distinct regions of molecular gas that can trace outflows and hot cores originating from forming stars. Extragalactic detection Ammonia has been detected in external galaxies, and by simultaneously measuring several lines, it is possible to directly measure the gas temperature in these galaxies. 
Line ratios imply that gas temperatures are warm (≈50 K), originating from dense clouds with sizes of tens of parsecs. This is consistent with the picture within our own Milky Way galaxy: hot dense molecular cores form around newly forming stars embedded in larger clouds of molecular material on the scale of several hundred parsecs (giant molecular clouds; GMCs).
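The formation and destruction rates quoted above all come from the same two-body kinetics: a temperature-dependent rate coefficient multiplied by the number densities of the two reactants. The Python sketch below shows that arithmetic for a cold dark cloud. Because the article's specific rate coefficients and ion abundance did not survive extraction, alpha, beta and x_ion below are placeholder values chosen only for illustration; only the electron fractional abundance (10−7) and the total density (105 cm−3) follow the figures used in the text.

# Two-body reaction rate in a molecular cloud: rate = k(T) * n(ion) * n(e-).
# alpha, beta and x_ion are illustrative placeholders, not the published values.
def rate_coefficient(temperature_k, alpha, beta):
    """Kooij-type form k(T) = alpha * (T/300)**beta, in cm^3 s^-1."""
    return alpha * (temperature_k / 300.0) ** beta

T = 10.0       # kinetic temperature of a cold dark cloud, in kelvin
n_total = 1e5  # total gas density in cm^-3, as assumed in the text
x_e = 1e-7     # electron fractional abundance, as assumed in the text
x_ion = 1e-7   # fractional abundance of the reacting ion (placeholder)

k = rate_coefficient(T, alpha=1e-7, beta=-0.5)  # placeholder coefficients
rate = k * (x_ion * n_total) * (x_e * n_total)  # reactions per cm^3 per second
print(f"k = {k:.2e} cm^3 s^-1, rate = {rate:.2e} cm^-3 s^-1")

Comparing two candidate reactions is then just a matter of evaluating this product for each set of coefficients and abundances, which is how the orders-of-magnitude statements above are obtained.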
Physical sciences
Inorganic compounds
null
1366
https://en.wikipedia.org/wiki/Amethyst
Amethyst
Amethyst is a violet variety of quartz. The name comes from the Koine Greek from - , "not" and (Ancient Greek) / (Modern Greek), "intoxicate", a reference to the belief that the stone protected its owner from drunkenness. Ancient Greeks wore amethyst and carved drinking vessels from it in the belief that it would prevent intoxication. Amethyst, a semiprecious stone, is often used in jewelry. Structure Amethyst is a violet variety of quartz () and owes its violet color to irradiation, impurities of iron () and in some cases other transition metals, and the presence of other trace elements, which result in complex crystal lattice substitutions. The irradiation causes the iron ions that replace Si in the lattice to lose an electron and form a color center. Amethyst is a three-dimensional network of tetrahedra where the silicon atoms are in the center and are surrounded by four oxygen atoms located at the vertices of a tetrahedron. This structure is quite rigid and results in quartz's hardness and resistance to weathering. The hardness of the mineral is the same as quartz, thus making it suitable for use in jewelry. Hue and tone Amethyst occurs in primary hues from a light lavender or pale violet to a deep purple. Amethyst may exhibit one or both secondary hues, red and blue. High-quality amethyst can be found in Siberia, Sri Lanka, Brazil, Uruguay, and the Far East. The ideal grade, called "Deep Siberian", has a primary purple hue of around 75–80%, with 15–20% blue and (depending on the light source) red secondary hues. "Rose de France" is defined by its markedly light shade of the purple, reminiscent of a lavender / lilac shade. These pale colors were once considered undesirable, but have recently become popular due to intensive marketing. Green quartz is sometimes called green amethyst; the scientific name is prasiolite. Other names for green quartz are vermarine and lime citrine. Amethyst frequently shows color zoning, with the most intense color typically found at the crystal terminations. One of gem cutters' tasks is to make a finished product with even color. Sometimes, only a thin layer of a natural, uncut amethyst is violet colored, or the color is very uneven. The uncut gem may have only a small portion that is suitable for faceting. The color of amethyst has been demonstrated to result from substitution by irradiation of trivalent iron (Fe3+) for silicon in the structure, in the presence of trace elements of large ionic radius, and to a certain extent, the amethyst color can naturally result from displacement of transition elements even if the iron concentration is low. Natural amethyst is dichroic in reddish violet and bluish violet, but when heated, turns yellow-orange, yellow-brown, or dark brownish and may resemble citrine, but loses its dichroism, unlike genuine citrine. When partially heated, amethyst can result in ametrine. Amethyst can fade in tone if overexposed to light sources, and can be artificially darkened with adequate irradiation. It does not fluoresce under either short-wave or long-wave UV light. Geographic distribution Amethyst is found in many locations around the world. Between 2000 and 2010, the greatest production was from Marabá and Pau d'Arco, Pará, and the Paraná Basin, Rio Grande do Sul, Brazil; Sandoval, Santa Cruz, Bolivia; Artigas, Uruguay; Kalomo, Zambia; and Thunder Bay, Ontario. Lesser amounts are found in many other locations in Africa, Brazil, Spain, Argentina, Russia, Afghanistan, South Korea, Mexico, and the United States. 
Amethyst is produced in abundance in the state of Rio Grande do Sul in Brazil where it occurs in large geodes within volcanic rocks. Many of the hollow agates of southwestern Brazil and Uruguay contain a crop of amethyst crystals in the interior. Artigas, Uruguay and neighboring Brazilian state Rio Grande do Sul are large world producers, with lesser quantities mined in Minas Gerais and Bahia states. The largest amethyst geode found as of 2007 was the Empress of Uruguay, found in Artigas, Uruguay in 2007. It stands at a height of 3.27 meters, lies open along its length, and weighs 2.5 tons. Amethyst is also found and mined in South Korea. The large opencast amethyst vein at Maissau, Lower Austria, was historically important, but is no longer included among significant producers. Much fine amethyst comes from Russia, especially near Mursinka in the Ekaterinburg district, where it occurs in drusy cavities in granitic rocks. Amethyst was historically mined in many localities in south India, though these are no longer significant producers. One of the largest global amethyst producers is Zambia in southern Africa, with an annual production around 1000 tons. Amethyst occurs at many localities in the United States. The most important production is at Four Peaks, Gila and Maricopa Counties, Arizona, and Jackson's Crossroads, Wilkes County, Georgia. Smaller occurrences have been reported in the Red Feather Lakes, near Fort Collins, Colorado; Amethyst Mountain, Texas; Yellowstone National Park; Delaware County, Pennsylvania; Haywood County, North Carolina; Deer Hill and Stow, Maine, and in the Lake Superior region of Minnesota, Wisconsin, and Michigan. Amethyst is relatively common in the Canadian provinces of Ontario and Nova Scotia. The largest amethyst mine in North America is located in Thunder Bay, Ontario. Amethyst is the official state gemstone of South Carolina. Several South Carolina amethysts are on display at the Smithsonian Museum of Natural History. History Amethyst was used as a gemstone by the ancient Egyptians and was largely employed in antiquity for intaglio engraved gems. The ancient Greeks believed amethyst gems could prevent intoxication, while medieval European soldiers wore amethyst amulets as protection in battle in the belief that amethysts heal people and keep them cool-headed. Beads of amethyst were found in Anglo-Saxon graves in England. Anglican bishops wear an episcopal ring often set with an amethyst, an allusion to the description of the Apostles as "not drunk" at Pentecost in Acts 2:15. A large geode, or "amethyst-grotto", from near Santa Cruz in southern Brazil was presented at a 1902 exhibition in Düsseldorf, Germany. Synthetic amethyst Synthetic (laboratory-grown) amethyst is produced by a synthesis method called hydrothermal growth, which grows the crystals inside a high-pressure autoclave. Synthetic amethyst is made to imitate the best quality amethyst. Its chemical and physical properties are the same as those of natural amethyst, and it cannot be differentiated with absolute certainty without advanced gemmological testing (which is often cost-prohibitive). One test based on "Brazil law twinning" (a form of quartz twinning where right- and left-hand quartz structures are combined in a single crystal) can be used to identify most synthetic amethyst rather easily. Synthesizing twinned amethyst is possible, but this type is not available in large quantities in the market. 
Treated amethyst is produced by gamma ray, X-ray, or electron-beam irradiation of clear quartz (rock crystal), which has been first doped with ferric impurities. Exposure to heat partially cancels the irradiation effects and amethyst generally becomes yellow or even green. Much of the citrine, cairngorm, or yellow quartz of jewelry is said to be merely "burnt amethyst". Cultural history Ancient Greece The Greek word may be translated as "not drunken", from Greek , "not" + , "intoxicated". Amethyst was considered to be a strong antidote against drunkenness. In his poem "L'Amethyste, ou les Amours de Bacchus et d'Amethyste" (Amethyst or the loves of Bacchus and Amethyste), the French poet Rémy Belleau (1528–1577) invented a myth in which Bacchus, the god of intoxication, of wine, and grapes was pursuing a maiden named Amethyste, who refused his affections. Amethyste prayed to the gods to remain chaste, a prayer which the chaste goddess Diana answered, transforming her into a white stone. Humbled by Amethyste's desire to remain chaste, Bacchus poured wine over the stone as an offering, dyeing the crystals purple. Variations of the story include that Dionysus had been insulted by a mortal and swore to slay the next mortal who crossed his path, creating fierce tigers to carry out his wrath. The mortal turned out to be a beautiful young woman, Amethystos, who was on her way to pay tribute to Artemis. Her life was spared by Artemis, who transformed the maiden into a statue of pure crystalline quartz to protect her from the brutal claws. Dionysus wept tears of wine in remorse for his action at the sight of the beautiful statue. The god's tears then stained the quartz purple. This myth and its variations are not found in classical sources. However, the goddess Rhea does present Dionysus with an amethyst stone to preserve the wine-drinker's sanity in historical text. Other cultural associations Tibetans consider amethyst sacred to the Buddha and make prayer beads from it. Amethyst is considered the birthstone of February. In the Middle Ages, it was considered a symbol of royalty and used to decorate English regalia. In the Old World, amethyst was considered one of the cardinal gems, in that it was one of the five gemstones considered precious above all others, until large deposits were found in Brazil. Value Until the 18th century, amethyst was included in the cardinal, or most valuable, gemstones (along with diamond, sapphire, ruby, and emerald), but since the discovery of extensive deposits in locations such as Brazil, it has lost most of its value. It is now considered a semiprecious stone. Collectors look for depth of color, possibly with red flashes if cut conventionally. As amethyst is readily available in large structures, the value of the gem is not primarily defined by carat weight. This is different from most gemstones, since the carat weight typically exponentially increases the value of the stone. The biggest factor in the value of amethyst is the color displayed. The highest-grade amethyst (called deep Russian) is exceptionally rare. When one is found, its value is dependent on the demand of collectors; however, the highest-grade sapphires or rubies are still orders of magnitude more expensive than amethyst. Handling and care The most suitable setting for gem amethyst is a prong or a bezel setting. The channel method must be used with caution. Amethyst has a good hardness, and handling it with proper care will prevent any damage to the stone. 
Amethyst is sensitive to strong heat and may lose or change its colour when exposed to prolonged heat or light. Polishing the stone, or cleaning it ultrasonically or with a steam cleaner, must be done with caution.
Physical sciences
Silicate minerals
Earth science
1368
https://en.wikipedia.org/wiki/Assembly%20language
Assembly language
In computer programming, assembly language (alternatively assembler language or symbolic machine code), often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported. The first assembly code in which a language is used to represent machine code instructions is found in Kathleen and Andrew Donald Booth's 1947 work, Coding for A.R.C.. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time. Because assembly depends on the machine code instructions, each assembly language is specific to a particular computer architecture. Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system, as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, much more complicated tasks than assembling. In the first decades of computing, it was commonplace for both systems programming and application programming to take place entirely in assembly language. While still irreplaceable for some purposes, the majority of programming is now conducted in higher-level interpreted and compiled languages. In "No Silver Bullet", Fred Brooks summarised the effects of the switch away from assembly language programming: "Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility." Today, it is typical to use small amounts of assembly language code within larger systems implemented in a higher-level language, for performance reasons or to interact directly with hardware in ways unsupported by the higher-level language. For instance, just under 2% of version 4.9 of the Linux kernel source code is written in assembly; more than 97% is written in C. Assembly language syntax Assembly language uses a mnemonic to represent, e.g., each low-level machine instruction or opcode, each directive, typically also each architectural register, flag, etc. Some of the mnemonics may be built-in and some user-defined. 
Many operations require one or more operands in order to form a complete instruction. Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions for operands. Thus, programmers are freed from tedious repetitive calculations and assembler programs are much more readable than machine code. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate program development, to control the assembly process, and to aid debugging. Some are column oriented, with specific fields in specific columns; this was very common for machines using punched cards in the 1950s and early 1960s. Some assemblers have free-form syntax, with fields separated by delimiters, e.g., punctuation, white space. Some assemblers are hybrid, with, e.g., labels, in a specific column and other fields separated by delimiters; this became more common than column-oriented syntax in the 1960s. Terminology A macro assembler is an assembler that includes a macroinstruction facility so that (parameterized) assembly language text can be represented by a name, and that name can be used to insert the expanded text into other code. Open code refers to any assembler input outside of a macro definition. A cross assembler (see also cross compiler) is an assembler that is run on a computer or operating system (the host system) of a different type from the system on which the resulting code is to run (the target system). Cross-assembling facilitates the development of programs for systems that do not have the resources to support software development, such as an embedded system or a microcontroller. In such a case, the resulting object code must be transferred to the target system, via read-only memory (ROM, EPROM, etc.), a programmer (when the read-only memory is integrated in the device, as in microcontrollers), or a data link using either an exact bit-by-bit copy of the object code or a text-based representation of that code (such as Intel hex or Motorola S-record). A high-level assembler is a program that provides language abstractions more often associated with high-level languages, such as advanced control structures (IF/THEN/ELSE, DO CASE, etc.) and high-level abstract data types, including structures/records, unions, classes, and sets. A microassembler is a program that helps prepare a microprogram to control the low level operation of a computer. A meta-assembler is "a program that accepts the syntactic and semantic description of an assembly language, and generates an assembler for that language", or that accepts an assembler source file along with such a description and assembles the source file in accordance with that description. "Meta-Symbol" assemblers for the SDS 9 Series and SDS Sigma series of computers are meta-assemblers. Sperry Univac also provided a Meta-Assembler for the UNIVAC 1100/2200 series. inline assembler (or embedded assembler) is assembler code contained within a high-level language program. This is most often used in systems programs which need direct access to the hardware. Key concepts Assembler An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an operation code ("opcode") as well as other control bits and data. 
The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline, instead of called subroutines. Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. Called jump-sizing, most of them are able to perform jump-instruction replacements (long jumps replaced by short or relative jumps) in any number of passes, on request. Others may even do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as possible. Assemblers have been available since the 1950s, as the first step above machine language and before high-level programming languages such as Fortran, Algol, COBOL and Lisp. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better-known examples. There may be several assemblers with different syntax for a particular CPU or instruction set architecture. For instance, an instruction to add memory data to a register in a x86-family processor might be add eax,[ebx], in original Intel syntax, whereas this would be written addl (%ebx),%eax in the AT&T syntax used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations (such as FASM-syntax, TASM-syntax, ideal mode, etc., in the special case of x86 assembly programming). Number of passes There are two types of assemblers based on how many passes through the source are needed (how many times the assembler reads the source) to produce the object file. One-pass assemblers process the source code once. For symbols used before they are defined, the assembler will emit "errata" after the eventual definition, telling the linker or the loader to patch the locations where the as yet undefined symbols had been used. Multi-pass assemblers create a table with all symbols and their values in the first passes, then use the table in later passes to generate code. In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more "no-operation" instructions in a later pass or the errata. In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target. 
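As a concrete illustration of the multi-pass scheme described above, the following Python sketch builds a symbol table on a first pass and emits code only on a second pass, so forward references resolve without errata. The toy instruction set (NOP, JMP, HLT), its opcodes and its sizes are invented for this example; no real assembler is this small.

# Toy two-pass assembler: pass 1 records label addresses, pass 2 emits bytes.
# The mnemonics, opcodes and instruction sizes here are invented for the sketch.
OPCODES = {"NOP": 0x00, "JMP": 0x10, "HLT": 0xFF}
SIZES = {"NOP": 1, "JMP": 3, "HLT": 1}  # JMP = opcode plus a 2-byte address

def assemble(lines):
    symbols, address = {}, 0
    for line in lines:  # pass 1: only sizes and label addresses matter
        label, sep, rest = line.partition(":")
        if sep:  # the line starts with "label:"
            symbols[label.strip()] = address
            line = rest
        address += SIZES[line.split()[0]]
    code = bytearray()
    for line in lines:  # pass 2: every label is now known, emit the bytes
        line = line.partition(":")[2] or line
        mnemonic, *operands = line.split()
        code.append(OPCODES[mnemonic])
        if mnemonic == "JMP":
            code += symbols[operands[0]].to_bytes(2, "little")  # forward or backward
    return bytes(code)

program = ["JMP fwd",        # forward reference, unknown during pass 1
           "back: NOP",
           "fwd: JMP back",  # backward reference
           "HLT"]
print(assemble(program).hex(" "))  # 10 04 00 00 10 03 00 ff

A one-pass assembler facing the same input would have to emit the JMP to fwd with a placeholder address and fix it up later through errata handed to the linker or loader, exactly as described above.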
The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory (to handle forward references), rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers with much larger memories (especially disc storage), had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process (or the program load if the assembler directly produces executable code) faster. Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2. B ... EQU * ... EQU * ... B High-level assemblers More sophisticated high-level assemblers provide language abstractions such as: High-level procedure/function declarations and invocations Advanced control structures (IF/THEN/ELSE, SWITCH) High-level abstract data types, including structures/records, unions, classes, and sets Sophisticated macro processing (although available on ordinary assemblers since the late 1950s for, e.g., the IBM 700 series and IBM 7000 series, and since the 1960s for IBM System/360 (S/360), amongst other machines) Object-oriented programming features such as classes, objects, abstraction, polymorphism, and inheritance See Language design below for more details. Assembly language A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements (known variously as declarative operations, directives, pseudo-instructions, pseudo-operations and pseudo-ops), comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. Some instructions may be "implied", which means the data upon which the instruction operates is implicitly defined by the instruction itself—such an instruction does not take an operand. The resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed. For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001. 10110000 01100001 This binary computer code can be made more human-readable by expressing it in hexadecimal as follows. B0 61 Here, B0 means "Move a copy of the following value into AL", and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Assembly language for the 8086 family provides the mnemonic MOV (an abbreviation of move) for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember. 
MOV AL, 61h ; Load AL with 97 decimal (61 hex) In some assembly languages (including this one) the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate (a.k.a. direct) addresses. Other assemblers may use separate opcode mnemonics such as L for "move memory to register", ST for "move register to memory", LR for "move register to register", MVI for "move immediate operand to memory", etc. If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data (e.g. the 61h in this example), depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The hexadecimal form of this instruction is: 88 E0 The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded (with three bit-fields) to specify that both operands are registers, the source is AH, and the destination is AL. In a case like this where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant (hexadecimal, decimal, octal, or binary), so only the 88 instruction can be applicable. Assembly languages are always designed so that this sort of lack of ambiguity is universally enforced by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' (equal to decimal ten) would be written as 0Ah or 0AH, not AH, specifically so that it cannot appear to be the name of register AH. (The same rule also prevents ambiguity with the names of registers BH, CH, and DH, as well as with any user-defined symbol that ends with the letter H and otherwise contains only characters that are hexadecimal digits, such as the word "BEACH".) Returning to the original example, while the x86 opcode 10110000 (B0) copies an 8-bit value into the AL register, 10110001 (B1) moves it into CL and 10110010 (B2) does so into DL. Assembly language examples for these follow. MOV AL, 1h ; Load AL with immediate value 1 MOV CL, 2h ; Load CL with immediate value 2 MOV DL, 3h ; Load DL with immediate value 3 The syntax of MOV can also be more complex as the following examples show. MOV EAX, [EBX] ; Move the 4 bytes in memory at the address contained in EBX into EAX MOV [ESI+EAX], CL ; Move the contents of CL into the byte at address ESI+EAX MOV DS, DX ; Move the contents of DX into segment register DS In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which. Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. 
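The byte-level encodings just discussed (B0 plus a register index for MOV r8, imm8, and 88 followed by a ModRM byte for register-to-register byte moves) can be reproduced in a few lines. The Python sketch below is a deliberately minimal illustration covering only these two forms; it is nothing like a complete x86 assembler.

# Minimal encoder for the two MOV forms discussed above:
#   MOV r8, imm8 -> 0xB0 + register index, followed by the immediate byte
#   MOV r8, r8   -> 0x88, then a ModRM byte (mod = 11, reg = source, rm = destination)
REG8 = {"AL": 0, "CL": 1, "DL": 2, "BL": 3, "AH": 4, "CH": 5, "DH": 6, "BH": 7}

def mov8(dest, src):
    if isinstance(src, int):  # MOV r8, imm8
        return bytes([0xB0 + REG8[dest], src & 0xFF])
    modrm = 0b11000000 | (REG8[src] << 3) | REG8[dest]  # register-to-register form
    return bytes([0x88, modrm])

print(mov8("AL", 0x61).hex())  # b061 -- MOV AL, 61h
print(mov8("CL", 0x02).hex())  # b102 -- MOV CL, 2
print(mov8("AL", "AH").hex())  # 88e0 -- MOV AL, AH

Running the sketch reproduces the B0 61 and 88 E0 byte sequences given above, and makes explicit why the assembler, not the programmer, is the right place to keep track of which opcode each operand combination requires.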
Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions (essentially macros) which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences. Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments. Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences. Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation. Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics (on each page of their documentation published in the 1970s and early 1980s, at least), some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. (It is questionable whether such copyrights can be valid, and later CPU companies such as AMD and Cyrix republished Intel's x86/IA-32 instruction mnemonics exactly with neither permission nor legal penalty.) 
It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic (somewhat like English and Pig Latin), there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products. "Hello, world!" on x86 Linux In 32-bit assembly language for Linux on an x86 processor, "Hello, world!" can be printed like this. section .text global _start _start: mov edx,len ; length of string, third argument to write() mov ecx,msg ; address of string, second argument to write() mov ebx,1 ; file descriptor (standard output), first argument to write() mov eax,4 ; system call number for write() int 0x80 ; system call trap mov ebx,0 ; exit code, first argument to exit() mov eax,1 ; system call number for exit() int 0x80 ; system call trap section .data msg db 'Hello, world!', 0xa len equ $ - msg Language design Basic elements There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation (pseudo-op). A typical assembly language consists of 3 types of instruction statements that are used to define program operations: Opcode mnemonics Data definitions Assembly directives Opcode mnemonics and extended mnemonics Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode), and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be immediate (value coded in the instruction itself), registers specified in the instruction or implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use as an extended mnemonic for with a mask of 15 and ("NO OPeration" – do nothing for one step) for with a mask of 0. Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPU's do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. In 8086 CPUs the instruction is used for , with being a pseudo-opcode to encode the instruction . Some disassemblers recognize this and will decode the instruction as . Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics and for and with zero masks. For the SPARC architecture, these are known as synthetic instructions. Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction is recognized to generate followed by . These are sometimes known as pseudo-opcodes. Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn. 
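Stripped of its syntax, the "Hello, world!" listing above asks the Linux kernel to do exactly two things: write() the message to file descriptor 1 and then exit(). For comparison, the same two system calls are reachable from a high-level language; the Python lines below are an illustrative equivalent, not part of the original example, and they use the portable os wrappers rather than issuing int 0x80 directly.

# The assembly listing above performs write(1, msg, len) followed by exit(0).
# The same kernel requests, made through Python's thin os-level wrappers:
import os

msg = b"Hello, world!\n"
os.write(1, msg)  # file descriptor 1 (standard output), as loaded into ebx above
os._exit(0)       # terminate immediately with exit code 0, as loaded into ebx above

The point of the comparison is that the assembly version differs only in doing by hand (choosing registers, loading the system call number, trapping into the kernel) what the runtime of a higher-level language does on the programmer's behalf.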
Data directives There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops. Assembly directives Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler "directing it to perform operations other than assembling instructions". Directives affect how the assembler operates and "may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data. The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values. Symbolic assemblers let programmers associate arbitrary names (labels or symbols) with memory locations and various constants. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination). Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses. Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The "raw" (uncommented) assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made. Macros Many assemblers support predefined macros, and others support programmer-defined (and repeatedly re-definable) macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined its name may be used in place of a mnemonic. 
When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file (including, in some assemblers, expansion of any macros existing in the replacement text). Macros in this sense date to IBM autocoders of the 1950s. Macro assemblers typically have directives to, e.g., define macros, define variables, set variables to the result of an arithmetic, logical or string expression, iterate, conditionally generate code. Some of those directives may be restricted to use within a macro definition, e.g., MEXIT in HLASM, while others may be permitted within open code (outside macro definitions), e.g., AIF and COPY in HLASM. In assembly language, the term "macro" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive typically is used to create short single line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy "programs" by themselves, executed by interpretation by the assembler during assembly. Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher level languages. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters and other similar features. Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a "sort" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4 (1967), which was written in the SNOBOL Implementation Language (SIL), an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time. Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. 
This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's "real time transaction processing" add-ons, Customer Information Control System CICS, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems (CRS) and credit card systems today. It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly time operators instructing the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements. This is because, as was realized in the 1960s, the concept of "macro processing" is independent of the concept of "assembly", the former being in modern terms more word processing, text processing, than generating object code. The concept of macro processing appeared, and appears, in the C programming language, which supports "preprocessor instructions" to set variables, and make conditional tests on their values. Unlike certain previous macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or "go to", the latter allowing programs to loop. Despite the power of macro processing, it fell into disuse in many high level languages (major exceptions being C, C++ and PL/I) while remaining a perennial for assemblers. Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of bugs resulting was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro: foo: macro a load a*b the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the macro expansion of load a-c*b occurs. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters. Support for structured programming Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills (March 1970), and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use). IBM's High Level Assembler Toolkit includes such a macro package. Another design was A-Natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). 
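The parameter-substitution pitfall described above, where the foo macro's load a*b expands to load a-c*b, is easy to reproduce with nothing more than textual replacement. The Python sketch below is a generic illustration rather than the behaviour of any particular macro assembler; the &-prefixed parameter name is borrowed from IBM-style macro syntax purely to keep the substitution from touching unrelated text, and the parenthesised variant shows the conventional fix.

# Macro expansion by plain textual substitution, as described above.
def expand(body, params, args):
    """Replace each formal parameter in the macro body with the caller's text."""
    for name, text in zip(params, args):
        body = body.replace(name, text)
    return body

naive = "load &a*b"    # intended meaning: load (the value passed for &a) times b
fixed = "load (&a)*b"  # defensive macro writer: parenthesise the formal parameter

print(expand(naive, ["&a"], ["a-c"]))  # load a-c*b   -- parses as a minus c*b
print(expand(fixed, ["&a"], ["a-c"]))  # load (a-c)*b -- the intended (a-c) times b

Because the substitution is purely textual, neither the macro processor nor the assembler sees anything wrong with the first expansion; the error only shows up in the program's behaviour, which is exactly why the parenthesising conventions mentioned above exist.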
The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans. There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages. Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package (this code is a complete program): include \masm32\include\masm32rt.inc ; use the Masm32 library .code demomain: REPEAT 20 switch rv(nrandom, 9) ; generate a number between 0 and 8 mov ecx, 7 case 0 print "case 0" case ecx ; in contrast to most other programming languages, print "case 7" ; the Masm32 switch allows "variable cases" case 1 .. 3 .if eax==1 print "case 1" .elseif eax==2 print "case 2" .else print "cases 1 to 3: other" .endif case 4, 6, 8 print "cases 4, 6 or 8" default mov ebx, 19 ; print 20 stars .Repeat print "*" dec ebx .Until Sign? ; loop until the sign flag is set endsw print chr$(13, 10) ENDM exit end demomain Use of assembly language When the stored-program computer was introduced programs were written in machine code, and loaded into the computer from punched paper tape or toggled directly into memory from console switches. Kathleen Booth "is credited with inventing assembly language" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London following consultation by Andrew Booth (later her husband) with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study. In late 1948, the Electronic Delay Storage Automatic Calculator (EDSAC) had an assembler (named "initial orders") integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first "assembler". Reports on the EDSAC introduced the term "assembly" for the process of combining fields into an instruction word. SOAP (Symbolic Optimal Assembly Program) was an assembly language for the IBM 650 computer written by Stan Poley in 1955. Assembly languages eliminated much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming. By the late 1950s their use had largely been supplanted by higher-level languages in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems (see ). Numerous programs were written entirely in assembly language. 
The Burroughs MCP (1961) was the first operating system that was not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language (ESPOL), an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software developed by large corporations. COBOL, FORTRAN and some PL/I eventually displaced assembly language, although a number of large organizations retained assembly-language application infrastructures well into the 1990s. Assembly language was the primary development language for 8-bit home computers such as the Apple II, Atari 8-bit computers, ZX Spectrum, and Commodore 64, since the interpreted BASIC on these systems offered neither maximum execution speed nor full access to the available hardware. Assembly language was the default choice for programming 8-bit consoles such as the Atari 2600 and Nintendo Entertainment System. Key software for IBM PC compatibles such as MS-DOS, Turbo Pascal, and the Lotus 1-2-3 spreadsheet was written in assembly language. As computer speed grew exponentially, assembly language became a tool for speeding up parts of programs, such as the rendering of Doom, rather than a dominant development language. In the 1990s, assembly language was used to maximise performance from systems such as the Sega Saturn, and as the primary language for arcade hardware based on the TMS34010 integrated CPU/GPU, such as Mortal Kombat and NBA Jam. Current usage There has been debate over the usefulness and performance of assembly language relative to high-level languages. Although assembly language has specific niche uses where it is important (see below), there are other tools for optimization. , the TIOBE index of programming language popularity ranks assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or to optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite some counter-examples. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers and assembly programmers alike. Increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging, making raw code execution speed a non-issue for many programmers. There are still certain computer programming domains in which the use of assembly programming is more common: Writing code for systems that have limited high-level language options, such as the Atari 2600, Commodore 64, and graphing calculators. Programs for these computers of the 1970s and 1980s are often written in the context of demoscene or retrogaming subcultures. Code that must interact directly with the hardware, for example in device drivers and interrupt handlers. In an embedded processor or DSP, high-repetition interrupts require the fewest possible cycles per interrupt, such as an interrupt that occurs 1000 or 10000 times a second. Programs that need to use processor-specific instructions not implemented in a compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms, as well as querying the parity of a byte or the 4-bit carry of an addition.
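To illustrate the processor-specific-instruction case just mentioned: standard C has no rotation operator, so the usual portable workaround is the shift-and-or idiom below. Many (but not all) compilers recognize the pattern and emit a single rotate instruction; this is a sketch of common practice, not a guarantee for any particular toolchain.

#include <stdint.h>

/* Portable 32-bit rotate-left. Many optimizing compilers recognize
   this pattern and emit a single rotate instruction (for example,
   ROL on x86); when they do not, inline assembly or a compiler
   intrinsic is the traditional fallback. */
static inline uint32_t rotl32(uint32_t x, unsigned r)
{
    r &= 31;                                   /* keep the count in 0..31       */
    return (x << r) | (x >> ((32 - r) & 31));  /* well defined even when r == 0 */
}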
Stand-alone executables that are required to execute without recourse to the run-time components or libraries associated with a high-level language, such as the firmware for telephones, automobile fuel and ignition systems, air-conditioning control systems,and security systems. Programs with performance-sensitive inner loops, where assembly language provides optimization opportunities that are difficult to achieve in a high-level language. For example, linear algebra with BLAS or discrete cosine transformation (e.g. SIMD assembly version from x264). Programs that create vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions which map directly to SIMD mnemonics, but nevertheless result in a one-to-one assembly conversion specific for the given vector processor. Real-time programs such as simulations, flight navigation systems, and medical equipment. For example, in a fly-by-wire system, telemetry must be interpreted and acted upon within strict time constraints. Such systems must eliminate sources of unpredictable delays, which may be created by interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking. Choosing assembly or lower-level languages for such systems gives programmers greater visibility and control over processing details. Cryptographic algorithms that must always take strictly the same time to execute, preventing timing attacks. Video encoders and decoders such as rav1e (an encoder for AV1) and dav1d (the reference decoder for AV1) contain assembly to leverage AVX2 and ARM Neon instructions when available. Modify and extend legacy code written for IBM mainframe computers. Situations where complete control over the environment is required, in extremely high-security situations where nothing can be taken for granted. Computer viruses, bootloaders, certain device drivers, or other items very close to the hardware or low-level operating system. Instruction set simulators for monitoring, tracing and debugging where additional overhead is kept to a minimum. Situations where no high-level language exists, on a new or specialized processor for which no cross compiler is available. Reverse engineering and modifying program files such as: existing binaries that may or may not have originally been written in a high-level language, for example when trying to recreate programs for which source code is not available or has been lost, or cracking copy protection of proprietary software. Video games (also termed ROM hacking), which is possible via several methods. The most widely employed method is altering program code at the assembly language level. Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behaviour is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. 
Therefore, studying a single assembly language is sufficient to learn the basic concepts, to recognize situations where the use of assembly language might be appropriate, and to see how efficient executable code can be created from high-level languages. Typical applications Assembly language is typically used in a system's boot code, the low-level code that initializes and tests the system hardware prior to booting the operating system and is often stored in ROM (the BIOS on IBM-compatible PC systems and on CP/M systems is an example). Assembly language is often used for low-level code, for instance for operating system kernels, which cannot rely on the availability of pre-existing system calls and must indeed implement them for the particular processor architecture on which the system will be running. Some compilers translate high-level languages into assembly before fully compiling, allowing the assembly code to be viewed for debugging and optimization purposes. Some compilers for relatively low-level languages, such as Pascal or C, allow the programmer to embed assembly language directly in the source code (so-called inline assembly; see the sketch below). Programs using such facilities can construct abstractions that use different assembly language on each hardware platform, so that the system's portable code can use these processor-specific components through a uniform interface. Assembly language is useful in reverse engineering. Many programs are distributed only in machine code form, which is straightforward to translate into assembly language with a disassembler but more difficult to translate into a higher-level language with a decompiler. Tools such as the Interactive Disassembler make extensive use of disassembly for this purpose. The technique is used by hackers to crack commercial software and by competitors to produce software with similar functionality. Assembly language has also been used to enhance speed of execution, especially on early personal computers with limited processing power and RAM. Finally, assemblers can be used to generate blocks of data, with no high-level language overhead, from formatted and commented source code, to be used by other code.
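As a concrete illustration of the inline-assembly facility mentioned above, the sketch below uses the GCC/Clang extended-asm syntax for x86-64; the syntax and constraint letters are compiler- and architecture-specific, so this is only one possible form rather than a portable interface.

#include <stdint.h>

/* Add two 64-bit integers with an explicit ADD instruction embedded
   in C source (GCC/Clang extended asm, AT&T syntax, x86-64 only).
   The constraint strings tell the compiler that a is read and
   written ("+r") and b is read ("r"), so it can allocate registers
   around the fragment. */
static inline int64_t add_inline_asm(int64_t a, int64_t b)
{
    __asm__ ("addq %1, %0"
             : "+r"(a)
             : "r"(b));
    return a;
}

Portable code would keep such fragments behind a per-platform abstraction, as the surrounding text describes.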
Technology
Programming
null
1372
https://en.wikipedia.org/wiki/Amber
Amber
Amber is fossilized tree resin. Examples of it have been appreciated for its color and natural beauty since the Neolithic times, and worked as a gemstone since antiquity. Amber is used in jewelry and as a healing agent in folk medicine. There are five classes of amber, defined on the basis of their chemical constituents. Because it originates as a soft, sticky tree resin, amber sometimes contains animal and plant material as inclusions. Amber occurring in coal seams is also called resinite, and the term ambrite is applied to that found specifically within New Zealand coal seams. Etymology The English word amber derives from Arabic via Middle Latin ambar and Middle French ambre. The word referred to what is now known as ambergris (ambre gris or "gray amber"), a solid waxy substance derived from the sperm whale. The word, in its sense of "ambergris," was adopted in Middle English in the 14th century. In the Romance languages, the sense of the word was extended to Baltic amber (fossil resin) from as early as the late 13th century. At first called white or yellow amber (ambre jaune), this meaning was adopted in English by the early 15th century. As the use of ambergris waned, this became the main sense of the word. The two substances ("yellow amber" and "gray amber") conceivably became associated or confused because they both were found washed up on beaches. Ambergris is less dense than water and floats, whereas amber is too dense to float, though less dense than stone. The classical names for amber, Ancient Greek (ēlektron) and one of* its Latin names, electrum, are connected to a term ἠλέκτωρ (ēlektōr) meaning "beaming Sun". According to myth, when Phaëton son of Helios (the Sun) was killed, his mourning sisters became poplar trees, and their tears became elektron, amber. The word elektron gave rise to the words electric, electricity, and their relatives because of amber's ability to bear a charge of static electricity. (*In Latin the name succinum was unambiguously used for amber while electrum was also used for an alloy of gold and silver). Varietal names A number of regional and varietal names have been applied to ambers over the centuries, including Allingite, Beckerite, Gedanite, Kochenite, Krantzite, and Stantienite. History Theophrastus discussed amber in the 4th century BCE, as did Pytheas (), whose work "On the Ocean" is lost, but was referenced by Pliny, according to whose Natural History: Earlier Pliny says that Pytheas refers to a large island—three days' sail from the Scythian coast and called Balcia by Xenophon of Lampsacus (author of a fanciful travel book in Greek)—as Basilia—a name generally equated with Abalus. Given the presence of amber, the island could have been Heligoland, Zealand, the shores of Gdańsk Bay, the Sambia Peninsula or the Curonian Lagoon, which were historically the richest sources of amber in northern Europe. There were well-established trade routes for amber connecting the Baltic with the Mediterranean (known as the "Amber Road"). Pliny states explicitly that the Germans exported amber to Pannonia, from where the Veneti distributed it onwards. The ancient Italic peoples of southern Italy used to work amber; the National Archaeological Museum of Siritide (Museo Archeologico Nazionale della Siritide) at Policoro in the province of Matera (Basilicata) displays important surviving examples. It has been suggested that amber used in antiquity, as at Mycenae and in the prehistory of the Mediterranean, came from deposits in Sicily. 
Pliny also cites the opinion of Nicias (c. 470–413 BCE) on the origin of amber. Besides the fanciful explanations according to which amber is "produced by the Sun", Pliny cites opinions that are well aware of its origin in tree resin, citing the native Latin name of succinum (sūcinum, from sucus "juice"), and discusses the subject in Book 37, section XI of the Natural History. He further states that amber is found in Egypt and in India, and he even refers to the electrostatic properties of amber, by saying that "in Syria the women make the whorls of their spindles of this substance, and give it the name of harpax [from ἁρπάζω, "to drag"] from the circumstance that it attracts leaves towards it, chaff, and the light fringe of tissues". The Romans traded for amber from the shores of the southern Baltic at least as far back as the time of Nero. Amber has a long history of use in China, with the first written record from 200 BCE. Early in the 19th century, the first reports of amber found in North America came from discoveries in New Jersey along Crosswicks Creek near Trenton, at Camden, and near Woodbury. Composition and formation Amber is heterogeneous in composition, but consists of several resinous bodies more or less soluble in alcohol, ether and chloroform, associated with an insoluble bituminous substance. Amber is a macromolecule formed by free radical polymerization of several precursors in the labdane family, for example communic acid, communol, and biformene. These labdanes are diterpenes (C20H32) and trienes, equipping the organic skeleton with three alkene groups for polymerization. As amber matures over the years, more polymerization takes place as well as isomerization reactions, crosslinking and cyclization. Most amber has a hardness between 2.0 and 2.5 on the Mohs scale, a refractive index of 1.5–1.6, a specific gravity between 1.06 and 1.10, and a melting point of 250–300 °C. Heated above , amber decomposes, yielding an oil of amber and leaving a black residue known as "amber colophony" or "amber pitch"; when dissolved in oil of turpentine or in linseed oil this forms "amber varnish" or "amber lac". Molecular polymerization, resulting from high pressures and temperatures produced by overlying sediment, transforms the resin first into copal. Sustained heat and pressure then drive off terpenes and result in the formation of amber. For this to happen, the resin must be resistant to decay. Many trees produce resin, but in the majority of cases this deposit is broken down by physical and biological processes. Exposure to sunlight, rain, microorganisms, and extreme temperatures tends to disintegrate the resin. For the resin to survive long enough to become amber, it must be resistant to such forces or be produced under conditions that exclude them. Fossil resins from Europe fall into two categories: the Baltic ambers and another group that resembles the Agathis group. Fossil resins from the Americas and Africa are closely related to the modern genus Hymenaea, while Baltic ambers are thought to be fossil resins from plants of the family Sciadopityaceae that once lived in northern Europe. The abnormal development of resin in living trees (succinosis) can result in the formation of amber. Impurities are quite often present, especially when the resin has dropped onto the ground, so the material may be useless except for varnish-making. Such impure amber is called firniss. Inclusion of other substances can cause the amber to have an unexpected color; pyrites, for example, may give a bluish color.
Bony amber owes its cloudy opacity to numerous tiny bubbles inside the resin. However, so-called black amber is really a kind of jet. In darkly clouded and even opaque amber, inclusions can be imaged using high-energy, high-contrast, high-resolution X-rays. Extraction and processing Distribution and mining Amber is globally distributed in or around all continents, mainly in rocks of Cretaceous age or younger. Historically, the coast west of Königsberg in Prussia was the world's leading source of amber. The first mentions of amber deposits there date back to the 12th century. Juodkrantė in Lithuania was established in the mid-19th century as a mining town of amber. About 90% of the world's extractable amber is still located in that area, which was transferred to the Russian Soviet Federative Socialist Republic of the USSR in 1946, becoming the Kaliningrad Oblast. Pieces of amber torn from the seafloor are cast up by the waves and collected by hand, dredging, or diving. Elsewhere, amber is mined, both in open works and underground galleries. Then nodules of blue earth have to be removed and an opaque crust must be cleaned off, which can be done in revolving barrels containing sand and water. Erosion removes this crust from sea-worn amber. Dominican amber is mined through bell pitting, which is dangerous because of the risk of tunnel collapse. An important source of amber is Kachin State in northern Myanmar, which has been a major source of amber in China for at least 1,800 years. Contemporary mining of this deposit has attracted attention for unsafe working conditions and its role in funding internal conflict in the country. Amber from the Rivne Oblast of Ukraine, referred to as Rivne amber, is mined illegally by organised crime groups, who deforest the surrounding areas and pump water into the sediments to extract the amber, causing severe environmental deterioration. Treatment The Vienna amber factories, which use pale amber to manufacture pipes and other smoking tools, turn it on a lathe and polish it with whitening and water or with rotten stone and oil. The final luster is given by polishing with flannel. When gradually heated in an oil bath, amber "becomes soft and flexible. Two pieces of amber may be united by smearing the surfaces with linseed oil, heating them, and then pressing them together while hot. Cloudy amber may be clarified in an oil bath, as the oil fills the numerous pores that cause the turbidity. Small fragments, formerly thrown away or used only for varnish are now used on a large scale in the formation of "ambroid" or "pressed amber". The pieces are carefully heated with exclusion of air and then compressed into a uniform mass by intense hydraulic pressure, the softened amber being forced through holes in a metal plate. The product is extensively used for the production of cheap jewelry and articles for smoking. This pressed amber yields brilliant interference colors in polarized light." Amber has often been imitated by other resins like copal and kauri gum, as well as by celluloid and even glass. Baltic amber is sometimes colored artificially but also called "true amber". Appearance Amber occurs in a range of different colors. As well as the usual yellow-orange-brown that is associated with the color "amber", amber can range from a whitish color through a pale lemon yellow, to brown and almost black. Other uncommon colors include red amber (sometimes known as "cherry amber"), green amber, and even blue amber, which is rare and highly sought after. 
Yellow amber is a hard fossil resin from evergreen trees, and despite the name it can be translucent, yellow, orange, or brown colored. Known to the Iranians by the Pahlavi compound word kah-ruba (from kah "straw" plus rubay "attract, snatch", referring to its electrical properties), which entered Arabic as kahraba' or kahraba (which later became the Arabic word for electricity, كهرباء kahrabā), it too was called amber in Europe (Old French and Middle English ambre). Found along the southern shore of the Baltic Sea, yellow amber reached the Middle East and western Europe via trade. Its coastal acquisition may have been one reason yellow amber came to be designated by the same term as ambergris. Moreover, like ambergris, the resin could be burned as an incense. The resin's most popular use was, however, for ornamentation—easily cut and polished, it could be transformed into beautiful jewelry. Much of the most highly prized amber is transparent, in contrast to the very common cloudy amber and opaque amber. Opaque amber contains numerous minute bubbles. This kind of amber is known as "bony amber". Although all Dominican amber is fluorescent, the rarest Dominican amber is blue amber. It turns blue in natural sunlight and any other partially or wholly ultraviolet light source. In long-wave UV light it has a very strong reflection, almost white. Only about is found per year, which makes it valuable and expensive. Sometimes amber retains the form of drops and stalactites, just as it exuded from the ducts and receptacles of the injured trees. It is thought that, in addition to exuding onto the surface of the tree, amber resin also originally flowed into hollow cavities or cracks within trees, thereby leading to the development of large lumps of amber of irregular form. Classification Amber can be classified into several forms. Most fundamentally, there are two types of plant resin with the potential for fossilization. Terpenoids, produced by conifers and angiosperms, consist of ring structures formed of isoprene (C5H8) units. Phenolic resins are today only produced by angiosperms, and tend to serve functional uses. The extinct medullosans produced a third type of resin, which is often found as amber within their veins. The composition of resins is highly variable; each species produces a unique blend of chemicals which can be identified by the use of pyrolysis–gas chromatography–mass spectrometry. The overall chemical and structural composition is used to divide ambers into five classes. There is also a separate classification of amber gemstones, according to the way of production. Class I This class is by far the most abundant. It comprises labdatriene carboxylic acids such as communic or ozic acids. It is further split into three sub-classes. Classes Ia and Ib utilize regular labdanoid diterpenes (e.g. communic acid, communol, biformenes), while Ic uses enantio labdanoids (ozic acid, ozol, enantio biformenes). Class Ia includes Succinite (= 'normal' Baltic amber) and Glessite. They have a communic acid base, and they also include much succinic acid. Baltic amber yields on dry distillation succinic acid, the proportion varying from about 3% to 8%, and being greatest in the pale opaque or bony varieties. The aromatic and irritating fumes emitted by burning amber are mainly from this acid. Baltic amber is distinguished by its yield of succinic acid, hence the name succinite. Succinite has a hardness between 2 and 3, which is greater than many other fossil resins. 
Its specific gravity varies from 1.05 to 1.10. It can be distinguished from other ambers via infrared spectroscopy through a specific carbonyl absorption peak. Infrared spectroscopy can detect the relative age of an amber sample. Succinic acid may not be an original component of amber but rather a degradation product of abietic acid. Class Ib ambers are based on communic acid; however, they lack succinic acid. Class Ic is mainly based on enantio-labdatrienonic acids, such as ozic and zanzibaric acids. Its most familiar representative is Dominican amber,. which is mostly transparent and often contains a higher number of fossil inclusions. This has enabled the detailed reconstruction of the ecosystem of a long-vanished tropical forest. Resin from the extinct species Hymenaea protera is the source of Dominican amber and probably of most amber found in the tropics. It is not "succinite" but "retinite". Class II These ambers are formed from resins with a sesquiterpenoid base, such as cadinene. Class III These ambers are polystyrenes. Class IV Class IV is something of a catch-all: its ambers are not polymerized, but mainly consist of cedrene-based sesquiterpenoids. Class V Class V resins are considered to be produced by a pine or pine relative. They comprise a mixture of diterpinoid resins and n-alkyl compounds. Their main variety is Highgate copalite. Geological record The oldest amber recovered dates to the late Carboniferous period (). Its chemical composition makes it difficult to match the amber to its producers – it is most similar to the resins produced by flowering plants; however, the first flowering plants appeared in the Early Cretaceous, about 200 million years after the oldest amber known to date, and they were not common until the Late Cretaceous. Amber becomes abundant long after the Carboniferous, in the Early Cretaceous, when it is found in association with insects. The oldest amber with arthropod inclusions comes from the Late Triassic (late Carnian 230 Ma) of Italy, where four microscopic (0.2–0.1 mm) mites, Triasacarus, Ampezzoa, Minyacarus and Cheirolepidoptus, and a poorly preserved nematoceran fly were found in millimetre-sized droplets of amber. The oldest amber with significant numbers of arthropod inclusions comes from Lebanon. This amber, referred to as Lebanese amber, is roughly 125–135 million years old, is considered of high scientific value, providing evidence of some of the oldest sampled ecosystems. In Lebanon, more than 450 outcrops of Lower Cretaceous amber were discovered by Dany Azar, a Lebanese paleontologist and entomologist. Among these outcrops, 20 have yielded biological inclusions comprising the oldest representatives of several recent families of terrestrial arthropods. Even older Jurassic amber has been found recently in Lebanon as well. Many remarkable insects and spiders were recently discovered in the amber of Jordan including the oldest zorapterans, clerid beetles, umenocoleid roaches, and achiliid planthoppers. Burmese amber from the Hukawng Valley in northern Myanmar is the only commercially exploited Cretaceous amber. Uranium–lead dating of zircon crystals associated with the deposit have given an estimated depositional age of approximately 99 million years ago. Over 1,300 species have been described from the amber, with over 300 in 2019 alone. Baltic amber is found as irregular nodules in marine glauconitic sand, known as blue earth, occurring in Upper Eocene strata of Sambia in Prussia. 
It appears to have been partly derived from older Eocene deposits and it occurs also as a derivative phase in later formations, such as glacial drift. Relics of an abundant flora occur as inclusions trapped within the amber while the resin was yet fresh, suggesting relations with the flora of eastern Asia and the southern part of North America. Heinrich Göppert named the common amber-yielding pine of the Baltic forests Pinites succiniter, but as the wood does not seem to differ from that of the existing genus it has been also called Pinus succinifera. It is improbable that the production of amber was limited to a single species; and indeed a large number of conifers belonging to different genera are represented in the amber-flora. Paleontological significance Amber is a unique preservational mode, preserving otherwise unfossilizable parts of organisms; as such it is helpful in the reconstruction of ecosystems as well as organisms; the chemical composition of the resin, however, is of limited utility in reconstructing the phylogenetic affinity of the resin producer. Amber sometimes contains animals or plant matter that became caught in the resin as it was secreted. Insects, spiders and even their webs, annelids, frogs, crustaceans, bacteria and amoebae, marine microfossils, wood, flowers and fruit, hair, feathers and other small organisms have been recovered in Cretaceous ambers (deposited c. ). There is even an ammonite Puzosia (Bhimaites) and marine gastropods found in Burmese amber. The preservation of prehistoric organisms in amber forms a key plot point in Michael Crichton's 1990 novel Jurassic Park and the 1993 movie adaptation by Steven Spielberg. In the story, scientists are able to extract the preserved blood of dinosaurs from prehistoric mosquitoes trapped in amber, from which they genetically clone living dinosaurs. Scientifically this is as yet impossible, since no amber with fossilized mosquitoes has ever yielded preserved blood. Amber is, however, conducive to preserving DNA, since it dehydrates and thus stabilizes organisms trapped inside. One projection in 1999 estimated that DNA trapped in amber could last up to 100 million years, far beyond most estimates of around 1 million years in the most ideal conditions, although a later 2013 study was unable to extract DNA from insects trapped in much more recent Holocene copal. In 1938, 12-year-old David Attenborough (brother of Richard who played John Hammond in Jurassic Park) was given a piece of amber containing prehistoric creatures from his adoptive sister; it would be the focus of his 2004 BBC documentary The Amber Time Machine. Use Amber has been used since prehistory (Solutrean) in the manufacture of jewelry and ornaments, and also in folk medicine. Jewelry Amber has been used as jewelry since the Stone Age, from 13,000 years ago. Amber ornaments have been found in Mycenaean tombs and elsewhere across Europe. To this day it is used in the manufacture of smoking and glassblowing mouthpieces. Amber's place in culture and tradition lends it a tourism value; Palanga Amber Museum is dedicated to the fossilized resin. Historical medicinal uses Amber has long been used in folk medicine for its purported healing properties. Amber and extracts were used from the time of Hippocrates in ancient Greece for a wide variety of treatments through the Middle Ages and up until the early twentieth century. Traditional Chinese medicine uses amber to "tranquilize the mind". 
Amber necklaces are a traditional European remedy for colic or teething pain with purported analgesic properties of succinic acid, although there is no evidence that this is an effective remedy or delivery method. The American Academy of Pediatrics and the FDA have warned strongly against their use, as they present both a choking and a strangulation hazard. Scent of amber and amber perfumery In ancient China, it was customary to burn amber during large festivities. If amber is heated under the right conditions, oil of amber is produced, and in past times this was combined carefully with nitric acid to create "artificial musk" – a resin with a peculiar musky odor. Although when burned, amber does give off a characteristic "pinewood" fragrance, modern products, such as perfume, do not normally use actual amber because fossilized amber produces very little scent. In perfumery, scents referred to as "amber" are often created and patented to emulate the opulent golden warmth of the fossil. The scent of amber was originally derived from emulating the scent of ambergris and/or the plant resin labdanum, but since sperm whales are endangered, the scent of amber is now largely derived from labdanum. The term "amber" is loosely used to describe a scent that is warm, musky, rich and honey-like, and also somewhat earthy. Benzoin is usually part of the recipe. Vanilla and cloves are sometimes used to enhance the aroma. "Amber" perfumes may be created using combinations of labdanum, benzoin resin, copal (a type of tree resin used in incense manufacture), vanilla, Dammara resin and/or synthetic materials. In Arab Muslim tradition, popular scents include amber, jasmine, musk and oud (agarwood). Imitation substances Young resins used as imitations: Kauri resin from Agathis australis trees in New Zealand. The copals (subfossil resins). The African and American (Colombia) copals from Leguminosae trees family (genus Hymenaea). Amber of the Dominican or Mexican type (Class I of fossil resins). Copals from Manilia (Indonesia) and from New Zealand from trees of the genus Agathis (family Araucariaceae) Other fossil resins: burmite in Burma, rumenite in Romania, and simetite in Sicily. Other natural resins — cellulose or chitin, etc. Plastics used as imitations: Stained glass (inorganic material) and other ceramic materials Celluloid Cellulose nitrate (first obtained in 1833) — a product of treatment of cellulose with nitration mixture. Acetylcellulose (not in the use at present) Galalith or "artificial horn" (condensation product of casein and formaldehyde), other trade names: Alladinite, Erinoid, Lactoid. Casein — a conjugated protein forming from the casein precursor – caseinogen. Resolane (phenolic resins or phenoplasts, not in the use at present) Bakelite resine (resol, phenolic resins), product from Africa are known under the misleading name "African amber". Carbamide resins — melamine, formaldehyde and urea-formaldehyde resins. Epoxy novolac (phenolic resins), unofficial name "antique amber", not in the use at present Polyesters (Polish amber imitation) with styrene. For example, unsaturated polyester resins (polymals) are produced by Chemical Industrial Works "Organika" in Sarzyna, Poland; estomal are produced by Laminopol firm. Polybern or sticked amber is artificial resins the curled chips are obtained, whereas in the case of amber – small scraps. 
"African amber" (polyester, synacryl is then probably other name of the same resine) are produced by Reichhold firm; Styresol trade mark or alkid resin (used in Russia, Reichhold, Inc. patent, 1948. Polyethylene Epoxy resins Polystyrene and polystyrene-like polymers (vinyl polymers). The resins of acrylic type (vinyl polymers), especially polymethyl methacrylate PMMA (trade mark Plexiglass, metaplex).
Physical sciences
Organic gemstones
null
1383
https://en.wikipedia.org/wiki/Alder
Alder
Alders are trees of the genus Alnus in the birch family Betulaceae. The genus includes about 35 species of monoecious trees and shrubs, a few reaching a large size, distributed throughout the north temperate zone with a few species extending into Central America, as well as the northern and southern Andes. Description With a few exceptions, alders are deciduous, and the leaves are alternate, simple, and serrated. The flowers are catkins with elongate male catkins on the same plant as shorter female catkins, often before leaves appear; they are mainly wind-pollinated, but also visited by bees to a small extent. These trees differ from the birches (Betula, another genus in the family) in that the female catkins are woody and do not disintegrate at maturity, opening to release the seeds in a similar manner to many conifer cones. The largest species are red alder (A. rubra) on the west coast of North America, and black alder (A. glutinosa), native to most of Europe and widely introduced elsewhere, both reaching over . By contrast, the widespread Alnus alnobetula (green alder) is rarely more than a shrub. Phylogeny Classification The genus is divided into three subgenera: Subgenus Alnus Trees with stalked shoot buds, male and female catkins produced in autumn (fall) but stay closed over winter, pollinating in late winter or early spring, about 15–25 species, including: Alnus acuminata subsp. acuminata subsp. arguta subsp. glabrata Alnus cordata Alnus cremastogyne Alnus firma Alnus glutinosa subsp. barbata subsp. glutinosa subsp. incisa subsp. laciniata Alnus hirsuta Alnus incana subsp. incana subsp. kolaensis subsp. rugosa subsp. tenuifolia Alnus japonica Alnus jorullensis subsp. lutea subsp. jorullensis Alnus lusitanica Alnus matsumurae Alnus nepalensis Alnus oblongifolia Alnus orientalis Alnus rhombifolia Alnus rohlenae Alnus rubra Alnus serrulata Alnus subcordata Alnus tenuifolia Alnus trabeculosa Subgenus Clethropsis Trees or shrubs with stalked shoot buds, male and female catkins produced in autumn (fall) and expanding and pollinating then, three species: Alnus formosana Alnus maritima Alnus nitida Subgenus Alnobetula Shrubs with shoot buds not stalked, male and female catkins produced in late spring (after leaves appear) and expanding and pollinating then, one to four species: Alnus alnobetula (synonym-Alnus viridis) subsp. alnobetula subsp. crispa subsp. fruticosa subsp. sinuata subsp. suaveolens Alnus firma Alnus mandshurica Alnus maximowiczii Alnus pendula Alnus sieboldiana Not assigned to a subgenus Alnus fauriei Alnus ferdinandi-coburgii Alnus glutipes Alnus hakkodensis Alnus henryi Alnus lanata Alnus mairei Alnus paniculata Alnus serrulatoides Alnus vermicularis Species names with uncertain taxonomic status The status of the following species is unresolved: Alnus balatonialis Alnus cuneata Alnus dimitrovii Alnus djavanshirii – Iran Alnus dolichocarpa – Iran Alnus figerti Alnus frangula Alnus gigantea Alnus glandulosa Alnus henedae Alnus hybrida Alnus laciniata Alnus lobata Alnus microphylla Alnus obtusifolia Alnus oxyacantha Alnus subrotunda Alnus vilmoriana Alnus washingtonia Hybrids The following hybrids have been described: Alnus × elliptica (A. cordata × A. glutinosa) Alnus × fallacina (A. incana subsp. rugosa × A. serrulata) Alnus × hanedae (A. firma × A. sieboldiana) Alnus × hosoii (A. maximowiczii × A. pendula) Alnus × mayrii (A. hirsuta × A. japonica) Alnus × peculiaris (A. firma × A. pendula) Alnus × pubescens (A. glutinosa × A. 
incana) Alnus × suginoi The status of the following hybrids is unresolved: Alnus × aschersoniana Alnus × koehnei Alnus × ljungeri Alnus × purpusii Alnus × silesiaca Alnus × spaethii (A. japonica × A. subcordata) Fossil record The oldest fossil pollen that can be identified as Alnus is from northern Bohemia, dating to the late Paleocene, around 58 million years ago. †Alnus fairi - Miocene; Western North America †Alnus heterodonta – Oligocene; Fossil, Oregon †Alnus hollandiana - Miocene; Western North America †Alnus largei - Miocene; Western North America †Alnus parvifolia - Ypresian; Okanagan Highlands †Alnus relatus - Miocene; Western North America Etymology The common name alder evolved from the Old English word alor, which in turn is derived from Proto-Germanic root aliso. The generic name Alnus is the equivalent Latin name, from whence French aulne and Spanish Alamo (Spanish term for "poplar"). Ecology Alders are commonly found near streams, rivers, and wetlands. Sometimes where the prevalence of alders is particularly prominent these are called alder carrs. In the Pacific Northwest of North America, the white alder (Alnus rhombifolia) unlike other northwest alders, has an affinity for warm, dry climates, where it grows along watercourses, such as along the lower Columbia River east of the Cascades and the Snake River, including Hells Canyon. Alder leaves and sometimes catkins are used as food by numerous butterflies and moths. A. glutinosa and A. viridis are classed as environmental weeds in New Zealand. Alder leaves and especially the roots are important to the ecosystem because they enrich the soil with nitrogen and other nutrients. Nitrogen fixation and succession of woodland species Alder is particularly noted for its important symbiotic relationship with Frankia alni, an actinomycete, filamentous, nitrogen-fixing bacterium. This bacterium is found in root nodules, which may be as large as a human fist, with many small lobes, and light brown in colour. The bacterium absorbs nitrogen from the air and makes it available to the tree. Alder, in turn, provides the bacterium with sugars, which it produces through photosynthesis. As a result of this mutually beneficial relationship, alder improves the fertility of the soil where it grows, and as a pioneer species, it helps provide additional nitrogen for the successional species to follow. Because of its abundance, red alder delivers large amounts of nitrogen to enrich forest soils. Red alder stands have been found to supply between of nitrogen annually to the soil. From Alaska to Oregon, Alnus viridis subsp. sinuata (A. sinuata, Sitka Alder or Slide Alder), characteristically pioneer fresh, gravelly sites at the foot of retreating glaciers. Studies show that Sitka alder, a more shrubby variety of alder, adds nitrogen to the soil at an average rate of per year, helping convert the sterile glacial terrain to soil capable of supporting a conifer forest. Alders are common among the first species to colonize disturbed areas from floods, windstorms, fires, landslides, etc. Alder groves often serve as natural firebreaks since these broad-leaved trees are much less flammable than conifers. Their foliage and leaf litter does not carry a fire well, and their thin bark is sufficiently resistant to protect them from light surface fires. In addition, the light weight of alder seedsnumbering allows for easy dispersal by the wind. 
Although it outgrows coastal Douglas-fir for the first 25 years, it is very shade intolerant and seldom lives more than 100 years. Red alder is the Pacific Northwest's largest alder and the most plentiful and commercially important broad-leaved tree in the coastal Northwest. Groves of red alder in diameter intermingle with young Douglas-fir forests west of the Cascades, attaining a maximum height of in about sixty years and then are afflicted by heart rot. Alders largely help create conditions favorable for giant conifers that replace them. Parasites Alder roots are parasitized by northern groundcone. Uses The catkins of some alder species have a degree of edibility, and may be rich in protein. Reported to have a bitter and unpleasant taste, they are more useful for survival purposes. The wood of certain alder species is often used to smoke various food items such as coffee, salmon, and other seafood. Alder is notably stable when immersed, and has been used for millennia as a material for pilings for piers and wharves. Most of the pilings that form the foundation of Venice were made from alder trees. Alder bark contains the anti-inflammatory salicin, which is metabolized into salicylic acid in the body. Some Native American cultures use red alder bark (Alnus rubra) to treat poison oak, insect bites, and skin irritations. Blackfeet Indians have traditionally used an infusion made from the bark of red alder to treat lymphatic disorders and tuberculosis. Recent clinical studies have verified that red alder contains betulin and lupeol, compounds shown to be effective against a variety of tumors. The inner bark of the alder, as well as red osier dogwood, or chokecherry, is used by some Indigenous peoples of the Americas in smoking mixtures, known as kinnikinnick, to improve the taste of the bearberry leaf. Alder is illustrated in the coat of arms for the Austrian town of Grossarl. Electric guitars, most notably those manufactured by the Fender Musical Instruments Corporation, have been built with alder bodies since the 1950s. Alder is appreciated for its tone that is claimed to be tight and evenly balanced, especially when compared to mahogany, and has been adopted by many electric guitar manufacturers. It usually is finished in opaque lacquer (nitrocellulose, polyurethane, or polyester), as it does not have a prominent grain. As a hardwood, alder is used in making furniture, cabinets, and other woodworking products. In these applications, its aforementioned lack of prominent grain means that it is often veneered, either by stained light woods such as oak, ash, or figured maple, or by darker woods such as teak or walnut. Alder bark and wood (like oak and sweet chestnut) contain tannin and are traditionally used to tan leather. A red dye can also be extracted from the outer bark, and a yellow dye from the inner bark. Culture Ermanno Olmi's movie The Tree of Wooden Clogs (L' Albero Degli Zoccoli, 1978) refers in its title to alder, typically used to make clogs as in this movie's plot.
Biology and health sciences
Fagales
null
1394
https://en.wikipedia.org/wiki/Algol
Algol
Algol , designated Beta Persei (β Persei, abbreviated Beta Per, β Per), known colloquially as the Demon Star, is a bright multiple star in the constellation of Perseus and one of the first non-nova variable stars to be discovered. Algol is a three-star system, consisting of Beta Persei Aa1, Aa2, and Ab – in which the hot luminous primary β Persei Aa1 and the larger, but cooler and fainter, β Persei Aa2 regularly pass in front of each other, causing eclipses. Thus Algol's magnitude is usually near-constant at 2.1, but regularly dips to 3.4 every 2.86 days during the roughly 10-hour-long partial eclipses. The secondary eclipse when the brighter primary star occults the fainter secondary is very shallow and can only be detected photoelectrically. Algol gives its name to its class of eclipsing variable, known as Algol variables. Observation history An ancient Egyptian calendar of lucky and unlucky days composed some 3,200 years ago is said to be the oldest historical documentation of the discovery of Algol. The association of Algol with a demon-like creature (Gorgon in the Greek tradition, ghoul in the Arabic tradition) suggests that its variability was known long before the 17th century, but there is still no indisputable evidence for this. The Arabic astronomer al-Sufi said nothing about any variability of the star in his Book of Fixed Stars published c.964. The variability of Algol was noted in 1667 by Italian astronomer Geminiano Montanari, but the periodic nature of its variations in brightness was not recognized until more than a century later, when the British amateur astronomer John Goodricke also proposed a mechanism for the star's variability. In May 1783, he presented his findings to the Royal Society, suggesting that the periodic variability was caused by a dark body passing in front of the star (or else that the star itself has a darker region that is periodically turned toward the Earth). For his report he was awarded the Copley Medal. In 1881, the Harvard astronomer Edward Charles Pickering presented evidence that Algol was actually an eclipsing binary. This was confirmed a few years later, in 1889, when the Potsdam astronomer Hermann Carl Vogel found periodic doppler shifts in the spectrum of Algol, inferring variations in the radial velocity of this binary system. Thus, Algol became one of the first known spectroscopic binaries. Joel Stebbins at the University of Illinois Observatory used an early selenium cell photometer to produce the first-ever photoelectric study of a variable star. The light curve revealed the second minimum and the reflection effect between the two stars. Some difficulties in explaining the observed spectroscopic features led to the conjecture that a third star may be present in the system; four decades later this conjecture was found to be correct. System Algol is a multiple-star system with three confirmed and two suspected stellar components. From the point of view of the Earth, Algol Aa1 and Algol Aa2 form an eclipsing binary because their orbital plane contains the line of sight to the Earth. The eclipsing binary pair is separated by only 0.062 astronomical units (au) from each other, whereas the third star in the system (Algol Ab) is at an average distance of 2.69 au from the pair, and the mutual orbital period of the trio is 681 Earth days. The total mass of the system is about 5.8 solar masses, and the mass ratios of Aa1, Aa2, and Ab are about 4.5 to 1 to 2. 
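Dividing the quoted total of about 5.8 solar masses in the quoted 4.5 : 1 : 2 ratio gives rough individual masses; the small C sketch below is only arithmetic on these rounded figures, not a substitute for measured values.

#include <stdio.h>

/* Split the approximate total system mass (about 5.8 solar masses)
   in the approximate ratio 4.5 : 1 : 2 quoted above for Aa1, Aa2
   and Ab. Illustrative arithmetic on rounded figures only. */
int main(void)
{
    const double total_solar_masses = 5.8;
    const double ratio[3] = {4.5, 1.0, 2.0};
    const char  *name[3]  = {"Aa1", "Aa2", "Ab"};
    const double ratio_sum = ratio[0] + ratio[1] + ratio[2];   /* 7.5 */

    for (int i = 0; i < 3; i++)
        printf("Algol %s: roughly %.1f solar masses\n",
               name[i], total_solar_masses * ratio[i] / ratio_sum);
    return 0;
}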
The three components of the bright triple star used to be, and still sometimes are, referred to as β Per A, B, and C. The Washington Double Star Catalog lists them as Aa1, Aa2, and Ab, with two very faint stars B and C about one arcmin distant. A further five faint stars are also listed as companions. The close pair consists of a B8 main sequence star and a much less massive K0 subgiant, which is highly distorted by the more massive star. These two orbit every 2.9 days and undergo the eclipses that cause Algol to vary in brightness. The third star orbits these two every 680 days and is an A or F-type main sequence star. It has been classified as an Am star, but this is now considered doubtful. Studies of Algol led to the Algol paradox in the theory of stellar evolution: although components of a binary star form at the same time, and massive stars evolve much faster than the less massive stars, the more massive component Algol Aa1 is still in the main sequence, but the less massive Algol Aa2 is a subgiant star at a later evolutionary stage. The paradox can be solved by mass transfer: when the more massive star became a subgiant, it filled its Roche lobe, and most of the mass was transferred to the other star, which is still in the main sequence. In some binaries similar to Algol, a gas flow can be seen. The gas flow between the primary and secondary stars in Algol has been imaged using Doppler Tomography. This system also exhibits x-ray and radio wave flares. The x-ray flares are thought to be caused by the magnetic fields of the A and B components interacting with the mass transfer. The radio-wave flares might be created by magnetic cycles similar to those of sunspots, but because the magnetic fields of these stars are up to ten times stronger than the field of the Sun, these radio flares are more powerful and more persistent. The secondary component was identified as the radio emitting source in Algol using Very-long-baseline interferometry by Lestrade and co-authors. Magnetic activity cycles in the chromospherically active secondary component induce changes in its radius of gyration that have been linked to recurrent orbital period variations on the order of  ≈  via the Applegate mechanism. Mass transfer between the components is small in the Algol system but could be a significant source of period change in other Algol-type binaries. The distance to Algol has been measured using very-long baseline interferometry, giving a value of 94 light-years. About 7.3 million years ago it passed within 9.8 light-years of the Solar System and its apparent magnitude was about −2.5, which is considerably brighter than the star Sirius is today. Because the total mass of the Algol system is about 5.8 solar masses, at the closest approach this might have given enough gravity to perturb the Oort cloud of the Solar System somewhat and hence increase the number of comets entering the inner Solar System. However, the actual increase in net cometary collisions is thought to have been quite small. Names Beta Persei is the star's Bayer designation. The official name Algol The name Algol derives from Arabic raʾs al-ghūl : head (raʾs) of the ogre (al-ghūl) (see "ghoul"). The English name Demon Star was taken from the Arabic name. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. 
The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Algol for this star. It is so entered on the IAU Catalog of Star Names. Ghost and demon star Algol was called Rōsh ha Sāṭān or "Satan's Head" in Hebrew folklore, as stated by Edmund Chilmead, who called it "Divels head" or Rosch hassatan. A Latin name for Algol from the 16th century was Caput Larvae or "the Spectre's Head". Hipparchus and Pliny made this a separate, though connected, constellation. First star of Medusa's head Earlier, the constellation was known as Perseus and Medusa's Head, and an asterism representing the head of Medusa after Perseus had cut it off was already known in ancient Rome. Medusa is a Gorgon, so the star is also called Gorgonea Prima, meaning the first star of the Gorgon. Chinese names In Chinese, (), meaning Mausoleum, refers to an asterism consisting of β Persei, 9 Persei, τ Persei, ι Persei, κ Persei, ρ Persei, 16 Persei and 12 Persei. Consequently, the Chinese name for β Persei itself is (, English: The Fifth Star of Mausoleum.). According to R.H. Allen the star bore the grim name of Tseih She (), meaning "Piled up Corpses", but this appears to be a misidentification, and Dié Shī is correctly π Persei, which is inside the Mausoleum. Observing Algol The Algol system usually has an apparent magnitude of 2.1, similar to those of Mirfak (α Persei) at 1.9 and Almach (γ Andromedae) at 2.2, with which it forms a right triangle. During eclipses it dims to 3.4, making it as faint as nearby ρ Persei at 3.3. Eclipse timetables list the first eclipse date and time of each month, with all times in UT. β Persei Aa2 eclipses β Persei Aa1 every 2.867321 days (2 days 20 hours 49 min). To determine subsequent eclipses, add this interval to each listed date and time. For example, the Jan 2 eclipse at 8h will result in consecutive eclipse times on Jan 5 at 5h, Jan 8 at 1h, Jan 10 at 22h, and so on (all times approximate; see the worked sketch below). Cultural significance Historically, the star has been strongly associated with bloody violence across a wide variety of cultures. In the Tetrabiblos, the 2nd-century astrological text of the Alexandrian astronomer Ptolemy, Algol is referred to as "the Gorgon of Perseus" and associated with death by decapitation: a theme which mirrors the myth of the hero Perseus's victory over the snake-haired Gorgon Medusa. In the astrology of fixed stars, Algol is considered one of the unluckiest stars in the sky, and was listed as one of the 15 Behenian stars.
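The interval arithmetic described in the Observing Algol section above can be sketched in a few lines of C; the Jan 2, 08:00 UT starting epoch is just the article's worked example, and real predictions should start from a current ephemeris value.

#include <stdio.h>

/* Step forward from a reference eclipse by the 2.867321-day period
   quoted above, printing approximate times rounded to the nearest
   hour. The starting epoch (Jan 2, 08:00 UT) is only the article's
   example, not a current ephemeris value. */
int main(void)
{
    const double period_days = 2.867321;
    double t = 2.0 + 8.0 / 24.0;   /* Jan 2, 08:00 UT, expressed as a day-of-January */

    for (int i = 0; i < 4; i++) {
        t += period_days;
        int day  = (int)t;
        int hour = (int)((t - day) * 24.0 + 0.5);   /* round to nearest hour */
        if (hour == 24) { day += 1; hour = 0; }
        printf("next eclipse: about Jan %d, %02d:00 UT\n", day, hour);
    }
    return 0;
}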
Physical sciences
Notable stars
Astronomy
1400
https://en.wikipedia.org/wiki/Anno%20Domini
Anno Domini
The terms (AD) and before Christ (BC) are used when designating years in the Gregorian and Julian calendars. The term is Medieval Latin and means "in the year of the Lord" but is often presented using "our Lord" instead of "the Lord", taken from the full original phrase "anno Domini nostri Jesu Christi", which translates to "in the year of our Lord Jesus Christ". The form "BC" is specific to English, and equivalent abbreviations are used in other languages: the Latin form, rarely used in English, is (ACN) or (AC). This calendar era takes as its epoch the traditionally reckoned year of the conception or birth of Jesus. Years AD are counted forward since that epoch and years BC are counted backward from the epoch. There is no year zero in this scheme; thus the year AD 1 immediately follows the year 1 BC. This dating system was devised in 525 by Dionysius Exiguus but was not widely used until the 9th century. (Modern scholars believe that the actual date of birth of Jesus was about 5 BC.) Terminology that is viewed by some as being more neutral and inclusive of non-Christian people is to call this the Common Era (abbreviated as CE), with the preceding years referred to as Before the Common Era (BCE). Astronomical year numbering and ISO 8601 avoid words or abbreviations related to Christianity, but use the same numbers for AD years (but not for BC years in the case of astronomical years; e.g., 1 BC is year 0, 45 BC is year −44). Usage Traditionally, English follows Latin usage by placing the "AD" abbreviation before the year number, though it is also found after the year. In contrast, "BC" is always placed after the year number (for example: 70 BC but AD 70), which preserves syntactic order. The abbreviation "AD" is also widely used after the number of a century or millennium, as in "fourth century AD" or "second millennium AD" (although conservative usage formerly rejected such expressions). Since "BC" is the English abbreviation for Before Christ, it is sometimes incorrectly concluded that AD means After Death (i.e., after the death of Jesus), which would mean that the approximately 33 years commonly associated with the life of Jesus would be included in neither the BC nor the AD time scales. History The anno Domini dating system was devised in 525 by Dionysius Exiguus to enumerate years in his Easter table. His system was to replace the Diocletian era that had been used in older Easter tables, as he did not wish to continue the memory of a tyrant who persecuted Christians. The last year of the old table, Diocletian Anno Martyrium 247, was immediately followed by the first year of his table, anno Domini 532. When Dionysius devised his table, Julian calendar years were identified by naming the consuls who held office that year— Dionysius himself stated that the "present year" was "the consulship of Probus Junior", which was 525 years "since the incarnation of our Lord Jesus Christ". Thus, Dionysius implied that Jesus' incarnation occurred 525 years earlier, without stating the specific year during which his birth or conception occurred. "However, nowhere in his exposition of his table does Dionysius relate his epoch to any other dating system, whether consulate, Olympiad, year of the world, or regnal year of Augustus; much less does he explain or justify the underlying date." Bonnie J. Blackburn and Leofranc Holford-Strevens briefly present arguments for 2 BC, 1 BC, or AD 1 as the year Dionysius intended for the Nativity or incarnation. 
Among the sources of confusion are: In modern times, incarnation is synonymous with the conception, but some ancient writers, such as Bede, considered incarnation to be synonymous with the Nativity. The civil or consular year began on 1 January, but the Diocletian year began on 29 August (30 August in the year before a Julian leap year). There were inaccuracies in the lists of consuls. There were confused summations of emperors' regnal years. It is not known how Dionysius established the year of Jesus's birth. One major theory is that Dionysius based his calculation on the Gospel of Luke, which states that Jesus was "about thirty years old" shortly after "the fifteenth year of the reign of Tiberius Caesar", and hence subtracted thirty years from that date, or that Dionysius counted back 532 years from the first year of his new table. This method was probably the one used by ancient historians such as Tertullian, Eusebius or Epiphanius, all of whom agree that Jesus was born in 2 BC, probably following this statement of Jesus' age (i.e. subtracting thirty years from AD 29). Alternatively, Dionysius may have used an earlier unknown source. The Chronograph of 354 states that Jesus was born during the consulship of Caesar and Paullus (AD 1), but the logic behind this is also unknown. It has also been speculated by Georges Declercq that Dionysius' desire to replace Diocletian years with a calendar based on the incarnation of Christ was intended to prevent people from believing the imminent end of the world. At the time, it was believed by some that the resurrection of the dead and end of the world would occur 500 years after the birth of Jesus. The old Anno Mundi calendar theoretically commenced with the creation of the world based on information in the Old Testament. It was believed that, based on the Anno Mundi calendar, Jesus was born in the year 5500 (5500 years after the world was created) with the year 6000 of the Anno Mundi calendar marking the end of the world. Anno Mundi 6000 (approximately AD 500) was thus equated with the end of the world but this date had already passed in the time of Dionysius. The "Historia Brittonum" attributed to Nennius written in the 9th century makes extensive use of the Anno Passionis (AP) dating system which was in common use as well as the newer AD dating system. The AP dating system took its start from 'The Year of The Passion'. It is generally accepted by experts there is a 27-year difference between AP and AD reference. The date of birth of Jesus of Nazareth is not stated in the gospels or in any secular text, but most scholars assume a date of birth between 6 BC and 4 BC. The historical evidence is too fragmentary to allow a definitive dating, but the date is estimated through two different approaches—one by analyzing references to known historical events mentioned in the Nativity accounts in the Gospels of Luke and Matthew and the second by working backwards from the estimation of the start of the ministry of Jesus. Popularization The Anglo-Saxon historian Bede, who was familiar with the work of Dionysius Exiguus, used anno Domini dating in his Ecclesiastical History of the English People, which he completed in AD 731. In the History he also used the Latin phrase ante [...] incarnationis dominicae tempus anno sexagesimo ("in the sixtieth year before the time of the Lord's incarnation"), which is equivalent to the English "before Christ", to identify years before the first year of this era. 
Both Dionysius and Bede regarded anno Domini as beginning at the incarnation of Jesus Christ, but "the distinction between Incarnation and Nativity was not drawn until the late 9th century, when in some places the Incarnation epoch was identified with Christ's conception, i. e., the Annunciation on March 25" ("Annunciation style" dating). On the continent of Europe, anno Domini was introduced as the era of choice of the Carolingian Renaissance by the English cleric and scholar Alcuin in the late eighth century. Its endorsement by Emperor Charlemagne and his successors popularizing the use of the epoch and spreading it throughout the Carolingian Empire ultimately lies at the core of the system's prevalence. According to the Catholic Encyclopedia, popes continued to date documents according to regnal years for some time, but usage of AD gradually became more common in Catholic countries from the 11th to the 14th centuries. In 1422, Portugal became the last Western European country to switch to the system begun by Dionysius. Eastern Orthodox countries only began to adopt AD instead of the Byzantine calendar in 1700 when Russia did so, with others adopting it in the 19th and 20th centuries. Although anno Domini was in widespread use by the 9th century, the term "Before Christ" (or its equivalent) did not become common until much later. Bede used the expression "anno [...] ante incarnationem Dominicam" (in the year before the incarnation of the Lord) twice. "Anno ante Christi nativitatem" (in the year before the birth of Christ) is found in 1474 in a work by a German monk. In 1627, the French Jesuit theologian Denis Pétau (Dionysius Petavius in Latin), with his work De doctrina temporum, popularized the usage ante Christum (Latin for "Before Christ") to mark years prior to AD. New year When the reckoning from Jesus' incarnation began replacing the previous dating systems in western Europe, various people chose different Christian feast days to begin the year: Christmas, Annunciation, or Easter. Thus, depending on the time and place, the year number changed on different days in the year, which created slightly different styles in chronology: From 25 March 753 AUC (1 BC), i.e., notionally from the incarnation of Jesus. That first "Annunciation style" appeared in Arles at the end of the 9th century then spread to Burgundy and northern Italy. It was not commonly used and was called calculus pisanus since it was adopted in Pisa and survived there until 1750. From 25 December 753 AUC (1 BC), i.e., notionally from the birth of Jesus. It was called "Nativity style" and had been spread by Bede together with the anno Domini in the early Middle Ages. That reckoning of the Year of Grace from Christmas was used in France, England and most of western Europe (except Spain) until the 12th century (when it was replaced by Annunciation style) and in Germany until the second quarter of the 13th century. From 25 March 754 AUC (AD 1). That second "Annunciation style" may have originated in Fleury Abbey in the early 11th century, but it was spread by the Cistercians. Florence adopted that style in opposition to that of Pisa, so it got the name of calculus florentinus. It soon spread in France and also in England where it became common in the late 12th century and lasted until 1752. From Easter. That mos gallicanus (French custom) bound to a moveable feast was introduced in France by king Philip Augustus (r. 1180–1223), maybe to establish a new style in the provinces reconquered from England. 
However, it never spread beyond the ruling élite. With these various styles, the same day could, in some cases, be dated in 1099, 1100 or 1101. Other Christian and European eras During the first six centuries of what would come to be known as the Christian era, European countries used various systems to count years. Systems in use included consular dating, imperial regnal year dating, and Creation dating. Although the last non-imperial consul, Basilius, was appointed in 541 by Emperor Justinian I, later emperors through to Constans II (641–668) were appointed consuls on the first of January after their accession. All of these emperors, except Justinian, used imperial post-consular years for the years of their reign, along with their regnal years. Long unused, this practice was not formally abolished until Novell XCIV of the law code of Leo VI did so in 888. Another calculation had been developed by the Alexandrian monk Annianus around the year AD 400, placing the Annunciation on 25 March AD 9 (Julian)—eight to ten years after the date that Dionysius was to imply. Although this incarnation was popular during the early centuries of the Byzantine Empire, years numbered from it, an Era of Incarnation, were exclusively used and are still used in Ethiopia. This accounts for the seven- or eight-year discrepancy between the Gregorian and Ethiopian calendars. Byzantine chroniclers like Maximus the Confessor, George Syncellus, and Theophanes dated their years from Annianus' creation of the world. This era, called Anno Mundi, "year of the world" (abbreviated AM), by modern scholars, began its first year on 25 March 5492 BC. Later Byzantine chroniclers used Anno Mundi years from 1 September 5509 BC, the Byzantine Era. No single Anno Mundi epoch was dominant throughout the Christian world. Eusebius of Caesarea in his Chronicle used an era beginning with the birth of Abraham, dated in 2016 BC (AD 1 = 2017 Anno Abrahami). Spain and Portugal continued to date by the Spanish Era (also called Era of the Caesars), which began counting from 38 BC, well into the Middle Ages. In 1422, Portugal became the last Catholic country to adopt the anno Domini system. The Era of Martyrs, which numbered years from the accession of Diocletian in 284, who launched the most severe persecution of Christians, was used by the Church of Alexandria and is still officially used by the Coptic Orthodox and Coptic Catholic churches. It was also used by the Ethiopian and Eritrean churches. Another system was to date from the crucifixion of Jesus, which as early as Hippolytus and Tertullian was believed to have occurred in the consulate of the Gemini (AD 29), which appears in some medieval manuscripts. CE and BCE Alternative names for the anno Domini era include vulgaris aerae (found 1615 in Latin), "Vulgar Era" (in English, as early as 1635), "Christian Era" (in English, in 1652), "Common Era" (in English, 1708), and "Current Era". Since 1856, the alternative abbreviations CE and BCE (sometimes written C.E. and B.C.E.) are sometimes used in place of AD and BC. The "Common/Current Era" ("CE") terminology is often preferred by those who desire a term that does not explicitly make religious references but still uses the same epoch as the anno Domini notation. For example, Cunningham and Starr (1998) write that "B.C.E./C.E. […] do not presuppose faith in Christ and hence are more appropriate for interfaith dialog than the conventional B.C./A.D." 
Upon its foundation, the Republic of China adopted the Minguo Era but used the Western calendar for international purposes. The translated term was 西元 (xīyuán, "Western Era"). Later, in 1949, the People's Republic of China adopted 公元 (gōngyuán, "Common Era") for all purposes domestic and foreign. No year zero: start and end of a century In the AD year numbering system, whether applied to the Julian or Gregorian calendars, AD 1 is immediately preceded by 1 BC, with nothing in between them (there was no year zero). There are debates as to whether a new decade, century, or millennium begins on a year ending in zero or one. For computational reasons, astronomical year numbering and the ISO 8601 standard designate years so that AD 1 = year 1, 1 BC = year 0, 2 BC = year −1, etc. In common usage, ancient dates are expressed in the Julian calendar, but ISO 8601 uses the Gregorian calendar and astronomers may use a variety of time scales depending on the application. Thus dates using the year 0 or negative years may require further investigation before being converted to BC or AD.
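The mapping between BC/AD labels and astronomical year numbering described above is simple but easy to get off by one; the following minimal Python sketch (illustrative only, not part of the source article; the function names are invented here) encodes the rule that AD years keep their number while a year n BC becomes astronomical year 1 − n.

```python
# Minimal sketch: convert between BC/AD notation (no year zero) and
# astronomical year numbering (AD 1 = 1, 1 BC = 0, 2 BC = -1, ...).
# Function names are illustrative, not from any standard library.

def to_astronomical(year: int, era: str) -> int:
    """Convert a year labelled AD/CE or BC/BCE to an astronomical year number."""
    if year < 1:
        raise ValueError("BC/AD years start at 1; there is no year zero in that notation")
    era = era.upper()
    if era in ("AD", "CE"):
        return year          # AD 1 -> 1
    if era in ("BC", "BCE"):
        return 1 - year      # 1 BC -> 0, 45 BC -> -44
    raise ValueError(f"unknown era label: {era}")

def from_astronomical(astro_year: int) -> str:
    """Convert an astronomical year number back to BC/AD notation."""
    return f"AD {astro_year}" if astro_year >= 1 else f"{1 - astro_year} BC"

# The examples given in the text: 1 BC is year 0, 45 BC is year -44.
assert to_astronomical(1, "BC") == 0
assert to_astronomical(45, "BC") == -44
assert from_astronomical(-44) == "45 BC"
```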
Technology
Calendars
null
1412
https://en.wikipedia.org/wiki/Amine
Amine
In chemistry, amines are compounds and functional groups that contain a basic nitrogen atom with a lone pair. Formally, amines are derivatives of ammonia (NH3, in which the H–N–H bond angle is about 107°), wherein one or more hydrogen atoms have been replaced by a substituent such as an alkyl or aryl group (these may respectively be called alkylamines and arylamines; amines in which both types of substituent are attached to one nitrogen atom may be called alkylarylamines). Important amines include amino acids, biogenic amines, trimethylamine, and aniline. Inorganic derivatives of ammonia are also called amines, such as monochloramine (NH2Cl). The substituent −NH2 is called an amino group. The chemical notation for amines contains the letter "R", where "R" is not an element, but an "R-group", which in amines could be a single hydrogen or carbon atom, or could be a hydrocarbon chain. Compounds with a nitrogen atom attached to a carbonyl group, thus having the structure R−CO−NR′R″, are called amides and have different chemical properties from amines. Classification of amines Amines can be classified according to the nature and number of substituents on nitrogen. Aliphatic amines contain only H and alkyl substituents. Aromatic amines have the nitrogen atom connected to an aromatic ring. Amines, alkyl and aryl alike, are organized into three subcategories (see table) based on the number of carbon atoms adjacent to the nitrogen (how many hydrogen atoms of the ammonia molecule are replaced by hydrocarbon groups): Primary (1°) amines—Primary amines arise when one of three hydrogen atoms in ammonia is replaced by an alkyl or aromatic group. Important primary alkyl amines include methylamine, most amino acids, and the buffering agent tris, while primary aromatic amines include aniline. Secondary (2°) amines—Secondary amines have two organic substituents (alkyl, aryl or both) bound to the nitrogen together with one hydrogen. Important representatives include dimethylamine, while an example of an aromatic amine would be diphenylamine. Tertiary (3°) amines—In tertiary amines, nitrogen has three organic substituents. Examples include trimethylamine, which has a distinctively fishy smell, and EDTA. A fourth subcategory is determined by the connectivity of the substituents attached to the nitrogen: Cyclic amines—Cyclic amines are either secondary or tertiary amines. Examples of cyclic amines include the 3-membered ring aziridine and the six-membered ring piperidine. N-methylpiperidine and N-phenylpiperidine are examples of cyclic tertiary amines. It is also possible to have four organic substituents on the nitrogen. These species are not amines but are quaternary ammonium cations and have a charged nitrogen center. Quaternary ammonium salts exist with many kinds of anions. (A compact recap of this classification scheme is sketched at the end of this article.) Naming conventions Amines are named in several ways. Typically, the compound is given the prefix "amino-" or the suffix "-amine". The prefix "N-" shows substitution on the nitrogen atom. An organic compound with multiple amino groups is called a diamine, triamine, tetraamine and so forth. Lower amines are named with the suffix -amine. Higher amines have the prefix amino as a functional group. IUPAC, however, does not recommend this convention, but prefers the alkanamine form, e.g. butan-2-amine. Physical properties Hydrogen bonding significantly influences the properties of primary and secondary amines. 
For example, methyl and ethyl amines are gases under standard conditions, whereas the corresponding methyl and ethyl alcohols are liquids. Amines possess a characteristic ammonia smell, liquid amines have a distinctive "fishy" and foul smell. The nitrogen atom features a lone electron pair that can bind H+ to form an ammonium ion R3NH+. The lone electron pair is represented in this article by two dots above or next to the N. The water solubility of simple amines is enhanced by hydrogen bonding involving these lone electron pairs. Typically salts of ammonium compounds exhibit the following order of solubility in water: primary ammonium () > secondary ammonium () > tertiary ammonium (R3NH+). Small aliphatic amines display significant solubility in many solvents, whereas those with large substituents are lipophilic. Aromatic amines, such as aniline, have their lone pair electrons conjugated into the benzene ring, thus their tendency to engage in hydrogen bonding is diminished. Their boiling points are high and their solubility in water is low. Spectroscopic identification Typically the presence of an amine functional group is deduced by a combination of techniques, including mass spectrometry as well as NMR and IR spectroscopies. 1H NMR signals for amines disappear upon treatment of the sample with D2O. In their infrared spectrum primary amines exhibit two N-H bands, whereas secondary amines exhibit only one. In their IR spectra, primary and secondary amines exhibit distinctive N-H stretching bands near 3300 cm−1. Somewhat less distinctive are the bands appearing below 1600 cm−1, which are weaker and overlap with C-C and C-H modes. For the case of propyl amine, the H-N-H scissor mode appears near 1600 cm−1, the C-N stretch near 1000 cm−1, and the R2N-H bend near 810 cm−1. Structure Alkyl amines Alkyl amines characteristically feature tetrahedral nitrogen centers. C-N-C and C-N-H angles approach the idealized angle of 109°. C-N distances are slightly shorter than C-C distances. The energy barrier for the nitrogen inversion of the stereocenter is about 7 kcal/mol for a trialkylamine. The interconversion has been compared to the inversion of an open umbrella into a strong wind. Amines of the type NHRR' and NRR′R″ are chiral: the nitrogen center bears four substituents counting the lone pair. Because of the low barrier to inversion, amines of the type NHRR' cannot be obtained in optical purity. For chiral tertiary amines, NRR′R″ can only be resolved when the R, R', and R″ groups are constrained in cyclic structures such as N-substituted aziridines (quaternary ammonium salts are resolvable). Aromatic amines In aromatic amines ("anilines"), nitrogen is often nearly planar owing to conjugation of the lone pair with the aryl substituent. The C-N distance is correspondingly shorter. In aniline, the C-N distance is the same as the C-C distances. Basicity Like ammonia, amines are bases. Compared to alkali metal hydroxides, amines are weaker. The basicity of amines depends on: The electronic properties of the substituents (alkyl groups enhance the basicity, aryl groups diminish it). The degree of solvation of the protonated amine, which includes steric hindrance by the groups on nitrogen. Electronic effects Owing to inductive effects, the basicity of an amine might be expected to increase with the number of alkyl groups on the amine. Correlations are complicated owing to the effects of solvation which are opposite the trends for inductive effects. 
Solvation effects also dominate the basicity of aromatic amines (anilines). For anilines, the lone pair of electrons on nitrogen delocalizes into the ring, resulting in decreased basicity. Substituents on the aromatic ring, and their positions relative to the amino group, also affect basicity as seen in the table. Solvation effects Solvation significantly affects the basicity of amines. N-H groups strongly interact with water, especially in ammonium ions. Consequently, the basicity of ammonia is enhanced by a factor of about 10¹¹ by solvation. The intrinsic basicity of amines, i.e. the situation where solvation is unimportant, has been evaluated in the gas phase. In the gas phase, amines exhibit the basicities predicted from the electron-releasing effects of the organic substituents. Thus tertiary amines are more basic than secondary amines, which are more basic than primary amines, and finally ammonia is least basic. The order of pKb's (basicities in water) does not follow this order. Similarly, aniline is more basic than ammonia in the gas phase, but ten thousand times less so in aqueous solution. In aprotic polar solvents such as DMSO, DMF, and acetonitrile the energy of solvation is not as high as in protic polar solvents like water and methanol. For this reason, the basicity of amines in these aprotic solvents is almost solely governed by the electronic effects. Synthesis From alcohols Industrially significant alkyl amines are prepared from ammonia by alkylation with alcohols: ROH + NH3 -> RNH2 + H2O From alkyl and aryl halides Unlike the reaction of amines with alcohols, the reaction of amines and ammonia with alkyl halides is used for synthesis in the laboratory: RX + 2 R'NH2 -> RR'NH + [RR'NH2]X In such reactions, which are more useful for alkyl iodides and bromides, the degree of alkylation is difficult to control such that one obtains mixtures of primary, secondary, and tertiary amines, as well as quaternary ammonium salts. Selectivity can be improved via the Delépine reaction, although this is rarely employed on an industrial scale. Selectivity is also assured in the Gabriel synthesis, which involves an organohalide reacting with potassium phthalimide. Aryl halides are much less reactive toward amines and for that reason are more controllable. A popular way to prepare aryl amines is the Buchwald-Hartwig reaction. From alkenes Disubstituted alkenes react with HCN in the presence of strong acids to give formamides, which can be decarbonylated. This method, the Ritter reaction, is used industrially to produce tertiary amines such as tert-octylamine. Hydroamination of alkenes is also widely practiced. The reaction is catalyzed by zeolite-based solid acids. Reductive routes Via the process of hydrogenation, unsaturated N-containing functional groups are reduced to amines using hydrogen in the presence of a nickel catalyst. Suitable groups include nitriles, azides, imines including oximes, amides, and nitro. In the case of nitriles, reactions are sensitive to acidic or alkaline conditions, which can cause hydrolysis of the group. Lithium aluminium hydride (LiAlH4) is more commonly employed for the reduction of these same groups on the laboratory scale. Many amines are produced from aldehydes and ketones via reductive amination, which can either proceed catalytically or stoichiometrically. Aniline (C6H5NH2) and its derivatives are prepared by reduction of the nitroaromatics. In industry, hydrogen is the preferred reductant, whereas, in the laboratory, tin and iron are often employed. 
Specialized methods Many methods exist for the preparation of amines, many of these methods being rather specialized. Reactions Alkylation, acylation, and sulfonation, etc. Aside from their basicity, the dominant reactivity of amines is their nucleophilicity. Most primary amines are good ligands for metal ions to give coordination complexes. Amines are alkylated by alkyl halides. Acyl chlorides and acid anhydrides react with primary and secondary amines to form amides (the "Schotten–Baumann reaction"). Similarly, with sulfonyl chlorides, one obtains sulfonamides. This transformation, known as the Hinsberg reaction, is a chemical test for the presence of amines. Because amines are basic, they neutralize acids to form the corresponding ammonium salts. When formed from carboxylic acids and primary and secondary amines, these salts thermally dehydrate to form the corresponding amides. Amines undergo sulfamation upon treatment with sulfur trioxide or sources thereof: R2NH + SO3 -> R2NSO3H Diazotization Amines react with nitrous acid to give diazonium salts. The alkyl diazonium salts are of little importance because they are too unstable. The most important members are derivatives of aromatic amines such as aniline ("phenylamine") (A = aryl or naphthyl): ANH2 + HNO2 + HX -> AN2+ + X- + 2 H2O Anilines and naphthylamines form more stable diazonium salts, which can be isolated in the crystalline form. Diazonium salts undergo a variety of useful transformations involving replacement of the N2 group with anions. For example, cuprous cyanide gives the corresponding nitriles: AN2+ + Y- -> AY + N2 Aryldiazoniums couple with electron-rich aromatic compounds such as a phenol to form azo compounds. Such reactions are widely applied to the production of dyes. Conversion to imines Imine formation is an important reaction. Primary amines react with ketones and aldehydes to form imines. In the case of formaldehyde (R' = H), these products typically exist as cyclic trimers: RNH2 + R'_2C=O -> R'_2C=NR + H2O Reduction of these imines gives secondary amines: R'_2C=NR + H2 -> R'_2CH-NHR Similarly, secondary amines react with ketones and aldehydes to form enamines: R2NH + R'(R''CH2)C=O -> R''CH=C(NR2)R' + H2O Mercuric ions reversibly oxidize tertiary amines with an α hydrogen to iminium ions: Hg^2+ + R2NCH2R' <=> Hg + [R2N=CHR']+ + H+ Overview An overview of the reactions of amines is given below. Biological activity Amines are ubiquitous in biology. The breakdown of amino acids releases amines, famously in the case of decaying fish which smell of trimethylamine. Many neurotransmitters are amines, including epinephrine, norepinephrine, dopamine, serotonin, and histamine. Protonated amino groups (−NH3+) are the most common positively charged moieties in proteins, specifically in the amino acid lysine. The anionic polymer DNA is typically bound to various amine-rich proteins. Additionally, the terminal charged primary ammonium on lysine forms salt bridges with carboxylate groups of other amino acids in polypeptides, which is one of the primary influences on the three-dimensional structures of proteins. Amine hormones Hormones derived from the modification of amino acids are referred to as amine hormones. Typically, the original structure of the amino acid is modified such that a –COOH, or carboxyl, group is removed, whereas the −NH2, or amine, group remains. Amine hormones are synthesized from the amino acids tryptophan or tyrosine. 
Application of amines Dyes Primary aromatic amines are used as a starting material for the manufacture of azo dyes. They react with nitrous acid to form diazonium salts, which can undergo coupling reactions to form azo compounds. As azo-compounds are highly coloured, they are widely used in dyeing industries, such as methyl orange, Direct Brown 138, Sunset Yellow FCF, and Ponceau. Drugs Most drugs and drug candidates contain amine functional groups: Chlorpheniramine is an antihistamine that helps to relieve allergic disorders due to cold, hay fever, itchy skin, insect bites and stings. Chlorpromazine is a tranquilizer that sedates without inducing sleep. It is used to relieve anxiety, excitement, restlessness or even mental disorder. Ephedrine and phenylephrine, as amine hydrochlorides, are used as decongestants. Amphetamine, methamphetamine, and methcathinone are psychostimulant amines that are listed as controlled substances by the US DEA. Thioridazine, an antipsychotic drug, is an amine which is believed to exhibit its antipsychotic effects, in part, due to its effects on other amines. Amitriptyline, imipramine, lofepramine and clomipramine are tricyclic antidepressants and tertiary amines. Nortriptyline, desipramine, and amoxapine are tricyclic antidepressants and secondary amines. (The tricyclics are grouped by the nature of the final amino group on the side chain.) Substituted tryptamines and phenethylamines are key basic structures for a large variety of psychedelic drugs. Opiate analgesics such as morphine, codeine, and heroin are tertiary amines. Gas treatment Aqueous monoethanolamine (MEA), diglycolamine (DGA), diethanolamine (DEA), diisopropanolamine (DIPA) and methyldiethanolamine (MDEA) are widely used industrially for removing carbon dioxide (CO2) and hydrogen sulfide (H2S) from natural gas and refinery process streams. They may also be used to remove CO2 from combustion gases and flue gases and may have potential for abatement of greenhouse gases. Related processes are known as sweetening. Epoxy resin curing agents Amines are often used as epoxy resin curing agents. These include dimethylethylamine, cyclohexylamine, and a variety of diamines such as 4,4-diaminodicyclohexylmethane. Multifunctional amines such as tetraethylenepentamine and triethylenetetramine are also widely used in this capacity. The reaction proceeds by the lone pair of electrons on the amine nitrogen attacking the outermost carbon on the oxirane ring of the epoxy resin. This relieves ring strain on the epoxide and is the driving force of the reaction. Molecules with tertiary amine functionality are often used to accelerate the epoxy-amine curing reaction and include substances such as 2,4,6-Tris(dimethylaminomethyl)phenol. It has been stated that this is the most widely used room temperature accelerator for two-component epoxy resin systems. Safety Low molecular weight simple amines, such as ethylamine, are only weakly toxic, with LD50 values between 100 and 1000 mg/kg. They are skin irritants, especially as some are easily absorbed through the skin. Amines are a broad class of compounds, and more complex members of the class can be extremely bioactive, for example strychnine.
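As a compact recap of the primary/secondary/tertiary/quaternary scheme summarized earlier in this article, here is a minimal, purely illustrative Python sketch (not a cheminformatics tool; the function name is invented) that labels a nitrogen centre by how many of ammonia's three hydrogens have been replaced by organic groups.

```python
# Illustrative sketch only: label a nitrogen centre by its number of organic
# substituents, following the classification described above.

def classify_nitrogen(organic_substituents: int) -> str:
    """Return the amine class for a nitrogen bearing 0-4 organic substituents."""
    labels = {
        0: "ammonia (no organic substituents)",
        1: "primary (1°) amine",         # e.g. methylamine, aniline
        2: "secondary (2°) amine",       # e.g. dimethylamine, diphenylamine
        3: "tertiary (3°) amine",        # e.g. trimethylamine
        4: "quaternary ammonium cation", # not an amine; positively charged nitrogen
    }
    if organic_substituents not in labels:
        raise ValueError("a nitrogen centre can bear at most four substituents")
    return labels[organic_substituents]

for n in range(5):
    print(n, "->", classify_nitrogen(n))
```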
Physical sciences
Carbon–nitrogen bond
null
1418
https://en.wikipedia.org/wiki/Absolute%20zero
Absolute zero
Absolute zero is the lowest limit of the thermodynamic temperature scale; a state at which the enthalpy and entropy of a cooled ideal gas reach their minimum value. The fundamental particles of nature have minimum vibrational motion, retaining only quantum mechanical, zero-point energy-induced particle motion. The theoretical temperature is determined by extrapolating the ideal gas law; by international agreement, absolute zero is taken as 0 kelvin (International System of Units), which is −273.15 degrees on the Celsius scale, and equals −459.67 degrees on the Fahrenheit scale (United States customary units or imperial units). The Kelvin and Rankine temperature scales set their zero points at absolute zero by definition. It is commonly thought of as the lowest temperature possible, but it is not the lowest enthalpy state possible, because all real substances begin to depart from the ideal gas when cooled as they approach the change of state to liquid, and then to solid; and the sum of the enthalpy of vaporization (gas to liquid) and enthalpy of fusion (liquid to solid) exceeds the ideal gas's change in enthalpy to absolute zero. In the quantum-mechanical description, matter at absolute zero is in its ground state, the point of lowest internal energy. The laws of thermodynamics show that absolute zero cannot be reached using only thermodynamic means, because the temperature of the substance being cooled approaches the temperature of the cooling agent asymptotically. Even a system at absolute zero, if it could somehow be achieved, would still possess quantum mechanical zero-point energy, the energy of its ground state at absolute zero; the kinetic energy of the ground state cannot be removed. Scientists and technologists routinely achieve temperatures close to absolute zero, where matter exhibits quantum effects such as superconductivity, superfluidity, and Bose–Einstein condensation. Thermodynamics near absolute zero At temperatures near 0 K, nearly all molecular motion ceases and ΔS = 0 for any adiabatic process, where S is the entropy. In such a circumstance, pure substances can (ideally) form perfect crystals with no structural imperfections as T → 0. Max Planck's strong form of the third law of thermodynamics states that the entropy of a perfect crystal vanishes at absolute zero. The original Nernst heat theorem makes the weaker and less controversial claim that the entropy change for any isothermal process approaches zero as T → 0, that is, lim (T → 0) ΔS = 0. The implication is that the entropy of a perfect crystal approaches a constant value. An adiabat is a state with constant entropy, typically represented on a graph as a curve in a manner similar to isotherms and isobars. The Nernst postulate identifies the isotherm T = 0 as coincident with the adiabat S = 0, although other isotherms and adiabats are distinct. As no two adiabats intersect, no other adiabat can intersect the T = 0 isotherm. Consequently, no adiabatic process initiated at nonzero temperature can lead to zero temperature (Callen, pp. 189–190). A perfect crystal is one in which the internal lattice structure extends uninterrupted in all directions. The perfect order can be represented by translational symmetry along three (not usually orthogonal) axes. Every lattice element of the structure is in its proper place, whether it is a single atom or a molecular grouping. For substances that exist in two (or more) stable crystalline forms, such as diamond and graphite for carbon, there is a kind of chemical degeneracy. 
The question remains whether both can have zero entropy at T = 0 even though each is perfectly ordered. Perfect crystals never occur in practice; imperfections, and even entire amorphous material inclusions, can and do get "frozen in" at low temperatures, so transitions to more stable states do not occur. Using the Debye model, the specific heat and entropy of a pure crystal are proportional to T³, while the enthalpy and chemical potential are proportional to T⁴ (Guggenheim, p. 111). These quantities drop toward their T = 0 limiting values and approach them with zero slopes. For the specific heats at least, the limiting value itself is definitely zero, as borne out by experiments to below 10 K. Even the less detailed Einstein model shows this curious drop in specific heats. In fact, all specific heats vanish at absolute zero, not just those of crystals. Likewise for the coefficient of thermal expansion. Maxwell's relations show that various other quantities also vanish. These phenomena were unanticipated. Since the relation between changes in Gibbs free energy (G), the enthalpy (H) and the entropy is ΔG = ΔH − TΔS, it follows that, as T decreases, ΔG and ΔH approach each other (so long as ΔS is bounded). Experimentally, it is found that all spontaneous processes (including chemical reactions) result in a decrease in G as they proceed toward equilibrium. If ΔS and/or T are small, the condition ΔG < 0 may imply that ΔH < 0, which would indicate an exothermic reaction. However, this is not required; endothermic reactions can proceed spontaneously if the TΔS term is large enough. Moreover, the slopes of the derivatives of ΔG and ΔH converge and are equal to zero at T = 0. This ensures that ΔG and ΔH are nearly the same over a considerable range of temperatures and justifies the approximate empirical Principle of Thomsen and Berthelot, which states that the equilibrium state to which a system proceeds is the one that evolves the greatest amount of heat, i.e., an actual process is the most exothermic one (Callen, pp. 186–187). One model that estimates the properties of an electron gas at absolute zero in metals is the Fermi gas. The electrons, being fermions, must be in different quantum states, which leads the electrons to have very high typical velocities, even at absolute zero. The maximum energy that electrons can have at absolute zero is called the Fermi energy. The Fermi temperature is defined as this maximum energy divided by the Boltzmann constant, and is on the order of 80,000 K for typical electron densities found in metals (a rough numerical check of this figure is sketched at the end of this article). For temperatures significantly below the Fermi temperature, the electrons behave in almost the same way as at absolute zero. This explains the failure of the classical equipartition theorem for metals that eluded classical physicists in the late 19th century. Relation with Bose–Einstein condensate A Bose–Einstein condensate (BEC) is a state of matter of a dilute gas of weakly interacting bosons confined in an external potential and cooled to temperatures very near absolute zero. Under such conditions, a large fraction of the bosons occupy the lowest quantum state of the external potential, at which point quantum effects become apparent on a macroscopic scale. This state of matter was first predicted by Satyendra Nath Bose and Albert Einstein in 1924–1925. Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons). 
Einstein was impressed, translated the paper from English to German and submitted it for Bose to the Zeitschrift für Physik, which published it. Einstein then extended Bose's ideas to material particles (or matter) in two other papers. Seventy years later, in 1995, the first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST-JILA lab, using a gas of rubidium atoms cooled to about 170 nanokelvin (nK). In 2003, researchers at the Massachusetts Institute of Technology (MIT) achieved a temperature of about 450 picokelvin (pK) in a BEC of sodium atoms. The associated black-body (peak emittance) wavelength of 6.4 megameters is roughly the radius of Earth. In 2021, University of Bremen physicists achieved a BEC with a temperature of only 38 picokelvin, the current coldest temperature record. Absolute temperature scales Absolute, or thermodynamic, temperature is conventionally measured in kelvin (Celsius-scaled increments) and, with increasing rarity, in the Rankine scale (Fahrenheit-scaled increments). Absolute temperature measurement is uniquely determined by a multiplicative constant which specifies the size of the degree, so the ratios of two absolute temperatures, T2/T1, are the same in all scales. The most transparent definition of this standard comes from the Maxwell–Boltzmann distribution. It can also be found in Fermi–Dirac statistics (for particles of half-integer spin) and Bose–Einstein statistics (for particles of integer spin). All of these define the relative numbers of particles in a system as decreasing exponential functions of energy (at the particle level) over kT, with k representing the Boltzmann constant and T representing the temperature observed at the macroscopic level. Negative temperatures Temperatures that are expressed as negative numbers on the familiar Celsius or Fahrenheit scales are simply colder than the zero points of those scales. Certain systems can achieve truly negative temperatures; that is, their thermodynamic temperature (expressed in kelvins) can be a negative quantity. A system with a truly negative temperature is not colder than absolute zero. Rather, a system with a negative temperature is hotter than any system with a positive temperature, in the sense that if a negative-temperature system and a positive-temperature system come in contact, heat flows from the negative-temperature to the positive-temperature system. Most familiar systems cannot achieve negative temperatures because adding energy always increases their entropy. However, some systems have a maximum amount of energy that they can hold, and as they approach that maximum energy their entropy actually begins to decrease. Because temperature is defined by the relationship between energy and entropy, such a system's temperature becomes negative, even though energy is being added. As a result, the Boltzmann factor for states of systems at negative temperature increases rather than decreases with increasing state energy. Therefore, no complete system, i.e. including the electromagnetic modes, can have negative temperatures, since there is no highest energy state, so that the sum of the probabilities of the states would diverge for negative temperatures. However, for quasi-equilibrium systems (e.g. spins out of equilibrium with the electromagnetic field) this argument does not apply, and negative effective temperatures are attainable. 
On 3 January 2013, physicists announced that for the first time they had created a quantum gas made up of potassium atoms with a negative temperature in motional degrees of freedom. History One of the first to discuss the possibility of an absolute minimal temperature was Robert Boyle. His 1665 New Experiments and Observations touching Cold articulated the dispute known as the primum frigidum. The concept was well known among naturalists of the time. Some contended an absolute minimum temperature occurred within earth (as one of the four classical elements), others within water, others air, and some more recently within nitre. But all of them seemed to agree that, "There is some body or other that is of its own nature supremely cold and by participation of which all other bodies obtain that quality." Limit to the "degree of cold" The question of whether there is a limit to the degree of coldness possible, and, if so, where the zero must be placed, was first addressed by the French physicist Guillaume Amontons in 1703, in connection with his improvements in the air thermometer. His instrument indicated temperatures by the height at which a certain mass of air sustained a column of mercury—the pressure, or "spring", of the air varying with temperature. Amontons therefore argued that the zero of his thermometer would be that temperature at which the spring of the air was reduced to nothing. He used a scale that marked the boiling point of water at +73 and the melting point of ice at +51½, so that the zero was equivalent to about −240 on the Celsius scale. Amontons held that the absolute zero cannot be reached, so he never attempted to compute it explicitly. The value of −240 °C, or "431 divisions [in Fahrenheit's thermometer] below the cold of freezing water", was published by George Martine in 1740. This close approximation to the modern value of −273.15 °C for the zero of the air thermometer was further improved upon in 1779 by Johann Heinrich Lambert, who observed that −270 °C might be regarded as absolute cold. Values of this order for the absolute zero were not, however, universally accepted about this period. Pierre-Simon Laplace and Antoine Lavoisier, in their 1780 treatise on heat, arrived at values ranging from 1,500 to 3,000 below the freezing point of water, and thought that in any case it must be at least 600 below. John Dalton in his Chemical Philosophy gave ten calculations of this value, and finally adopted −3,000 °C as the natural zero of temperature. Charles's law From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton, and Joseph Louis Gay-Lussac that, at constant pressure, ideal gases expanded or contracted their volume linearly (Charles's law) by about 1/273 of their volume at 0 °C per degree Celsius of temperature change up or down, between 0 °C and 100 °C. This suggested that the volume of a gas cooled to about −273 °C would reach zero. Lord Kelvin's work After James Prescott Joule had determined the mechanical equivalent of heat, Lord Kelvin approached the question from an entirely different point of view, and in 1848 devised a scale of absolute temperature that was independent of the properties of any particular substance and was based on Carnot's theory of the Motive Power of Heat and data published by Henri Victor Regnault. It followed from the principles on which this scale was constructed that its zero was placed at −273 °C, at almost precisely the same point as the zero of the air thermometer, where the air volume would reach "nothing". 
This value was not immediately accepted; values differing slightly from −273 °C, derived from laboratory measurements and observations of astronomical refraction, remained in use in the early 20th century. The race to absolute zero With a better theoretical understanding of absolute zero, scientists were eager to reach this temperature in the lab. By 1845, Michael Faraday had managed to liquefy most gases then known to exist, and set a new record for the lowest temperature achieved at the time. Faraday believed that certain gases, such as oxygen, nitrogen, and hydrogen, were permanent gases and could not be liquefied. Decades later, in 1873, the Dutch theoretical scientist Johannes Diderik van der Waals demonstrated that these gases could be liquefied, but only under conditions of very high pressure and very low temperatures. In 1877, Louis Paul Cailletet in France and Raoul Pictet in Switzerland succeeded in producing the first droplets of liquid air. This was followed in 1883 by the production of liquid oxygen by the Polish professors Zygmunt Wróblewski and Karol Olszewski. Scottish chemist and physicist James Dewar and Dutch physicist Heike Kamerlingh Onnes took on the challenge to liquefy the remaining gases, hydrogen and helium. In 1898, after 20 years of effort, Dewar was the first to liquefy hydrogen, reaching a new low-temperature record of about 20 K (−253 °C). However, Kamerlingh Onnes, his rival, was the first to liquefy helium, in 1908, using several precooling stages and the Hampson–Linde cycle. He lowered the temperature to the boiling point of helium, about 4.2 K (−269 °C). By reducing the pressure of the liquid helium, he achieved an even lower temperature, near 1.5 K. These were the coldest temperatures achieved on Earth at the time and his achievement earned him the Nobel Prize in 1913. Kamerlingh Onnes would continue to study the properties of materials at temperatures near absolute zero, describing superconductivity and superfluids for the first time. Very low temperatures The average temperature of the universe today is approximately 2.7 K, based on measurements of cosmic microwave background radiation. Standard models of the future expansion of the universe predict that the average temperature of the universe is decreasing over time. This temperature is calculated as the mean density of energy in space; it should not be confused with the mean electron temperature (total energy divided by particle count) which has increased over time. Absolute zero cannot be achieved, although it is possible to reach temperatures close to it through the use of evaporative cooling, cryocoolers, dilution refrigerators, and nuclear adiabatic demagnetization. The use of laser cooling has produced temperatures of less than a billionth of a kelvin. At very low temperatures in the vicinity of absolute zero, matter exhibits many unusual properties, including superconductivity, superfluidity, and Bose–Einstein condensation. To study such phenomena, scientists have worked to obtain even lower temperatures. In November 2000, nuclear spin temperatures below 100 picokelvin were reported for an experiment at the Helsinki University of Technology's Low Temperature Lab in Espoo, Finland. However, this was the temperature of one particular degree of freedom—a quantum property called nuclear spin—not the overall average thermodynamic temperature for all possible degrees of freedom. In February 2003, the Boomerang Nebula was observed to have been releasing gases at high speed for the last 1,500 years. 
This has cooled it down to approximately 1 K, as deduced by astronomical observation, which is the lowest natural temperature ever recorded. In November 2003, 90377 Sedna was discovered and is one of the coldest known objects in the Solar System, with a very low average surface temperature due to its extremely far orbit of 903 astronomical units. In May 2005, the European Space Agency proposed research in space to achieve femtokelvin temperatures. In May 2006, the Institute of Quantum Optics at the University of Hannover gave details of technologies and benefits of femtokelvin research in space. In January 2013, physicist Ulrich Schneider of the University of Munich in Germany reported having achieved temperatures formally below absolute zero ("negative temperature") in gases. The gas is artificially forced out of equilibrium into a high potential energy state, which is, however, cold. When it then emits radiation it approaches the equilibrium, and can continue emitting despite reaching formal absolute zero; thus, the temperature is formally negative. In September 2014, scientists in the CUORE collaboration at the Laboratori Nazionali del Gran Sasso in Italy cooled a copper vessel with a volume of one cubic meter to 0.006 K (6 mK) for 15 days, setting a record for the lowest temperature in the known universe over such a large contiguous volume. In June 2015, experimental physicists at MIT cooled molecules in a gas of sodium potassium to a temperature of 500 nanokelvin; the molecules are expected to exhibit an exotic state of matter if cooled somewhat further. In 2017, the Cold Atom Laboratory (CAL), an experimental instrument, was developed for launch to the International Space Station (ISS) in 2018. The instrument has created extremely cold conditions in the microgravity environment of the ISS leading to the formation of Bose–Einstein condensates. In this space-based laboratory, temperatures in the picokelvin range are projected to be achievable, and it could further the exploration of unknown quantum mechanical phenomena and test some of the most fundamental laws of physics. The current world record for effective temperatures was set in 2021 at 38 picokelvin through matter-wave lensing of rubidium Bose–Einstein condensates.
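To give a rough numerical check of the "on the order of 80,000 K" Fermi temperature quoted in the Fermi-gas discussion earlier in this article, here is a hedged back-of-the-envelope Python sketch. It uses the standard free-electron-gas expression E_F = ħ²(3π²n)^(2/3)/(2mₑ), which is not stated in the article, and an assumed textbook value for the conduction-electron density of copper; both are assumptions added purely for illustration.

```python
# Hedged sketch: estimate the Fermi energy and Fermi temperature of a free-electron gas.
# The formula and the copper electron density are standard textbook values assumed here;
# they are not taken from the article itself.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
K_B = 1.380649e-23       # Boltzmann constant, J/K

def fermi_temperature(n_per_m3: float) -> float:
    """Fermi temperature in kelvin for a free-electron gas of the given number density."""
    e_fermi = HBAR**2 * (3 * math.pi**2 * n_per_m3) ** (2 / 3) / (2 * M_E)  # joules
    return e_fermi / K_B

n_copper = 8.5e28  # assumed conduction-electron density of copper, m^-3
print(f"T_F ≈ {fermi_temperature(n_copper):.2e} K")  # roughly 8e4 K, i.e. ~80,000 K
```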
Physical sciences
Thermodynamics
null