Dataset columns: meta (dict), text (string, lengths 2 to 641k characters)
{ "pile_set_name": "HackerNews" }
Specialized Evolution of the General Purpose CPU - adamnemecek http://blog.acolyer.org/2015/02/06/specialized-evolution-of-the-general-purpose-cpu/ ====== vardump X86 instructions are getting a bit more specialized over time. Like Intel's new SSD (solid state disk, non-volatile storage) instructions. Seriously. [http://danluu.com/clwb-pcommit/](http://danluu.com/clwb-pcommit/) Description for PCOMMIT from Intel's documentation: The PCOMMIT instruction causes certain store-to-memory operations to persistent memory ranges to become persistent (power failure protected). Specifically, PCOMMIT applies to those stores that have been accepted to memory. ... Intel's description of CLWB: Writes back to memory the cache line (if dirty) that contains the linear address specified with the memory operand from any level of the cache hierarchy in the cache coherence domain. The line may be retained in the cache hierarchy in non-modified state. Retaining the line in the cache hierarchy is a performance optimization (treated as a hint by hardware) to reduce the possibility of cache miss on a subsequent access. Hardware may choose to retain the line at any of the levels in the cache hierarchy, and in some cases, may invalidate the line from the cache hierarchy. The source operand is a byte memory location. [https://software.intel.com/sites/default/files/managed/0d/53...](https://software.intel.com/sites/default/files/managed/0d/53/319433-022.pdf) ~~~ walterbell Do you know which CPUs have or will have these instructions? ------ nn3 Direct link to the paper (the CIDR paper with all the data): [http://www.cidrdb.org/cidr2015/CIDR15_KeyNote.pdf](http://www.cidrdb.org/cidr2015/CIDR15_KeyNote.pdf) The original link is really only blog spam about it and should be replaced. Yes, it's a fascinating paper. It contradicts the conventional "CPUs have stopped getting faster" story you often read.
{ "pile_set_name": "Wikipedia (en)" }
Don Abel
Don Abel may refer to:
Donald Abel (born 1952), politician in Ontario, Canada
Don G. Abel (1894–1980), American judge
{ "pile_set_name": "Github" }
//
//  PTKComponent.m
//  Stripe
//
//  Created by Phil Cohen on 12/18/13.
//
//

#import "PTKComponent.h"

@implementation PTKComponent

- (id)initWithString:(NSString *)string
{
    return (self = [super init]);
}

- (NSString *)string
{
    @throw [NSException exceptionWithName:NSInternalInconsistencyException
                                   reason:[NSString stringWithFormat:@"You must override %@ in a subclass", NSStringFromSelector(_cmd)]
                                 userInfo:nil];
}

- (BOOL)isValid
{
    @throw [NSException exceptionWithName:NSInternalInconsistencyException
                                   reason:[NSString stringWithFormat:@"You must override %@ in a subclass", NSStringFromSelector(_cmd)]
                                 userInfo:nil];
}

- (BOOL)isPartiallyValid
{
    @throw [NSException exceptionWithName:NSInternalInconsistencyException
                                   reason:[NSString stringWithFormat:@"You must override %@ in a subclass", NSStringFromSelector(_cmd)]
                                 userInfo:nil];
}

- (NSString *)formattedString
{
    return [self string];
}

- (NSString *)formattedStringWithTrail
{
    return [self string];
}

@end
{ "pile_set_name": "Wikipedia (en)" }
Charles A. Gieschen
Charles A. Gieschen is a Christian theologian who currently serves as Professor of Exegetical Theology and Dean of Academics at Concordia Theological Seminary in Fort Wayne, Indiana. His Ph.D. is from the Department of Near Eastern Studies at the University of Michigan, where he studied the literature of Second Temple Judaism and Early Christianity under Jarl Fossum and Gabriele Boccaccini, and alongside April DeConick. Gieschen's dissertation, entitled Angelomorphic Christology: Antecedents and Early Evidence, was published by Brill Academic Publishers in 1998. He also holds a Master of Theology degree in New Testament from Princeton Theological Seminary, where he studied under James H. Charlesworth and Martinus de Boer, and a Master of Divinity degree from Concordia Theological Seminary. He is a member of the Society of Biblical Literature and the International Enoch Seminar. He is the Associate Editor of the journal Concordia Theological Quarterly and on the American Editorial Board of Henoch, a journal dealing with the literature of Second Temple Judaism and Early Christianity. Gieschen teaches courses primarily in New Testament and is a specialist in early Christology. He is also an ordained minister in the Lutheran Church–Missouri Synod, and served at Trinity Lutheran Church in Traverse City, Michigan from 1985–1996, during which time he obtained his Ph.D.
Scholarly Publications
Thesis
Books - publication of PhD thesis
Book chapters and journal articles
References
Category:American Lutheran theologians Category:American religion academics Category:American biblical scholars Category:American theologians Category:University of Michigan alumni Category:Princeton Theological Seminary alumni Category:Christian writers Category:Living people Category:Concordia Theological Seminary alumni Category:Lutheran Church–Missouri Synod Category:Year of birth missing (living people) Category:Lutheran biblical scholars
{ "pile_set_name": "PubMed Abstracts" }
Direct Fick cardiac output: are assumed values of oxygen consumption acceptable? The use of assumed values of oxygen consumption has become an accepted practice in the calculation of direct Fick cardiac output. A survey showed that the assumed values in common use were derived from basal metabolic rate studies on normal subjects, a use which may not be valid. We have compared previous assumed values based on basal metabolic rate or cardiac catheterization studies with those obtained by direct measurement in 80 patients (age range 38-78 years) with various cardiac disorders. Comparison of the assumed and directly measured values of indexed oxygen consumption and the cardiac index showed large discrepancies, with over half the values differing by more than +/- 10% and many by more than +/- 25% from the measured value. Assumed values of oxygen consumption should be used with caution when calculating cardiac output during cardiac catheterization procedures, because large errors can result. The equations of LaFarge and Miettinen gave the closest approximation to the measured data and their use is recommended in preference to values predicted from basal metabolic rate studies.
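For reference, the direct Fick principle that the abstract relies on relates cardiac output to oxygen consumption and the arteriovenous oxygen content difference; the notation below is the standard one and is not taken from the abstract itself:

$$\mathrm{CO} = \frac{\dot{V}\mathrm{O}_2}{C_{a}\mathrm{O}_2 - C_{\bar{v}}\mathrm{O}_2},$$

where $\dot{V}\mathrm{O}_2$ is oxygen consumption (mL O$_2$/min) and $C_{a}\mathrm{O}_2$ and $C_{\bar{v}}\mathrm{O}_2$ are the arterial and mixed venous oxygen contents (mL O$_2$ per litre of blood), giving cardiac output in L/min. Because cardiac output is directly proportional to $\dot{V}\mathrm{O}_2$, an assumed oxygen consumption that is off by 25% produces a 25% error in the calculated cardiac output, which is the concern the abstract raises.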
{ "pile_set_name": "USPTO Backgrounds" }
Height or distance measurement has a wide variety of possible applications. However, the environment where the height measurement is being made can present a wide variety of challenges. This is particularly the case where height or distance measurements are being made in automotive applications. For example, in measuring the height of a vehicle frame above the surface of a road, challenges are typically presented by road noise, dirt, dust, and vibrations, which are normally present in the environment surrounding the vehicle where the measurement is being taken. DE 10 2006 017 275 A1 and EP 1845278 A1 describe an air spring having an integrated positioning device, wherein the distance between two parts of the air spring can be measured by an analogue proximity sensor. Commonly used proximity sensors are, for example, based on an ultrasonic measurement principle, which is very sensitive in noisy and vibrating environments, since acoustic noise and the ultrasonic measurement rely on the same physical principle, i.e. sound propagation. These pneumatic air springs have an integrated height measuring device and a pressure chamber or inner chamber. The analogue proximity sensor is aligned on the exterior of the inner chamber, and a metal plate is arranged in the interior opposite the proximity sensor. The proximity sensor and the metal plate are arranged so as to be pre-adjustable relative to each other. Further, DE 10 2008 064 647 A1 describes an air spring for a vehicle having a measuring device, which measuring device may transmit data and energy contactlessly over a predetermined, fixed distance. This pneumatic cushioning equipment has a base unit, which has a pressure source, and a valve unit, which has an air supply made of non-metallic material, particularly plastic. A switching valve of the base unit is provided between the pressure source and the corresponding valve unit of the air supply. EP 2 366 972 and United States Patent Publication No. 2012/0056616 A1 describe a sensor device for height measurement in an air spring and a corresponding method that allow changes in the working stroke of the air spring to be determined. These publications more specifically disclose a sensor device for a height measurement, comprising: a transceiving coil arrangement including at least one transceiving coil; a transmitting drive unit; a receiver unit; a reference coil arrangement; and a reference control unit, wherein the transceiving coil arrangement is coupled to both the transmitting drive circuit and the receiver unit, wherein the reference control unit is coupled to the reference coil arrangement, wherein the reference coil arrangement is movably positioned with respect to the transceiving coil arrangement, wherein the drive unit is adapted to drive the transceiving coil arrangement with an AC power signal of a predetermined duration for generating a magnetic field, wherein the reference control unit is adapted for accumulating energy out of the generated magnetic field and for generating a reference signal based on an amount of the accumulated energy, and wherein the receiver unit is adapted for receiving the reference signal and for outputting a signal for determining a distance between the transceiving coil arrangement and the reference coil arrangement based on at least one out of a group, the group consisting of the reference signal and the duration of the AC power signal.
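As a rough, illustrative sketch of the measurement principle described in the last-cited publications (not an implementation of any of the patents above), the code below assumes that the observable quantity is the drive duration needed before the reference unit has accumulated enough energy to emit its reference signal, and that this duration is mapped to a distance through a hypothetical, monotonic calibration table; every name and number in it is invented for the example.

import numpy as np

# Hypothetical calibration data: drive duration (microseconds) required before the
# reference coil has accumulated enough energy to fire, measured at known distances (mm).
# Weaker magnetic coupling at larger distances means a longer required drive duration.
CAL_DURATION_US = np.array([120.0, 150.0, 200.0, 280.0, 400.0, 580.0])
CAL_DISTANCE_MM = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])

def estimate_distance_mm(measured_duration_us):
    """Estimate coil-to-coil distance from the measured drive duration.

    Uses simple linear interpolation over the calibration table; values outside
    the calibrated range are clamped to its endpoints.
    """
    return float(np.interp(measured_duration_us, CAL_DURATION_US, CAL_DISTANCE_MM))

if __name__ == "__main__":
    # Example: the reference signal arrived after 240 microseconds of driving the coil.
    print(f"Estimated distance: {estimate_distance_mm(240.0):.1f} mm")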
{ "pile_set_name": "USPTO Backgrounds" }
In many regions of the world, cycles, for example bicycles or tricycles, are still the predominant mode of transportation other than walking. Due to depressed economic conditions, automobiles and the fuel to run them are simply not within the budget of the average person. Cycles are used to carry loads including livestock, agricultural products, other wares (e.g., in baskets, panniers, on platforms, or in wagons or trailers pulled by the cycles), and even people as passengers (e.g., tricycle rickshaws). Further, even in wealthier regions of the world, the increased cost of fuel, traffic congestion, and limited parking may make cycling a desired mode of commuter transportation in densely populated areas. Cycles are also used heavily for exercise and recreation in more affluent regions. Several forces resist the forward motion of a cycle, for example, a bicycle or a tricycle. A first force is gravity when a rider propels a cycle up an incline. A second force is friction between the cycle and the surface along which the cycle travels. A third force is wind resistance created by the forward motion of the cycle, blowing wind, or both. Improvements to cycling technology that reduce the effort of a rider to propel a cycle would likely significantly benefit riders of cycles in any of the potential uses described above. The information included in this Background section of the specification, including any references cited herein and any description or discussion thereof, is included for technical reference purposes only and is not to be regarded as subject matter by which the scope of the invention is to be bound.
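For a rough sense of scale, the three resisting forces named above are commonly summarized with a textbook road-load model (a standard approximation, not something taken from this application):

$$F_{\text{resist}} = m g \sin\theta + C_{rr}\, m g \cos\theta + \tfrac{1}{2}\,\rho\, C_d A\, v_{\text{air}}^{2},$$

where $m$ is the combined mass of rider and cycle, $\theta$ the road grade angle, $C_{rr}$ the rolling-resistance coefficient, $\rho$ the air density, $C_d A$ the drag area, and $v_{\text{air}}$ the speed of the cycle relative to the air (equal to the ground speed $v$ in still air). To hold speed $v$ the rider must supply power $P = F_{\text{resist}}\, v$, so even modest reductions in any one term translate directly into reduced rider effort.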
{ "pile_set_name": "StackExchange" }
Q: What must be causing a decrease in sparrows and an increase in pigeons in my town?
Has your city seen a noticeable decrease in the population of common sparrows and, on the contrary, an increase in the population of common pigeons? In my opinion, the prime suspects appear to be stronger cellular networks and pollution.
Details: City - Mumbai and Pune. Country - India. Continent - Asia.

A: Common Sparrows, which are the State Bird of Delhi, and also of Bihar, have indeed been in massive decline in India in recent years. What you're seeing is not a fluke. It's happening in many urban areas in India, and is a serious problem. Although many studies have been done over the years, the largest group in your area which is currently involved in the study and preservation of sparrows is The Nature Forever Society. Since 2005, they've been working for the conservation of house sparrows and other common flora and fauna in urban habitats. Their mission is to involve the citizens in the conservation movement of India, especially in urban landscapes. I recommend spending a lot of time on their website. There's a tremendous amount to learn, and things you can do to become involved. For instance, each year on March 20, they sponsor World Sparrow Day. They also give out Sparrow Awards to regular citizens who are making a real impact in the lives of sparrows. Their research is the focus of an article published in March of 2016 in the Daily News Analysis, entitled 10 reasons why the sparrow is fast disappearing from Mumbai. In the words of the author Shraddha Shirodkar:
While no one can be singled out for the declining sparrow population, it is humans who are collectively responsible for it, explains Mohammed Dilawar, President, Nature Forever Society (NFS), a crusader for the sparrow as well as other common flora and fauna in urban habitats. Through his non-governmental, non-profit organisation, he has been championing the cause of sparrows, by involving citizens in the conservation movement, especially in urban areas.
She compiled a list called "10 reasons why Mumbaikars need to save the sparrow". I removed the tenth because I found conflicting evidence from other sources. The bullet points are hers, but also include information from other sources that said the same thing.
Felling of trees
It is common knowledge that the more the number of trees, the more the number of birds. The spike in the felling of trees in Mumbai is a major reason why sparrows and other birds are facing a loss of habitat, especially their natural nesting spots.
Lack of cavity nesting
The ubiquitous glass buildings of Mumbai—the corporate dens—have replaced many older structures that were built with a façade that had nooks and crannies, even bricked roofs, which allow sparrows to nest.
Absence of native plants
Native plants such as adulsa, mehndi and many others are outdone by fancy non-native ones like Duranta Erecta, Dumb Cane and others as the trend of modern landscaping catches on. Native plants are the natural habitats of sparrows, providing them insects such as aphids to feed on. Sparrows need a diet of insects in their formative years to grow into healthy adults.
Absence of hedgerows
Contemporary landscaping is also doing away with hedges, which are preferred by sparrows for nesting. Thick hedgerows are known to protect nesting birds such as the sparrow from predation.
Widespread use of concrete
Sparrows are known to take two types of bath—one with water and one with dust.
With the extensive use of concrete in Mumbai, the species is unable to take dust baths.
Modern grocery storage
Sparrows are known to feed on tiny grains like bajra, which used to be freely available from pecking at gunny bags stored outside older-style grocery stores. These bags would also spill seeds onto the ground. Also, grocers and street vendors purposely dropped grain and seeds to encourage the sparrows. Modern grocery stores with air-conditioning and plastic packaging take away any chance of finding food grains to feed on.
Chemical fertilisers in agricultural produce
Heavy use of chemical fertilisers leads to agricultural produce being laced by them, hence ruining the food of sparrows.
Cell phone radiation
The electromagnetic fields and radiation created by mobile towers are known to affect sparrows. The effects range from damage to the immune and nervous system of sparrows to interference with their navigating sensors. They tend to get confused and leave the area within a week. Since the eggs they've laid nearby need two weeks to incubate, they don't end up hatching. (I found this in many publications, as it's a very common theory at the moment.)
High litter index in Mumbai
There is a rise in the population of crows and stray cats due to the high litter index in Mumbai. Simply put, the more the garbage, the more the predators that prey on sparrows.
Additional sources for interesting information:
Disappearing Sparrows: Common Bird Goes Uncommon
House sparrow listed as endangered species
Citizen Sparrow. This is an interactive site with opportunities to help keep track of sparrows. It has reports of sparrow sightings all over India, with maps and other fun stuff for sparrow-lovers!
Sparrows of India

A: At a guess? A change in architecture. Pigeons favor cliff ledges as nesting sites; they'll happily build nests on window ledges as a substitute for natural cliffs. Sparrows favor cavities: tree hollows, the eaves of roofs, and other somewhat-enclosed areas. If there's been a trend of replacing traditional houses with high-rise apartments, I'd expect a corresponding shift from sparrows to pigeons.
{ "pile_set_name": "OpenWebText2" }
LONDON — Brussels does legal texts, Britain does speeches. In the ornate surroundings of London’s Mansion House, with snow keeping her within the capital’s Remainiac square mile, Theresa May gave her third big Brexit intervention designed to give new impetus to the negotiations. First there was Lancaster House, then Florence and now Mansion House. For all the noise in the run up — from former prime ministers to disgruntled mandarins — May had three key audiences to please: Brexiteers, pro-EU rebels in her own party, and Brussels. The speech went some way to pushing things on, laying out a series of “hard truths” for each side of the Brexit debate, while offering just enough to keep the Brexit show on the road. “Life is going to be different,” she told the assembled audience of business leaders, ambassadors and journalists. “In certain ways, our access to each other’s markets will be less than it is now.” May is prepared to soften Brexit any way she can. Sometimes the biggest admissions are the most obvious. Given U.K. Brexit Secretary David Davis’s previous assurances that Britain would enjoy “exactly the same” benefits after Brexit as before, this was one of those. Labour has promised to vote against anything which didn’t meet the Brexit secretary’s commitment. The central message of the speech was simple enough: May is prepared to soften Brexit any way she can as long as it is consistent with the government’s promise to take “control of our borders, laws and money.” On the most difficult truth of all, however, — the Irish border — the prime minister did not offer a grand, all-encompassing solution. That is not her style. She repeated the two customs options laid out last summer and announced a new U.K.-EU-Ireland working group had been established to solve the conundrum. The most important move on the border was actually smuggled into the speech elsewhere. Britain would in future align its standards with the EU on goods, May announced. This was a concession, no doubt. But to make this work for the Irish border the EU must agree to a system of mutual recognition which is far from certain. Here’s how each key audience heard the prime minister's third big Brexit speech: Pro-EU rebel Tories There was much for soft-Brexiteers to cheer. The hard truths were ones they had been fond of telling for months. “It was much more realistic,” one of the leading pro-EU Tory rebels said. “Much more realistic as to what leaving entails, in particular that there is going to be a significant loss in terms of our ability to sell goods and services into the European market.” This realism, the Tory MP said, was a sounder basis for negotiation than what has gone before. May’s pledge for Britain to sign up to “binding commitments” on state aid and competition rules also met with approval from pro-EU rebels seeking to wrestle the closest possible economic relationship with the continent that they can. “It was what we needed. Realistic, pragmatic, more detailed and setting out some hard facts” — Treasury select committee chair Nicky Morgan The slightly weaker “strong commitment” to maintaining regulatory standards for goods was a bit of a blow, however, giving them a score draw on commitments with the Brexiteers. Another big win for the pro-Europeans was the list of EU regulators May called for British associate membership of: the medicines, chemicals and aviation agencies. May’s calculation was clear — nobody voted to leave the EU to take back control of regulating chemicals and medicines. 
Rebels also sounded notes of caution. Leading rebel Anna Soubry said her concern was the proposal remained unrealistic. “The EU has made it very clear the sort of deal it will be doing with us… The Brexit we are heading for is very, very different to the one we were promised.” Treasury select committee chair Nicky Morgan, another rebel, welcomed the speech, however: “It was what we needed. Realistic, pragmatic, more detailed and setting out some hard facts.” Michel Barnier and friends There was no applause in Brussels — not that anyone expected any. May’s insistence that “every trade agreement is cherry-picking” was received with a mix of annoyance and disdain, while her lack of detail on Ireland generated serious alarm. But perhaps more than anything else, it was her exhortation of “let’s get on with it,” that left many in the EU capital huffing with indignation. If she wasn't holding things up, who was? “Theresa May needed to move beyond vague aspirations,” the European Parliament’s Brexit coordinator, Guy Verhofstadt told POLITICO. “We can only hope that serious proposals have been put in the post. While I welcome the call for a deep and special partnership, this cannot be achieved by putting a few extra cherries on the Brexit cake.” “She shows she hasn’t understood the principles” — EU diplomat Snarkiness aside, to ears in Brussels, the speech offered little new certainty about Britain’s objectives in the negotiations, and delivered no reassurance that the talks were going to speed up or become more productive. Instead, officials said May had confirmed the U.K.’s longstanding “red lines” and, in a small step forward, finally stated aloud what many had long expected: She wants a free-trade agreement. And while her demand may be for the best free-trade deal the world has ever seen, at least the framework is certain. EU chief negotiator, Michel Barnier, reacted on Twitter with no emotion, saying May’s remarks would be taken into account when the European Council next week tables draft guidelines for negotiating the future relationship. May, speaking directly to her “friends in Europe,” said: “We understand your principles.” But by demanding a “customs arrangement” that would effectively allow the U.K. to reap many benefits, it didn't sound like it to Brussels. “No border, no Canada, no Norway, which leaves us where?” one EU diplomat said. “She shows she hasn’t understood the principles.” In the EU institutions, many officials had their ears perked for concrete details. They were left largely disappointed, especially on the Ireland problem, which May — after promising in December to put forward ideas — seemed to throw back at Brussels to help solve, saying “we can’t do it on our own.” Ireland's Deputy Prime Minister Simon Coveney echoed the call for more detail: "These commitments now need to be translated into concrete proposals on how a hard border can be avoided and the Good Friday Agreement and North-South co-operation protected." He reiterated that the fallback mechanism of "full alignment" of Northern Ireland with the Republic to the South was still on the table if the U.K. didn't come up with a workable alternative. Brexiteers For the Brexiteers, nearly every major speech and announcement since Lancaster House in January 2017 has represented a retreat — and left them hypersensitive to anything that sounds like back-sliding. They also have the numbers to topple her as leader. May knows this and thus ensured there was a smorgasbord of red Brexit meat for them to pick at. 
She reminded her listeners that the U.K. is “preparing for every scenario” — code for the "no deal" outcome the most hardline Brexiteers crave. "The EU must now consider whether they want to put rigid doctrine ahead of the mutual interests of their people and those of the U.K.” — David Jones, former Brexit minister Two of her “five tests” for a good Brexit deal — that it must “respect the referendum” and strengthen the Union — were also pitched directly at Brexiteers and her Democratic Unionist Party partners in government. Less pleasing to Brexiteers would have been the numerous references to close regulatory alignment with the EU. On goods trade, May said U.K. law should “achieve the same outcomes” as EU law. U.K. and EU standards would “remain substantially similar in the future.” Divergence, setting British laws for the British economy, has been deferred. But this possibility of future divergence — as well as immediate divergence in a handful areas (May identified digital industries in her speech) — seems to be enough for Brexiteers, for now. “This was a clear, coherent speech from a prime minister keen to demonstrate that the U.K. will be positive and pragmatic in the forthcoming negotiations,” David Jones, the former Brexit minister, now backbench scrutineer of the government’s policy, told POLITICO. “She challenged the EU to respond in similar terms. The EU must now consider whether they want to put rigid doctrine ahead of the mutual interests of their people and those of the U.K.” The leader of the backbench Brexit faction, Jacob Rees-Mogg, also kept his powder dry, calling the speech “a clear statement of how we can leave the EU and maintain friendly relations with our neighbors.” Whether they remain content if the EU pushes back and forces the U.K. to alter its position, remains to be seen. As for Brexiteers in May’s Cabinet, they were claiming last week that “divergence” had “won the day.” While May’s speech did not entirely bear that out, Foreign Secretary Boris Johnson, who was detained in Hungary because of snow-related disruption, expressed his consent via an airfield photo of him holding the speech, with a grin and a thumbs up. Alas I have not been able to listen in person as I hoped as I have been delayed by our common European winter weather - on which we will remain in full alignment #RoadtoBrexit pic.twitter.com/X5RoFXpmir — Boris Johnson (@BorisJohnson) March 2, 2018 “We will remain extremely close to our EU friends and partners — but able to innovate, to set our own agenda, to make our own laws and to do ambitious free trade deals around the world,” he tweeted. His emphasis on trade was key. One thing Remainers had held out a slim hope of hearing from this speech was a commitment to something like a customs union with the EU. Instead, May fell back on two complex customs proposals set out last summer and largely ignored by the EU. More than anything Brexiteers will be pleased that the idea of "Global Britain," able to strike trade deals around the world, is still alive.
{ "pile_set_name": "PubMed Abstracts" }
New fluorescence evidence that each peptide of fatty acid synthetase has a keto and an enoyl reductase domain with different affinities for NADPH. A new graphical analysis of fluorescence enhancement produced by NADPH binding to fatty acid synthetase from the uropygial gland of goose showed that the enzyme contains two binding sites per monomer with different Kd values. The site with the lower Kd (1.3 microM) showed lower enhancement than that with the higher Kd (7 microM). After specific inactivation of the enoyl reductase of the enzyme with pyridoxal phosphate (Poulose, A. J., and Kolattukudy, P. E. (1980) Arch. Biochem. Biophys. 201, 313-321) only the low affinity binding site was found. Graphical analyses of the data strongly suggest that each peptide of fatty acid synthetase contains one keto reductase domain with low affinity for NADPH and one enoyl reductase domain with high affinity for NADPH.
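To illustrate the kind of two-site analysis the abstract describes, here is a generic independent-two-site binding sketch (not the authors' actual graphical method): total fluorescence enhancement is modelled as the sum of two saturable components with the reported Kd values of 1.3 microM and 7 microM, and the relative amplitudes are placeholders chosen only to reflect the reported observation that the higher-affinity site shows the lower enhancement.

import numpy as np

def two_site_enhancement(nadph_um, kd1=1.3, kd2=7.0, amp1=0.4, amp2=1.0):
    """Fluorescence enhancement from two independent NADPH binding sites.

    Each site contributes a hyperbolic single-site isotherm, amp * [L] / (Kd + [L]).
    Kd values are in micromolar; amplitudes are arbitrary placeholder units.
    """
    nadph_um = np.asarray(nadph_um, dtype=float)
    site1 = amp1 * nadph_um / (kd1 + nadph_um)  # high-affinity site (enoyl reductase domain)
    site2 = amp2 * nadph_um / (kd2 + nadph_um)  # low-affinity site (keto reductase domain)
    return site1 + site2

if __name__ == "__main__":
    for conc in (0.5, 1.3, 7.0, 50.0):
        print(f"[NADPH] = {conc:5.1f} uM -> enhancement = {float(two_site_enhancement(conc)):.3f}")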
{ "pile_set_name": "StackExchange" }
Q: Form doesn't validate when passing form value to the next view
In my index view I have a ModelChoiceField which allows me to choose from various pieces of equipment. When I have submitted my choice, I keep the selected value in a variable like this:
if form.is_valid():
    form.save()
    request.session["eq"] = form.cleaned_data['equipment']
Then I am redirected to a new view (reservation) in which I have a form with three fields: "equipment", "date" and "reserved_by". Since I have already chosen the equipment I want in the previous view, I want it to be filled in automatically. I managed this by doing the following in my reservation view:
form = ReservationForm(initial={'equipment': request.session.get('eq')})
So when I run my site, the equipment field in the reservation view does get automatically filled in, but now the form suddenly won't validate (form.is_valid() returns False). I've been struggling with this for a while now, so any help would be highly appreciated.

A: This form from your question is unbound.
form = ReservationForm(initial={'equipment': request.session.get('eq')})
Unbound forms are never valid. To get a valid form, you need to bind it to POST or GET data.
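To make the accepted answer concrete, here is a minimal sketch of how the reservation view could both prefill the equipment field on GET and bind the form to the submitted data on POST. The view name, template name, URL name, and module layout are assumptions based on the question, not code from the asker's project.

from django.shortcuts import redirect, render

from .forms import ReservationForm  # assumed module layout

def reservation(request):
    if request.method == "POST":
        # Bind the form to the submitted data; only a bound form can validate.
        form = ReservationForm(request.POST)
        if form.is_valid():
            form.save()
            return redirect("reservation_done")  # assumed URL name
    else:
        # Unbound form shown on GET: prefill the equipment field from the session.
        form = ReservationForm(initial={"equipment": request.session.get("eq")})
    return render(request, "reservation.html", {"form": form})

Note that if the session uses Django's default JSON serializer, it is safer to store the equipment's primary key in the session (for example form.cleaned_data['equipment'].pk) rather than the model instance itself, and to pass that pk as the initial value.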
{ "pile_set_name": "Pile-CC" }
How to Get Paid When You Work For Yourself
This post may contain affiliate links. See our full disclosure for more info.
Freelance graphic designers, writers, web designers, and entrepreneurs all know that it can be tricky to get a paycheck each month from clients — especially if you’re just starting out. It can be scary starting your own business, and you want clients to like you. However, if you want to run a successful business and get paid each month, you’ll need to buckle down and become an expert in a few things:
Learn how to create a perfect contract
Set up a payment schedule
Send recurring invoices
Build a reputation
Note: By no means is this legal advice. Before structuring a company, it is best to consult with an attorney if you deem it necessary.
Discuss the Terms Beforehand with Your Client
When you offer goods and services outside of the traditional norm, it can be difficult to land on specific terms for your transaction with a client. First things first, always write up a contract. Never rely on spoken-word contracts to be held up, even if it’s for a friend. If you’re doing substantial work that takes time out of your work schedule, then you deserve to be paid for your time and expertise.
What Should Be Included in My Contract?
Sit down and make a list of all the facets of your agreement that are important to you. This contract should include several aspects of your relationship with the client. First, be confident in the service you’re providing and the price you are charging: “I’m providing X service that costs X amount each month (or hour, or biweekly).” If you’re not sure how much you should charge, then I would suggest starting at square one and doing some market research to weigh your experience against what others charge. Next, you’ll want to include a length of contract. Is this a month-to-month arrangement where they can cancel at any time, or is it a 6-month contract? Your client will need to clearly understand what will happen if they choose to terminate this contract early. You should still get paid for the amount of work you’ve put in thus far. In addition, you’ll need to include what happens if they pay late or don’t pay at all. Set a specific day each month that they need to send you a payment and stick to it. You may decide to charge them a flat fee or possibly a percentage each day they are late. That’s completely up to you.
Get Your Contract Looked at By a Professional & Stick By Your Terms
Those are the main points of a very basic contract, but you may also want to include other things like a deposit on your services for new clients or other rewards and discounts for being a long-time client. Do your research on what a professional contract in your field should look like, and it’s always a good idea to get it checked out by an expert before you start sending it to clients. Lastly, your clients are going to ask you to change your terms. You need to know when to bend and when to say no (in a respectful manner). If someone wants a discount on their services just because they want to challenge how much your time is worth, say no. You have researched the market, you know your level of expertise, and your price is firm. If you are able to let your client pick their date, don’t let them change it. If they are going to be late, they need to abide by the terms of your late-fee agreement. Sadly, if a client thinks they can take advantage of you, they just might.
Set Up a Payment Schedule
Your contract should include an exact date that your client needs to pay you, according to the schedule laid out in the agreement. When meeting with your client to discuss this part of the contract, be a little bit lenient on the particular day your client pays, depending on their schedule, if possible. I understand that bills happen and sometimes you need the money at a particular time each month. For occasions like this, I would strongly suggest building up enough savings to cover any costs that simply cannot wait. This way, you can let your client pick the day that they pay. Remember, they have bills too and will greatly appreciate you working with their preferences.
Reward Those Who Pay Early
If you have the financial means to do so, reward your client for early payment. It doesn’t have to be a huge discount, but even just a small percentage could provide enough incentive to get you the payment a little bit early each month. If you’re not sure where to start, maybe try it out with your most loyal client(s) first. Let them know that you want to start a new incentive program that will give them a discount if they pay early each month and that it’s completely optional.
Learn How to Send Organized Invoices & Reminders
Your payment schedule should also include a monthly invoice that clearly details the work you performed, exactly how much they paid, and the dates they paid for. Try to get this to them as quickly as possible. It shows organization as well as professionalism when you are able to get them their bill at the same time each month, and far before the bill is actually due. It also doesn’t hurt to send payment reminders. Let your client know that you will send a reminder, via their preferred method of contact, on X day each month before they are due. This is a great time to remind them that you sent their invoice, just in case they didn’t get it the first time.
Build Up a Reputation
When you are firm and direct, but also attentive and thoughtful, your clients will notice. Make yourself and your work known online. Use all the resources at your disposal. Create strictly work-related pages, like LinkedIn, but also create a website for you to show off your work (in a professional manner). The best way would be to create a website that features all of your work and past client comments and recommendations, but I know that can be expensive.
Make a Portfolio
If you’re just starting out, maybe think about a Facebook page or Twitter account where you can post links and pictures of the work you’re doing. Again, I highly suggest keeping these strictly to your professional portfolio and nothing else. You can make your own accounts for friends and family, but keep those professional as well. Clients will dig around to find any and all information about you, and if they find anything online that is less than appealing, they will take their business elsewhere.
Offer Discounts for Recommendations
One of the best ways to get more business, and repeat business, is to offer discounts. I wouldn’t suggest spamming potential clients on your Facebook or Twitter account or via email. That will only discredit your professionalism and possibly make you look desperate. Instead, only let clients or potential clients (who contact you) know about the discount, or post it on your website under an appropriate section. Think about offering a percentage or dollar amount off services for the person who recommended the new client and for the new client as well.
Choosing to work for yourself means that you have a lot of responsibilities to your client. If you’re thinking about starting your own freelancing business or other type of self-employment, stay organized. Keep files of everything. Set reminders, appointments, and due dates to keep you on track. Most importantly, if you keep your client happy, you’ll get paid on time.
{ "pile_set_name": "PubMed Central" }
Introduction {#Sec1} ============ Why do we often remember events that are not particularly important to us, yet we forget the faces of acquaintances we desperately try to remember? Much of memory research focuses on answering this question through an *observer-centric* focus, exploring the steps the brain undergoes when forming a memory and trying to explain the factors that lead some subjects to perform better than others. However, an equally important factor is *item-centric* effects -- how do stimulus-driven factors influence these steps in making a memory? In fact, stimulus-driven factors have been found to determine the ultimate mnemonic fate of a given face as much as all other factors combined, including individual differences, environment, and noise^[@CR1]^. Stimulus-driven factors are intrinsic properties of a stimulus that influence observer behavior. For example, stimuli that trigger bottom-up attention orienting (e.g., threatening faces^[@CR2]--[@CR4]^, emotional faces^[@CR5],[@CR6]^ or even just brighter stimuli^[@CR7],[@CR8]^ may tend to cause biases in tasks such as spatial cueing and visual search^[@CR9],[@CR10]^. Another example is with statistical learning, where the relative frequencies and ordering of stimuli affect learned representations of those stimuli^[@CR11],[@CR12]^. One recently characterized stimulus property of relevance to memory in particular is *memorability* -- an intrinsic stimulus property reflecting the probability of a given stimulus to be successfully encoded into memory. Stimulus memorability is highly consistent and reliable across observers^[@CR13]--[@CR21]^; that is, people tend to remember and forget the same scenes^[@CR13]^ and faces^[@CR1]^ as each other, in spite of the diversity in perceptual experience across people. Memorability has been found to be its own stimulus property; for faces, it cannot be modeled as a composite measure of other face attributes such as attractiveness, trustworthiness, or subjective ratings of distinctiveness or familiarity^[@CR1]^. This consistency of memorability has been found to persist within a face identity across emotional and viewpoint changes in the face, unlike other face attributes^[@CR14]^; if a person is memorable when they are smiling, then they are also likely to be memorable when facing away. Memorability is thus intrinsic not only to images, but also to dynamic entities such as a person's identity. Using computer vision metrics, memorability can thus be quantified and even manipulated for a face image^[@CR15]^. Memorability has also been found to be resilient to different time lags^[@CR16],[@CR17]^, stimulus contexts^[@CR18]^, and experimental paradigms^[@CR19],[@CR20]^, showing that memorability as a stimulus property generalizes to alternate contexts. And, memorability effects have been shown to be independent from bottom-up attention, top-down attention, and perceptual priming^[@CR21]^, indicating that memorability is not a mere proxy for other cognitive processes known to be related to memory. If stimulus-driven memorability is known to play such a strong role in memory behavior, to what degree are previously identified neural effects of memory partially explained by stimulus memorability? Previous work has found strong, stereotyped patterns of neural activity based on stimulus memorability during incidental encoding of an image^[@CR22]^; however, no work has examined memorability during active memory retrieval. 
As memorability is easily measurable for a given set of stimuli, previous memory studies could be re-examined with a framework based on stimulus memorability. Several functional magnetic resonance imaging (fMRI) studies have been able to successfully classify brain activation patterns based on individual memory behavior at the stage of retrieval^[@CR23],[@CR24]^. To what degree are these successful classifications influenced by the memorability of the stimuli, or might there even be separate signatures for stimulus-driven memorability versus observer-driven memory behavior? In this paper, we examine the interaction of stimulus-driven memorability and memory retrieval through a reanalysis of the face stimuli and blood-oxygen-level dependent (BOLD) data from Rissman *et al*.^[@CR23]^, one of the first fMRI studies to decode memory recognition behavior from brain activity patterns. We find that we are able to significantly classify whether individual face stimuli have high or low memorability based on the BOLD data, and the regions supporting this classification significantly differ from regions able to classify memory behavior. We find these memorability-related patterns remain relatively consistent across both trials where participants successfully recognized old images (hits) and successfully rejected new images (correct rejections). Additionally, we find that support vector regressions trained on activity patterns from these same regions are able to predict the memorability score of individual faces, providing the first evidence for a continuous representation of memorability in the brain. Taken together, these results provide support for memorability as a measurable stimulus property that can be used to analyze neural correlates of memory, and provide new insights into the information that may be relevant to the brain when retrieving an item from memory. Results {#Sec2} ======= Characterizing the Range and Consistency of Memorability {#Sec3} -------------------------------------------------------- Our first step was to quantify the memorability of the face stimuli from Rissman *et al*.^[@CR23]^ and see whether these memorability scores had sufficient variance and consistency to be taken as an intrinsic property of each image. In the original experiment^[@CR23]^, participants (N = 16) studied 200 face images outside of an MRI scanner and then were tested for memory retrieval approximately 1 h later during fMRI scanning with the same 200 images and an additional 200 foil face images. For the current experiment, memorability scores were obtained for these 400 face stimuli using a large-scale Amazon Mechanical Turk (AMT) memory test. 740 AMT workers viewed a pseudorandom sequence of the face images presented for 1 s at a time and pressed a button whenever they saw a repeated face image (see Methods). For each stimulus, we used the group-level AMT data to calculate a hit rate (*HR*; percentage of participants successfully identifying a repeat) and a false alarm rate (*FA*; percentage of participants falsely identifying the first presentation as a repeat). While previous memorability work has focused on *HR* while selecting for low *FA* across stimulus conditions^[@CR22]^, here we have stimuli that vary in both *HR* and *FA*. For example, while an image with a high *HR* and low *FA* may be truly memorable, an image with a high *HR* and also high *FA* may be one that causes a sense of familiarity. 
To account for the effects of *FA*, we define memorability henceforth as the corrected recognition score *Pr*, where *Pr* = *HR* − *FA*^[@CR25]^. Note that *d-prime* was not used here due to the fact *HR* and *FA* were not necessarily derived from the same set of participants. These stimuli showed a wide range in memorability score *Pr* (*M* = 0.35, *SD* = 0.12, *min* = 0.06, *max* = 0.76). *HR* showed similarly wide range (*M* = 0.49, *SD* = 0.11, *min* = 0.22, *max* = 0.82) as well as *FA* (*M* = 0.14, *SD* = 0.09, *min* = 0, *max* = 0.52). As a point of comparison, *d-prime* also showed a wide range (*M* = 1.15, *SD* = 0.45, *min* = 0.23, *max* = 2.67). A consistency analysis correlating the stimulus memorability rankings of random split halves of participants (see Methods) found significantly high consistency in *Pr* across images (*r*(398) = 0.46, *p* \< 0.0001). *HR* was also significantly consistent across observers (*r*(398) = 0.49, *p* \< 0.0001) as well as *FA* (*r*(398) = 0.67, *p* \< 0.0001) and *d-prime* (*r*(398) = 0.53, *p* \< 0.0001); see Fig. [1](#Fig1){ref-type="fig"}. This means that while there was a wide range in memory performance across images, observers tended to remember and forget the same images as each other, and so memorability is consistent within an image. These face stimuli are thus an ideal testbed to look at the relationship of various levels of memorability to BOLD data as well as individual memory performance.Figure 1(Top) Histogram distributions of hit rate (*HR*), false alarm rate (*FA*), and memorability score corrected recognition (*Pr*) for an online memory test with the 400 face stimuli. These histograms show a wide range of memorability scores, making them an ideal set for probing the extent of memorability effects in the brain. (Bottom). Split-half consistency analyses (Spearman's correlations of randomly split halves of the participants, over 10,000 iterations) for *HR*, *FA*, and *Pr*. The blue line indicates the average memorability scores (sorted from highest to lowest) for the first random participant half, while the green dotted line shows the average memorability scores for the same ordering from the second half. The grey line shows an estimate of chance, where scores for the second half are instead shuffled. The r-values indicate average correlation. Overall, these graphs show that participants generally remember (as well as false alarm to) the same images as each other (all *p* \< 0.0001). The scanned participants' memory ratings during the retrieval task also highly corresponded to these memorability scores collected online (Fig. [2](#Fig2){ref-type="fig"}). While retrieving memories in the scanner, participants had five response options to indicate their memory and confidence: (1) recollected the image (i.e., they could think back to the actual experience of first encountering that photograph), (2) high confidence the image was old, (3) low confidence the image was old, (4) low confidence the image was new, or (5) high confidence the image was new. As expected, for old images, recollected images had the highest memorability scores, with high confidence hits having lower memorability scores, and low confidence hits having even lower memorability scores. A repeated measures within-subjects 1-way ANOVA for participant response for old trials was conducted to test this pattern, excluding the "high confidence new" response for old trials since only five participants had at least five trials of that type. 
There was a significant main effect of confidence rating on memorability for old images (*F*(3, 45) = 47.11, *p* = 6.22 × 10^−14^, effect size *η*^2^ = 0.759; Mauchly's test indicates an assumption of sphericity is preserved: *χ*^2^(5) = 3.61, *p* = 0.608), where Bonferroni-corrected post-hoc pairwise comparisons show significantly higher memorability for recollected than high confidence old trials (*p* = 1.46 × 10^−4^), and high confidence old trials than low confidence old trials (*p* = 7.00 × 10^−4^). Likewise, new images that were judged to be new (i.e., correct rejections) with high confidence were also those that tended to get the highest memorability scores in the online sample. A repeated measures within-subjects 1-way ANOVA for participant response for new trials was conducted, excluding the "recollection" response for new trials, since only one participant had at least five trials of that type. There was a significant main effect of confidence rating on memorability for new images (the assumption of sphericity was violated *χ*^2^(5) = 16.94, *p* = 0.005, so the ANOVA was Greenhouse-Geisser corrected (ε = 0.633): *F*(1.90, 26.57) = 18.40, *p* = 1.30 × 10^−5^, effect size *η*^2^ = 0.568). A Bonferroni-corrected post-hoc pairwise comparison showed significantly higher memorability for high confidence new trials than low confidence new trials (*p* = 3.00 × 10^−6^). The patterns for trials where participants' responses were incorrect (e.g., rating new images as old or vice versa) are less clear, as there were fewer trials of this type (e.g., only 2 trials on average per participant rating a new image as being "recollected"). Overall, these results indicate that memorability scores determined in a separate large-scale online experiment can be successfully applied to account for behavioral performance in an independent set of participants (from the smaller-scale fMRI experiment), even when the images were studied under a very different paradigm and setting.Figure 2Average *Pr* memorability score (from the online memory test) for each image type (old or new) and response type of the participants in the scanner experiment. Error bars indicate standard error of the mean. Asterisks (\*) indicate a significant Bonferroni-corrected pairwise difference between conditions (p \< 0.001). All correct trials (e.g., responding "old" for an old image or "new" for a new image) showed significantly higher memorability for images with more confident responses (R = Recollect, HC = high confidence, LC = low confidence). Note that large error bars are seen for trial types with small numbers of trials; all correct response types had on average over 35 trials per participant, while incorrect response types had fewer (e.g., HC New for old images had on average only 11 trials per participant, R for new images had only 2 per participant, and HC Old for new images had only 11 per participant). Representational Geometries of Stimulus Memorability and Subject Memory across the Brain {#Sec4} ---------------------------------------------------------------------------------------- Since the face stimuli show strong consistency and variance in their memorability, we can then explore their neural correlates during the scanned retrieval task. Specifically, representational similarity analyses (RSAs) allow one to examine the representational geometry of information in the brain^[@CR26]^, by looking at where hypothesized models of stimulus similarity correlate with neural data. 
For example, are there regions in the brain that show more similar neural patterns for a pair of memorable images than a pair of forgettable images? Also, while this hypothesis is based on *stimulus memorability*, might there also be regions with a representational geometry based on an individual subject's *memory* behavior (e.g., where there are more similar neural patterns for two remembered images than two forgotten images), and how might these two sets of regions compare? RSAs were conducted in a whole-brain searchlight using a model based on stimulus memorability and a model based on participant memory behavior and then compared (see Fig. [3](#Fig3){ref-type="fig"} and Methods for further details).Figure 3A diagram of how the Representational Similarity Analysis (RSA) was conducted in this study, using toy data. Essentially, voxel data were extracted from searchlights across the brain (real representational similarity matrix, RSM) and then correlated with a RSM based on stimulus memorability (i.e., more memorable stimuli are more similar) and a RSM based on individual subject memory performance (i.e., stimuli an individual remembered are more similar). Stimuli in the matrices are ordered from most memorable (M) to most forgettable (F). Note that the Memory Model RSM only has three colors to represent the three possible values in the matrix (blue = both stimuli were remembered; green = one was remembered and one was forgotten; red = both were forgotten). In contrast, the Memorability Model RSM shows a continuous representation of memorability. The diagonals (colored black) in the RSMs were not used in the analyses, as they will always be 1 (i.e., the similarity of a stimulus to itself). From the correlations of the Model RSMs to the Real RSM, we can form maps of which regions in the brain showed significant correlations across the sixteen participants to the stimulus-based memorability model, and which showed significant correlations to the subject-based memory model. We find a dissociation of two strikingly different sets of regions that emerge as being maximally significant for the memorability-based model and the memory-based model (Fig. [4](#Fig4){ref-type="fig"}). Specifically, the memorability-based model shows its highest significant correlations with BOLD activity patterns in the ventral visual stream and medial temporal lobe (MTL), overlapping with regions such as the parahippocampal cortex (PHC), fusiform gyrus (FG), and perirhinal cortex (PRC). In contrast, the memory-based model shows its highest significant correlations in frontal and parietal regions. Of the top 1000 voxels of each contrast, only 9 voxels (or 0.9%) are shared between the two contrasts (Refer to Supplementary Figure [1](#MOESM1){ref-type="media"} for a chart of how the amount of voxel overlap varies by number of voxels included in each map). These results indicate that different sets of regions preferentially support representations of stimulus-driven memorability versus the actual memory fate of that stimulus.Figure 4Group *t*-maps of the top 1000 voxels whose local activity patterns (i.e., 7-voxel diameter spherical searchlight) correlated with the stimulus memorability-based representational similarity matrix (RSM, red/pink) and the top 1000 voxels whose local activity patterns correlated with the subject memory-based RSM (blue), shown on a Talairach-space example brain. 
The two maps have very little overlap (9 voxels, or 0.9%); the memorability-based voxels span the ventral visual stream, the posterior temporal lobe (pPHC = posterior parahippocampal cortex, pFG = posterior fusiform gyrus), and anterior medial temporal lobe (PRC = perirhinal cortex). In contrast, the memory-based voxels span frontal and parietal regions. The lowest cluster-threshold corrected significance level that all 1000 voxels pass for both maps is *p* \< 0.001. Memorability Across Image Types {#Sec5} ------------------------------- While different neural systems may be active for quickly calculating the memorability of an image versus retrieving a specific memory, might memorability-related activity be modulated by the history of the image? That is, will we see different patterns for old images that are successfully remembered versus new images that are correctly rejected? For the former trial type, participants are retrieving a specific item from memory successfully, while for the latter trial type, participants are viewing a novel item and identifying it as being novel. Recognition and novelty detection are different memory behaviors, shown to depend largely upon separate neural substrates^[@CR27]--[@CR29]^, and so might there also be similar differences in how the brain processes the memorability of an old stimulus versus a new one? We explored this question using RSA searchlights of memorability for only hit trials, and then also only for correct rejection trials. The number of trials used was downsampled (as described in the Methods) in order to ensure that trial counts were balanced across trial types for each participant. From these searchlights, similar regions for memorability emerge for both hit and correct rejection trials, particularly within the ventral visual stream and posterior temporal lobe (Fig. [5](#Fig5){ref-type="fig"}), as seen in the regions that emerge for general memorability (Fig. [4](#Fig4){ref-type="fig"}). An analysis of voxel overlap (Supplementary Figure [2](#MOESM1){ref-type="media"}) revealed that the maps for hit and correct rejection trials had 93 overlapping voxels (9.3%) in the top 1000 voxels. Hit trials additionally show significant correlations in the anterior temporal pole (encompassing regions within the MTL), while correct rejection trials show correlations in the parietal lobe. Both trial types also show significant correlations with regions in the ventrolateral prefrontal cortex. These maps indicate that hit and correct rejection trials largely show the same regional patterns of memorability (although not necessarily overlapping in the precise anatomical coordinates), indicating that memorability effects during retrieval exist regardless of individual memory behavior. However, there may be subtle differences in the regions that use memorability-based information depending on the specific retrieval process.Figure 5A comparison of the top 1000 significant voxels whose local activity patterns correlated with the memorability-based representational similarity matrix (RSM) constructed only with hit trials (red/pink) and only with correct rejection trials (green). The lowest cluster-threshold corrected significance level that all 1000 voxels pass for both maps is *p* \< 0.005. Both maps show significant regions in the bilateral ventral stream and posterior temporal lobe, as well as some regions in the frontal and parietal cortices. 
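As a schematic of the representational similarity analysis used above (a generic RSA sketch in the spirit of Fig. 3, not the authors' actual pipeline; the array shapes, variable names, and the exact form of the model similarity are assumptions), the code below builds a memorability-based model RSM from per-image Pr scores, builds a neural RSM from one searchlight's voxel patterns, and correlates their off-diagonal entries with Spearman's rho.

import numpy as np
from scipy.stats import spearmanr

def memorability_model_rsm(pr_scores):
    """Model RSM in which pairs of more memorable images are predicted to be more similar.

    One simple choice (an assumption, not necessarily the paper's exact model) is to take
    the mean Pr of each image pair as its predicted similarity.
    """
    pr = np.asarray(pr_scores, dtype=float)
    return (pr[:, None] + pr[None, :]) / 2.0

def neural_rsm(patterns):
    """Neural RSM: pairwise Pearson correlation of (n_images x n_voxels) searchlight patterns."""
    return np.corrcoef(patterns)

def rsa_correlation(model_rsm, data_rsm):
    """Spearman correlation of the lower-triangular (off-diagonal) entries of the two RSMs."""
    idx = np.tril_indices_from(model_rsm, k=-1)
    rho, _ = spearmanr(model_rsm[idx], data_rsm[idx])
    return rho

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pr = rng.uniform(0.06, 0.76, size=40)        # per-image memorability scores (toy values)
    patterns = rng.standard_normal((40, 123))    # toy voxel patterns for one searchlight
    print(f"model-vs-data RSA rho: {rsa_correlation(memorability_model_rsm(pr), neural_rsm(patterns)):.3f}")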
Representational Geometries within Regions of Interest {#Sec6} ------------------------------------------------------ Representational similarity analyses were conducted within perceptual and memory-related regions of interest (ROIs) in order to examine more specifically which regions are sensitive to measures of memorability, and to probe the differences between hit and correct rejection trials. These ROIs were defined by probabilistic maps made from over 40 participants for MTL regions and 70 participants for ventral visual stream regions^[@CR22]^, with ROIs taken as the voxels shared by at least 25% of participants (as in^[@CR30]^). For the MTL, we investigated representational similarity within the perirhinal cortex, entorhinal cortex (ERC), amygdala, hippocampus, and parahippocampal cortex. For the ventral visual stream, we looked at early visual cortex (EVC), and higher-level commonly category-specific perceptual regions of the fusiform face area (FFA), parahippocampal place area (PPA), and lateral occipital complex (LOC). We find that several perceptual ventral visual stream regions are significantly correlated with a memorability-based RSM (corrected for multiple comparisons using False Discovery Rate FDR, corrected *α* = 0.007). Specifically, significant correlations emerge in the right PPA (*r*(14) = 0.015, *p* = 0.007), the right LOC (*r*(14) = 0.022, *p* = 0.006), and marginal significance for the left FFA (*r*(14) = 0.006, *p* = 0.030; does not survive FDR correction). No significant correlation is found in the EVC (*p* \> 0.08). In contrast, no perceptual regions show a significant correlation with the memory-based RSM (all *p* \> 0.10). These results mirror the results in the whole-brain searchlight analysis, where memorability is correlated with ventral neural signals, while memory is not. These results implicate memorability as being a high-level perceptual signal, as higher-level perceptual regions (FFA, PPA, LOC) show a correlation with memorability. It is also interesting to note that while only face images were viewed, non-face selective regions (PPA, LOC) show a significant correlation with memorability signals, indicating that sensitivity to memorability may be content-generic across the brain. In terms of memory-related regions within the MTL, the memory-based RSM shows significant correlations (FDR corrected *α*** = **0.010) with several regions in the MTL: the left hippocampus (*r*(14) = 0.039, *p* = 0.0005), the right hippocampus (*r*(14) = 0.054, *p* = 0.0009), the left PHC (*r*(14) = 0.015, *p* = 0.010), the right PHC (*r*(14) = 0.016, *p* = 0.005), and with marginal significance (failing FDR correction) in the right PRC (*r*(14) = 0.008, *p* = 0.044) and the right amygdala (*r*(14) = 0.010, *p* = 0.026). The memorability-based RSM shows no significant correlations with any of the regions in the MTL (all *p* \> 0.10). However, when RSAs are conducted only on trials split by memory performance, then we see these regions do contain memorability-related information for certain trial types. 
Several regions of the MTL do show a significant correlation (FDR corrected *α* = 0.039) with the memorability RSM for hit trials, specifically the right PRC (*r*(14) = 0.009, *p* = 0.023), the left PRC (*r*(14) = 0.013, *p* = 0.011), the left amygdala (*r*(14) = 0.015, *p* = 0.003), the left ERC (*r*(14) = 0.013, *p* = 0.006), the right ERC (*r*(14) = 0.012, *p* = 0.005), the left hippocampus (*r*(14) = 0.27, *p* = 0.015), the right hippocampus (*r*(14) = 0.025, *p* = 0.039), and the left PHC (*r*(14) = 0.024, *p* = 0.001). No significant effects were found for correct rejection trials; however, no significant difference was found between hit and correct rejection trials (*p* \> 0.05). These results indicate that memorability is processed at the perceptual level regardless of image history or memory type. It is less clear what patterns in the MTL indicate about memorability, however these regions do appear to contain information about the memorability of a stimulus when successfully retrieving an old memory. Predicting Memorability Score from BOLD Activity Patterns {#Sec7} --------------------------------------------------------- A searchlight analysis of support vector regressions (80% training/20% testing split, 100 iterations) predicting memorability score from local multi-voxel activity patterns finds several similar regions significantly predictive of memorability score (Fig. [6](#Fig6){ref-type="fig"}), particularly two overlapping regions along the ventral visual stream and temporal lobe. ROI support vector regression analyses in the MTL regions find significant decoding accuracy bilaterally in the PHC (left: *r*(14) = 0.04, *p* = 0.034; right: *r*(14) = 0.02, *p* = 0.039) and the left hippocampus (*r*(14) = 0.06, *p* = 0.011), including its head (*r*(14) = 0.05, *p* = 0.008), body (*r*(14) = 0.06, *p* = 0.004), and tail (*r*(14) = 0.03, *p* = 0.026). No significant decoding accuracy was found in the PRC, ERC, amygdala, or right hippocampus (all *p* \> 0.15). Frontal and parietal regions also appear here, similar to the RSA map for memory behavior, possibly because memorability score tracks with memory confidence (see Behavioral Results), and so regions sensitive to a subject's memory strength may also be sensitive to a continuous metric of memorability. Importantly, previous decoding work has only focused on binary classifications of memory (i.e., remembered versus forgotten, memorable versus forgettable). However, these results indicate that the representation of memorability in the brain may be continuous rather than binary.Figure 6The top 1000 voxels whose local activity patterns showed significant correlations between a support vector regression's predictions of items' memorability and actual memorability scores. The lowest cluster-threshold corrected significance level that all 1000 voxels pass is *p* \< 0.005. Discussion {#Sec8} ========== These results are the first to examine the interaction of item-centric memorability and observer-centric memory at the stage of memory retrieval. The two phenomena show separate areas of sensitivity, where sensitivity to stimulus memorability is observed in more ventral regions and encompasses the MTL, while sensitivity to memory retrieval outcomes is more apparent in frontal and parietal regions. In particular, the ventral visual stream holds a representational geometry where distributed neural ensembles respond more similarly for images of higher memorability, and less similarly for more forgettable images. 
This sensitivity to memorability for faces exists even in non-face selective regions (e.g., the PPA and LOC). Additionally, these results show that memorability is coded as a continuous metric within the brain, as memorability score can be predicted by several regions through support vector regressions. These results mirror a dissociation recently found with memorability and memory *encoding* during perception of an image, where ventral perceptual and medial temporal lobe regions show sensitivity to stimulus-based memorability, while fronto-parietal regions show sensitivity to a participant's successful encoding of an image^[@CR22]^. While this previous study looked at memorability during an incidental encoding task (participants did a perceptual categorization task with no knowledge their memory would later be tested), in the current study, participants were performing an explicit retrieval task. The current results also show similar memorability processing in ventral perceptual regions regardless of the image's memory history - whether it is an old image that is successfully remembered or a new image successfully identified as novel. Given both the current results and these previous results, the processing of the memorability of visual input may be a continuous and automatic process, that occurs during both encoding and retrieval, and during recollection/recognition and novelty detection. Memorability processing has been shown to temporally occur between early perception and the encoding of a stimulus^[@CR31]^. Further, the current and past results show no sensitivity to memorability in early visual areas, reinforcing that the memorability of an image is not predicted merely by low level visual processing. The brain's sensitivity to memorability may thus reflect a late perceptual process that determines what stimuli should be ultimately encoded into memory. Such a process may be supported in a feedforward manner by the ventral visual stream regions in the current study (regions along the inferotemporal cortex, overlapping with the FFA, PPA, and LOC); these regions may be identifying the memorability of images through a set of high-level perceptual features. Or, regions within the medial temporal lobe (PRC, PHC, hippocampus) could have an initial sensitivity to memorability, then causing a feedback-driven attentional signal, resulting in heightened response in these late perceptual regions (possibly explaining why scene-selective regions like the PPA show a sensitivity to memorable face images in the current study, due to a generally heightened response for memorable stimuli). Regardless of the region from which memorability sensitivity originates, the current results show a specific memorability-centric representational geometry in these sets of regions, where memorable stimuli are more neurally similar to one another, while more forgettable stimuli are more neurally dissimilar. These regions may support a statistical representation of our sensory experience that can distinguish certain stimuli as being more distinctive or memorable than others, and thus important to encode for later. While the current study gives strong evidence for stimulus memorability as an important influence on neural patterns during a memory retrieval task, several new questions open up about the representation of memorability in the brain. 
While memorability has been shown to be an intrinsic property to a stimulus^[@CR1],[@CR13]--[@CR21]^ that is processed early during perception in the brain^[@CR19],[@CR31]^, what are the perceptual features of an image that make it memorable? Memorability has been shown to be inexplicable by several other attributes such as attractiveness and subjective distinctiveness for faces^[@CR1]^ and aesthetics and object content for scenes^[@CR16]^. Viewing memorable and forgettable face and scene images controlled for several perceptual and other mid-level attributes (e.g., color, spatial frequency, gender, race, age, attractiveness, emotion, confidence, object content, scene category) during memory encoding still results in similar patterns in the brain as the current results^[@CR22]^. This means that, for the time being, memorability-based brain effects cannot be explained by straightforward low-level visual and perceptual differences (e.g., differential symmetry based on attractiveness, differences in brightness between images). Computational models are beginning to gain moderate success at quantifying memorability from images^[@CR15]^, and additional work will be necessary to fully understand the perceptual features that cause an image to be memorable. Faces, specifically, could serve as the perfect testbed as their features are often more physically quantifiable and manipulable in comparison to more unconstrained image categories such as scenes. For example, memorability could be tested in a data-driven manner for a large-scale set of face images that vary along all of the above-mentioned perceptual features. Further work is also required to probe the precise statistical representations of these regions as they relate to the memorability of a stimulus, and the current RSAs could be expanded through varying other feature dimensions of the stimuli and examining the variance in memorability they can explain, or by looking at other distance metrics in the representational geometry. Similarly, while the current study finds generally similar effects of memorability for both hit and correct rejection trials, future experiments with higher trial counts for incorrectly performed trials (e.g., false alarms) could facilitate the investigation of how false recognition is influenced by stimulus memorability (e.g., what might be happening in the brain when certain novel stimuli consistently feel familiar?). Also, while the current study showed memorability sensitivity during correct rejection trials, might there be differential sensitivity for those novel stimuli that are correctly rejected but then successfully encoded versus those that are not (as could be assessed by a subsequent recognition test, like in^[@CR32]^)? In other words, it is conceivable that the memorability effects we observed during retrieval may at least in part be a reflection of continuous encoding-related processes. Finally, as the current work expands our framework of memorability beyond a binary metric (i.e., memorable versus forgettable) to a continuous one (i.e., an image is 64% memorable), how might this continuous representation of memorability relate to a more fine-grained continuous metric of observer memory (e.g., a 20-point scale of memory strength as in^[@CR33]^)? 
In addition to furthering our understanding of the neural interaction between the memorability of a stimulus and the ultimate memory fate of that stimulus for an individual subject, these results also showcase how novel insights can be gained by reanalyzing existing fMRI datasets using new technologies and frameworks. While this dataset originally was used to examine the decoding of subject-specific memory performance from BOLD data^[@CR23]^, the current experiment leverages the power of online crowd-sourced studies to collect large amounts of data about the stimuli themselves, in order to analyze item-level effects within the same neural data. Until now, most neuroimaging studies treat all exemplars of a stimulus category (e.g., faces, scenes, objects, words) as an equally good member. However, with recent strides in Big Data collection and analysis, researchers have begun to uncover subtle item-driven differences that may account for effects beyond just the stimulus condition. In the era of increased data sharing, so long as researchers share their full stimulus sets along with the fMRI data, many interesting and previously unforeseen explorations of their data may be possible. Methods {#Sec9} ======= fMRI experiment and data analysis {#Sec10} --------------------------------- Please refer to^[@CR23]^ for specific details on the data used in this experiment. Generally, 16 participants (10 female) were recruited for a face memory test where they studied 200 novel face images (4 s each) outside of the scanner, and then were tested for recognition with an event-related design inside the scanner approximately 1 h later. In the recognition test, the 200 original images were intermixed with an additional 200 novel foil images. Each image was shown for 2 s and participants noted their memory with five different options: (1) recollected the specific face, (2) high confidence for having seen the face, (3) low confidence for having seen the face, (4) low confidence for it being novel, and (5) high confidence for it being novel. There was a 8 s fixation interval between each image. Written informed consent was obtained for all participants under a protocol approved by the Institutional Review Board (IRB) at Stanford University, and all methods for this study were performed in accordance with the regulations set by the Stanford University IRB. The experiment was conducted on a 3 T GE Signa MRI system (TR = 2 s, TE = 30 ms, flip angle = 75, FoV = 22 cm, in-plane resolution = 3.44 mm × 3.44 mm × 4.00 mm). Data were preprocessed with SPM5 ([www.fil.ion.ucl.ac.uk/spm](http://www.fil.ion.ucl.ac.uk/spm)), including motion correction with a two-pass, six-parameter, rigid-body realignment procedure. Each time series was also high-pass filtered, detrended, and z-scored. Voxel data were taken as the average of the time points at the peak of the hemodynamic response (the 3rd and 4th TRs relative to face onset, representing BOLD effects measured 4--8 s post-stimulus). No spatial smoothing was applied. For visualization purposes, results were transformed to the Talairach atlas space, and are displayed on a model inflated brain. Figures were produced using BrainVoyager QX 2.8 (<http://www.brainvoyager.com/>)^[@CR34]^. 
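To make the trial-wise feature extraction described above concrete (z-scored voxel time series, averaged over the 3rd and 4th TRs after face onset), a sketch along the following lines could be used. It assumes a preprocessed voxels-by-time matrix and a list of onset TRs; the function name, the exact indexing of the peak TRs, and the use of NumPy are assumptions for illustration rather than the original SPM-based code.

import numpy as np

def trialwise_patterns(ts, onset_trs, peak_offsets=(2, 3)):
    """Average z-scored BOLD at the HRF peak for each trial.
    ts: array (n_voxels, n_timepoints), already high-pass filtered and detrended.
    onset_trs: TR indices of face onsets; peak_offsets=(2, 3) takes the 3rd and 4th
    TRs relative to onset (an assumed indexing convention)."""
    z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)  # z-score each voxel
    trials = []
    for onset in onset_trs:
        peak = [onset + off for off in peak_offsets]
        trials.append(z[:, peak].mean(axis=1))  # average the two peak TRs
    return np.vstack(trials)  # shape (n_trials, n_voxels)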
Region of interest (ROI) specifications {#Sec11} --------------------------------------- In the current experiment, key ROIs in the MTL and ventral visual stream were identified in each participant: the hippocampus (including hippocampal head, body, and tail), amygdala, ERC, PRC, PHC, FFA, PPA, LOC, and EVC. These regions were identified using a probabilistic map of voxels shared by at least 25% participants (a common threshold used in other work: e.g.^[@CR30]^,) based on ROIs determined from previous studies, which included 40 participants for MTL regions and 70 participants for ventral visual stream regions^[@CR22]^. The original MTL ROIs were manually segmented using MTL anatomical landmarks^[@CR35],[@CR36]^, while ventral visual stream ROIs were determined based on independent functional localizers containing face, scene, object, and scrambled images. The probabilistically determined ROIs were then transformed into each participant's native subject space for analyses. Memorability test and consistency analysis {#Sec12} ------------------------------------------ A face memory test was conducted on AMT, an online experiment crowd-sourcing platform, as in^[@CR1]^. A total of 941 workers (404 female; age: *M* = 34.94, *SD* = 11.14, *range*: 18--76) participated in the experiment, first providing informed consent following the protocol guidelines approved by the IRB at the Massachusetts Institute of Technology. The methodology for this memory test was performed in accordance with these IRB guidelines. In the test, participants saw a continuous stream of face images and pressed a key whenever they saw an image they had seen previously during the experiment. Each image was shown for 1.0 s with a 1.4 s interstimulus interval. 80 of the images (20%) shown were "target" images that repeated during the experiment 91--109 images after their first presentation. The remaining 320 images (80%) were "filler" images, used as spacing between target images, and sometimes repeating soon after their first presentation (1--7 images apart) to ensure that participants were paying attention to the task. There was no difference between target and filler images from the perspective of the participant, as they were continuously identifying repeats (unaware of target or filler status of each image), but memorability scores were only determined for the targets, because of the longer delay between their repetitions. In order to ensure participants were attentive to the task, if participants made more than 50% false alarms or missed more than 50% image repeats for the filler images, then they were prevented from continuing with the task, and their data were not incorporated into our analyses. 201 participants failed these criteria, and so ultimately the data from 740 (323 female; age: *M* = 34.50, *SD* = 10.97, range: 18--76) participants were used in the analyses. Targets and fillers were counterbalanced across participants, so ultimately approximately 50 participants made a target repeat response on each of the 400 images. Memorability consistency was assessed through split-half ranking correlations^[@CR13]^. Essentially, images were ranked by memorability score (*HR*, *FA*, *Pr*, or *d-prime*) determined from one random half of the participants and these scores were correlated with those of the same image ordering for the second half of participants, using Spearman's rank correlation tests. These split halves were repeated over 10,000 iterations, and then averaged to get a final consistency score. 
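A minimal sketch of this split-half consistency procedure is shown below. The per-subject score matrix, the NumPy/SciPy calls, and the seed handling are illustrative assumptions; only the logic (random participant halves, Spearman correlation of per-image scores, averaging over iterations) follows the description above.

import numpy as np
from scipy.stats import spearmanr

def split_half_consistency(scores_by_subject, n_iter=10000, seed=None):
    """Average Spearman rho of per-image memorability scores across random participant halves.
    scores_by_subject: array (n_subjects, n_images), e.g. 1 = hit, 0 = miss for each image."""
    rng = np.random.default_rng(seed)
    n_subj = scores_by_subject.shape[0]
    half = n_subj // 2
    rhos = []
    for _ in range(n_iter):
        order = rng.permutation(n_subj)
        score_a = scores_by_subject[order[:half]].mean(axis=0)   # memorability from half 1
        score_b = scores_by_subject[order[half:]].mean(axis=0)   # memorability from half 2
        rhos.append(spearmanr(score_a, score_b).correlation)
    return float(np.mean(rhos))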
This metric indicates to what degree memorability scores are stable in an image, across different groups of observers. Chance in this analysis represents the average consistency if the image ordering was randomly permuted in the second half of participants before correlating, and p-values represent the results of a permutation test. Representational similarity analyses (RSAs) {#Sec13} ------------------------------------------- RSAs allow the testing of specific hypotheses of how information may be represented in the brain, as well as directly comparing different hypotheses^[@CR37]^. In the current experiment, hypotheses were defined based on stimulus memorability as well as subject memory and then contrasted. Hypotheses are formed through a representational similarity matrix (RSM), indicating hypothesized degree of similarity (or dissimilarity) between every stimulus pair. The hypotheses were selected based on winning hypotheses found in^[@CR22]^ (the only hypothetical models significantly correlated with voxel patterns in perceptual and memory-related ROIs), but are now expanded to use memorability as a continuous value rather than as a binary category type. This previous work used images at the extreme ends of memorability (i.e., binary categories of highly memorable and highly forgettable images), and found that regions across the brain showed significant correlations to two possible hypothesized models^[@CR22]^: (1) the more memorable an image is, the more neurally similar it is, and (2) the more remembered an image is for an individual participant, the more neurally similar it is (i.e., similar activity patterns during encoding are predictive of subsequent memory). These models are akin to other work showing there are regions in the brain that show more similar neural patterns for images that are ultimately remembered^[@CR38]--[@CR42]^. In the current study, using memorability as a continuous metric allows us to examine more fine-grained hypotheses about the representation of memorability in the brain, not previously possible in prior memory work. To construct an RSM based on image memorability, pairwise similarity was taken as the mean *Pr* of every possible pair of two stimuli across all response types. With this model, images would be hypothesized to be most neurally similar to the most memorable image; i.e., two memorable images would be neurally similar, and forgettable images would be more neurally similar to memorable images than other forgettable images (Fig. [3](#Fig3){ref-type="fig"}). This would predict a memorability-centric geometry, where items that are memorable take on a more stereotyped neural pattern, whereas items that are forgettable have no specific pattern. This geometry has been shown previously for encoding of items that are subsequently remembered versus forgotten by participants^[@CR38]--[@CR42]^. A subject memory-based hypothesized RSM was also constructed for all previously studied stimuli, except using binary values based on individual subject memory performance (i.e., remembered = 1, forgotten = 0) rather than stimulus memorability, and then taking the mean memory performance for each image pair. Thus, pairs of images where both were remembered were coded with a 1, pairs of images where one was remembered and one was forgotten was coded with a 0.5, and pairs of images where both were forgotten was coded with a 0. 
Ultimately, each participant ends up with their own individualized memory-based hypothesis RSM, while the memorability-based hypothesis RSM is stimulus-centric and thus the same across all participants. Once hypothesized memorability-based and memory-based RSMs are defined, they are then compared to the real data-based RSMs (Fig. [3](#Fig3){ref-type="fig"}). 7-voxel diameter spherical searchlights were moved through the brain and data-based RSMs were formed through Pearson's correlations between the voxel values (BOLD signal change averaged at the 3rd and 4th TRs relative to stimulus onset) within that sphere for every image pair. Hypothesized RSMs and data-based RSMs were then compared using Spearman's rank correlations at each spherical searchlight in the brain, resulting in a whole-brain correlation map for each participant of the degree to which actual BOLD data relates to that hypothesis. The correlation maps were then transformed with Fischer's z-transformations to allow parametric statistics to be performed on them. Group whole-brain maps were created by normalizing all participants to Talairach space and then performing *t*-tests of the whole-brain maps of the sixteen participants versus a null hypothesis of no correlation (*r* = 0). The resulting maps show where in the brain each hypothesis is significantly correlated with the BOLD data, across participants. As there were large differences between the effects being compared (for example, subject-based memory behavior has more widespread effects as it is tied to the task, motor response, and participants' decisions, while stimulus memorability was not a task-relevant variable per se, and was only parameterized post-hoc), we show the comparison of any two maps by showing the top 1000 significant voxels for each map. ROI analyses were also conducted using the same methods, but by extracting data-based RSMs from the voxels within probabilistically defined ROIs rather than spherical searchlights. The memorability-based RSA was done including all 400 stimuli regardless of image history (old/new) and participant response (confidence). In addition, we conducted separate memorability-based RSAs for specific memory response types (i.e., hit trials or correct rejection trials). For hit trials, we took all old trials with a correct old response (recollection, high confidence old, low confidence old), and for correct rejection trials, we took new trials with a correct new response (high confidence new, low confidence new). Confidence levels were not separated because of the loss in power from the decrease in trial counts (i.e., on average there were 135 hit trials, and of those, 36.4 recollection responses, 43.2 high confidence, and 55.3 low confidence). We downsampled the data so that the number of trials was the same across trial types. To do this, for each participant, we identified the minimum number of trials *m* between the two types (hit trials, correct rejection trials). For the trial type with more than *m* trials, we took a random subset of *m* trials and conducted the RSM with that subset of trials. This way, any effects we find are not due to there being more trials of one memory response type than the other. Participants generally had high numbers of both trial types, with on average 135 hit trials (*SD* = 22.9, *min* = 93, *max* = 168), and 138 correct rejection trials (*SD* = 25.3, *min* = 89, *max* = 177). 
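To make the model and data RSM construction concrete, a simplified sketch for a single searchlight sphere or ROI is given below. The helper names and the NumPy/SciPy implementation are assumptions for illustration; details such as searchlight extraction, per-trial downsampling, and the group-level t-tests are omitted.

import numpy as np
from scipy.stats import spearmanr

def model_rsm(values):
    """Hypothesized RSM: similarity of an image pair is the mean of the two images' values
    (continuous Pr for the memorability model, 0/1 remembered for the memory model)."""
    v = np.asarray(values, dtype=float)
    return (v[:, None] + v[None, :]) / 2.0

def data_rsm(patterns):
    """Data RSM: Pearson correlation between the voxel patterns of every image pair.
    patterns: array (n_images, n_voxels) from one searchlight sphere or ROI."""
    return np.corrcoef(patterns)

def rsm_fit(patterns, values):
    """Spearman correlation between model and data RSMs (off-diagonal pairs only),
    Fisher z-transformed for parametric group statistics."""
    iu = np.triu_indices(len(values), k=1)  # each pair once, diagonal excluded
    rho = spearmanr(model_rsm(values)[iu], data_rsm(patterns)[iu]).correlation
    return np.arctanh(rho)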
Support vector regression (SVR) analyses {#Sec14} ---------------------------------------- SVRs were used to predict memorability score *Pr* from BOLD activity. SVRs allow us to determine the degree to which a representation of memorability along a continuous scale exists in the brain. *Pr*s were first logit-transformed to allow for linear statistics. SVRs were conducted with 100 iterations, using 80% of the trials for training and 20% of the trials for testing with no replacement. Accuracy was taken as the mean Pearson's correlation coefficient between predicted *Pr*s and actual *Pr*s over the 100 iterations. For the whole brain analyses, SVRs were conducted in a searchlight sphere of 7-voxel diameter moved throughout the brain. For the ROI analyses, SVRs were conducted in the probabilistically-defined ROIs. Data availability {#Sec15} ----------------- No new datasets were generated during the current study, however tools for doing these analyses (i.e., measuring the memorability of stimuli from a prior study) are available on the corresponding author's website. Electronic supplementary material ================================= {#Sec16} Supplementary Information **Electronic supplementary material** **Supplementary information** accompanies this paper at 10.1038/s41598-018-26467-5. **Publisher\'s note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. We would like to thank Aude Oliva for helpful discussions on the work and her comments on the manuscript, and Anthony D. Wagner for guidance and support with the design and data collection for the original (2010) fMRI study. The original data were provided by J.R. Analyses were conducted by W.A.B. The manuscript was written by W.A.B. and J.R. Competing Interests {#FPar1} =================== The authors declare no competing interests.
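For completeness, the cross-validated SVR decoding described in the Methods could be sketched roughly as follows. Scikit-learn and SciPy are assumed, and the linear kernel and the clipping of Pr before the logit transform are placeholders not specified in the paper.

import numpy as np
from scipy.special import logit
from scipy.stats import pearsonr
from sklearn.svm import SVR

def svr_decoding_accuracy(patterns, pr_scores, n_iter=100, train_frac=0.8, seed=None):
    """Mean Pearson r between predicted and actual (logit-transformed) Pr over random 80/20 splits.
    patterns: array (n_trials, n_voxels); pr_scores: memorability Pr per trial, in (0, 1)."""
    rng = np.random.default_rng(seed)
    y = logit(np.clip(pr_scores, 1e-3, 1 - 1e-3))  # logit-transform Pr, avoiding exact 0/1
    n = len(y)
    n_train = int(round(train_frac * n))
    rs = []
    for _ in range(n_iter):
        order = rng.permutation(n)
        train, test = order[:n_train], order[n_train:]
        model = SVR(kernel="linear").fit(patterns[train], y[train])  # kernel is an assumed choice
        rs.append(pearsonr(model.predict(patterns[test]), y[test])[0])
    return float(np.mean(rs))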
{ "pile_set_name": "Pile-CC" }
Post navigation For a physical book. (I know: for my own consumption, I’ve only bought ebooks for the last six months or so. But this is a richly illustrated book, and it’s meant as a gift, so I want it in its full physicality, its cellulosic, tree-killing, chemically enhanced glossy incarnation. Plus, it’s not sold in ebook form.) In the space of 40 days, I have received 10 order updates from amazon.co.uk. The first (April 6) told me: “We regret to inform you that your order will take longer to fulfill than originally estimated.” On April 14, I received a new estimated delivery date: April 29. On April 15, I was told “We are pleased to report that the following item will dispatch sooner than expected”, i.e. April 21-22. On April 18, a new delay (estimated delivery date: April 23-30), together with an appropriately contrite apology statement: “One of Amazon’s aims is to provide a convenient and efficient service; in this case, we have fallen short. Please accept our sincere apologies.” On April 20, a new estimated delivery date (April 27 – May 4), with the same statement. Again, on April 22 (estimated delivery date: April 28-May 5), April 24 (estimated delivery date: April 30 – May 6), April 27 (estimated delivery date: May 4-9), and April 29 (estimated delivery date: May 5-10): all with Amazon’s sincere apologies. Then, the apologies stopped. On May 1, “We are awaiting a revised estimate from our supplier, and will email you as soon as we receive this information.” On the same day, two minutes later, another email with a new estimated delivery date: May 6. I wonder if I’m heading into another loop of apologies, revisions, notifications and estimates. I have tons of respect for Amazon’s operational abilities and I like knowing what’s going on with my order, but this is starting to feel like a case where making the catalog item available for ordering was perhaps premature. And in terms of communicating with the customer… more than, say, one update per week feels like too much information. It’s a book, after all: not a kidney, or a new set of corneas. Just get it to me when it’s ready, OK? Update, May 22: Since writing this post, I received 13 more email updates from Amazon along the same lines. The 14th was different: “We regret to inform you that we have been unable to obtain the following item… We apologise for the length of time it has taken us to reach this conclusion. Until recently, we had still hoped to obtain this item for you.” After 24 emails, it’s almost a relief.
{ "pile_set_name": "Pile-CC" }
Vashikaran mantra is derived from the great Vedic and astrological traditions of ancient Indian history. The initial step of this art is to recite the "mantra", which enables the production of the power and spirit needed for the vashikaran mantra to work. Vashikaran mantra is strongly related to attraction, and to controlling others' feelings to some extent. Many pandits and babas have been practicing vashikaran for many years. The question now is how to recite it and become expert in the art of vashikaran mantra for love. Even tasks considered to be the hardest can be easily resolved with vashikaran mantra in Hindi. Vashikaran has been in existence from very early ages. The devils possessed the gods with mantra. Both Yantra and Mantra are very essential for achieving full knowledge. Vashikaran specialist mantra: if done with full self-confidence and self-belief, success is imminent. The Vedas say that the main motive of vashikaran mantra should be the well-being of human society. A perfect example of this is when Shree Krishna used to play the flute: all of nature used to get mesmerized by the power of a simple, yet very powerful, mantra. This is known as Mahakaali and Shree Krishna's "Beej Mantra".
{ "pile_set_name": "StackExchange" }
Q: Ansible - Mode 755 for directories and 644 for files recursively

I'd like to allow anyone to list and read all files in my directory tree, but I don't want to make the files executable:

dir
  \subdir1
    file1
  \subdir2
    file2
  ...
  \subdirX
    fileX

The following task makes my directories and files readable, but it makes all the files executable as well:

- name: Make my directory tree readable
  file:
    path: dir
    mode: 0755
    recurse: yes

On the other hand, if I choose mode 0644, then all my files are not executable, but I'm not able to list my directories. Is it possible to set mode 755 for all directories and 644 for all files in a directory tree? Thank you.

A: Since version 1.8, Ansible supports symbolic modes. Thus, the following would perform the task you want:

- name: Make my directory tree readable
  file:
    path: dir
    mode: u=rwX,g=rX,o=rX
    recurse: yes

Because X (instead of x) only applies to directories or files with at least one x bit set.

A: The Ansible file/copy modules don't give you the granularity of specifying permissions based on file type so you'd most likely need to do this manually by doing something along these lines:

- name: Ensure directories are 0755
  command: find {{ path }} -type d -exec chmod -c 0755 {} \;
  register: chmod_result
  changed_when: "chmod_result.stdout != \"\""

- name: Ensure files are 0644
  command: find {{ path }} -type f -exec chmod -c 0644 {} \;
  register: chmod_result
  changed_when: "chmod_result.stdout != \"\""

These would have the effect of recursing through {{ path }} and changing the permissions of every file or directory to the specified permissions.
{ "pile_set_name": "Wikipedia (en)" }
Sikhism in Belgium Sikhism is a minority religion in Belgium, but Sikhs have played a role in Belgian history; during World War I, many Sikhs fought in Belgium. In the First Battle of Ypres, an entire platoon of Dogra Sikhs died. Migration to Belgium The first eight Sikhs who came to Belgium as private citizens arrived on 8 November 1972 as political exiles. They were expelled from Uganda; at the time, it was under the dictatorial rule of Idi Amin, who drove all Indians from the country. Other Sikhs who arrived before 1985 (only a handful, among them Jarnail Singh Alhuwalia) were workers at the Indian Embassy. Most Sikhs arrived in the wake of Sukhdev Singh Jalwehra in 1985, after the storming of the Golden Temple in Amritsar by Indian troops the previous year. When Jalwehra arrived in Belgium, a ban existed on the wearing of turbans in passport or identity-card photos. Jalwehra fought the case and won, and Sikhs were no longer required to remove their turbans. In 1993, when King Baudouin I died, Sukhdev Singh Jalwehra paid his respects at the palace with a group of other Sikhs as representatives of the Belgian Sikh community. The first Sikhs in Belgium were predominantly male laborers with limited education, sharing a rented house and dividing the costs. Since they were accustomed to working in agriculture, they looked for work in that sector and found seasonal jobs in the Flemish province of Limburg on fruit farms. Later Sikhs immigrated for economic reasons; they had been living in impoverished regions of the Punjab and came looking for a better life in Belgium. At first they also found employment on fruit farms but when they could afford to do so they established their own shops, particularly shops remaining open at night in Brussels. For these immigrants, completing the necessary paperwork was challenging. Sikh women are now arriving in Belgium in greater numbers, many to reunite their families. Persecution In 1994, the government of the United States noted that while Belgium has freedom of religion and has not seen much systematic violence directed against religious minorities or newcomers, an exception occurred in 1993 against Sikhs. In Sint-Truiden, Sikh workers in agriculture were bullied by some citizens and one Sikh was shot. A house belonging to Sikhs was bombed, with no fatalities. There were arrests in the aftermath. Official Intervention Following problems with immigration documentation which gave rise to concerns about possible human trafficking, the Mayor of Vilvoorde closed the local Gurdwara between October and December 2014, and following disturbances in the Temple, again in September 2016. Gurdwaras There are seven Gurdwaras in Belgium; the oldest was founded in Sint-Truiden in 1993. Gurmit Singh Aulakh (who came to Belgium in November 2007) made a speech invoking Khalistan slogans at Gurudwara Sangat Sahib in Sint-Truiden. Gurudwaras in Belgium are: Gurdwara Sangat Sahib, Watermaal and Sint-Truiden (Founded in 1993) Gurdwara Guru Nanak Sahib, Vilvoorde, Brussels (1999) Gurdwara Guru Ram Dass Sikh Study and Cultural Center, Borgloon (2005) In Luik/Liège (2005) Gurudwara Mata Sahib Kaur (2014) Gurdwara Singh Sabha, Alken (2013), Gurudwara Official website Gurdwara Shri Guru Ravidass Sabha, Oostende Sikh population According to a Dutch newspaper, there are approximately 10,000 Sikhs in Belgium. A Sikh stronghold is Sint-Truiden (Limburg), where the first Sikh Gurdwara was built. There are about 3,000 Sikhs in Limburg, 2,000 in Liege and more than 2,000 in Brussels.
The remainder live throughout Belgium. Cities with significant Sikh populations are: Vilvoorde Sint-Truiden Liège (city) Borgloon Tienen Ostend Ghent Antwerp Leuven Alken Hasselt References External links Gurudwara Guru Nanak Sahib in Brussels Sikhism Belgium Bel
{ "pile_set_name": "Github" }
/* * Copyright (c) 2014 The WebRTC project authors. All Rights Reserved. * * Use of this source code is governed by a BSD-style license * that can be found in the LICENSE file in the root of the source * tree. An additional intellectual property rights grant can be found * in the file PATENTS. All contributing project authors may * be found in the AUTHORS file in the root of the source tree. */ #ifndef WEBRTC_COMMON_AUDIO_VAD_MOCK_MOCK_VAD_H_ #define WEBRTC_COMMON_AUDIO_VAD_MOCK_MOCK_VAD_H_ #include "webrtc/common_audio/vad/include/vad.h" #include "webrtc/test/gmock.h" namespace webrtc { class MockVad : public Vad { public: virtual ~MockVad() { Die(); } MOCK_METHOD0(Die, void()); MOCK_METHOD3(VoiceActivity, enum Activity(const int16_t* audio, size_t num_samples, int sample_rate_hz)); MOCK_METHOD0(Reset, void()); }; } // namespace webrtc #endif // WEBRTC_COMMON_AUDIO_VAD_MOCK_MOCK_VAD_H_
{ "pile_set_name": "Github" }
<?xml version="1.0" encoding="utf-8"?> <Project ToolsVersion="14.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" /> <PropertyGroup> <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration> <Platform Condition=" '$(Platform)' == '' ">x86</Platform> <ProjectGuid>{DDFB13E8-92EC-4F6F-AAEF-90A71F12812E}</ProjectGuid> <OutputType>AppContainerExe</OutputType> <AppDesignerFolder>Properties</AppDesignerFolder> <RootNamespace>Samples.UWP</RootNamespace> <AssemblyName>Samples.UWP</AssemblyName> <DefaultLanguage>en-US</DefaultLanguage> <TargetPlatformIdentifier>UAP</TargetPlatformIdentifier> <TargetPlatformVersion>10.0.14393.0</TargetPlatformVersion> <TargetPlatformMinVersion>10.0.10586.0</TargetPlatformMinVersion> <MinimumVisualStudioVersion>14</MinimumVisualStudioVersion> <EnableDotNetNativeCompatibleProfile>true</EnableDotNetNativeCompatibleProfile> <FileAlignment>512</FileAlignment> <ProjectTypeGuids>{A5A43C5B-DE2A-4C0C-9213-0A381AF9435A};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids> <PackageCertificateKeyFile>Samples.UWP_TemporaryKey.pfx</PackageCertificateKeyFile> <PackageCertificateThumbprint>5D80C6709564B0BBADBD1B115AFCADCB0B9307AB</PackageCertificateThumbprint> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|ARM'"> <DebugSymbols>true</DebugSymbols> <OutputPath>bin\ARM\Debug\</OutputPath> <DefineConstants>DEBUG;TRACE;NETFX_CORE;WINDOWS_UWP</DefineConstants> <NoWarn>;2008</NoWarn> <DebugType>full</DebugType> <PlatformTarget>ARM</PlatformTarget> <UseVSHostingProcess>false</UseVSHostingProcess> <ErrorReport>prompt</ErrorReport> <Prefer32Bit>true</Prefer32Bit> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|ARM'"> <OutputPath>bin\ARM\Release\</OutputPath> <DefineConstants>TRACE;NETFX_CORE;WINDOWS_UWP</DefineConstants> <Optimize>true</Optimize> <NoWarn>;2008</NoWarn> <DebugType>pdbonly</DebugType> <PlatformTarget>ARM</PlatformTarget> <UseVSHostingProcess>false</UseVSHostingProcess> <ErrorReport>prompt</ErrorReport> <Prefer32Bit>true</Prefer32Bit> <UseDotNetNativeToolchain>true</UseDotNetNativeToolchain> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x64'"> <DebugSymbols>true</DebugSymbols> <OutputPath>bin\x64\Debug\</OutputPath> <DefineConstants>DEBUG;TRACE;NETFX_CORE;WINDOWS_UWP</DefineConstants> <NoWarn>;2008</NoWarn> <DebugType>full</DebugType> <PlatformTarget>x64</PlatformTarget> <UseVSHostingProcess>false</UseVSHostingProcess> <ErrorReport>prompt</ErrorReport> <Prefer32Bit>true</Prefer32Bit> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x64'"> <OutputPath>bin\x64\Release\</OutputPath> <DefineConstants>TRACE;NETFX_CORE;WINDOWS_UWP</DefineConstants> <Optimize>true</Optimize> <NoWarn>;2008</NoWarn> <DebugType>pdbonly</DebugType> <PlatformTarget>x64</PlatformTarget> <UseVSHostingProcess>false</UseVSHostingProcess> <ErrorReport>prompt</ErrorReport> <Prefer32Bit>true</Prefer32Bit> <UseDotNetNativeToolchain>true</UseDotNetNativeToolchain> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x86'"> <DebugSymbols>true</DebugSymbols> <OutputPath>bin\x86\Debug\</OutputPath> <DefineConstants>DEBUG;TRACE;NETFX_CORE;WINDOWS_UWP</DefineConstants> <NoWarn>;2008</NoWarn> 
<DebugType>full</DebugType> <PlatformTarget>x86</PlatformTarget> <UseVSHostingProcess>false</UseVSHostingProcess> <ErrorReport>prompt</ErrorReport> <Prefer32Bit>true</Prefer32Bit> </PropertyGroup> <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x86'"> <OutputPath>bin\x86\Release\</OutputPath> <DefineConstants>TRACE;NETFX_CORE;WINDOWS_UWP</DefineConstants> <Optimize>true</Optimize> <NoWarn>;2008</NoWarn> <DebugType>pdbonly</DebugType> <PlatformTarget>x86</PlatformTarget> <UseVSHostingProcess>false</UseVSHostingProcess> <ErrorReport>prompt</ErrorReport> <Prefer32Bit>true</Prefer32Bit> <UseDotNetNativeToolchain>true</UseDotNetNativeToolchain> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.NETCore.UniversalWindowsPlatform" Version="6.0.8" /> <PackageReference Include="Xamarin.Forms" Version="2.5.0.280555" /> <PackageReference Include="Win2D.uwp" Version="1.21.0" /> </ItemGroup> <ItemGroup> <None Include="Samples.UWP_TemporaryKey.pfx" /> </ItemGroup> <ItemGroup> <Compile Include="App.xaml.cs"> <DependentUpon>App.xaml</DependentUpon> </Compile> <Compile Include="MainPage.xaml.cs"> <DependentUpon>MainPage.xaml</DependentUpon> </Compile> <Compile Include="Properties\AssemblyInfo.cs" /> </ItemGroup> <ItemGroup> <AppxManifest Include="Package.appxmanifest"> <SubType>Designer</SubType> </AppxManifest> </ItemGroup> <ItemGroup> <Content Include="Assets\LargeTile.scale-100.png" /> <Content Include="Assets\LargeTile.scale-125.png" /> <Content Include="Assets\LargeTile.scale-150.png" /> <Content Include="Assets\LargeTile.scale-200.png" /> <Content Include="Assets\LargeTile.scale-400.png" /> <Content Include="Assets\SmallTile.scale-100.png" /> <Content Include="Assets\SmallTile.scale-125.png" /> <Content Include="Assets\SmallTile.scale-150.png" /> <Content Include="Assets\SmallTile.scale-200.png" /> <Content Include="Assets\SmallTile.scale-400.png" /> <Content Include="Assets\SplashScreen.scale-100.png" /> <Content Include="Assets\SplashScreen.scale-125.png" /> <Content Include="Assets\SplashScreen.scale-150.png" /> <Content Include="Assets\SplashScreen.scale-200.png" /> <Content Include="Assets\SplashScreen.scale-400.png" /> <Content Include="Assets\Square150x150Logo.scale-100.png" /> <Content Include="Assets\Square150x150Logo.scale-125.png" /> <Content Include="Assets\Square150x150Logo.scale-150.png" /> <Content Include="Assets\Square150x150Logo.scale-200.png" /> <Content Include="Assets\Square150x150Logo.scale-400.png" /> <Content Include="Assets\Square44x44Logo.altform-unplated_targetsize-16.png" /> <Content Include="Assets\Square44x44Logo.altform-unplated_targetsize-24.png" /> <Content Include="Assets\Square44x44Logo.altform-unplated_targetsize-256.png" /> <Content Include="Assets\Square44x44Logo.altform-unplated_targetsize-32.png" /> <Content Include="Assets\Square44x44Logo.altform-unplated_targetsize-48.png" /> <Content Include="Assets\Square44x44Logo.scale-100.png" /> <Content Include="Assets\Square44x44Logo.scale-125.png" /> <Content Include="Assets\Square44x44Logo.scale-150.png" /> <Content Include="Assets\Square44x44Logo.scale-200.png" /> <Content Include="Assets\Square44x44Logo.scale-400.png" /> <Content Include="Assets\Square44x44Logo.targetsize-16.png" /> <Content Include="Assets\Square44x44Logo.targetsize-24.png" /> <Content Include="Assets\Square44x44Logo.targetsize-256.png" /> <Content Include="Assets\Square44x44Logo.targetsize-32.png" /> <Content Include="Assets\Square44x44Logo.targetsize-48.png" /> <Content Include="Assets\StoreLogo.scale-100.png" 
/> <Content Include="Assets\StoreLogo.scale-125.png" /> <Content Include="Assets\StoreLogo.scale-150.png" /> <Content Include="Assets\StoreLogo.scale-200.png" /> <Content Include="Assets\StoreLogo.scale-400.png" /> <Content Include="Assets\Wide310x150Logo.scale-100.png" /> <Content Include="Assets\Wide310x150Logo.scale-125.png" /> <Content Include="Assets\Wide310x150Logo.scale-150.png" /> <Content Include="Assets\Wide310x150Logo.scale-200.png" /> <Content Include="Assets\Wide310x150Logo.scale-400.png" /> <Content Include="Properties\Default.rd.xml" /> </ItemGroup> <ItemGroup> <ApplicationDefinition Include="App.xaml"> <Generator>MSBuild:Compile</Generator> <SubType>Designer</SubType> </ApplicationDefinition> <Page Include="MainPage.xaml"> <Generator>MSBuild:Compile</Generator> <SubType>Designer</SubType> </Page> </ItemGroup> <ItemGroup> <ProjectReference Include="..\..\..\src\SignaturePad.Forms.UWP\SignaturePad.Forms.UWP.csproj"> <Project>{6fc62387-6717-4577-a48b-d15848741f08}</Project> <Name>SignaturePad.Forms.UWP</Name> </ProjectReference> <ProjectReference Include="..\..\..\src\SignaturePad.UWP\SignaturePad.UWP.csproj"> <Project>{16131fdc-d50b-4ef2-8ecc-661184ff80db}</Project> <Name>SignaturePad.UWP</Name> </ProjectReference> <ProjectReference Include="..\Samples\Samples.csproj"> <Project>{cc18e151-5792-4da5-ade6-7cfba16a593e}</Project> <Name>Samples</Name> </ProjectReference> </ItemGroup> <PropertyGroup Condition=" '$(VisualStudioVersion)' == '' or '$(VisualStudioVersion)' &lt; '14.0' "> <VisualStudioVersion>14.0</VisualStudioVersion> </PropertyGroup> <Import Project="$(MSBuildExtensionsPath)\Microsoft\WindowsXaml\v$(VisualStudioVersion)\Microsoft.Windows.UI.Xaml.CSharp.targets" /> </Project>
{ "pile_set_name": "Pile-CC" }
I am trying to find out information about my great grandfather. His name was Ettore Petruzzini. He was born in 1889 in Ascoli Satriano, Italy. He came through Ellis Island but I can't find any ship logs with his information on them. I found my great grandmother; her name was Michelina Loporchio, born 1891 in Ascoli Satriano, Italy. She came over at a different time than him. I am also trying to find out if he was naturalized and, if he was, when. "For naturalizations prior to 1878 (when what is now Lackawanna County was part of Luzerne County), contact the Prothonotary’s Office at the Luzerne County Courthouse. They have an index to naturalization records for Luzerne County from 1830-1906 and 1912-1944. If you find your ancestor in the index you then need to send a request to the National Archives, Mid-Atlantic Region in Philadelphia (see address to obtain a photocopy of the original record. Researchers can also visit the National Archives to conduct their own research. Prothonotary’s Office Luzerne County Courthouse 200 North River Street Wilkes Barre, PA 18702 (570) 825-1745"--Quoted from the Lackawanna County Library System's website. I do not know if anyone at the courthouse will do a search for you, but if you still have relatives in the area perhaps one of them could do a search to determine if there is a record. Wow! All of you did so much for me. I am in tears because of all the information you all have uncovered and the correct path to take to get my Italian Citizenship. From the looks of it my grandmother was born while my great grandfather was still an alien. My grandmother is Mary Jane. Thank you all again for your hard work.
{ "pile_set_name": "Github" }
<?xml version="1.0" encoding="UTF-8" ?> <ldml> <identity> <version number="$Revision: 1.4 $"/> <generation date="$Date: 2009/05/05 23:06:37 $"/> <language type="kfo"/> <territory type="CI"/> </identity> </ldml>
{ "pile_set_name": "Wikipedia (en)" }
Christophe Jeannet Christophe Jeannet (born September 22, 1965 in France) is a former professional footballer who played as a goalkeeper. External links Christophe Jeannet profile at chamoisfc79.fr Category:1965 births Category:Living people Category:French footballers Category:Association football goalkeepers Category:Racing Besançon players Category:Chamois Niortais F.C. players Category:Ligue 2 players Category:SO Cholet players
{ "pile_set_name": "Pile-CC" }
P.S. The "finally" above should not be read as a flame against Dr Mounce who, I'm sure, is at least as glad as I am that the book is now available. ;-) Rather, CBD, for example, has been listing the book as available in its catalogue since at least early January --which is when I ordered it-- although I just now received it (actually it came a couple of days ago, but I've been traveling). --N
{ "pile_set_name": "StackExchange" }
Q: Removing a View in Silverlight

I'm using SandRibbons from http://www.divelements.co.uk/net/controls/sandribbonsl/ and I'm trying to add a view to my contextual tab. I've successfully done this by using:

ribbon3.Items.Add(Activator.CreateInstance(viewModel.filterValue));

Although here's the problem: how do I remove the view? I tried

ribbon3.Items.Remove(Activator.CreateInstance(viewModel.filterValue));

and

ribbon3.Items.Remove(viewModel.filterValue);

although it doesn't seem to be working. If anyone has done anything similar or has any input it would be greatly appreciated.

Thanks, Jason

A: I'll have to say I do not use Sandribbons... but from what I can see, you do not refer to the actual view in any of the remove statements you suggested. Possible solutions could be to save the position of your view after you added it:

ribbon3.Items.Add(Activator.CreateInstance(viewModel.filterValue));
var posView = ribbon3.Items.Count - 1; // do it right after you added the view
// some stuff
ribbon3.Items.RemoveAt(posView); // note that it's the RemoveAt method

or to save the view itself before adding it and keeping it as a reference:

var theView = Activator.CreateInstance(viewModel.filterValue);
ribbon3.Items.Add(theView);
// some stuff
ribbon3.Items.Remove(theView);

Of course these are only possible and probably not the most efficient solutions to your issue, but it might help you to consider how to tackle the problem in a more optimal way.
{ "pile_set_name": "OpenWebText2" }
A Justice Department memorandum apparently prepared for Congress lays out for the first time the criteria that the national security establishment uses to decide whether to kill an American who is a senior al Qaeda leader or the leader of one of its operational arms. Honors go to NBC's Mike Isikoff for the score. The memo has no classification markings on it, which tells me that it is a distillation from a larger, probably classified document prepared by the Office of Legal Counsel at the Department of Justice. Generally, the executive branch keeps secrets in one of two ways. They classify something formally, or they claim that something not classified is tantamount to internal work product or private legal advice intended for the president and his advisers only. This allows the administration to keep its internal legal memos away from other branches of government, which it claims the right to do. The Bush Administration revealed operational details of secret programs to Congress but refused to provide the legal reasoning. The Obama administration has, at least in the past two years, begun to brief members of Congress proactively on both sets of details. This is one reason why Congress, for the most part, isn't complaining about "secret laws." (There are some exceptions.) Anyway, here is what the lawyers to the president's lawyer say about the legality of killing Americans who are constituted as al Qaeda or al Qaeda-linked threats. (The memo makes it clear that its language does not apply to other instances where American citizens might be killed by their government.) 1. "An informed, high-level official" must determine that the person represents "an imminent threat" of "violent attack against the United States." 2. Capturing the dude is "infeasible," and the government will continue to assess whether capturing him is feasible. 3. The killing, or "lethal operation," must be conducted according to the laws of war. Sounds kind of broad. What about due process? The memo says that if the person presents an imminent threat to the homeland, then his due-process rights must be weighed against the obligation of the executive to protect the homeland. It asserts that the procedures it lays out do constitute a form of due process (although, significantly, not judicial due process — all of this takes place within the executive branch). It notes that the Supreme Court has as recently as 2006 acknowledged that the U.S. can use force against U.S. citizens engaged in armed combat against the U.S. And it doesn't matter if the person is planning to attack the U.S. from somewhere other than the place where an official war has been declared, so long as the country is determined to be a gathering place for al Qaeda and associated forces. Consent of the home nation? First, the administration will try to obtain it. If the home nation won't deal with the direct threat to U.S. interests, it reserves the right to act alone. Ok, but how does the executive branch determine whether a threat is imminent? Turns out it "does not require clear evidence" because the nature of terrorism and the asymmetries of information make it difficult to ever acquire such evidence. A "broader concept of imminence" is required. So here's what that means: Even if the person is not actively planning terrorist attacks against the U.S., because of the nature of terrorist attacks in general, merely his membership in an organization that is planning those attacks meets the requisite definition of imminence.
So, basically, imminence does not mean imminent. And membership in al Qaeda is seen as tantamount to being in a car when someone decides to shoot someone on the street, even if the other occupant had no knowledge beforehand that the drive-by shooter would act. Accessory to murder, drone edition.
{ "pile_set_name": "PubMed Central" }
Introduction ============ Despite a paucity of data supporting their safety and efficacy, the use of heterogeneous overlapping drug-eluting stents (DESs) is not uncommon for treating diffuse long lesions in clinical practice.[@B1][@B2] Therefore, clinical trials and basic studies are needed to fully clarify their safety and efficacy and identify the potential pathophysiological mechanisms. Additionally, we suppose that the implantation sequence of different DESs may have distinct impacts on the endothelialization and arterial responses, particularly at the overlapping stented segment. Thus, in the present study, we evaluated the impacts of different implantation sequences of heterogeneous overlapping DESs on the endothelialization and arterial responses in rabbit iliac arteries. Materials and Methods ===================== Animal study protocol --------------------- Twenty-one New Zealand White rabbits, weighing 4.14±0.37 kg, were randomly assigned to receive two overlapping bare-metal stents (BMS, Driver™, Medtronic AVE Co., Minneapolis, MN, USA) (group I, B+B, n=7), or distal sirolimus-eluting stent (SES, Cypher™, Cordis Corp, Johnson & Johnson Co., New Brunswick, NJ, USA) plus a proximal paclitaxel-eluting stent (PES, Taxus™, Boston Scientific Co., Natick, MA, USA) (group II, C+T, n=7), or distal PES plus proximal SES (group III, T+C, n=7). After being anesthetized with ketamine (20 mg/kg intramuscularly) and xylazine (2 mg/kg intramuscularly), the stents were deployed at a single iliac artery via a right carotid artery approach under fluoroscopic guidance. The diameters of the stents were determined according to the iliac artery diameter after nitroglycerine infusion with a ratio of stent to artery of 1.0 to 1.1. Individual stents were deployed at their respective nominal pressures (9-11 atm, 10-second balloon inflation), and the overlapped segment was postdilated at 12 atm to ensure complete expansion. All rabbits were pretreated with aspirin 40 mg PO (-10 mg/kg) and clopidogrel 25 mg PO (-6 mg/kg) 24 hours before stenting with continued therapy (same doses of aspirin and clopidogrel) until death. Additionally, unfractionated heparin (150 IU/kg) was administered intravenously before catheterization procedures. This animal study was approved by the Ethical Committee of Korea University Guro Hospital, Seoul, Korea. Follow-up angiography and acetylcholine provocation test -------------------------------------------------------- Animals were anesthetized 3 months after the index procedure, and follow-up angiography of the iliac arteries was performed to evaluate the angiographic outcomes including the presence of stent fracture, patency, and position of the overlapping stents. Endothelial function of the stented iliac artery was assessed after direct arterial infusion to iliac arteries using increasing doses of acetylcholine (ACh) (3, 6, and 15 µg/min) followed by nitroglycerin via a microcatheter. ACh was injected into the stented iliac artery over a 1 minute period with 5 minutes intervals.[@B3] Angiography was repeated after each ACh dose. Then, an intrailiac infusion with 0.2 mg nitroglycerin was administered, and angiography was performed 2 minutes later. If significant vasoconstriction of the iliac artery was induced with any ACh dose, the ACh infusion was stopped. 
Significant endothelial dysfunction was defined as a transient \>90% luminal narrowing of the stented iliac artery.[@B3][@B4] Subsequently, rabbits were euthanized with an overdose of ketamine, and the stented arteries were perfusion fixed *in situ*. Specimens were embedded in 10% formalin; and 3 mm from the proximal segment, middle overlapped segment, and distal ends of the stent were cut with a tungsten carbide knife (Delaware Diamond Knives, Wilmington, DE, USA). Sections were cut on an automated microtome (Leica) and stained with hematoxylin and eosin. Histopathological analysis -------------------------- All histological samples were managed blindly by an expert. Computerized planimetry was performed as described previously.[@B5][@B6] Specimens were embedded in methylmethacrylate, and 50-100 µm sections were cut approximately 1 mm apart with a low-speed diamond wafer mounted on a Buehler Isomet saw (Buehler Ltd., Lake Bluff, IL, USA), leaving the stent wires intact in the cross-sections to minimize potential artifacts caused by removing stent wires. A calibrated microscope digital video imaging system, and a microcomputer program (Visus 2000 Visual Image Analysis System) were used to measure the sections. Lumen area borders were manually traced, and the area was circumscribed by the internal elastic lamina and the innermost border of the external elastic lamina (external elastic lamina area). The measured internal elastic lamina area minus the lumen area was considered the neointimal area. Area stenosis was calculated as 100×{1-(lesion lumen area/lesion internal elastic lamina area)}.[@B5] The measurements were made on four cross-sections of each stent. Arterial injury at each strut was evaluated by the anatomic structures that penetrated each strut. A numeric value was used as previously described by Schwartz et al.[@B7]: 0=no injury; 1=break in the internal elastic membrane; 2=perforation of the media; 3=perforation of the external elastic membrane to the adventitia. The injury score was calculated by dividing the sum of each inflammatory score by the total strut number at the examined section.[@B5][@B6] To determine the inflammatory score for each individual strut, a grading system was used as follows: 0=no inflammatory cells surrounding the strut; 1=light, noncircumferential lymphohistiocytic infiltrate surrounding strut; 2=localized, moderate to dense cellular aggregate surrounding the strut noncircumferentially; and 3= circumferential dense lymphohistiocytic cell infiltration of the strut. The inflammatory score for each cross-section was calculated by dividing the sum of the individual inflammatory scores by the total number of struts in the examined section.[@B6][@B8] Statistical analyses -------------------- All statistical analyses were performed using Statistical Package for the Social Sciences (SPSS) 11.0 (SPSS Inc., Chicago, IL, USA). Differences among groups of continuous variables were evaluated by one-way analysis of variance. Differences in discrete variables were expressed as counts and percentages and were analyzed with the chi-square (or Fisher\'s exact) test among groups as appropriate. Data are expressed as rate or mean±standard deviation. A two-tailed p of \<0.05 was considered statistically significant. Results ======= The baseline procedural characteristics were similar among the three groups ([Table 1](#T1){ref-type="table"}). 
The 3-month follow-up angiography of the distal stented segment after the index procedure showed that the B+B and C+T groups had significantly lower minimal luminal diameter (MLD), higher restenosis percentage, and late loss compared with those in the T+C group. The reference diameter and edge restenosis percentage were similar among the three groups. Interestingly, three cases of stent fractures occurred but only in the C+T group, whereas no stent fractures were observed in the other two groups. Furthermore, one rabbit with a stent fracture developed extensive necrosis distal to the fracture site ([Fig. 1](#F1){ref-type="fig"}). No angiographic stent thrombosis was observed in the three groups at the time of angiographic follow-up. The three groups did not differ significantly in the proximal stented segment, including MLD, restenosis percentage, and late loss. No stent fracture or thrombosis was observed among the three groups. Similar results were observed in the overlapping segment ([Table 2](#T2){ref-type="table"}). The histopathological results are presented in [Table 3](#T3){ref-type="table"} and [Figs. 2](#F2){ref-type="fig"} and [3](#F3){ref-type="fig"}. Consistent with the 3-month angiographic outcomes, the B+B and C+T groups had significantly lower luminal area, lower internal elastic lamina (IEL) area, and higher area stenosis in the distal stented segment compared with those in the T+C group. The C+T group had significantly lower IEL area, and a trend toward lower luminal area compared with those in the B+B group. In addition, the C+T group had significantly higher injury and inflammatory scores due to the three cases of stent fracture compared with those in the T+C and B+B groups. Despite similar injury scores, the T+C group had significantly higher inflammatory scores in the distal stented segment than those in the B+B group. The C+T and T+C groups had significantly larger luminal area and IEL area in the proximal stented segment compared with those in the B+B group. But, neointimal area and area stenosis did not differ significantly among the three groups. Furthermore, the injury and inflammatory scores were similar among the three groups. The three groups did not differ significantly regarding luminal area, neointima area, IEL area, or area stenosis in the overlapping segment. Moreover, the three groups also had similar injury scores. But, the C+T and T+C groups had significantly higher inflammatory scores than those in the B+B group, consistent with the results in the distal stented segment. The endothelial function assessed with the ACh provocation test showed that the C+T and T+C groups had a significantly higher incidence of significant vasoconstriction as compared with that in the B+B group, suggesting significant endothelial dysfunction in the vessels where DESs were deployed ([Table 4](#T4){ref-type="table"}). Significant iliac artery spasm (luminal narrowing \>90%) was induced with ACh infusion (50 µg/min) ([Fig. 4A](#F4){ref-type="fig"}), and vasodilation was achieved after a nitroglycerine infusion ([Fig. 4B](#F4){ref-type="fig"}). 
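As an aside for readers who want to see the planimetric definitions from the Methods in executable form, the short sketch below restates them as code. It is purely illustrative: the helper names (`area_stenosis`, `mean_strut_score`) and every measurement value are hypothetical and are not taken from the study's data.

```python
# Illustrative sketch (not the authors' analysis code): computes the
# planimetry-derived metrics defined in the Methods from hypothetical
# per-section measurements. All numbers below are made up.

def area_stenosis(lumen_area_mm2, iel_area_mm2):
    """Area stenosis (%) = 100 * (1 - lumen area / IEL area)."""
    return 100.0 * (1.0 - lumen_area_mm2 / iel_area_mm2)

def mean_strut_score(per_strut_grades):
    """Injury or inflammatory score for one cross-section:
    sum of per-strut grades divided by the number of struts examined."""
    return sum(per_strut_grades) / len(per_strut_grades)

# Hypothetical cross-section: 4.1 mm^2 lumen inside a 5.6 mm^2 IEL area,
# with 10 struts graded 0-3 for injury and 0-3 for inflammation.
lumen, iel = 4.1, 5.6
injury_grades = [0, 1, 0, 0, 1, 0, 2, 0, 1, 0]
inflammation_grades = [1, 1, 0, 2, 1, 1, 0, 1, 2, 1]

neointimal_area = iel - lumen  # IEL area minus lumen area
print(f"neointimal area : {neointimal_area:.2f} mm^2")
print(f"area stenosis   : {area_stenosis(lumen, iel):.1f} %")
print(f"injury score    : {mean_strut_score(injury_grades):.2f}")
print(f"inflammatory    : {mean_strut_score(inflammation_grades):.2f}")
```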
Discussion ========== As the overall angiographic outcomes and histopathological results were comparable between the two groups with different first-generation DES implants (Cypher and Taxus) at three different stented segments (proximal, overlapping and distal), the present study did not support the hypothesis that the different implantation sequences of heterogeneous DESs have distinct impacts on the endothelialization and arterial responses in the rabbit iliac artery. Some studies have shown that deploying BMSs for diffuse long lesions is associated with a high restenosis rate.[@B9] DESs have significantly reduced the rate of restenosis by inhibiting neointimal hyperplasia after stenting. However, a diffuse long lesion still remains an independent risk factor for restenosis even in the DES era.[@B10] Therefore, a variety of methods have been tested to treat diffuse long lesions including the heterogeneous overlapping DES.[@B1][@B2][@B11][@B12] Kang et al.[@B1] evaluated the effect of stent overlap with different DESs on neointimal hyperplasia in 47 patients with diffuse long lesions. Their study showed that percutaneous coronary intervention with different overlapping DESs resulted in similar suppression of neointimal hyperplasia and did not increase the side effects of the DES compared with using the same overlapping DES. Similarly, a recent study also suggested that overlapping heterogeneous DESs and overlapping homogeneous DESs had similar long-term safety and efficacy in patients with diffuse long lesions.[@B2] A preclinical study in a porcine model reported that overlapping different DESs did not differ significantly from overlapping the same DES in terms of restenosis rate, neointimal area, and endothelialization scores.[@B5] However, only one group of heterogeneous overlapping stents was used in their study.[@B5] It still remains unclear whether the implantation sequences of heterogeneous overlapping DES influences endothelialization and neointimal hyperplasia. Therefore, our study was the first to test this question and showed that although the two overlapping DES groups had significantly smaller neointimal area and less late loss compared with those in the BMS group, the two overlapping DES groups did not differ significantly, suggesting that the different Cypher and Taxus implantation sequences had similar effects on neointimal hyperplasia. Concerns have been raised regarding the safety of DESs. Some studies have shown that DESs result in a higher incidence of stent thrombosis compared with BMSs even long after the index procedure.[@B13] Increasing evidence suggests that the local inflammatory reaction in the DES segment might be an important mechanism behind stent thrombosis.[@B14][@B15] Nakazawa et al.[@B14] showed that DES implantation in the culprit lesion of acute myocardial infarction is associated with delayed arterial healing and increased local inflammatory reactions, which increase the rate of late stent thrombosis. Cook et al.[@B15] demonstrated that very late stent thrombosis is associated with histopathological signs of inflammation. Some preclinical studies have also revealed that DESs are related to significantly higher inflammatory reactions in the stented segment as compared with those in BMSs.[@B5][@B16][@B17] Finn et al.[@B16] compared the histopathological response at sites of overlapping homogeneous SESs, PESs, and BMSs in the rabbit iliac artery. 
Their study showed that DESs are associated with delayed arterial healing and promote inflammation at overlap sites compared with those of BMSs, suggesting that patients receiving overlapping DESs need more frequent follow-up than patients with overlapping BMSs. In addition, a study by Lim et al.[@B5] also suggested that although DESs inhibit neointimal hyperplasia, significantly higher inflammation reactions and poorer endothelialization occur at the site of overlapped segments as compared with those of BMS. Consistent with these studies,[@B5][@B16][@B17] our study also suggested that implanting overlapping DESs was related to significantly higher inflammatory reactions in the overlapped segment as compared with those in BMS. Specifically, our study also revealed that the local inflammatory reactions did not differ significantly by implantation sequence of the different DESs. Furthermore, some studies have suggested that DES implantation is associated with significant endothelial dysfunction compared to that of BMSs.[@B18] Similarly, our study showed that the two DES groups had a higher incidence of significant iliac artery spasm as compared with that in the BMS group, suggesting that a higher prevalence of endothelial dysfunction occurred in the DES implanted segment. Given the higher inflammatory scores observed in the two DES groups, we suppose that the long-lasting inflammatory reactions at the DES stented segments might be the mechanism behind the significant endothelial dysfunction in the DES groups. Interestingly, three cases of stent fracture occurred at a site distal to the Cypher segment. These fractures might have developed due to active joint motion in the area of the iliac artery. Recently, special attention has been paid to the clinical implications of stent fracture. Chhatriwalla et al.[@B19] reviewed literature regarding DES fracture and showed that most DES fracture reports involved Cypher stents, and that a DES fracture could be associated with stent thrombosis, myocardial infarction, or angina. Due to the potentially harmful effects of stent fracture, we suggest that a more flexible DES should be used in an artery with a greater range of motion such as the mid right coronary artery. Notably, the Cypher and Taxus stents did not differ significantly regarding endothelialization, neointimal growth, or arterial response at the proximal and overlapped segments, suggesting that the effect of inhibiting neointimal hyperplasia and peristructural inflammation were similar between these two types of DESs. More neointimal hyperplasia and a greater arterial response in the distal segment of C+T was mainly due to the stent fractures. Study limitations ----------------- The present study had some limitations. First, the number of experimental animals was relatively small, which might have weakened the conclusions. Second, we used first-generation DESs (Cypher and Taxus), and compared them with BMSs. Furthermore, we could not distinguish the effects of eluting drug from those of polymers. Third, we prepared the groups with the same overlapping DESs. Thus, the conclusions of the current study regarding the impacts of different implantation sequences on endothelialization and arterial response should be interpreted cautiously. Fourth, because the stents were deployed in normal iliac arteries, the results may not be representative of human atherosclerotic disease, particularly human coronary artery disease. 
In conclusion, despite similar arterial injury, greater inflammatory reactions were observed in the DES overlapped segments regardless of the implantation sequence as compared with BMS. Moreover, DES was more associated with impaired endothelial function on the adjacent nonstented segments as compared with that of BMS. Therefore, more frequent clinical follow-up with intensive medical therapy will be needed in patients receiving overlapping DESs as compared with those receiving overlapping BMSs. This study was supported by a research grant from the Korean Society of Cardiology (2006). The authors have no financial conflicts of interest. ![A representative case of stent fracture, associated occlusion and ischemic necrosis. A: limb ischemia caused by stent fracture and subsequent arterial occlusion. Arrow indicates the fractured struts. B: gross appearance of limb necrosis.](kcj-42-397-g001){#F1} ![Overall endothelialization under a microscope. A: bare metal stent group. B: distal Cypher+proximal Taxus group. C: distal Taxus+proximal Cypher group. From left to right are proximal segment, overlapped segment, and distal segment, respectively.](kcj-42-397-g002){#F2} ![High power light microscopic findings (×200) of hematoxylin and eosin staining. A: bare metal stent group. B: distal Cypher+proximal Taxus group. C: distal Taxus+proximal Cypher group. From left to right are proximal segment, overlapped segment, and distal segment, respectively. More large round mononuclear cells (inflammatory white blood cells, black arrows) were observed in the drug eluting stent groups (B and C) compared with those in the bare metal stent group (A).](kcj-42-397-g003){#F3} ![Acetylcholine provocation test. A: significant iliac artery spasm (luminal narrowing \>90%) induced by acetylcholine infusion (50 µg/min). B: vasodilation after nitroglycerine infusion.](kcj-42-397-g004){#F4} ###### Baseline procedural characteristics ![](kcj-42-397-i001) C: sirolimus-eluting stent, Cypher, T: paclitaxel eluting stent, Taxus, B: bare metal stent, C+T: distal Cypher+proximal Taxus, T+C: distal Taxus+proximal Cypher ###### Follow-up angiographic characteristics at 3 months after stenting ![](kcj-42-397-i002) P~1~, P~2~, P~3~ and P~4~ indicate the p of the comparisons among the overall 3 groups, between B+B and C+T, between B+B and T+C, between C+T and T+C, respectively. C: sirolimus-eluting stent, Cypher, T: paclitaxel eluting stent, Taxus, B: bare metal stent, C+T: distal Cypher+proximal Taxus, T+C: distal Taxus+proximal Cypher ###### Comparisons of morphometric measurements at 3 months after stenting ![](kcj-42-397-i003) P~1~, P~2~, P~3~ and P~4~ indicate the p of the comparisons among the overall 3 groups, between B+B and C+T, between B+B and T+C, between C+T and T+C, respectively. C: sirolimus-eluting stent, Cypher, T: paclitaxel eluting stent, Taxus, B: bare metal stent, C+T: distal Cypher+proximal Taxus, T+C: distal Taxus+proximal Cypher ###### Endothelial function assessed with the acetylcholine provocation test at 3 months after stenting ![](kcj-42-397-i004) P~1~, P~2~, P~3~ and P~4~ indicate the p of the comparisons among the overall 3 groups, between B+B and C+T, between B+B and T+C, between C+T and T+C, respectively. C: sirolimus-eluting stent, Cypher, T: paclitaxel eluting stent, Taxus, B: bare metal stent, C+T: distal Cypher+proximal Taxus, T+C: distal Taxus+proximal Cypher
{ "pile_set_name": "NIH ExPorter" }
This application proposes to adapt and pilot test an efficacy-based, manualized intervention to enhance the mental health benefits for children attending publicly funded, inner city, after-school programs. The proposed study is consistent with objectives of the R34 mechanism (1. Development and Pilot Testing of New or Adapted Interventions, and 2. Adaptation and Pilot Testing for Effectiveness) towards a planned program of research to study how mental health consultation and support can strengthen the benefits of after-school programs for inner city children's academic, social, and behavioral functioning. Despite extensive problems facing urban communities during after-school hours, few empirical studies have examined children's use of time, quantity and quality of programs available, participation rates, and the potential mental health benefits of programs. We propose to collaborate with a large, publicly funded provider of after-school programs - Chicago Park District's Park Kids program - toward three research goals. Children (n=318) in grades 1 to 8 who attend one of the participating programs will be enrolled in the study. First, we will collaborate with after-school program staff toward the adaptation and application of a manualized, efficacy-based intervention (Pelham, 1985) to meet the needs, capabilities, and constraints of their program, and we will provide training and ongoing support. Second, we will develop and implement a fidelity measure of staff adherence to the intervention. Third, we will use random-effects regression models to pilot test the impact of the intervention at three (experimental) after-school sites compared to three (comparison) after-school-as-usual sites on four domains of children's outcomes (Hoagwood et al., 1996): symptoms (children's externalizing and internalizing problems), functioning (children's academic, social, and behavioral outcomes), environmental outcomes (staff psychological climate), and satisfaction (parent and staff). This proposal responds to recent national concerns about after-school care, and the need for alternative venues for mental health service delivery in inner cities (Stephenson, 2000; Surgeon General's report).
{ "pile_set_name": "PubMed Abstracts" }
Phenotypic heterogeneity in families with age-related macular degeneration. To evaluate the ophthalmic phenotype in families with three or more individuals who have age-related maculopathy. Eight families were identified at academic centers in Massachusetts and North Carolina. Macular findings were graded based on a modification of the grading system used in the Age-Related Eye Disease Study (AREDS). All families had at least three members with stage 3 (extensive drusen change) or higher maculopathy in at least one eye, and six families had at least two members with advanced maculopathy (stage 4, geographic atrophy of the retinal pigment epithelium, or stage 5, exudative maculopathy). Both stages 4 and 5 maculopathy were observed among different individuals in four families. Individuals with stage 3 maculopathy were members of families with more advanced maculopathy (six families) and were of similar age as more severely affected family members but tended to be older than those with stage 2. The phenotypic appearance of the macula in families with multiple affected individuals is heterogeneous and representative of the spectrum of macular findings typically associated with age-related maculopathy.
{ "pile_set_name": "PubMed Central" }
1. Introduction {#s0005} =============== In 2003, the world was confronted with the emergence of a new and in many cases fatal infectious disease: severe acute respiratory syndrome (SARS). The first case with typical symptoms of SARS emerged in Foshan municipality, Guangdong Province, China, with the onset date of November 16, 2002.[@bb0005] Five index cases were reported in Foshan, Zhongshan, Jiangmen, Guangzhou, and Shenzhen municipalities of Guangdong Province before January 2003. The early-stage outbreak of SARS in Guangdong Province was sporadic and apparently not associated with the index cases.[@bb0010] By January 2003, SARS had developed into a large-scale outbreak in Guangdong Province,[@bb0015] and after February 2003, it had appeared in Hong Kong[@bb0020] and seven other provinces, including Guangxi, Jiangxi, Fujian, Hunan, Zhejiang, Sichuan and Shanxi.[@bb0025] Some cases from Shanxi Province and Hong Kong were imported to Beijing, and transmission from index cases was amplified within several health care facilities by March 2003.[@bb0030] Soon Beijing became the epicenter of SARS and endangered various other provinces or cities in mainland China. At the same time, Singapore, Canada, the United States, and Vietnam were involved in the worldwide spread through imported cases from Hong Kong.[@bb0035] The World Health Organization (WHO) issued the first global alert on March 12, 2003, regarding a cluster of cases of severe atypical pneumonia in hospitals in Hong Kong, Hanoi, and Guangdong.[@bb0040] Three days later, the WHO issued an emergency travel advisory. On March 24, 2003, the WHO described the clinical features of SARS, which were revised on May 1, 2003.[@bb0045] In April 2003, a novel coronavirus, named SARS-associated coronavirus, was identified as the infectious agent responsible for SARS.[@bb0050] ^,^ [@bb0055] Soon thereafter, cases were reported from 32 countries and regions (later corrected to 29). In total, 8,437 probable SARS cases, of whom 813 had died, were reported during the SARS epidemic of 2002--2003. Mainland China was the most seriously affected area, reporting 5,327 probable SARS cases, of whom 343 died, between November 16, 2002 and June 11, 2003. As a result of initial lack of awareness of SARS by health care workers (HCWs), the disease spread unnoticed in the early stages of the epidemic. This spreading of the disease was unduly prolonged by limited information sharing. At that time, a functional infectious diseases surveillance system was not yet available, and the reporting system was outdated, hampering data collection and delaying interventions. The SARS outbreak brought China virtually to a standstill, forcing the country to thoroughly review its infectious disease control policies. Since then, the Chinese government has implemented new and innovative strategies, strengthened the related aspects of the legal system and the disease prevention and control system, and made substantial investments to improve infrastructures, surveillance systems, and emergency preparedness and response capacity, such as the development of a real-time monitoring system that is now serving as a model for worldwide surveillance and response to infectious disease threats.[@bb0060] The world has moved on since the SARS epidemic, but the insights gained in mainland China remain valuable, with comparable infectious disease threats presenting continuously. 2. 
Description of the epidemic {#s0010} ============================== During the 2002--2003 SARS outbreak, mainland China reported 5,327 probable cases, of whom 343 died, amounting to a case fatality ratio (CFR) of 6.4%. The epidemic spanned a large geographical extent (170 counties of 22 provinces) but clustered in two areas: first in Guangdong Province, and about 3 months later in Beijing and its surrounding areas Shanxi, Inner Mongolia, Hebei, and Tianjin. [Fig. 1](#fig0005){ref-type="fig"} shows the temporal distribution of SARS in the six most seriously affected geographic areas of mainland China. Spatiotemporal analyses indicated that the spread of SARS occurred in two different patterns.[@bb0065] In the early stage of the epidemic, especially before strict control measures were taken, SARS spread to new areas randomly through certain index cases. Thereafter, human travel along transportation routes influenced the transmission of SARS, as illustrated by the spread of SARS in middle North China and South China. The epidemic period in middle North China was shorter than in South China, but the geographic spread was wider. SARS not only spread locally, but also diffused quickly and resulted in several outbreaks in areas of middle North China close to Beijing. In contrast, the SARS epidemic in South China was mainly limited to Guangdong Province.Fig. 1The temporal distribution of SARS outbreaks in the six most seriously affected geographic areas of mainland China.Number of new cases per day of onset since the first SARS case on November 16, 2002, in Guangdong Province. SARS: severe acute respiratory syndrome.Fig. 1 Transportation routes accelerated the spread of SARS in mainland China. National highways and inter-provincial freeways appeared to play a critical role, whereas railways seemed to be less important.[@bb0065] For the definition of SARS cases, a distinction was made between probable and suspected cases on the basis of contact history and the number and severity of symptoms. In China, it was only possible late in the epidemic to confirm SARS through serological tests. In a study comparing clinical characteristics of probable and suspected cases, it was found that although symptoms hardly differed, there were clearly different hematological profiles, justifying the distinction between probable and suspected cases and confirming that the suspected cases most likely did not have SARS.[@bb0070] The average duration (3.8 days) and pattern (including time of epidemic and age) of onset of symptoms to hospital admission among SARS patients in mainland China were comparable to those of other affected areas.[@bb0075] The duration of hospital admission to discharge for those who survived (29.7 days) was shorter than elsewhere in the world, possibly because of different hospitalization policies. The duration of hospital admission to death in mainland China was 17.4 days, which was also shorter than in other areas. Over the course of time, hospital epidemics were rapidly brought under control, with increasing efficiency.[@bb0080] This was due to increasing understanding of the disease and more effective preventive measures, such as establishing isolation wards, training and monitoring hospital staff in infection control, screening HCWs, and compliance with the use of personal protection equipment.[@bb0085] 3. 
Case fatality {#s0015}
================

Because of deteriorated health status and more frequent complications, mortality increased significantly among patients aged 50 and above. The Tianjin SARS outbreak happened mainly within hospitals, leading to a high impact of comorbidity, which explains the relatively high CFR there. Guangdong Province showed a considerably lower CFR than Beijing, the reason for which is still unclear ([Fig. 2](#fig0010){ref-type="fig"}). The overall CFR in mainland China was 6.4%, which was much lower than the 17.2% in Hong Kong, and also lower than that reported in other areas and countries, e.g., Vietnam (9.7%). The much lower CFR in mainland China also contrasted with the shorter duration from hospital admission to death compared with other countries or regions. The obvious reasons for the lower CFR are the young ages of the patients and a relatively higher number of community-acquired as opposed to hospital-acquired infections. However, the relatively young age of the cases (median age 33 years) only partly (approximately 25%) explains the low CFR in mainland China compared with other affected areas and countries, where patient median age varied from 37 years in Singapore to 49 years in Canada ([Table 1](#tbl0005){ref-type="table"}).[@bb0090] The relatively lower proportion of hospital-acquired infections in mainland China (reflected in the lower proportion of infections among HCWs compared with other areas \[19% vs. 23%--56%; see [Table 1](#tbl0005){ref-type="table"}\]) is also only partly responsible for the lower CFR, especially since this factor is highly correlated with age.

Fig. 2. Comparison of the case fatality ratios at different ages for SARS patients in Beijing, Guangdong, and Tianjin. SARS: severe acute respiratory syndrome. Intervals indicate 90% binomially distributed confidence intervals. The values in parentheses represent the overall case fatality ratio for each area.

Table 1. The characteristics of the SARS outbreak in countries or regions with high prevalences.

| Country or area | Total cases (persons) | Deaths (CFR) \[persons (%)\] | Median age (years) | Infected HCWs \[persons (%)\] |
|---|---|---|---|---|
| Mainland China | 5,327 | 343 (6.4) | 33 | 1,021 (19.2) |
| Hong Kong, China | 1,755 | 302 (17.2) | 40 | 405 (23.1) |
| Taiwan, China | 674 | 87 (12.9) | 46 | 205 (30.3) |
| Singapore | 238 | 33 (13.9) | 37 | 97 (40.8) |
| Vietnam | 62 | 6 (9.7) | 43 | 35 (56.5) |
| Canada | 251 | 43 (17.1) | 49 | 101 (40.2) |

[^1]

It has been suggested that mainland China had a substantial number of cases that were not really SARS, especially in Guangdong Province, where the epidemic started. A review study comparing seroprevalence rates in different SARS-affected areas did show relatively lower seroprevalence in mainland China, but the differences were small and far from significant. However, even if the lower seroprevalence found in mainland China actually represented overreporting, this factor could only explain a modest 10% of the lower case fatality.[@bb0095] Thus, explaining the lower death rates in mainland China remains a challenge. The lower death rates may be due to better treatment (discussed below) and the use of traditional Chinese medicines.

4.
The effect of interventions {#s0020} ============================== During the SARS epidemic in mainland China, various interventions were implemented to contain the outbreak.[@bb0100] Overall, the measures taken were certainly effective, given the fact that the epidemic was fully controlled within 200 days after the first case emerged. The method of Wallinga and Teunis,[@bb0105] with quantifications from Lipsitch et al,[@bb0110] was used to estimate *R* ~*t*~, the effective or net reproduction number, which helps to determine which interventions were most important; *R* ~*t*~ is defined as the mean number of secondary cases infected by one primary case with symptom onset on day *t*. This number changes during the course of an epidemic, particularly as a result of effective control measures. If *R* ~*t*~ is larger than the threshold value of 1, a sustained chain of transmission will occur, eventually leading to a major epidemic. If this number is maintained below 1, then transmission may still continue, but the number of secondary cases is not sufficient to replace the primary cases, leading to a gradual fade out of the epidemic. [Fig. 3](#fig0015){ref-type="fig"} shows *R* ~*t*~ over time for mainland China, along with the timing of nine important events and public health control measures. The graph is characterized by a fluctuating pattern and wide confidence interval early in the epidemic, which can be explained by the initial low number of cases used in the calculations and the relatively more important impact of so-called "super-spreading events." In Guangdong Province, where the epidemic started, standard control measures, such as isolation and contact tracing (arrow 2), seem to already have helped to largely interrupt transmission in this province. However, during the period from day 80 to day 140, the number of new SARS cases steadily increased, due to the spread to and within other parts of China (i.e., mainly Beijing and its surrounding provinces). The first official report of an outbreak in Guangdong Province (arrow 3) and the WHO global alerts (arrow 4) were by no means reflected in a consistent reduction of *R* ~*t*~. Additionally, the first interventions in Beijing were not effective enough to cause any downward trend in the transmission (arrow 5). It was only around April 11 to April 14, 2003, that the Chinese authorities gained full control of all activities to combat SARS, with national, unambiguous, rational, widely followed guidelines and control measures, under central guidance (arrow 6). Immediately, the reproduction number decreased dramatically and consistently. Within 1 week, *R* ~*t*~ was below 1. Strikingly, this marked decrease after the period from April 11 to April 14, 2003, was consistently present in the patterns of *R* ~*t*~ for all heavily affected areas in mainland China.[@bb0115] The stringent control measures to prevent human contacts (arrow 7), including the decision to cancel the public holiday of May 1 (arrow 8), were all initiated after *R* ~*t*~ was below 1 (i.e., when the epidemic was already dying off), again consistently for all areas in mainland China. Given the information available at the time, the most stringent interventions were rational because it was not clear to which extent the epidemic was under control. However, looking retrospectively, we can now conclude that these measures, which severely affected public life, contributed little to the factual containment of the SARS epidemic; the essential moment had occurred earlier. 
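As a computational aside, the Wallinga-Teunis idea behind Fig. 3 can be sketched in a few lines of code. The sketch below is a simplified illustration only: it is not the analysis code used for Fig. 3, it omits the right-censoring correction and the confidence intervals of the cited methods, and the function name, the epidemic curve, and the serial-interval values in the example are all invented.

```python
# Simplified Wallinga-Teunis sketch (illustration only).

def wallinga_teunis(incidence, si_pmf):
    """Estimate the effective reproduction number R_t from daily case counts.

    incidence : list of daily case counts, indexed by day of symptom onset
    si_pmf    : serial-interval distribution, si_pmf[s] = P(interval = s days),
                with si_pmf[0] = 0
    Returns a list R with R[t] = expected number of secondary cases per
    primary case with onset on day t (None on days with no cases).
    """
    T = len(incidence)
    R = [None] * T
    for t in range(T):
        if incidence[t] == 0:
            continue
        expected = 0.0
        for u in range(t + 1, T):                 # later onset days
            s = u - t
            if s >= len(si_pmf):
                break
            # total infection pressure exerted on day u by all earlier days
            denom = sum(incidence[v] * si_pmf[u - v]
                        for v in range(max(0, u - len(si_pmf) + 1), u))
            if denom > 0:
                # expected day-u cases caused by a single day-t case
                expected += incidence[u] * si_pmf[s] / denom
        R[t] = expected
    return R

# Entirely made-up inputs, chosen only to exercise the function.
cases = [1, 0, 2, 3, 5, 8, 12, 15, 14, 10, 7, 4, 2, 1, 0, 0]
serial_interval = [0.0, 0.02, 0.05, 0.08, 0.12, 0.15, 0.15, 0.13,
                   0.10, 0.08, 0.05, 0.04, 0.02, 0.01]
for day, r in enumerate(wallinga_teunis(cases, serial_interval)):
    if r is not None:
        print(f"day {day:2d}: R_t ~ {r:.2f}")
```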
That being said, the late interventions may still have played a role in speeding up the elimination of SARS. Additionally, cancelling the public holiday---when millions of people travel long distances to visit their family---may have prevented (smaller) outbreaks in yet unaffected locations.Fig. 3Estimated effective reproduction number (*R*~*t*~) during the SARS epidemic in China, 2002--2003.*R*~*t*~: number of secondary infections generated per primary case. Values represent average *R*~*t*~ (central white line) and associated 95% (gray) and 80% (black) confidence intervals, by date of symptom onset. The critical value of *R*~*t*~ = 1, below which sustained transmission is impossible, is marked with a broken horizontal line. Arrows reflect the time of important events and public health control measures: (1) local newspaper report about outbreak of unknown infectious disease in Guangdong Province (January 2, 2003); (2) start of control in Guangdong hospitals (e.g., isolation, contact tracing) (February 1--3, 2003); (3) first official report of outbreak by Guangdong authorities (February 11, 2003); (4) WHO global alerts; first mentioning of SARS (March 12--15, 2003); (5) first protocol of SARS control; start of isolation in Beijing hospitals (April 2, 2003); (6) mandatory reporting of SARS; definition of diagnostic criteria and treatment (April 11--14, 2003); (7) stringent control measures: quarantine in airports and stations; school, university, and public place closures; daily reporting by the national media (April 19--26, 2003); (8) public holiday cancelled; new 1000-bed SARS hospital opened (May 1, 2003); (9) further improvement of various guidelines and protocols (May 4--9, 2003).Fig. 3 We conclude that strong political commitment and a centrally coordinated response were the most important factors in the control of SARS in mainland China. With respect to future outbreaks of emerging infectious diseases, we emphasize that it is of first and foremost importance that effective control is based on clear national and international guidelines and well-built communication and reporting networks, along with firm determination and responsibilities at all levels. 5. Consequences of the epidemic {#s0025} =============================== There were many obvious immediate consequences of the epidemic, such as substantial morbidity and mortality, fear over the possibility of becoming infected, panic in the public domain, stringent quarantine measures, travel restrictions, etc. In addition, two important mid- and long-term consequences of the epidemic were identified. First, there was the economic impact. This was studied for Beijing by Beutels et al.[@bb0120] through associating time series of daily and monthly SARS cases and deaths and volume of public train, airplane and cargo transport, tourism, household consumption patterns and gross domestic product growth in Beijing. The authors concluded that leisure activities, local and international transport, and tourism were severely affected by SARS, particularly in May 2003. Much of this consumption was merely postponed; however, irrecoverable losses to the tourism alone were estimated at about USD 1.4 billion, or 300 times the cost of treatment for SARS cases in Beijing. Second, there were long-term health consequences among the SARS patients who were treated with corticosteroids. Lv et al. 
investigated the relationship between avascular necrosis (AVN) and corticosteroid treatment given to SARS patients through a longitudinal study of 71 SARS patients (mainly HCWs) who had been treated with corticosteroids, with an observation time of 36 months.[@bb0125] Magnetic resonance images and X-rays of the hips, knees, shoulders, ankles and wrists were taken as part of the post-SARS follow-up assessments. Thirty-nine percent developed AVN of the hips within 3--4 months after starting treatment. Two more cases of hip necrosis were seen after 1 year and another 11 cases of AVN were diagnosed after 3 years, 1 with hip necrosis and 10 with necrosis in other joints. In total, a staggering 58% of the cohort had developed AVN after 3 years of observation. The sole factor explaining AVN in the hip was the total dose of corticosteroids received. The use of corticosteroids has been debated, with conflicting opinions about steroids being the key component in the treatment of SARS.[@bb0130] It has remained uncertain whether the aggressive use of corticosteroids during the SARS epidemic has tipped the balance. Has the use of high-dose corticosteroids saved more lives and been responsible for the lower case fatality in mainland China? Do immediate benefits, in terms of saving lives, outweigh the adverse effects, including AVN? 6. Lessons learned and actions taken in China regarding epidemic preparedness {#s0030} ============================================================================= The SARS epidemic provided valuable experience and lessons with regard to controlling outbreaks of newly emerging infectious diseases, which are surely due to come. Human infection with avian influenza viruses, the novel A influenza (H1N1), and imported infectious diseases such as the Zika virus disease and yellow fever disease are already knocking at our doors! Important lessons learned in China included the need for more honesty and transparency, improvement of surveillance, better laboratory facilities, and optimized case management.[@bb0135] Also, public health measures to control infectious diseases, reporting systems, and central guidance and coordination came under scrutiny. Another lesson was the need to inform the public about, and involve them in, control measures in an adequate and timely manner. There was a strong realization that the best defense against any threat of newly emerging infectious diseases is a robust public health system in its science, capacity, and practice, and through collaboration with clinical and veterinary medicine, academia, industry, and other public and private partners. An important resolution of the Chinese government was to improve its disease surveillance system to rapidly identify newly emerging infectious diseases and to minimize their spread in China and to the rest of the world. The traditional surveillance network using reporting cards filled out by hand and sent by mail or fax has been replaced with an automatic information system called the China Information System for Disease Control and Prevention, which is the world\'s largest internet-based disease reporting system.[@bb0060] The government has also increased their investment in enhancing the capabilities of detecting, diagnosing, preventing, and controlling newly emerging infectious diseases at various levels. 
New and innovative strategies have been established for response to health emergencies, such as the establishment of the parallel laboratory confirmation mechanism for newly emerging infectious pathogens to reduce the risk of errors, rapid disclosure of information to the WHO and to the public, international information exchange and collaboration, and the provision of more information on public health and on infectious diseases to the public. Furthermore, the Chinese government has strengthened both the related aspects of the legal system and the disease prevention and control system. For example, the government issued the *Law of the People\'s Republic of China on Prevention and Treatment of Infectious Diseases (Revised Draft)* in 2004 and *Regulations on Preparedness for and Response to Emergent Public Health Hazards* in 2003, created the Chinese Centre for Disease Control and Prevention, and improved surveillance systems of infectious diseases and preparedness and response capacity for emerging public health events.[@bb0140], [@bb0145], [@bb0150] Education and training projects, such as training courses for public health officials and HCWs, have been initiated, and new training has been added to the education programs of universities. Funds for research projects on the development of vaccines, drugs, and diagnostic techniques have been granted to develop new approaches in the prevention, diagnosis, and treatment of emerging infectious diseases. 7. Conclusion {#s0035} ============= The epidemic of a new infectious disease, SARS, took firstly China and subsequently many other areas in the world completely by surprise. Fortunately, the consequences of this epidemic, in terms of people afflicted and economic loss, were not entirely catastrophic. Also, it turned out that SARS could be controlled relatively easily through standard interventions. However, the epidemic revealed some important weaknesses in the Chinese public health system, which have been dealt with efficiently and successfully by the Chinese government. At the moment, China is better prepared than ever for epidemics, which may be much worse than SARS in terms of speed of spread and fatality rate. In fact, SARS can be viewed as a wake-up call. "(Excerpted from *Infectious Disease in China: The Best Practical Cases*. Beijing: People\'s Medical Publishing House; 2018.)" Competing interests {#s0040} =================== The authors declare that there is no conflict of interest. [^1]: SARS: severe acute respiratory syndrome; CFR: case fatality ratio; HCWs: health care workers.
{ "pile_set_name": "OpenWebText2" }
In today’s world of agile software development and fast release cycles, developers increasingly rely on third-party libraries and components to get the job done. Since many of those libraries come from long-running, open-source projects, developers often assume they’re getting well-written, bug-free code. They’re wrong. The major patching efforts triggered by the Heartbleed, Shellshock and POODLE flaws this year serve as examples of the effect of critical vulnerabilities in third-party code. The flaws affected software that runs on servers, desktop computers, mobile devices and hardware appliances, affecting millions of consumers and businesses. However, these highly publicized vulnerabilities were not isolated incidents. Similar flaws have been found in libraries such as OpenSSL, LibTIFF, libpng, OpenJPEG, FFmpeg, Libav and countless others, and these have made their way into thousands of products over the years. Among the reasons why these bugs end up in finished products is a belief by developers that the third-party code they choose to integrate is secure because it has already been used by many others. The shallow bugs myth “There is a myth that open source is secure because everyone can review it; more eyes reviewing it making all bugs shallow,” said Jake Kouns, CISO of Risk Based Security, a firm that specializes in tracking vulnerabilities. “The reality is that while everyone could look at the code, they don’t and accountability for quality is deferred. Developers and companies that consume third party libraries are not allocating their own resources to security test ‘someone else’s code.’ Right or wrong, everyone seems to think that someone else will find the vulnerabilities and what is published is secure.” The reality is that many open source projects, even the ones producing code that’s critical to the Internet infrastructure, are often poorly funded, understaffed and have nowhere close to enough resources to pay for professional code audits or the manpower to engage in massive rewrites of old code. OpenSSL is a prominent example of such a case, but far from the only one. After the critical Heartbleed bug was announced in April, it was revealed that the OpenSSL project had only one full-time developer and that the project was being primarily funded through contract-based work that other team members did in their spare time for companies in need of SSL/TLS expertise. The developers of OpenBSD criticized OpenSSL for maintaining old code for platforms that few people care about and decided to fork the project to create a cleaner version of the library dubbed LibreSSL. The flaws in open source libraries are often the result of one or more of these reasons: old code or low code maturity, insufficient auditing or fuzzing—a process of finding vulnerabilities by automatically feeding unexpected input to applications—and too few maintainers, said Carsten Eiram, the chief research officer of Risk Based Security. “We see that many vulnerabilities being found in these libraries are by researchers simply running some of the latest fuzzers against them, so it’s often something the maintainers or companies using said libraries could do themselves. 
Software vendors are quick to implement libraries into their products, but rarely audit or even fuzz these first or help maintaining them.” It’s all marketing The Heartbleed, Shellshock and POODLE vulnerabilities raised a lot of interest among software developers and system administrators, partly because of the attention the flaws received in the media. Some vendors are still identifying products affected by these flaws and are releasing fixes for them, months after they were first announced. Eiram believes that the primary reason why these vulnerabilities stood out was not their impact, but the way in which they were advertised by their finders—with fancy names and logos. The sad truth is that flaws of similar importance are regularly found in widespread libraries, but manage to fly under the radar and are rarely patched by the software vendors who use them. “A lot of vulnerabilities—18—have been addressed in OpenSSL since Heartbleed, and we haven’t remotely seen the same attention to issuing fixes quickly—or at all—from the vendors,” Eiram said. “We see fixes of varying severity to libraries on almost a daily basis, but rarely see vendors bundling these libraries in their products issue fixes, even though we know these libraries are heavily used.” One example of that is a vulnerability discovered in 2006 by Tavis Ormandy, a security researcher who now works at Google. The flaw was among several that affected LibTIFF and were fixed in a new release at the time. It was tracked as CVE-2006-3459 in the Common Vulnerabilities and Exposures database. “In 2010, a vulnerability was fixed in Adobe Reader, which turned out to be one of the vulnerabilities covered by CVE-2006-3459,” Eiram said. “For four years, a vulnerable and outdated version of LibTIFF had been bundled with Adobe Reader, and it was even proven to be exploitable.” Adobe Systems has since become one of the software vendors taking the threat of flaws in third-party components seriously, Eiram said. “They’ve made major improvements to their process of tracking and addressing vulnerabilities in the third-party libraries and components used in their products.” Another one of those vendors is Google. Aside from just keeping track of vulnerabilities in the third-party code it uses, the company’s researchers are actively searching for flaws in that code. “We’ve seen two of Google’s prolific researchers, Gynvael Coldwind and Mateusz Jurczyk, finding more than 1,000 issues in FFmpeg and Libav, which is used in Chrome, and they’re currently looking at other libraries like FreeType too,” Eiram said. “OpenJPEG also seems to receive some scrutiny by Google at the moment, which is used in PDFium that in turn is used in Chrome. Obviously, Google also put a lot of effort into securing WebKit, when they started using that as the engine for Chrome.” Making such contributions helps to improve the code maturity of those libraries for everyone, and is something that all software vendors should do. If vendors would at least use fuzzing to test the libraries that they use and help fix the issues they find in the process, it would make a significant difference, Eiram said. Much more so than the bug bounty programs for critical Internet software, like those run by Hacker One or Google, which so far have had little effect at drawing researchers toward finding vulnerabilities in libraries, he said. 
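To make the fuzzing idea concrete, here is the bare skeleton of a mutation-based fuzzer. It is a deliberately tiny sketch, nothing like the coverage-guided tools the researchers above rely on, and the `mutate`, `fuzz`, and `toy_parse` names, the toy file format, and its bug are all invented for illustration.

```python
import random

def mutate(data: bytes, n_flips: int = 8) -> bytes:
    """Randomly overwrite a few bytes of a seed input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(parse, seed_inputs, iterations=10_000):
    """Feed mutated inputs to `parse` and record inputs that break it.
    `parse` stands in for whatever library entry point is under test."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(random.choice(seed_inputs))
        try:
            parse(sample)
        except ValueError:
            pass                          # graceful rejection is expected
        except Exception as exc:          # in a C library this would be a crash
            crashes.append((sample, repr(exc)))
    return crashes

# Hypothetical target: a toy "image header" parser with a sloppy length check.
def toy_parse(data: bytes):
    if len(data) < 4 or data[:2] != b"IM":
        raise ValueError("not an IM file")
    declared_len = data[2] | (data[3] << 8)
    payload = data[4:4 + declared_len]
    # A real bug would corrupt memory here; the toy version just asserts.
    assert len(payload) == declared_len, "declared length exceeds actual data"

if __name__ == "__main__":
    seeds = [b"IM" + bytes([16, 0]) + bytes(16)]
    found = fuzz(toy_parse, seeds, iterations=5_000)
    print(f"{len(found)} crashing inputs found")
```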
An accurate bill of materials Unfortunately we’re a long way from that happening, as many software developers fail to even keep track of which third-party components they use and where, not to mention vulnerabilities that are later found and patched in those components. Veracode, a security firm that runs a cloud-based vulnerability scanning service, found that third-party and open source components introduce an average of 24 known vulnerabilities into each application and that in the case of some enterprises, 40 percent of the applications they use have one or more critical vulnerabilities introduced by components. “Most companies learned lessons from trying to patch Heartbleed and Shellshock,” said Chris Eng, vice president of research at Veracode. “One of the challenges was that it involved not only patching servers, but patching vulnerable hardware and software products. Answering the question ‘Which of my products rely on a vulnerable version of OpenSSL?’ was difficult for many organizations due to lack of visibility into the composition of their software products.” “Having an accurate ‘bill of materials,’ so to speak, for all software projects is critical to the patching effort,” Eng said. “This has always been true, but Heartbleed and Shellshock both amplified the issue thanks to the ubiquity of both OpenSSL and Bash.” For system administrators the situation is even more complicated, because they rely on software vendors for fixes and their response to flaws in third-party components varies greatly from fairly quickly to none. “We do sense that the software industry has recognized the threat and is trying to deal with it—at least many major companies—but properly mapping libraries used in the code and tracking and triaging vulnerabilities in these requires significant resources,” Eiram said. Be prepared for more If there’s one thing that Heartbleed and Shellshock changed it is that researchers now have a model for advertising the flaws they find so they reach a wider audience. Even though many in the security industry don’t agree with this approach, because it tends to hype the risks, it does seem to put pressure on vendors to act. It also seems to attract the attention of even more researchers, leading to increased scrutiny of some libraries, even if for short periods of time. “Researchers looking to find the most impactful vulnerabilities will naturally be drawn to software that is widespread and baked into a wide range of products,” Eng said. “I think this will continue, because many researchers are motivated—at least in part—by the media attention that comes with discovering high-profile bugs.” From a business perspective, being forced to unexpectedly reallocate resources that are planned for something else in order to deal with flaws like Heartbleed, which involves identifying affected products and issuing patches, is likely a burden for software vendors. If faced with highly publicized flaws on a regular basis, vendors might naturally be pushed to adopt more proactive strategies that involve tracking, patching and even finding vulnerabilities in third-party components themselves as a matter of course. “It does seem like more and more software companies are discussing the challenge of dealing with third party libraries and components, and have recognized how these are an Achilles heel,” Eiram said. 
“The vast majority of them still need to implement a proper policy and approach for dealing with this challenge.” Companies should map which of their products contain third-party libraries, should define policies for who may add such components to products and how, should consider the security state of a library before using it by consulting the various vulnerability databases and should create an in-house vulnerability tracking solution or purchase a subscription to a commercial one with strong library coverage. “Longer term we may see a shift, but developers may still view Heartbleed and Shellshock as isolated events rather than a trend,” Eng said. “On the other hand, automated solutions are now making it easier for enterprises to identify components with known vulnerabilities in their application portfolios, so I think we’ll see the proactive approach becoming a best practice over time.” On the enterprise side, “Organizations need to recognize that bugs of the magnitude we’ve seen in 2014 will continue into 2015 and put the processes in place to quickly identify where they are vulnerable, have a sound procedure to prioritize the issues and efficient processes to patch systems when bugs are discovered to reduce risk of exploitation,” said Gavin Millard, technical director for EMEA region at Tenable Network Security. “When the next ‘Bug du Jour’ hits, the rapid response to deal with it needs to be a well oiled, tested and efficient machine.”
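As a closing illustration of the "bill of materials" advice above, the sketch below cross-checks a hypothetical component inventory against a hand-maintained advisory list. The `Component` class, product names, library versions, and `EXAMPLE-*` advisory identifiers are all made up; a real deployment would need proper version parsing (strings such as "1.0.1g" do not map cleanly to tuples) and an actual advisory feed rather than a hard-coded list.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    product: str
    name: str
    version: tuple  # e.g. (1, 0, 1); real code needs full version parsing

# Bill of materials: which products bundle which library versions (hypothetical).
BOM = [
    Component("PDF Viewer 9.1", "libtiff", (3, 8, 2)),
    Component("PDF Viewer 9.1", "openssl", (1, 0, 1)),
    Component("Router FW 2.4",  "openssl", (0, 9, 8)),
]

# Advisory feed: (library, first fixed version, advisory id) -- hypothetical.
ADVISORIES = [
    ("openssl", (1, 0, 2), "EXAMPLE-2014-0001"),
    ("libtiff", (4, 0, 0), "EXAMPLE-2006-0002"),
]

def affected(bom, advisories):
    """Yield (component, advisory id) pairs where the bundled version is
    older than the first fixed version listed in the advisory."""
    for comp in bom:
        for lib, fixed_in, advisory_id in advisories:
            if comp.name == lib and comp.version < fixed_in:
                yield comp, advisory_id

for comp, advisory in affected(BOM, ADVISORIES):
    version = ".".join(map(str, comp.version))
    print(f"{comp.product}: {comp.name} {version} affected by {advisory}")
```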
{ "pile_set_name": "Github" }
// Licensed to the Apache Software Foundation (ASF) under one // or more contributor license agreements. See the NOTICE file // distributed with this work for additional information // regarding copyright ownership. The ASF licenses this file // to you under the Apache License, Version 2.0 (the // "License"); you may not use this file except in compliance // with the License. You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, // software distributed under the License is distributed on an // "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY // KIND, either express or implied. See the License for the // specific language governing permissions and limitations // under the License. #include <stdio.h> #include "rpc/thrift-util.h" #include "testutil/gtest-util.h" #include "util/container-util.h" #include "util/network-util.h" #include "gen-cpp/RuntimeProfile_types.h" #include "common/names.h" namespace impala { TEST(ThriftUtil, SimpleSerializeDeserialize) { // Loop over compact and binary protocols for (int i = 0; i < 2; ++i) { bool compact = (i == 0); ThriftSerializer serializer(compact); TCounter counter; counter.__set_name("Test"); counter.__set_unit(TUnit::UNIT); counter.__set_value(123); vector<uint8_t> msg; EXPECT_OK(serializer.SerializeToVector(&counter, &msg)); uint8_t* buffer1 = NULL; uint8_t* buffer2 = NULL; uint32_t len1 = 0; uint32_t len2 = 0; EXPECT_OK(serializer.SerializeToBuffer(&counter, &len1, &buffer1)); EXPECT_EQ(len1, msg.size()); EXPECT_TRUE(memcmp(buffer1, msg.data(), len1) == 0); // Serialize again and ensure the memory buffer is the same and being reused. EXPECT_OK(serializer.SerializeToBuffer(&counter, &len2, &buffer2)); EXPECT_EQ(len1, len2); EXPECT_TRUE(buffer1 == buffer2); TCounter deserialized_counter; EXPECT_OK(DeserializeThriftMsg(buffer1, &len1, compact, &deserialized_counter)); EXPECT_EQ(len1, len2); EXPECT_EQ(counter, deserialized_counter); // Serialize to string std::string str; EXPECT_OK(serializer.SerializeToString(&counter, &str)); EXPECT_EQ(len2, str.length()); // Verifies that deserialization of 'str' works. TCounter deserialized_counter_2; EXPECT_OK(DeserializeThriftMsg(reinterpret_cast<const uint8_t*>(str.data()), &len2, compact, &deserialized_counter_2)); EXPECT_EQ(counter, deserialized_counter_2); } } TEST(ThriftUtil, TNetworkAddressComparator) { EXPECT_TRUE(TNetworkAddressComparator(MakeNetworkAddress("aaaa", 500), MakeNetworkAddress("zzzz", 500))); EXPECT_FALSE(TNetworkAddressComparator(MakeNetworkAddress("zzzz", 500), MakeNetworkAddress("aaaa", 500))); EXPECT_TRUE(TNetworkAddressComparator(MakeNetworkAddress("aaaa", 500), MakeNetworkAddress("aaaa", 501))); EXPECT_FALSE(TNetworkAddressComparator(MakeNetworkAddress("aaaa", 501), MakeNetworkAddress("aaaa", 500))); EXPECT_FALSE(TNetworkAddressComparator(MakeNetworkAddress("aaaa", 500), MakeNetworkAddress("aaaa", 500))); } }
{ "pile_set_name": "PubMed Abstracts" }
Evaluation of five computer programs in the diagnosis of second-degree AV block. Five electrocardiogram (ECG) analyzing systems were tested with a microcomputer-based ECG signal generator to assess the accuracy of the systems in interpreting Wenckebach periodicity. Although normal sinus rhythm with normal PR intervals and sinus rhythms with first-degree atrioventricular (AV) block were diagnosed by all five systems, second-degree AV block with classic Wenckebach periodicity was routinely misdiagnosed by four of the five systems. No system recognized the atypical Wenckebach periods in a total of 200 trials, misinterpreting the phenomenon as atrial fibrillation, supraventricular rhythm, sinoatrial block, and other rhythm disturbances. In advanced AV block and a variety of ventricular arrhythmias, none of the five systems diagnosed second-degree AV block with Wenckebach periods. Marked unsatisfactory performance with regard to the diagnosis of Wenckebach periodicity indicates the urgent need for accelerated and comprehensive testing of ECG diagnostic equipment. The present generating device was seen as an effective troubleshooter in optimizing the diagnostic competency of computerized ECG systems.
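For readers unfamiliar with the rhythm being tested, a toy sketch (not the study's actual generator, whose implementation is not described in the abstract) of classic Wenckebach periodicity — the PR interval lengthening beat by beat until a QRS is dropped — might look like the following; the interval values and the constant increment are simplifications chosen only for illustration:

// Toy illustration of classic Wenckebach (Mobitz I) second-degree AV block:
// the PR interval lengthens with each conducted beat until one P wave is not
// followed by a QRS complex, and then the cycle repeats.
interface Beat {
  p: boolean;          // P wave present
  qrs: boolean;        // QRS conducted?
  prMs: number | null; // PR interval in ms, null when the beat is dropped
}

function wenckebachGroup(conductedBeats: number, basePrMs = 160, incrementMs = 40): Beat[] {
  const beats: Beat[] = [];
  for (let i = 0; i < conductedBeats; i++) {
    beats.push({ p: true, qrs: true, prMs: basePrMs + i * incrementMs });
  }
  // The final P wave of the group is blocked (no QRS) — e.g. 4:3 conduction.
  beats.push({ p: true, qrs: false, prMs: null });
  return beats;
}

// Three consecutive 4:3 Wenckebach groups, as might be fed to an ECG interpreter.
const rhythm: Beat[] = Array.from({ length: 3 }, () => wenckebachGroup(3)).flat();
console.log(rhythm);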
{ "pile_set_name": "Wikipedia (en)" }
Alexandr Bárta Alexandr Bárta (9 April 1894 – 1945) was a Slovak fencer. He competed in the team sabre competition at the 1924 Summer Olympics. References Category:1894 births Category:1945 deaths Category:People from Levoča Category:Czechoslovak male fencers Category:Slovak male fencers Category:Olympic fencers of Czechoslovakia Category:Fencers at the 1924 Summer Olympics
{ "pile_set_name": "Pile-CC" }
British Zeds
East Anglian Crew

I watched 5 minutes, got bored and turned over. Why do television people always think their audience is full of mindless chumps? When will they realise most of us want more for our licence fee/sky subscription.
{ "pile_set_name": "PubMed Abstracts" }
The unwelcome trio: HIV plus cutaneous and visceral leishmaniasis. Leishmania/Human Immunodeficiency Virus (HIV) coinfection has emerged as an extremely serious and increasingly frequent health problem in recent decades. Considering the insidious and atypical clinical picture in the presence of immunosuppressive conditions, the increasing number of people travelling in endemic zones, and the parasite's ability to survive within both human and vector hosts, clinicians and dermatologists, as the first line of care, should be aware of these kinds of "pathologic alliances" in order to avoid delayed diagnosis and treatment. In this setting, the occurrence of cutaneous lesions can, paradoxically, aid the physician in recognizing the condition and approaching the correct staging and management of the two (or three) diseases. Treatment of these unwelcome synergies is a challenge: apart from the recommended anti-retroviral protocols, different anti-leishmanial drugs have been widely used in accordance with the standard guidelines for visceral leishmaniasis (VL), but no consistently successful treatment regimen has yet been established.
{ "pile_set_name": "USPTO Backgrounds" }
The semiconductor integrated circuit (IC) industry has experienced rapid growth. Technological advances in IC materials and design have produced generations of ICs where each generation has smaller and more complex circuits than the previous generation. In the course of IC evolution, functional density (i.e., the number of interconnected devices per chip area) has generally increased while feature size (i.e., the smallest component that can be created using a fabrication process) has decreased. Such advances have increased the complexity of processing and manufacturing ICs, and realizing them has required corresponding developments in IC processing and manufacturing. As the density of semiconductor devices increases and the size of circuit elements becomes smaller, the resistance-capacitance (RC) delay time increasingly dominates circuit performance. To reduce the RC delay, there is a desire to use low-k dielectrics. The low-k dielectrics are useful as intermetal dielectrics (IMDs) and/or as interlayer dielectrics (ILDs). However, the low-k dielectrics may present problems during processing. It is desirable to have improved manufacturing methods for forming reliable low-k dielectrics.
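To make the scaling concrete, a back-of-the-envelope sketch (illustrative only; the dielectric-constant values are nominal textbook figures, not taken from this disclosure) shows how swapping the insulator changes the RC delay of an otherwise identical interconnect:

// Illustrative arithmetic only: interconnect delay scales with R*C, and for a
// fixed line geometry C scales linearly with the dielectric constant k of the
// insulator, so replacing SiO2 (k ~ 3.9) with a low-k film (k ~ 2.7) reduces
// the RC delay proportionally.
function rcDelayRatio(kNew: number, kRef: number): number {
  // C is proportional to k and R is unchanged, so the delay ratio is kNew / kRef.
  return kNew / kRef;
}

const ratio = rcDelayRatio(2.7, 3.9);
console.log(`Relative RC delay with the low-k film: ${(ratio * 100).toFixed(0)}% of the SiO2 case`);
// Roughly 69%, i.e. about a 30% delay reduction for the same metal geometry.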
{ "pile_set_name": "StackExchange" }
Q: postsharp in assembly

I have a PostSharp attribute for handling exceptions in an entire dll (that dll is provided by another team) and managing database calls. So the idea is to treat the exceptions with PostSharp. This is the attribute:

[Serializable]
public class MethodConnectionTracking : OnExceptionAspect
{
    bool canceled = false;

    public override void OnException(MethodExecutionArgs args)
    {
        Exception ex = args.Exception;
        if (ex != null)
        {
            // --- do things
        }
    }
}

To make that work and intercept all methods, in the AssemblyInfo.cs for that project, called SPData, I have:

[assembly: MethodConnectionTracking(AttributeTargetElements = MulticastTargets.Method)]

and that works great. But I want to specify that line in another project. The main project references SPData, so in the main project's AssemblyInfo.cs file I write:

[assembly: MethodConnectionTracking(AttributeTargetAssemblies = "SPData", AttributeTargetElements = MulticastTargets.Method)]

But it does not work. Is it possible to do what I want? Am I missing some parameter? Thanks in advance.

A: You don't need AttributeTargetElements = MulticastTargets.Method, as it is already provided when using the OnExceptionAspect base class.

You don't need to check if ex != null, because it will never be null: OnException won't be invoked unless there is an exception. See http://programmersunlimited.wordpress.com/2011/08/01/postsharp-why-are-my-arguments-null/

Are you sure you have the correct assembly name? Are you using the namespace? You need to use the actual assembly name (without the .dll). Try a wildcard "SPData*" and see if that helps.

Have you stepped through the code or looked at the compiled assembly using ILSpy? Unless you are providing the wrong name, it should work. Is the reference to a project or a compiled assembly? Is the assembly signed or obfuscated?
{ "pile_set_name": "OpenWebText2" }
Two days into 2017 and the first details from the new Spider-Man game for PS4 may have inadvertently been leaked on IMDB. The internet movie database isn’t always the correct source for facts but from a history of details on the first Guardians flick and a few others, it does have some validations under its belt. Earlier today an update to the site, which was taken down shortly after being posted, dropped the synopsis of some of the game’s possible story elements. As always, until developer Insomniac Games (Ratchet & Clank, Resistance) or Sony give the official word, you should take the following info with a grain of salt. But man oh man if this is true then Spider-Man for PS4 could be one hell of a ride. Of course, Stan Lee isn’t writing the game but he and Ditko will have some sort of deserved legacy credit. Sure, Spider-Man fighting the Green Goblin isn’t something the wide audience of film and video games hasn’t experienced before. What’s interesting, in the above, is the possible path Insomniac could be carving out with the character. A married Peter Parker with a daughter. Could we see Annie Parker come to video games? The Renew Your Vows series by Dan Slott gave us a really fun character and one I wouldn’t mind seeing make a jump to PS4. Peter Parker, voiced by Yuri Lowenthal (Ben 10), could be married again which is something fans have wanted since Peter and MJ were ripped apart by the events of One More Day. Thank you for that spike strip on our highway of enjoyment, Joe Quesada. While this is just a sliver of information, if true it reaffirms Insomniac is telling their own unique Spider-Man story and Marvel are really starting to learn the value of letting talented studios take chances with their IP in order to tell stories which lend themselves to good gaming experiences. IMDB is also listing this as a 2017 game. Judging by how much we’ve actually seen from this exclusive title, would put Spider-Man as a late Fall release. What do you think? Is this a Spider-Man you’re excited to see? Would it be fun to play part of the game as Annie Parker? What if he’s married to Gwen instead of Mary Jane in this universe?
{ "pile_set_name": "StackExchange" }
Q: How does requiring a module affect performance in Node.js?

I've been reading some documentation on Node.js recently to try and get me started on some personal work. During the process, a thought came up again and again, but I couldn't quite get a satisfactory answer. In every piece of code I've seen so far, there were quite a few require() calls. I understand their primary function is to include modules. But it seems like people use them to include other Javascript files in their main file, too. As far as I know, there is no proper way to do that in Javascript (as stated here: How do I include a JavaScript file in another JavaScript file?). Here is my concern: separating files according to their functions is obviously good for readability. However, isn't require() drastically slowing down the program, since it has to fetch information by opening the Javascript file and so on? If so, is the difference significant enough to worry about in a web application? Wouldn't it be more prudent to write a program that automates the assembly process in order to obtain a single file (or at least a more monolithic one) before shipping a version?

A: On the server, any performance impact here will be inconsequential relative to everything else the server is doing (like interpreting & executing all this javascript!). Of course, if you are having perf issues and are suspicious of this claim it's never a bad idea to measure.

On the other hand, if we're talking about javascript that will be sent over a network to a browser then it's a different story... there are a ton of tools for bundling & minifying code so browsers can download/execute it faster. That said, HTTP 2 changes this a bit - with this new protocol, many files can often be downloaded in parallel faster than big bundles.
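A quick way to act on the "measure it" advice above is to time the calls directly. The sketch below is a rough CommonJS-style illustration (run with ts-node or compiled to CommonJS; the module names are just examples), and it also shows that Node's module cache makes repeated require() calls essentially free:

// Time a few require() calls. Node caches modules, so only the first require()
// of a given file pays the disk-read and compile cost; later calls hit the cache.
const { performance } = require("perf_hooks");

function timeRequire(moduleName: string): number {
  const start = performance.now();
  require(moduleName); // first call loads and compiles; later calls hit the cache
  return performance.now() - start;
}

for (const name of ["path", "http"]) {
  console.log(`${name}: first require took ${timeRequire(name).toFixed(3)} ms`);
  console.log(`${name}: cached require took ${timeRequire(name).toFixed(3)} ms`);
}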
{ "pile_set_name": "StackExchange" }
Q: PhantomJS, execute script and return value from JS to C#

I need to return a value from a JavaScript script to C# using PhantomJS, this time from AngularJS. Can PhantomJS return a JavaScript value from an evaluated script? I can do it with Awesomium with webViewBrowser.ExecuteJavascriptWithResult, but is there a way with PhantomJS?

Example (not working):

var saldos = driver.ExecutePhantomJS("var strValue=JSON.stringify($('#accounts').scope().accounts);");

A: Just add "return" to your script or function. Also, remove ExecutePhantomJS and use ExecuteScript:

var accounts = driver.ExecuteScript("return JSON.stringify($('#accounts').scope().accounts);");
{ "pile_set_name": "StackExchange" }
Q: unable to find an inherited method for function ‘dbReadTable’ for signature ‘"character", "missing"’

I am trying to read the table data in R with MySQL package. Getting error:

unable to find an inherited method for function ‘dbReadTable’ for signature ‘"character", "missing"’

code:

library(RMySQL)
library(dbConnect)

mydb = dbConnect(MySQL(), user='user', password='password', dbname='blank_copy', host='ip address', port = port_number)

dbListTables(mydb)
dbListFields(mydb, 'SELECT * FROM table_name')

dbReadTable("table_name")

A: the DBIConnection is missing

dbReadTable(mydb, "table_name")
{ "pile_set_name": "Github" }
# If we don't need RTTI or EH, there's no reason to export anything
# from the hello plugin.
if( NOT LLVM_REQUIRES_RTTI )
  if( NOT LLVM_REQUIRES_EH )
    set(LLVM_EXPORTED_SYMBOL_FILE ${CMAKE_CURRENT_SOURCE_DIR}/Hello.exports)
  endif()
endif()

if(WIN32 OR CYGWIN)
  set(LLVM_LINK_COMPONENTS Core Support)
endif()

add_llvm_loadable_module( LLVMHello
  Hello.cpp

  DEPENDS
  intrinsics_gen
  PLUGIN_TOOL
  opt
  )
{ "pile_set_name": "StackExchange" }
Q: How to change the language of an app when clicking a button in Android

When I click a button, I want to change the language in my Android app. E.g.: the current language in the app is English, and when I click a button, the language will change to Vietnamese. This is the first time I have done this and I do not know how. Can you help me?

A: The same question is covered in the links below; they might help you:

Change the language to Vietnamese instead of france

Clicking a button to switch the language
{ "pile_set_name": "Wikipedia (en)" }
Snap matchlock The snap matchlock is a type of matchlock mechanism used to ignite early firearms. It was used in Europe from about 1475 to 1640, and in Japan from 1543 till about 1880. Description The serpentine (a curved lever with a clamp on the end) was held in firing position by a weak spring, and released by pressing a button, pulling a trigger, or even pulling a short string passing into the mechanism. The slow match held in the serpentine swung into a flash pan containing priming powder. The flash from the flash pan travelled through the touch hole igniting the main propellant charge of the gun. As the match was often extinguished after its relatively violent collision with the flash pan, this type fell out of favour with soldiers, but was often used in fine target weapons. In Japan the first documented introduction of the matchlock, which became known as the tanegashima, was through the Portuguese in 1543. The tanegashima seems to have been based on examples of snap matchlocks purchased from Portuguese traders, and originally produced in the Portuguese armories present in Asia including Portuguese Malacca and Goa in Portuguese India, which were cities captured by the Portuguese in 1511 and 1510, respectively. The Malay arquebus, istinggar, utilized this type of mechanism. See also Black powder Gonne Matchlock Muzzleloading References External links Handgonnes and Matchlocks Matchlokkes Category:Early firearms Category:Firearm actions
{ "pile_set_name": "PubMed Abstracts" }
Decreased bone carbonate content in response to metabolic, but not respiratory, acidosis. In vitro cultured neonatal mouse calvariae release calcium and buffer the medium proton concentration in response to a decrease in the medium pH caused by a reduction in bicarbonate concentration ([HCO3-]), a model of metabolic acidosis, but not to an equivalent decrease in pH caused by an increase in the partial pressure of carbon dioxide (PCO2), a model of respiratory acidosis. We have postulated that the medium is in equilibrium with the carbonated apatite in bone. To determine whether bone carbonate is depleted during models of acidosis, we cultured calvariae in control medium (pH approximately 7.4, PCO2 approximately 43, [HCO3-] approximately 26) or in medium in which the pH was equivalently reduced by either a decrease in [HCO3-] (metabolic acidosis, pH approximately 7.1, [HCO3-] approximately 13) or an increase in PCO2 (respiratory acidosis, pH approximately 7.1, PCO2 approximately 86) and determined net calcium flux (JCa) and bone carbonate content. We found that compared with control, after 3, 24, and 48 h there was a decrease in bone carbonate content during metabolic but not during respiratory acidosis. Compared with control, at 3 h JCa increased with both respiratory and metabolic acidosis; however, at 24 and 48 h JCa increased only with metabolic acidosis. JCa was correlated inversely with percent bone carbonate content in control and metabolic acidosis at all time periods studied (r = -0.809, n = 23, P < 0.001). Thus a model of metabolic acidosis appears to increase JCa from bone, perhaps due to the low [HCO3-] inducing bone carbonate dissolution.(ABSTRACT TRUNCATED AT 250 WORDS)
{ "pile_set_name": "OpenWebText2" }
Malaysia Airlines 9M-MRD taking off from Schiphol, believed to be Flight MH17, photographed by Tom Warners of @SchipholSpot Donetsk People's Republic fighters fill their tank with fuel at a gas station in Snizhne, 100 kilometers east of Donetsk, eastern Ukraine Thursday, July 17, 2014. (AP Photo/Dmitry Lovetsky) In this Nov. 15, 2012 photo, a Malaysia Airlines Boeing 777-200 takes off from Los Angeles International Airport in Los Angeles. The plane, with the tail number 9M-MRD, is the same aircraft that was heading from Amsterdam to Kuala Lumpur on Thursday, July 17, 2014 when it was shot down near the Ukraine Russia border, according to Anton Gerashenko, an adviser to Ukraine's interior minister. (AP Photo/JoePriesAviation.net) Smoke rises up at the crash site of the passenger plane, near the village of Grabovo, Ukraine (AP Photo/Dmitry Lovetsky) A Malaysia Airlines passenger aircraft carrying 295 people on board has crashed after apparently being shot down in eastern Ukraine near the Russian border. Flight MH17 from Amsterdam to Kuala Lumpur was in transit over the war-torn region when it disappeared from radar screens. Malaysian Airlines said nine British citizens were on board the plane, ITV News reports. The Foreign Office was unable to confirm this. Graphic photographs have emerged which appear to show bodies and debris strewn across the crash site. One image appears to show a piece of the wreckage with somebody climbing over the white, blue and red striped fuselage. The commercial Boeing airliner, a 777-200, departed from Amsterdam at 12.14am local time, 15 minutes alter than scheduled, according to flight records. It was expected to arrive in Malaysia at 6:10am local time, but did not enter Russian airspace when it was expected to, a Russian aviation source told Reuters. An adviser to Ukraine's Interior Minister said the plane was shot down. Anton Gerashenko said on his Facebook page that the plane was flying at an altitude of 33,000 feet over eastern Ukraine when it was hit by a missile fired from a Buk launcher. And Ukrainian president Petro Poroshenko appeared to blame separatists for the missile strike, saying the "armed forces of Ukraine did not take action against any airborne targets". He added: "We are sure that those who are guilty in this tragedy will be held responsible." But a spokesman for the rebels said the plane must have been shot down by Ukrainian government troops. The incident has sparked a fresh international crisis which will put more pressure on Russia to rein in the rebels. The country has been torn apart by internal strife since the overthrow of the Moscow-backed regime of Viktor Yanukovych, with Russian backed separatists already accused by the authorities in Kiev of shooting down military jets with missiles supplied by Russia. Russian president Vladimir Putin and his US counterpart Barack Obama have tonight spoken by phone. Meanwhile, Malaysian prime minister Najib Razak said he was launching an immediate investigation into the crash. Malaysia Airlines posted on its Twitter feed that it "has lost contact of MH17 from Amsterdam. The last known position was over Ukrainian airspace. More details to follow". A statement later read: "Malaysia Airlines confirms it received notification from Ukrainian ATC that it had lost contact with flight MH17 at 1415 (GMT) at 30km from Tamak waypoint, approximately 50km from the Russia-Ukraine border. 
"Flight MH17 operated on a Boeing 777 departed Amsterdam at 12.15pm (Amsterdam local time) and was estimated to arrive at Kuala Lumpur International Airport at 6.10 am (Malaysia local time) the next day. "The flight was carrying 280 passengers and 15 crew onboard. More details to follow." A similar launcher to the Buk, a Soviet era surface-to-air missile system capable of taking down a high altitude aircraft, was reported by journalists near the eastern Ukrainian town of Snizhne earlier today. The Interfax report said the plane came down 50 km (20 miles) short of entering Russian airspace. It "began to drop, afterwards it was found burning on the ground on Ukrainian territory," the unnamed source said. A separate unnamed source in the Ukrainian security apparatus, quoted by Interfax, said the plane disappeared from radar at a height of 10,000 metres after which it came down near the town of Shakhtyorsk. The region has seen severe fighting between Ukrainian forces and pro-Russian separatist rebels in recent days. A Ukrainian fighter jet was shot down last night by a Russian plane, Ukrainian authorities said, adding to what Kiev says is mounting evidence that Moscow is directly supporting separatist insurgents in eastern Ukraine. Expand Close The flight path of Malaysia Airlines MH17 according to Flightaware.com / Facebook Twitter Email Whatsapp The flight path of Malaysia Airlines MH17 according to Flightaware.com Security Council spokesman Andrei Lysenko said the pilot of the Sukhoi-25 jet hit by the air-to-air missile was forced to bail out. Pro-Russia rebels also claimed responsibility for strikes on two Ukrainian Sukhoi-25 jets, but Moscow denies Western charges that is supporting the separatists or sowing unrest in Ukraine. Malaysia's defence minister Hishamuddin Hussein said there had been no confirmation that MH17 was shot down and added he had instructed the country's military to check and get confirmation. The area is one of the main routes from Europe to Asia for air traffic. This evening, the Department for Transport in London said flights were now being diverted. A DfT spokesman said: "Flights already airborne are being routed around the area by air traffic control in the region. Pilots around the world have been advised to plan routes that avoid the area by Eurocontrol, the European organisation for the safety of air navigation." Airliner tracking sites seemed to indicated that traffic was now steering clear of Ukrainian airspace. The incident brings tragedy to Malaysia Airlines for the second time this year. In March, one of its jets disappeared with 227 passengers and 12 crew on board in one of the greatest aviation mysteries of all time. It would not be the first time a civilian airliner has been mistakenly shot down. In 1988, an Iran Air flight from Tehran to Dubai was shot down by the US warship USS Vincennes in the Persian Gulf. All 290 on board, including 66 children and 16 crew, died. In 1983, Korean Air Lines Flight 007 from New York to Seoul via Anchorage was shot down by a Soviet military jet near Sakhalin Island in the East Sea. All 269 passengers and crew were killed. The Soviets initially denied knowledge of the incident but later admitted responsibility, claiming that the aircraft was on a spy mission. A spokesman for the Foreign Office said: "We are aware of the reports and are urgently working to establish what has happened." Boeing, who manufactured the aircraft, said: "We are aware of reports on MH17. We're gathering more information. 
"Our thoughts and prayers are with those on board MH17, as well as their families and loved ones. We stand ready to provide assistance." UK Prime Minister David Cameron tweeted: "I'm shocked and saddened by the Malaysian air disaster. Officials from across Whitehall are meeting to establish the facts." Aircraft shot down in Ukraine: A timeline 4 October 2001 - Siberia Airlines Flight 1812 crashes over the Black Sea en route from Tel Aviv, Israel to Novosibirsk, Russia. Ukraine later admits that the disaster was probably caused by an errant missile fired by its armed forces. Evhen Marchuk, the chairman of Ukraine's security council, conceded that the plane was probably been brought down by "an accidental hit from an S-200 rocket fired during exercises". 29 May - Rebels in eastern Ukraine shoot down a government military helicopter amid heavy fighting around Slovyansk, killing at least 12 soldiers including a general, officials say. Acting Ukrainian president Oleksandr Turchynov tells parliament in Kiev that rebels used a portable air defence missile to bring down the helicopter. He says 14 died, including General Serhiy Kulchytskiy. 24 June - Ukrainian government says a military helicopter has been shot down over a rebel-controlled area in Slovyansk. 14 July - A Ukrainian military transport plane is shot down along the eastern border with Russia but all eight people aboard managed to bail out safely, the defence ministry says. Separatist rebels claim responsibility for downing the Antonov-26, but Ukrainian officials swiftly rule that out and blamed Russia instead. 16 July - A Ukrainian air force fighter jet is shot down by a missile fired from a Russian plane, according to Ukraine's Security Council. The pilot of the Sukhoi-25 jet is forced to bail out after his plane was hit, says spokesman Andrei Lysenko. Meanwhile, pro-Russian rebels claim responsibility for strikes on two Sukhoi-25 jets. 17 July - Almost 300 people die after a Malaysia Airlines plane is apparently shot down over Ukraine. Disaster strikes again for Malaysia Airlines Almost incredibly, Malaysia Airlines finds itself at the centre of a world aviation disaster for the second time this year. It is only a few months since the Far East carrier was embroiled in what has become one of the great plane mysteries - the disappearance of flight MH370. Now another Malaysia Airlines Boeing 777 lies wrecked in the Ukraine - seemingly the victim of a missile attack. It was on March 8 that flight 370, carrying 239 passengers and crew veered far off course for unknown reasons during a flight from Kuala Lumpur to Beijing. Initially, it was thought that the plane would soon be located. But weeks, and finally months, have passed - with the search areas changed and different theories being expanded - and still there has been no sign of the aircraft. The search continues as does the heartache for the families of those lost on the flight. Now today, comes news that of all the carriers to be involved in what appears to be an act of sabotage it should be beleaguered Malaysian Airlines. Once more their managers have had to tell the world that they have lost contact with one of their aircraft. Now there are some who doubt whether the airline can recover from this. BUK missile designed to take down aircraft The BUK missile system is a set of medium range surface-to-air missile systems which were first developed in the Soviet Union and continue to be produced by Russia. 
A Buk division of Ukraine's armed forces was reportedly relocated to Donetsk region on Wednesday. Designed to take out cruise missiles, aircrafts, helicopters and short range ballistic missiles, they can reach altitudes of up to 25km (15.5 miles or 82,000ft), according to the manufacturer's website. Developed by Moscow firm Almaz-Antey, they are thought to have been used during the Russian war with Georgia in the territory of South Ossetia in 2008. The manufacturer's website, which also lists military equipment including radar and naval missile systems, displays two models of Buk launchers - the Buk-M1-2 and the Buk-M2E. A description of the Buk-M1-2, which has an altitude target range of up to 25km (15.5 miles or 82,000ft), reads: "The "Buk-M1-2" ADMC is designed to provide air defence for troops and facilities against attacks from current and future high-speed manoeuvring tactical and strategic aircraft, attack helicopters including hovering helicopters, and tactical ballistic, cruise, and air-to-air missiles, in conditions of heavy radio jamming and counter fire; as well as to destroy water and ground surface targets." Meanwhile, the Buk-M2E "is designed to destroy tactical and strategic aircraft, helicopters, cruise missiles, and other aerodynamic aircraft at any point in their range of operation, along with tactical ballistic and aircraft missiles, and smart air bombs in conditions of heavy enemy counter fire and radio jamming; as well as to attack water and ground surface contrast targets." Belfast Telegraph
{ "pile_set_name": "OpenWebText2" }
RENTON, WASH. – Seattle Sounders FC has re-signed defenders Jhon Kennedy Hurtado and Patrick Ianni, the club announced today. Per Major League Soccer and club policy, terms were not disclosed. "We are very excited about bringing back Patrick and Jhon again next season," said Head Coach Sigi Schmid. "Both players have been with us since the start and it was important for us to keep continuity and consistency along a successful back line." Hurtado, 28, joined Sounders FC in 2009 from Colombia’s Deportivo Cali. A MLS All-Star and finalist for MLS Defender of the Year in his first season, Hurtado has become one of the league’s top center backs and has helped establish the Seattle defense as one of the best in MLS. Despite missing much of the 2010 season due to injury, Hurtado has started 106 of 110 appearances across all competitions with one goal and three assists. His 81 appearances in regular-season play ranks sixth all-time in club history. Ianni, 27, has played the past four seasons in Seattle after he was acquired in a trade with the Houston Dynamo in 2009. Ianni made 21 appearances across all competitions in 2012, and his 74 league appearances for Sounders FC ranks eighth in club history. The UCLA product and No. 8 overall pick in the 2006 SuperDraft scored the 2012 MLS Goal of the Year on a side-winding volley against Sporting Kansas City. Overall, Ianni has started 83 of 98 appearances across all competitions for Seattle.
{ "pile_set_name": "Pile-CC" }
Tuesday, 27 March 2012

Challenge #6 was the most fun I've ever had doing an Easter sort of craft. "Create something using Peeps, that looks like a Peep or is somehow Peep themed." (courtesy of Just Crafty Enough) After scouring the city for Peeps. Did you know that Corn Syrup is banned in Canada? Peeps are like contraband up here. Finally found a huge display of them at the Shoppers. Phew! I thought I would have to craft up the Peeps using felt or something. Here they are rocking one of our favourite songs "For Whom the Peeps Toll" on stage. OMG! We had so much fun making Peeptallica. I laughed at the silliness (and awesomeness) of it all so much. Notice that even the Peeps box was recycled into this challenge. It made an awesome base guitar. Even Logan got in on the fun. He made one guitarist Peep. Terry jokes that this is Dave Mustaine (sans red hair). Awww poor Dave got kicked out of Peeptallica and had to form his own band Peepadeth. Har har har. Logan made a cool sparkly green guitar for his creation. And Lane? Lane decided to eat the Peeps I allotted for her to have for this challenge. And in natural Laney fashion she ate a bite out of each. We call this creation our mosh pit. It's what happens to brave Peeps who decide to go nuts at the Peeptallica concert. Oh! The carnage! Wasn't this Peeps challenge awesome? I loved making this craft! To see all of the other Peeps challenge entrants, check out the Flickr group.
{ "pile_set_name": "StackExchange" }
Q: Is it possible to reference other Storyboards within Storyboards?

I am working on an iOS app and I want to split it into a couple of Storyboards, the problem is that I can't seem to figure out a way to setup references between Storyboards as an entry point. For example I want my Main.storyboard to look something like this:

--> Authentication Storyboard
--> Tabbed Storyboard
--> ...

I am new to iOS development and I would expect this to be simple, but can't seem to find a way to do it.

A: Yes you can use multiple Storyboards in a project BUT No, you cant reference them directly from inside another storyboard.

Referencing a Storyboard to use on launch

Your project target can use any of the storyboards contained within it. The reference for the storyboard entry point can be found in the target settings.

Select the target
Select the General Tab
In the dropdown for main interface select which storyboard to use initially.

Referencing a Storyboard in code

So in this example I wish to push a viewcontroller onto a navigation Controller. The viewcontroller I want to reference may be in a different storyboard or the current one I'm using. The code doesn't care...

if let storyboard = UIStoryboard(name: "A2ndStoryboard", bundle: nil) {
    if let vc = storyboard.instantiateViewControllerWithIdentifier("SomeViewController" as String) {
        navigationController?.pushViewController(vc, animated: true)
    }
}

When you want to reference ViewControllers like this you need to ensure that they are identified in the Storyboard correctly:
{ "pile_set_name": "StackExchange" }
Q: Python ternary execution order

In Python, if I use a ternary operator:

x = a if <condition> else b

Is a executed even if the condition is false? Or does the condition evaluate first, and then execution goes to either a or b depending on the result?

A: The condition is evaluated first; if it is False, a is not evaluated: documentation.

A: Each branch only gets evaluated if the condition selects it. For example:

condition = True
print(2 if condition else 1/0) # Output is 2
print((1/0, 2)[condition]) # ZeroDivisionError is raised

Even though 1/0 would raise an error, it is never evaluated, because the condition was True at evaluation time. The same happens the other way around:

condition = False
print(1/0 if condition else 2) # Output is 2
{ "pile_set_name": "PubMed Abstracts" }
Germinated Buckwheat extract decreases blood pressure and nitrotyrosine immunoreactivity in aortic endothelial cells in spontaneously hypertensive rats. The present study quantified rutin in raw buckwheat extract (RBE) and germinated buckwheat extract (GBE) by high performance liquid chromatography (HPLC), and examined changes in body weight, systolic blood pressure (SBP) and nitrotyrosine (a marker for peroxynitrite formation) immunoreactivity in aortic endothelial cells in spontaneously hypertensive rats (SHRs) and normotensive Wistar-Kyoto (WKY) rats after treatment with RBE and GBE for 5 weeks. In the HPLC study, RBE and GBE contained a mean content of rutin of 1.52 +/- 0.21 and 2.92 +/- 0.88 mg/g, respectively. In the 600 mg/kg GBE-treated group, SBP was lower than that in the 600 mg/kg RBE-treated group. The treatment with RBE and/or GBE significantly reduced oxidative damage in aortic endothelial cells by lowering nitrotyrosine immunoreactivity. These results suggest that GBE has an antihypertensive effect and may protect arterial endothelial cells from oxidative stress.
{ "pile_set_name": "Wikipedia (en)" }
Cecilia Mary Ady Cecilia Mary Ady (28 November 1881 – 27 March 1958) was an English writer, academic and historian. She worked at the University of Oxford, and – like her mother – became known as an authority on the Italian Renaissance. She came to wider public attention after she was dismissed by a former friend from her college, and her colleagues supported her reinstatement. Life Ady was born in Edgcote in Northamptonshire in 1881, the only child of Rev. William Henry Ady, a clergyman, and his wife, Julia Cartwright Ady, a biographer and an amateur expert on the Italian Renaissance. Her mother's interest in Italy had been fired by her uncle, William Cornwallis Cartwright. Her mother took responsibility for Cecilia's education, and Cecilia obtained a place at Oxford, where she studied at St Hugh's Hall, and obtained a first in the honours school of modern history in 1903 (although women were not at that date entitled to be awarded degrees). She became a protégée of the historian Edward Armstrong. He was commissioned to oversee a book series titled "The States of Italy": his plans were not fully realised, but Ady's book, History of Milan under the Sforza was one of two to be published. In 1909 she joined St Hugh's as a tutor, where she developed a close relationship with the college's principal, Eleanor Jourdain. Jourdain eventually turned against Ady, allegedly jealous of her popularity. Ady was sacked from her position in November 1923, at Jourdain's insistence, for disloyalty. Jourdain felt that Ady had leaked information to the staff about her plans for introducing a vice-principal to the college. Ady protested, and a mass resignation followed, which included six of the college's council. The matter became of wider public interest, and Lord Curzon (the chancellor of the university) was asked to investigate. Ady's name was eventually cleared, and Jourdain died just before she was to be asked to resign. The inquiry resulted in improvements to the employment conditions of female tutors. Ady then became a tutor with the Society of Home Students. In 1929 her old college re-employed her as a research fellow. In 1938 she was awarded the degree of Doctor of Letters (DLitt), after she published a monograph titled The Bentivoglio of Bologna: a Study in Despotism (1937). Ady died in Oxford in 1958. Following her death, her colleagues and former research students compiled a memorial volume of donated essays, titled Italian Renaissance Studies (1960). Works include History of Milan under the Sforza (1907) Pius II (Aeneas Silvius Piccolomini): the Humanist Pope (1913) A History of Modern Italy, 1871–1915 (translation) of Benedetto Croce's work Italian Studies (1934) (Editor) The Bentivoglio of Bologna: a Study in Despotism (1937) Lorenzo Dei Medici and Renaissance Italy (1955) References Category:1881 births Category:1958 deaths Category:People from South Northamptonshire District Category:British historians Category:20th-century British writers Category:Alumni of St Hugh's College, Oxford Category:First women admitted to degrees at Oxford Category:British women historians
{ "pile_set_name": "StackExchange" }
Q: Paypal REST Api for Paying another paypal account

I'm looking into the new PayPal REST API. I want the ability to pay another PayPal account, i.e. transfer money from my account to their account. All the documentation I have seen so far is about charging users. Is paying someone with the REST API possible? Something similar to the function of the Mass Pay API or Adaptive Payments API.

A: At this moment, paying another user is not possible via the REST APIs, so Mass Pay/Adaptive Payments would be the current existing solution. It is likely that this ability will be part of REST in a future release.
{ "pile_set_name": "PubMed Abstracts" }
Acid-base status in the avian patient using a portable point-of-care analyzer. The i-STAT PCA can be used as a blood analyzer in critical avian patients, although single values must be interpreted carefully. The study of the acid-base status in companion birds is still in its infancy. Further research is needed to establish normal reference values in arterial blood gases, compare them with venous blood gas, and to determine if the formulas that deviate from small animal medicine are or are not applicable.
{ "pile_set_name": "Pile-CC" }
Abstract L-Arabinose is the second most abundant pentose beside D-xylose and is found in the plant polysaccharides, hemicellulose and pectin. The need to find renewable carbon and energy sources has accelerated research to investigate the potential of L-arabinose for the development and production of biofuels and other bioproducts. Fungi produce a number of extracellular arabinanases, including α-L-arabinofuranosidases and endo-arabinanases, to specifically release L-arabinose from the plant polymers. Following uptake of L-arabinose, its intracellular catabolism follows a four-step alternating reduction and oxidation path, which is concluded by a phosphorylation, resulting in D-xylulose 5-phosphate, an intermediate of the pentose phosphate pathway. The genes and encoding enzymes L-arabinose reductase, L-arabinitol dehydrogenase, L-xylulose reductase, xylitol dehydrogenase, and xylulokinase of this pathway were mainly characterized in the two biotechnological important fungi Aspergillus niger and Trichoderma reesei. Analysis of the components of the L-arabinose pathway revealed a number of specific adaptations in the enzymatic and regulatory machinery towards the utilization of L-arabinose. Further genetic and biochemical analysis provided evidence that L-arabinose and the interconnected D-xylose pathway are also involved in the oxidoreductive degradation of the hexose D-galactose. Degradation of different l-arabinose polymers in pectin. Different types of arabinan and arabinogalactan are present as side chains of rhamnogalacturonan I in pectin. Endo-processive arabinanases (ABN) attack randomly the backbone of arabinan, while exo-acting α-l-arabinofuranosidases (ABF) release the terminal α-l-arabinofuranoside residues l-arabinose and d-xylose catabolism in fungi. The first step of the l-arabinose pathway is catalyzed by two different aldose reductases. In A. niger, the l-arabinose-specific reductase (LarA) is responsible for the main activity, while in T. reesei the d-xylose reductase XYL1 is responsible for the first step in l-arabinose assimilation. The last two shared steps of the two pentose catabolic pathways result in d-xylulose 5-phosphate, which is further assimilated via the pentose phosphate pathway A second oxidoreductive pathway of d-galactose utilization. A second pathway for d-galactose uilization was recently identified in T. reesei and A. nidulans. It starts with the reduction of d-galactose to galactitol by the d-xylose reductase XYL1 (aldose reductase) in T. reesei. A hypothetical draft of the pathway for the further degradation of galactitol is summarized
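As a schematic restatement of the catabolic route named above (the enzyme order is taken from the abstract; the intermediate names follow the standard fungal pathway and are added here only for illustration):

// Schematic of the fungal L-arabinose route: alternating reduction/oxidation
// steps concluded by a phosphorylation, yielding D-xylulose 5-phosphate, an
// intermediate of the pentose phosphate pathway.
interface Step {
  enzyme: string;
  reaction: "reduction" | "oxidation" | "phosphorylation";
  substrate: string;
  product: string;
}

const lArabinosePathway: Step[] = [
  { enzyme: "L-arabinose reductase",      reaction: "reduction",       substrate: "L-arabinose",  product: "L-arabinitol" },
  { enzyme: "L-arabinitol dehydrogenase", reaction: "oxidation",       substrate: "L-arabinitol", product: "L-xylulose" },
  { enzyme: "L-xylulose reductase",       reaction: "reduction",       substrate: "L-xylulose",   product: "xylitol" },
  { enzyme: "xylitol dehydrogenase",      reaction: "oxidation",       substrate: "xylitol",      product: "D-xylulose" },
  { enzyme: "xylulokinase",               reaction: "phosphorylation", substrate: "D-xylulose",   product: "D-xylulose 5-phosphate" },
];

console.log(lArabinosePathway.map(s => `${s.substrate} -(${s.enzyme})-> ${s.product}`).join("\n"));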
{ "pile_set_name": "StackExchange" }
Q: How to unlock the file after AWS S3 Helper uploading file?

I am using the official PHP SDK with the official service provider for Laravel to upload an image to Amazon S3. The image is temporarily stored on my server and should be deleted after uploading. The following is the code I use to do my upload and delete.

$temp_path = "/screenshot_temp/testing.png";

$client = AWS::createClient('s3');
$result = $client->putObject(array(
    'Bucket' => self::$bucketName,
    'Key' => 'screenshot/testing.png',
    'SourceFile' => $temp_path,
    'ACL' => 'public-read'
));

chown($temp_path, 777);
unlink($temp_path);

The upload is successful. I can see my image with the link returned, and I can see it on the Amazon console. The problem is that the delete fails, with the following error message:

ErrorException: unlink(... path of my file ...): Permission denied

I am sure my file permission setting is correct, and I am able to delete my file with the section of code for uploading to S3 commented out. So the problem should be that the file is locked during the upload. Is there a way I can unlock and delete my file?

A: Yes, the stream upload locks the file till it finishes. Try either of these two:

$client = AWS::createClient('s3');
$fileContent = file_get_contents($temp_path);
$result = $client->putObject(array(
    'Bucket' => self::$bucketName,
    'Key' => 'screenshot/testing.png',
    'Body' => $fileContent,
    'ACL' => 'public-read'
));

unlink($temp_path);

or

$client = AWS::createClient('s3');
$fileContent = file_get_contents($temp_path);
$result = $client->putObject(array(
    'Bucket' => self::$bucketName,
    'Key' => 'screenshot/testing.png',
    'Body' => $fileContent,
    'ACL' => 'public-read'
));

gc_collect_cycles();
unlink($temp_path);

A: When you're using the SourceFile option with putObject, S3Client opens a file but doesn't close it after the operation. In most cases you can just unset $client and/or $result to close opened files. But unfortunately not in this case. Use the Body option instead of SourceFile.

// temp file
$file = fopen($temp_path, "r"); // use resource, not a path

$result = $client->putObject(array(
    'Bucket' => self::$bucketName,
    'Key' => 'screenshot/testing.png',
    'Body' => $file,
    'ACL' => 'public-read'
));

fclose($file);
unlink($temp_path);
{ "pile_set_name": "USPTO Backgrounds" }
1. Field of the Invention

The present invention relates to a holder for holding wash-targets, such as electronic components and fine components, when washing them and, more specifically, to a holder used when washing by soaking the wash-target in a washing solvent. It also relates to a method for washing the wash-targets using the holder.

2. Description of the Related Art

Electronic components formed from semiconductor chips, or small-size precision components, require a high degree of cleanness depending on the purpose of their use. Therefore, the components are washed after being manufactured and before being shipped as products or mounted into a device. In particular, a high degree of cleanness is required in a magnetic head slider, which must fly low over a magnetic disk once it is mounted in a magnetic disk device. Further, the magnetic head slider is a component which requires highly precise positioning, so it must be washed thoroughly. Japanese Patent Unexamined Publication No. 6-103511 discloses an example of a method for washing such components. In the method disclosed in Japanese Patent Unexamined Publication No. 6-103511, a magnetic head slider is soaked in a wash tank while held in a holder, and ultrasonic washing is performed in that state. The holder is formed by: through-holes formed in a lattice for enclosing the magnetic head slider; a member comprising a net for covering the bottom end openings; and a member comprising a net for covering the upper end openings of the through-holes. The magnetic head slider is thereby enclosed in a through-hole, surrounded by the wall face of the through-hole and a pair of nets covering both end openings, so that it cannot escape outside. Further, the size and thickness of the opening of the through-hole is set to 1.1-2.5 times the longitudinal and lateral sizes and the thickness of the magnetic head slider, so that the magnetic head slider can move freely within the through-hole. Thereby, the ultrasonic washing performed on the magnetic head slider can be effectively achieved in the wash tank. However, inconveniences arise when components are washed using the holder of Japanese Patent Unexamined Publication No. 6-103511, as described below. First, the wash-target is surrounded by the wall face forming the through-hole, so the solvent of the washing liquid cannot be easily removed from the wash-target after washing. In particular, the solvent may remain in the four corners (corner parts) of the through-hole, and stains can be generated on the wash-target. Further, both sides of the wash-target are held by a net, so ESD (electrostatic discharge) damage is likely to occur. Moreover, the wash-target cannot be held stably by the net, so the wash-target may move during washing due to ultrasonic oscillation and collide against the wall face of the through-hole. Due to the impact, cracks or breaks may be generated, damaging the wash-target.
{ "pile_set_name": "NIH ExPorter" }
The unified Clinical and Administrative Core will be responsible for administration and coordination of the projects and cores, and organization of the clinical material collection. This core will continue to play a crucial role in the proposed Program Project. The administrative component of this core is responsible for general management of the grant, including processing of purchase orders, maintaining records and protocols, and correspondence pertaining to the projects. The administrative Core will also schedule laboratory meetings, seminar series, and appropriate process for outside review, including the annual meeting of the Advisory Board. The Clinical Component of the core will continue to solicit patient material and coordinate the transport of the clinical specimens to this Program Project. Specifically, the clinical component of the core will identify sources and solicit cell cultures from physician-scientists in contact with patients with EB, and coordinate the transport to the Tissue Culture Core. The Core will also coordinate and facilitate the shipment of blood samples or DNA specimens directly to Project 1 for genetic linkage analysis or to Project 2 for direct examination of mutated DNA sequences. Since the network of collaborators represents an international community of clinician-investigators who live in different continents, careful orchestrating of the clinical activities is a high priority for this core.
{ "pile_set_name": "PubMed Abstracts" }
Foscarnet 5 versus 7 days a week treatment for severe gastrointestinal CMV disease in HIV-infected patients. In a randomized open trial foscarnet 90 mg/kg b.i.d. 5 days for 3 weeks was compared to 90 mg/kg b.i.d. daily in severe gastrointestinal cytomegalovirus disease in HIV-infected patients. Thirty-eight patients were randomized, 36 were evaluable (all male, age 24-54 years, median 40 years; CD4/microliter 0-150, median 10). Treatment efficacy was evaluated based on a score consisting of symptoms, endoscopic and histologic examination. In the 5-day treatment group 10/16 (62%) patients responded to treatment, in the 7-day treatment group 13/20 (65%), with symptoms resolving in most patients after 1 week. Side effects and adverse events were seen in 13 patients in the 5-day treatment group and in 15 patients in the 7-day treatment group. Laboratory abnormalities were common in both groups, in one patient reversible renal insufficiency developed. Efficacy and safety of treatment 5 days a week was comparable to the standard regimen.
{ "pile_set_name": "PubMed Abstracts" }
Are ILC2s Jekyll and Hyde in airway inflammation? Asthma is a complex heterogeneous disease of the airways characterized by lung inflammation, airway hyperreactivity (AHR), mucus overproduction, and remodeling of the airways. Group 2 innate lymphoid cells (ILC2s) play a crucial role in the initiation and propagation of type 2 inflammatory programs in allergic asthma models, independent of adaptive immunity. In response to allergen, helminths or viral infection, damaged airway epithelial cells secrete IL-33, IL-25, and thymic stromal lymphopoietin (TSLP), which activate ILC2s to produce type 2 cytokines such as IL-5, IL-13, and IL-9. Furthermore, ILC2s coordinate a network of cellular responses and interact with numerous cell types to propagate the inflammatory response and repair lung damage. ILC2s display functional plasticity in distinct asthma phenotypes, enabling them to respond to very different immune microenvironments. Thus, in the context of non-allergic asthma, triggered by exposure to environmental factors, ILC2s transdifferentiate to ILC1-like cells and activate type 1 inflammatory programs in the lung. In this review, we summarize accumulating evidence on the heterogeneity, plasticity, regulatory mechanisms, and pleiotropic roles of ILC2s in allergic inflammation as well as mechanisms for their suppression in the airways.
{ "pile_set_name": "StackExchange" }
Q: XML Serialized from xsd.exe generated code using substitution groups is invalid (invalid xsi:type error) I've generated some C# classes from some 3GPP XSDs (multiple XSD files/namespaces) and it works great for serialization apart from one problem with concrete instances of an abstract type used in a substitution group. Firstly, the relevant parts of the schema: (genericNrm.xsd) <element name="ManagedElement"> <complexType> <complexContent> <extension base="xn:NrmClass"> <sequence> ... <choice minOccurs="0" maxOccurs="unbounded"> <element ref="xn:IRPAgent"/> <element ref="xn:ManagedElementOptionallyContainedNrmClass"/> <element ref="xn:VsDataContainer"/> </choice> </sequence> </extension> </complexContent> </complexType> </element> <element name="ManagedElementOptionallyContainedNrmClass" type="xn:NrmClass" abstract="true" /> (eutran.xsd) <element name="ENBFunction" substitutionGroup="xn:ManagedElementOptionallyContainedNrmClass"> <complexType> <complexContent> <extension base="xn:NrmClass"> <sequence> <element name="attributes" minOccurs="0"> <complexType> <all> <element name="userLabel" type="string" minOccurs="0"/> ... etc The XML serialised from a simple ManagedElement with contained ENBFunction is: <ManagedElement xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" id="1234" xmlns="http://www.3gpp.org/ftp/specs/archive/32_series/32.625#genericNrm"> <ManagedElementOptionallyContainedNrmClass xmlns:q1="http://www.3gpp.org/ftp/specs/archive/32_series/32.765#eutranNrm" xsi:type="q1:ENBFunction" id="1234"> <q1:attributes> <q1:userLabel>label</q1:userLabel> </q1:attributes> </ManagedElementOptionallyContainedNrmClass> </ManagedElement> The built in visual studio XML validation complains about the element, stating "This is an invalid xsi:type 'http://www.3gpp.org/ftp/specs/archive/32_series/32.765#eutranNrm:ENBFunction'. So is the serialized XML wrong or the validation? Is it something to do with the separate namespaces? I can deserialize the XML just fine but I need the generated XML to be schema compliant (without changing the schemas). I found that if I manually change the XML to the following, the error disappears (and I find it easier to read also): <ManagedElement xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" id="1234" xmlns="http://www.3gpp.org/ftp/specs/archive/32_series/32.625#genericNrm"> <q1:ENBFunction xmlns:q1="http://www.3gpp.org/ftp/specs/archive/32_series/32.765#eutranNrm" id="1234"> <q1:attributes> <q1:userLabel>label</q1:userLabel> </q1:attributes> </q1:ENBFunction> </ManagedElement> Can I force the serializer to output it this way? Thanks for looking... A: I solved this by manually editing the code generated from the XSD. An XmlElementAttribute is needed on the ManagedElement classes Items collection to ensure the serialisation works correctly: [System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "4.0.30319.1")] [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(AnonymousType=true, Namespace="http://www.3gpp.org/ftp/specs/archive/32_series/32.625#genericNrm")] [System.Xml.Serialization.XmlRootAttribute(Namespace="http://www.3gpp.org/ftp/specs/archive/32_series/32.625#genericNrm", IsNullable=false)] public partial class ManagedElement : NrmClass { ... 
[System.Xml.Serialization.XmlElementAttribute("ENBFunction", typeof(ENBFunction), Namespace = "http://www.3gpp.org/ftp/specs/archive/32_series/32.765#eutranNrm")] public NrmClass[] Items { ... This attribute is required for all classes inherited from ManagedElement to ensure the correct one is used at serialization time.
{ "pile_set_name": "Github" }
/* * Copyright (c) 1994-1997 Sam Leffler * Copyright (c) 1994-1997 Silicon Graphics, Inc. * * Permission to use, copy, modify, distribute, and sell this software and * its documentation for any purpose is hereby granted without fee, provided * that (i) the above copyright notices and this permission notice appear in * all copies of the software and related documentation, and (ii) the names of * Sam Leffler and Silicon Graphics may not be used in any advertising or * publicity relating to the software without the specific, prior written * permission of Sam Leffler and Silicon Graphics. * * THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, * EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY * WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. * * IN NO EVENT SHALL SAM LEFFLER OR SILICON GRAPHICS BE LIABLE FOR * ANY SPECIAL, INCIDENTAL, INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, * OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, * WHETHER OR NOT ADVISED OF THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF * LIABILITY, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE * OF THIS SOFTWARE. */ #define WIN32_LEAN_AND_MEAN #define VC_EXTRALEAN #include "tiffiop.h" #include <stdlib.h> #ifdef JPEG_SUPPORT /* * TIFF Library * * JPEG Compression support per TIFF Technical Note #2 * (*not* per the original TIFF 6.0 spec). * * This file is simply an interface to the libjpeg library written by * the Independent JPEG Group. You need release 5 or later of the IJG * code, which you can find on the Internet at ftp.uu.net:/graphics/jpeg/. * * Contributed by Tom Lane <[email protected]>. */ #include <setjmp.h> int TIFFFillStrip(TIFF* tif, uint32 strip); int TIFFFillTile(TIFF* tif, uint32 tile); int TIFFReInitJPEG_12( TIFF *tif, int scheme, int is_encode ); int TIFFJPEGIsFullStripRequired_12(TIFF* tif); /* We undefine FAR to avoid conflict with JPEG definition */ #ifdef FAR #undef FAR #endif /* Libjpeg's jmorecfg.h defines INT16 and INT32, but only if XMD_H is not defined. Unfortunately, the MinGW and Borland compilers include a typedef for INT32, which causes a conflict. MSVC does not include a conflicting typedef given the headers which are included. */ #if defined(__BORLANDC__) || defined(__MINGW32__) # define XMD_H 1 #endif /* The windows RPCNDR.H file defines boolean, but defines it with the unsigned char size. You should compile JPEG library using appropriate definitions in jconfig.h header, but many users compile library in wrong way. That causes errors of the following type: "JPEGLib: JPEG parameter struct mismatch: library thinks size is 432, caller expects 464" For such users we will fix the problem here. See install.doc file from the JPEG library distribution for details. */ /* Define "boolean" as unsigned char, not int, per Windows custom. */ #if defined(__WIN32__) && !defined(__MINGW32__) # ifndef __RPCNDR_H__ /* don't conflict if rpcndr.h already read */ typedef unsigned char boolean; # endif # define HAVE_BOOLEAN /* prevent jmorecfg.h from redefining it */ #endif #include "jpeglib.h" #include "jerror.h" /* * Do we want to do special processing suitable for when JSAMPLE is a * 16bit value? */ #if defined(JPEG_LIB_MK1) # define JPEG_LIB_MK1_OR_12BIT 1 #elif BITS_IN_JSAMPLE == 12 # define JPEG_LIB_MK1_OR_12BIT 1 #endif /* * We are using width_in_blocks which is supposed to be private to * libjpeg. Unfortunately, the libjpeg delivered with Cygwin has * renamed this member to width_in_data_units. 
Since the header has * also renamed a define, use that unique define name in order to * detect the problem header and adjust to suit. */ #if defined(D_MAX_DATA_UNITS_IN_MCU) #define width_in_blocks width_in_data_units #endif /* * On some machines it may be worthwhile to use _setjmp or sigsetjmp * in place of plain setjmp. These macros will make it easier. */ #define SETJMP(jbuf) setjmp(jbuf) #define LONGJMP(jbuf,code) longjmp(jbuf,code) #define JMP_BUF jmp_buf typedef struct jpeg_destination_mgr jpeg_destination_mgr; typedef struct jpeg_source_mgr jpeg_source_mgr; typedef struct jpeg_error_mgr jpeg_error_mgr; /* * State block for each open TIFF file using * libjpeg to do JPEG compression/decompression. * * libjpeg's visible state is either a jpeg_compress_struct * or jpeg_decompress_struct depending on which way we * are going. comm can be used to refer to the fields * which are common to both. * * NB: cinfo is required to be the first member of JPEGState, * so we can safely cast JPEGState* -> jpeg_xxx_struct* * and vice versa! */ typedef struct { union { struct jpeg_compress_struct c; struct jpeg_decompress_struct d; struct jpeg_common_struct comm; } cinfo; /* NB: must be first */ int cinfo_initialized; jpeg_error_mgr err; /* libjpeg error manager */ JMP_BUF exit_jmpbuf; /* for catching libjpeg failures */ struct jpeg_progress_mgr progress; /* * The following two members could be a union, but * they're small enough that it's not worth the effort. */ jpeg_destination_mgr dest; /* data dest for compression */ jpeg_source_mgr src; /* data source for decompression */ /* private state */ TIFF* tif; /* back link needed by some code */ uint16 photometric; /* copy of PhotometricInterpretation */ uint16 h_sampling; /* luminance sampling factors */ uint16 v_sampling; tmsize_t bytesperline; /* decompressed bytes per scanline */ /* pointers to intermediate buffers when processing downsampled data */ JSAMPARRAY ds_buffer[MAX_COMPONENTS]; int scancount; /* number of "scanlines" accumulated */ int samplesperclump; TIFFVGetMethod vgetparent; /* super-class method */ TIFFVSetMethod vsetparent; /* super-class method */ TIFFPrintMethod printdir; /* super-class method */ TIFFStripMethod defsparent; /* super-class method */ TIFFTileMethod deftparent; /* super-class method */ /* pseudo-tag fields */ void* jpegtables; /* JPEGTables tag value, or NULL */ uint32 jpegtables_length; /* number of bytes in same */ int jpegquality; /* Compression quality level */ int jpegcolormode; /* Auto RGB<=>YCbCr convert? 
*/ int jpegtablesmode; /* What to put in JPEGTables */ int ycbcrsampling_fetched; int max_allowed_scan_number; } JPEGState; #define JState(tif) ((JPEGState*)(tif)->tif_data) static int JPEGDecode(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s); static int JPEGDecodeRaw(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s); static int JPEGEncode(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s); static int JPEGEncodeRaw(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s); static int JPEGInitializeLibJPEG(TIFF * tif, int decode ); static int DecodeRowError(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s); #define FIELD_JPEGTABLES (FIELD_CODEC+0) static const TIFFField jpegFields[] = { { TIFFTAG_JPEGTABLES, -3, -3, TIFF_UNDEFINED, 0, TIFF_SETGET_C32_UINT8, TIFF_SETGET_C32_UINT8, FIELD_JPEGTABLES, FALSE, TRUE, "JPEGTables", NULL }, { TIFFTAG_JPEGQUALITY, 0, 0, TIFF_ANY, 0, TIFF_SETGET_INT, TIFF_SETGET_UNDEFINED, FIELD_PSEUDO, TRUE, FALSE, "", NULL }, { TIFFTAG_JPEGCOLORMODE, 0, 0, TIFF_ANY, 0, TIFF_SETGET_INT, TIFF_SETGET_UNDEFINED, FIELD_PSEUDO, FALSE, FALSE, "", NULL }, { TIFFTAG_JPEGTABLESMODE, 0, 0, TIFF_ANY, 0, TIFF_SETGET_INT, TIFF_SETGET_UNDEFINED, FIELD_PSEUDO, FALSE, FALSE, "", NULL } }; /* * libjpeg interface layer. * * We use setjmp/longjmp to return control to libtiff * when a fatal error is encountered within the JPEG * library. We also direct libjpeg error and warning * messages through the appropriate libtiff handlers. */ /* * Error handling routines (these replace corresponding * IJG routines from jerror.c). These are used for both * compression and decompression. */ static void TIFFjpeg_error_exit(j_common_ptr cinfo) { JPEGState *sp = (JPEGState *) cinfo; /* NB: cinfo assumed first */ char buffer[JMSG_LENGTH_MAX]; (*cinfo->err->format_message) (cinfo, buffer); TIFFErrorExt(sp->tif->tif_clientdata, "JPEGLib", "%s", buffer); /* display the error message */ jpeg_abort(cinfo); /* clean up libjpeg state */ LONGJMP(sp->exit_jmpbuf, 1); /* return to libtiff caller */ } /* * This routine is invoked only for warning messages, * since error_exit does its own thing and trace_level * is never set > 0. */ static void TIFFjpeg_output_message(j_common_ptr cinfo) { char buffer[JMSG_LENGTH_MAX]; (*cinfo->err->format_message) (cinfo, buffer); TIFFWarningExt(((JPEGState *) cinfo)->tif->tif_clientdata, "JPEGLib", "%s", buffer); } /* Avoid the risk of denial-of-service on crafted JPEGs with an insane */ /* number of scans. */ /* See http://www.libjpeg-turbo.org/pmwiki/uploads/About/TwoIssueswiththeJPEGStandard.pdf */ static void TIFFjpeg_progress_monitor(j_common_ptr cinfo) { JPEGState *sp = (JPEGState *) cinfo; /* NB: cinfo assumed first */ if (cinfo->is_decompressor) { const int scan_no = ((j_decompress_ptr)cinfo)->input_scan_number; if (scan_no >= sp->max_allowed_scan_number) { TIFFErrorExt(((JPEGState *) cinfo)->tif->tif_clientdata, "TIFFjpeg_progress_monitor", "Scan number %d exceeds maximum scans (%d). This limit " "can be raised through the LIBTIFF_JPEG_MAX_ALLOWED_SCAN_NUMBER " "environment variable.", scan_no, sp->max_allowed_scan_number); jpeg_abort(cinfo); /* clean up libjpeg state */ LONGJMP(sp->exit_jmpbuf, 1); /* return to libtiff caller */ } } } /* * Interface routines. This layer of routines exists * primarily to limit side-effects from using setjmp. * Also, normal/error returns are converted into return * values per libtiff practice. */ #define CALLJPEG(sp, fail, op) (SETJMP((sp)->exit_jmpbuf) ? 
(fail) : (op)) #define CALLVJPEG(sp, op) CALLJPEG(sp, 0, ((op),1)) static int TIFFjpeg_create_compress(JPEGState* sp) { /* initialize JPEG error handling */ sp->cinfo.c.err = jpeg_std_error(&sp->err); sp->err.error_exit = TIFFjpeg_error_exit; sp->err.output_message = TIFFjpeg_output_message; /* set client_data to avoid UMR warning from tools like Purify */ sp->cinfo.c.client_data = NULL; return CALLVJPEG(sp, jpeg_create_compress(&sp->cinfo.c)); } static int TIFFjpeg_create_decompress(JPEGState* sp) { /* initialize JPEG error handling */ sp->cinfo.d.err = jpeg_std_error(&sp->err); sp->err.error_exit = TIFFjpeg_error_exit; sp->err.output_message = TIFFjpeg_output_message; /* set client_data to avoid UMR warning from tools like Purify */ sp->cinfo.d.client_data = NULL; return CALLVJPEG(sp, jpeg_create_decompress(&sp->cinfo.d)); } static int TIFFjpeg_set_defaults(JPEGState* sp) { return CALLVJPEG(sp, jpeg_set_defaults(&sp->cinfo.c)); } static int TIFFjpeg_set_colorspace(JPEGState* sp, J_COLOR_SPACE colorspace) { return CALLVJPEG(sp, jpeg_set_colorspace(&sp->cinfo.c, colorspace)); } static int TIFFjpeg_set_quality(JPEGState* sp, int quality, boolean force_baseline) { return CALLVJPEG(sp, jpeg_set_quality(&sp->cinfo.c, quality, force_baseline)); } static int TIFFjpeg_suppress_tables(JPEGState* sp, boolean suppress) { return CALLVJPEG(sp, jpeg_suppress_tables(&sp->cinfo.c, suppress)); } static int TIFFjpeg_start_compress(JPEGState* sp, boolean write_all_tables) { return CALLVJPEG(sp, jpeg_start_compress(&sp->cinfo.c, write_all_tables)); } static int TIFFjpeg_write_scanlines(JPEGState* sp, JSAMPARRAY scanlines, int num_lines) { return CALLJPEG(sp, -1, (int) jpeg_write_scanlines(&sp->cinfo.c, scanlines, (JDIMENSION) num_lines)); } static int TIFFjpeg_write_raw_data(JPEGState* sp, JSAMPIMAGE data, int num_lines) { return CALLJPEG(sp, -1, (int) jpeg_write_raw_data(&sp->cinfo.c, data, (JDIMENSION) num_lines)); } static int TIFFjpeg_finish_compress(JPEGState* sp) { return CALLVJPEG(sp, jpeg_finish_compress(&sp->cinfo.c)); } static int TIFFjpeg_write_tables(JPEGState* sp) { return CALLVJPEG(sp, jpeg_write_tables(&sp->cinfo.c)); } static int TIFFjpeg_read_header(JPEGState* sp, boolean require_image) { return CALLJPEG(sp, -1, jpeg_read_header(&sp->cinfo.d, require_image)); } static int TIFFjpeg_has_multiple_scans(JPEGState* sp) { return CALLJPEG(sp, 0, jpeg_has_multiple_scans(&sp->cinfo.d)); } static int TIFFjpeg_start_decompress(JPEGState* sp) { const char* sz_max_allowed_scan_number; /* progress monitor */ sp->cinfo.d.progress = &sp->progress; sp->progress.progress_monitor = TIFFjpeg_progress_monitor; sp->max_allowed_scan_number = 100; sz_max_allowed_scan_number = getenv("LIBTIFF_JPEG_MAX_ALLOWED_SCAN_NUMBER"); if( sz_max_allowed_scan_number ) sp->max_allowed_scan_number = atoi(sz_max_allowed_scan_number); return CALLVJPEG(sp, jpeg_start_decompress(&sp->cinfo.d)); } static int TIFFjpeg_read_scanlines(JPEGState* sp, JSAMPARRAY scanlines, int max_lines) { return CALLJPEG(sp, -1, (int) jpeg_read_scanlines(&sp->cinfo.d, scanlines, (JDIMENSION) max_lines)); } static int TIFFjpeg_read_raw_data(JPEGState* sp, JSAMPIMAGE data, int max_lines) { return CALLJPEG(sp, -1, (int) jpeg_read_raw_data(&sp->cinfo.d, data, (JDIMENSION) max_lines)); } static int TIFFjpeg_finish_decompress(JPEGState* sp) { return CALLJPEG(sp, -1, (int) jpeg_finish_decompress(&sp->cinfo.d)); } static int TIFFjpeg_abort(JPEGState* sp) { return CALLVJPEG(sp, jpeg_abort(&sp->cinfo.comm)); } static int TIFFjpeg_destroy(JPEGState* sp) { return 
CALLVJPEG(sp, jpeg_destroy(&sp->cinfo.comm)); } static JSAMPARRAY TIFFjpeg_alloc_sarray(JPEGState* sp, int pool_id, JDIMENSION samplesperrow, JDIMENSION numrows) { return CALLJPEG(sp, (JSAMPARRAY) NULL, (*sp->cinfo.comm.mem->alloc_sarray) (&sp->cinfo.comm, pool_id, samplesperrow, numrows)); } /* * JPEG library destination data manager. * These routines direct compressed data from libjpeg into the * libtiff output buffer. */ static void std_init_destination(j_compress_ptr cinfo) { JPEGState* sp = (JPEGState*) cinfo; TIFF* tif = sp->tif; sp->dest.next_output_byte = (JOCTET*) tif->tif_rawdata; sp->dest.free_in_buffer = (size_t) tif->tif_rawdatasize; } static boolean std_empty_output_buffer(j_compress_ptr cinfo) { JPEGState* sp = (JPEGState*) cinfo; TIFF* tif = sp->tif; /* the entire buffer has been filled */ tif->tif_rawcc = tif->tif_rawdatasize; #ifdef IPPJ_HUFF /* * The Intel IPP performance library does not necessarily fill up * the whole output buffer on each pass, so only dump out the parts * that have been filled. * http://trac.osgeo.org/gdal/wiki/JpegIPP */ if ( sp->dest.free_in_buffer >= 0 ) { tif->tif_rawcc = tif->tif_rawdatasize - sp->dest.free_in_buffer; } #endif TIFFFlushData1(tif); sp->dest.next_output_byte = (JOCTET*) tif->tif_rawdata; sp->dest.free_in_buffer = (size_t) tif->tif_rawdatasize; return (TRUE); } static void std_term_destination(j_compress_ptr cinfo) { JPEGState* sp = (JPEGState*) cinfo; TIFF* tif = sp->tif; tif->tif_rawcp = (uint8*) sp->dest.next_output_byte; tif->tif_rawcc = tif->tif_rawdatasize - (tmsize_t) sp->dest.free_in_buffer; /* NB: libtiff does the final buffer flush */ } static void TIFFjpeg_data_dest(JPEGState* sp, TIFF* tif) { (void) tif; sp->cinfo.c.dest = &sp->dest; sp->dest.init_destination = std_init_destination; sp->dest.empty_output_buffer = std_empty_output_buffer; sp->dest.term_destination = std_term_destination; } /* * Alternate destination manager for outputting to JPEGTables field. */ static void tables_init_destination(j_compress_ptr cinfo) { JPEGState* sp = (JPEGState*) cinfo; /* while building, jpegtables_length is allocated buffer size */ sp->dest.next_output_byte = (JOCTET*) sp->jpegtables; sp->dest.free_in_buffer = (size_t) sp->jpegtables_length; } static boolean tables_empty_output_buffer(j_compress_ptr cinfo) { JPEGState* sp = (JPEGState*) cinfo; void* newbuf; /* the entire buffer has been filled; enlarge it by 1000 bytes */ newbuf = _TIFFrealloc((void*) sp->jpegtables, (tmsize_t) (sp->jpegtables_length + 1000)); if (newbuf == NULL) ERREXIT1(cinfo, JERR_OUT_OF_MEMORY, 100); sp->dest.next_output_byte = (JOCTET*) newbuf + sp->jpegtables_length; sp->dest.free_in_buffer = (size_t) 1000; sp->jpegtables = newbuf; sp->jpegtables_length += 1000; return (TRUE); } static void tables_term_destination(j_compress_ptr cinfo) { JPEGState* sp = (JPEGState*) cinfo; /* set tables length to number of bytes actually emitted */ sp->jpegtables_length -= (uint32) sp->dest.free_in_buffer; } static int TIFFjpeg_tables_dest(JPEGState* sp, TIFF* tif) { (void) tif; /* * Allocate a working buffer for building tables. * Initial size is 1000 bytes, which is usually adequate. 
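 * Should the emitted tables outgrow that initial allocation,
 * tables_empty_output_buffer() above enlarges the buffer in 1000-byte
 * steps, and tables_term_destination() finally trims jpegtables_length
 * down to the number of bytes actually emitted.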
*/ if (sp->jpegtables) _TIFFfree(sp->jpegtables); sp->jpegtables_length = 1000; sp->jpegtables = (void*) _TIFFmalloc((tmsize_t) sp->jpegtables_length); if (sp->jpegtables == NULL) { sp->jpegtables_length = 0; TIFFErrorExt(sp->tif->tif_clientdata, "TIFFjpeg_tables_dest", "No space for JPEGTables"); return (0); } sp->cinfo.c.dest = &sp->dest; sp->dest.init_destination = tables_init_destination; sp->dest.empty_output_buffer = tables_empty_output_buffer; sp->dest.term_destination = tables_term_destination; return (1); } /* * JPEG library source data manager. * These routines supply compressed data to libjpeg. */ static void std_init_source(j_decompress_ptr cinfo) { JPEGState* sp = (JPEGState*) cinfo; TIFF* tif = sp->tif; sp->src.next_input_byte = (const JOCTET*) tif->tif_rawdata; sp->src.bytes_in_buffer = (size_t) tif->tif_rawcc; } static boolean std_fill_input_buffer(j_decompress_ptr cinfo) { JPEGState* sp = (JPEGState* ) cinfo; static const JOCTET dummy_EOI[2] = { 0xFF, JPEG_EOI }; #ifdef IPPJ_HUFF /* * The Intel IPP performance library does not necessarily read the whole * input buffer in one pass, so it is possible to get here with data * yet to read. * * We just return without doing anything, until the entire buffer has * been read. * http://trac.osgeo.org/gdal/wiki/JpegIPP */ if( sp->src.bytes_in_buffer > 0 ) { return (TRUE); } #endif /* * Normally the whole strip/tile is read and so we don't need to do * a fill. In the case of CHUNKY_STRIP_READ_SUPPORT we might not have * all the data, but the rawdata is refreshed between scanlines and * we push this into the io machinery in JPEGDecode(). * http://trac.osgeo.org/gdal/ticket/3894 */ WARNMS(cinfo, JWRN_JPEG_EOF); /* insert a fake EOI marker */ sp->src.next_input_byte = dummy_EOI; sp->src.bytes_in_buffer = 2; return (TRUE); } static void std_skip_input_data(j_decompress_ptr cinfo, long num_bytes) { JPEGState* sp = (JPEGState*) cinfo; if (num_bytes > 0) { if ((size_t)num_bytes > sp->src.bytes_in_buffer) { /* oops, buffer overrun */ (void) std_fill_input_buffer(cinfo); } else { sp->src.next_input_byte += (size_t) num_bytes; sp->src.bytes_in_buffer -= (size_t) num_bytes; } } } static void std_term_source(j_decompress_ptr cinfo) { /* No work necessary here */ (void) cinfo; } static void TIFFjpeg_data_src(JPEGState* sp) { sp->cinfo.d.src = &sp->src; sp->src.init_source = std_init_source; sp->src.fill_input_buffer = std_fill_input_buffer; sp->src.skip_input_data = std_skip_input_data; sp->src.resync_to_restart = jpeg_resync_to_restart; sp->src.term_source = std_term_source; sp->src.bytes_in_buffer = 0; /* for safety */ sp->src.next_input_byte = NULL; } /* * Alternate source manager for reading from JPEGTables. * We can share all the code except for the init routine. */ static void tables_init_source(j_decompress_ptr cinfo) { JPEGState* sp = (JPEGState*) cinfo; sp->src.next_input_byte = (const JOCTET*) sp->jpegtables; sp->src.bytes_in_buffer = (size_t) sp->jpegtables_length; } static void TIFFjpeg_tables_src(JPEGState* sp) { TIFFjpeg_data_src(sp); sp->src.init_source = tables_init_source; } /* * Allocate downsampled-data buffers needed for downsampled I/O. * We use values computed in jpeg_start_compress or jpeg_start_decompress. * We use libjpeg's allocator so that buffers will be released automatically * when done with strip/tile. * This is also a handy place to compute samplesperclump, bytesperline. 
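 * For example, with the default 2x2 YCbCr subsampling of a
 * PLANARCONFIG_CONTIG three-component image, samplesperclump works out
 * to 2*2 + 1*1 + 1*1 = 6 samples per clump.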
*/ static int alloc_downsampled_buffers(TIFF* tif, jpeg_component_info* comp_info, int num_components) { JPEGState* sp = JState(tif); int ci; jpeg_component_info* compptr; JSAMPARRAY buf; int samples_per_clump = 0; for (ci = 0, compptr = comp_info; ci < num_components; ci++, compptr++) { samples_per_clump += compptr->h_samp_factor * compptr->v_samp_factor; buf = TIFFjpeg_alloc_sarray(sp, JPOOL_IMAGE, compptr->width_in_blocks * DCTSIZE, (JDIMENSION) (compptr->v_samp_factor*DCTSIZE)); if (buf == NULL) return (0); sp->ds_buffer[ci] = buf; } sp->samplesperclump = samples_per_clump; return (1); } /* * JPEG Decoding. */ #ifdef CHECK_JPEG_YCBCR_SUBSAMPLING #define JPEG_MARKER_SOF0 0xC0 #define JPEG_MARKER_SOF1 0xC1 #define JPEG_MARKER_SOF2 0xC2 #define JPEG_MARKER_SOF9 0xC9 #define JPEG_MARKER_SOF10 0xCA #define JPEG_MARKER_DHT 0xC4 #define JPEG_MARKER_SOI 0xD8 #define JPEG_MARKER_SOS 0xDA #define JPEG_MARKER_DQT 0xDB #define JPEG_MARKER_DRI 0xDD #define JPEG_MARKER_APP0 0xE0 #define JPEG_MARKER_COM 0xFE struct JPEGFixupTagsSubsamplingData { TIFF* tif; void* buffer; uint32 buffersize; uint8* buffercurrentbyte; uint32 bufferbytesleft; uint64 fileoffset; uint64 filebytesleft; uint8 filepositioned; }; static void JPEGFixupTagsSubsampling(TIFF* tif); static int JPEGFixupTagsSubsamplingSec(struct JPEGFixupTagsSubsamplingData* data); static int JPEGFixupTagsSubsamplingReadByte(struct JPEGFixupTagsSubsamplingData* data, uint8* result); static int JPEGFixupTagsSubsamplingReadWord(struct JPEGFixupTagsSubsamplingData* data, uint16* result); static void JPEGFixupTagsSubsamplingSkip(struct JPEGFixupTagsSubsamplingData* data, uint16 skiplength); #endif static int JPEGFixupTags(TIFF* tif) { #ifdef CHECK_JPEG_YCBCR_SUBSAMPLING JPEGState* sp = JState(tif); if ((tif->tif_dir.td_photometric==PHOTOMETRIC_YCBCR)&& (tif->tif_dir.td_planarconfig==PLANARCONFIG_CONTIG)&& (tif->tif_dir.td_samplesperpixel==3) && !sp->ycbcrsampling_fetched) JPEGFixupTagsSubsampling(tif); #endif return(1); } #ifdef CHECK_JPEG_YCBCR_SUBSAMPLING static void JPEGFixupTagsSubsampling(TIFF* tif) { /* * Some JPEG-in-TIFF producers do not emit the YCBCRSUBSAMPLING values in * the TIFF tags, but still use non-default (2,2) values within the jpeg * data stream itself. In order for TIFF applications to work properly * - for instance to get the strip buffer size right - it is imperative * that the subsampling be available before we start reading the image * data normally. This function will attempt to analyze the first strip in * order to get the sampling values from the jpeg data stream. * * Note that JPEGPreDecode() will produce a fairly loud warning when the * discovered sampling does not match the default sampling (2,2) or whatever * was actually in the tiff tags. * * See the bug in bugzilla for details: * * http://bugzilla.remotesensing.org/show_bug.cgi?id=168 * * Frank Warmerdam, July 2002 * Joris Van Damme, May 2007 */ static const char module[] = "JPEGFixupTagsSubsampling"; struct JPEGFixupTagsSubsamplingData m; uint64 fileoffset = TIFFGetStrileOffset(tif, 0); if( fileoffset == 0 ) { /* Do not even try to check if the first strip/tile does not yet exist, as occurs when GDAL has created a new NULL file for instance. 
*/ return; } m.tif=tif; m.buffersize=2048; m.buffer=_TIFFmalloc(m.buffersize); if (m.buffer==NULL) { TIFFWarningExt(tif->tif_clientdata,module, "Unable to allocate memory for auto-correcting of subsampling values; auto-correcting skipped"); return; } m.buffercurrentbyte=NULL; m.bufferbytesleft=0; m.fileoffset=fileoffset; m.filepositioned=0; m.filebytesleft=TIFFGetStrileByteCount(tif, 0); if (!JPEGFixupTagsSubsamplingSec(&m)) TIFFWarningExt(tif->tif_clientdata,module, "Unable to auto-correct subsampling values, likely corrupt JPEG compressed data in first strip/tile; auto-correcting skipped"); _TIFFfree(m.buffer); } static int JPEGFixupTagsSubsamplingSec(struct JPEGFixupTagsSubsamplingData* data) { static const char module[] = "JPEGFixupTagsSubsamplingSec"; uint8 m; while (1) { while (1) { if (!JPEGFixupTagsSubsamplingReadByte(data,&m)) return(0); if (m==255) break; } while (1) { if (!JPEGFixupTagsSubsamplingReadByte(data,&m)) return(0); if (m!=255) break; } switch (m) { case JPEG_MARKER_SOI: /* this type of marker has no data and should be skipped */ break; case JPEG_MARKER_COM: case JPEG_MARKER_APP0: case JPEG_MARKER_APP0+1: case JPEG_MARKER_APP0+2: case JPEG_MARKER_APP0+3: case JPEG_MARKER_APP0+4: case JPEG_MARKER_APP0+5: case JPEG_MARKER_APP0+6: case JPEG_MARKER_APP0+7: case JPEG_MARKER_APP0+8: case JPEG_MARKER_APP0+9: case JPEG_MARKER_APP0+10: case JPEG_MARKER_APP0+11: case JPEG_MARKER_APP0+12: case JPEG_MARKER_APP0+13: case JPEG_MARKER_APP0+14: case JPEG_MARKER_APP0+15: case JPEG_MARKER_DQT: case JPEG_MARKER_SOS: case JPEG_MARKER_DHT: case JPEG_MARKER_DRI: /* this type of marker has data, but it has no use to us and should be skipped */ { uint16 n; if (!JPEGFixupTagsSubsamplingReadWord(data,&n)) return(0); if (n<2) return(0); n-=2; if (n>0) JPEGFixupTagsSubsamplingSkip(data,n); } break; case JPEG_MARKER_SOF0: /* Baseline sequential Huffman */ case JPEG_MARKER_SOF1: /* Extended sequential Huffman */ case JPEG_MARKER_SOF2: /* Progressive Huffman: normally not allowed by TechNote, but that doesn't hurt supporting it */ case JPEG_MARKER_SOF9: /* Extended sequential arithmetic */ case JPEG_MARKER_SOF10: /* Progressive arithmetic: normally not allowed by TechNote, but that doesn't hurt supporting it */ /* this marker contains the subsampling factors we're scanning for */ { uint16 n; uint16 o; uint8 p; uint8 ph,pv; if (!JPEGFixupTagsSubsamplingReadWord(data,&n)) return(0); if (n!=8+data->tif->tif_dir.td_samplesperpixel*3) return(0); JPEGFixupTagsSubsamplingSkip(data,7); if (!JPEGFixupTagsSubsamplingReadByte(data,&p)) return(0); ph=(p>>4); pv=(p&15); JPEGFixupTagsSubsamplingSkip(data,1); for (o=1; o<data->tif->tif_dir.td_samplesperpixel; o++) { JPEGFixupTagsSubsamplingSkip(data,1); if (!JPEGFixupTagsSubsamplingReadByte(data,&p)) return(0); if (p!=0x11) { TIFFWarningExt(data->tif->tif_clientdata,module, "Subsampling values inside JPEG compressed data have no TIFF equivalent, auto-correction of TIFF subsampling values failed"); return(1); } JPEGFixupTagsSubsamplingSkip(data,1); } if (((ph!=1)&&(ph!=2)&&(ph!=4))||((pv!=1)&&(pv!=2)&&(pv!=4))) { TIFFWarningExt(data->tif->tif_clientdata,module, "Subsampling values inside JPEG compressed data have no TIFF equivalent, auto-correction of TIFF subsampling values failed"); return(1); } if ((ph!=data->tif->tif_dir.td_ycbcrsubsampling[0])||(pv!=data->tif->tif_dir.td_ycbcrsubsampling[1])) { TIFFWarningExt(data->tif->tif_clientdata,module, "Auto-corrected former TIFF subsampling values [%d,%d] to match subsampling values inside JPEG compressed data 
[%d,%d]", (int)data->tif->tif_dir.td_ycbcrsubsampling[0], (int)data->tif->tif_dir.td_ycbcrsubsampling[1], (int)ph,(int)pv); data->tif->tif_dir.td_ycbcrsubsampling[0]=ph; data->tif->tif_dir.td_ycbcrsubsampling[1]=pv; } } return(1); default: return(0); } } } static int JPEGFixupTagsSubsamplingReadByte(struct JPEGFixupTagsSubsamplingData* data, uint8* result) { if (data->bufferbytesleft==0) { uint32 m; if (data->filebytesleft==0) return(0); if (!data->filepositioned) { TIFFSeekFile(data->tif,data->fileoffset,SEEK_SET); data->filepositioned=1; } m=data->buffersize; if ((uint64)m>data->filebytesleft) m=(uint32)data->filebytesleft; assert(m<0x80000000UL); if (TIFFReadFile(data->tif,data->buffer,(tmsize_t)m)!=(tmsize_t)m) return(0); data->buffercurrentbyte=data->buffer; data->bufferbytesleft=m; data->fileoffset+=m; data->filebytesleft-=m; } *result=*data->buffercurrentbyte; data->buffercurrentbyte++; data->bufferbytesleft--; return(1); } static int JPEGFixupTagsSubsamplingReadWord(struct JPEGFixupTagsSubsamplingData* data, uint16* result) { uint8 ma; uint8 mb; if (!JPEGFixupTagsSubsamplingReadByte(data,&ma)) return(0); if (!JPEGFixupTagsSubsamplingReadByte(data,&mb)) return(0); *result=(ma<<8)|mb; return(1); } static void JPEGFixupTagsSubsamplingSkip(struct JPEGFixupTagsSubsamplingData* data, uint16 skiplength) { if ((uint32)skiplength<=data->bufferbytesleft) { data->buffercurrentbyte+=skiplength; data->bufferbytesleft-=skiplength; } else { uint16 m; m=(uint16)(skiplength-data->bufferbytesleft); if (m<=data->filebytesleft) { data->bufferbytesleft=0; data->fileoffset+=m; data->filebytesleft-=m; data->filepositioned=0; } else { data->bufferbytesleft=0; data->filebytesleft=0; } } } #endif static int JPEGSetupDecode(TIFF* tif) { JPEGState* sp = JState(tif); TIFFDirectory *td = &tif->tif_dir; #if defined(JPEG_DUAL_MODE_8_12) && !defined(TIFFInitJPEG) if( tif->tif_dir.td_bitspersample == 12 ) return TIFFReInitJPEG_12( tif, COMPRESSION_JPEG, 0 ); #endif JPEGInitializeLibJPEG( tif, TRUE ); assert(sp != NULL); assert(sp->cinfo.comm.is_decompressor); /* Read JPEGTables if it is present */ if (TIFFFieldSet(tif,FIELD_JPEGTABLES)) { TIFFjpeg_tables_src(sp); if(TIFFjpeg_read_header(sp,FALSE) != JPEG_HEADER_TABLES_ONLY) { TIFFErrorExt(tif->tif_clientdata, "JPEGSetupDecode", "Bogus JPEGTables field"); return (0); } } /* Grab parameters that are same for all strips/tiles */ sp->photometric = td->td_photometric; switch (sp->photometric) { case PHOTOMETRIC_YCBCR: sp->h_sampling = td->td_ycbcrsubsampling[0]; sp->v_sampling = td->td_ycbcrsubsampling[1]; break; default: /* TIFF 6.0 forbids subsampling of all other color spaces */ sp->h_sampling = 1; sp->v_sampling = 1; break; } /* Set up for reading normal data */ TIFFjpeg_data_src(sp); tif->tif_postdecode = _TIFFNoPostDecode; /* override byte swapping */ return (1); } /* Returns 1 if the full strip should be read, even when doing scanline per */ /* scanline decoding. This happens when the JPEG stream uses multiple scans. */ /* Currently only called in CHUNKY_STRIP_READ_SUPPORT mode through */ /* scanline interface. */ /* Only reads tif->tif_dir.td_bitspersample, tif->tif_rawdata and */ /* tif->tif_rawcc members. 
*/ /* Can be called independently of the usual setup/predecode/decode states */ int TIFFJPEGIsFullStripRequired(TIFF* tif) { int ret; JPEGState state; #if defined(JPEG_DUAL_MODE_8_12) && !defined(TIFFJPEGIsFullStripRequired) if( tif->tif_dir.td_bitspersample == 12 ) return TIFFJPEGIsFullStripRequired_12( tif ); #endif memset(&state, 0, sizeof(JPEGState)); state.tif = tif; TIFFjpeg_create_decompress(&state); TIFFjpeg_data_src(&state); if (TIFFjpeg_read_header(&state, TRUE) != JPEG_HEADER_OK) { TIFFjpeg_destroy(&state); return (0); } ret = TIFFjpeg_has_multiple_scans(&state); TIFFjpeg_destroy(&state); return ret; } /* * Set up for decoding a strip or tile. */ /*ARGSUSED*/ static int JPEGPreDecode(TIFF* tif, uint16 s) { JPEGState *sp = JState(tif); TIFFDirectory *td = &tif->tif_dir; static const char module[] = "JPEGPreDecode"; uint32 segment_width, segment_height; int downsampled_output; int ci; assert(sp != NULL); if (sp->cinfo.comm.is_decompressor == 0) { tif->tif_setupdecode( tif ); } assert(sp->cinfo.comm.is_decompressor); /* * Reset decoder state from any previous strip/tile, * in case application didn't read the whole strip. */ if (!TIFFjpeg_abort(sp)) return (0); /* * Read the header for this strip/tile. */ if (TIFFjpeg_read_header(sp, TRUE) != JPEG_HEADER_OK) return (0); tif->tif_rawcp = (uint8*) sp->src.next_input_byte; tif->tif_rawcc = sp->src.bytes_in_buffer; /* * Check image parameters and set decompression parameters. */ if (isTiled(tif)) { segment_width = td->td_tilewidth; segment_height = td->td_tilelength; sp->bytesperline = TIFFTileRowSize(tif); } else { segment_width = td->td_imagewidth; segment_height = td->td_imagelength - tif->tif_row; if (segment_height > td->td_rowsperstrip) segment_height = td->td_rowsperstrip; sp->bytesperline = TIFFScanlineSize(tif); } if (td->td_planarconfig == PLANARCONFIG_SEPARATE && s > 0) { /* * For PC 2, scale down the expected strip/tile size * to match a downsampled component */ segment_width = TIFFhowmany_32(segment_width, sp->h_sampling); segment_height = TIFFhowmany_32(segment_height, sp->v_sampling); } if (sp->cinfo.d.image_width < segment_width || sp->cinfo.d.image_height < segment_height) { TIFFWarningExt(tif->tif_clientdata, module, "Improper JPEG strip/tile size, " "expected %dx%d, got %dx%d", segment_width, segment_height, sp->cinfo.d.image_width, sp->cinfo.d.image_height); } if( sp->cinfo.d.image_width == segment_width && sp->cinfo.d.image_height > segment_height && tif->tif_row + segment_height == td->td_imagelength && !isTiled(tif) ) { /* Some files have a last strip, that should be truncated, */ /* but their JPEG codestream has still the maximum strip */ /* height. Warn about this as this is non compliant, but */ /* we can safely recover from that. */ TIFFWarningExt(tif->tif_clientdata, module, "JPEG strip size exceeds expected dimensions," " expected %dx%d, got %dx%d", segment_width, segment_height, sp->cinfo.d.image_width, sp->cinfo.d.image_height); } else if (sp->cinfo.d.image_width > segment_width || sp->cinfo.d.image_height > segment_height) { /* * This case could be dangerous, if the strip or tile size has * been reported as less than the amount of data jpeg will * return, some potential security issues arise. Catch this * case and error out. 
*/ TIFFErrorExt(tif->tif_clientdata, module, "JPEG strip/tile size exceeds expected dimensions," " expected %dx%d, got %dx%d", segment_width, segment_height, sp->cinfo.d.image_width, sp->cinfo.d.image_height); return (0); } if (sp->cinfo.d.num_components != (td->td_planarconfig == PLANARCONFIG_CONTIG ? td->td_samplesperpixel : 1)) { TIFFErrorExt(tif->tif_clientdata, module, "Improper JPEG component count"); return (0); } #ifdef JPEG_LIB_MK1 if (12 != td->td_bitspersample && 8 != td->td_bitspersample) { TIFFErrorExt(tif->tif_clientdata, module, "Improper JPEG data precision"); return (0); } sp->cinfo.d.data_precision = td->td_bitspersample; sp->cinfo.d.bits_in_jsample = td->td_bitspersample; #else if (sp->cinfo.d.data_precision != td->td_bitspersample) { TIFFErrorExt(tif->tif_clientdata, module, "Improper JPEG data precision"); return (0); } #endif /* In some cases, libjpeg needs to allocate a lot of memory */ /* http://www.libjpeg-turbo.org/pmwiki/uploads/About/TwoIssueswiththeJPEGStandard.pdf */ if( TIFFjpeg_has_multiple_scans(sp) ) { /* In this case libjpeg will need to allocate memory or backing */ /* store for all coefficients */ /* See call to jinit_d_coef_controller() from master_selection() */ /* in libjpeg */ /* 1 MB for regular libjpeg usage */ toff_t nRequiredMemory = 1024 * 1024; for (ci = 0; ci < sp->cinfo.d.num_components; ci++) { const jpeg_component_info *compptr = &(sp->cinfo.d.comp_info[ci]); if( compptr->h_samp_factor > 0 && compptr->v_samp_factor > 0 ) { nRequiredMemory += (toff_t)( ((compptr->width_in_blocks + compptr->h_samp_factor - 1) / compptr->h_samp_factor)) * ((compptr->height_in_blocks + compptr->v_samp_factor - 1) / compptr->v_samp_factor) * sizeof(JBLOCK); } } if( sp->cinfo.d.mem->max_memory_to_use > 0 && nRequiredMemory > (toff_t)(sp->cinfo.d.mem->max_memory_to_use) && getenv("LIBTIFF_ALLOW_LARGE_LIBJPEG_MEM_ALLOC") == NULL ) { TIFFErrorExt(tif->tif_clientdata, module, "Reading this image would require libjpeg to allocate " "at least %u bytes. " "This is disabled since above the %u threshold. 
" "You may override this restriction by defining the " "LIBTIFF_ALLOW_LARGE_LIBJPEG_MEM_ALLOC environment variable, " "or setting the JPEGMEM environment variable to a value greater " "or equal to '%uM'", (unsigned)(nRequiredMemory), (unsigned)(sp->cinfo.d.mem->max_memory_to_use), (unsigned)((nRequiredMemory + 1000000 - 1) / 1000000)); return 0; } } if (td->td_planarconfig == PLANARCONFIG_CONTIG) { /* Component 0 should have expected sampling factors */ if (sp->cinfo.d.comp_info[0].h_samp_factor != sp->h_sampling || sp->cinfo.d.comp_info[0].v_samp_factor != sp->v_sampling) { TIFFErrorExt(tif->tif_clientdata, module, "Improper JPEG sampling factors %d,%d\n" "Apparently should be %d,%d.", sp->cinfo.d.comp_info[0].h_samp_factor, sp->cinfo.d.comp_info[0].v_samp_factor, sp->h_sampling, sp->v_sampling); return (0); } /* Rest should have sampling factors 1,1 */ for (ci = 1; ci < sp->cinfo.d.num_components; ci++) { if (sp->cinfo.d.comp_info[ci].h_samp_factor != 1 || sp->cinfo.d.comp_info[ci].v_samp_factor != 1) { TIFFErrorExt(tif->tif_clientdata, module, "Improper JPEG sampling factors"); return (0); } } } else { /* PC 2's single component should have sampling factors 1,1 */ if (sp->cinfo.d.comp_info[0].h_samp_factor != 1 || sp->cinfo.d.comp_info[0].v_samp_factor != 1) { TIFFErrorExt(tif->tif_clientdata, module, "Improper JPEG sampling factors"); return (0); } } downsampled_output = FALSE; if (td->td_planarconfig == PLANARCONFIG_CONTIG && sp->photometric == PHOTOMETRIC_YCBCR && sp->jpegcolormode == JPEGCOLORMODE_RGB) { /* Convert YCbCr to RGB */ sp->cinfo.d.jpeg_color_space = JCS_YCbCr; sp->cinfo.d.out_color_space = JCS_RGB; } else { /* Suppress colorspace handling */ sp->cinfo.d.jpeg_color_space = JCS_UNKNOWN; sp->cinfo.d.out_color_space = JCS_UNKNOWN; if (td->td_planarconfig == PLANARCONFIG_CONTIG && (sp->h_sampling != 1 || sp->v_sampling != 1)) downsampled_output = TRUE; /* XXX what about up-sampling? */ } if (downsampled_output) { /* Need to use raw-data interface to libjpeg */ sp->cinfo.d.raw_data_out = TRUE; #if JPEG_LIB_VERSION >= 70 sp->cinfo.d.do_fancy_upsampling = FALSE; #endif /* JPEG_LIB_VERSION >= 70 */ tif->tif_decoderow = DecodeRowError; tif->tif_decodestrip = JPEGDecodeRaw; tif->tif_decodetile = JPEGDecodeRaw; } else { /* Use normal interface to libjpeg */ sp->cinfo.d.raw_data_out = FALSE; tif->tif_decoderow = JPEGDecode; tif->tif_decodestrip = JPEGDecode; tif->tif_decodetile = JPEGDecode; } /* Start JPEG decompressor */ if (!TIFFjpeg_start_decompress(sp)) return (0); /* Allocate downsampled-data buffers if needed */ if (downsampled_output) { if (!alloc_downsampled_buffers(tif, sp->cinfo.d.comp_info, sp->cinfo.d.num_components)) return (0); sp->scancount = DCTSIZE; /* mark buffer empty */ } return (1); } /* * Decode a chunk of pixels. * "Standard" case: returned data is not downsampled. 
*/ #if !JPEG_LIB_MK1_OR_12BIT static int JPEGDecode(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s) { JPEGState *sp = JState(tif); tmsize_t nrows; (void) s; /* ** Update available information, buffer may have been refilled ** between decode requests */ sp->src.next_input_byte = (const JOCTET*) tif->tif_rawcp; sp->src.bytes_in_buffer = (size_t) tif->tif_rawcc; if( sp->bytesperline == 0 ) return 0; nrows = cc / sp->bytesperline; if (cc % sp->bytesperline) TIFFWarningExt(tif->tif_clientdata, tif->tif_name, "fractional scanline not read"); if( nrows > (tmsize_t) sp->cinfo.d.image_height ) nrows = sp->cinfo.d.image_height; /* data is expected to be read in multiples of a scanline */ if (nrows) { do { /* * In the libjpeg6b-9a 8bit case. We read directly into * the TIFF buffer. */ JSAMPROW bufptr = (JSAMPROW)buf; if (TIFFjpeg_read_scanlines(sp, &bufptr, 1) != 1) return (0); ++tif->tif_row; buf += sp->bytesperline; cc -= sp->bytesperline; } while (--nrows > 0); } /* Update information on consumed data */ tif->tif_rawcp = (uint8*) sp->src.next_input_byte; tif->tif_rawcc = sp->src.bytes_in_buffer; /* Close down the decompressor if we've finished the strip or tile. */ return sp->cinfo.d.output_scanline < sp->cinfo.d.output_height || TIFFjpeg_finish_decompress(sp); } #endif /* !JPEG_LIB_MK1_OR_12BIT */ #if JPEG_LIB_MK1_OR_12BIT /*ARGSUSED*/ static int JPEGDecode(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s) { JPEGState *sp = JState(tif); tmsize_t nrows; (void) s; /* ** Update available information, buffer may have been refilled ** between decode requests */ sp->src.next_input_byte = (const JOCTET*) tif->tif_rawcp; sp->src.bytes_in_buffer = (size_t) tif->tif_rawcc; if( sp->bytesperline == 0 ) return 0; nrows = cc / sp->bytesperline; if (cc % sp->bytesperline) TIFFWarningExt(tif->tif_clientdata, tif->tif_name, "fractional scanline not read"); if( nrows > (tmsize_t) sp->cinfo.d.image_height ) nrows = sp->cinfo.d.image_height; /* data is expected to be read in multiples of a scanline */ if (nrows) { JSAMPROW line_work_buf = NULL; /* * For 6B, only use temporary buffer for 12 bit imagery. * For Mk1 always use it. */ if( sp->cinfo.d.data_precision == 12 ) { line_work_buf = (JSAMPROW) _TIFFmalloc(sizeof(short) * sp->cinfo.d.output_width * sp->cinfo.d.num_components ); } do { if( line_work_buf != NULL ) { /* * In the MK1 case, we always read into a 16bit * buffer, and then pack down to 12bit or 8bit. * In 6B case we only read into 16 bit buffer * for 12bit data, which we need to repack. 
*/ if (TIFFjpeg_read_scanlines(sp, &line_work_buf, 1) != 1) return (0); if( sp->cinfo.d.data_precision == 12 ) { int value_pairs = (sp->cinfo.d.output_width * sp->cinfo.d.num_components) / 2; int iPair; for( iPair = 0; iPair < value_pairs; iPair++ ) { unsigned char *out_ptr = ((unsigned char *) buf) + iPair * 3; JSAMPLE *in_ptr = line_work_buf + iPair * 2; out_ptr[0] = (unsigned char)((in_ptr[0] & 0xff0) >> 4); out_ptr[1] = (unsigned char)(((in_ptr[0] & 0xf) << 4) | ((in_ptr[1] & 0xf00) >> 8)); out_ptr[2] = (unsigned char)(((in_ptr[1] & 0xff) >> 0)); } } else if( sp->cinfo.d.data_precision == 8 ) { int value_count = (sp->cinfo.d.output_width * sp->cinfo.d.num_components); int iValue; for( iValue = 0; iValue < value_count; iValue++ ) { ((unsigned char *) buf)[iValue] = line_work_buf[iValue] & 0xff; } } } ++tif->tif_row; buf += sp->bytesperline; cc -= sp->bytesperline; } while (--nrows > 0); if( line_work_buf != NULL ) _TIFFfree( line_work_buf ); } /* Update information on consumed data */ tif->tif_rawcp = (uint8*) sp->src.next_input_byte; tif->tif_rawcc = sp->src.bytes_in_buffer; /* Close down the decompressor if we've finished the strip or tile. */ return sp->cinfo.d.output_scanline < sp->cinfo.d.output_height || TIFFjpeg_finish_decompress(sp); } #endif /* JPEG_LIB_MK1_OR_12BIT */ /*ARGSUSED*/ static int DecodeRowError(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s) { (void) buf; (void) cc; (void) s; TIFFErrorExt(tif->tif_clientdata, "TIFFReadScanline", "scanline oriented access is not supported for downsampled JPEG compressed images, consider enabling TIFF_JPEGCOLORMODE as JPEGCOLORMODE_RGB." ); return 0; } /* * Decode a chunk of pixels. * Returned data is downsampled per sampling factors. */ /*ARGSUSED*/ static int JPEGDecodeRaw(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s) { JPEGState *sp = JState(tif); tmsize_t nrows; TIFFDirectory *td = &tif->tif_dir; (void) s; nrows = sp->cinfo.d.image_height; /* For last strip, limit number of rows to its truncated height */ /* even if the codestream height is larger (which is not compliant, */ /* but that we tolerate) */ if( (uint32)nrows > td->td_imagelength - tif->tif_row && !isTiled(tif) ) nrows = td->td_imagelength - tif->tif_row; /* data is expected to be read in multiples of a scanline */ if ( nrows != 0 ) { /* Cb,Cr both have sampling factors 1, so this is correct */ JDIMENSION clumps_per_line = sp->cinfo.d.comp_info[1].downsampled_width; int samples_per_clump = sp->samplesperclump; #if defined(JPEG_LIB_MK1_OR_12BIT) unsigned short* tmpbuf = _TIFFmalloc(sizeof(unsigned short) * sp->cinfo.d.output_width * sp->cinfo.d.num_components); if(tmpbuf==NULL) { TIFFErrorExt(tif->tif_clientdata, "JPEGDecodeRaw", "Out of memory"); return 0; } #endif do { jpeg_component_info *compptr; int ci, clumpoffset; if( cc < sp->bytesperline ) { TIFFErrorExt(tif->tif_clientdata, "JPEGDecodeRaw", "application buffer not large enough for all data."); return 0; } /* Reload downsampled-data buffer if needed */ if (sp->scancount >= DCTSIZE) { int n = sp->cinfo.d.max_v_samp_factor * DCTSIZE; if (TIFFjpeg_read_raw_data(sp, sp->ds_buffer, n) != n) return (0); sp->scancount = 0; } /* * Fastest way to unseparate data is to make one pass * over the scanline for each row of each component. 
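 * With the usual 2x2 YCbCr subsampling each clump written to the
 * application buffer is laid out as Y00 Y01 Y10 Y11 Cb Cr, i.e. every
 * component contributes h_samp_factor * v_samp_factor samples per
 * clump.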
*/ clumpoffset = 0; /* first sample in clump */ for (ci = 0, compptr = sp->cinfo.d.comp_info; ci < sp->cinfo.d.num_components; ci++, compptr++) { int hsamp = compptr->h_samp_factor; int vsamp = compptr->v_samp_factor; int ypos; for (ypos = 0; ypos < vsamp; ypos++) { JSAMPLE *inptr = sp->ds_buffer[ci][sp->scancount*vsamp + ypos]; JDIMENSION nclump; #if defined(JPEG_LIB_MK1_OR_12BIT) JSAMPLE *outptr = (JSAMPLE*)tmpbuf + clumpoffset; #else JSAMPLE *outptr = (JSAMPLE*)buf + clumpoffset; if (cc < (tmsize_t)(clumpoffset + (tmsize_t)samples_per_clump*(clumps_per_line-1) + hsamp)) { TIFFErrorExt(tif->tif_clientdata, "JPEGDecodeRaw", "application buffer not large enough for all data, possible subsampling issue"); return 0; } #endif if (hsamp == 1) { /* fast path for at least Cb and Cr */ for (nclump = clumps_per_line; nclump-- > 0; ) { outptr[0] = *inptr++; outptr += samples_per_clump; } } else { int xpos; /* general case */ for (nclump = clumps_per_line; nclump-- > 0; ) { for (xpos = 0; xpos < hsamp; xpos++) outptr[xpos] = *inptr++; outptr += samples_per_clump; } } clumpoffset += hsamp; } } #if defined(JPEG_LIB_MK1_OR_12BIT) { if (sp->cinfo.d.data_precision == 8) { int i=0; int len = sp->cinfo.d.output_width * sp->cinfo.d.num_components; for (i=0; i<len; i++) { ((unsigned char*)buf)[i] = tmpbuf[i] & 0xff; } } else { /* 12-bit */ int value_pairs = (sp->cinfo.d.output_width * sp->cinfo.d.num_components) / 2; int iPair; for( iPair = 0; iPair < value_pairs; iPair++ ) { unsigned char *out_ptr = ((unsigned char *) buf) + iPair * 3; JSAMPLE *in_ptr = (JSAMPLE *) (tmpbuf + iPair * 2); out_ptr[0] = (unsigned char)((in_ptr[0] & 0xff0) >> 4); out_ptr[1] = (unsigned char)(((in_ptr[0] & 0xf) << 4) | ((in_ptr[1] & 0xf00) >> 8)); out_ptr[2] = (unsigned char)(((in_ptr[1] & 0xff) >> 0)); } } } #endif sp->scancount ++; tif->tif_row += sp->v_sampling; buf += sp->bytesperline; cc -= sp->bytesperline; nrows -= sp->v_sampling; } while (nrows > 0); #if defined(JPEG_LIB_MK1_OR_12BIT) _TIFFfree(tmpbuf); #endif } /* Close down the decompressor if done. */ return sp->cinfo.d.output_scanline < sp->cinfo.d.output_height || TIFFjpeg_finish_decompress(sp); } /* * JPEG Encoding. 
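 *
 * The encode side mirrors decoding: JPEGSetupEncode() fixes the
 * per-file parameters and may emit a JPEGTables field,
 * JPEGPreEncode() configures libjpeg for each strip or tile,
 * JPEGEncode()/JPEGEncodeRaw() feed scanlines or raw downsampled
 * clumps, and JPEGPostEncode() pads and flushes any partial
 * bufferload before finishing the codestream.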
*/ static void unsuppress_quant_table (JPEGState* sp, int tblno) { JQUANT_TBL* qtbl; if ((qtbl = sp->cinfo.c.quant_tbl_ptrs[tblno]) != NULL) qtbl->sent_table = FALSE; } static void suppress_quant_table (JPEGState* sp, int tblno) { JQUANT_TBL* qtbl; if ((qtbl = sp->cinfo.c.quant_tbl_ptrs[tblno]) != NULL) qtbl->sent_table = TRUE; } static void unsuppress_huff_table (JPEGState* sp, int tblno) { JHUFF_TBL* htbl; if ((htbl = sp->cinfo.c.dc_huff_tbl_ptrs[tblno]) != NULL) htbl->sent_table = FALSE; if ((htbl = sp->cinfo.c.ac_huff_tbl_ptrs[tblno]) != NULL) htbl->sent_table = FALSE; } static void suppress_huff_table (JPEGState* sp, int tblno) { JHUFF_TBL* htbl; if ((htbl = sp->cinfo.c.dc_huff_tbl_ptrs[tblno]) != NULL) htbl->sent_table = TRUE; if ((htbl = sp->cinfo.c.ac_huff_tbl_ptrs[tblno]) != NULL) htbl->sent_table = TRUE; } static int prepare_JPEGTables(TIFF* tif) { JPEGState* sp = JState(tif); /* Initialize quant tables for current quality setting */ if (!TIFFjpeg_set_quality(sp, sp->jpegquality, FALSE)) return (0); /* Mark only the tables we want for output */ /* NB: chrominance tables are currently used only with YCbCr */ if (!TIFFjpeg_suppress_tables(sp, TRUE)) return (0); if (sp->jpegtablesmode & JPEGTABLESMODE_QUANT) { unsuppress_quant_table(sp, 0); if (sp->photometric == PHOTOMETRIC_YCBCR) unsuppress_quant_table(sp, 1); } if (sp->jpegtablesmode & JPEGTABLESMODE_HUFF) { unsuppress_huff_table(sp, 0); if (sp->photometric == PHOTOMETRIC_YCBCR) unsuppress_huff_table(sp, 1); } /* Direct libjpeg output into jpegtables */ if (!TIFFjpeg_tables_dest(sp, tif)) return (0); /* Emit tables-only datastream */ if (!TIFFjpeg_write_tables(sp)) return (0); return (1); } static int JPEGSetupEncode(TIFF* tif) { JPEGState* sp = JState(tif); TIFFDirectory *td = &tif->tif_dir; static const char module[] = "JPEGSetupEncode"; #if defined(JPEG_DUAL_MODE_8_12) && !defined(TIFFInitJPEG) if( tif->tif_dir.td_bitspersample == 12 ) return TIFFReInitJPEG_12( tif, COMPRESSION_JPEG, 1 ); #endif JPEGInitializeLibJPEG( tif, FALSE ); assert(sp != NULL); assert(!sp->cinfo.comm.is_decompressor); sp->photometric = td->td_photometric; /* * Initialize all JPEG parameters to default values. * Note that jpeg_set_defaults needs legal values for * in_color_space and input_components. 
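 * The in_color_space chosen below reflects what the caller will hand
 * us: YCbCr images may arrive either as RGB (JPEGCOLORMODE_RGB, with
 * libjpeg doing the conversion) or as pre-converted YCbCr samples;
 * greyscale, RGB and CMYK map directly, and anything else is passed
 * through as JCS_UNKNOWN.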
*/ if (td->td_planarconfig == PLANARCONFIG_CONTIG) { sp->cinfo.c.input_components = td->td_samplesperpixel; if (sp->photometric == PHOTOMETRIC_YCBCR) { if (sp->jpegcolormode == JPEGCOLORMODE_RGB) { sp->cinfo.c.in_color_space = JCS_RGB; } else { sp->cinfo.c.in_color_space = JCS_YCbCr; } } else { if ((td->td_photometric == PHOTOMETRIC_MINISWHITE || td->td_photometric == PHOTOMETRIC_MINISBLACK) && td->td_samplesperpixel == 1) sp->cinfo.c.in_color_space = JCS_GRAYSCALE; else if (td->td_photometric == PHOTOMETRIC_RGB && td->td_samplesperpixel == 3) sp->cinfo.c.in_color_space = JCS_RGB; else if (td->td_photometric == PHOTOMETRIC_SEPARATED && td->td_samplesperpixel == 4) sp->cinfo.c.in_color_space = JCS_CMYK; else sp->cinfo.c.in_color_space = JCS_UNKNOWN; } } else { sp->cinfo.c.input_components = 1; sp->cinfo.c.in_color_space = JCS_UNKNOWN; } if (!TIFFjpeg_set_defaults(sp)) return (0); /* Set per-file parameters */ switch (sp->photometric) { case PHOTOMETRIC_YCBCR: sp->h_sampling = td->td_ycbcrsubsampling[0]; sp->v_sampling = td->td_ycbcrsubsampling[1]; if( sp->h_sampling == 0 || sp->v_sampling == 0 ) { TIFFErrorExt(tif->tif_clientdata, module, "Invalid horizontal/vertical sampling value"); return (0); } if( td->td_bitspersample > 16 ) { TIFFErrorExt(tif->tif_clientdata, module, "BitsPerSample %d not allowed for JPEG", td->td_bitspersample); return (0); } /* * A ReferenceBlackWhite field *must* be present since the * default value is inappropriate for YCbCr. Fill in the * proper value if application didn't set it. */ { float *ref; if (!TIFFGetField(tif, TIFFTAG_REFERENCEBLACKWHITE, &ref)) { float refbw[6]; long top = 1L << td->td_bitspersample; refbw[0] = 0; refbw[1] = (float)(top-1L); refbw[2] = (float)(top>>1); refbw[3] = refbw[1]; refbw[4] = refbw[2]; refbw[5] = refbw[1]; TIFFSetField(tif, TIFFTAG_REFERENCEBLACKWHITE, refbw); } } break; case PHOTOMETRIC_PALETTE: /* disallowed by Tech Note */ case PHOTOMETRIC_MASK: TIFFErrorExt(tif->tif_clientdata, module, "PhotometricInterpretation %d not allowed for JPEG", (int) sp->photometric); return (0); default: /* TIFF 6.0 forbids subsampling of all other color spaces */ sp->h_sampling = 1; sp->v_sampling = 1; break; } /* Verify miscellaneous parameters */ /* * This would need work if libtiff ever supports different * depths for different components, or if libjpeg ever supports * run-time selection of depth. Neither is imminent. 
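 * (BITS_IN_JSAMPLE is fixed when libjpeg itself is compiled, which is
 * why 12-bit data is routed through the separately built
 * JPEG_DUAL_MODE_8_12 / TIFFReInitJPEG_12() path rather than selected
 * at run time.)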
*/ #ifdef JPEG_LIB_MK1 /* BITS_IN_JSAMPLE now permits 8 and 12 --- dgilbert */ if (td->td_bitspersample != 8 && td->td_bitspersample != 12) #else if (td->td_bitspersample != BITS_IN_JSAMPLE ) #endif { TIFFErrorExt(tif->tif_clientdata, module, "BitsPerSample %d not allowed for JPEG", (int) td->td_bitspersample); return (0); } sp->cinfo.c.data_precision = td->td_bitspersample; #ifdef JPEG_LIB_MK1 sp->cinfo.c.bits_in_jsample = td->td_bitspersample; #endif if (isTiled(tif)) { if ((td->td_tilelength % (sp->v_sampling * DCTSIZE)) != 0) { TIFFErrorExt(tif->tif_clientdata, module, "JPEG tile height must be multiple of %d", sp->v_sampling * DCTSIZE); return (0); } if ((td->td_tilewidth % (sp->h_sampling * DCTSIZE)) != 0) { TIFFErrorExt(tif->tif_clientdata, module, "JPEG tile width must be multiple of %d", sp->h_sampling * DCTSIZE); return (0); } } else { if (td->td_rowsperstrip < td->td_imagelength && (td->td_rowsperstrip % (sp->v_sampling * DCTSIZE)) != 0) { TIFFErrorExt(tif->tif_clientdata, module, "RowsPerStrip must be multiple of %d for JPEG", sp->v_sampling * DCTSIZE); return (0); } } /* Create a JPEGTables field if appropriate */ if (sp->jpegtablesmode & (JPEGTABLESMODE_QUANT|JPEGTABLESMODE_HUFF)) { if( sp->jpegtables == NULL || memcmp(sp->jpegtables,"\0\0\0\0\0\0\0\0\0",8) == 0 ) { if (!prepare_JPEGTables(tif)) return (0); /* Mark the field present */ /* Can't use TIFFSetField since BEENWRITING is already set! */ tif->tif_flags |= TIFF_DIRTYDIRECT; TIFFSetFieldBit(tif, FIELD_JPEGTABLES); } } else { /* We do not support application-supplied JPEGTables, */ /* so mark the field not present */ TIFFClrFieldBit(tif, FIELD_JPEGTABLES); } /* Direct libjpeg output to libtiff's output buffer */ TIFFjpeg_data_dest(sp, tif); return (1); } /* * Set encoding state at the start of a strip or tile. */ static int JPEGPreEncode(TIFF* tif, uint16 s) { JPEGState *sp = JState(tif); TIFFDirectory *td = &tif->tif_dir; static const char module[] = "JPEGPreEncode"; uint32 segment_width, segment_height; int downsampled_input; assert(sp != NULL); if (sp->cinfo.comm.is_decompressor == 1) { tif->tif_setupencode( tif ); } assert(!sp->cinfo.comm.is_decompressor); /* * Set encoding parameters for this strip/tile. 
*/ if (isTiled(tif)) { segment_width = td->td_tilewidth; segment_height = td->td_tilelength; sp->bytesperline = TIFFTileRowSize(tif); } else { segment_width = td->td_imagewidth; segment_height = td->td_imagelength - tif->tif_row; if (segment_height > td->td_rowsperstrip) segment_height = td->td_rowsperstrip; sp->bytesperline = TIFFScanlineSize(tif); } if (td->td_planarconfig == PLANARCONFIG_SEPARATE && s > 0) { /* for PC 2, scale down the strip/tile size * to match a downsampled component */ segment_width = TIFFhowmany_32(segment_width, sp->h_sampling); segment_height = TIFFhowmany_32(segment_height, sp->v_sampling); } if (segment_width > 65535 || segment_height > 65535) { TIFFErrorExt(tif->tif_clientdata, module, "Strip/tile too large for JPEG"); return (0); } sp->cinfo.c.image_width = segment_width; sp->cinfo.c.image_height = segment_height; downsampled_input = FALSE; if (td->td_planarconfig == PLANARCONFIG_CONTIG) { sp->cinfo.c.input_components = td->td_samplesperpixel; if (sp->photometric == PHOTOMETRIC_YCBCR) { if (sp->jpegcolormode != JPEGCOLORMODE_RGB) { if (sp->h_sampling != 1 || sp->v_sampling != 1) downsampled_input = TRUE; } if (!TIFFjpeg_set_colorspace(sp, JCS_YCbCr)) return (0); /* * Set Y sampling factors; * we assume jpeg_set_colorspace() set the rest to 1 */ sp->cinfo.c.comp_info[0].h_samp_factor = sp->h_sampling; sp->cinfo.c.comp_info[0].v_samp_factor = sp->v_sampling; } else { if (!TIFFjpeg_set_colorspace(sp, sp->cinfo.c.in_color_space)) return (0); /* jpeg_set_colorspace set all sampling factors to 1 */ } } else { if (!TIFFjpeg_set_colorspace(sp, JCS_UNKNOWN)) return (0); sp->cinfo.c.comp_info[0].component_id = s; /* jpeg_set_colorspace() set sampling factors to 1 */ if (sp->photometric == PHOTOMETRIC_YCBCR && s > 0) { sp->cinfo.c.comp_info[0].quant_tbl_no = 1; sp->cinfo.c.comp_info[0].dc_tbl_no = 1; sp->cinfo.c.comp_info[0].ac_tbl_no = 1; } } /* ensure libjpeg won't write any extraneous markers */ sp->cinfo.c.write_JFIF_header = FALSE; sp->cinfo.c.write_Adobe_marker = FALSE; /* set up table handling correctly */ /* calling TIFFjpeg_set_quality() causes quantization tables to be flagged */ /* as being to be emitted, which we don't want in the JPEGTABLESMODE_QUANT */ /* mode, so we must manually suppress them. However TIFFjpeg_set_quality() */ /* should really be called when dealing with files with directories with */ /* mixed qualities. 
see http://trac.osgeo.org/gdal/ticket/3539 */ if (!TIFFjpeg_set_quality(sp, sp->jpegquality, FALSE)) return (0); if (sp->jpegtablesmode & JPEGTABLESMODE_QUANT) { suppress_quant_table(sp, 0); suppress_quant_table(sp, 1); } else { unsuppress_quant_table(sp, 0); unsuppress_quant_table(sp, 1); } if (sp->jpegtablesmode & JPEGTABLESMODE_HUFF) { /* Explicit suppression is only needed if we did not go through the */ /* prepare_JPEGTables() code path, which may be the case if updating */ /* an existing file */ suppress_huff_table(sp, 0); suppress_huff_table(sp, 1); sp->cinfo.c.optimize_coding = FALSE; } else sp->cinfo.c.optimize_coding = TRUE; if (downsampled_input) { /* Need to use raw-data interface to libjpeg */ sp->cinfo.c.raw_data_in = TRUE; tif->tif_encoderow = JPEGEncodeRaw; tif->tif_encodestrip = JPEGEncodeRaw; tif->tif_encodetile = JPEGEncodeRaw; } else { /* Use normal interface to libjpeg */ sp->cinfo.c.raw_data_in = FALSE; tif->tif_encoderow = JPEGEncode; tif->tif_encodestrip = JPEGEncode; tif->tif_encodetile = JPEGEncode; } /* Start JPEG compressor */ if (!TIFFjpeg_start_compress(sp, FALSE)) return (0); /* Allocate downsampled-data buffers if needed */ if (downsampled_input) { if (!alloc_downsampled_buffers(tif, sp->cinfo.c.comp_info, sp->cinfo.c.num_components)) return (0); } sp->scancount = 0; return (1); } /* * Encode a chunk of pixels. * "Standard" case: incoming data is not downsampled. */ static int JPEGEncode(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s) { JPEGState *sp = JState(tif); tmsize_t nrows; JSAMPROW bufptr[1]; short *line16 = NULL; int line16_count = 0; (void) s; assert(sp != NULL); /* data is expected to be supplied in multiples of a scanline */ nrows = cc / sp->bytesperline; if (cc % sp->bytesperline) TIFFWarningExt(tif->tif_clientdata, tif->tif_name, "fractional scanline discarded"); /* The last strip will be limited to image size */ if( !isTiled(tif) && tif->tif_row+nrows > tif->tif_dir.td_imagelength ) nrows = tif->tif_dir.td_imagelength - tif->tif_row; if( sp->cinfo.c.data_precision == 12 ) { line16_count = (int)((sp->bytesperline * 2) / 3); line16 = (short *) _TIFFmalloc(sizeof(short) * line16_count); if (!line16) { TIFFErrorExt(tif->tif_clientdata, "JPEGEncode", "Failed to allocate memory"); return 0; } } while (nrows-- > 0) { if( sp->cinfo.c.data_precision == 12 ) { int value_pairs = line16_count / 2; int iPair; bufptr[0] = (JSAMPROW) line16; for( iPair = 0; iPair < value_pairs; iPair++ ) { unsigned char *in_ptr = ((unsigned char *) buf) + iPair * 3; JSAMPLE *out_ptr = (JSAMPLE *) (line16 + iPair * 2); out_ptr[0] = (in_ptr[0] << 4) | ((in_ptr[1] & 0xf0) >> 4); out_ptr[1] = ((in_ptr[1] & 0x0f) << 8) | in_ptr[2]; } } else { bufptr[0] = (JSAMPROW) buf; } if (TIFFjpeg_write_scanlines(sp, bufptr, 1) != 1) return (0); if (nrows > 0) tif->tif_row++; buf += sp->bytesperline; } if( sp->cinfo.c.data_precision == 12 ) { _TIFFfree( line16 ); } return (1); } /* * Encode a chunk of pixels. * Incoming data is expected to be downsampled per sampling factors. 
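 * A clumpline covers v_sampling image rows: each clump spans
 * h_sampling x v_sampling pixels and carries h_sampling*v_sampling Y
 * samples plus one Cb and one Cr, so with 8-bit data and the default
 * 2x2 subsampling a clumpline is roughly 6 bytes per 2x2 pixel block,
 * which is what the bytesperclumpline computation below works out to.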
*/ static int JPEGEncodeRaw(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s) { JPEGState *sp = JState(tif); JSAMPLE* inptr; JSAMPLE* outptr; tmsize_t nrows; JDIMENSION clumps_per_line, nclump; int clumpoffset, ci, xpos, ypos; jpeg_component_info* compptr; int samples_per_clump = sp->samplesperclump; tmsize_t bytesperclumpline; (void) s; assert(sp != NULL); /* data is expected to be supplied in multiples of a clumpline */ /* a clumpline is equivalent to v_sampling desubsampled scanlines */ /* TODO: the following calculation of bytesperclumpline, should substitute calculation of sp->bytesperline, except that it is per v_sampling lines */ bytesperclumpline = ((((tmsize_t)sp->cinfo.c.image_width+sp->h_sampling-1)/sp->h_sampling) *((tmsize_t)sp->h_sampling*sp->v_sampling+2)*sp->cinfo.c.data_precision+7) /8; nrows = ( cc / bytesperclumpline ) * sp->v_sampling; if (cc % bytesperclumpline) TIFFWarningExt(tif->tif_clientdata, tif->tif_name, "fractional scanline discarded"); /* Cb,Cr both have sampling factors 1, so this is correct */ clumps_per_line = sp->cinfo.c.comp_info[1].downsampled_width; while (nrows > 0) { /* * Fastest way to separate the data is to make one pass * over the scanline for each row of each component. */ clumpoffset = 0; /* first sample in clump */ for (ci = 0, compptr = sp->cinfo.c.comp_info; ci < sp->cinfo.c.num_components; ci++, compptr++) { int hsamp = compptr->h_samp_factor; int vsamp = compptr->v_samp_factor; int padding = (int) (compptr->width_in_blocks * DCTSIZE - clumps_per_line * hsamp); for (ypos = 0; ypos < vsamp; ypos++) { inptr = ((JSAMPLE*) buf) + clumpoffset; outptr = sp->ds_buffer[ci][sp->scancount*vsamp + ypos]; if (hsamp == 1) { /* fast path for at least Cb and Cr */ for (nclump = clumps_per_line; nclump-- > 0; ) { *outptr++ = inptr[0]; inptr += samples_per_clump; } } else { /* general case */ for (nclump = clumps_per_line; nclump-- > 0; ) { for (xpos = 0; xpos < hsamp; xpos++) *outptr++ = inptr[xpos]; inptr += samples_per_clump; } } /* pad each scanline as needed */ for (xpos = 0; xpos < padding; xpos++) { *outptr = outptr[-1]; outptr++; } clumpoffset += hsamp; } } sp->scancount++; if (sp->scancount >= DCTSIZE) { int n = sp->cinfo.c.max_v_samp_factor * DCTSIZE; if (TIFFjpeg_write_raw_data(sp, sp->ds_buffer, n) != n) return (0); sp->scancount = 0; } tif->tif_row += sp->v_sampling; buf += bytesperclumpline; nrows -= sp->v_sampling; } return (1); } /* * Finish up at the end of a strip or tile. */ static int JPEGPostEncode(TIFF* tif) { JPEGState *sp = JState(tif); if (sp->scancount > 0) { /* * Need to emit a partial bufferload of downsampled data. * Pad the data vertically. 
*/ int ci, ypos, n; jpeg_component_info* compptr; for (ci = 0, compptr = sp->cinfo.c.comp_info; ci < sp->cinfo.c.num_components; ci++, compptr++) { int vsamp = compptr->v_samp_factor; tmsize_t row_width = compptr->width_in_blocks * DCTSIZE * sizeof(JSAMPLE); for (ypos = sp->scancount * vsamp; ypos < DCTSIZE * vsamp; ypos++) { _TIFFmemcpy((void*)sp->ds_buffer[ci][ypos], (void*)sp->ds_buffer[ci][ypos-1], row_width); } } n = sp->cinfo.c.max_v_samp_factor * DCTSIZE; if (TIFFjpeg_write_raw_data(sp, sp->ds_buffer, n) != n) return (0); } return (TIFFjpeg_finish_compress(JState(tif))); } static void JPEGCleanup(TIFF* tif) { JPEGState *sp = JState(tif); assert(sp != 0); tif->tif_tagmethods.vgetfield = sp->vgetparent; tif->tif_tagmethods.vsetfield = sp->vsetparent; tif->tif_tagmethods.printdir = sp->printdir; if( sp->cinfo_initialized ) TIFFjpeg_destroy(sp); /* release libjpeg resources */ if (sp->jpegtables) /* tag value */ _TIFFfree(sp->jpegtables); _TIFFfree(tif->tif_data); /* release local state */ tif->tif_data = NULL; _TIFFSetDefaultCompressionState(tif); } static void JPEGResetUpsampled( TIFF* tif ) { JPEGState* sp = JState(tif); TIFFDirectory* td = &tif->tif_dir; /* * Mark whether returned data is up-sampled or not so TIFFStripSize * and TIFFTileSize return values that reflect the true amount of * data. */ tif->tif_flags &= ~TIFF_UPSAMPLED; if (td->td_planarconfig == PLANARCONFIG_CONTIG) { if (td->td_photometric == PHOTOMETRIC_YCBCR && sp->jpegcolormode == JPEGCOLORMODE_RGB) { tif->tif_flags |= TIFF_UPSAMPLED; } else { #ifdef notdef if (td->td_ycbcrsubsampling[0] != 1 || td->td_ycbcrsubsampling[1] != 1) ; /* XXX what about up-sampling? */ #endif } } /* * Must recalculate cached tile size in case sampling state changed. * Should we really be doing this now if image size isn't set? */ if( tif->tif_tilesize > 0 ) tif->tif_tilesize = isTiled(tif) ? TIFFTileSize(tif) : (tmsize_t)(-1); if( tif->tif_scanlinesize > 0 ) tif->tif_scanlinesize = TIFFScanlineSize(tif); } static int JPEGVSetField(TIFF* tif, uint32 tag, va_list ap) { JPEGState* sp = JState(tif); const TIFFField* fip; uint32 v32; assert(sp != NULL); switch (tag) { case TIFFTAG_JPEGTABLES: v32 = (uint32) va_arg(ap, uint32); if (v32 == 0) { /* XXX */ return (0); } _TIFFsetByteArray(&sp->jpegtables, va_arg(ap, void*), v32); sp->jpegtables_length = v32; TIFFSetFieldBit(tif, FIELD_JPEGTABLES); break; case TIFFTAG_JPEGQUALITY: sp->jpegquality = (int) va_arg(ap, int); return (1); /* pseudo tag */ case TIFFTAG_JPEGCOLORMODE: sp->jpegcolormode = (int) va_arg(ap, int); JPEGResetUpsampled( tif ); return (1); /* pseudo tag */ case TIFFTAG_PHOTOMETRIC: { int ret_value = (*sp->vsetparent)(tif, tag, ap); JPEGResetUpsampled( tif ); return ret_value; } case TIFFTAG_JPEGTABLESMODE: sp->jpegtablesmode = (int) va_arg(ap, int); return (1); /* pseudo tag */ case TIFFTAG_YCBCRSUBSAMPLING: /* mark the fact that we have a real ycbcrsubsampling! */ sp->ycbcrsampling_fetched = 1; /* should we be recomputing upsampling info here? 
*/ return (*sp->vsetparent)(tif, tag, ap); default: return (*sp->vsetparent)(tif, tag, ap); } if ((fip = TIFFFieldWithTag(tif, tag)) != NULL) { TIFFSetFieldBit(tif, fip->field_bit); } else { return (0); } tif->tif_flags |= TIFF_DIRTYDIRECT; return (1); } static int JPEGVGetField(TIFF* tif, uint32 tag, va_list ap) { JPEGState* sp = JState(tif); assert(sp != NULL); switch (tag) { case TIFFTAG_JPEGTABLES: *va_arg(ap, uint32*) = sp->jpegtables_length; *va_arg(ap, const void**) = sp->jpegtables; break; case TIFFTAG_JPEGQUALITY: *va_arg(ap, int*) = sp->jpegquality; break; case TIFFTAG_JPEGCOLORMODE: *va_arg(ap, int*) = sp->jpegcolormode; break; case TIFFTAG_JPEGTABLESMODE: *va_arg(ap, int*) = sp->jpegtablesmode; break; default: return (*sp->vgetparent)(tif, tag, ap); } return (1); } static void JPEGPrintDir(TIFF* tif, FILE* fd, long flags) { JPEGState* sp = JState(tif); assert(sp != NULL); (void) flags; if( sp != NULL ) { if (TIFFFieldSet(tif,FIELD_JPEGTABLES)) fprintf(fd, " JPEG Tables: (%lu bytes)\n", (unsigned long) sp->jpegtables_length); if (sp->printdir) (*sp->printdir)(tif, fd, flags); } } static uint32 JPEGDefaultStripSize(TIFF* tif, uint32 s) { JPEGState* sp = JState(tif); TIFFDirectory *td = &tif->tif_dir; s = (*sp->defsparent)(tif, s); if (s < td->td_imagelength) s = TIFFroundup_32(s, td->td_ycbcrsubsampling[1] * DCTSIZE); return (s); } static void JPEGDefaultTileSize(TIFF* tif, uint32* tw, uint32* th) { JPEGState* sp = JState(tif); TIFFDirectory *td = &tif->tif_dir; (*sp->deftparent)(tif, tw, th); *tw = TIFFroundup_32(*tw, td->td_ycbcrsubsampling[0] * DCTSIZE); *th = TIFFroundup_32(*th, td->td_ycbcrsubsampling[1] * DCTSIZE); } /* * The JPEG library initialized used to be done in TIFFInitJPEG(), but * now that we allow a TIFF file to be opened in update mode it is necessary * to have some way of deciding whether compression or decompression is * desired other than looking at tif->tif_mode. We accomplish this by * examining {TILE/STRIP}BYTECOUNTS to see if there is a non-zero entry. * If so, we assume decompression is desired. * * This is tricky, because TIFFInitJPEG() is called while the directory is * being read, and generally speaking the BYTECOUNTS tag won't have been read * at that point. So we try to defer jpeg library initialization till we * do have that tag ... basically any access that might require the compressor * or decompressor that occurs after the reading of the directory. * * In an ideal world compressors or decompressors would be setup * at the point where a single tile or strip was accessed (for read or write) * so that stuff like update of missing tiles, or replacement of tiles could * be done. However, we aren't trying to crack that nut just yet ... * * NFW, Feb 3rd, 2003. */ static int JPEGInitializeLibJPEG( TIFF * tif, int decompress ) { JPEGState* sp = JState(tif); if(sp->cinfo_initialized) { if( !decompress && sp->cinfo.comm.is_decompressor ) TIFFjpeg_destroy( sp ); else if( decompress && !sp->cinfo.comm.is_decompressor ) TIFFjpeg_destroy( sp ); else return 1; sp->cinfo_initialized = 0; } /* * Initialize libjpeg. */ if ( decompress ) { if (!TIFFjpeg_create_decompress(sp)) return (0); } else { if (!TIFFjpeg_create_compress(sp)) return (0); #ifndef TIFF_JPEG_MAX_MEMORY_TO_USE #define TIFF_JPEG_MAX_MEMORY_TO_USE (10 * 1024 * 1024) #endif /* libjpeg turbo 1.5.2 honours max_memory_to_use, but has no backing */ /* store implementation, so better not set max_memory_to_use ourselves. 
*/ /* See https://github.com/libjpeg-turbo/libjpeg-turbo/issues/162 */ if( sp->cinfo.c.mem->max_memory_to_use > 0 ) { /* This is to address bug related in ticket GDAL #1795. */ if (getenv("JPEGMEM") == NULL) { /* Increase the max memory usable. This helps when creating files */ /* with "big" tile, without using libjpeg temporary files. */ /* For example a 512x512 tile with 3 bands */ /* requires 1.5 MB which is above libjpeg 1MB default */ if( sp->cinfo.c.mem->max_memory_to_use < TIFF_JPEG_MAX_MEMORY_TO_USE ) sp->cinfo.c.mem->max_memory_to_use = TIFF_JPEG_MAX_MEMORY_TO_USE; } } } sp->cinfo_initialized = TRUE; return 1; } int TIFFInitJPEG(TIFF* tif, int scheme) { JPEGState* sp; assert(scheme == COMPRESSION_JPEG); /* * Merge codec-specific tag information. */ if (!_TIFFMergeFields(tif, jpegFields, TIFFArrayCount(jpegFields))) { TIFFErrorExt(tif->tif_clientdata, "TIFFInitJPEG", "Merging JPEG codec-specific tags failed"); return 0; } /* * Allocate state block so tag methods have storage to record values. */ tif->tif_data = (uint8*) _TIFFmalloc(sizeof (JPEGState)); if (tif->tif_data == NULL) { TIFFErrorExt(tif->tif_clientdata, "TIFFInitJPEG", "No space for JPEG state block"); return 0; } _TIFFmemset(tif->tif_data, 0, sizeof(JPEGState)); sp = JState(tif); sp->tif = tif; /* back link */ /* * Override parent get/set field methods. */ sp->vgetparent = tif->tif_tagmethods.vgetfield; tif->tif_tagmethods.vgetfield = JPEGVGetField; /* hook for codec tags */ sp->vsetparent = tif->tif_tagmethods.vsetfield; tif->tif_tagmethods.vsetfield = JPEGVSetField; /* hook for codec tags */ sp->printdir = tif->tif_tagmethods.printdir; tif->tif_tagmethods.printdir = JPEGPrintDir; /* hook for codec tags */ /* Default values for codec-specific fields */ sp->jpegtables = NULL; sp->jpegtables_length = 0; sp->jpegquality = 75; /* Default IJG quality */ sp->jpegcolormode = JPEGCOLORMODE_RAW; sp->jpegtablesmode = JPEGTABLESMODE_QUANT | JPEGTABLESMODE_HUFF; sp->ycbcrsampling_fetched = 0; /* * Install codec methods. */ tif->tif_fixuptags = JPEGFixupTags; tif->tif_setupdecode = JPEGSetupDecode; tif->tif_predecode = JPEGPreDecode; tif->tif_decoderow = JPEGDecode; tif->tif_decodestrip = JPEGDecode; tif->tif_decodetile = JPEGDecode; tif->tif_setupencode = JPEGSetupEncode; tif->tif_preencode = JPEGPreEncode; tif->tif_postencode = JPEGPostEncode; tif->tif_encoderow = JPEGEncode; tif->tif_encodestrip = JPEGEncode; tif->tif_encodetile = JPEGEncode; tif->tif_cleanup = JPEGCleanup; sp->defsparent = tif->tif_defstripsize; tif->tif_defstripsize = JPEGDefaultStripSize; sp->deftparent = tif->tif_deftilesize; tif->tif_deftilesize = JPEGDefaultTileSize; tif->tif_flags |= TIFF_NOBITREV; /* no bit reversal, please */ sp->cinfo_initialized = FALSE; /* ** Create a JPEGTables field if no directory has yet been created. ** We do this just to ensure that sufficient space is reserved for ** the JPEGTables field. It will be properly created the right ** size later. */ if( tif->tif_diroff == 0 ) { #define SIZE_OF_JPEGTABLES 2000 /* The following line assumes incorrectly that all JPEG-in-TIFF files will have a JPEGTABLES tag generated and causes null-filled JPEGTABLES tags to be written when the JPEG data is placed with TIFFWriteRawStrip. The field bit should be set, anyway, later when actual JPEGTABLES header is generated, so removing it here hopefully is harmless. 
TIFFSetFieldBit(tif, FIELD_JPEGTABLES); */ sp->jpegtables_length = SIZE_OF_JPEGTABLES; sp->jpegtables = (void *) _TIFFmalloc(sp->jpegtables_length); if (sp->jpegtables) { _TIFFmemset(sp->jpegtables, 0, SIZE_OF_JPEGTABLES); } else { TIFFErrorExt(tif->tif_clientdata, "TIFFInitJPEG", "Failed to allocate memory for JPEG tables"); return 0; } #undef SIZE_OF_JPEGTABLES } return 1; } #endif /* JPEG_SUPPORT */ /* vim: set ts=8 sts=8 sw=8 noet: */ /* * Local Variables: * mode: c * c-basic-offset: 8 * fill-column: 78 * End: */
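For orientation, the codec above is normally driven through libtiff's public API rather than called directly. Below is a minimal sketch of writing an 8-bit RGB buffer as a JPEG-in-TIFF file; the file name, image dimensions and quality value are arbitrary illustrative choices and error handling is kept to a bare minimum. The pseudo-tags TIFFTAG_JPEGQUALITY and TIFFTAG_JPEGCOLORMODE only become meaningful after COMPRESSION_JPEG is set, because they are intercepted by JPEGVSetField rather than written to the directory.

#include <stdlib.h>
#include <tiffio.h>

/* Write a width x height 8-bit RGB buffer as a JPEG-compressed TIFF.
 * Returns 0 on success, non-zero on failure. */
int write_jpeg_tiff(const char *path, const unsigned char *rgb,
                    uint32 width, uint32 height)
{
    TIFF *tif = TIFFOpen(path, "w");
    if (!tif)
        return 1;

    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 3);
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
    TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_YCBCR);

    /* COMPRESSION_JPEG routes through TIFFInitJPEG() above; the two
     * pseudo-tags below are handled by JPEGVSetField. JPEGCOLORMODE_RGB
     * means the caller supplies RGB and libjpeg performs the YCbCr
     * conversion and subsampling. */
    TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_JPEG);
    TIFFSetField(tif, TIFFTAG_JPEGQUALITY, 75);
    TIFFSetField(tif, TIFFTAG_JPEGCOLORMODE, JPEGCOLORMODE_RGB);

    /* Let the codec round the strip height to the MCU size
     * (see JPEGDefaultStripSize above). */
    TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, TIFFDefaultStripSize(tif, 0));

    for (uint32 row = 0; row < height; row++) {
        if (TIFFWriteScanline(tif, (void *)(rgb + (size_t)row * width * 3),
                              row, 0) < 0) {
            TIFFClose(tif);
            return 1;
        }
    }

    TIFFClose(tif);
    return 0;
}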
{ "pile_set_name": "StackExchange" }
Q: Does the Diablo 3 contest's Price Tag Limit of $250 include shipping? The question is in the title: does the $250 limit include shipping, i.e. is it $250 + shipping, or (prize price + shipping <= $250)? A: If the total cost (prize + shipping) is under $250, don't worry about how much shipping costs. If the total cost (prize + shipping) is over $250, please limit shipping to less than $20. There are usually ways to find the same item that will ship to you more cheaply (and quickly). For example, try to find an item that comes from the Amazon site nearest your country (e.g. amazon.co.uk if you're in the UK, or amazon.de if you're in Germany).
{ "pile_set_name": "PubMed Central" }
Pu W, Qiu J, Nassar ZD, et al. A role for caveola‐forming proteins caveolin‐1 and CAVIN1 in the pro‐invasive response of glioblastoma to osmotic and hydrostatic pressure. J Cell Mol Med. 2020;24:3724--3738. 10.1111/jcmm.15076 1. INTRODUCTION {#jcmm15076-sec-0001} =============== Solid tumours are exposed to osmotic and mechanical stresses. Both preclinical and clinical studies indicate that tumours are generally poorly perfused, and tumour interstitial fluid pressure (IFP) is increased compared to normal tissue, with both increased hydrostatic pressure and oncotic pressure. This results from a combination of factors, including inefficient lymphatic drainage of soluble proteins, leaky and dysfunctional blood vessels, increased fibroblast numbers, thicker collagen fibres and accumulation of inflammatory factor‐secreting cells.[1](#jcmm15076-bib-0001){ref-type="ref"}, [2](#jcmm15076-bib-0002){ref-type="ref"}, [3](#jcmm15076-bib-0003){ref-type="ref"} Furthermore, in the case of solid tumours growing within the brain, the space limitation that the skull creates causes further compression as the tumour grows; patients with brain tumours experience increased intracranial pressure.[4](#jcmm15076-bib-0004){ref-type="ref"} Caveolae are flask‐shaped plasma membrane subdomains that require, for assembly, maintenance and functions, proteins from two families, the membrane‐embedded caveolins and the cytoplasmic cavins. Caveolae serve important roles in signalling, membrane homeostasis and mechanosensing.[5](#jcmm15076-bib-0005){ref-type="ref"} Caveolae also contribute to surface area homeostasis, as they flatten reversibly to allow instantaneous membrane tension buffering.[6](#jcmm15076-bib-0006){ref-type="ref"} The pressure‐sensing capabilities of caveolae and the mechanotransduction capacities of caveola‐forming proteins have pathophysiological consequences on cardiovascular or muscle function.[7](#jcmm15076-bib-0007){ref-type="ref"}, [8](#jcmm15076-bib-0008){ref-type="ref"} In glioblastoma (GBM), caveolae or caveola‐forming proteins have been proposed to be important in the regulation of EGFR signalling, resistance to treatment and exosome‐mediated cell‐cell communication.[9](#jcmm15076-bib-0009){ref-type="ref"}, [10](#jcmm15076-bib-0010){ref-type="ref"}, [11](#jcmm15076-bib-0011){ref-type="ref"} Moreover, the expression of both caveolin‐1 and CAVIN1 are increased in GBM compared to normal samples and are associated with shorter patient survival time and correlated with expression of the matrix proteases uPA and gelatinases.[12](#jcmm15076-bib-0012){ref-type="ref"} Matrix proteases are key mediators of the invasiveness of GBM, which in turn contributes to the disease particularly poor prognosis. Several signalling pathways known to be activated in GBM result in increased protease expression. Conversely, matrix proteases such as gelatinases or uPA act through both degradation of the brain extracellular matrix and the activation of pro‐migratory signalling.[13](#jcmm15076-bib-0013){ref-type="ref"} We have previously demonstrated that exposure of GBM adherent cell lines or oncospheres to osmotic and hydrostatic or centrifugal pressure induces an increase in GBM invasive potential in vitro. In the present study, we investigated whether caveolae mediate the pro‐invasive response to pressure in GBM cells. 2. MATERIALS AND METHODS {#jcmm15076-sec-0002} ======================== 2.1. 
Materials {#jcmm15076-sec-0003} -------------- RPMI‐1640 medium, Opti‐MEM™ medium, geneticin, lipofectamine 3000, trypsin‐EDTA, penicillin/streptomycin, Coomassie brilliant blue R‐250 Pierce™ BCA Protein Assay Kit and real‐time PCR reagents were purchased from Life Technologies. The 40% acrylamide/bis solution was from Bio‐Rad. CultreCoat® 24‐well plates with BME‐Coated Inserts were from Bio Scientific Pty. Ltd. CultreCoat® 96 Well Medium BME Cell Invasion Assay Kit was from Bio Scientific Pty. Ltd. The caveolin‐1 primary antibody was from Cell Signaling Technology, and CAVIN1 primary antibody was from Proteintech®. ECL^TM^ Anti‐Rabbit IgG was purchased from GE Healthcare Science. Other reagents were purchased from Sigma‐Aldrich unless otherwise specified. 2.2. Cell culture {#jcmm15076-sec-0004} ----------------- Human GBM cell line U118 was cultured in RPMI medium supplemented with 10% (v/v) FBS, 100 U/mL penicillin and 100 μg/mL streptomycin. U251 cells were transfected with plasmids encoding shRNA to CAV1 or control shRNA using Transpass™ according to the manufacturer\'s instructions. Stably transfected clones were isolated after selection using 250 μg/mL G418 and tested using Western blot analysis with anti‐CAV1 and anti‐CAVIN1 primary antibodies. All cell lines were incubated at 37°C with 5% CO~2~. 2.3. Generation of CAVIN1 or caveolin‐1 CRISPR knockout U251 cell lines {#jcmm15076-sec-0005} ----------------------------------------------------------------------- CAV1 or CAVIN1 clustered regularly interspaced short palindromic repeats (CRISPR) knockout U251 cell lines were generated at the Queensland Facility for Advanced Genome Editing (QFAGE), Institute for Molecular Bioscience, The University of Queensland. For each gene, three guide RNA (gRNA) targeting the coding sequence of exon1 or exon2 was designed using the free online tool <http://crispr.mit.edu/>. For CAVIN1, the sequences were gRNA1: ATCAAGTCGGACCAGGTGAA, gRNA2: GCTCACCGTATTGCTCGTGG, gRNA3: GTCAACGTGAAGACCGTGCG; for caveolin‐1, the sequences were gRNA1: ATGTTGCCCTGTTCCCGGAT, gRNA2: AGTGTACGACGCGCACACCA, gRNA3: GTTTAGGGTCGCGGTTGACC. Ribonucleoprotein (RNP) complexes were formed by mixing 20 pmol of each synthesized gRNA (crRNA + tracrRNA, IDT) with 20 pmol of spCas9 protein. Assembled RNPs were combined with 200 000 U251 cells and transfected using a Lonza Nucleofector 4D device (kit SE, program DS‐138). Forty‐eight hours post‐transfection, genomic DNA of bulk transfected cell pools was extracted and the editing efficacy was confirmed by T7E1 (T7 Endonuclease I) assay and Sanger sequencing analysis. Pooled cells transfected with the three gRNA were tested for loss of CAV1 and CAVIN1 protein expression. 2.4. siRNA‐mediated CAV1 or CAVIN1 knockdown {#jcmm15076-sec-0006} -------------------------------------------- CAV1 or CAVIN1 expression was inhibited in U251 cells using Stealth siRNAs. The CAV1 Stealth siRNAs (HSS141466, HSS141467 and HSS141468), CAVIN1 Stealth siRNAs (HSS138488, HSS138489 and HSS178652) and Stealth RNAi™ siRNA Negative Control, Med GC were purchased from Life Technologies (Life Technologies). The transfection was conducted using Lipofectamine 3000 as previous described. After the transfection, the cells were split as required for experiments. 2.5. 
Osmotic stress {#jcmm15076-sec-0007} ------------------- The different osmolality media were prepared as previously described [12](#jcmm15076-bib-0012){ref-type="ref"} and controlled using an OSMOMAT 3000 basic freezing point osmometer (Gallay) calibrated using 300 and 500 mOsmol/kg standards. The osmolalities employed to study cellular responses were selected based on cell survival at 48 hours. For U118, U87, U251 and shRNA U251 cells, the highest non‐toxic osmolality was 440 mOsmol/kg. For CRISPR‐Cas9 cells, the highest hyperosmolality was 360 mOsmol/kg. 1.0 × 10^6^ cells were seeded in 12‐well plates and incubated at 37°C with 5% CO~2~ for 24 hours. After 24 hours, cells were rinsed with serum‐free medium twice and 1 mL of serum‐free medium of varying osmolality was added to each well and incubated for 48 hours. The medium was collected and centrifuged at 935 *g* for 5 minutes, then stored at −80°C until analysis. 2.6. Hydrostatic pressure treatment {#jcmm15076-sec-0008} ----------------------------------- 5.0 × 10^6^ cells were seeded in a T25 flask and incubated at 37°C with 5% CO~2~ for 24 hours. Cells were rinsed twice with serum‐free medium, and 3 mL of serum‐free medium containing 25 µmol/L HEPES was added. The flask lid was fitted with a three‐way stopcock (BD Connecta™), and the pressure was increased to 30 mm Hg using a sphygmomanometer. The pressure was maintained by closing the stopcock, and the cells were incubated for 48 hours. A flask with an air‐permeable lid was used as a control. The medium was collected and centrifuged at 935 *g* for 5 minutes, then stored at −80°C until analysis. 2.7. MTT assay {#jcmm15076-sec-0009} -------------- Cell viability was tested using the 3‐\[4,5‐dimethylthiazole‐2‐yl\]‐2,5‐diphenyltetrazolium bromide (MTT) assay as previously described. The UV absorbance was read with an iMark™ microplate absorbance reader at 595 nm (Bio‐Rad Laboratories). Background absorbance was subtracted, and results were expressed as the percentage viability of control cells. 2.8. In gel zymography {#jcmm15076-sec-0010} ---------------------- Gelatinases and uPA in media conditioned by different cell lines were measured by gelatin or casein‐plasminogen zymography as previously described. The gels were scanned, and uPA, MMP‐2 and MMP‐9 were quantified by densitometry using Image J (v1.48) software. 2.9. Quantitative RT‐PCR {#jcmm15076-sec-0011} ------------------------ The mRNA expression of specific genes and epithelial‐to‐mesenchymal transition (EMT) markers was measured by real‐time reverse transcriptase‐polymerase chain reaction (real‐time RT‐PCR). Total RNA was isolated and purified using the PureLink® RNA Mini Kit (Life Technologies). The total RNA (2000 ng) was reverse transcribed using the High‐Capacity cDNA Reverse Transcription Kit (Life Technologies). Quantitative real‐time PCR was performed using TaqMan™ Fast Universal PCR Master Mix (Life Technologies) in a StepOnePlus 7500 real‐time PCR system (Applied Biosystems). The primers of target genes used for this analysis were TaqMan™ Gene Expression Assays for human PLAU (Hs01547054_m1), PLAUR (Hs00958880_m1), MMP‐2 (Hs01548727_m1), MMP‐9 (Hs00957562_m1), CAV1 (Hs00971716_m1), CAVIN1 (Hs00396859_m1), AQP1 (Hs01028916_m1), Snail‐1 (Hs00195591_m1), Snail‐2 (Hs00161904_m1), Twist (Hs01675818_s1), Vimentin (Hs00185584_m1) and N‐cadherin (Hs00983056_m1). 
Relative quantification was done by reference to 18S ribosomal RNA (18S rRNA) and analysed using the comparative critical threshold (Ct) method.[14](#jcmm15076-bib-0014){ref-type="ref"} 2.10. Electron microscopy {#jcmm15076-sec-0012} ------------------------- U251 cells were processed for transmission electron microscopy in 3 cm dishes using standard protocols.[15](#jcmm15076-bib-0015){ref-type="ref"} Images were taken at a magnification of 12 000×. The number of surface clathrin‐coated pits (CCP), surface caveolae (where a clear connection to the plasma membrane was evident) and putative caveolae (vesicular profiles \< 100 nm close to the plasma membrane but with no clear connection to the plasma membrane) per cell profile was counted in 12 cell profiles per condition, from two different areas of the culture dish. 2.11. Western blotting assay {#jcmm15076-sec-0013} ---------------------------- Equal amounts of protein from cell lysates were electrophoresed in an 11% SDS‐PAGE gel and transferred to a nitrocellulose membrane. The proteins of interest were identified using rabbit anti‐caveolin‐1 polyclonal antibody (1:1000) or rabbit anti‐CAVIN1 polyclonal antibody (1:1000) followed by secondary antibody (1:10 000), detected using SuperSignal™ West Dura Extended Duration Substrate (Life Technologies) and quantified using a ChemiDoc^TM^ Touch Imaging System (Bio‐Rad). 2.12. Cell invasion assay {#jcmm15076-sec-0014} ------------------------- The cell invasion assay was performed using either CultreCoat® 24‐well plates with BME‐Coated Inserts or CultreCoat® 96 Well Medium BME Cell Invasion assay as previously described[12](#jcmm15076-bib-0012){ref-type="ref"}; the upper and bottom chambers had the same concentration of NaCl added. 2.13. mRNA expression and survival analysis from publicly available data {#jcmm15076-sec-0015} ------------------------------------------------------------------------ AQP‐1 mRNA expression in normal brain tissues and different GBM molecular subtypes was retrieved from Project Betastasis web platform (<http://www.betastasis.com>). Data were extracted from The Cancer Genome Atlas (TCGA) consortium using Affymetrix Human Exon 1.0 ST platform. The four molecular subtypes of GBM (classical, mesenchymal, neural or proneural) were defined based on gene expression profiles [16](#jcmm15076-bib-0016){ref-type="ref"} and have significance for survival outcome or therapy response. For the survival analysis, AQP‐1 mRNA expression using U133 microarray and survival data was extracted from glioblastoma multiforme (TCGA, Firehorse Legacy), accessed through CBioPortal.[17](#jcmm15076-bib-0017){ref-type="ref"}, [18](#jcmm15076-bib-0018){ref-type="ref"} The Kaplan--Meier curves, log‐rank test and Cox multivariable regression analysis were generated using GraphPad Prism (version 8.01). 2.14. Statistical analysis {#jcmm15076-sec-0016} -------------------------- Statistical analysis was carried out using GraphPad Prism software (v. 8.01). *P*‐value of \<.05 was considered significant. Data show either replicates or independent experiments as detailed in the figure legends. All the data are shown as mean ± SEM. 3. RESULTS {#jcmm15076-sec-0017} ========== 3.1. 
Osmotic pressure increases caveola‐forming protein expression {#jcmm15076-sec-0018} ------------------------------------------------------------------ We have previously demonstrated that the expression of caveolin‐1 and CAVIN1 correlates with GBM invasiveness [12](#jcmm15076-bib-0012){ref-type="ref"} and that GBM cells respond to osmotic pressure by increased production of matrix‐degrading enzymes, EMT markers and invasion.[19](#jcmm15076-bib-0019){ref-type="ref"} We asked whether caveolae may be involved in the cellular response to pressure. We tested the mRNA expression of CAV1 and CAVIN1 in GBM cells exposed to hyperosmotic or hypo‐osmotic medium for 48 hours (Figure [1](#jcmm15076-fig-0001){ref-type="fig"}A). The osmolality was adjusted by adding sodium chloride as previously described.[19](#jcmm15076-bib-0019){ref-type="ref"} Hyperosmolality (440 mOsmol/kg) caused a slight increase (around 2‐fold) in mRNA of CAV1 and CAVIN1 in the U251 cell line. A similar trend was seen in the U87 and U118 cell lines for CAV1 mRNA, albeit non‐statistically significant, and for CAVIN1 in U118 cells. Of note, the U118 cell line spontaneously lacks expression of these two proteins essential for caveola formation.[12](#jcmm15076-bib-0012){ref-type="ref"} CAV1 and CAVIN1 protein expression was tested after incubation of the U87 and U251 cell lines in control and hyperosmolar media (Figure [1](#jcmm15076-fig-0001){ref-type="fig"}B), and a statistically significant increase of both proteins was seen in both cell lines (Figure [1](#jcmm15076-fig-0001){ref-type="fig"}C). Caveola quantification was performed on electron micrographs of U251 cells exposed to control or hyperosmolar medium for 48 hours (Figure [1](#jcmm15076-fig-0001){ref-type="fig"}D). Hyperosmolality resulted in a significant increase in the number of caveolae, whereas the number of clathrin‐coated pits was unaltered. ![Osmotic pressure increases caveola‐forming protein expression. Cells were subjected to normo‐ (285 mOsmol/kg) or hyper (440 mOsmol/kg)‐osmolar medium. (A) Effect of osmotic pressure on CAV1 and CAVIN1 mRNA expression among U251, U87 and U118 cell lines. Results are expressed as mean ± SEM of n = 3 independent experiments and analysed using unpaired one‐tailed t test. (B) Effect of osmotic pressure on CAV1 and CAVIN1 protein expression in U87 and U251 cell lines detected by immunoblotting. (C) Results of densitometric quantitation are shown as a % of control osmolality. Results are expressed as mean ± SEM of n = 6 determinations and analysed using unpaired one‐tailed t test. (D) Representative electron micrographs of U251 cells exposed to iso‐ (285 mOsmol/kg) or hyper (440 mOsmol/kg)‐osmolar medium for 48 h (E) quantification of clathrin‐coated pits (CCP) and caveolae in 12 cell profiles per condition. Results were analysed by one‐way ANOVA with Dunnett\'s multiple comparisons test or two‐way ANOVA with Sidak\'s multiple comparison test. \**P* \< .05, \*\**P* \< .01, \*\*\* *P* \< .001, \*\*\*\**P* \< .0001](JCMM-24-3724-g001){#jcmm15076-fig-0001} 3.2. 
Caveolae are required for GBM pro‐invasive response to hyperosmotic pressure {#jcmm15076-sec-0019} --------------------------------------------------------------------------------- We assessed whether caveolae are required for the pro‐invasive response to pressure by quantifying uPA, gelatinase and EMT marker expression in U251 cells in which CAV1 expression was stably down‐regulated by shRNA.[12](#jcmm15076-bib-0012){ref-type="ref"} The cells were subjected to hyperosmolarity (360 and 440 mOSmol/kg) or hypo‐osmolarity (260 mOsmol/kg) and compared to cells placed in normo‐osmolar (285 mOsmol/kg) medium. CAV1 down‐regulation did not affect the increased production of uPA in response to 360 or 440 mOsmol/kg (Figure [2](#jcmm15076-fig-0002){ref-type="fig"}A) as was apparent from casein‐plasminogen zymography results. Furthermore, no increase in MMP‐2 was seen, and MMP‐9 was undetected by gelatin zymography in either control (scramble shRNA) or CAV1 down‐regulated cells (Figure [2](#jcmm15076-fig-0002){ref-type="fig"}B). However, uPA mRNA was significantly increased in 440 mOsmol/kg medium (Figure [2](#jcmm15076-fig-0002){ref-type="fig"}C) and this response was blunted in CAV1 knocked down cells. MMP‐2 mRNA was not increased in response to pressure, and MMP‐9 mRNA was increased to the same extent in control and CAV1 knocked down cells (ie 10‐15 fold, no statistically significant difference; Figure [2](#jcmm15076-fig-0002){ref-type="fig"}C). Control (scrambled shRNA‐transfected) cells invaded \~ 50% more when placed in hyperosmotic medium compared to normo‐osmotic medium, a response which was dampened in CAV1 down‐regulated cells (Figure [3](#jcmm15076-fig-0003){ref-type="fig"}A). When analysed for mRNA expression of EMT markers, cells with down‐regulated CAV1 expression expressed significantly less Snail‐1 and Snail‐2 in response to osmotic stress compared to the scrambled shRNA control cells (Figure [3](#jcmm15076-fig-0003){ref-type="fig"}B). The morphology of the cells after 48 hours in control or hyperosmolar medium was similar in both cell lines, with an already mesenchymal, elongated appearance in control medium and increased apparent cell spreading and coverage of the plastic surface in 440 mOsmol/kg media (Figure [3](#jcmm15076-fig-0003){ref-type="fig"}C). These results indicate that caveolae may mediate some of the invasive features induced by osmotic stress in GBM cells. ![Effect of osmotic pressure on matrix protease production of U251 cells stably expressing shRNA to CAV1 (shCAV1) or control shRNA (shCont). Cells were subjected to normo‐ (285 mOsmol/kg), hyper‐ (360 or 440 mOsmol/kg) or hypo (260 mOsmol/kg)‐osmolar medium. (A) Conditioned media of U251 shCont and shCAV1 cells exposed to osmotic stress were analysed by casein‐plasminogen zymography and densitometric quantitation of the 47 and 51 kD bands corresponding to uPA was carried out. (B) Gelatin zymography and densitometric quantitation of MMP‐2 produced in the conditioned medium of U251 shCont and shCAV1 cells after 48 h of osmotic stress. (C) Effect of osmotic stress on uPA, MMP‐2 and MMP‐9 mRNA expression. Results are shown relative to shCont. All results are expressed as mean ± SEM of n = 3 independent experiments. Densitometric quantification of uPA and MMP‐2 was analysed by one‐way ANOVA with Dunnett\'s multiple comparisons test; protease mRNA expression was analysed by two‐way ANOVA with Tukey\'s multiple comparisons test. 
\**P* \< .05, \*\**P* \< .01, \*\*\**P* \< .001, \*\*\*\**P* \< .0001](JCMM-24-3724-g002){#jcmm15076-fig-0002} ![Effect of osmotic pressure on U251 shCont and shCAV1 cells in vitro invasion potential. Cells were subjected to normo‐ (285 mOsmol/kg) or hyper (440 mOsmol/kg)‐osmolar medium. (A) Cell invasion through Matrigel‐coated Transwells was determined in 285 or 440 mOSmol/kg media. Quantitation of invaded cells is shown as per cent of normo‐osmotic control. Representative micrographs are shown. (B) mRNA expression of EMT markers in U251 shCont and shCAV1 cells after 48 h of osmotic stress. (C) Morphology of U251 shCont and shCAV1 cells after 48 h in 285 or 440 mOSmol/kg media. Results are expressed as mean ± SEM of n = 3 independent experiments, \**P* \< .05, \*\*\**P* \< .001, \*\*\*\**P* \< .0001, two‐way ANOVA analysis with Tukey\'s multiple comparisons test](JCMM-24-3724-g003){#jcmm15076-fig-0003} To confirm these findings, we assessed the response to osmotic pressure of U118 cells subjected to hyperosmotic or hypo‐osmotic medium. The osmolalities employed were verified not to decrease cell viability (Figure [S1](#jcmm15076-sup-0001){ref-type="supplementary-material"}a). The caveola‐devoid U118 cells failed to increase production of uPA, MMP‐2 and/or MMP‐9 in conditioned medium (Figure [S1](#jcmm15076-sup-0001){ref-type="supplementary-material"}b,c). Similarly, in the absence of caveolae, the induction of Snail‐2/Slug or cadherin2 (CDH2) mRNA by hyperosmolar medium (440 mOsmol/kg) was lost in U118 cells. The increase in MMP‐9 mRNA and Snail‐1 mRNA, however, was still apparent with 440 mOsmol/kg (Figure [S1](#jcmm15076-sup-0001){ref-type="supplementary-material"}d,e). We also tested invasion through Matrigel‐coated inserts. U118 cells did not invade through the Matrigel layer whether in the presence of control (258 mOsmol/kg) or hyperosmotic (440 mOsmol/kg) medium (data not shown). Of note, the lack of caveola‐forming protein expression is not the only difference between U118 cells and U87 or U251 cells; in unstimulated experimental conditions, U118 also exhibit lower mRNA expression of the EMT markers CDH2, Twist and vimentin compared to U87 or U251 (Figure [S1](#jcmm15076-sup-0001){ref-type="supplementary-material"}f). We further tested U251 cells in which the down‐regulation of CAV1 or CAVIN1 was successfully achieved using siRNA oligos as previously described.[12](#jcmm15076-bib-0012){ref-type="ref"} The results indicate that the down‐regulation of either CAV1 (Figure [S2](#jcmm15076-sup-0002){ref-type="supplementary-material"}a,b) or CAVIN1 (Figure [S2](#jcmm15076-sup-0002){ref-type="supplementary-material"}c,d) did not affect the cell response to pressure when we quantified the mRNA expression of matrix proteases (Figure [S2](#jcmm15076-sup-0002){ref-type="supplementary-material"}a,c) or EMT markers (Figure [S2](#jcmm15076-sup-0002){ref-type="supplementary-material"}b,d). These results indicated that the effect of down‐regulation of caveola‐forming proteins may take time to appear and suggested the use of stable and complete ablation of CAV1 or CAVIN1 using CRISPR knockout GBM cells. We compared the response to hyperosmolality of U251 cells in which CAV1 expression was depleted using three separate guides (pools C‐a, C‐b, C‐c) or the three guides together (C‐d). 
While in the absence of CAV1 cells still increased their production of uPA as measured by casein‐plasminogen zymography, the amount of uPA was significantly less in the CAV1 KO cells compared to the control cells (Figure [4](#jcmm15076-fig-0004){ref-type="fig"}A). The quantitative results for each individual pool of KO cells are presented in Figure [S3](#jcmm15076-sup-0003){ref-type="supplementary-material"}a. Although the increase in MMP‐2 in response to pressure was much less dramatic than the increase in uPA, comparable results were found when analysing the gelatin zymographs (Figure [4](#jcmm15076-fig-0004){ref-type="fig"}B, Figure [S3](#jcmm15076-sup-0003){ref-type="supplementary-material"}a). At the mRNA level, the induction of uPA, MMP‐2 and MMP‐9 mRNA by hyperosmolar medium was blunted by CAV1 KO, although statistically significant only for uPA and MMP‐2 (Figure [4](#jcmm15076-fig-0004){ref-type="fig"}C, Figure [S3](#jcmm15076-sup-0003){ref-type="supplementary-material"}b). ![Effect of osmotic pressure on matrix protease production of CAV1 CRISPR‐Cas9 knockout (KO) U251 cells. Cells were subjected to normo‐ (285 mOsmol/kg) or hyper (360 mOsmol/kg)‐osmolar medium. (A) Conditioned media of control and CAV1 KO cells exposed to osmotic stress were analysed by casein‐plasminogen zymography. Densitometric quantitation of the 47 and 51 kD bands corresponding to uPA was carried out. (B) Gelatin zymography and densitometric quantitation of MMP‐2 and MMP‐9 produced in the conditioned medium of control and CAV1 KO cells after 48 h of osmotic stress. (C) Effect of osmotic stress on uPA, MMP‐2 and MMP‐9 mRNA expression. The results are the average of n = 4 KO cell pools. Results are expressed as mean ± SEM of n = 3 independent experiments, \**P* \< .05, \*\**P* \< .01, \*\*\**P* \< .001, \*\*\*\**P* \< .0001, two‐way ANOVA analysis with Tukey\'s multiple comparisons test](JCMM-24-3724-g004){#jcmm15076-fig-0004} Similar data were obtained when analysing the response to hyperosmolality in U251 cells in which CAVIN1 was ablated using CRISPR knockout (Figure [5](#jcmm15076-fig-0005){ref-type="fig"}); uPA was increased in the CAVIN1 KO cells in response to hyperosmolality but this response was lower than that seen in control cells (Figure [5](#jcmm15076-fig-0005){ref-type="fig"}A, Figure [S4](#jcmm15076-sup-0004){ref-type="supplementary-material"}a). The increase in MMP‐9 production was also blunted in the CAVIN1 KO cells (Figure [5](#jcmm15076-fig-0005){ref-type="fig"}B, Figure [S4](#jcmm15076-sup-0004){ref-type="supplementary-material"}a). At the mRNA level, the induction of uPA, MMP‐2 and MMP‐9 by hyperosmolality was decreased or ablated in the CAVIN1 KO cells (Figure [5](#jcmm15076-fig-0005){ref-type="fig"}C). ![Effect of osmotic pressure on matrix protease production of CAVIN1 CRISPR‐Cas9 KO U251 cells. Cells were subjected to normo‐ (285 mOsmol/kg) or hyper (360 mOsmol/kg)‐osmolar medium. (A) Conditioned media of control and CAVIN1 KO cells exposed to osmotic stress were analysed by casein‐plasminogen zymography. Densitometric quantitation of the 47 and 51 kD bands corresponding to uPA was carried out. (B) Gelatin zymography and densitometric quantitation of MMP‐2 and MMP‐9 produced in the conditioned medium of control and CAVIN1 KO cells after 48 h of osmotic stress. (C) Effect of osmotic stress on uPA, MMP‐2 and MMP‐9 mRNA expression. The results are average of n = 4 KO pools. 
Results are expressed as mean ± SEM of n = 3 independent experiments, \**P* \< .05, \*\**P* \< .01, \*\*\**P* \< .001, \*\*\*\**P* \< .0001, two‐way ANOVA analysis with Tukey\'s multiple comparisons test](JCMM-24-3724-g005){#jcmm15076-fig-0005} Lastly, we examined the response of the caveolin and cavin KO cells to exposure to hyperosmolar medium (360 mOsmol/kg rather than 440 mOsmol/kg to maintain cell viability; see Figure [S5](#jcmm15076-sup-0005){ref-type="supplementary-material"}) for 24 or 48 hours In contrast to WT cells, the KO cells did not show a significant response to the high osmolality medium in invasion assays (Figure [6](#jcmm15076-fig-0006){ref-type="fig"}A,B,D,E). Increased expression of EMT markers in response to pressure was also significantly prevented by knockout of CAV1 (Figure [6](#jcmm15076-fig-0006){ref-type="fig"}C) or CAVIN1 (Figure [6](#jcmm15076-fig-0006){ref-type="fig"}F). ![Effect of osmotic pressure on CAV1 or CAVIN1 CRISPR‐Cas9 KO U251 cells in vitro invasion potential. (A) Determination of basement membrane cell invasion of control or four individual CAV1 KO pool in 285 or 360 mOSmol/kg media. (B) Quantification of CAV1 KO cell invasion shown as the average of n = 4 independent CAV1 KO cell pools. (C) mRNA expression of EMT markers in CAV1 KO cells after 48 h of osmotic stress. (D) Determination of basement membrane cell invasion of control or four individual CAVIN1 KO cell pools in 285 or 360 mOSmol/kg media. (E) Quantification of CAVIN1 KO cell invasion shown as the average of n = 4 independent CAVIN1 KO cell pools. (F) mRNA expression of EMT markers in CAVIN1 KO cells after 48 h of osmotic stress. All results are expressed as mean ± SEM of n = 3 independent experiments. Results were analysed by one‐way ANOVA with Dunnett\'s multiple comparisons test (A, D) or two‐way ANOVA with Tukey\'s multiple comparisons test (B, C, E, F). \**P* \< .05, \*\**P* \< .01, \*\*\**P* \< .001, \*\*\*\**P* \< .0001](JCMM-24-3724-g006){#jcmm15076-fig-0006} 3.3. Caveolae are required for GBM pro‐invasive response to hydrostatic pressure {#jcmm15076-sec-0020} -------------------------------------------------------------------------------- Previous work has shown that GBM respond to hydrostatic pressure by increasing matrix proteases and EMT marker expression,[19](#jcmm15076-bib-0019){ref-type="ref"} and caveolae have been proposed to serve mechanosensing roles. Therefore, in the following set of experiments, we assessed whether CRISPR ablation of caveola‐forming proteins CAV1 or CAVIN1 altered the pro‐invasive response of U251 cells to hydrostatic pressure, that is 30 mm Hg applied to the cells over 48 hours, as previously described.[19](#jcmm15076-bib-0019){ref-type="ref"} The mRNA expression of matrix proteases (Figure [S6](#jcmm15076-sup-0006){ref-type="supplementary-material"}a,c) and EMT markers (Figure [S6](#jcmm15076-sup-0006){ref-type="supplementary-material"}b,d) in response to increased hydrostatic pressure was significantly lower in cells lacking caveola‐forming proteins CAV1 or CAVIN1 for the majority of the genes assessed. 3.4. Hyperosmolarity‐induced aquaporin‐1 mRNA expression is lost upon caveola ablation {#jcmm15076-sec-0021} -------------------------------------------------------------------------------------- Aquaporins are membrane water channels necessary for cellular response to osmotic stress. AQP1 and AQP4 are highly expressed in GBM, but only AQP1 is expressed by the three cell lines U87, U118 and U251. 
We determined whether GBM cells exposed to hyperosmolality increased aquaporin‐1 (AQP1) mRNA expression in response to hyperosmolar stress. We tested U251 cells in which CAV1 expression was knocked down *via* shRNA, and U251 cells KO for CAV1 or CAVIN1 using CRISPR (Figure [7](#jcmm15076-fig-0007){ref-type="fig"}A). In all three loss‐of‐function experiments, the loss of caveola‐forming proteins significantly impaired the ability of the cells to increase APQ1 mRNA in response to hyperosmolality. We further tested cell lines expressing or spontaneously lacking caveola‐forming proteins (Figure [7](#jcmm15076-fig-0007){ref-type="fig"}B). In U251 and U87 cells, hyperosmolality increased AQP1 mRNA expression. Interestingly, in caveola‐forming protein‐deficient U118 cells, hyperosmolality did not result in increased AQP1 mRNA induction (Figure [7](#jcmm15076-fig-0007){ref-type="fig"}B). Lastly, we interrogated publicly available data to determine the significance of AQP1 expression in GBM. Compared to normal brain tissue, AQP1 was overexpressed in all molecular subtypes (Figure [7](#jcmm15076-fig-0007){ref-type="fig"}C). Furthermore, these analyses indicated that survival of patients with tumours expressing high APQ1 levels was shorter when compared to patients with low AQP1 expression and that the difference in survival was more significant in the case of joint high expression of AQP1 and CAV1 (Figure [7](#jcmm15076-fig-0007){ref-type="fig"}D). ![Interplay between caveolae, osmotic pressure and aquaporin 1 (AQP‐1) expression in GBM cell lines. Cells were subjected to normo (285 mOsmol/kg)‐ or hyper (360 or 440 mOsmol/kg)‐osmolar medium. (A) Effect of osmotic pressure on AQP‐1 mRNA expression among CAV1 shRNA knocked down, CAV1 CRISPR‐Cas9 KO and CAVIN1 CRISPR‐Cas9 KO U251 cells. (B) Effect of osmotic pressure on AQP‐1 mRNA expression among U251, U87 and U118 cell lines. (C) Dots plot for AQP‐1 expression (log base 2 transformed data) in normal brain tissues and in different GBM molecular subtypes in TCGA dataset (normal, n = 11; classical, n = 54; mesenchymal, n = 58; neural, n = 33; proneural, n = 57). (D) Kaplan‐Meier plots of GBM patient survival based on AQP‐1 or AQP1 and CAV1 expression. Survival probability is shown as a function of time after diagnosis (months) classified by mRNA expression. Expression (*z* \> .5) was chosen as cut‐off between "high" and "low" expression](JCMM-24-3724-g007){#jcmm15076-fig-0007} 4. DISCUSSION {#jcmm15076-sec-0022} ============= Our results show that caveolae mediate, at least in part, the pro‐invasive response of GBM to osmotic and hydrostatic pressure. 
It has been shown irrefutably that the absence of the key caveola‐forming proteins CAV1 or CAVIN1 prevents the formation of caveolae.[20](#jcmm15076-bib-0020){ref-type="ref"}, [21](#jcmm15076-bib-0021){ref-type="ref"} Caveolae protect cells from mechanical stress *via* a combination of mechanisms; they provide a membrane reservoir which can be deployed when caveolae disassemble, flatten and buffer tensions occurring, for example, in myofibre lengthening,[7](#jcmm15076-bib-0007){ref-type="ref"} and in response to hypo‐osmotic stress,[7](#jcmm15076-bib-0007){ref-type="ref"}, [22](#jcmm15076-bib-0022){ref-type="ref"} haemodynamic forces [23](#jcmm15076-bib-0023){ref-type="ref"} or tether pulling.[6](#jcmm15076-bib-0006){ref-type="ref"} It is also proposed that caveolae mediate the internalization of damaged membrane areas.[24](#jcmm15076-bib-0024){ref-type="ref"} Lastly, the disassembly and flattening of caveolae in response to membrane stretch allows the release of cavins to signal intracellularly (including in the nucleus).[6](#jcmm15076-bib-0006){ref-type="ref"}, [7](#jcmm15076-bib-0007){ref-type="ref"}, [25](#jcmm15076-bib-0025){ref-type="ref"} In the present study, we did not observe major effects of hypo‐osmotic treatment on matrix protease production or EMT marker expression. Hypo‐osmotic stress has been used at various intensities for limited amounts of time in other documented experiments testing the role of caveolae in mechano‐protection.[7](#jcmm15076-bib-0007){ref-type="ref"}, [22](#jcmm15076-bib-0022){ref-type="ref"} These experiments unveiled a physiological role for caveolae in limiting membrane tension in response to cell stretch or swelling [22](#jcmm15076-bib-0022){ref-type="ref"} and an increased susceptibility to hypo‐osmotic stress and impaired membrane integrity in CAVIN1^‐/‐^ muscle fibres.[7](#jcmm15076-bib-0007){ref-type="ref"} Hypo‐osmotic medium was made by diluting DMEM 10‐fold with 10% FBS‐containing water,[23](#jcmm15076-bib-0023){ref-type="ref"} applying hypo‐osmotic medium at 30 mOsm instead of 300mOsm [6](#jcmm15076-bib-0006){ref-type="ref"} and applying a HEPES‐based solution at 180 mOsm vs isosmotic 280 mOsm.[22](#jcmm15076-bib-0022){ref-type="ref"} Durations were 10‐15 minutes.[6](#jcmm15076-bib-0006){ref-type="ref"}, [7](#jcmm15076-bib-0007){ref-type="ref"}, [22](#jcmm15076-bib-0022){ref-type="ref"}, [23](#jcmm15076-bib-0023){ref-type="ref"} In the present study, hypo‐osmotic medium was applied for a much longer duration (48 hours) and at an intensity compatible with cell survival (tested by MTT). This is a major difference with existing published work where rapid cell swelling was employed. The literature linking hyperosmotic stress to caveolae is limited. 
Early work indicated that hyperosmotic stress caused Src family kinase‐dependent caveolin‐1 tyrosine phosphorylation after short‐term (10‐15 minutes) exposure to mannitol.[26](#jcmm15076-bib-0026){ref-type="ref"}, [27](#jcmm15076-bib-0027){ref-type="ref"} Furthermore, 30‐min hyperosmotic stress using sorbitol was reported to induce internalization of CAV1.[28](#jcmm15076-bib-0028){ref-type="ref"} More recently, plasma membrane repair was shown to be increased in hyperosmotic medium (using a 10‐min incubation in 340 mOsmol/kg medium) and, interestingly, this hyperosmotic stress caused a transient down‐regulation of clathrin and fluid phase endocytosis while stimulating caveolar endocytosis.[29](#jcmm15076-bib-0029){ref-type="ref"} In our experiments, GBM cells were exposed to long duration (48 hours) hyperosmolality, being achieved by NaCl addition to the medium. It is reasonable to speculate that, over this extended period of time, the cells adapted to environmental pressures to maintain volume homeostasis. Cell shrinking is exploited by GBM cells to invade their surroundings [30](#jcmm15076-bib-0030){ref-type="ref"} and is mediated at least in part by volume‐regulated anion channels, normally involved in re‐establishing cell volume after a swelling event (*eg* exposure to hypotonic medium or hypoxia).[31](#jcmm15076-bib-0031){ref-type="ref"} In contrast, when exposed to hypertonic medium, GBM cells have been shown to shrink within 2‐3 minutes and then to regulate their volume back to normal within 25 minutes, a response which involves Na^+^‐K^+^‐2Cl^−^ co‐transporter isoform 1.[32](#jcmm15076-bib-0032){ref-type="ref"} Our results show that U251 cells respond to long‐term (48 hours) hypertonic exposure by increasing the expression of caveola‐forming proteins CAV1 and CAVIN1, and the water‐transporting protein AQP1. GBM tissues show increased AQP1 expression, and patients with high AQP1 had significantly lower survival time. While it is tempting to hypothesize that AQP1 contributes to the aggressiveness of these tumours, it is possible that high expression of APQ1 is a consequence of the higher pressure that exists inside the tumour. Future work is required to investigate whether pharmacological or molecular ablation of AQPs modulates pressure‐induced GBM invasion. Furthermore, in the absence of caveolae, induction of AQP1 mRNA expression and GBM pro‐invasive response to hyperosmolality were both dampened, indicating an interplay between caveolae and AQP1 in pressure‐induced invasion that is yet to be explored. Future experiments will assess the impact of knocking down AQP1 on hyperosmolarity‐driven invasiveness. In favour of a clinically significant interaction, the effect of high expression of CAV1 and AQP1 on survival of GBM patients was more significant than that of high expression of AQP1 alone. 
AQP1 localization in caveolae is unclear from the literature; in endothelial cells from rat lung, around 70% of AQP1 was located in caveolae with the remainder located in non‐caveolar plasma membrane.[33](#jcmm15076-bib-0033){ref-type="ref"} Similar results have been found in mouse [34](#jcmm15076-bib-0034){ref-type="ref"} and in human heart.[35](#jcmm15076-bib-0035){ref-type="ref"} In rat cardiac myocytes, the AQP1 co‐localization with caveolin‐3 could be reversed by hyperosmotic medium.[36](#jcmm15076-bib-0036){ref-type="ref"} On the contrary, AQP1 was shown to have a low level of co‐localization with caveolae in rat lung [37](#jcmm15076-bib-0037){ref-type="ref"} and brain microvascular endothelial cells.[38](#jcmm15076-bib-0038){ref-type="ref"} These differences may be due to cell‐type specificity or dissimilarities in osmotic conditions. In addition to co‐localization, a functional link between aquaporins and caveolae has been explored. In a bladder outlet obstruction rat model, both AQP1 and CAV1 expression increased with high contraction pressure [39](#jcmm15076-bib-0039){ref-type="ref"} and in CAV1 gene‐disrupted mice, an increased expression of AQP1 was associated with bladder dysfunction.[40](#jcmm15076-bib-0040){ref-type="ref"} In addition, in an animal study, down‐regulation of CAV1 *via* siRNA decreased the expression of AQP1 and AQP5 with increasing lung oedema.[41](#jcmm15076-bib-0041){ref-type="ref"} Taken together, a complex picture emerges regarding the relation between AQP1 and caveola‐forming proteins. We interrogated two TCGA data sets but found no positive correlation between caveola‐forming protein and AQP1 mRNA expression in GBM (data not shown). Our results further indicate the involvement of caveolae in the response of GBM to hydrostatic pressure. It has been proposed that the IFP is uniformly high in the centre of a solid tumour and decreases abruptly at the periphery.[42](#jcmm15076-bib-0042){ref-type="ref"} Furthermore, high IFP in tumours correlates with metastasis in vivo*;* high central tumour IFP was found to be associated with the development of pulmonary and lymph node metastases in a model of human melanoma xenograft.[43](#jcmm15076-bib-0043){ref-type="ref"} High tumour IFP in cervical cancer patients was also found to significantly predict local or distant recurrence of the tumour.[44](#jcmm15076-bib-0044){ref-type="ref"} The pressure model employed (by forcing air into a flask and leaving the pressure on for 48 hours) results in continuous, long‐term pressure. 
The pressure that we employed (30 mm Hg) is compatible with what has been measured in clinical and preclinical studies [43](#jcmm15076-bib-0043){ref-type="ref"}, [44](#jcmm15076-bib-0044){ref-type="ref"} and was also used in in vitro experiments.[45](#jcmm15076-bib-0045){ref-type="ref"} In a study of lung cancer cells exposed to 20 mm Hg,[45](#jcmm15076-bib-0045){ref-type="ref"} increased migration was shown starting 4‐6 hours after applying pressure; the cells became thinner but spread wider, proliferated more, produced more filopodia, invaded more and expressed higher amounts of Snail---all of these corroborate our finding that 30 mm Hg increases GBM invasive potential at 48 hours Furthermore in this study of lung cancer cells, the expression of AQP1 increased with hydrostatic pressure and AQP1 knock down dampened the motile response of these cells to pressure.[45](#jcmm15076-bib-0045){ref-type="ref"} Our results did not confirm this increased expression in AQP1 in response to hydrostatic pressure, presumably because of the difference in time‐points used in each study (8 hours vs 48 hours) and the end‐point measurement (mRNA expression vs protein). Interestingly, this previous study indicated that AQP1 was downstream of CAV1 because CAV1 SiRNA KD prevented pressure‐induced AQP1 elevation,[45](#jcmm15076-bib-0045){ref-type="ref"} which matches our findings with osmotic but not hydrostatic pressure. Overall, our study shows that caveolae contribute to GBM invasiveness in response to osmotic and hydrostatic pressure. Both types of pressure are a feature of GBM and likely to elicit both common and distinct activation pathways. This is compatible with the known functions of caveolae, which buffer membrane tensions,[22](#jcmm15076-bib-0022){ref-type="ref"} regulate mechanical stress‐induced signalling and the actin cytoskeleton,[46](#jcmm15076-bib-0046){ref-type="ref"} and may shelter membrane receptors.[47](#jcmm15076-bib-0047){ref-type="ref"} Together with our previous report that caveola‐forming proteins regulate matrix‐degrading enzymes in GBM in the absence of mechanical stress, our current results point to caveolae as a potential new target in the treatment of GBM. CONFLICT OF INTEREST {#jcmm15076-sec-0024} ==================== The authors confirm that there are no conflicts of interest. AUTHOR CONTRIBUTION {#jcmm15076-sec-0025} =================== WP, RGP and MOP conceptualized and designed the study. WP, JQ, ZDN, KM and CF carried out experimental work. WP, RGP, JMH and MOP analysed and interpreted the data. WP, PNS, GJR, RGP, KM, JMH and MOP wrote the article and critically revised the article. Supporting information ====================== ######   ###### Click here for additional data file. ######   ###### Click here for additional data file. ######   ###### Click here for additional data file. ######   ###### Click here for additional data file. ######   ###### Click here for additional data file. ######   ###### Click here for additional data file. The results shown here are in whole or part based upon data generated by the TCGA Research Network: <https://www.cancer.gov/tcga>. This study was funded by the University of Queensland School of Pharmacy. DATA AVAILABILITY STATEMENT {#jcmm15076-sec-0027} =========================== The data that support the findings of this study are available from the corresponding author upon reasonable request.
{ "pile_set_name": "PubMed Abstracts" }
A systematic review and meta-analysis of diagnostic accuracy of serum 1,3-β-D-glucan for invasive fungal infection: Focus on cutoff levels. To assess the diagnostic accuracy of 1,3-β-D-glucan (BDG) assay for diagnosing invasive fungal infections (IFI), we searched the Medline and Embase databases, and studies reporting the performance of BDG assays for the diagnosis of IFI were identified. Our analysis was mainly focused on the cutoff level. Meta-analysis was performed using conventional meta-analytical pooling and bivariate analysis. Our meta-analysis covered 28 individual studies, in which 896 out of 4214 patients were identified as IFI positive. The pooled sensitivity, specificity, diagnostic odds ratio, and area under the summary receiver operating characteristic (AUC-SROC) curve were 0.78 [95% confidence interval (CI), 0.75-0.81], 0.81 (95% CI, 0.80-0.83), 21.88 (95% CI, 12.62-37.93), and 0.8855, respectively. Subgroup analyses indicated that in cohort studies, the cutoff value of BDG at 80 pg/mL had the best diagnostic accuracy, whereas in case-control studies the cutoff value of 20 pg/mL had the best diagnostic accuracy; moreover, the AUC-SROC in cohort studies was lower than that in case-control studies. The cutoff value of 60 pg/mL has the best diagnostic accuracy with the European Organization for Research and Treatment of Cancer/Mycoses Study Group criteria as a reference standard. The 60 pg/mL cutoff value has the best diagnostic accuracy with the Fungitell assay compared to the BDG detection assay. The cutoff value of 20 pg/mL has the best diagnostic accuracy with the Fungitec G-test assay, and the cutoff value of 11 pg/mL has the best diagnostic accuracy with the Wako assay. Serum BDG detection is highly accurate for diagnosing IFIs. As such, 60 pg/mL of BDG level can be used as the best cutoff value to distinguish patients with IFIs from patients without IFI (mainly due to Candida and Aspergillus).
{ "pile_set_name": "Pile-CC" }
ACROSS THE NATION. Sharpton gets 90 days in jail, hints at presidential bid SAN JUAN, PUERTO RICO — Comparing his situation to that of inmate-turned-President Nelson Mandela, Rev. Al Sharpton said Thursday his current imprisonment has only galvanized his presidential plans. A federal judge Wednesday sentenced the New York minister to 90 days behind bars for trespassing on Navy property as part of a May 1 protest against bombing exercises on the Puerto Rican island of Vieques. Of the 11 other activists arrested with Sharpton, nine were sentenced to 40 days behind bars. "Nelson Mandela went from prison to president," Sharpton said, referring to the former South African leader who spent 27 years in prison for his fight against apartheid. "I am determined more than ever to give serious consideration to running for president in 2004." Sharpton made his comments in a statement circulated by his attorney, Sanford Rubenstein. Sharpton was in a federal prison in the San Juan suburb of Guaynabo but was to be transferred Friday to a facility in New York, according to a U.S. Marshals Service source. Sharpton was arrested May 1 and convicted of a misdemeanor. He was sentenced as a repeat offender because he had previous arrests for civil disobedience in New York. At least 15 members of Congress signed a letter to Atty. Gen. John Ashcroft asking him to review the sentences, saying the protesters are being punished for their political views.
{ "pile_set_name": "StackExchange" }
Q: How to get the access token using the verification code via the jso OAuth 2.0 lib for Basecamp? What is working: According to this documentation I'm able to successfully fetch the verification code from the Basecamp authentication server, e.g. http://127.0.0.1:8020/abc/index.html?code=xxxxxxx&state=yyyyyy-yyyyyyy-yyy-yyyy-yyyyyyy I'm using jso OAuth 2.0, and I'm not able to understand how I can get the access token in exchange for the verification code. If anyone has worked with this library and knows how to achieve what I want, please share your experience. A: jso seems to only implement the implicit OAuth2 flow. You are getting a code because Basecamp implements a different flow (the authorization_code grant). Once you get the code, you are supposed to call another endpoint to get the access_token (see step 4 in the docs you mention).
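As a supplement to the answer: the exchange in "step 4" is the standard OAuth 2.0 (RFC 6749) authorization-code grant, performed against the provider's token endpoint. The sketch below shows that exchange in TypeScript. The endpoint URL, client id/secret and redirect URI are placeholders (they are not taken from Basecamp's documentation here), so substitute the values the provider actually documents.

```typescript
// Minimal sketch of the RFC 6749 authorization-code exchange (Node 18+ fetch).
// TOKEN_ENDPOINT, CLIENT_ID, CLIENT_SECRET and REDIRECT_URI are placeholders;
// use the values from the provider's own documentation.
const TOKEN_ENDPOINT = "https://example.com/oauth/token";
const CLIENT_ID = "your-client-id";
const CLIENT_SECRET = "your-client-secret";
const REDIRECT_URI = "http://127.0.0.1:8020/abc/index.html";

async function exchangeCodeForToken(code: string): Promise<string> {
  // The value from the ?code=... query parameter is posted as a form body.
  const body = new URLSearchParams({
    grant_type: "authorization_code",
    code,
    redirect_uri: REDIRECT_URI,
    client_id: CLIENT_ID,
    client_secret: CLIENT_SECRET,
  });

  const response = await fetch(TOKEN_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body,
  });
  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }

  // Providers typically return JSON containing an access_token field.
  const json = (await response.json()) as { access_token: string };
  return json.access_token;
}

// Usage: exchangeCodeForToken("xxxxxxx").then(token => console.log(token));
```

Because the client secret must stay confidential, this exchange is normally done on a backend rather than in the browser page that received the redirect.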
{ "pile_set_name": "PubMed Abstracts" }
Evidence that lead acts as a calcium substitute in second messenger metabolism. Our investigations of the ability of lead to substitute for calcium in several intracellular regulatory events are reviewed in the context of the neurotoxicity produced by this heavy metal. We found that lead activates calmodulin dependent phosphodiesterase, calmodulin inhibitor sensitive potassium channels, and calmodulin independent protein kinase C (PKC). Because of sensitivity in the picomolar range, the activation of PKC by lead may be more biologically relevant than its activation of calmodulin which requires high nanomolar levels of the toxicant. We also found that inhibition of capillary-like structure formation in co-cultures of neural endothelial cells with astrocytes by lead was similar to that produced by phorbol esters, known stimulators of PKC. This functional event supports the hypothesis that PKC activation underlies some aspects of lead neurotoxicity. Taken together these studies implicate second messenger metabolism and protein kinase activation as potential sites for the disruptive action of lead upon nervous system function. These reactions could contribute to the subtle defects in brain function associated with low level poisoning.
{ "pile_set_name": "PubMed Abstracts" }
Ammonia emissions during vermicomposting of sheep manure. The effect of C:N ratio, temperature and water content on ammonia volatilization during two-phase composting of sheep manure was evaluated. The aerobic phase was conducted under field conditions. This was followed by Phase II, vermicomposting, conducted in the laboratory under controlled conditions of water content (70% and 80%) and temperature (15 and 22 °C). The addition of extra straw led to a 10% reduction in NH3 volatilization compared to sheep manure composted without extra straw. Temperature and water content significantly affected ammonia volatilization on day 0 of Phase II, with a water content of 70% and a temperature of 22 °C leading to greater losses of ammonia. Nitrogen loss by ammonia volatilization during vermicomposting ranged from 8% to 15% of the initial N content. The addition of extra straw did not result in significant differences in total carbon content following vermicomposting.
{ "pile_set_name": "StackExchange" }
Q: ExpressJS - How to jump to the next middleware which has an initial url defined Example code: app.use('/list-product', (req, res, next) => { // Do stuff next(); }) app.use('/add-product', (req, res, next) => { console.log("I'm here") }) Somehow it doesn't log "I'm here", which means the next() call doesn't work. But if I change '/add-product' to '/' or remove the path altogether, it works. Why is that? How can I jump to the next middleware which has an initial url on it, as in the example above? A: It sounds like you don't quite understand that when you put a path in front of the middleware, the URL must at least be a partial match for that path for that middleware to be called at all. So, when you do a request for /list-product, that will match your /list-product middleware which will then call next(). That will continue on the middleware chain looking for other middleware handlers that match the /list-product path. When it gets to your next middleware for /add-product, that doesn't match /list-product so it is skipped (not called). If you change app.use('/add-product', ...) to app.use('/', ...) or to app.use(...) with no path, then those last two are matches for /list-product so they will get called. The / middleware is a partial match (for all URLs) and middleware with no path will be called for all URLs. app.use() runs for partial matches.
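To make the mounting behaviour described in the answer concrete, here is a small self-contained sketch (assuming Express 4.x with its TypeScript typings; the port and log messages are arbitrary):

```typescript
// Sketch of how app.use() mount paths gate which middleware runs.
import express, { Request, Response, NextFunction } from "express";

const app = express();

// No mount path: runs for every request, so next() always continues the chain.
app.use((req: Request, _res: Response, next: NextFunction) => {
  console.log(`incoming: ${req.path}`);
  next();
});

// Runs only when the URL starts with /list-product.
app.use("/list-product", (_req: Request, _res: Response, next: NextFunction) => {
  console.log("list-product middleware");
  next(); // continues, but only to handlers whose mount path also matches
});

// Never reached by a /list-product request, because the prefix differs.
app.use("/add-product", (_req: Request, res: Response) => {
  res.send("I'm here");
});

app.listen(3000);
```

In other words, next() does not jump across mount paths; it only hands control to the next handler whose path still matches the current URL.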
{ "pile_set_name": "PubMed Abstracts" }
Diversity training for medical students: evaluating impact. Diversity training is an important component of medical training in helping prepare learners to manage an increasingly diverse patient population. This work (conducted as part of a medical student project) evaluated Year 3 medical student views and perceived learning following participation in a one-day diversity-focussed teaching day at a UK medical school. Post-session evaluation data was analysed alongside data obtained from a focus group of student participants. Students largely valued specific teaching on this area of professional development and appreciated the safe learning environment in which they could freely share their beliefs and values in relation to potentially sensitive issues. There were, however, suggestions that longitudinal curricular integration would be preferable, and this delivery model has now been adopted at this institution.
{ "pile_set_name": "DM Mathematics" }
f w? True Let v = -293 + 447. Let y = v - 109. Does 9 divide y? True Suppose 3*u - 32 = -23. Is 18 a factor of ((-189)/28)/(u/(-16))? True Let m be (-3)/(-1)*22/22. Suppose -m*t = 4*y - 471, -27 = -y - 2*t + 97. Is 19 a factor of y? True Let s(h) = 2*h**2 - 24*h + 11. Suppose -39*z = -40*z + 17. Is 19 a factor of s(z)? False Let t(b) = b**3 - b**2 + 10*b - 11. Is 13 a factor of t(2)? True Let j = 18 + -55. Let g = 55 + j. Is g a multiple of 9? True Suppose 0 = 2*d - d - 3*o - 19, -2*d + 4*o = -32. Let b(a) be the third derivative of a**5/30 - 7*a**4/12 + 8*a**3/3 + 8*a**2 + 3*a. Does 19 divide b(d)? True Suppose -q + 2 = -2*f, q - 3*f + 0*f - 1 = 0. Suppose -n + 5*l - 19 = 0, -3*n = -0*n - 4*l + 35. Does 25 divide (n + 1)*(-17)/q? False Let w = -8 - -2. Let p = w - -9. Suppose 2*a + 3*g = 31, -51 = -4*a - p*g + 14. Is 13 a factor of a? False Suppose 16*d = 10115 - 3219. Is d a multiple of 6? False Does 29 divide (-1)/2 + ((-23499)/18)/(-1)? True Let d = 258 + -152. Suppose u = 3*z - 2*u - 123, 0 = -2*z - 4*u + d. Suppose f = -3*o + z, 5*f + 0*o - o = 257. Is 17 a factor of f? True Let k(f) = -2*f**3 + 6*f + 26. Suppose p - 18 = -23. Is 41 a factor of k(p)? True Let z(u) be the second derivative of 2*u**3/3 - 25*u**2 + 14*u. Is 6 a factor of z(18)? False Let g(m) = -23*m + 4. Let c be g(2). Let z be (-18)/c + 274/14. Is 4 a factor of 118/8 + 5/z? False Suppose 0 = 4*x + 4. Let g = x - -4. Suppose -4*p - 2*u + 44 = -5*u, 0 = -g*p - 5*u + 33. Does 3 divide p? False Let g = 29 - 17. Does 69 divide (0 + (-1)/4)/(g/(-20784))? False Suppose -3*q + 37 + 20 = 0. Suppose 0 = -2*w + 29 + q. Is (2/4)/(1/w) a multiple of 4? True Suppose 5*z = -3*u + 4840, 4*u - 237*z + 233*z - 6464 = 0. Is 85 a factor of u? True Suppose 0 = -3*w + 6*w - 2*x + 439, 0 = 2*w + x + 281. Let b = -8 - w. Is b a multiple of 9? True Suppose 2*s + 206 = -2*k, 3*s + 191 = s - 5*k. Let d = s + 396. Is 32 a factor of d? True Suppose 0 = 14*w - 18*w + 680. Is w a multiple of 17? True Suppose -4840 = -9*a + 13*a. Does 4 divide 1/(-5) + a/(-50)? True Let x be 4/6 + (-8)/(-6). Suppose -x*i = -0*i + 288. Let d = i - -205. Is 14 a factor of d? False Let o(v) = -v**3 + 4*v**2 - 3*v + 1. Let l be o(3). Suppose -z = -l - 5. Is z a multiple of 5? False Let c(u) = u**2 + 3*u + 2*u**2 + u**2 - 7. Does 8 divide c(-4)? False Let j = -1155 + 1429. Is 2 a factor of j? True Suppose -6*b - 4400 = -16*b. Is 20 a factor of b? True Suppose -5*s + 3*y + 109 = 0, -3*s = 5*y - 92 + 13. Is s a multiple of 15? False Let d = 54 + -53. Is 21 a factor of 25 + 12/4 + d? False Let r = 1053 - 459. Is 9 a factor of r? True Let w = 24 - 23. Let a be (w - 1)/(-3) + 3. Let i(c) = 2*c**3 - c**2 - 5*c + 4. Is i(a) a multiple of 17? True Let t = 848 + -504. Is t a multiple of 43? True Suppose -139*y - 31668 = -160*y. Does 13 divide y? True Suppose -2*l + 4 = 2*l. Does 7 divide (-54)/(-216)*(1 + 66 + l)? False Let u(n) = -4*n - 41. Let c be u(-13). Let x(t) = t**3 - 11*t**2 + 2*t - 8. Does 13 divide x(c)? False Let h(u) = u**3 + 7*u**2 + 8*u - 4. Let y be h(-6). Let b be 12*(1 + 12/y). Suppose 2*x - b*w = 4*x - 16, -5*x - 3*w + 58 = 0. Is 7 a factor of x? True Let x = 198 - 51. Is 44 a factor of x? False Let j = 2332 + -1666. Is 6 a factor of j? True Let j(n) = n**3 - 2*n**2 + 2*n + 13. Is j(7) a multiple of 68? True Is 40 a factor of (132/(-9))/((-2)/51)? False Is 16 a factor of -8*(-4953)/104 + -2*1? False Let b = 13 - 10. Suppose 100 = 4*w - m, b*w - m - 75 = 3*m. Is w a multiple of 7? False Suppose b + 137 = g, -473 = -2*g + 3*b - 199. Let w = g + -29. 
Is 21 a factor of w? False Let x(f) = -f**3 + 10*f**2 - 5*f - 33. Let k be x(9). Suppose -46 = -2*g - 2*a, 0 = k*g - 2*a + 6*a - 65. Is g a multiple of 2? False Let a(l) = l**2 + 9*l + 3. Let d be a(-9). Suppose 2*p = -2*i + 366, d*p = 3*i + 514 + 59. Let n = -112 + p. Does 16 divide n? False Is 8 a factor of (-25)/(-20) + 12081/12? True Suppose 7*s = 2*s. Suppose s*d = -2*d - 28. Let y = 25 - d. Is 17 a factor of y? False Suppose -5*d + 2*o = -1337, -2*o + 517 + 549 = 4*d. Does 12 divide d? False Does 116 divide (-10962)/(-28)*(-48)/(-9)? True Let g = -21 + 23. Does 17 divide g/((-2)/(-534)*6)? False Let u = 3 + -3. Let x be u/((-7)/((-7)/(-2))). Suppose -4*y + h = -22, 4*y - 32 = -x*y - 4*h. Is y a multiple of 3? True Let j(g) = 96*g**2 + 75*g + 232. Is 13 a factor of j(-3)? True Suppose -2*d - 22 = -234. Suppose 134*z - d = 133*z. Is z a multiple of 11? False Let s = -694 + 1770. Is s a multiple of 9? False Let h = -75 - -471. Suppose x + h = 5*g, 2*x - 123 = -g - 35. Is 10 a factor of g? True Let i(y) be the first derivative of -y**4/4 + 13*y**3/3 - 4*y**2 - 7*y + 7. Let a be i(12). Suppose -3*g = -a - 61. Does 17 divide g? True Suppose -5*l = 15, b - 5*b + 2*l = -154. Let a = b + -10. Is ((-99)/a)/((-1)/3) a multiple of 2? False Let i(f) = -f**3 + 11*f**2 - 6*f + 6. Let k(m) = m**2 - 11*m - 16. Let r be k(13). Is i(r) a multiple of 18? False Suppose -z - z - 6 = 0, 3*h + 51 = 3*z. Let k(v) = -v**3 - 19*v**2 + 18*v - 23. Is 3 a factor of k(h)? False Let o(m) = 1490*m - 14. Is 12 a factor of o(1)? True Suppose 7 = 5*l + 57. Suppose 2*x + 20 = -0*x. Does 9 divide (-274)/x - l/(-25)? True Suppose -4*m - v + 1492 = -3197, 5*v - 4677 = -4*m. Is 9 a factor of m? False Suppose 10*f - 611 = 8*f - v, -15 = 5*v. Is f a multiple of 54? False Let i(y) = y**2 - 14*y - 26. Does 8 divide i(-12)? False Suppose 5*q - 2*l = 3, q + 5*l = 5 - 26. Let d be (3 - q) + -2 + 3. Does 9 divide 1 + -2 + d + 23? True Suppose s - 40656 = -43*s. Is 11 a factor of s? True Suppose 0 = -9*c + 4*c - 55. Let d(u) = -4*u + 8. Is 26 a factor of d(c)? True Suppose -12*q = -q - 231. Suppose -q*w + 300 = -16*w. Is w a multiple of 15? True Is 24 a factor of 1 + (-30)/26 - 210542/(-299)? False Suppose -6*k - 8 = -r - 2*k, -3*r + k + 13 = 0. Suppose 0 = -r*h + 5*q + 175 - 22, 12 = 4*q. Is h a multiple of 7? True Let o be 2/(-2)*3*(-92)/(-6). Let d = 25 - o. Is 14 a factor of d? False Let g be 294/(2 - -4) - 3. Suppose g = u - 290. Is 24 a factor of u? True Suppose u - 23 = 5*x, 4*u + 18 = -2*x - 0*u. Let l be (0 - x)*(-6)/15. Is (-1 - (l + 1)) + 29 a multiple of 11? False Suppose 87*p - 84*p = 693. Does 4 divide p? False Suppose -4*d + 2814 = -222. Is d a multiple of 24? False Suppose 10*x + 17*x = 10557. Is 31 a factor of x? False Let n(s) = 7*s + 29. Let w be n(-6). Let z(x) = -x**2 - 18*x. Is z(w) a multiple of 13? True Let o = 2844 - 1423. Is o a multiple of 29? True Let u(r) = -5*r - 5. Let t be u(-2). Let y = t - 1. Suppose 2*o = 2*x + 22, 3*x - 5*x - 36 = -y*o. Is 7 a factor of o? True Let p(m) = 2 - 5 + 3 + 5*m - 1. Is 17 a factor of p(14)? False Let c(m) = -m**3 + 8*m**2 + 2. Let h(g) = 4*g - 2. Let a = -8 - -10. Let b be h(a). Is c(b) a multiple of 28? False Suppose 3*v = -3*c - 48, 3*c + c + 3*v = -65. Let t = 10 - c. Does 16 divide t? False Suppose -u + 12 = -4*t - 5*u, 4*t + 2*u + 2 = 0. Suppose -5*a = -3*l - 285, t*a + 4*l + 96 = 236. Is 12 a factor of a? True Let v be ((-106)/12 - 4/6)*-32. Let d = 455 - v. Does 40 divide d? False Suppose -5*f + 8*x - 10*x + 8962 = 0, f - x = 1791. Does 20 divide f? 
False Does 20 divide (-2)/8*(148 + 18)*-40? True Suppose -b + 1442 = 13*b. Does 5 divide b? False Let c be ((-9)/(-6))/(12/520). Suppose 0 = -3*p + c + 166. Does 31 divide p? False Let x(a) = a**2 - a - 20. Let u be x(0). Let n = u - -32. Is n a multiple of 12? True Let x = 336 - -88. Is x a multiple of 82? False Suppose -5*y - r + 2860 = r, -3*y - 4*r = -1716. Does 45 divide y? False Suppose 0 = -2*l + 3*d + 323, -4*l + d = -442 - 199. Does 2 divide l? True Let n(a) = 14*a**2 - 47*a - 21. Let i(r) = 5*r**2 - 16*r - 7. Let h(m) = 17*i(m) - 6*n(m). Let u be h(-8). Does 12 divide 24 - (-3 + u/(-3))? True Suppose -3*x - 4168 = -5*r, 0 = 4*r - 2*r - 2*x - 1664. Is 76 a factor of r? True Let g = 22 - 21. Is ((-21)/14)/((-3)/114*g) a multiple of 29? False Suppose 5*v - 20 = 5. Suppose -4*k - 170 = -5*d, 0 = -v*d - 2*k - 2*k + 130. Does 6 divide d? True Let s(q) = -21*q + 8. Let d be s(-10). Is 18 a factor of d/6 + (-2)/6? True Let f be 4*(126/(-8) - 1). Let t = 3 - f. Is 35 a factor of t? True Let m be 2/10 - (-2114)/(-70). Is 4 a factor of ((-24)/m)/((-1)/(-45))? True Suppose 21*k - 707 = 5341. Does 1
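Divisibility questions of this form can be checked mechanically. As a minimal illustration (plain integer arithmetic, using only numbers quoted above), the following sketch verifies two of the statements:

```typescript
// "Suppose 16*d = 10115 - 3219. Is d a multiple of 6?"
const d = (10115 - 3219) / 16; // 6896 / 16 = 431
console.log(d % 6 === 0); // false, matching the recorded answer "False"

// "Let j = 2332 + -1666. Is 6 a factor of j?"
const j = 2332 + -1666; // 666
console.log(j % 6 === 0); // true, matching the recorded answer "True"
```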
{ "pile_set_name": "Github" }
/* * FreeRTOS * Copyright (C) 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy of * this software and associated documentation files (the "Software"), to deal in * the Software without restriction, including without limitation the rights to * use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of * the Software, and to permit persons to whom the Software is furnished to do so, * subject to the following conditions: * * The above copyright notice and this permission notice shall be included in all * copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS * FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR * COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER * IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. * * http://aws.amazon.com/freertos * http://www.FreeRTOS.org */ /** * @file aws_iot_network_config.h * @brief Configuration file which enables different network types. */ #ifndef AWS_IOT_NETWORK_CONFIG_H_ #define AWS_IOT_NETWORK_CONFIG_H_ /** * @brief Configuration flag used to specify all supported network types by the board. * * The configuration is fixed per board and should never be changed. * More than one network interfaces can be enabled by using 'OR' operation with flags for * each network types supported. Flags for all supported network types can be found * in "aws_iot_network.h" */ #define configSUPPORTED_NETWORKS ( AWSIOT_NETWORK_TYPE_WIFI ) /** * @brief Configuration flag which is used to enable one or more network interfaces for a board. * * The configuration can be changed any time to keep one or more network enabled or disabled. * More than one network interfaces can be enabled by using 'OR' operation with flags for * each network types supported. Flags for all supported network types can be found * in "aws_iot_network.h" * */ #define configENABLED_NETWORKS ( AWSIOT_NETWORK_TYPE_WIFI ) #endif /* CONFIG_FILES_AWS_IOT_NETWORK_CONFIG_H_ */
{ "pile_set_name": "PubMed Abstracts" }
Improvement of chemical analysis of antibiotics. XII. Simultaneous analysis of seven tetracyclines in honey. An analytical system for the simultaneous determination of residual oxytetracycline, tetracycline, chlortetracycline, doxycycline, methacycline, demethylchlortetracycline and minocycline in honey has been established by a combination of simple thin-layer chromatographic (TLC) and precise high-performance liquid chromatographic (HPLC) methods. In this system, screening by TLC can detect tetracyclines (TCs) at a level of 0.1 ppm in honey without the need for special equipment, and the quantitative HPLC method can determine TCs with good recovery (83.7-99.6%) and coefficients of variation (0.9-4.3%).
{ "pile_set_name": "FreeLaw" }
UNPUBLISHED UNITED STATES COURT OF APPEALS FOR THE FOURTH CIRCUIT No. 95-7816 ELTON MCGIRT, Plaintiff - Appellant, versus STEVE LOVIN; TOMMY STRICKLAND; J. W. JACOBS, Defendants - Appellees. Appeal from the United States District Court for the Eastern District of North Carolina, at Raleigh. Malcolm J. Howard, District Judge. (CA-95-919-5-H) Submitted: March 21, 1996 Decided: April 10, 1996 Before NIEMEYER and MICHAEL, Circuit Judges, and BUTZNER, Senior Circuit Judge. Affirmed by unpublished per curiam opinion. Elton McGirt, Appellant Pro Se. Unpublished opinions are not binding precedent in this circuit. See Local Rule 36(c). PER CURIAM: Appellant appeals from the district court's order denying relief on his 42 U.S.C. § 1983 (1988) complaint. We have reviewed the record and the district court's opinion and find no reversible error. Accordingly, we affirm on the reasoning of the district court. McGirt v. Lovin, No. CA-95-919-5-H (E.D.N.C. Nov. 1, 1995). We dispense with oral argument because the facts and legal contentions are adequately presented in the materials before the court and argument would not aid the decisional process. AFFIRMED
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this note we present some remarks on big Cohen-Macaulay algebras and almost Cohen-Macaulay algebras via closure operations on the ideals and the modules. Our methods for doing these are inspired by the notion of the tight closure and the dagger closure and by the ideas of Northcott on dropping of the Noetherian assumption of certain homological properties.' address: - 'School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran.' - 'Dinabandhu Andrews College, Kolkata 700084, India' author: - Mohsen Asgharzadeh - 'Rajsekhar Bhattacharyya $^\ast$' title: 'Applications of closure operations on big Cohen-Macaulay Algebras' --- Introduction ============ Throughout this paper all rings are commutative, associative, with identity, and all modules are unital. We always denote a commutative Noetherian ring by $R$ and an $R$-algebra (not necessarily Noetherian) by $A$. The *perfect closure* of a reduced ring $A$ of prime characteristic $p$ is defined by adjoining all the higher $p$-power roots of all the elements of $A$, to $A$ itself. We denote it by $A^\infty$. Observe the definition of coherent ring and non-Noetherian regular ring in Section 2. In Section 2, we show that the Frobenius map is flat over a coherent regular ring which is of prime characteristic. In view of [@E page 36], one may ask for a generalization of Tight Closure via *Colon-Capturing* with respect to some non-Noetherian Cohen-Macaulay rings. Let $R$ be a Noetherian local F-coherent domain which is either excellent or homomorphic image of a Gorenstein local ring. If ${\operatorname{p.grade}}_A({\frak{a}},M)$ is the *polynomial grade* of an ideal ${\frak{a}}$ on an $A$-module $M$ (see Section 2 for the definition) then by applying a version of colon-capturing for $R^{\infty}$, in Theorem \[main\] of Section 3, we show that ${\operatorname{ht}}({\frak{a}})={\operatorname{p.grade}}_{R^{\infty}}({\frak{a}}, R^{\infty})$ for every ideal ${\frak{a}}$ of $R^{\infty}$. On regular rings, this result follows easily by using a result of Kunz (which is true only for regular rings), see e.g. [@AT Theorem 4.5]. Following [@Sh], a ring $R$ is called *F-coherent* if $R^{\infty}$ is a coherent ring. In particular, Theorem \[main\] provides an evidence for Shimomoto’s question: Does the perfect closure of an F-coherent domain coincide with the perfect closure of some regular Noetherian rings? The *Dagger closure* was introduced by Hochster and Huneke in [@HH]. It has an interesting connection with the theory of vector bundles, see e.g. [@BS] and the theory of *tight closure*, see e.g. [@HH] and [@HH2]. Let $I$ be an ideal of $R$ and let $R^{+}$ be its integral closure in an algebraic closure of its fraction field. We recall that an element $x\in R$ is in $I^{\dag}$, the dagger closure of $I$, if there are elements $\epsilon_n\in R^{+}$ of arbitrarily small order such that $\epsilon_nx\in IR^{+}$. By the main result of [@HH], the tight closure of an ideal coincides with the dagger closure, where $R$ is complete and of prime characteristic $p$. In Section 4, we extend that notion to the submodules of finitely generated modules over $R$ to prove that if a complete local domain is contained in an almost Cohen-Macaulay domain then there exists a balanced big Cohen-Macaulay module over it (see Corollary \[main2\]). In [@Ho], Hochster showed the existence of a big Cohen-Macaulay algebra from the existence of an almost Cohen-Macaulay algebra in dimension 3. 
Corollary \[main2\] extends this result, by proving the existence of big Cohen-Macaulay modules from the existence of almost Cohen-Macaulay algebras in any dimension $d$. In [@S1], a certain weakly almost Cohen-Macaulay $R^+$-algebra is constructed and it has been asked whether it is almost Cohen-Macaulay or not. In Section 5, we obtain that such a weakly almost Cohen-Macaulay $R^+$-algebra will be almost Cohen-Macaulay if and only if it maps to some big Cohen-Macaulay $R^+$-algebra. For more details see Remark 5.4. Preliminary Notations ===================== We begin this section by exploring the following definitions. Let $A$ be an algebra equipped with a map $v:A\to{{\mathbb R}}\cup \{\infty\}$ satisfying $(1)$ $v(ab)=v(a)+v(b)$ for all $a,b \in A$; $(2)$ $v(a+b) \ge \min\{v(a),v(b)\}$ for all $a,b \in A$ and $(3)$ $v(a)=\infty$ if and only if $a=0$. The map $v$ is called the *value map* on $A$. Moreover, if $v(a) \ge 0$ for every $a \in A$ and $v(a) > 0$ for every non-unit $a \in A$, then we say that $v$ is *normalized*. If $A$ is equipped with a normalized value map, then by [@AS Proposition 3.1], one has $\bigcap_{n=1}^{\infty} {\frak{a}}^n=0$ where ${\frak{a}}$ is a proper and finitely generated ideal of $A$. Let $A$ be of prime characteristic $p$ and $I$ be an ideal of $A$. Set $A^0$ to be the complement of the union of all minimal primes of $A$ and $I^{[q]}$ to be the ideal generated by $q=p^e$-th powers of all elements of $I$. Then the tight closure $I^\ast$ of $I$ is the set of $x\in A$ such that there exists $c \in A^0$ such that $cx^q\in I^{[q]}$ for $q\gg 0$. We also recall the following definitions: a ring is called coherent if each of its finitely generated ideals is finitely presented and a ring is called *regular* if each of its finitely generated ideals is of finite projective dimension. \[sh1\] Let $A$ be a coherent regular ring of prime characteristic $p$. Then the following statements hold. 1. The Frobenius map is flat. 2. If $A$ is equipped with a normalized value map, then all finitely generated ideals of $A$ are tightly closed. $\mathrm{(i)}$: This proof is inspired by a famous result of Kunz [@BH Corollary 8.2.8], which characterizes Noetherian regular rings of prime characteristic in terms of the Frobenius map. First recall that the assignment $a\mapsto a^p$ defines the Frobenius map $F:A\to A$, which is a ring homomorphism. By $F(A)$, we mean $A$ as a group equipped with left and right scalar multiplication from $A$ given by $$a.r\star b = ab^pr, \ \ \text{where} \ \ a,b\in A \ \ \text{and} \ \ r\in F(A),$$ see also [@BH Section 8.2]. We show that for all $i > 0$, ${\operatorname{Tor}}^A_i(A/{\frak{a}}, F(A))=0$ for all finitely generated ideals ${\frak{a}}\subset A$. Note that $A/{\frak{a}}$ has a free resolution $(F_\bullet,d_\bullet)$ consisting of finitely generated modules, since $A$ is coherent. Such a resolution is bounded, because $A$ is regular. Then $(F_\bullet,d_\bullet)\otimes_A F(A)=(F_\bullet,d_\bullet^p)$. Consider the ideal $I_{t}((a_{ij}))$ generated by the $t\times t$ minors of a matrix $(a_{ij})$ and recall the definition of Koszul grade, which is denoted by ${\operatorname{K.grade}}$. One can show that it is unique up to the radical, see [@BH Proposition 2.2 (vi)]. Let $r_i$ be the expected rank of $d_\bullet$, see [@BH Section 9.1]. Clearly, $r_i$ is the expected rank of $d_\bullet^p$.
Now, all these facts together imply, $${\operatorname{K.grade}}_A(I_{r_i}(d_i), A)={\operatorname{K.grade}}_A(I_{r_i}(d_i^p),A).$$ In view of [@BH Theorem 9.1.6], which is a beautiful theorem of Buchsbaum-Eisenbud-Northcott, we find that $(F_\bullet,d_\bullet^p)$ is exact and so ${\operatorname{Tor}}^A_i(A/{\frak{a}}, F(A))=0$. $\mathrm{(ii)}$: This proof is inspired by [@Sh Lemma 4.1]. Let ${\frak{a}}$ be a proper and finitely generated ideal of $A$. Suppose that ${\frak{a}}^\ast\neq{\frak{a}}$. Take $x\in {\frak{a}}^\ast\setminus {\frak{a}}$. From the definition and from part $(i)$ of the proof, it follows that there exists an element $c \in A^0$ such that $c\in({\frak{a}}^{[q]}:_Ax^q)=({\frak{a}}:_Ax)^{[q]}$ for $q\gg 0$. We recall that $\bigcap_{n=1}^{\infty} {\frak{b}}^n=0$ for any proper and finitely generated ideal ${\frak{b}}$ of $A$, since $A$ is equipped with a normalized value map. Here $({\frak{a}}:_Ax)$ is proper, since $x\notin {\frak{a}}$, and it is also finitely generated, since $A$ is coherent. These facts together imply $c\in\bigcap ({\frak{a}}:_Ax)^{[q]}\subseteq\bigcap ({\frak{a}}:_Ax)^q=0$ and thus we arrive at a contradiction. It is to be noted that the finite generation assumption in the previous result is really needed. \[exam\] Let $(R,{\frak{m}})$ be a Noetherian regular local ring of prime characteristic $p$ which is not a field. Let $R^\infty$ be its perfect closure. In view of [@A Theorem 1.2], $R^\infty$ is coherent and of finite global dimension. It is well known that $R^\infty$ is equipped with a normalized value map. To construct such a value map, consider a discrete valuation ring $(V,{\frak{n}})$ which birationally dominates $R$, i.e., $R\subseteq V$, ${\frak{n}}\cap R={\frak{m}}$ and both $R$ and $V$ have the same field of fractions. We know that the above situation exists. Let $v$ be a value map of $V$ and take $y\in R$ such that $v(y)=\ell\in \mathbb{N}\setminus\{0\}$. Let $r\in R^\infty$. Then $r^{p^n}\in R$ for some $n$. The assignment $r\mapsto v(r^{p^n})/p^n$ defines a normalized value map on $R^\infty$. Consider the ideal ${\frak{a}}:=\{x\in R^\infty:v(x)>\ell/p\}$. Here, we show that ${\frak{a}}^\ast\neq {\frak{a}}$. To see this, let $x\in R^\infty$ be the $p$-th root of $y\in R$. Here $v(x)=\ell/p$. Take $c\in R^\infty$ with $v(c)>0$. Clearly, $v(c^{1/q}x)>\ell/p$ and so $c^{1/q}x\in {\frak{a}}.$ Thus $cx^q=\prod_q c^{1/q}x\in{\frak{a}}^{[q]}$ for $q\gg 0$. Therefore, $x\in {\frak{a}}^\ast\setminus {\frak{a}}$. By Lemma \[sh1\], ${\frak{a}}$ cannot be finitely generated. Here, we show (directly) that ${\frak{a}}$ is not finitely generated. The proof of this fact goes along the same lines as that of [@A Proposition 6.8], which gives a similar result over the absolute integral closure of $R$. We leave the details to the reader. Cohen-Macaulayness of Minimal Perfect Algebras ============================================== We begin this section with the following lemma. \[sh\] Let $(R,{\frak{m}})$ be a Noetherian local F-coherent domain which is either excellent or a homomorphic image of a Gorenstein local ring. Then $R^{\infty}$ is balanced big Cohen-Macaulay. This is proved in [@Sh Theorem 3.10] when $R$ is a homomorphic image of a Gorenstein ring. The argument of [@Sh Theorem 3.10] is based on *Almost Ring Theory*. Assume that $R$ is excellent. For each $n$, set $R_n:=\{x\in R^\infty|x^{p^n}\in R\}$. The assignment $x\mapsto x^{p^n}$ shows that $R\simeq R_n$, which implies that $R_n$ is excellent.
We recall that over excellent domains of prime characteristic one can use the colon capturing property of tight closure theory, see e.g. [@HH2 Theorem 1.7.4]. Let $\underline{x}:=x_1,\ldots,x_d$ be a system of parameters for $R$, where $d:=\dim R$ and let $r\in R^{\infty}$ be such that $rx_{i+1}=\sum_{j=1}^i r_jx_j$ for some $r_j\in R^{\infty}$. Then $r,r_j\in R_n$ for $n\gg 0$. So $r\in((x_1,\ldots, x_i)R_n :_{R_n} x_{i+1})$. Thus from Lemma \[sh1\], $$\begin{array}{ll} ((x_1,\ldots, x_i)R^{\infty} :_{R^{\infty}} x_{i+1})&=\bigcup_n((x_1,\ldots, x_i)R_n :_{R_n} x_{i+1})\\ &\subseteq \bigcup_n((x_1,\ldots, x_i)R_n)^\ast\\ &\subseteq ((x_1,\ldots, x_{i})R^\infty)^\ast\\ &=(x_1,\ldots, x_{i})R^\infty. \end{array}$$ Clearly, ${\frak{m}}R^{\infty}\neq R^{\infty}$. So, any system of parameters of $R$ is regular sequence on $R^{\infty}$, i.e., $R^{\infty}$ is balanced big Cohen-Macaulay. It is to be noted that in the case of non-Noetherian rings, tight closure may behave nicely, since from the above proof we find that it exhibits the colon capturing property. We recall the following definitions. Let ${\frak{a}}$ be an ideal of a ring $A$ and $M$ be an $A$-module. A finite sequence $\underline{x}:=x_{1},\ldots,x_{r}$ of elements of $A$ is called $M$-sequence if $x_i$ is a nonzero-divisor on $M/(x_1,\ldots, x_{i-1})M$ for $i=1,\ldots,r$ and $M/\underline{x}M\neq0$. The classical grade of ${\frak{a}}$ on $M$, denoted by ${\operatorname{c.grade}}_A({\frak{a}},M)$, is defined by the supremum length of maximal $M$-sequences in ${\frak{a}}$. The polynomial grade of ${\frak{a}}$ on $M$ is defined by $${\operatorname{p.grade}}_A({\frak{a}},M):=\underset{m\rightarrow\infty}{\lim}{\operatorname{c.grade}}_{A[t_1, \ldots,t_m]}({\frak{a}}A[t_1, \ldots,t_m],A[t_1,\ldots,t_m]\otimes_A M)$$ and we will use the following well-known properties of the polynomial grade. \[pro\] (see e.g. [@AT]) Let ${\frak{a}}$ be an ideal of a ring $A$ and $M$ an $A$-module. Then the following statements hold. 1. If ${\frak{a}}$ is finitely generated, then$${\operatorname{p.grade}}_A({\frak{a}},M)=\inf\{{\operatorname{p.grade}}_{A_{{\frak{p}}}}({\frak{p}}A_{{\frak{p}}},M_{{\frak{p}}})|{\frak{p}}\in {\operatorname{V}}({\frak{a}})\cap{\operatorname{Supp}}_A M\}.$$ 2. Let $\Sigma$ be the family of all finitely generated ideals ${\frak{b}}\subseteq{\frak{a}}$. Then$${\operatorname{p.grade}}_A({\frak{a}},M)=\sup\{{\operatorname{p.grade}}_A({\frak{b}},M):{\frak{b}}\in\Sigma\}.$$ 3. ${\operatorname{p.grade}}_A({\frak{a}},M)\leq{\operatorname{ht}}_{M}({\frak{a}}).$ Now, we are ready to prove the following: \[main\] Let $R$ be a Noetherian local F-coherent domain which is either excellent or homomorphic image of a Gorenstein local ring. Then ${\operatorname{ht}}({\frak{a}})={\operatorname{p.grade}}_{R^{\infty}}({\frak{a}}, R^{\infty})$ for all ideal ${\frak{a}}$ of $R^{\infty}$. Let $\underline{x}:=x_1,\ldots,x_d$ be a system of parameters for $R$, where $d:=\dim R$. By Lemma \[sh\], $\underline{x}$ is regular sequence on $R^{\infty}$. So $$d\leq{\operatorname{p.grade}}(\underline{x}R^{\infty},R^{\infty})\leq{\operatorname{p.grade}}({\frak{m}}_{R^{\infty}},R^{\infty})\leq{\operatorname{ht}}({\frak{m}}_{R^{\infty}})=d.$$This shows that ${\operatorname{p.grade}}({\frak{m}}_{R^{\infty}},R^{\infty})={\operatorname{ht}}({\frak{m}}_{R^{\infty}})$ for the maximal ideal ${\frak{m}}_{R^{\infty}}$ of $R^{\infty}$. 
At first, we assume that ${\frak{a}}$ is finitely generated and let $P$ be a prime ideal of $R^{\infty}$ such that $P\supseteq {\frak{a}}$. Set ${\frak{p}}:=P\cap R$. Take $x\in(R_{{\frak{p}}})^{\infty}$. Then $x^{p^n}\in R_{{\frak{p}}}$ for some $n$, where $p$ is the characteristic of $R$. Thus $x^{p^n}=a/b$ for some $a\in R$ and $b \in R\setminus{\frak{p}}$. Consider $\frac{a^{1/p^n}}{b^{1/p^n}}$ as an element of $R^{\infty}_P$. Let $m\geq n$. Then $\frac{a^{1/p^n}}{b^{1/p^n}}=(\frac{a^{1/p^{m-n}}}{b^{1/p^{m-n}}})^{1/p^m}$. Keep it in the mind that $R$ is of prime characteristic. If we assume that $a/b=c/d$ then $\frac{a^{1/p^n}}{b^{1/p^n}}=\frac{c^{1/p^n}}{d^{1/p^n}}$. Putting these facts together, we see that the assignment $x\mapsto \frac{a^{1/p^n}}{b^{1/p^n}}$ is independent of the presentation of $x$ and of the choice of $n$. So, it defines a ring homomorphism $\varphi: (R_{{\frak{p}}})^{\infty} \to R^{\infty}_P$. Clearly, $\ker \varphi =0$. Let $x\in R^{\infty}_P$. Then $x=\alpha /\beta$, where $\alpha\in R^{\infty}$ and $\beta\in R^{\infty}\setminus P$. Take $n$ to be such that $a:=\alpha^{p^n}\in R$ and $b:=\beta^{p^n}\in R\setminus {\frak{p}}$. Set $c:=a/b\in R_{{\frak{p}}}$. Consider $\gamma:=c^{1/p^n}\in (R_{{\frak{p}}})^{\infty}$. Then $\varphi(\gamma)=x$, and thus $\varphi$ is an isomorphism. We recall that $R_{{\frak{p}}}$ is F-coherent. Clearly, $R_{{\frak{p}}}$ is either excellent or homomorphic image of a Gorenstein local ring. By the case of maximal ideals, $${\operatorname{p.grade}}(PR^{\infty}_P,R^{\infty}_P)={\operatorname{p.grade}}({\frak{m}}_{(R_{{\frak{p}}})^{\infty}},(R_{{\frak{p}}})^{\infty})={\operatorname{ht}}({\frak{p}})={\operatorname{ht}}(P).$$ This fact along with Lemma \[pro\] yields that $$\begin{array}{ll} {\operatorname{p.grade}}({\frak{a}},R^{\infty})&=\inf\{{\operatorname{p.grade}}(PR^{\infty}_P,R^{\infty}_P)|P\in {\operatorname{V}}({\frak{a}})\}\\ &=\inf\{{\operatorname{ht}}(P)|P\in {\operatorname{V}}({\frak{a}})\}\\&={\operatorname{ht}}({\frak{a}}), \end{array}$$ which shows that ${\operatorname{p.grade}}({\frak{a}},R^{\infty})={\operatorname{ht}}({\frak{a}})$ for every finitely generated ideal ${\frak{a}}$ of $R^{\infty}$. Finally we assume that ${\frak{a}}$ is an arbitrary ideal of $R^{\infty}$ and let $\Sigma$ be the family of all finitely generated ideals ${\frak{b}}\subseteq{\frak{a}}$. We bring the following claim: Claim: One has ${\operatorname{ht}}({\frak{a}})=\sup\{{\operatorname{ht}}({\frak{b}}):{\frak{b}}\in \Sigma\}$. To see this, let $P\in {\operatorname{Spec}}(R^{\infty})$ be such that ${\operatorname{ht}}(P)={\operatorname{ht}}({\frak{a}})$ and set ${\frak{p}}:=P\cap R$ and we already have that $R^{\infty}_P=(R_{{\frak{p}}})^{\infty}$. Set $n:={\operatorname{ht}}({\frak{p}})={\operatorname{ht}}(P)={\operatorname{ht}}({\frak{a}}\cap R)$. Due to [@BH Theorem A.2], there exists a sequence $\underline{x}:=x_1,\ldots,x_n$ of elements of ${\frak{a}}\cap R$ such that ${\operatorname{ht}}(x_1,\ldots,x_i)R=i$ for all $1\leq i \leq n$. Since $R$ is catenary, $\underline{x}$ is part of a system of parameters for $R$. By Lemma \[sh\], $\underline{x}$ is a regular sequence on $R^{\infty}$. So $${\operatorname{ht}}(P)\geq {\operatorname{ht}}(\underline{x}R^{\infty})\geq{\operatorname{p.grade}}(\underline{x}R^{\infty},R^{\infty})=n= {\operatorname{ht}}({\frak{p}})={\operatorname{ht}}(P),$$ which shows that ${\operatorname{ht}}(\underline{x}R^{\infty})={\operatorname{ht}}({\frak{a}})$. This completes the proof of the claim. 
From the results of finitely generated ideals, we have that ${\operatorname{p.grade}}({\frak{b}},R^{\infty})={\operatorname{ht}}({\frak{b}})$ for all ${\frak{b}}\in\Sigma$. Thus from Lemma \[pro\] and from the above claim we see that $$\begin{array}{ll} {\operatorname{p.grade}}({\frak{a}},R^{\infty})&=\sup\{{\operatorname{p.grade}}({\frak{b}},R^{\infty}):{\frak{b}}\in\Sigma\}\\ &=\sup\{{\operatorname{ht}}({\frak{b}}):{\frak{b}}\in \Sigma\}\\&={\operatorname{ht}}({\frak{a}}),\\ \end{array}$$ and this is precisely what we wish to prove. Let $M$ be an $A$-module. Recall that a prime ideal ${\frak{p}}$ is weakly associated to $M$ if ${\frak{p}}$ is minimal over $(0 :_{A} m)$ for some $m\in M $. Adopt the assumption of Theorem \[main\] and let ${\frak{a}}$ be an ideal of $R^{\infty}$ which is generated by ${\operatorname{ht}}({\frak{a}})$ elements. Then all the weakly associated prime ideals of $R^{\infty}/ {\frak{a}}$ have the same height. See [@AT Corollary 4.6] and its proof. \[rem\] $\mathrm{(i)}$: Concerning the proof of Theorem \[main\], we can ask the following question which has a positive answer for several classes of commutative rings such as valuation domains. Let $A$ be a commutative ring and ${\frak{a}}$ an ideal of $A$. Let $\Sigma$ be the family of all finitely generated ideals ${\frak{b}}\subseteq{\frak{a}}$. Is ${\operatorname{ht}}({\frak{a}})=\sup\{{\operatorname{ht}}({\frak{b}}):{\frak{b}}\in \Sigma\}$? $\mathrm{(ii)}$: Let A be a ring with the property that ${\operatorname{p.grade}}_A({\frak{m}};A) = {\operatorname{ht}}({\frak{m}})$ for all maximal ideals ${\frak{m}}$ of $A$. One may ask whether ${\operatorname{p.grade}}_A({\frak{a}};A) = {\operatorname{ht}}({\frak{a}})$ holds for all ideals ${\frak{a}}$ of $A$. In view of [@AT Example 3.11], this is not true in general. Dagger Closure and Big Cohen-Macaulay algebras ============================================== In this section we extend the notion of dagger closure to the submodules of a finitely generated module over a Noetherian local domain $(R,{\frak{m}})$ and we present Corollary \[main2\]. \[def:almost closure of an ideal\] Let $A$ be a local algebra with a normalized valuation $v:A\rightarrow {{\mathbb R}}\cup\{\infty\}$ and $M$ be an $A$-module. Consider a submodule $N\subset M$. We say $x\in N^{v}_M$ if for every $\epsilon> 0$, there exists $a\in A$ such that $v(a)< \epsilon$ and $ax\in N$. \[almcm\] Let $A$ be an algebra over a Noetherian local domain $(R,{\frak{m}})$ of $\dim d$. Assume that $A$ is equipped with a normalized value map $v : A\to {{\mathbb R}}\cup\{\infty\}$. From [@RSS], we recall that $A$ is called almost Cohen-Macaulay if for $i=1,\ldots, d$, each element of $((x_{1},\ldots, x_{i-1})A:_{A}x_{i})/(x_{1},\ldots,x_{i-1})A$ is annihilated by elements of sufficiently small order with respect to $v$ for all system of parameters $x_1,\ldots, x_d$ of $A$. \[def\] Let $(R,{\frak{m}})$ be a Noetherian local domain and let $A$ be a local domain containing $R$ with a normalized valuation $v:A\rightarrow {{\mathbb R}}\cup\{\infty\}$. For any finitely generated $R$-module $M$ and for its submodule $N$ we define submodule $N_{M}^{\bold{v}}$ such that $x\in N_{M}^{\bold{v}}$ if $x\otimes 1 \in {\operatorname{im}}(N\otimes A\to M\otimes A)_{M\otimes A}^{v}$. Let $A$ be a perfect domain and $\frak a$ be a nonzero radical ideal of $A$. Then $\frak a^v=A$. To see this, let $x$ be a nonzero element of $\frak a$. Since $v(x^{1/n})=v(x)/n$ and $x^{1/n}\in \frak a$, the conclusion follows. 
\[pr\] Let $(R,{\frak{m}})$ be a Noetherian local domain and let $A$ be a local domain containing $R$ with a normalized valuation $v:A\rightarrow {{\mathbb R}}\cup\{\infty\}$. Let $M$, $M'$ be finitely generated $R$-modules. Consider the submodules $N$, $W$ of $M$. Then the following statements are true: 1. $N^{\bold{v}}_M$ is a submodule of $M$ containing $N$. 2. $(N^{\bold{v}}_M)^{\bold{v}}_M= N^{\bold{v}}_M$. 3. If $N\subset W\subset M$, then $N^{\bold{v}}_M\subset W^{\bold{v}}_M$. 4. Let $f:M\to M'$ be a homomorphism. Then $f(N^{\bold{v}}_M)\subset f(N)^{\bold{v}}_{M'}$. 5. If $N^{\bold{v}}_M= N$, then $0^{\bold{v}}_{M/N}= 0$. 6. We have $0^\bold{v}_R=0$ and ${\frak{m}}^\bold{v}_R={\frak{m}}$. In addition to, if $A$ is almost Cohen-Macaulay, then the following is true: 1. Let $x_1, \ldots , x_{k+1}$ be a partial system of parameters for $R$, and let $J = (x_1, \ldots , x_k)R$. Suppose that there exists a surjective homomorphism $f:M\to R/J$ such that $f(u) = \bar{x}_{k+1}$, where $\bar{x}$ is the image of $x$ in $R/J$. Then $(Ru)^{\bold{v}}_M\cap \ker f \subset (Ju)^{\bold{v}}_M$. The proof is straightforward and we leave it to the reader. In [@Ho], Hochster showed the existence of a big Cohen-Macaulay algebra from the existence of an almost Cohen-Macaulay algebra in dimension 3. The following Corollary extends this result, by proving the existence of big Cohen-Macaulay modules from the existence of almost Cohen-Macaulay algebras in any dimension $d$. \[main2\] For a complete Noetherian local domain, if it is contained in an almost Cohen-Macaulay domain, then there exists a balanced big Cohen-Macaulay module over it. Due to [@D Theorem 3.16], we know that there exists a list of seven axioms for a closure operation defined for finitely generated modules over complete local domains. By the main result of [@D], one can see that a closure operation satisfying those axioms implies the existence of a balanced big Cohen-Macaulay module over the base ring. In view of Proposition \[pr\], we see that the closure operation defined in Definition \[def\] satisfies all the seven axioms of [@D Theorem 3.16]. This completes the proof. The above result can be used to prove [@RSS Proposition 1.3] by a completely different argument, since the existence of almost Cohen-Macaulay domain impiles the existence of big Cohen-Macaulay module. Concluding Remarks ================== For Noetherian local domain $R$ of $\dim d$, the above results can be extended to an almost Cohen-Macaulay $R^{+}$-algebra where $R^{+}$ be its integral closure in an algebraic closure of its fraction field. Fixed a local $R^{+}$-algebra $A$ and let $M$ be an $A$-module. Consider a submodule $N\subset M$. In view of Definition \[def:almost closure of an ideal\], define $ N^{v}_M$ via a normalized valuation $v:R^{+}\rightarrow {{\mathbb R}}\cup\{\infty\}$. For a finitely generated $R$-module $M$ and its submodule $N$ we define submodule $N_{M}^{\bold{v}}$ such that $x\in N_{M}^{\bold{v}}$ if $x\otimes 1 \in {\operatorname{im}}(N\otimes A\to M\otimes A)_{M\otimes A}^{v}$. From [@S1], we recall that $A$ is almost Cohen-Macaulay if for every system of parameters $x_1,\ldots, x_d$, $((x_1,\ldots, x_{i-1})A:_Ax_i)/(x_1,\ldots, x_{i-1})A$ is almost zero $R^{+}$-module for $i=1,\ldots,d$ and $A/{\frak{m}}A$ is not almost zero. Let $(R,{\frak{m}})$ be a Noetherian local domain and let $R^{+}$ be equipped with a normalized valuation $v:R^{+}\rightarrow {{\mathbb R}}\cup\{\infty\}$. 
Consider a local $R^{+}$-algebra $A$ and for every $I\subset R$, we define $I^{\bold{v}}$ as above. In this situation, the closure operation satisfies all the properties given in Proposition \[pr\] if and only if $A$ is an almost Cohen-Macaulay $R^{+}$-algebra where $R^{+}$ is contained in it as a subdomain. We show that $0^{\bold{v}}= 0$ implies that $R^{+}\to A$ is injective. To see this, take $0\neq a\in R^{+}$ such that its image in $A$ is zero. Since $a$ is integral over $R$, there exists a minimal monic expression $a^n+ r_1a^{n-1}+\cdots +r_n= 0$, where each $r_i\in R$ with $r_n\neq 0$. Take the image of the expression in $A$, which gives that the image of $r_n$ is zero in $A$. So $r_n\in 0^{\bold{v}}= 0$. This gives $r_n= 0$. So we arrive at a contradiction and $a$ is zero in $R^{+}$. By using a straightforward modification of the proof of Corollary \[main2\], we get the rest of the claim. The proof of the converse is left to the reader. The following result provides an example of an almost Cohen-Macaulay $R^{+}$-algebra where $R^{+}$ is contained in it as a subdomain. \[rem\] Let $(R,{\frak{m}})$ be a Noetherian complete local domain in mixed characteristic $p>0$, with a system of parameters $p, x_2,\ldots, x_d$. Let $B$ be as in [@S1 Theorem 5.3]. Then the following are equivalent. 1. $B/{\frak{m}}B$ is not almost zero when viewed as an $R^{+}$-module. 2. $p^{\epsilon}\notin (p, x_2,\ldots, x_d)B$ for some rational number $\epsilon>0$. $\mathrm{(i)}\Rightarrow\mathrm{(ii)}$: Clearly ${\frak{m}}B$ does not contain elements of $R^{+}$ of arbitrarily small order. If $p^{\epsilon}\in (p, x_2,\ldots, x_d)B$ for every rational number $\epsilon$ then $p^{1/k}\in (p, x_2,\ldots, x_d)B$ for every positive integer $k$. Since $v(p)< \infty$, the $p^{1/k}$ are elements of $R^{+}$ of arbitrarily small order and ${\frak{m}}B$ contains all of them. This is a contradiction. $\mathrm{(ii)}\Rightarrow\mathrm{(i)}$: From [@S1 Corollary 6.1] it follows that $R^{+}$ is a seed, and using a similar argument to [@D1 Example 5.2] we find that $R^{+}$ is actually a minimal seed. Thus $R^{+}$ can be thought of as a subdomain of $B$. Let $\{x_{\lambda}\}= B-R^{+}$. Take $C= R^{+}[\{X_{\lambda}\}]$ where the $X_{\lambda}$'s are indeterminates. One can extend the valuation $R\to {{\mathbb Z}}\cup \{\infty\}$ to $C\to {{\mathbb R}}\cup \{\infty\}$ such that $v$ is non-negative on $C$ and positive on non-units of $C$. Also, we choose values of the $X_{\lambda_i}$'s greater than some $N> 0$. For $B= C/JC$, we have that $y\in JC$ implies $v(y)> N> 0$, and since ${\frak{m}}$ is also finitely generated, the same is true for ${\frak{m}}C$. Thus $C/(J+{\frak{m}})C$ is not almost zero when viewed as a $C$-module. Since $B/{\frak{m}}B= C/(J+{\frak{m}})C$, we find that $B/{\frak{m}}B$ is not almost zero when viewed as a $C$-module. Thus $B/{\frak{m}}B$ is not almost zero when viewed as an $R^{+}$-module. In [@S1], it has been asked whether the $R^+$-algebra $B$ which satisfies the conditions of Theorem 5.3 (there) is almost Cohen-Macaulay or not (i.e. whether $B/{\frak{m}}B$ is almost zero or not). From Corollary 5.2 and Proposition \[rem\], we get the following: Let $B$ be a weakly almost Cohen-Macaulay $R^+$-algebra satisfying Theorem 5.3 of [@S1]. Then $B$ is an almost Cohen-Macaulay $R^+$-algebra (i.e. $B/{\frak{m}}B$ is almost zero) if and only if it maps to a big Cohen-Macaulay $R^+$-algebra. It is to be noted that here $R^+$ is contained in $B$ as a subdomain. We would like to thank K.
Shimomoto for his comments on the earlier version of this paper. We also thank the referee for careful reading of the paper and for the valuable suggestions. [99]{} M. Asgharzadeh, *Homological properties of the perfect and absolute integral closure of Noetherian domains*, Math. Annalen [**348**]{} (2010), 237–-263. M. Asgharzadeh and M. Tousi, *On the notion of Cohen-Macaulayness for non-Noetherian rings*, J. Algebra [**[322]{}**]{} (2009), 2297–2320. M. Asgharzadeh and K. Shimomoto, *Almost Cohen-Macaulay and almost regular algebras via almost flat extensions*, to appear in J. Commutative Algebra. W. Bruns and J. Herzog, *Cohen-Macaulay rings*, Cambridge University Press [**[39]{}**]{}, Cambridge, (1998). H. Brenner and A. Stabler, *Dagger closure and solid closure in graded dimension two*, Preprint (2011), Arxiv math.AG/1104.3748. G. Dietz, *A characterization of closure operations that induce big Cohen-Macaulay modules*, Proc. AMS [**[138]{}**]{} (2010), 3849–3862. G. Dietz, *Big Cohen-Macaulay algebras and seeds*, Trans. Amer. Math. Soc. [**[359]{}**]{}, (2007), 5959–-5989. J. Elliott, *Prequantales and applications to semistar operations and module systems*, arXiv:1101.2462 \[math AC\]. M. Hochster, *Big Cohen-Macaulay algebras in dimension three via Heitmann’s theorem*, J. Algebra [**[254]{}**]{} (2002), 395–408. M. Hochster and C. Huneke, *Tight closure and elements of small order in integral extensions*, J. Pure and Applied Algebra [**[71]{}**]{} (1991), 233-247. M. Hochster and C. Huneke, *Tight closure in equal characteristic zero*, Preprint. P. Roberts, A. Singh and V. Srinivas, *Annihilators of local cohomology in characteristic zero*, Illinois J. Math. [**[51]{}**]{} (2007), 237-254. K. Shimomoto, *F-coherent rings with applications to tight closure theory*, Journal of Algebra [**[338]{}**]{}, (2011), 24–34. K. Shimomoto, *Almost Cohen-Macaulay algebras in mixed characteristic via Fontaine rings*, to appear in Illinois J. Math.
{ "pile_set_name": "Pile-CC" }
Title Authors Source Mainebiz Date 5-23-2016 Pages 29 Abstract A list of Maine's largest nursing and assisted-living homes, ranked by total number of nursing and assisted-living beds as of April 2016, is presented. Also included are number of employees, number of nursing staff, facility owner, administrator and whether it is for profit or nonprofit. The largest five, ranked in descending order are Clover Health Care in Auburn, St. Mary's d'Youville in Lewiston, Atlantic Heights in Saco and tied for fourth largest, Huntington Common in Kennebunk and Seventy-Five State Street in Portland.
{ "pile_set_name": "StackExchange" }
Q: Getting progress for two ongoing processes When a user uploads a video, I do two conversions. A high res, and a low res. I use popen() in the upload processing script and run the linux ffmpeg conversion command for each conversion. When I used to have just one conversion, I simply calculated the progress based on the output of popen and put it in a database table (conversion_progress), and updated it as the process outputted results. Now what I want to do is insert two entries into my conversion_progress table and calculate the difference of each progress. However when I use a select statement, since both entries have the same vid_id I cannot distinguish between each process since two rows are returned. I do not need to, and I would just like to obtain the average of both rows. Anyone have any suggestions on this? $sql = 'SELECT progress from conv_progress WHERE vid_id=?'; $stmt4 = $conn->prepare($sql); $result=$stmt4->execute(array('123')) or die(print_r($conn->errorInfo(), true)); while ($row=$stmt4->fetch(PDO::FETCH_ASSOC)){ $data['progress']=$row['progress']; } $out = json_encode($data); print $out; returns 5070 meaning one upload is 50% and another is 70%. Can I use foreach or something to do a calculation on this? A: You might want to let MySQL take the average: $sql = 'SELECT AVG(progress) AS progress from conv_progress WHERE vid_id=? GROUP BY vid_id';
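As a follow-up illustration (not part of the original exchange): with AVG(...) GROUP BY vid_id the database returns a single averaged row, so the caller just reads one value. The sketch below shows the same query from TypeScript using the mysql2/promise client (the question itself uses PHP PDO, but the idea is identical); the connection settings are placeholders, while the table and column names are taken from the question.

```typescript
// Sketch: fetch the averaged progress for one video id via a parameterized query.
import mysql from "mysql2/promise";
import type { RowDataPacket } from "mysql2";

async function averageProgress(vidId: string): Promise<number | null> {
  const conn = await mysql.createConnection({
    host: "localhost",      // placeholder connection settings
    user: "user",
    password: "password",
    database: "videos",
  });
  try {
    const [rows] = await conn.execute<RowDataPacket[]>(
      "SELECT AVG(progress) AS progress FROM conv_progress WHERE vid_id = ? GROUP BY vid_id",
      [vidId]
    );
    // AVG() collapses the two conversion rows into one row; the DECIMAL result
    // may come back as a string, so convert it to a number.
    return rows.length > 0 ? Number(rows[0].progress) : null;
  } finally {
    await conn.end();
  }
}

// Usage: averageProgress("123").then(p => console.log(JSON.stringify({ progress: p })));
```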
{ "pile_set_name": "StackExchange" }
Q: Favorite Questions and Answers from fourth quarter of 2016 Please link to your favorite questions and answers which were either asked or answered from October 1st 2016 through December 31 2016 (They don't have to be your questions and answers.). Your answers will be compiled into a blog post like previous quarterly posts. I will be using DavRob60's queries for a baseline, but I really appreciate people voicing the ones they really enjoyed. Maybe you feel like you answered one really well, even if it didn't receive a lot of votes. Let me know about it. Questions with most Votes created within 3 month range Questions with most View created within 3 month range Questions with best answer created within 3 month range I will also be linking all blog posts that happened within this quarter. Also if there was a meta post you feel should be spotlighted those are also acceptable. Notice: There appears to be some confusion about what is accepted. There is nothing wrong with promoting your own questions and answers. There is no point in down-voting anyone's answer to this meta question, everything gets included in the post. A: There were no submitted user favorites. The post has been created here Highlights from 2016 – 4th Quarter
{ "pile_set_name": "Wikipedia (en)" }
What I Was Made For What I Was Made For is the fourth studio album by contemporary Christian music band Big Daddy Weave. This was their third release with a major label in Fervent Records. It was released on July 26, 2005. This album charted on the following Billboard's charts on August 13, 2005: No. 14 on Christian Albums, and No. 17 on Top Heatseekers. Album Track listing References External links Official album page Category:2005 albums Category:Big Daddy Weave albums Category:Fervent Records albums
{ "pile_set_name": "Wikipedia (en)" }
Daniel Awet Akot Daniel Awet Akot is the deputy Speaker of the National Legislative Assembly of South Sudan. Early life Akot was born in Lakes State in a Dinka family. Sudan People's Liberation Army Akot graduated from the Sudan Military College. He received a year of additional military training in the United States before serving as a Lieutenant Colonel, commanding Sudan People's Liberation Army rebel forces around Renk in the northernmost tip of current-day South Sudan. John Garang appointed Akot as one of the eleven alternate members of the SPLM/SPLA Political-Military High Command. Akot also became governor of Bahr el Ghazal and the overall commander of SPLA forces in that region. His deputy commanders included Bona Bang Dhol, Chol Ayuak Guiny and Deng Ajuong. His radio codename was Amara. References Category:Living people Category:Sudan People's Liberation Movement politicians Category:SPLM/SPLA Political-Military High Command Category:Members of the National Legislative Assembly (South Sudan) Category:Year of birth missing (living people)
{ "pile_set_name": "OpenWebText2" }
The state’s waiting period, which was enacted in 1911, was reduced from one year to six months in 1977. The bill has bipartisan support, and Duchow said she’s optimistic it will pass, calling the proposal “common sense.” Sen. Janis Ringhand, D-Evansville, is the Senate version’s lead co-sponsor. But a six-month waiting period could help prevent additional divorces, allows for proper reflection after a potentially traumatic divorce and protects children from the confusion of having to adjust to a parent’s new marriage too quickly after a divorce, said Julaine Appling, president of the conservative Wisconsin Family Action. Eliminating the waiting period would cheapen marriage, making it more about adult desires instead of raising a family, she said. “The waiting period has good purpose,” Appling said. “It’s not punitive. It’s protective.” She said she’d be less opposed to the proposal if it excluded parents with minor children. In 2014, more than 51 percent of divorces included marriages with children, according to the Wisconsin Department of Health Services. Between 2010 and 2014, Wisconsin had an average divorce rate of nearly 2.9 for every 1,000 persons, lower than the national rate of 3.4 during the same span, according to the department.
{ "pile_set_name": "Pile-CC" }
Sunday, December 6, 2009 SUNDAY SHARE...These fun and cute ideas come straight from the Stampin' Up! customer site. A homemade card during the Holidays is like sending a hug with a fold down the middle! So take some time to create your very own personalized Christmas cards...and send holiday greetings in a very special way to your friends, family members, co-workers, and neighbors!
{ "pile_set_name": "PubMed Abstracts" }
Electrical probing of the spin conductance of mesoscopic cavities. We investigate spin-dependent transport in multiterminal mesoscopic cavities with spin-orbit coupling. Focusing on a three-terminal set-up we show how injecting a pure spin current or a polarized current from one terminal generates additional charge current and/or voltage across the two output terminals. When the injected current is a pure spin current, a single measurement allows us to extract the spin conductance of the cavity. The situation is more complicated for a polarized injected current, and we show in this case how two purely electrical measurements on the output currents give the amount of current that is solely due to spin-orbit interaction. This allows us to extract the spin conductance of the device in this case as well. We use random matrix theory to show that the spin conductance of chaotic ballistic cavities fluctuates universally about zero mesoscopic average and describe experimental implementations of mesoscopic spin to charge current converters.
{ "pile_set_name": "OpenWebText2" }
NOT FOR DISTRIBUTION OR RELEASE, DIRECTLY OR INDIRECTLY, TO THE U.S. NEWS WIRE SERVICES OR FOR DISSEMINATION IN THE UNITED STATES, AUSTRALIA, CANADA OR JAPAN, OR ANY OTHER JURISDICTION IN WHICH THE DISTRIBUTION OR RELEASE WOULD BE UNLAWFUL.

Summary: Funcom N.V. (“Funcom” or the “Company”) is happy to announce that it has entered into an agreement to establish a joint venture (see below under “IP acquisition through joint venture”) with Cabinet Group LLC (“Cabinet Group”). The joint venture company will hold interactive/video gaming Intellectual Property (IP) rights based on the works of Robert E. Howard and classic Swedish pen & paper and board game properties, including attractive IPs such as “Conan the Barbarian”, “Solomon Kane”, “Mutant Chronicles” and “Mutant: Year Zero”. This is a cashless transaction where Funcom receives a 50% ownership in the joint venture in exchange for issuing 22,300,000 new shares in the Company (the “Consideration Shares”), each at a subscription price of NOK 2.6, to Cabinet Group. The transaction is subject to customary terms and conditions, including the approval by the shareholders of Funcom, and is expected to be completed on or about 30 January 2018.

“Our Entertainment IP library is very large beyond Conan with characters and worlds that extend perfectly into the games business,” says Fredrik Malmberg, President and CEO of the Cabinet Group. “We found a strong industrial partner in Funcom with whom we have had a very good relationship for many years. They have demonstrated success in a competitive and complex industry. This deal opens many cross-media opportunities going forward.”

Funcom will utilize these IPs, in addition to the Company’s internal IPs, as part of an increased focus on co-development and publishing partnerships with 3rd party developers in order to secure more frequent product launches, increase the revenue baseline, reduce the risk profile of the Company and generate additional value creation in the short, medium and long term.

In order to fund publishing partnerships with 3rd party game developers, and secondarily to have more flexibility when investing in production of new games internally, Funcom is also executing a private placement (the “Private Placement”) directed towards SwedBank Robur Ny Teknik and SwedBank Robur Microcap funds, both funds managed by Swedbank Robur Fonder AB, of 34,000,000 new shares (the “New Shares”) at a subscription price of NOK 2.6 per share, corresponding to a premium of 18% compared to the closing price of the Company’s shares on the Oslo Stock Exchange as of 15 December 2017. The total gross proceeds in the Private Placement will be approximately NOK 88.4 million for Funcom.

Finally, the Company intends to execute a 5:1 reverse stock split, exchanging five existing shares for one new share. The reverse stock split will not affect the total market value of the Company or the respective ownership of each shareholder in Funcom. If the reverse share split is approved, the par value per Share will increase from EUR 0.04 to EUR 0.20, and if it is completed prior to issuance of the Consideration Shares and New Shares, all amounts of shares and subscription prices will be amended in accordance with the reverse stock split (i.e. in a ratio of five to one). A separate notice with key information regarding the reverse stock split will be published in due time.

The Supervisory Board has approved the above transactions subject to approval of an Extraordinary General Meeting to be held on or around 30 January 2018.
A separate stock notice is planned to be released later today with the agenda and other details regarding this General Meeting. Swedbank Robur Fonder has followed Funcom over time. “We see this as a new giant leap in Funcom’s turnaround. Building on its impressive recent development, multiple new revenue sources from IP and strategic expansion into publishing will increase cash flow robustness and boost further growth. Combined with additional funding and a reverse split the company is now really well positioned as an attractive long term investment” fund manager Erik Sprinchorn commented. A presentation will be held by Rui Casais (CEO) and Stian Drageset (CFO) on the subject of this stock notice today, 18 December, at 10:00 AM CET. The presentation will be held on the Funcom Twitch.TV channel – https://www.twitch.tv/funcom Ole Gladhaug, Chairman of the Supervisory Board of Funcom and Executive Vice President of the Kristian Gerhard Jebsen Group, comments: “We welcome Cabinet and Swedbank Robur as shareholders of Funcom. Both of the announced transactions will strengthen Funcom in its turnaround and enable future growth”. The Kristian Gerhard Jebsen Group is the largest shareholder in Funcom through KGJ Investments S.A. SICAV-SIF and KGJ Capital IP acquisition through joint venture: The Cabinet Group holds all the rights for a long list of intellectual properties based on the writings of Robert E Howard and based on the game creations originally from the Swedish company “Target Games”. The highlights from this IP portfolio are the well-known “Conan the Barbarian” IP, and “Solomon Kane” and “Mutant Chronicles”, both known from major movie productions in 2008 and 2009. In addition to these, there are many characters and worlds such as “Kull of Atlantis”, “Mutant: Year Zero”, “Doomtrooper”, “Kult”, among others. Funcom will in the end, through a new wholly owned Norwegian subsidiary acquire 50% of the participation interest in “Heroic Signatures DA” (under incorporation) (“Heroic Signatures”) from Tranicos LLC, a US company controlled by Cabinet Group. Cabinet Interactive LLC, a company controlled by Cabinet Group LLC, will own the remaining 50% of the participation interest in Heroic Signatures. Heroic Signatures will not have employees and currently has no operations, assets or liabilities as it is under incorporation. However, as of closing of the transaction, Heroic Signatures will license all interactive video game rights in Cabinet’s IP portfolio through a contribution in kind from Cabinet Group. As part of this partnership, Cabinet Group provides the IP knowledge and protection and Funcom provides the gaming industry contacts and know how. As Heroic Signatures is under incorporation, the company has not been subject to any financial reporting so far. The value of Funcom’s share of the assets in Heroic Signatures will be approximately USD 7 million compared to the total assets of Funcom NV of USD 22.4 million as of 3Q17. Heroic Signatures will be jointly managed by Funcom and Cabinet Group, and will focus on establishing and executing a roadmap of games on the IPs, across all platforms and regions, where some of the games will be developed by Funcom and others will be developed by third party licensees. Funcom will benefit from its share of the IP royalties including those for Conan Exiles and Age of Conan, effectively seeing a 50% reduction in IP royalty cost for those titles. 
In addition, this greatly expanded IP portfolio is a strong asset for the Company and for its co-development and publishing activities. Finally, as part of this transaction, Funcom secures the right to develop three games based on these IPs to specified terms, two utilizing Conan the Barbarian and one utilizing a different IP which shall be determined at a later date. Funcom also commits to developing two of these games in the next four years. Cabinet Group has agreed to a customary lock up restriction on the Consideration Shares for a period of six months after the consummation of the transaction. Cabinet Group is controlled by Mr. Fredrik Malmberg, who is a member of Funcom’s Supervisory Board. Mr. Malmberg has recused himself from the Supervisory Board when this transaction has been discussed. Apart from this, no agreements have been or are expected to be entered into in connection with the transaction for the benefit of the Company’s executive management or members of either the management or supervisory boards. Increased focus on Co-development and Publishing Partnerships: In order to diversify the Company’s product portfolio, provide more frequent launches and reduce the reliance on individual titles, the Company wishes to invest in additional co-development and publishing partnerships similar to the one established earlier this year with Bearded Dragons International. These partnerships are intended to leverage the core strengths and markets of Funcom, with focus on quality – rather than quantity – of projects. The Company hopes to secure two such product launches per year starting from 2019. The Private Placement: The Private Placement is directed towards SwedBank Robur Ny Teknik and Microcap funds, both funds managed by Swedbank Robur Fonder AB, and consists of an issue of 34,000,000 New Shares, each at a subscription price of NOK 2.6 per share. The New Shares are expected to be issued on or about 1 February 2018. The Private Placement is made in order to raise capital for further growth. The main objective with the net proceeds of approximately NOK 88.4 million from the Private Placement is to fund additional game partnership and publishing opportunities. The proceeds will also increase the flexibility to make investment decisions for maximum value creation in new internal game development initiatives. The Private Placement implies that the pre-emptive rights of the Company’s existing shareholders are deviated from. The Supervisory Board considers that this deviation is justifiable as the Private Placement is directed towards an external investor, that the subscription price is above the prevailing market price of the VPS as of the last date prior to the Private Placement was announced, and that the expeditious placement reduces the risk of trading based on assumptions regarding the share price development. The Supervisory Board also recognizes the value of a reputable long-term investor for the Company and all shareholders. The issuance of Consideration Shares and the Private Placement: Both the Private Placement and The New Shares may only be listed on the Oslo Stock Exchange following approval of a listing prospectus by the Netherlands Authority for the Financial Markets, the passporting of the listing prospectus to Norway, and the publication of the same. 
The Company intends to publish such prospectus prior to the issuance of the New Shares, and will thus become listed and tradable on the Oslo Stock Exchange immediately after the Consideration Shares have been registered in the Norwegian Central Securities Depository. If the prospectus has not been approved prior to the issuance of the New Shares, the New Shares will be issued on a separate ISIN initially and later converted to the same ISIN as the Company’s existing ISIN. In such case, the New Shares will become listed and tradable on the Oslo Stock Exchange after conversion to the ordinary ISIN of the Company’s shares. Stock Notice Market Presentation * * * http://heroicsignatures.com http://cabinetentertainment.com/ http://www.swedbankrobur.com/funds/ny-teknik/ http://www.swedbankrobur.com/funds/microcap Any enquiries may be addressed to: [email protected] Badhoevedorp, 18 December 2017 Funcom N.V. This stock exchange notice is published pursuant to the disclosure requirements set out in section 5-12 of the Norwegian Securities Trading Act and section 3.4 of the Continuing Obligations.
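For readers who want to sanity-check the figures quoted in the summary above, a small illustrative Python snippet follows (my own, not part of the notice). The closing share price is not stated in the release, so it is back-calculated here from the quoted 18% premium and should be read as implied rather than official.

# Quick arithmetic check of the placement and reverse-split figures (illustrative only)
new_shares = 34_000_000
consideration_shares = 22_300_000
subscription_price_nok = 2.6
premium = 0.18
split_ratio = 5            # 5:1 reverse stock split

gross_proceeds = new_shares * subscription_price_nok
implied_close = subscription_price_nok / (1 + premium)   # implied, not quoted in the release

print(f"Gross proceeds: NOK {gross_proceeds:,.0f}")                # about NOK 88.4 million
print(f"Implied pre-announcement close: NOK {implied_close:.2f}")  # about NOK 2.20

# Post-split equivalents, if the reverse split completes before issuance
print(f"New Shares after split: {new_shares // split_ratio:,}")
print(f"Consideration Shares after split: {consideration_shares // split_ratio:,}")
print(f"Subscription price after split: NOK {subscription_price_nok * split_ratio:.1f}")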
{ "pile_set_name": "Pile-CC" }
The Allahabad high court acquitted Rajesh and Nupur Talwar on October 12 of charges of killing their daughter Aarushi and domestic help Hemraj in 2008. The couple were awarded a life sentence by a special CBI court in Ghaziabad on November 26, 2013, a day after their conviction. They had later appealed the decision to the Allahabad high court.

A Division Bench comprising Justices A.K. Mishra and B.K. Narayana gave the benefit of the doubt to the Talwars, adding that the parents did not murder their child. The court pointed out that the CBI had failed to prove guilt beyond doubt. It accepted the argument of counsel for the Talwars that the case against them was based entirely on circumstantial evidence and that they were innocent.

Speaking to The Hindu, Rebecca John, the lawyer of the Talwars, said, "I am happy that the Allahabad high court has allowed the appeal filed by Nupur & Rajesh Talwar against their conviction and sentence. The case against them was untenable in fact and law, and their prosecution and subsequent conviction resulted in grave miscarriage of justice."

The Aarushi Talwar Case – a timeline

May 16, 2008: Aarushi Talwar, soon to turn 14, was found dead in the bedroom of her house in Jalvayu Vihar, Noida. Her throat was slit and her head badly clobbered. The family's live-in domestic help Hemraj Banjade was seen as the prime suspect in the murder. However, a day later, Hemraj (45) was found dead on the terrace of the same flat. His throat was also slit, and he had suffered injuries to his head. There were wounds all over his body, and the door to the terrace was found locked from inside. Noida Police suspected the twin murders to be an inside job and said that they had been committed with 'surgical precision'.

May 23, 2008: Rajesh Talwar was arrested for the double murder. The Noida Police opined that the case was one of 'honour killing' - that Rajesh had committed the twin murders after finding Aarushi and Hemraj in an 'objectionable' position. No forensic or material evidence was provided to substantiate the claim. The case was handed over to the CBI.

June 13, 2008: The CBI arrested Krishna Thadaraj, an assistant at Rajesh's Noida clinic. On June 20, the CBI conducted a lie detector test on Rajesh Talwar. Five days later, a second lie detector test was performed on Aarushi's mother, Nupur Talwar, as the first one was found to be inconclusive.

July 12, 2008: After spending almost two months in jail, Rajesh Talwar got bail.

December 2010: After the narco-analysis tests conducted on the Talwars, the CBI submitted a closure report on the grounds of "insufficient evidence". The servants were given a clean chit, and Rajesh Talwar was named the primary suspect. Due to lack of evidence, the CBI did not charge him. The court said that the case could not be closed.

November 25, 2013: CBI judge Shyam Lal found the Talwars guilty of both murders and of destruction of evidence. Life imprisonment was awarded to the Talwars by the CBI court a day later. The couple then filed an appeal in the Allahabad High Court challenging the previous court's order. They were finally acquitted on October 12 this year.

Partner Story: Life Doesn't Come With A Manual But It Comes With A Mother

It is often said that "Life doesn't come with a manual, but it comes with a mother," and we couldn't agree more.
Motherhood is a beautiful journey to embark upon, but not an easy one to tread, as there are challenges and hurdles to deal with along the way. For centuries there has been an unspoken social norm of mothers having to be responsible for all their child does – everything from nurturing to making sure that the baby grows up to be socially responsible and successful.

With its latest campaign, All Out® takes a powerful stand for mothers who are strong enough to acknowledge that they don't know everything and are helping other Indian moms become more aware and vigilant against the threat of dengue. With the premise that one mother's acceptance has the power to fire up other moms to be more vigilant, the campaign says that it takes a special kind of strength to admit #IDidntKnow / #MujheSabNahiPata.

In the campaign film, we see Kirti, who like many moms is strong and conditioned to believe that "Mothers know everything". Alas, social conditioning notwithstanding, she did not expect that a mosquito could change all her beliefs. Unaware of the fact that the mosquito which causes dengue can breed even in clean water, Kirti is seen breaking down while accepting #MujheSabNahiPata / #IDidntKnow when her son's life is endangered by the deadly disease. The film also sees celebrity mother Sonali Bendre encouraging mothers to push aside the pressures of having to know it all and share their experiences to help prepare other mothers.

In a country that believes a mother should know it all, it takes a tough mom to say #MujheSabNahiPata. All Out aims to empower mothers to come together to share anecdotes from their lives about things they did not know. Because one mother's defeat can prepare a thousand to be more vigilant. Share your stories with us. #IDidntKnow #MujheSabNahiPata

Through the campaign, the brand is urging mothers across the country to share their #IDidntKnow / #MujheSabNahiPata stories and help other vigilant parents become more aware of the wellbeing and protection of their children. Please share your story using #IDidntKnow / #MujheSabNahiPata, if you think that your story can encourage and better prepare other mothers to take care of their children.
{ "pile_set_name": "USPTO Backgrounds" }
This invention relates in general to drive train assemblies for transferring rotational power from an engine to an axle assembly in a vehicle. In particular, this invention relates to an improved structure for providing a seal between two telescoping members in a slip yoke assembly adapted for use in such a vehicle drive train assembly.

In most land vehicles in use today, a drive train assembly is provided for transmitting rotational power from an output shaft of an engine/transmission assembly to an input shaft of an axle assembly so as to rotatably drive one or more wheels of the vehicle. To accomplish this, a typical vehicular drive train assembly includes a hollow cylindrical driveshaft tube. A first universal joint is connected between the output shaft of the engine/transmission assembly and a first end of the driveshaft tube, while a second universal joint is connected between a second end of the driveshaft tube and the input shaft of the axle assembly. The universal joints provide a rotational driving connection from the output shaft of the engine/transmission assembly through the driveshaft tube to the input shaft of the axle assembly, while accommodating a limited amount of angular misalignment between the rotational axes of these three shafts.

Not only must the drive train assembly accommodate a limited amount of angular misalignment between the engine/transmission assembly and the axle assembly, but it must also typically accommodate a limited amount of axial movement therebetween. A small amount of such relative axial movement frequently occurs when the vehicle is operated. To address this, it is known to provide one or more slip yoke assemblies in the drive train assembly. A typical slip yoke assembly includes first and second splined members which are connected to respective components of the drive train assembly. The splined members provide a rotational driving connection between the components of the drive train assembly, while permitting a limited amount of axial misalignment therebetween. In some instances, the first splined member may be provided on the end of a yoke member connected to a universal joint assembly, while the second splined member may be connected to a portion of the driveshaft of the drive train assembly.

As is well known in the art, most slip yoke assemblies are provided with one or more seals to prevent the entry of dirt, water, and other contaminants into the region where the splined members engage one another. Such contaminants can adversely affect the operation of the slip yoke assembly and cause premature failure thereof. Exterior seals are typically disposed on the outer surface of the slip yoke assembly to prevent contaminants from entering into the region where the splined members engage one another from the exterior environment. A number of external seals are known in the art for use with conventional slip yoke assemblies. For example, a typical exterior seal includes a rigid annular housing which is mounted on the outer surface of the female splined member. A resilient annular seal is supported on the housing and extends radially inwardly into sliding and sealing engagement with the outer surface of the male splined member to provide the seal. Another example of an external seal is a convoluted boot which is positioned over the splined members of a slip yoke assembly. One end of the convoluted boot is fastened to the female splined member and the other end is fastened to the male splined member.
The length of the convoluted boot can expand or contract to accommodate the axial movement between the splined members of the slip yoke assembly. Several structures are known in the art for mounting the rigid annular housing of a seal or the ends of a convoluted boot. In one known structure, the ends of either the seal housing or an end of the convoluted boot and the female splined member are formed having mating threads which allow the seal housing or the end of the convoluted boot to be threaded onto the end of the female splined member. In another known structure, a portion of the seal housing or the end of the convoluted boot is crimped or otherwise deformed about the end of the female splined member. In yet another known structure, a band clamp or other mechanical fastener is used to retain the seal housing or the end of the convoluted boot on the end of the female splined member. Although these known structures have been effective, it has been found that they are relatively expensive or time consuming in structure and installation. Thus, it would be desirable to provide an improved structure for providing a seal between two telescoping members in a slip yoke assembly adapted for use in such a vehicle drive train assembly which is relatively simple and inexpensive in structure and installation.
{ "pile_set_name": "OpenWebText2" }
Who We Are

Poverty is multi-dimensional, and its many challenges are interconnected. At FINCA, we know the solutions are, too. FINCA Canada believes we can break down the vicious cycle of inter-generational poverty by promoting economic justice and financial inclusion for all. We are a pioneer in using microfinance as a tool for economic development. Driven by technological advancements, we continue to grow and provide innovative microfinance services, while investing in social impact businesses. We believe that access to life-enhancing products and a thorough understanding of low-income populations, gained through high quality research, are necessary to improve lives and strengthen communities. Millions of the world's poor trust FINCA — because FINCA puts trust in them.

FINCA's Approach to Economic Inclusion

FINCA's journey began with a simple idea: If everyone could be included in the economy and gain access to the resources needed to be productive, people's lives would improve and communities would become stronger. For over 30 years, FINCA has provided millions of low-income people access to loans and other financial services. As a social enterprise dedicated to financial inclusion, our model has been able to offer a unique ground-up approach to market development. And we're proof that social enterprises work. Now we're expanding our focus to invest in other social enterprises to address pressing social issues like clean energy, water and sanitation, and healthcare and nutrition. Research is at the core of everything we do, ensuring that we measure our impact on the people we serve.

Global Presence

FINCA's microfinance operations span 20 countries, serving nearly 2 million clients. With our growing social enterprises in Africa, our outreach is among the broadest and most comprehensive of today's not-for-profit organizations. We also operate in some of the most challenging economic environments in the world.

Local Leadership

FINCA invests heavily in building local teams. We empower our network to find innovative ways to reach more underserved and marginalized communities.
{ "pile_set_name": "PubMed Abstracts" }
Metabolic characteristics of subjects with normal glucose tolerance and 1-h hyperglycaemia. Nondiabetic subjects with a 1-h plasma glucose ≥ 11.1 mmol/l during an oral glucose tolerance test (OGTT) drew our attention to their somewhat confusing status and relative frequency among Chinese patients. The aim of this study was to clarify the metabolic characteristics of these subjects. A total of 2549 Chinese subjects were included in this study. Based on results of OGTT, these subjects were classified into three groups: normal glucose tolerance (NGT), impaired glucose regulation (IGR) and diabetes mellitus (DM). Then, according to the level of 1-h plasma glucose, the NGT and IGR groups were subclassified, respectively, as: NGT without 1-h hyperglycaemia (NGTN), NGT with hyperglycaemia at 1 h (NGT1H), IGR without 1-h hyperglycaemia (IGRN), and IGR with hyperglycaemia at 1 h (IGR1H). After adjustments for age and gender, the insulinogenic index (IGI) of NGT1H and IGR1H was found to be lower than that of NGTN and IGRN, respectively (P < 0.05). No statistical differences, however, were found in oral glucose insulin sensitivity (OGIS) between either of the 1-h hyperglycaemic groups and the corresponding NGTN or IGRN groups. Homeostasis model assessment for beta-cell function (HOMA-B) of NGT1H was lower than that of NGTN (P < 0.05), while IGRN and IGR1H showed no difference. No differences in homeostasis model assessment for insulin resistance (HOMA-IR) were found among the NGTN, NGT1H, IGRN and IGR1H groups. The levels of triglycerides (TG) were not significantly different among NGT1H, IGRN and IGR1H, while TG in these groups were significantly higher than in NGTN (P < 0.05). LDL-C was significantly higher and HDL significantly lower in NGT1H than in all other groups (P < 0.05). The IGR group was also subclassified as: isolated impaired fasting glucose (IFG), isolated impaired glucose tolerance (IGT) and combined glucose intolerance (CGI). The IGI of the NGT1H group was similar to that of the combined glucose intolerance group but lower than those of IFG and IGT (P < 0.05). The OGIS of the NGT1H group was the highest among all groups (P < 0.05). HOMA-B of IGT and NGT1H were higher than that of IFG (P < 0.05). There was no difference among all groups in HOMA-IR. Plasma lipid levels were not significantly different between NGT1H and any other group. Chinese NGT subjects with a 1-h plasma glucose ≥ 11.1 mmol/l are characterized by metabolic abnormalities, which may be caused by the impairment of early insulin release rather than aggravated insulin resistance.
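The abstract relies on several standard indices without restating their formulas. As a rough illustration, the sketch below uses the widely published textbook definitions (not something given in this paper) with made-up OGTT values; OGIS is omitted because it requires the full Mari regression formula.

# Illustrative calculation of the indices named above, with hypothetical OGTT values.
# Glucose in mmol/l, insulin in microU/ml.

def homa_ir(fasting_glucose, fasting_insulin):
    # Homeostasis model assessment of insulin resistance
    return fasting_glucose * fasting_insulin / 22.5

def homa_b(fasting_glucose, fasting_insulin):
    # Homeostasis model assessment of beta-cell function (%)
    return 20.0 * fasting_insulin / (fasting_glucose - 3.5)

def insulinogenic_index(g0, g30, i0, i30):
    # Early-phase insulin release: delta insulin over delta glucose at 30 min
    return (i30 - i0) / (g30 - g0)

def one_hour_subgroup(base_group, glucose_1h):
    # The subclassification used in the study: 1-h glucose >= 11.1 mmol/l flags "1H"
    return base_group + ("1H" if glucose_1h >= 11.1 else "N")

g0, g30, g60, g120 = 5.4, 9.8, 11.6, 7.2      # hypothetical readings
i0, i30 = 8.0, 55.0

print("HOMA-IR:", round(homa_ir(g0, i0), 2))                        # ~1.9
print("HOMA-B :", round(homa_b(g0, i0), 1))                         # ~84
print("IGI    :", round(insulinogenic_index(g0, g30, i0, i30), 2))  # ~10.7
print("Group  :", one_hour_subgroup("NGT", g60))                    # NGT1H for this example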
{ "pile_set_name": "Pile-CC" }
Pastor Worship What form does idolatry take in the Christian church? How can you identify whether or not idolatry has crept into your congregation? And if you, as a pastor or as a member, see this sin in your congregation, what steps does the Bible require you to take in repentance?
{ "pile_set_name": "StackExchange" }
Q: Brewing coffee concentrate using both hot and cold methods - Freezing concentrate

I drink iced coffee by the gallon. Currently my go-to method is just to brew a strong pot of store-brand medium grind coffee at night using an old Mr Coffee drip coffeemaker, putting the brewed coffee in the fridge and adding sugar-free hazelnut syrup and fat-free half & half in the morning. Obviously, I'm not a coffee connoisseur. I'm not looking for exquisite flavor and delicate notes here, I just want my java, and I want it as cheaply and conveniently as possible. I don't want any "nasty" flavors, and I can't stand burnt tasting coffee, but I won't even notice a reasonably mild decrease in quality.

I've read about brewing methods using near boiling water (french-press type), and cold brewing methods to make concentrate. Can I get more bang for my buck by doing both? What I am imagining is using my 10 cup rice cooker and the giant freezer I have at my disposal this time of year. How about freezing the concentrate in hopes of keeping it at the ready for 2-3 weeks? This is what I have in mind:

Step 1 - In the morning, put a large quantity (maybe 2 cups?) of inexpensive, medium roast, medium grind coffee in my rice cooker. Add 8 cups of cold water, hit "cook" and bring just to or just below the boil.
Step 2 - Perhaps keeping the rice cooker on "warm" for a while? My cooker will drop from boiling down to about 140F where it will stay just about indefinitely.
Step 3 - Set the cooker insert, contents and lid on the porch until chilled, then place in refrigerator overnight.
Step 4 - Strain the grounds from the coffee concentrate, freeze the concentrate in ice cube trays. Once frozen, transfer cubes to a freezer bag.
Step 5 - Reconstitute coffee cubes in water, add syrup and creamer.

Can I expect that I will get more concentrate (or more concentrated concentrate) by using both heat and time in this way? What problems can you foresee? Should I keep it below the boiling point? Will keeping it warm for a while extract more flavor? Will it be OK to freeze the concentrate?

A: I think this will give you some very disgusting coffee. By doing both a high temperature and a long extraction, you will get the worst of both worlds. High temperature extractions can get some great flavors out of coffee by forcing out a lot of the caramelized sugars, but usually they also use a very quick extraction to avoid also pulling out the bitter flavors. Cold brewing works by remaining below the temperature that the bitter compounds are most easily extracted at and using a long extraction time to get out the most of everything else. By using a high temperature and a long time, you'll get most of the sugar out, but you'll also very efficiently extract all the bitter compounds in your coffee.

I would think your best bet for something like this would be to just use a standard cold brew method. Let the grounds sit in room temperature water from 12-24 hours, strain it through cloth (I like using a clean old t-shirt), and then refrigerate your concentrate. I also know people who have frozen the concentrate in ice cube trays to use in blended drinks, so it should freeze well too.
{ "pile_set_name": "PubMed Abstracts" }
Universal generation of 1/f noises. This paper establishes a universal mechanism for the generation of 1/f noises. The mechanism is based on a signal-superposition model, which superimposes signals transmitted from independent sources. All sources transmit a statistically common, yet arbitrary, stochastic signal pattern; each source has its own random transmission parameters: amplitude, frequency, and initiation epoch. We explore randomizations of the transmission parameters which render the power spectrum of the superimposed signal invariant with respect to the stochastic signal pattern transmitted. Analysis shows that the aforementioned randomizations yield superimposed signals which are 1/f noises. The result obtained is, in effect, a randomized central limit theorem for 1/f noises.
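A toy numerical illustration of this kind of superposition (my own construction, not the paper's general mechanism): superimpose many unit-amplitude cosines with random phases, drawing each frequency log-uniformly so that the expected number of sources per unit frequency falls off like 1/f. A periodogram of the resulting signal then shows an approximately 1/f power spectrum, i.e. a log-log slope near -1.

import numpy as np

rng = np.random.default_rng(1)

fs = 400.0                          # sampling rate (Hz)
t = np.arange(0, 100.0, 1.0 / fs)   # 100 s of signal
n_sources = 3000
f_min, f_max = 0.1, 100.0

# Random transmission parameters: a phase (standing in for an initiation epoch)
# and a log-uniform frequency, which is the randomization doing the work here.
freqs = np.exp(rng.uniform(np.log(f_min), np.log(f_max), n_sources))
phases = rng.uniform(0.0, 2.0 * np.pi, n_sources)

signal = np.zeros_like(t)
for f, ph in zip(freqs, phases):
    signal += np.cos(2.0 * np.pi * f * t + ph)
signal /= np.sqrt(n_sources)

# Periodogram, averaged over log-spaced frequency bins before fitting a slope
spec = np.abs(np.fft.rfft(signal)) ** 2 / len(t)
f_axis = np.fft.rfftfreq(len(t), 1.0 / fs)

edges = np.logspace(np.log10(2 * f_min), np.log10(0.5 * f_max), 25)
centers, means = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (f_axis >= lo) & (f_axis < hi)
    if sel.any():
        centers.append(np.sqrt(lo * hi))
        means.append(spec[sel].mean())

slope = np.polyfit(np.log(centers), np.log(means), 1)[0]
print("fitted log-log spectral slope:", round(slope, 2))   # roughly -1 for 1/f noise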
{ "pile_set_name": "Pile-CC" }
"It's "Turn Up Tuesday" people....Get up & get after damn it!!!! I'm sending everybody positive energy!!!!" Hart wrote on Instagram. "Live Love and Laugh.....Let me see what your #turnuptuesday looks like!!!! Make the world a better place & spread love & joy!!!!"
{ "pile_set_name": "Pile-CC" }
The Conservative government should help the "victims" of prostitution exit the sex trade by erasing the criminal records of all prostitutes convicted in the past 30 years, a Commons committee heard Thursday.

Photo caption: York Chief of Police Eric Jolliffe said the York police haven't laid any charges in the past five years against sex workers for communicating for the purposes of soliciting. (Toronto Star file photo)

OTTAWA—The Conservative government should show its commitment to "victims" of prostitution and help them exit the sex trade by erasing the criminal records of all prostitutes convicted in the past 30 years, a Commons committee heard Thursday.

The novel recommendation to amend Bill C-36 by wiping past records emerged on the fourth and last day of hearings into the anti-prostitution bill. It came, surprisingly, not as a challenge from one of many opponents to the bill but from a staunch supporter of the Conservative government's abolitionist approach.

Kate Quinn of the Centre to End All Sexual Exploitation said the move would be a "great breath of hope" and help those whose travels, education, job applications or volunteer efforts are hindered by carrying a criminal record under the anti-soliciting law passed 30 years ago by Brian Mulroney's Progressive Conservative government. Quinn told Commons justice committee MPs they should provide an amnesty mechanism for past convictions under Section 213 of the Criminal Code.

"Remove this burden from their shoulders and welcome them into Canadian society," Quinn said. "We'd like to see a whole different approach, with the intent of this bill, to recognize vulnerabilities and exploitation," said Quinn, testifying via videoconference from Glasgow. "We would like to see us go one step further and just expunge those records."

Her suggestion was immediately welcomed by Liberal MP Sean Casey as "frankly very refreshing." He pointed out the Conservative government has made record suspensions — formerly known as pardons — more expensive to obtain, with longer waiting times and more restrictions on who qualifies.

Communicating for the purposes of prostitution was one of three prostitution laws the Supreme Court of Canada declared unconstitutional last December. It has, however, been rewritten by the Conservatives in Bill C-36 — a bill that overhauls the entire legal regime governing adult consensual prostitution-related activity. The same bill also toughens measures against human trafficking and child sexual exploitation — offences the government views as linked to prostitution but that were not affected by December's Bedford ruling, as it's known.

The bill targets the buyers of sex — the "johns" and pimps — but it still criminalizes prostitutes who impede traffic and prohibits prostitutes from communicating to sell sexual services in a public place "where children may reasonably be expected to be present."

Only two other near-unanimous recommendations emerged out of four days of intense and often impassioned testimony: that the $20 million over five years in federal money for exit programs is "woefully inadequate," as Calgary's police chief said.
And nearly every one of the more than 70 witnesses who appeared advised the Conservative government to drop the bill's criminalization of prostitutes who communicate in a public place "where children may reasonably be expected to be present." It is seen as overly broad and likely to drive prostitutes into remote or isolated locations where their safety will be endangered — the very working condition that the Supreme Court of Canada denounced in its Bedford ruling.

The committee will consider possible amendments to the bill when it begins clause-by-clause study next Tuesday.
{ "pile_set_name": "OpenWebText2" }
France drugs: Policeman seized in massive cocaine hunt

Published 2 August 2014

Media caption: Hugh Schofield reports on the 50kg of cocaine which has gone missing from a secure police headquarters in Paris.

A French drugs squad officer has been arrested in the southern city of Perpignan on suspicion of stealing more than 50kg (110lb) of cocaine from Paris police headquarters. The unnamed officer, 34, was picked up while on holiday in the city near the Spanish border, officials say.

Image caption: A police car outside a police station in Perpignan (image copyright AFP).

It appears the drugs were taken from a secure room on 24 July. CCTV images of a man carrying two bags into police headquarters that night, and leaving shortly after, helped investigators identify the suspect, police say. The drugs were originally seized in a raid on 4 July against a suspected Senegalese gang operating in northern Paris, Le Parisien newspaper reports. There was no immediate indication on Saturday that the missing cocaine had been recovered.

Police headquarters, immortalised in Georges Simenon's Maigret detective novels and in French crime films, is located at 36 Quai des Orfevres, in the heart of the French capital, overlooking the River Seine and close to Notre Dame cathedral. The building also attracted unwelcome publicity in April, when two elite anti-gang officers were charged with raping a Canadian tourist there. An investigation is still under way.
{ "pile_set_name": "Pile-CC" }
Şentürk S., and Serra X. (2016). A method for structural analysis of Ottoman-Turkish makam music scores. In Proceedings of 6th International Workshop on Folk Music Analysis (FMA 2016), pages 39-46, Dublin, Ireland Şentürk, S., Ferraro, A., Porter, A., and Serra, X. (2015). A tool for the analysis and discovery of Ottoman-Turkish makam music. In Extended Abstracts for the Late Breaking Demo Session of the 16th International Society for Music Information Retrieval Conference (ISMIR 2015), Málaga, Spain, 2015
{ "pile_set_name": "OpenWebText2" }
Several years ago I wrote an essay for TAC on what fascism is not. In that broadside I spared neither right nor left for their misappropriations of the F-word. It may now be time to raise similar questions about the overuse of the "socialist" label by Republicans and Conservative Inc. This task seemed particularly timely after I was paired last night on a podcast with Riva Enteen, the co-editor of the anthology Follow the Money: Radio Voices for Peace and Justice. Although Riva described herself as a Marxist and a "historical materialist," just about everything she seemed passionate about was a contemporary cultural issue. She advocated for women's "reproductive rights," endorsed Black Lives Matter, and stressed the uphill battle still being waged by gays. And, oh yes, she was against war because she thought it was inhumane.

Whatever her intent, Riva gave the impression that Marxism, and more generally socialism, is about being culturally progressive. Yet I don't think I heard much orthodox Marxism in what she had to say. Unlike feminism and the LGBT lobby, Marxist regimes have historically been socially reactionary, with Russian, Cuban, and Chinese communists throwing homosexuals and drug addicts into labor camps, or worse. As I write in my book The Strange Death of Marxism, what our progressive culture now celebrates as new forms of liberation profoundly offended real communists when they were in power. In fact, communists treated groups that the contemporary left holds up as historical victims with contempt.

There is a long established practice of confusing what is misleadingly called "Cultural Marxism" with socialism and Marxist economics. The two are most definitely not the same. Those who invented what the Frankfurt School in interwar Germany called Critical Theory, and that was called by its friends and later adversaries "Cultural Marxism," were intent on a cultural revolution. Critical Theory was only secondarily about changing the economic system, which is the primary interest of socialists and which real Marxists maintained could only come about through violence. Although the Frankfurt School and its descendants favored state ownership of productive forces, they took this stand only as a means towards a cultural end. They viewed socialism as instrumental for overcoming sexism, anti-Semitism, and homophobia. These things, which they equated with "fascism," were their primary targets.

In today's Western countries, however, most of the social program of the Frankfurt School seems to have been carried out, accomplished without the state controlling production. Even more strikingly, the managers of global capitalist enterprises have happily promoted the cultural left's agenda, from homosexual marriage to mandatory transgender restrooms. And unlike the French Communist Party after the Second World War and American labor organizers of an earlier era, both of which opposed immigration because of its impact on the native workforce, our culturally radical capitalists are delighted to bring in cheap foreign labor, legal or otherwise.

Media conservatives and Republican operatives attack as the back door to socialism the Democratic Party's proposals for new redistributionist programs. Without necessarily favoring such programs (and I usually don't), one might ask why these would turn our current welfare state, which incidentally both national parties effusively favor, into "socialism." Unless we have in mind a nationalized, collectivized economy, we are not describing socialism.
We can of course claim that we're already on the "road to socialism," but if we're indeed on that road, we've been there for a very long time without ever reaching the goal. Instead we've arrived at a top-heavy managerial state that allows for the continued practice of capitalism on a global scale. It's hard to imagine that a hypothetical Elizabeth Warren presidency would change this, even if Warren, faithful to her promise, were to subsidize college education for every high school graduate. And I doubt we would become truly socialist even if Warren shook down large investment companies for additional revenue, for example Goldman Sachs, which might in any case support her because of her social stands.

In any event, that wouldn't be the most destructive effect of a Warren presidency. Her attack on our remaining freedoms would come less from her alleged socialism than from her giving free rein to the cultural left. She would likely bombard us with anti-discrimination directives enforced by government agencies. Perhaps I'm the only one who's noticed that Warren and others of her ilk are ranting nonstop against "systemic prejudice." Just read Warren's "fiery speech" given at the Women's Rally against Donald Trump on January 21, 2017. It's not exactly her "socialism" that stands out in this fire and brimstone harangue.

Perhaps Republicans relentlessly denounce their opponents as "socialist" at least partly because they don't want to confront the five-hundred pound gorilla in their living room: namely cultural radicalism and the accelerated use of big government to impose it. It may seem more prudent to keep sticking it to the old hobgoblins, like the ever-present socialist menace, than to go after the Democrats as the party of political correctness on steroids. After all, Millennial and feminist voters may agree with Warren's social views. What might stop them from voting for her is the likelihood that she'd raise taxes. Republicans therefore denounce the Democratic Party's socialism, which means additional spending on social programs, as opposed to the GOP's military expenditures.

There is an obvious enthusiasm shared by more than half of Millennials for socialism: what this may mean is they want the state to look after them in loco parentis. Also at least some Democrats are now describing themselves as "socialists," causing Republican talk show hosts like Glenn Beck to predictably foam at the mouth. But it's hard to find actual full-blown socialism in the pronouncements of these would-be socialists, as opposed to demands that government administrators beat up on banks, redistribute income, switch to single-payer health care, and give more public funds to aggrieved minorities. By the way, the Canadian socialized health care system has not kept the Heritage Foundation from judging Canada as having the world's ninth freest economy.

At the same time, a majority of college students, together with their professors, now express a preference for what amounts to politically correct speech enforcement over academic freedom. And our elites just might be inclined to give the kids what they want. The U.S. is hardly on its way to becoming Hugo Chavez's Venezuela, whatever the complaints one hears on Fox News. But we could easily slip into an up-to-date version of "antifascist" Germany or Sweden—provided that cultural radicals acquire even more power.

Paul Gottfried is Raffensperger Professor of Humanities Emeritus at Elizabethtown College, where he taught for 25 years. He is a Guggenheim recipient and a Yale Ph.D.
He is the author of 13 books, most recently Fascism: Career of a Concept and Revisions and Dissents.
{ "pile_set_name": "Pile-CC" }
This means that the course doesn't come with any kind of qualification. You might do it for the experience or just to learn a skill.

Course certificate
This means you'll get a certificate at the end to prove you've completed the course. You won't have to do any exams, you'll just be given something to show you turned up and learnt something.

NVQ/SVQ Levels 1/2/3
NVQ stands for National Vocational Qualification and will cover skills needed in a particular area of work. SVQ stands for Scottish Vocational Qualification and is essentially the same thing but based in Scotland.

Access to Higher Education
This is a qualification aimed at preparing you for university if you don't have the traditional A Level/GCSE level of study. Usually focused on a specific subject, it will get you ready to take a degree in that area on finishing.

GCE A/AS Level
These are academic qualifications aimed at giving you higher expertise in certain subjects, preparing you for work or for further education.

GCSE
GCSE stands for General Certificate of Secondary Education and sits at levels 1 and 2 on the QCF. This means they give you a grounding in a subject that will prepare you for further education or work.

Apprenticeship
Apprenticeships are workplace-based qualifications where you learn by doing (and get paid too!) and only spend a short time in college.

31 General mathematics courses in Hampshire

When I hear the word 'mathematics' my secondary school algebra lessons come screaming back to me; a self-confessed number-phobe, I have a respected degree but still avoid Excel like the plague. With this in mind,....
{ "pile_set_name": "Pile-CC" }
"Empowerment + Display - Annual General Meeting 2013 " Posted by Janzq3 (as a staff member posting for a patient/service user), 3 years ago Empowerment+ is a partnership between Nottingham City PCT, Nottingham City Council and Family First, building on the work of Wellbeing+. The service was there to tell people about the road to recovery and to share feedback from service users who have experienced Trust Mental Health services and progressed on to to Empowerment+ as part of their recovery journey. Service users share their stories: 'We learn new skills and meet new people to help our confidence and social skills. I feel more enthusiastic and I am more organised in my life. ' 'I can do more with my life and I don't feel I am wasting away' It would be good to see Empowerment+ expand as it's really helpful. 'Staff are down to earth and there to help. ' The social side of the service is aimed at well-being and it's worked for me'. I was in a world of pain when I accessed Early Intervention Services' but I was able to make a change from not going out of my house to looking after myself, they have empowered me! ' "Thank you everyone who as posted feedback about the Empowerment+ service, it's great to hear people's opinions of us and even better that the feedback is really good, If you want to find out more about we are doing in our service please follow us on twitter at @EPlus_Notts"
{ "pile_set_name": "OpenWebText2" }
[Trigger warning for images based on threats and harassment]

Get your black dresses - and your reading glasses - out. There is an art show a comin' to town! For the past month we, with the help of LAWAAG, have been building an art exhibit called A Woman's Room Online. The art installation is about online harassment and stalking from the perspective of women, feminist bloggers and journalists who earn at least part of their income from working online.

The title of the show is meant, in part, to reference a feminist art installation created in 1972 called Womanhouse. Womanhouse was organized by Judy Chicago and Miriam Schapiro, co-founders of the California Institute of the Arts (CalArts) Feminist Art Program. In that exhibit, an entire house was taken over by a group of feminist women artists and each room became an installation art or performance space. In an attempt to reference that project but also to modernize and express the online spaces that women inhabit, we are building a free standing 8ft by 10ft office space, from the ground up, on the 2nd floor of The Center For Inquiry-Los Angeles. The room is intended to be an average office that a woman would work in. It is simply a normal office space, with a door, desk, chair and a computer and other small objects that one might have in a workspace, but this particular room has been transformed to clearly show the viewer what it can feel like to be targeted in your place of work, over multiple years, with aggressive online stalking and harassment. The room and its objects are blanketed with actual messages sent to, or publicly posted about, the women who have contributed to the exhibit. The messages are all real and were sent to or publicly posted about the women from July 2nd, 2011 up until now. It's your turn to read them. What has been sent to us will now be on display for you.

People say to "ignore it" or "grow a thicker skin" or to "just walk away" when online harassment is brought up. But that advice ignores the fact that women have every right to earn a living and to peacefully exist online without being threatened. This is not about mere critique, as the harassers like to frame it; this is about bullying, intimidation and the stripping away of privacy. It is also about silencing and the idea that women are not allowed to have their own space, their own opinions or even the right to their own body - particularly when online. There is a false notion that online spaces are not real, that what happens online does not have an effect on the regular day-to-day life of people. As we have seen recently with the stolen photos of Jennifer Lawrence, high profile women are seen as mere objects and targets or play-things meant to be stolen, acquired and used - and that if they can not handle these made up rules, they should leave the internet and all forms of technology behind. Our art exhibit is meant to put you, the viewer, in their shoes if only for a moment. See what it is like to be obsessively judged based on "fuck-ability", "rape-ability", as an object, or alternatively as what seems to be a target in a socially accepted (or otherwise ignored) game of online stalking, harassment and silencing techniques.

We have been told by a lot of people to just "turn off the computer" or to "walk away" when we bring up the topic of online harassment directed at women. The people who say these things to us haven't experienced the misogynistic, targeted attacks that certain women receive online. We created this installation to educate the public that the harassment and attacks are a serious problem, online life is real, and we as a society need to address these issues.

The dismissive attitude toward sexual harassment online is particularly virulent in the atheist and secular community, as evidenced by the misogynistic outpouring that followed prominent atheist Richard Dawkins' statement belittling women who are harassed by fellow freethinkers. That is one reason why we wanted to create this installation in a location that represented the secular community. This harassment is real, it's happening right here, right now in your community. This exhibit is just one way that we, along with LAWAAG, hope to combat these attitudes and educate and improve the community from within.

The internet is real life. It's time society acknowledges that cyber harassment and targeting of women is a problem. This isn't just about "trolls"; it's about hate crimes and terrorism directed at women using technology, and we have seen one small subset of that. Now it's your turn to take a look.

This exhibit will contain frank descriptions of sexuality, actual violent threats sent to contributors and language that may not be suitable for younger viewers or victims of sexual assault or abuse. We ask that all viewers be 18 and over or with a parent or guardian. The opening reception for the installation is Saturday, September 13th at 7pm. The exhibit will be on display and available for viewing during regular daytime business hours at CFI LA until October 13th.

Message contributors to the exhibit:
Rebecca Watson
Amy Roth
Greta Christina
Courtney Caldwell
Sikivu Hutchinson
Melody Hensley
Dr Ray Burks
Amanda Marcotte
Soraya Chemaly
Stephanie Zvan
Lindy West

Special thanks to Bob Dornberger for help fabricating the room itself, thanks to all the women of LAWAAG who helped and continue to help build the show, and thank you to CFI for allowing us to create this installation on site.

A Woman's Room Online: An immersive experience of the daily harassment women face online.
Opening reception: Saturday, September 13th at 7pm
Art exhibit open for viewing from September 13th – October 13th 2014, daily.
Center for Inquiry-Los Angeles (2nd floor), 4773 Hollywood Blvd., L.A., CA 90027
For more info contact LAWAAG.

All photos © Amy Davis Roth 2014 except "Cat" photo by Maria Rusert